In this era of rapid innovation, the proliferation of AI has raised profound questions about its moral implications. As AI continues to permeate various aspects of our lives, understanding the governance and ethical considerations surrounding this transformative technology becomes imperative.
It was apparent during the recent FutureProduct event panel that establishing governance frameworks and upholding moral integrity are now more crucial than ever. Organisations must shift their focus to address concerns related to data privacy, security, and transparency when deploying AI capabilities.
"Are we legislating on humanity? Are we legislating on something that is technical in terms of putting rules in fire? I think that none of those questions have been answered right."
Kellie made it clear that technologists have not yet resolved whether legislation should address humanity itself or something purely technical in terms of “putting rules in fire”, and that these questions must be answered before legislation can be considered. Additionally, she emphasised the need to consider how “we need to live differently” as a result of AI, and whether current legislation and ethical frameworks remain relevant in this “new world”. Instead, we should question whether society possesses the necessary tools to adapt.
"The fuel driving the rapid advancement of machine learning AI technology lies in having access to data. It is crucial to gain visibility into the type of data the model has been trained on, including the handling of sensitive and private information belonging to individuals."
Baichuan Sun highlighted the importance of regulating access to and use of data in the advancement of AI. The ethical issue, he noted, is the noticeable “lag” between technological advancements and the understanding of their effects. On legislation, Sun acknowledged that “the advancements of the technology are really driven by the business profits, rather than the what kind of impact is having on the humanity”. He raised the question of whether governments or self-organised community groups should take the lead, acknowledging that the “government may not be always the right answer to everything”, and encouraged an open discussion on finding the right approach.
“You know, there's some good attempt there. We've heard about it from Elon Musk and lots of other smart people writing about society, and very well written, and way better than any regulation could ever be. It's just not practical. However, there's no way the government could do a better job than that”
Tim O'Neill discussed the attempts at self-regulation in the field of AI, particularly referencing the Future of Life Institute's open letter to major AI companies. He acknowledged that the letter was well considered and well written, suggesting it was better than any regulation could ever be. However, O'Neill expressed scepticism about the effectiveness of such self-regulation, highlighting that even the most well-considered attempts often face limitations and fail to gain universal agreement. Nor, he argued, would relying on government regulation be practical, comparing it to “putting water on the issue” without significant impact.
“I think there needs to be some sort of an influencing overarching capability where we can hone in its whip. If we train it with more accurate data, to produce a better output, then maybe that's the right choice in terms of who takes responsibility”.
Justin Spyridis expressed uncertainty about whether the government is the right entity to regulate the entire AI process on its own. However, he acknowledged the real risk of feeding open models inaccurate or misleading data and the potential consequences of the resulting outputs. Spyridis noted the limitations of AI models, their implications for society, and the question of whether AI's current capabilities are truly beneficial, and emphasised the need for some form of overarching influence or control that can shape and refine those capabilities.
Our AI-focused FutureProduct event identified the crucial need for robust governance and ethical regulation in the realm of AI. Addressing concerns such as data privacy, security, and transparency is paramount. Experts emphasised the need to define legislation and assess its relevance to the "new world" shaped by AI. Training AI models on accurate data and establishing an overarching capability to shape AI responsibly are critical if society and technology are to co-exist. Hence, it is vital to engage in open discussion and collaborative effort to navigate the AI-driven future ethically and effectively.
To read more from our Generative AI panel session, continue reading the articles below.