Regulating the (yet) Unregulable?
While we are perhaps only at the beginning of what AI can accomplish, the horse has already left the barn. Beyond the widely praised OpenAI’s ChatGPT and Microsoft’s Bing Chat, there have been a wave of other major releases, such as Google Bard, Adobe Firefly, Canva AI, Microsoft Loop, Bing Image Creator, Instruct-NeRF2NeRF, Ubisoft’s AI tool, and Runway’s text-to-video, among others. These developments show how AI is starting to transform every aspect of our lives, from education and transportation to media and entertainment.
While AI has the potential to bring enormous benefits to humanity, it also poses significant risks and challenges. For instance, AI can reflect and amplify human biases present in the data or algorithms used to train and deploy it. AI can also raise serious privacy concerns, as it enables the collection, analysis, and sharing of massive amounts of personal and sensitive data. AI could disrupt labour markets by automating tasks previously performed by humans and alter the distribution of income and wealth, eventually widening the gap between those who own and benefit from AI technologies and those who are left behind or harmed by them. Furthermore, AI can create new forms of monopoly power, as a few dominant firms capture most of the data, talent, and profits in the AI sector.
It is no wonder that more than 1,000 tech leaders and scholars –– including business magnate and investor Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, and author Yuval Noah Harari –– have signed an open letter calling for a pause in the development of large-scale AI systems, citing fears over the profound risks to society and humanity. So how can we ensure that AI is developed and used in a way that respects human dignity, rights, and values?