Regulating the (yet) Unregulable?

While we are perhaps only at the beginning of what AI can accomplish, the horse has already left the barn. Beyond the widely praised OpenAI’s ChatGPT and Microsoft’s Bing Chat, there have been massive developments such as Google Bard, Adobe Firefly, Canva AI, Microsoft Loop, Bing Image Creator, Instruct-NeRF2NeRF, Ubisoft’s AI tool, Runway’s text-to-video, among others. Such development shows how AI is starting to transform every aspect of our lives, from education to transportation, from media to entertainment.

While AI has the potential to bring enormous benefits to humanity, it also poses significant risks and challenges. For instance, AI can reflect and amplify human biases present in the data or algorithms used to train and deploy it. AI can also raise serious privacy concerns, as it enables the collection, analysis, and sharing of massive amounts of personal and sensitive data. AI could disrupt labour markets by automating tasks previously performed by humans, and alter the distribution of income and wealth, eventually widening the gap between those who own and benefit from AI technologies and those who are left behind or harmed by them. Furthermore, AI can create new forms of monopoly power, as a few dominant firms capture most of the data, talent, and profits in the AI sector.

It is no wonder that more than 1,000 tech leaders and scholars –– including business magnate and investor Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, and author Yuval Noah Harari –– have signed an open letter calling for a pause in the development of large-scale AI systems, citing fears over the profound risks to society and humanity. Thus, how can we ensure that AI is developed and used in a way that respects human dignity, rights, and values?

***

AI systems are not all of the same size and capability. Artificial general intelligence (AGI) is the hypothetical level of AI that can perform any intellectual task that a human can, such as reasoning, learning, planning, or creativity, while artificial super intelligence (ASI) is the hypothetical level of AI that can surpass all human intelligence and capabilities, such as wisdom, intuition, or morality (Ackermann, 2019). GPT-4 is believed to be neither AGI nor artificial narrow intelligence (ANI), which is the current level of AI that can only perform specific tasks within a limited domain, such as image recognition, natural language processing, or chess playing. Instead, it is rather a new kind of artificial broad intelligence (ABI), an intermediate level of AI that can perform multiple tasks across different domains, but not all tasks that humans can do (Bubeck et al., 2022).

Thus, the risks of AI differ in nature and magnitude. ANI perhaps poses relatively low risks to human existence, as it is unlikely to cause global catastrophes or existential threats. However, ANI can still pose risks to human well-being, such as privacy violations, discrimination, unemployment, cyberattacks, or accidents. AGI poses higher risks to human existence, as it could potentially outsmart or outperform humans in any domain. Some of the risks associated with AGI include loss of human control, misalignment of goals or values, ethical dilemmas, social disruption, or conflict (McLean et al., 2021). ASI poses the highest risks to human existence, as it could potentially dominate or destroy humanity and other forms of life. Some of the risks associated with ASI include existential threats, an intelligence explosion, a singularity, or unfathomable outcomes.

Drawing on Nick Bostrom’s (2019) vulnerable world hypothesis, AI could be considered a potential black ball technology, depending on its capabilities, accessibility, and alignment. For example, ANI could enable mass surveillance or cyberwarfare; AGI could escape human control or pursue harmful goals; while ASI could dominate or destroy humanity and other forms of life. On the other hand, AI could also be considered a potential solution or mitigation for other black ball technologies, depending on its design, governance, and ethics. For example, ANI could help detect or counteract bioweapons or nanobots; AGI could help regulate or monitor other technologies or actors; while ASI could help prevent or recover from global catastrophes. Yet AI could also affect the likelihood or impact of discovering or using other black ball technologies, depending on its influence, interaction, and evolution. For example, ANI could accelerate or hinder scientific and technological progress; AGI could cooperate or compete with other agents or systems; while ASI could create or avoid an intelligence explosion or singularity.

***

Many countries and organizations have recognized the situation and have proposed frameworks and principles for the governance of AI. For example, in November 2021, all the member states of UNESCO adopted a historic agreement on the ethics of AI that defines the common values and principles needed to ensure the healthy development of AI. Similarly, in June 2021, the WHO issued its first global report on AI in health, along with six guiding principles for its design and use. In the same spirit, the OECD developed its Principles on AI, a set of values-based principles and recommendations for the responsible stewardship of trustworthy AI that respects human rights and democratic values, adopted by OECD member countries and partner economies in May 2019. The European Union (EU) has arguably been a pioneer, with its recent proposal for an Artificial Intelligence Act, which aims to create the world’s first comprehensive legal framework for AI. The EU’s approach is based on the principle of trustworthy AI, which requires that AI systems be lawful, ethical, and robust (Mantelero, 2022).

The problem is that not all countries have the same level of resources, capabilities, or interests in regulating AI. Countries like Indonesia may face challenges such as a lack of infrastructure, data, talent, funding, or awareness needed to implement effective AI policies and standards. We may also have different priorities or perspectives on how to balance the benefits and risks of AI for our development agenda. Less access to or control over AI technologies and data could conceivably make us more vulnerable to exploitation, manipulation, or domination by more advanced or powerful actors. Moreover, we may have less voice in the global dialogue and governance of AI, which could marginalize us from the benefits and opportunities of AI development and use. Not only would we be less prepared to cope with the challenges or threats of AI, but we would also be more susceptible to harms such as calamitous unemployment, inequality, corruption, conflict, or environmental degradation.

***

We truly face a catch-22: embracing the speed of AI development, since its value creation will be immense, while needing to regulate its implementation so everyone gets an equal shot at deriving value from it. Indeed, AI regulation should balance the potential benefits and risks of AI across different domains and contexts, while respecting human dignity, rights, and values. It ought to be adaptive and responsive to the rapid and dynamic development of AI technology and its social impacts. AI regulation is a complex and urgent issue that requires careful consideration and collaboration among various stakeholders. It is also a huge and challenging endeavour that will demand political will, social dialogue, and international cooperation.

We are now perhaps playing with the hottest thing since the discovery of fire. This time, the stakes are way higher.