With great power comes great regulation.
The rapid growth of AI has sparked concern throughout the world. There have been knee-jerk reactions, such as Italy banning ChatGPT just a few months after its launch (access was later restored).
Even Sam Altman, co-founder and CEO of OpenAI, the company behind ChatGPT, is sounding the alarm bells.
“I think if this technology goes wrong, it can go quite wrong,” Altman told a Senate subcommittee. “We want to work with the government to prevent that from happening.” He warned that the risks of advanced AI systems were serious enough to warrant government intervention and called for regulation to mitigate AI’s potential harms.
There are now louder voices clamoring for regulations and policies, hoping that laws governing the development and deployment of AI can head off the worst-case scenarios. The incredible potential of this technology is often overshadowed by the negative outcomes AI could create.
(You can read more about these concerns in our article - AI ethics - Why is it important and what are the pain points?)
All of these challenges are exacerbated by the complexity of AI: we still don’t understand the full extent of the risks that AI systems can pose to businesses, consumers, and the general public.
There have been proposals, both hard and soft, to regulate AI. However, the pace at which AI is developing poses a problem: hard legal requirements could stifle the progress of a flourishing technology, and the wide range of applications makes it difficult for regulatory agencies to create standardized laws and policies.
Conversations around regulating AI have been brewing since 2014. Elon Musk sounded the alarm in 2014, 2015, and again in 2017, calling for proactive, precautionary government intervention. Although he admitted that regulation was both “irksome” and “not fun”, he has repeatedly advocated for it.
At the time, the response to Musk’s remarks was mixed: they were seen either as an overreaction or as a clever marketing ploy for his own products.
These days, however, there is greater consensus on the need for AI regulation.
"To realize the promise of AI and avoid the risk, we need to govern this technology," US President Joe Biden said. "In the wrong hands, AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run."
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” - UK PM Rishi Sunak - Source
“I believe Europe, together with partners, should lead the way on a new global framework for AI, built on three pillars: guardrails, governance, and guiding innovation.”
- Ursula von der Leyen, President of the European Commission at the World Economic Forum - Source
In a speech at the Center for Strategic & International Studies, Senate Majority Leader Chuck Schumer asked whether Congress can work to “maximize AI’s benefits while protecting the American people—and all of humanity—from its novel risks”, before answering: “I think the answer to these questions is an emphatic yes.” - Source
A delegation of the biggest tech leaders, including Sam Altman, CEO of OpenAI; Sundar Pichai, CEO of Google; Elon Musk, CEO of Tesla; and Mark Zuckerberg, CEO of Meta, also convened in Washington for an ‘AI Insight Forum’, where they met with US senators to discuss the rise of AI and the regulation it requires.
“If anything, I feel, yes, it’s moving fast, but moving fast in the right direction,” said Microsoft CEO Satya Nadella. “Humans are in the loop versus being out of the loop. It’s a design choice, which, at least, we have made.” - Source
"There's the existential risk, which is: 'What if AI starts improving itself and we lose control?'" - Satya Nadella - Source
This meeting is just one in a series of discussions between Silicon Valley leaders, policymakers, researchers, labor leaders, civil liberty groups, and government. Involving such a broad set of stakeholders has drawn its own share of criticism.
“We can’t really have the companies recommending the regulators. What you don’t want is regulatory capture, where the government just plays into the hands of the companies.” - Gary Marcus, Professor at New York University
However, some see it differently.
“AI by itself, I don’t see as a threat,” Bill Gates said in an exclusive interview with CNBC Africa. “There are a lot of long-term effects where AI could make us all so productive. It will change how we use our time, but we need to roll it out and we need to get the benefits, which in every domain, particularly health and education, I am very excited about.” - Source
And some have already started self-regulating: Nick Clegg, Meta’s president of global affairs, announced that Meta will require advertisers worldwide to disclose whether they have used AI or related digital editing techniques “to create or alter the content of political ads”.
“This applies if the ad contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered to depict a real person as saying or doing something they did not say or do,” Clegg wrote. “It also applies if an ad depicts a realistic-looking person that does not exist or a realistic-looking event that did not happen, alters footage of a real event, or depicts a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.” - Source
AI regulations may still be nascent, but that doesn’t mean they don’t exist. Governments across the world are putting plans in place or taking concrete steps to safeguard their citizens while driving innovation.
In November 2023, 18 countries, including the US, UK, Australia, Canada, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, South Korea, and Singapore, signed the first international agreement on keeping AI safe from rogue actors. The guidelines are intended to ensure that AI systems are safe by design.
Canada also introduced the Digital Charter Implementation Act (legislation for trust and privacy) in 2022, while China’s Interim Measures for the Management of Generative AI Services took effect in 2023.
However, two of the biggest moves have come from the US and the EU.
Biden’s Executive Order
The United States recently issued a sweeping executive order to regulate both the development and use of AI, emphasizing the protection of American citizens’ privacy, the advancement of equity and civil rights, and the promotion of innovation and competition.
The executive order has been touted as the first US AI regulation to cover such a vast array of challenges. However, experts believe this is just the beginning and that more exhaustive regulations will be required. Furthermore, the powers of an executive order are limited, and it could be overturned by a future administration; legislation is critical to cementing these rules.
Key Takeaways:
Bradley Tusk, CEO of Tusk Ventures, a venture capital firm, welcomed the move but said tech companies would likely shy away from sharing proprietary data with the government for fear it could be provided to rivals.
"Without a real enforcement mechanism, which the executive order does not seem to have, the concept is great but adherence may be very limited," Tusk said - Source
The EU AI Act
The European Union's Artificial Intelligence (AI) Act represents a pioneering legislative effort to regulate the development and application of AI technologies within the EU. The first legislation of its kind, it aims to establish a comprehensive framework for ensuring AI systems are safe, transparent, and accountable. The AI Act categorizes AI systems by risk level, imposing stricter requirements on high-risk systems, including those used in critical infrastructure, employment, and essential private and public services. Penalties could go up to €30 million or 6 percent of an organization's global revenue, whichever is higher.
Key Takeaways:
It’s clear from the developments across the globe that AI regulations will only grow more comprehensive from here on out.
The good news? There is a bit of time to assess what the regulations are and how to respond to them.
What kind of regulations would you like to see in the space of Artificial Intelligence?