The rise of generative AI models like ChatGPT and Gemini has fuelled another buzzword: AGI. Artificial General Intelligence is mentioned in hushed whispers across boardrooms and conspiracy back-alleys as the catalyst for an AI armageddon that's about to come calling.
But what is AGI? And is it around the corner or some distance away?
Artificial General Intelligence is an AI system capable of thinking and performing tasks the way humans do. It can adapt to a new situation, learn how to respond, and execute the action without training or human intervention, at least in theory.
However, with AI and AGI, two questions always follow: how soon will it arrive, and what will it mean for us? The answers are not simple.
Estimates of how long it will take to achieve AGI vary: a few years, a decade, or maybe two. But there is an underlying consensus that AGI will become a reality within our lifetime.
In the 2022 Expert Survey on Progress in AI (2022 ESPAI), the aggregate forecast put a 50% chance on high-level machine intelligence existing by 2059.
Nobody knows for sure, because AI companies and their leaders are extremely tight-lipped. That secrecy only adds to the fear and mystery surrounding AGI. Still, hints occasionally slip through in interviews and podcasts.
Nor can we discount quantum computing as a potential accelerator of AGI development; a recent study published in Nature Communications lends support to this idea.
However, the high costs and specialized expertise required for quantum computing create significant accessibility hurdles, raising questions about its practical implementation and the potential risks of faster, more powerful AI.
To understand the fear gripping Tech and VC boardrooms, we need to grasp what AGI could ultimately mean: machines replacing human beings.
This scenario might sound straight out of science fiction, brought to life in countless books and movies. But with AI's rapid advancements, the question increasingly seems to be "when" rather than "what if".
Below are the key factors of AGI that evoke fear in humans:
The prospect of AGI fuels contrasting perspectives of the future – some fear a dystopian world ruled by superintelligent machines, while others see a golden age of human-AI collaboration.
Could the reality lie somewhere in between? We don't know yet.
Ultimately, technological capability alone doesn't define AGI's impact on society. We need a thoughtful approach to integrating AGI into our daily lives: AI leaders and AGI developers must keep transparency, accountability, and responsible alignment with human values as central tenets in building AGI. By doing so, we can harness AGI's potential to address complex challenges and expand human understanding.
When do you think AGI, possibly the ultimate achievement in the field of AI, will be achieved?
Let us know your thoughts.