What Are the Biggest Minds on the Planet Saying About the Future of AI?

 
 

Image created in Midjourney by Daniel Simons

By: Daniel Simons

In a 2016 lecture given at the opening of Cambridge University’s Centre for the Future of Intelligence, renowned astrophysicist Stephen Hawking warned that the development of Artificial Intelligence would ‘either be the best, or the worst thing ever to happen to humanity’, and that ‘if we were not careful, it might be the last thing’.

In the almost-now future, for better and worse, large language models like ChatGPT and other forms of advanced AI will impact every aspect of our daily existence. AI will lead to new drug discoveries, cure diseases and be used to combat the climate crisis. It will also lead to scams, the proliferation of fake news, invasions of privacy, job losses, cyberattacks, an increased risk of terrorism and possibly even the loss of reality itself.

These disruptions will be profound, but they will be nothing compared to what might be unleashed if we create a Superintelligence. The Singularity, as it has come to be known, will render reality unrecognisable and propel us towards either a utopian or dystopian future at warp speed.

The mind-bending advancements in AI have astonished everyone, including the leaders in the field. The unexpected pace of change recently motivated the Center for AI Safety to release a 22-word statement that read: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

The Statement on AI Risk was signed by many of the world’s leading researchers and developers, including Sam Altman (CEO of OpenAI), Bill Gates, Demis Hassabis (CEO of Google DeepMind), and two of the godfathers of AI, Geoffrey Hinton and Yoshua Bengio.

Earlier that year, after GPT-4 passed law and medical exams, over 30,000 people, including Elon Musk, Yuval Noah Harari, Andrew Yang and Steve Wozniak, put their names to The Future of Life Institute’s open letter calling for a pause on giant AI experiments.

More chillingly, the Expert Survey on Progress in AI found that more than half of the experts interviewed put the probability of advanced or runaway AGI leading to human extinction, or p(doom), at between 5 and 10%.

With ChatGPT reaching 100 million users in two months and generative AI images and videos flooding the internet, ‘the era of AI has begun’. But when it comes to future impacts, how much is hype, how much is unwarranted doom-mongering, and how can we better understand and prepare ourselves for the AI-saturated future that is hurtling towards us?

With so much potential benefit and so much at stake, we explored what some of the biggest minds on the planet are saying about our AI future.

 

“Artificial Intelligence is as revolutionary as mobile phones and the Internet.”

- Bill Gates

AI Network by Daniel Simons

In his article, The Age of AI Has Begun, Bill Gates outlines his hopes, fears and expectations for the future of AI. Comparing the birth of AI to the advent of the graphical user interface, Bill thinks the new technology will penetrate every aspect of our existence.

He sees a world where citizens, doctors, lawyers and businessmen will all have their own personalised AI bots that interact with almost every aspect of their daily lives. He believes that AI will lead to productivity enhancement, improved health and better education.

When it comes to the risks of AI, Gates suggests that businesses need to work with governments on regulations. Even though he concedes that AGI and Superintelligence could eventually pose existential threats to humanity, he doesn’t see them as an immediate or urgent risk.

 

AI Productive Robot by Daniel Simons

“AI is seizing the master key of civilization, and we cannot afford to lose it.”

- Yuval Noah Harari 

 

The author of Sapiens and Homo Deus once warned us that technological advancement and automation might lead to a ‘useless class’; now he is arguing that Artificial Intelligence has ‘hacked’ the operating system of society.

Harari argues that storytelling computers and ‘inorganic life forms’ will change the course of human history.

What will happen, he asks, when non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws?

Harari fears that artificial intelligence could create false intimacies, which could then be used to persuade or manipulate unwitting humans. Worse still, in an attempt to compete with these inorganic life forms, some of us may try to upgrade ourselves, and these upgrades will deepen social divisions and alter what it means to be human.

Comparing AI to nuclear bombs, Harari contends that we now have to grapple with a new weapon of mass destruction, one that can annihilate our mental and social worlds. But unlike nukes, which can’t create more nukes, AI can make exponentially more powerful AI. He calls for strong regulation and deep safety testing before AI is released into the public sphere.

AI in the City by Daniel Simons

“AI is going to be the most significant development in human history… We’ve got to figure out how to manage this and have this go well.”

- Sam Altman 

Sam Altman is the CEO of OpenAI, the company that released ChatGPT and DALL-E into the world. OpenAI was co-founded by Sam Altman and Elon Musk with the goal of ‘advancing AI in a way that benefits everyone.’ It was established as a not-for-profit to counteract the influence of Google and other AI developers, but it has since adopted a for-profit structure and received a $13 billion investment from Microsoft. Altman himself, who made millions through Y Combinator, has no equity in the company.

In early 2023, Sam embarked on a global speaking tour where he discussed the opportunities and threats of the new technology and urgently lobbied for regulation at the national and global levels. On the other hand, he has also been accused of pushing back against regulations that would negatively impact OpenAI.

OpenAI’s ChatGPT saw the fastest adoption of any technology in history, and it is already having profound implications for society. Now OpenAI is turning its attention towards a new challenge: Artificial Superintelligence and Alignment.

 

AI Biz by Daniel Simons

“When we talk about responsible AI, we’re really talking about the need to have AI systems that are transparent and ethical, that there's accountability, and that the systems and the design and deployment of the systems adhere to laws and rules and regulations, but also societal norms.”

- Catriona Wallace

 

Dr. Catriona Wallace is the founder of Flamingo AI, Ethical AI Advisory, and The Responsible Metaverse Alliance. Catriona thinks that AI will be ‘life-changing’, but insists it must be controlled.

She works with businesses to make sure that their AI has good governance, data security and privacy, and that efforts are taken to eliminate bias and socially detrimental impacts.

She argues that one of the main problems with AI is that it is built on past data, which encodes gender and racial bias, and that the development of new AI programs needs to account for and remedy these flaws.

AI Singularity by Daniel Simons

“We need to move at the speed of getting it right.”

- Tristan Harris

Tristan Harris is a co-founder of The Center for Humane Technology and the central figure in the documentary The Social Dilemma.

In March 2023, Tristan convened a room of the world’s leading Artificial Intelligence experts with the aim of uncovering the risks posed by AI. In his lecture, which was released as a video titled ‘The AI Dilemma’, Tristan compared AI to the atom bomb, warned that even AI developers can’t explain how their systems work, and championed the need for global collaboration and governance similar to Bretton Woods or the nuclear treaties. Tristan explores the risks, harms and potential of AI in his podcast, Your Undivided Attention.

 

Robopanel by Daniel Simons

“There will be two kinds of companies at the end of this decade: those that are fully utilising AI, and those that are out of business.”

- Peter Diamandis

 

Peter Diamandis is the founder of the X Prize and Singularity University and the author of Abundance. In a discussion with Emad Mostaque, Peter outlined how he sees the future unfolding. According to Diamandis, AI will transform every industry, every business, and every possible piece of commerce and enterprise software within the next year.

According to Peter, AI will help businesses run an astronomical number of digital experiments and allow companies to iterate and improve every element of their value creation chain at the speed of light.

He also believes AI will unlock the full potential of the Metaverse and transform education, entertainment and health. It will become increasingly emotionally intelligent and help us regulate our emotional lives. It will function as a co-pilot for every industry, and it will lead to an explosion in AI-driven DAOs, or decentralised autonomous organisations.

Birth of Life by Daniel Simons

“The stakes here are high. The opportunities are profound. AI is quite possibly the most important, and best thing our civilization has ever created.”

- Marc Andreessen 

In an article titled Why AI Will Save the World, Marc Andreessen of the VC firm Andreessen Horowitz argued that advances in AI will lead to ‘a takeoff rate of economic productivity growth that would be stratospheric… prices of existing goods and services would drop across the board to virtually zero. Consumer welfare and spending would skyrocket and entrepreneurs would create dizzying arrays of new industries, products and services and employ as many people and AI as they could as fast as possible to meet all the new demand.’

Andreessen accuses people who are sceptical of rapid AI development of being ‘Baptists’ or ‘Bootleggers’: the Baptists sincerely, but in his view mistakenly, want restrictions, while the Bootleggers want regulations they can profit from. Andreessen is not swayed by either.

Andreessen proposes what he refers to as a ‘simple plan’: big AI companies should be allowed to build AI as fast and aggressively as they can, but not allowed to achieve regulatory capture; startup companies should likewise be allowed to build AI as fast as they can; open source AI should be allowed to ‘freely proliferate’, with ‘no regulatory barriers to open source whatsoever’; and governments should work in partnership with the private sector to mitigate risks, use AI to solve society’s greatest challenges, and move as quickly as possible so that the West gains AI dominance over China.

 

Robocash by Daniel Simons

“Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves.”

- Naomi Klein

 

Naomi Klein is the author of No Logo, This Changes Everything, On Fire, and most recently Doppelganger: A Trip Into the Mirror World. In an article penned for the Guardian, Klein accused AI evangelists of ‘hallucinating’.

In her savage takedown of AI, she argues that claims that AI will solve the climate crisis, deliver wise governance, or liberate us from drudgery will never eventuate as long as AI development is driven by profit-maximising corporations within a capitalist system.

She argues that AI could be of great benefit to humanity if it were used with the right intentions within the right system, but at the moment it is on track to become ‘the largest and most consequential theft in human history.’

AI Birth to Minds by Daniel Simons

“We do need technological systems, but we need ones that don’t have embedded exponential growth obligations. We do need ones that have restraint, we do need ones that don’t optimise narrow interests at the expense of driving arms races and externalities. We do need ones where the intelligence in the system is bound by and directed by wisdom.”

- Daniel Schmachtenberger

Daniel Schmachtenberger is one of the world’s leading thinkers on existential risks. For Daniel, the world is currently in a ‘metacrisis’ or ‘polycrisis’ and the exponential advancement of AI, if not regulated properly, threatens to accelerate and increase the risks of climate catastrophe, biodiversity implosion, nuclear armageddon and economic or civilizational collapse.

Daniel distinguishes between wisdom and intelligence. He argues that because AI is currently being developed by countries and corporations in an arms race that has a zero-sum dynamic, it is not developing in a way that is optimally aligned with humanity's best interests or sustainability.

Daniel believes AI advancement has set us on a path towards either chaos and collapse or tyranny and oppression. He founded The Consilience Project to help humanity create a ‘third attractor’, a path away from both of these undesirable fates.

 

AI Merge by Daniel Simons

“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in maybe possibly some remote chance, but as in that is the obvious thing that would happen.”

- Eliezer Yudkowsky 

 

Eliezer Yudkowsky is a research fellow at the Machine Intelligence Research Institute who specialises in decision theory and ethics. Ironically, he is known for coining the term ‘friendly artificial intelligence’. In an article for Time magazine, Yudkowsky argued that the only way to protect ourselves from Artificial Intelligence is to ‘shut it all down’. For Eliezer, a pause is not enough.

He wants an indefinite, worldwide moratorium on new large training runs, and he is calling for all large GPU clusters to be shut down. He believes governments need to track all GPUs sold, and that any rogue data centres should be destroyed by airstrike.

According to Yudkowsky, ‘We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.'