
An Open Letter Calling to Pause Giant AI Experiments: Perspectives from Industry Experts


The rapid advancement of artificial intelligence research and applications we have witnessed recently has sparked a vigorous debate among high-profile industry experts and laypeople alike. AI is not a new technology: the first principles of successful machine learning were developed back in the 1980s, and it is only now that the efforts of two generations of data analysts and engineers have come to fruition. What's more, the results became evident to the public at large at the tail end of 2022, when generative AI tools reached the mainstream, including large language models (LLMs) like OpenAI’s ChatGPT and image generators like DALL-E and Midjourney.

Journalists chatted with GPT-3, trying to coax doomsday scenarios out of it in which AI takes over the world; students used it as an unlimited-access essay writing service for paper assistance; and social media marketing specialists generated surreal pictures with DALL-E, declaring that they were done working with slow and expensive human artists. That’s when the murmur started, rehashing fears long since explored and exploited in science fiction: the machines will take our jobs, rule the world in a totalitarian techno-dystopia, and ultimately wipe us out. The impressive performance of GPT-4 has only exacerbated those fears.

However, what triggered the avalanche was an open letter organized by the nonprofit Future of Life Institute and published on March 22, 2023, on the organization’s official website. It called for an immediate pause of at least six months on the training of all AI systems more powerful than GPT-4.

Future of Life Institute Open Letter about AI

It states that contemporary AI systems are becoming human-competitive at general tasks and asks whether humanity should allow this to continue. It then raises the following concerns:

  • AI might be used for misinformation and propaganda
  • AI will unnecessarily automate even the jobs people find fulfilling
  • Non-human minds might eventually outsmart, obsolete, and replace us, risking the loss of control over our civilization

The pause of at least six months is proposed so that independent experts, ethicists, and elected leaders (rather than unelected business leaders motivated by profit) can develop and put in place the safety protocols needed to make AI systems “safe beyond a reasonable doubt.”

The letter highlights that its authors do not call for a total ban on the technology, only for a pause long enough for policymakers to catch up and make advanced AI systems “accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” They also stress the importance of AI developers working together with AI regulatory authorities. These reasonable demands resonated with many, and to date the letter has been signed by over 30,000 individuals.

However, what made it so impactful in the first place were the names of its high-profile signatories, among them industry figures like Elon Musk and Steve Wozniak, numerous IEEE members, and philosophers, ethicists, and public intellectuals such as Yuval Noah Harari.

Since this is one of those rare issues where everyone has skin in the game (not unlike climate change or taxation), the discussion soon turned very intense. However, instead of distancing yourself from it for the sake of your emotional equilibrium, we invite you to join the conversation and develop a balanced, practical take on the questions of regulation and the potential risks and benefits of AI’s future development.

Eliezer Yudkowsky on Insufficient Regulation

One of the more radical responses came from Eliezer Yudkowsky, a prominent AI safety researcher. He expressed concerns about the inadequacy of existing regulations to address the potential risks of AI development and warned that the consequences could be catastrophic. In an opinion piece published in TIME on March 29, 2023, Yudkowsky argued that voluntary guidelines and self-regulation may not be enough to safeguard against unintended consequences. He called for a proactive approach, including robust regulatory frameworks and research on aligning AI with human values, to ensure the development of beneficial AI systems.

However, in his emotional appeal, he also voiced doubt that such development is possible at all at the current speed of AI advancement. He explained that he hadn’t signed the open letter because it was futile: a six-month pause would solve nothing. “We are not ready. We are not on track to be significantly readier in the foreseeable future.” He went as far as to passionately call for a complete and immediate shutdown of all large AI training runs, because otherwise “everyone will die, including children who did not choose this and did not do anything wrong.” Yudkowsky believes that a sufficiently advanced AI won’t stay confined to the virtual space forever: it could break into the physical world through “postbiological molecular manufacturing,” for instance by sending DNA strings to laboratories that synthesize proteins on demand.

Balancing Fear and Progress

Geoffrey Hinton, a leading figure in AI research, Turing Award recipient, and pioneer of deep learning whose experiments laid the groundwork for today’s neural networks, stresses the importance of striking a balance between fear and progress.

In his interview with Will Douglas Heaven for MIT Technology Review, Hinton acknowledges that the power of AI can be intimidating and confesses that he already fears how smart, capable, and close to human it has become, something he could not foresee back in the 1980s when he conducted his research with his students. One of those students, Ilya Sutskever, went on to cofound OpenAI and help develop ChatGPT. While in the days of those first experiments AI “was a joke,” Hinton says that now AI tools are like an alien race that has already landed, but people don’t realize it because “they speak very good English.”

This sudden flip might be terrifying, but Hinton stresses that in most scenarios in which things go wrong, the likely perpetrators are not malicious AIs but people using them for nefarious purposes. Moreover, the biggest threat is not AI itself but rather people’s collective inability to act when presented with new threats and challenges. “The US can’t even agree to keep assault rifles out of the hands of teenage boys,” says Hinton to illustrate his point.

While everyone agrees that AI and its irresponsible use can cause real harm, most experts still cannot see how language models or cost-minimization functions could spring to life and become self-proclaimed robot overlords. For example, while Yann LeCun, chief AI scientist at Meta, agrees that machines will eventually become smarter than humans, he disagrees that they will dominate us simply because they are more intelligent, let alone destroy humanity. He points out that even among humans, the smartest are not the most domineering.

Yoshua Bengio, the scientific director of the Montreal Institute for Learning Algorithms, acknowledges the risks involved with AI but warns against excessive fear. He points out that while reasonable fear can incite action, it can also be paralyzing, so people should strive to keep the debate rational.

AI and Capitalism

One of the more interesting perspectives on AI comes from an opinion piece by Ezra Klein for The New York Times. The article was published a month before the Future of Life open letter, yet it provides valuable insight into the sources of public fear and mistrust of AI. The author cites his interview with Ted Chiang, a notable sci-fi writer, who shared his view of the causes of concerns about AI. Chiang suggests that, in essence, the fear is about capitalism, and that the same is true of most fears of technology.

According to Chiang, people are anxious about how new technology might be used against them for profit. For example, many employers are focused on cost-cutting, so they will eagerly adopt AI to save money. Such AI won’t necessarily replace human workers, but it can enable surveillance of workers in warehouses, fast-food chains, and delivery companies, imposing impossible quotas and denying biologically necessary breaks like lunch or trips to the bathroom.

Targeted ads, or even more subtle AI-powered emotional manipulation by companies like Google, Microsoft, and Meta, also seem like realistic threats to our autonomy. That includes not only ramped-up consumption but also political campaigns, including those funded by foreign governments. The ability of AI systems to monitor and manipulate millions of people at the behest of the highest bidder is chilling.

The application of AI in the criminal justice system is also a cause for concern for people in marginalized communities already subjected to over-policing, unjust arrests, and brutality.

Ultimately, people don’t trust the elites or specific groups of experts. They are concerned about AI getting out of control, falling into the wrong hands, or being turned into yet another oppressive mechanism.

Pushback from AI Developers

However, not everyone sees AI development as a doomsday scenario in the making. For example, François Chollet, a deep learning expert and AI researcher at Google, nonchalantly responded to the open letter by calling for a six-month “moratorium on people overreacting to LLMs.”

Still, some of the industry’s most prominent figures, including Bill Gates, expressed their disagreement with the call for a pause in a more diplomatic manner. Gates is one of the most vocal advocates of AI technology and believes it will bring spectacular improvements in healthcare, education, art, and business.

The developer community encompasses many different voices and opinions, but most believe that proactive measures, rather than a complete halt, are the key to addressing concerns effectively. While emphasizing that responsible AI development through ethical guidelines, industry collaboration, and transparent decision-making is vital, industry experts argue that imposing restrictions could stifle innovation and hinder progress in solving critical global challenges.

Moreover, pausing AI development across the global industry wouldn’t be feasible, especially as “less scrupulous actors are especially unlikely to heed it,” to quote Eleanor Watson, an AI ethicist and IEEE member. For example, authoritarian countries could use a global pause to get ahead in developing their own AI systems, which would be neither transparent nor safe. Watson, however, says she is glad the open letter has sparked debate about the ethical implications, accountability, and potential negative consequences of AI systems.

Proponents of uninterrupted AI growth argue that AI has the potential to revolutionize various industries, enhance efficiency, boost the economy, create at least as many jobs as it makes obsolete, and solve complex problems, from climate change and species extinction to healthcare challenges such as finding cures for previously untreatable medical conditions. In their view, exaggerating the urgency and scope of AI threats only feeds unnecessary panic around the topic and hinders any attempt to approach the problem practically.

Addressing Unfounded Fears and Promoting Education

While almost everyone involved in the conversation seems to agree that concerns about AI are valid, some argue that the fears surrounding its development are blown out of proportion or even unfounded. For example, in a piece published on April 2, 2023, on his personal website, industry analyst Josh Bersin highlighted the need to address misconceptions and educate the public about the capabilities and limitations of AI. He emphasized that AI is simply another technology created by humans and that responsible development, combined with appropriate regulations, can mitigate potential risks. Promoting AI literacy and involving diverse stakeholders in decision-making are crucial to building trust and understanding.

Bersin sees the lack of understanding as one of the root causes of unfounded fears. People do not understand what AI tools are, how they work, what they are capable of, or how prevalent they already are in everyday life. For example, when asked how often they use AI for daily tasks, people rarely think of the autocomplete feature on their smartphones, the spam filtering in their inboxes, noise-canceling headphones, or virtual assistants such as Siri or Alexa.
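To make the spam-filtering example concrete, here is a minimal sketch of the kind of statistical learning that has quietly protected inboxes for decades: a naive Bayes text classifier. This is an illustration, not any vendor’s actual filter; the training messages below are invented, and real filters use far larger datasets and more sophisticated models.

```python
from collections import Counter
import math

# Toy training data: (message, is_spam) pairs, invented for illustration.
TRAIN = [
    ("win a free prize now", True),
    ("claim your free money", True),
    ("meeting rescheduled to noon", False),
    ("lunch tomorrow with the team", False),
]

def train(examples):
    """Count how often each word appears in spam vs. legitimate mail."""
    spam_counts, ham_counts = Counter(), Counter()
    n_spam = n_ham = 0
    for text, is_spam in examples:
        if is_spam:
            spam_counts.update(text.split())
            n_spam += 1
        else:
            ham_counts.update(text.split())
            n_ham += 1
    return spam_counts, ham_counts, n_spam, n_ham

def spam_score(text, spam_counts, ham_counts, n_spam, n_ham):
    """Log-odds that a message is spam under naive Bayes with add-one smoothing."""
    score = math.log(n_spam / n_ham)
    spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
    vocab = len(set(spam_counts) | set(ham_counts))
    for w in text.split():
        score += math.log((spam_counts[w] + 1) / (spam_total + vocab))
        score -= math.log((ham_counts[w] + 1) / (ham_total + vocab))
    return score  # positive suggests spam, negative suggests legitimate mail

model = train(TRAIN)
print(spam_score("free prize money", *model))    # positive: flagged as spam
print(spam_score("team lunch at noon", *model))  # negative: passes through
```

The point is not the specific model but its ordinariness: this sort of word-counting arithmetic, scaled up, is the “AI” most people have relied on daily without ever thinking of it as such.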

Bersin also addresses some of the most widely circulated fears, debunking them one by one. He points out that despite all the prognoses about “robots taking away our jobs,” the unemployment rate is at its lowest in 55 years, demand for workers in certain fields remains unmet, and people suffer from overwork and burnout. Instead of taking jobs away, AI can make them more pleasant and efficient. Moreover, AI drives the creation of new high-paying jobs, opening career opportunities for underpaid workers who can upskill through boot camps, thus curbing income inequality rather than accelerating it.

Bersin also points out that even though AI might replicate the biases of the people who train it, such bias is fixable, unlike the implicit bias in people, which is the root cause of the problem and is very difficult to tackle. He cites a study of AI-based hiring as an example: the AI proved less than perfect but still far less biased than human recruiters.
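One reason model bias is seen as more tractable than human bias is that it can be measured directly, repeatedly, and before deployment. The sketch below shows one simple audit, comparing how often a screening model selects applicants from two groups; the groups, outcomes, and numbers are all invented for illustration, and real audits use richer fairness metrics than this single gap.

```python
def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns selection rate per group."""
    totals, picked = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + int(selected)
    return {g: picked[g] / totals[g] for g in totals}

# Invented outcomes from a hypothetical resume-screening model.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                             # {'A': 0.75, 'B': 0.25}
print(f"selection-rate gap: {gap:.2f}")  # 0.50: large enough to flag for review
```

A human recruiter’s implicit bias cannot be audited this way; a model’s can, and once a disparity is quantified, the training data or decision threshold can be adjusted and the audit rerun.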

The Bottom Line

The debate surrounding AI regulation and future development reflects a complex landscape with varying perspectives. While concerns about the risks and ethical implications of AI are valid, there is also a recognition of its potential to transform society positively. Striking a balance between regulation and innovation is crucial for responsible AI development and application. Collaborative efforts involving policymakers, researchers, industry leaders, ethicists, and the general public are necessary to develop robust ethical guidelines, enhance transparency, and establish accountability frameworks to prevent AI abuse. By fostering open dialogue and informed decision-making, society can navigate the path toward AI development that benefits humanity as a whole.
