Some see AI as a potential threat. Google's former CEO has an idea to counter it: being able to "unplug" it

In '2001: A Space Odyssey' the protagonist, Dave Bowman, ends up "unplugging" HAL 9000, the intelligent computer that controlled his ship and had killed almost all of its crew. The film depicts precisely the scenario that Eric Schmidt, former CEO of Google, is now bringing back to the present day.

Wonderful and dangerous in equal parts. Schmidt has a divided view of AI. On the one hand, he sees no reason to halt its development, since in his view "we will never reach the climate goals anyway." That does not stop him from also believing that this development may end up posing risks in the future, although he has an idea for addressing them.

Unplug AI. As Axios points out, Schmidt was interviewed on the US network ABC. There he said, "I have never seen innovation on this scale," and noted that if a computer system reaches a point where it can self-improve, "we need to seriously think about pulling the plug."

Be careful with machines that do not want to be unplugged. According to the former CEO of Google, "soon we will have computers running on their own, deciding what they want to do." ABC interviewer George Stephanopoulos asked Schmidt whether a really powerful AI system could prevent itself from being unplugged. Schmidt replied: "In theory, it will be better to have someone with their hand on the plug."

Polymaths at your service. For Schmidt, the potential of generative AI is enormous, and he believes that "everyone is going to have the equivalent of a polymath in their pocket." This means that on our phones we will have an AI system capable of offering all kinds of advanced knowledge in areas such as science, art, history or technology. Leonardo da Vinci and Benjamin Franklin, for example, are considered polymaths.

AI companies don't have enough barriers. In November 2023 Schmidt gave an interview to Axios in which he explained that the companies developing these systems are not prepared to avoid such risks. He talked about systems that, for example, could figure out how to access weapons systems. Two years earlier this was expected to be about 20 years away; Schmidt predicted that such advanced systems would arrive within the next two to four years.


Maybe an AI can supervise AIs. If he were in control of AI development, he would do two things. First, boost the development of AI in the Western world to "ensure that the West wins." Second, identify the worst possible development scenarios and build a second AI system that monitors the first. "Humans won't be able to control AI. But AI systems should be able to."

AI for armed conflicts. Meanwhile, Schmidt has been working on a startup called White Stork, which has provided Ukraine with drones that use AI in "complicated and powerful ways."

And what about regulation? AI regulation should precisely aim to avoid most of these risks. The European AI Act, which came into force on August 1, 2024, is the first attempt to regulate the technology and mitigate those problems. It defines several categories of systems, the most dangerous of which are classified as "high risk": systems that can "have a significant detrimental effect on the health, safety and fundamental rights of people." The definition does not cover mass video surveillance systems, for example, and remains somewhat subjective, but it is certainly a step in the right direction. The question, of course, is whether it will be enough or whether we will end up needing a physical way of "unplugging" the AI.

Others call the threat "absurdly ridiculous." Although some experts and public figures have expressed concern about the evolution of AI, others rule out such a possibility. One of the most insistent voices on the optimistic side is Yann LeCun, head of AI at Meta, who has said that the threat AI poses to humanity is "absurdly ridiculous."

Image | Charles Haynes | Warner Bros. Pictures

In techopiniones | How we will ensure that artificial intelligence does not get out of hand