Melanie Mitchell: “Artificial intelligence will take off when it is inserted into robots that experience the world like children”

Are we overstating the potential of artificial intelligence (AI)? How intelligent is it, really? Will it ever match humans? These are some of the questions that Melanie Mitchell (Los Angeles, 55 years old) asks in her book Artificial Intelligence: A Guide for Thinking Humans, which Captain Swing publishes in Spanish this Monday. Her answer is forceful: we are very far from creating a superintelligence, no matter how much some companies claim otherwise. One fundamental reason is that machines do not reason the way we do. They can do almost any task better than anyone, yet they understand the world worse than a one-year-old baby.

Mitchell provides essential context for gauging the AI phenomenon, a technology that has been at the center of public debate since tools like ChatGPT appeared two years ago. Politicians, business leaders and academics have recently warned about the dangers of these systems, which have dazzled the world with the elaborate texts and the hyper-realistic images and videos they can generate.

The Davis Professor of Complexity at the Santa Fe Institute and professor at Portland State University describes in her book how the most advanced AI systems work and contrasts them with human reasoning. Her conclusion: key capacities such as intuition and knowledge of one's surroundings are, for now, beyond the reach of any machine. Mitchell speaks to EL PAÍS by video call from her home in Santa Fe, New Mexico.

Q. What is AI capable of today?

A. There was a big leap in capabilities a couple of years ago with the arrival of generative AI, including applications like ChatGPT or DALL-E. But these systems, despite appearances, do not have the same kind of understanding of the world that we do. That is why they sometimes do strange things or make things up. They lack reliability and have limitations that are hard to predict. So I think that, while these systems can be very useful and I use them all the time, we have to be careful about how much trust we place in them, especially when there is no human supervision.

Q. Why?

A. They can make serious mistakes. A clear example is autonomous cars. One of the reasons they are not yet everywhere is that they fail where a human rarely would, such as failing to spot a pedestrian or an obstacle. Another example is automatic facial recognition. Machines are extremely good at detecting faces in images, but they have been found to be worse at identifying darker-skinned people and women. With ChatGPT, we have seen countless cases of it making things up.

Professor Mitchell uses AI tools daily, but she recognizes their limitations and always checks their output. Kate Joyce

Q. Does the boom in generative AI help or harm the development of the discipline?

A. In a way, all this hype raises people's expectations, and that then leads to disappointment. It has happened many times throughout the history of AI. In the 1950s and 1960s, it was said that we would have machines with human intelligence within a few years. That didn't happen. Then came the so-called AI winter: funding for research dried up and companies went bankrupt. We are now in a period of great expectation. The question is whether this will really be the time the optimists' predictions come true, or whether we will end up in another great disappointment. It's hard to predict.

Q. Just three years ago, the future was going to be the metaverse. Today no one talks about it anymore. Do you think something similar could happen with AI?

A. It happens all the time with big technological innovations: there is a big hype bubble, then expectations aren't met and people get disappointed, and then the technology finally finds its footing. It turns out to be useful, just not as dazzling as people expected. That is likely what will happen with AI.

Q. You argue that AI systems lack semantic understanding or common sense and therefore cannot be truly intelligent. Do you think that will change at some point?

A. It's possible. There is no reason why we could not develop such a machine. The question is how we get there. ChatGPT has been trained on all the digital books and texts available, as well as the videos and images on the internet. But some things that have to do with common sense and knowledge are not encoded in language and data: they can only be grasped through experience. Perhaps machines will not be able to think more like humans until they experience the world as we do. There is a lot of debate about this in the AI field. I suspect the big leap will come when machines are not only passively trained on language, but also actively experience the world the way a child does.

The history of AI has shown that our intuitions about life and intelligence are often wrong, that in reality everything is much more complex than we thought.

Q. That is, when they take robot form.

A. Yes. An AI embedded in a robot could undergo the same kind of education or development as a child. It is something Alan Turing, one of the fathers of computing, speculated about as early as the 1950s. That idea makes more sense now.

Q. You describe in the book how AI works and how little it has to do with the way we reason. Does the process matter if the system fulfills its function?

A. It depends on what you want to use the system for. My car's GPS can find a route to and from wherever I want to go. It doesn't understand the concept of a road or of traffic, but it does a good job. The question is whether we really want systems that interact more generally with the human world. To what extent will they have to understand it? There was a case in which an autonomous vehicle suddenly slammed on the brakes and the driver didn't know why. It turned out there was a billboard with an advertisement that had a stop sign on it. Can mistakes like that be avoided? Only by understanding the world the way we do.

Q. How far do you think AI can go?

A. I see no reason why we could not develop machines with intelligence comparable to that of humans. But it will be very difficult to get there, and we are not close. In the 1970s it was thought that when machines could play chess at the level of a grandmaster, they would have matched human intelligence. That turned out not to be the case. Then it was said it would happen when they could translate texts or hold conversations. That hasn't happened either. The entire history of AI has shown that our intuitions about life and intelligence are often wrong, that in reality everything is much more complex than we thought. And I think that will continue to be the case. We are going to learn a lot more about what it really means to be intelligent.

Saying that AI systems could go haywire and destroy us is, at the very least, a highly improbable and speculative claim.

Q. Then it will have been worth it.

A. One of the goals of AI is to help us understand what we mean by intelligence. And when we try to implement it in machines, we often realize that it involves many elements we had not considered.

Q. Some AI pioneers, such as Geoffrey Hinton, believe that this technology could become difficult to control. What do you think?

A. AI carries many kinds of dangers. It can be used to produce disinformation and deepfakes. There are algorithmic biases, like the one I mentioned in the case of facial recognition. Hinton and others go further and say these systems could go haywire and destroy us. That claim is, to say the least, highly improbable and speculative. If we developed a superintelligent system, I doubt it would be indifferent to our values, such as the idea that killing all humans is not okay. Focusing so much on this dramatic notion of existential threats to humanity only diverts attention from the things that really matter right now.

Q. Do you think that, as a society, we are adequately addressing those threats that we face today?

A. Yes, although legislation always struggles to keep pace with innovation. The EU has taken a first step with the European AI Act. One of the things we are seeing in the US is copyright lawsuits. All of these systems are trained on enormous amounts of text and images. If their use has not been paid for, is that copyright infringement? The law is unclear because it was enacted long before this technology was developed. We'll see how it is resolved.

Neuroscientists don't understand how the brain works and do experiments to try to make sense of what they see. That's what's happening now with generative AI.

Q. What’s the most impressive AI application you’ve seen lately?

A. What excites me most is the application of these systems to scientific problems. DeepMind, for example, is using AI to predict the structure of proteins. AI is also being used to develop new materials and medicines. We are in a kind of new era of science, perhaps as important as the one that began with the arrival of computers.

Q. You say in the book that those who calibrate deep learning systems, the most advanced AI technique, look more like alchemists than scientists, because they adjust parameters in the machines without knowing exactly what they are doing.

A. Shortly after I wrote the book, people began to talk about prompt engineers (prompts are the instructions given to generative AI tools). Their job is to try to make the system perform as well as possible. It turns out there are people making a lot of money doing that work. And it is pure alchemy; there is no science behind it. It is just a matter of trying things. Some work and some don't, and we have no idea why.
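For readers curious what that trial and error looks like in practice, here is a minimal sketch using the openai Python package; the model name, prompt wordings, and the idea of comparing outputs by hand are illustrative assumptions, not anything Mitchell prescribes.

```python
# Minimal sketch of prompt trial and error: send several phrasings of the
# same request and compare the replies by hand. Requires the `openai`
# package and an OPENAI_API_KEY environment variable; model name is assumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Three wordings of the same task; small changes in phrasing can shift
# output quality in ways nobody can predict in advance.
prompts = [
    "Summarize the plot of Don Quixote in one sentence.",
    "In exactly one sentence, what happens in Don Quixote?",
    "You are a literature teacher. Give a one-sentence summary of Don Quixote.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever is available
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\nREPLY:  {response.choices[0].message.content}\n")
```

There is no scoring function here because, as Mitchell notes, there is no established science for deciding which phrasing will work: the practitioner simply inspects the outputs and keeps whatever seems best.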

Q. It is ironic that those who try to optimize one of the most sophisticated technologies in human history do so blindly.

A. These systems are, in a sense, black boxes. They are enormously complex software systems that haven't been explicitly programmed to do things; they have been trained, they learn from data, and no one can fully explain why they work the way they do. Neuroscientists don't understand how the brain works either, and they do experiments to try to make sense of what they see. That's what's happening now with generative AI.
