No real Artificial Intelligence in the next 40 years

Characters from Pixar's movie WALL-E live a digital romance

Can computing and science fiction collide to create a true Artificial Intelligence? A.I has been part of our computing landscape for a long time: first as an idea, then taking baby steps as things started to move in the early days of computers. After that came a period of disillusion, and now, with the rise of cloud computing and massively parallel consumer-level chips, A.I is more than ever on our lips and in our minds. But how far are we really from the awakening of a digital form of consciousness?

What’s A.I?

First, let’s provide some context: by Artificial Intelligence, we are not referring to game characters bumping into walls, robotic vacuum cleaners, self-driving cars, or computers that can play chess or Jeopardy. We are alluding to a true form of intelligence that can react, learn, and adapt to situations and parameters that were not part of “the program”. We are talking about a form of intelligence that can rival, or even surpass, human intelligence.

It is said that the field of Artificial Intelligence was originally founded on the claim that human intelligence could one day be described so precisely that it could be simulated by machines. Today, this field is still under intense research, and although milestones have been achieved in terms of specialized intelligence (like playing Jeopardy), much remains to be discovered in the field of general artificial intelligence, or “strong AI” as researchers call it.

The hype (or hope) around A.I

Everyone has heard of artificial intelligence, and it has been the topic of numerous books and movies. From 2001: A Space Odyssey to Blade Runner, Terminator, or I, Robot, it is clear that the potential of A.I has been explored by writers and directors. And if you listen to the conventional wisdom, you may think that: 1) A.I is not that far away, and 2) A.I is mainly an issue of computing power and storage capacity.

With computers doubling their performance every two years, some are already predicting that computers are “not far” from equaling the human brain in terms of raw computing performance. We weren’t so sure, and we had an opportunity to ask Federico Faggin about this when we met him in San Francisco. He is the person who designed the Intel 4004, the world’s first microprocessor, and subsequently designed the Z80, which was a mega-hit as well.

Federico mentioned that in his opinion, even in 40 years, processors still won’t have the raw processing power of the human brain. He added that it is also very difficult to measure the human brain’s performance because we don’t understand it well enough (you can find more of Federico’s thoughts on the subject at Intel Free Press).
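As a quick illustration of why such extrapolations settle very little, here is a back-of-the-envelope sketch in Python. Both performance figures are made-up placeholders for the sake of the arithmetic (the brain estimate in particular is widely disputed), not measurements:

```python
# Hypothetical back-of-the-envelope sketch: how long does a
# "doubling every 2 years" trend take to close a performance gap?
# Both figures below are illustrative assumptions, not measurements;
# as Faggin notes, the brain's "performance" is not well understood.

import math

machine_flops = 1e13   # assumed: a high-end chip, in operations per second
brain_flops = 1e18     # assumed: one popular (and disputed) brain estimate

doublings_needed = math.log2(brain_flops / machine_flops)
years_needed = doublings_needed * 2  # one doubling every 2 years

print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Years at that pace: {years_needed:.0f}")
# ~33 years under these assumptions -- but shifting the brain estimate
# by a couple of orders of magnitude moves the answer by decades,
# which is exactly why this kind of extrapolation proves so little.
```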

But if computers were fast enough and could store enough information, could machines be as smart as humans? For the foreseeable future, no. This is probably one of the biggest myths circulating in and around the world of computing. At this point, A.I is not a matter of how fast computers are, or how much storage they have at their disposal. If that were true, we would already have some form of slow-reacting, but truly intelligent, machines. We don’t. In fact, most A.I researchers would be happy if they could create robotic insects that are as smart and capable as actual insects. DARPA (the U.S. agency that funds advanced military research) would pay millions for that.

Eliane + Hubert, the Ubergizmo founders, with Federico Faggin (middle) - it was a "geek moment"

Searching for a “cognitive computing” model

The real issue is that we don’t understand how human intelligence and “consciousness” work. We don’t know the principles behind them; we can superficially imitate them, but we cannot build something like them, or better, for now. What we need is a “cognitive computing” model (a theory) before we can build machines around it.

If you look at today’s computers, they are based on a computing model rooted in Boolean logic, a theory that George Boole developed in 1854, well before the first computer was invented. It was only in the 1930s that Claude Shannon applied Boolean logic to circuit design. This opened the door to executing algorithms as they had been imagined by Ada Lovelace, the 19th-century English mathematician who is often regarded as the first computer programmer in history.
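To make the Boole-to-Shannon connection concrete, here is a minimal sketch (our illustration, not anything from the interview) of a half adder, the basic building block of binary arithmetic, expressed with nothing but Boolean operators:

```python
# A minimal illustration of Shannon's insight: Boolean logic (Boole, 1854)
# maps directly onto digital circuits. A half adder -- the basic building
# block of binary arithmetic -- reduces to the Boolean operators XOR and AND.

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Add two one-bit values; return (sum, carry)."""
    return (a != b, a and b)  # XOR gives the sum bit, AND gives the carry bit

# Truth table: every behavior of the circuit follows from Boolean logic alone.
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> sum={int(s)}, carry={int(c)}")
```

Chain these together and you get full adders, arithmetic units, and ultimately every computation a modern processor performs; the point is that all of it rests on a single, well-understood theory, which is precisely what A.I still lacks.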

But to create intelligence, do we have to imitate the brain? Maybe not: after all, “planes don’t flap their wings,” as one of my friends argued. That’s true, but when Federico Faggin heard this, he replied that although planes don’t flap their wings, aviation really took off when we understood enough about aerodynamics to build something that could lift a plane into the air efficiently. We have not reached that level of understanding for A.I.

Conclusion

Although nothing indicates today that computers will be as smart as (or smarter than) humans within the next 40 years, there is always a possibility that a research breakthrough leads to a working cognitive computing model. And while we are confident that computing will continue to evolve at a quick pace, and that computers may one day finally reach that level of consciousness, the million-dollar question is: when?

“When” cannot be extrapolated from the simple evolution of computing performance. In fact, computing performance is somewhat irrelevant at this point. Real A.I will be born when the first cognitive computing model comes to life. From there, we will be able to plot a roadmap.

We like the idea of A.I very much, and as “tech lovers” we want to look forward, but we also want to keep it real.
