Artificial General Intelligence Is Not as Imminent as You
Might Think
To the average person, it must seem as if the field
of artificial intelligence is making immense progress.
According to the press releases, and some of the more
________ media accounts, OpenAI’s DALL-E 2 can seemingly
create spectacular images from any text; another OpenAI
system called GPT-3 can talk about just about anything; and
a system called Gato that was released in May by DeepMind,
a division of Alphabet, seemingly worked well on every task
the company could throw at it. One of DeepMind's high-level executives even went so far as to brag that in the quest
for artificial general intelligence (AGI), AI that has the
flexibility and resourcefulness of human intelligence, “The
Game is Over!” And Elon Musk said recently that he would
be surprised if we didn’t have artificial general intelligence
by 2029.
Don’t be fooled. Machines may someday be as
smart as people, and perhaps even smarter, but the game is
far from over. There is still an immense amount of work to
be done in making machines that truly can comprehend and
reason about the world around them. What we really need
right now is less posturing and more basic research.
To be sure, there are indeed some ways in which AI
truly is making progress—synthetic images look more and
more realistic, and speech recognition can often work in
noisy environments—but we are still light-years away from
general purpose, human-level AI that can understand the
true meanings of articles and videos, or deal with
unexpected obstacles and interruptions. We are still stuck
on precisely the same challenges that academic scientists
have been pointing out for years: getting AI to be reliable
and getting it to cope with unusual circumstances.
(Source: Scientific American - adapted.)