
It's Called Artificial Intelligence—but What Is Intelligence?

Elizabeth Spelke, a cognitive psychologist at Harvard, has spent her career testing the world’s most sophisticated learning system—the mind of a baby.

Gurgling infants might seem like no match for artificial intelligence. They are terrible at labeling images, hopeless at mining text, and awful at videogames. Then again, babies can do things beyond the reach of any AI. By just a few months old, they've begun to grasp the foundations of language, such as grammar, and they've started to understand how the physical world works and how to adapt to unfamiliar situations.

Yet even experts like Spelke don't understand precisely how babies—or adults, for that matter—learn. That gap points to a puzzle at the heart of modern artificial intelligence: We're not sure what to aim for.

Consider one of the most impressive examples of AI, AlphaZero, a program that plays board games with superhuman skill. After playing thousands of games against itself at hyperspeed, and learning from winning positions, AlphaZero independently discovered several famous chess strategies and even invented new ones. It certainly seems like a machine eclipsing human cognitive abilities. But to learn a game, AlphaZero needs millions more practice games than a person does. Most tellingly, it cannot take what it has learned from one game and apply it to another area.
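The self-play loop behind systems like AlphaZero can be sketched on a toy game. The sketch below uses a tiny Nim variant and a plain lookup table of position values in place of a neural network and tree search; every name and parameter here is illustrative, not AlphaZero's actual design. The agent plays games against itself, then nudges each visited position's value toward the game's outcome:

```python
import random

ACTIONS = (1, 2)   # stones a player may remove per turn
START = 7          # initial pile size; whoever takes the last stone wins
V = {}             # position -> estimated win probability for the player to move

def value(n):
    return V.get(n, 0.5)   # unseen positions start as a coin flip

def best_action(n, eps=0.0):
    legal = [a for a in ACTIONS if a <= n]
    if random.random() < eps:            # occasional exploration
        return random.choice(legal)
    # Taking a stones leaves the opponent in position n - a; prefer to
    # leave them in the position with the LOWEST value for them.
    return min(legal, key=lambda a: value(n - a))

def self_play_episode(eps=0.2, lr=0.1):
    n, mover, trajectory = START, 0, []
    while n > 0:
        trajectory.append((n, mover))
        n -= best_action(n, eps)
        mover ^= 1
    winner = mover ^ 1                   # the player who took the last stone
    for position, who in trajectory:     # back up the outcome to each position
        target = 1.0 if who == winner else 0.0
        V[position] = value(position) + lr * (target - value(position))

random.seed(0)
for _ in range(5000):
    self_play_episode()
```

After a few thousand self-play games the table recovers the game's known strategy (leave your opponent a multiple of three stones) without ever being told it, which is the same outcome-driven discovery the article describes, at toy scale.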

To some members of the AI priesthood, that calls for a new approach. “What makes human intelligence special is its adaptability—its power to generalize to never-seen-before situations,” says François Chollet, a well-known AI engineer and the creator of Keras, a widely used framework for deep learning. In a November research paper, he argued that it's misguided to measure machine intelligence solely according to its skills at specific tasks. “Humans don't start out with skills; they start out with a broad ability to acquire new skills,” he says. “What a strong human chess player is demonstrating isn't the ability to play chess per se, but the potential to acquire any task of a similar difficulty. That's a very different capability.”

Chollet posed a set of problems designed to test an AI program's ability to learn in a more generalized way. Each problem requires arranging colored squares on a grid based on just a few prior examples. It's not hard for a person. But modern machine-learning programs—trained on huge amounts of data—cannot learn from so few examples. As of late April, more than 650 teams had signed up to tackle the challenge; the best AI systems were getting about 12 percent correct.
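A toy illustration of the few-shot flavor of these puzzles: given a couple of input/output grid pairs, induce the transformation and apply it to a new grid. The hand-picked candidate rules below are an assumption of this sketch; the real benchmark is far more open-ended, which is exactly what makes it hard for pattern-hungry models:

```python
# Toy few-shot grid puzzle: infer a rule from two examples.
# The small CANDIDATES list is an illustrative assumption, not the
# actual benchmark's search space.

def flip_h(g):    return [row[::-1] for row in g]       # mirror left-right
def flip_v(g):    return g[::-1]                        # mirror top-bottom
def transpose(g): return [list(r) for r in zip(*g)]     # swap rows/columns

CANDIDATES = {"flip_h": flip_h, "flip_v": flip_v, "transpose": transpose}

def induce(examples):
    """Return the first candidate rule consistent with every example."""
    for name, fn in CANDIDATES.items():
        if all(fn(inp) == out for inp, out in examples):
            return name, fn
    return None, None

examples = [
    ([[1, 0], [2, 0]], [[0, 1], [0, 2]]),   # each row reversed
    ([[3, 4], [0, 5]], [[4, 3], [5, 0]]),
]
name, rule = induce(examples)
```

Two examples suffice here only because the hypothesis space was handed to the program. A person solving the puzzles invents the hypothesis space on the fly; that gap is the point of the challenge.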


It isn't yet clear how humans solve these problems, but Spelke's work offers a few clues. For one thing, it suggests that humans are born with an innate ability to quickly learn certain things, like what a smile means or what happens when you drop something. It also suggests we learn a lot from each other. One recent experiment showed that 3-month-olds appear puzzled when someone grabs a ball in an inefficient way, suggesting that they already appreciate that people cause changes in their environment. Even the most sophisticated and powerful AI systems on the market can't grasp such concepts. A self-driving car, for instance, cannot intuit from common sense what will happen if a truck spills its load.

Josh Tenenbaum, a professor in MIT's Center for Brains, Minds & Machines, works closely with Spelke and uses insights from cognitive science as inspiration for his programs. He says much of modern AI misses the bigger picture, likening it to a Victorian-era satire about a two-dimensional world inhabited by simple geometrical people. “We're sort of exploring Flatland—only some dimensions of basic intelligence,” he says. Tenenbaum believes that, just as evolution has given the human brain certain capabilities, AI programs will need a basic understanding of physics and psychology in order to acquire and use knowledge as efficiently as a baby. And to apply this knowledge to new situations, he says, they'll need to learn in new ways—for example, by drawing causal inferences rather than simply finding patterns. “At some point—you know, if you're intelligent—you realize maybe there's something else out there,” he says.
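The distinction Tenenbaum draws between causal inference and pattern-finding can be made concrete with a toy simulation (the setup below is an illustrative example, not drawn from his work). A hidden common cause makes two variables move together, so a pattern-finder sees a strong association; but intervening on one variable, which is what a causal reasoner checks, leaves the other unchanged:

```python
import random

random.seed(1)

def sample(do_x=None):
    z = random.random() < 0.5            # hidden common cause
    x = z if do_x is None else do_x      # an intervention overrides x
    y = z                                # y depends only on z, never on x
    return x, y

# Observed passively, x and y look tightly linked...
obs = [sample() for _ in range(10_000)]
p_y_given_x1 = sum(y for x, y in obs if x) / sum(1 for x, y in obs if x)

# ...but forcing x to 1 (an intervention) does nothing to y.
do1 = [sample(do_x=True) for _ in range(10_000)]
p_y_do_x1 = sum(y for _, y in do1) / len(do1)
```

The observational estimate is near 1 while the interventional one stays near 0.5: same data-generating world, opposite conclusions about what manipulating x would do. A system that only finds patterns cannot tell the two apart.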

This article appears in the June issue.


Special Series: The Future of Thinking Machines
