Inside the artificial intelligence revolution

Rolling Stone reports: “Welcome to robot nursery school,” Pieter Abbeel says as he opens the door to the Robot Learning Lab on the seventh floor of a sleek new building on the northern edge of the UC-Berkeley campus. The lab is chaotic: bikes leaning against the wall, a dozen or so grad students in disorganized cubicles, whiteboards covered with indecipherable equations. Abbeel, 38, is a thin, wiry guy, dressed in jeans and a stretched-out T-shirt. He moved to the U.S. from Belgium in 2000 to get a Ph.D. in computer science at Stanford and is now one of the world’s foremost experts in understanding the challenge of teaching robots to think intelligently. But first, he has to teach them to “think” at all. “That’s why we call this nursery school,” he jokes.

He introduces me to Brett, a six-foot-tall humanoid robot made by Willow Garage, a high-profile Silicon Valley robotics manufacturer that is now out of business. The lab acquired the robot several years ago to experiment with. Brett, which stands for “Berkeley robot for the elimination of tedious tasks,” is a friendly-looking creature with a big, flat head and widely spaced cameras for eyes, a chunky torso, two arms with grippers for hands and wheels for feet. At the moment, Brett is off-duty and stands in the center of the lab with the mysterious, quiet grace of an unplugged robot. On the floor nearby is a box of toys that Abbeel and the students teach Brett to play with: a wooden hammer, a plastic toy airplane, some giant Lego blocks.

Brett is only one of many robots in the lab. In another cubicle, a nameless 18-inch-tall robot hangs from a sling on the back of a chair. Down in the basement is an industrial robot that plays in the equivalent of a robot sandbox for hours every day, just to see what it can teach itself. Across the street in another Berkeley lab, a surgical robot is learning how to stitch up human flesh, while a graduate student teaches drones to pilot themselves intelligently around objects.
“We don’t want to have drones crashing into things and falling out of the sky,” Abbeel says. “We’re trying to teach them to see.”

Industrial robots have long been programmed with specific tasks: Move arm six inches to the left, grab module, twist to the right, insert module into PC board. Repeat 300 times each hour. These machines are as dumb as lawn mowers. But in recent years, breakthroughs in machine learning – algorithms that roughly mimic the human brain and allow machines to learn things for themselves – have given computers a remarkable ability to recognize speech and identify visual patterns. Abbeel’s goal is to imbue robots with a kind of general intelligence – a way of understanding the world so they can learn to complete tasks on their own. He has a long way to go. “Robots don’t even have the learning capabilities of a two-year-old,” he says. For example, Brett has learned to do simple tasks, such as tying a knot or folding laundry. Things that are simple for humans, such as recognizing that a crumpled ball of fabric on a table is in fact a towel, are surprisingly difficult for a robot, in part because a robot has no common sense, no memory of earlier attempts at towel-folding and, most important, no concept of what a towel is. All it sees is a wad of color. [Continue reading…]
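The contrast the article draws can be sketched in code. Below is a toy illustration, not anything from Abbeel's lab: the first half hard-codes a fixed pick-and-place routine like the "move arm, grab module, insert" sequence described above, and the second half is a minimal trial-and-error learner (an epsilon-greedy bandit, a deliberately simplified stand-in for modern machine learning) that discovers a successful action from a reward signal alone. All command and grasp names are hypothetical.

```python
import random

# Fixed-program approach: every motion is hard-coded in advance.
# (Hypothetical command names; real controllers use vendor-specific APIs.)
SCRIPTED_ROUTINE = [
    ("move_arm", "left", 6),      # move arm six inches to the left
    ("grab", "module", None),
    ("twist", "right", None),
    ("insert", "pc_board", None),
]

def run_scripted(routine, repetitions):
    """Replay the same fixed sequence with no feedback or adaptation."""
    return [step for _ in range(repetitions) for step in routine]

# Learning approach: try actions, observe a reward, keep value estimates.
# This epsilon-greedy loop is a toy stand-in for what "learning" adds.
def learn_by_trial(actions, reward_fn, trials=500, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = {a: 0.0 for a in actions}   # running estimate of each action's reward
    counts = {a: 0 for a in actions}
    for _ in range(trials):
        if rng.random() < epsilon:       # explore occasionally
            action = rng.choice(actions)
        else:                            # otherwise exploit the best known action
            action = max(values, key=values.get)
        r = reward_fn(action)
        counts[action] += 1
        values[action] += (r - values[action]) / counts[action]  # running mean
    return values

if __name__ == "__main__":
    # The scripted robot just repeats itself: 300 cycles of 4 steps.
    trace = run_scripted(SCRIPTED_ROUTINE, 300)
    print(len(trace))

    # The learning robot is never told the answer, only rewarded for success.
    grasps = ["grasp_top", "grasp_side", "grasp_corner"]
    reward = lambda a: 1.0 if a == "grasp_corner" else 0.0
    learned = learn_by_trial(grasps, reward)
    print(max(learned, key=learned.get))
```

The scripted routine never improves no matter how often it runs, while the learner converges on the rewarded grasp after enough trials, which is the gap between "dumb as lawn mowers" and the kind of general learning Abbeel is after.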
