At CES 2013 I saw 3D-printed skateboards, flowers, and gear assemblies, and meanwhile there are now plans to print everything from body parts to buildings. So printing robots was only a matter of time.
InMoov is a full-size humanoid robot made from 3D-printed parts. Designed and built by Gael Langevin of Factices Ateliers in France, InMoov began last year as a hand, then an arm. It’s now two arms and a head.
Here’s a scene that will be familiar to anyone who’s ever taken an introductory economics course. The professor has just finished explaining that in economics, "efficiency" means that there are no possible gains from trade. Then some loudmouth kid in the back raises his hand and asks: "Wait, so if one person has everything, and everyone else has nothing and just dies, is that an ‘efficient’ outcome?" The professor, looking a little chagrined, responds: "Well, yes, it is." And the whole class rolls their eyes and thinks: Economists.
For most of modern history, inequality has been a manageable problem. The reason is that no matter how unequal things get, most people are born with something valuable: the ability to work, to learn, and to earn money. In economist-ese, people are born with an "endowment of human capital." It’s just not possible for one person to have everything, as in the nightmare example in Econ 101.
For most of modern history, two-thirds of the income of most rich nations has gone to pay salaries and wages for people who work, while one-third has gone to pay dividends, capital gains, interest, rent, etc. to the people who own capital. This two-thirds/one-third division was so stable that people began to believe it would last forever. But in the past ten years, something has changed. Labor’s share of income has steadily declined, falling by several percentage points since 2000. It now sits at around 60% or lower. The fall of labor income, and the rise of capital income, has contributed to America’s growing inequality.
Robotic technology has taken historic strides in the past decade. Nanorobotics is changing how scientists, doctors and surgeons think about the future of medicine. UAVs (unmanned aerial vehicles) have altered how states engage in conflict. And automotive technology now allows cars to parallel park themselves.
Robots are here to stay, and Americans might want to get used to them, says Alan Bignall, president and CEO of ReconRobotics.
Almost everyone can remember a time when they received bad service at a restaurant. Usually, human error plays a major role in the experience. But what would happen if humans were taken out of the equation? How about making the entire situation a bit more robotic? Well, look no further — a restaurant known for its service and its food is run almost entirely by a staff of robots.
Robot Restaurant, located in Heilongjiang province, China, has been open since last June, and it has been successful. When patrons enter, a robot greets them by extending its mechanical arm and saying, "Earth person, hello. Welcome to the Robot Restaurant." However, welcoming guests to the restaurant is not all the robots do. Robots wait the tables, cook the noodles, and even entertain the patrons by singing to them. As meals are prepared, they are brought out on a conveyor belt, and the waiterbots take it from there.
"In the future, our lives will be full of robots," he says.
Ishiguro’s lecture about the possibilities for the relationship between humans and robots attracted a packed audience. He compared the evolution of robots to the evolution of cars. "Once we have developed practical robots, we can spend more and more time building autonomy," he said.
Autonomous androids that look just like you could conduct your business, attend conferences, and go shopping on your behalf, while you sat in the comfort of your home. A camera would monitor your facial expressions and your android’s face would mirror them. Ishiguro says there is even a psychological phenomenon whereby, if someone touches your android, you feel it. "It’s a very tactile sensation," he says.
Ishiguro has previously left his twin android, developed at a cost of $1 million, to deliver pre-recorded lectures at his place of employment, Osaka University in Japan, while he went overseas. He also – when double-booked for a conference – emailed the conference organisers to say that he would have to send his android to one of the events. Both conferences replied: "We want the android!"
Fast forward 4 years (and almost 3 Moore’s Law cycles) and it seems as though his predictions are no nearer coming true than they were when he made them. David Hanson’s robotic skin has gotten more realistic and more people know about Hiroshi Ishiguro’s real-looking androids, but many important developments still stand in the way of our considering robots something we could one day fall in love with.
While it is true that robot lovers are pretty scarce on the ground right now, it seems unreasonable to think either technology or society is changing fast enough to accommodate sex, much less the emotion of love, with a machine in the space of only four years. While Nikki’s expectations may be unrealistic, her conclusion certainly is not:
As counter-intuitive as human-robot relationships might seem today, there are many reasons to think that love and sex with robots will happen. Robots are already better at math, logic, chess, Jeopardy and many other activities. Is it not probable that eventually, as Levy says, a robot companion will provide much more than a human companion in every conceivable way?
And yet she completely misses the real question! In the long run it might not be whether machines can be better lovers than humans, but whether humans can be good enough lovers for the machines.
Infants spend their first few months learning to find their way around and manipulate objects, and they are very flexible about it: cups can come in different shapes and sizes, but they all have handles. So do pitchers, which is why we pick them up the same way.
Similarly, your personal robot in the future will need the ability to generalize — for example, to handle your particular set of dishes and put them in your particular dishwasher.
In Cornell’s Personal Robotics Laboratory, a team led by Ashutosh Saxena, assistant professor of computer science, is teaching robots to manipulate objects and find their way around in new environments. They reported two examples of their work at the 2011 Robotics: Science and Systems Conference June 27 at the University of Southern California.
A common thread running through the research is “machine learning” — programming a computer to observe events and find commonalities. With the right programming, for example, a computer can look at a wide array of cups, find their common characteristics and then be able to identify cups in the future. A similar process can teach a robot to find a cup’s handle and grasp it correctly.
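The "find commonalities, then generalize" idea can be pictured with a toy nearest-neighbor classifier. This is a minimal sketch under my own assumptions: the numeric features (height, width, handle presence) and the example objects are invented for illustration, not the Cornell system's actual representations.

```python
def nearest_label(query, examples):
    """Return the label of the training example closest to `query`.

    Each example is a ([feature, ...], label) pair; distance is
    squared Euclidean. A real robot vision system would learn far
    richer features than this hand-picked trio.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(examples, key=lambda ex: dist(ex[0], query))
    return label

# Hypothetical training set: [height_cm, width_cm, has_handle] -> label
training = [
    ([9.0, 8.0, 1.0], "cup"),
    ([10.0, 7.5, 1.0], "cup"),
    ([25.0, 15.0, 1.0], "pitcher"),
    ([2.0, 24.0, 0.0], "plate"),
]

# A cup the classifier has never seen is still recognized as a cup.
print(nearest_label([9.5, 8.2, 1.0], training))
```

The same pattern scales up: show the learner many labeled examples, let it extract what they share, and it can classify objects it was never shown.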
Other researchers have gone this far, but Saxena’s team has found that placing objects is harder than picking them up, because there are many options. A cup is placed upright on a table, but upside down in a dishwasher, so the robot must be trained to make those decisions.
“We just show the robot some examples and it learns to generalize the placing strategies and applies them to objects that were not seen before,” Saxena explained. “It learns about stability and other criteria for good placing for plates and cups, and when it sees a new object — a bowl — it applies them.”
In early tests they placed a plate, mug, martini glass, bowl, candy cane, disc, spoon and tuning fork on a flat surface, on a hook, in a stemware holder, in a pen holder and on several different dish racks.
Let’s not be silly here: robots don’t want to kill all humans…they just want to take all their jobs. The accelerating rise in robot labor over the past decade, and its expansion into all areas of production, have led many to worry about the future of human workers. Yet how extensive is the robotic takeover of labor? Our friends at Mezzmer Eyeglasses did some impressive research and created an even more impressive infographic explaining the present and future of robots in the workplace. Check out the Singularity Hub exclusive image below. With 9 million robots working in the world, and 4 million+ more scheduled to arrive next year, we’re clearly entering a new age of automation. But will it bring a new era of unemployment with it?
Australian scientists have invented a new breed of robots called Lingodroids, programmed to make, use, and share language. The bots can coin words to describe places they have been, places they want to go, and plans for getting there. “When they need a new word, they invent one,” says Janet Wiles, a cognitive scientist at the University of Queensland who leads an interdisciplinary team on the project.
The rolling chatterboxes “see” using 360-degree cameras, laser range finders, and sonar. A microphone functions as their ears, and a speaker acts as a voice box, emitting the familiar beeps of a touch-tone phone. As for brains, Wiles outfitted each Lingodroid with an alphabet of beeps that correspond to letters. Then she programmed them to play a series of games in which they paired the letters into nonsensical combinations like “ja” or “ku” and joined those syllables to coin neologisms as needed. For example, in one game two robots roamed through a course and met in an unfamiliar part of it. The meeting triggered one robot to name the spot “jaya” and share the new word with its partner, who then added the word to its lexicon. In this way the robots slowly built a new language to describe their travels and eventually even learned to communicate and understand directions.
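The naming game described above can be sketched in a few lines. This is a hedged toy model under my own assumptions: the syllable inventory, the dictionary-based lexicon, and the function names are illustrative inventions; the real Lingodroids exchange tone pairs over speakers and microphones.

```python
import random

# Hypothetical syllable inventory; the real robots pair beeps/letters.
SYLLABLES = ["ja", "ku", "ya", "pi", "ro", "he"]

def coin_word(lexicon, rng):
    """Invent a two-syllable word not already used in this lexicon."""
    while True:
        word = rng.choice(SYLLABLES) + rng.choice(SYLLABLES)
        if word not in lexicon.values():
            return word

def meet(robot_a, robot_b, place, rng):
    """Two robots meet at a place; if it has no name, one coins a
    word and the other adds it to its own lexicon."""
    if place not in robot_a:
        robot_a[place] = coin_word(robot_a, rng)
    robot_b[place] = robot_a[place]  # the shared word spreads

rng = random.Random(0)           # seeded for reproducibility
a_lexicon, b_lexicon = {}, {}
meet(a_lexicon, b_lexicon, "unfamiliar-corner", rng)
print(a_lexicon["unfamiliar-corner"] == b_lexicon["unfamiliar-corner"])
```

Repeated meetings at different places gradually build a shared vocabulary, which is the core mechanism behind the robots' invented language.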
Wiles notes that although the language may seem simple, for robots, grasping spatial information is incredibly complex. “We don’t realize how sophisticated our use of language to describe the world around us is,” she says. Ultimately, she hopes to teach her robots to chat up humans, paving the way for robotic caregivers, companions, and butlers.
They’re not quite psychic yet, but machines are getting better at reading your mind. Researchers have invented a new, noninvasive method for recording patterns of brain activity and using them to steer a robot. Scientists hope the technology will give "locked in" patients—those too disabled to communicate with the outside world—the ability to interact with others and even give the illusion of being physically present, or "telepresent," with friends and family.
Previous brain-machine interface systems have made it possible for people to control robots, cursors, or prosthetics with conscious thought, but they often take a lot of effort and concentration, says José del R. Millán, a biomedical engineer at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, who develops brain-machine interface systems that don’t need to be implanted into the brain. Millán’s goal is to make control as easy as driving a car on a highway. A partially autonomous robot would allow a user to stop concentrating on tasks that he or she would normally do subconsciously, such as following a person or avoiding running into walls. But if the robot encounters an unexpected event and needs to make a split-second decision, the user’s thoughts can override the robot’s artificial intelligence.
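The shared-control idea — autonomy by default, user override when the decoded intent is strong — can be sketched as a simple arbitration rule. This is a minimal illustration under my own assumptions: the command names, confidence values, and threshold are invented, not EPFL's actual interface.

```python
# Default behavior the robot handles on its own, e.g. wall avoidance
# or following a corridor (names are illustrative placeholders).
AUTONOMY_DEFAULT = "follow_corridor"

def choose_action(decoded_command, confidence, threshold=0.8):
    """Arbitrate between the robot's autonomy and the user's intent.

    The decoded brain command wins only when the decoder is confident
    enough; otherwise the robot keeps doing the routine driving, so
    the user need not concentrate continuously.
    """
    if decoded_command is not None and confidence >= threshold:
        return decoded_command      # split-second user override
    return AUTONOMY_DEFAULT         # robot handles the routine part

print(choose_action(None, 0.0))          # no signal: robot drives itself
print(choose_action("turn_left", 0.9))   # confident intent: user overrides
print(choose_action("turn_left", 0.3))   # noisy signal: ignored
```

The design point is the threshold: it trades off responsiveness to the user against robustness to noisy brain-signal decoding.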
Blogging the Singularity Bloggers:
Chris Williamson: Filmmaker, science enthusiast, and futurist concerned with the accelerating nature of technological growth and where it's headed. He is currently studying for his MFA in Film Production.
Frank Whittemore: An IT professional since 1961, Frank finds the accelerating change of technology no surprise, but the wonder will never cease! Be sure to check out Frank's blog about Life Extension!