As the day when humans have in-home robotic helpers draws nearer, it would be far more efficient if a seasoned robot helper could train the new one without human intervention. Artificial intelligence professor Matthew Taylor of Washington State University and his team are developing this virtual teaching-and-learning technology by having robots teach one another to play video games. The results of the study were published in the journal Connection Science.
If a home robot trained to do a task a certain way, such as cleaning, grows old or outdated, the easiest solution might seem to be extracting its "brain" and installing it in the new one. As technology progresses, however, system incompatibilities could prevent this. And just as computers are replaced every few years, robot helpers would need to be too. The better solution is for the outdated robot to teach its replacement how to do the task as it is expected to be done, customized to each home.
The virtual robots, dubbed agents in this study, were trained to play Pac-Man and StarCraft. After one agent had learned a game, it would teach a novice how to play. The agents faced struggles similar to those of human teachers and students in deciding how much instruction to give: if too little advice was given, the pupil agent could not learn, but if too much was given, the pupil was stifled and never learned to play the game to its full potential.
The system was based on reinforcement learning, in which an agent learns from rewards earned by playing the game successfully. The agents are trained to teach based on the same principles that humans use, though Taylor is beginning to incorporate dog-training techniques as well.
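The article gives no implementation details, but the teacher-student setup it describes is often sketched as tabular Q-learning with a limited "advice budget": the student updates value estimates from rewards, while the teacher may override a fixed number of the student's early action choices. The toy chain world, reward values, and budget below are illustrative assumptions, not taken from the study.

```python
import random

# A tiny chain world: states 0..4, reaching state 4 yields reward 1.
# All names and parameters here are illustrative assumptions.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or right

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(advice_budget=0, teacher_q=None, episodes=200,
          alpha=0.5, gamma=0.9, epsilon=0.1):
    """Q-learning; a teacher (if given) spends its advice budget by
    choosing the student's actions during the earliest steps."""
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    budget = advice_budget
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if teacher_q is not None and budget > 0:
                # Advice: the teacher suggests its own greedy action.
                action = max(ACTIONS, key=lambda a: teacher_q[(state, a)])
                budget -= 1
            elif random.random() < epsilon:
                action = random.choice(ACTIONS)  # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
            next_state, reward, done = step(state, action)
            # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = next_state
    return q

random.seed(0)
teacher = train()                                      # learns alone, slowly
student = train(advice_budget=50, teacher_q=teacher)   # learns with advice
```

In this sketch the budget mirrors the trade-off described above: a budget of zero leaves the student to stumble through long random exploration, while an unlimited budget would mean the student only ever mimics the teacher and never explores on its own.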
Now, before you make the ever-so-original comparison with I, Robot or Terminator, don't. If common sense weren't enough to calm any worries about a hostile robot takeover, Taylor assures that "they're very dumb" when it comes to learning.
Though robotics has come a long way and some robots are fairly advanced, they lack key critical-thinking skills and can shut down when they get too confused. Unfortunately, this happens fairly easily and can stretch development to two to three times the expected training timeframe.
In future research, Taylor will give the agents algorithms that let them teach other skills, beginning with rudimentary tasks and moving up to more advanced work.