Monday, May 5, 2008

Robots Think Like Humans


You know the future has arrived when scientists bring together two staples of science fiction: lasers and robots.

But rather than a 50-tonne behemoth dealing death with a massive light cannon, the El-E robot uses lasers to think like a human.

Scientists at the Georgia Institute of Technology and the Emory University School of Medicine believe laser pointers are the answer to the difficulty robots have in processing the imperfections of the real world.

Ordering El-E to retrieve an item is as simple as shining a laser pointer on the object you want. The pointer can also be used a second time to tell El-E to put the object in a certain place or give it to a specific person.
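To make that interaction concrete, here is a minimal sketch in Python of the two-click flow described above: the first laser selection picks the object, and an optional second selection picks the destination. The helper functions (wait_for_laser_click, fetch, deliver_to_user, place_at) are illustrative assumptions, not El-E's published interface.

    def run_fetch_task(wait_for_laser_click, fetch, deliver_to_user, place_at):
        # First click: the user points the laser at the object to retrieve.
        object_position = wait_for_laser_click()
        fetch(object_position)
        # Optional second click: where to put the object down.
        target = wait_for_laser_click(timeout_s=10.0)
        if target is None:
            deliver_to_user()    # no second click: hand the object to the person
        else:
            place_at(target)     # place it where the second laser spot landed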

Above, Charlie Kemp, director of the Healthcare Robotics Center at Georgia Tech and Emory, accepts a towel from El-E.




El-E, named after its arm's resemblance to an elephant trunk, as seen here, can grasp a range of household items including towels, pill bottles and telephones from floors or tables.

The robot's ability to pick up items from both floors and shelves could make it a lifeline for people who have mobility difficulties.

El-E's creators are gathering input from patients with ALS (amyotrophic lateral sclerosis, also known as Lou Gehrig's disease) and from doctors to prepare El-E to assist patients with severe mobility challenges.

Researchers from Georgia Tech and Emory are working with an expert on human-computer interaction to ensure the robot will one day be ready for use in people's homes.



El-E uses a custom-built omnidirectional camera to see most of the room. After it detects that a selection has been made with the laser pointer, the robot points two cameras at the laser spot and triangulates its position in three-dimensional space.
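As a rough illustration of that triangulation step, the sketch below finds the 3D point closest to two camera rays (the midpoint of their common perpendicular). It assumes each camera reports the laser spot as a ray with an origin and a direction in a shared robot frame; the function and variable names are illustrative, not taken from El-E's software.

    import numpy as np

    def triangulate(o1, d1, o2, d2):
        # Return the 3D point closest to both rays (midpoint method).
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        b = o2 - o1
        a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
        denom = a11 * a22 - a12 * a12          # near zero when the rays are parallel
        t1 = (a22 * (d1 @ b) - a12 * (d2 @ b)) / denom
        t2 = (a12 * (d1 @ b) - a11 * (d2 @ b)) / denom
        p1, p2 = o1 + t1 * d1, o2 + t2 * d2
        return (p1 + p2) / 2                   # midpoint of closest approach

    # Example: two cameras 0.3 m apart, both seeing a laser spot at (1, 0, 0).
    spot = triangulate(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                       np.array([0.0, 0.3, 0.0]), np.array([1.0, -0.3, 0.0]))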

Once it has reached an object, sensors in its hand guide the opening and closing of its gripper until it has a firm hold, as pictured here.
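Below is a minimal sketch of that kind of force-guided closing loop. The read_fingertip_force and close_gripper_by helpers and the grip-force target are assumptions for illustration; El-E's actual grasp controller is not described in detail here.

    FORCE_TARGET_N = 2.0   # assumed force for a firm but gentle hold
    STEP_M = 0.002         # close the gripper 2 mm per iteration

    def grasp(read_fingertip_force, close_gripper_by, max_steps=100):
        # Close the gripper in small steps until the fingertips feel contact.
        for _ in range(max_steps):
            if read_fingertip_force() >= FORCE_TARGET_N:
                return True          # firm hold achieved
            close_gripper_by(STEP_M)
        return False                 # closed fully without getting a grip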

The robot can distinguish between a face, a table and the floor, so it can carefully present an object to a person or set it down on a table or the floor.



Researchers say one of the key benefits of the system is that El-E does not need to understand what objects are called, instead relying on an array of sensors similar to those seen here.

El-E's power and computation are entirely on board; its software runs Ubuntu Linux on a Mac Mini.

Kemp said: "We humans naturally point at things but we aren't very accurate, so we use the context of the situation or verbal cues to clarify which object is important.

"Robots have some ability to retrieve specific, predefined objects, such as a soda can, but retrieving generic everyday objects has been a challenge for robots."

Georgia Tech and Emory researchers are now working to help El-E expand its capabilities to include switching lights on and off when the user selects a light switch and opening and closing doors when the user selects a door knob.
