Friday, April 18, 2008

Neural Network Example

A simple example for testing a neural network implementation is learning to recognize the digits 0..9 from a seven-segment display representation. Figure 19.8
shows the arrangement of the segments and the numerical input and training
output for the neural network, which could be read from a data file. Note that
there are ten output neurons, one for each digit 0..9. This representation is much easier
to learn than, for example, a four-bit binary-encoded output (0000 to 1001).
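The encoding described above can be sketched directly in code. This is a minimal illustration, not the data file from the text: each input row has one bit per segment (a..g, in the conventional labeling of a seven-segment display), and each target row is a one-hot vector over the ten output neurons.

```python
SEGMENTS = "abcdefg"

# Segment patterns for the digits 0..9 on a standard seven-segment display.
DIGIT_SEGMENTS = [
    "abcdef",   # 0
    "bc",       # 1
    "abdeg",    # 2
    "abcdg",    # 3
    "bcfg",     # 4
    "acdfg",    # 5
    "acdefg",   # 6
    "abc",      # 7
    "abcdefg",  # 8
    "abcdfg",   # 9
]

def make_training_set():
    """Return (inputs, targets): 10 binary input rows, 10 one-hot target rows."""
    inputs = [[1.0 if s in pat else 0.0 for s in SEGMENTS]
              for pat in DIGIT_SEGMENTS]
    targets = [[1.0 if i == d else 0.0 for i in range(10)] for d in range(10)]
    return inputs, targets

inputs, targets = make_training_set()
print(inputs[8])   # digit 8 lights every segment: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print(targets[3])  # one-hot target for digit 3
```

The one-hot target layout is what makes the task easy: each output neuron only has to learn a single digit's segment pattern.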



Figure 19.9 shows the decrease of the total error value when applying the backpropagation
procedure to the complete input data set for some 700 iterations.
Eventually the goal of an error value below 0.1 is reached and the algorithm
terminates. The weights stored in the neural net are now ready to be applied to previously
unseen data. The trained network could then be tested
against seven-segment inputs with a single defective segment (always on or always
off).
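The whole experiment can be sketched in plain Python. This is not the book's implementation: the hidden-layer size, learning rate, weight-initialization range, and epoch cap are all assumptions, and the iteration count needed to reach the 0.1 error goal will differ from the roughly 700 reported above.

```python
import math
import random

random.seed(1)  # deterministic weight initialization for reproducibility

# Seven-segment training set (cf. Figure 19.8): one input bit per segment
# a..g, one one-hot output per digit 0..9.
SEGMENTS = "abcdefg"
DIGIT_SEGMENTS = ["abcdef", "bc", "abdeg", "abcdg", "bcfg",
                  "acdfg", "acdefg", "abc", "abcdefg", "abcdfg"]
inputs = [[1.0 if s in pat else 0.0 for s in SEGMENTS] for pat in DIGIT_SEGMENTS]
targets = [[1.0 if i == d else 0.0 for i in range(10)] for d in range(10)]

N_IN, N_HID, N_OUT = 7, 10, 10  # hidden-layer size is an assumption

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Each weight row carries an extra bias weight as its last entry.
w_hid = [[random.uniform(-0.5, 0.5) for _ in range(N_IN + 1)] for _ in range(N_HID)]
w_out = [[random.uniform(-0.5, 0.5) for _ in range(N_HID + 1)] for _ in range(N_OUT)]

def forward(x):
    hid = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in w_hid]
    out = [sigmoid(sum(w * v for w, v in zip(row, hid + [1.0]))) for row in w_out]
    return hid, out

def train_epoch(lr=0.5):
    """One backpropagation pass over all ten patterns; returns total SSE."""
    total = 0.0
    for x, t in zip(inputs, targets):
        hid, out = forward(x)
        total += sum((ti - oi) ** 2 for ti, oi in zip(t, out))
        d_out = [(ti - oi) * oi * (1.0 - oi) for ti, oi in zip(t, out)]
        d_hid = [hi * (1.0 - hi) * sum(d_out[k] * w_out[k][j] for k in range(N_OUT))
                 for j, hi in enumerate(hid)]
        for k in range(N_OUT):
            for j, v in enumerate(hid + [1.0]):
                w_out[k][j] += lr * d_out[k] * v
        for j in range(N_HID):
            for i, v in enumerate(x + [1.0]):
                w_hid[j][i] += lr * d_hid[j] * v
    return total

# Train until the total error drops below 0.1, as in the text (epoch cap added).
err, epoch = float("inf"), 0
while err > 0.1 and epoch < 5000:
    err = train_epoch()
    epoch += 1

def classify(x):
    _, out = forward(x)
    return max(range(10), key=lambda d: out[d])

# Defective display: an 8-pattern with segment g stuck off is exactly the
# 0-pattern, so the trained network should classify it as 0.
faulty = list(inputs[8])
faulty[SEGMENTS.index("g")] = 0.0
print(classify(faulty))
```

The defective-segment test illustrates the point made above: faulty inputs the network has never been trained on are simply mapped to whichever learned digit pattern they most resemble.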



Neural Controller

Control of mobile robots produces tangible actions from sensor inputs. A controller for a robot receives input from its sensors, processes the data using relevant logic, and sends appropriate signals to the actuators. For most large tasks, the ideal mapping from input to action is neither clearly specified nor readily apparent. Such tasks require a control program that must be carefully designed and tested in the robot's operational environment. The creation of these control programs is an ongoing concern in robotics, as the range of viable application domains expands and the complexity of tasks expected of autonomous robots increases.

A number of questions need to be answered before the feed-forward ANN in the figure can be implemented. Among them are:

How can the success of the network be measured?

The robot should perform collision-free left-wall following.

How can the training be performed?

In simulation or on the real robot.

What is the desired motor output for each situation?

The motor function that drives the robot close to the wall on the left-hand
side and avoids collisions.
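One way to answer the last question is to write the desired motor function by hand and use it to generate training pairs for the network. A minimal sketch, assuming a differential drive with speeds in [-1, 1] and distance readings normalized to [0, 1]; all thresholds and speed values here are illustrative assumptions, not taken from the text:

```python
# Hypothetical target function for the wall-following controller: it maps
# normalized distance readings (0 = touching, 1 = far) to desired
# (left, right) motor speeds.
def desired_motor_output(dist_left, dist_front, wall_dist=0.3):
    if dist_front < 0.2:               # obstacle ahead: spin right, away from wall
        return (0.8, -0.8)
    if dist_left > wall_dist + 0.1:    # drifted too far from the wall: steer left
        return (0.4, 0.8)
    if dist_left < wall_dist - 0.1:    # too close to the wall: steer right
        return (0.8, 0.4)
    return (0.8, 0.8)                  # within the target band: drive straight
```

Evaluating such a function over sampled sensor readings yields input/output pairs that could serve as the training set for the feed-forward network, whether the training itself is done in simulation or on the real robot.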

Neural networks have been successfully used to mediate directly between
sensors and actuators to perform certain tasks. Past research has focused on
using neural net controllers to learn individual behaviors. Vershure developed
a working set of behaviors by employing a neural net controller to drive a set
of motors from collision detection, range finding, and target detection sensors
[Vershure et al. 1995]. The on-line learning rule of the neural net was designed
to emulate the action of Pavlovian classical conditioning. The resulting controller
associated actions beneficial to task performance with positive feedback.
Adaptive logic networks (ALNs), a variation of NNs that use only Boolean
operations for computation, were successfully employed in simulation by
Kube et al. to perform simple cooperative group behaviors [Kube, Zhang,
Wang 1993]. The advantage of the ALN representation is that it can easily be mapped
directly to hardware once the controller has reached a suitable working
state.
In Chapter 22 an implementation of a neural controller is described that is
used as an arbitrator or selector among a number of behaviors. Instead of applying a
learning method such as the backpropagation algorithm shown in Section 19.3, a genetic algorithm
is used to evolve a neural network that satisfies the requirements.
