Friday, April 18, 2008

NEURAL NETWORKS

The artificial neural network (ANN), often simply called a neural network (NN), is a processing model loosely derived from biological neurons [Gurney 2002]. Neural networks are often used for classification or decision-making problems that do not have a simple or straightforward algorithmic solution. The beauty of a neural network is its ability to learn an input-to-output mapping from a set of training cases without explicit programming, and then to generalize this mapping to cases not seen before. There is a large research community as well as numerous industrial users working on neural network principles and applications [Rumelhart, McClelland 1986], [Zaknich 2003]. In this chapter, we only briefly touch on this subject and concentrate on the topics relevant to mobile robots.

Neural Network Principles:

A neural network is constructed from a number of individual units called neurons that are linked to one another via connections. Each individual neuron has a number of inputs, a processing node, and a single output, while each connection from one neuron to another is associated with a weight. Processing in a neural network takes place in parallel for all neurons. Each neuron constantly (in an endless loop) evaluates (reads) its inputs, calculates its local activation value according to the formula shown below, and produces (writes) an output value.
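To make this concrete, here is a minimal sketch of a single neuron in Python. The function name and the simple threshold output are illustrative choices, not part of the original text; the sigmoid output function discussed next replaces the threshold in practice.

# Minimal sketch of one neuron: weighted-sum activation plus a simple
# threshold output function.
def neuron_output(inputs, weights):
    activation = sum(i * w for i, w in zip(inputs, weights))  # a(I, W)
    return 1.0 if activation >= 0.0 else 0.0                  # o(I, W)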

The activation value of a neuron, a(I, W), is the weighted sum of its inputs, i.e. each input is multiplied by its associated weight and all these terms are added: a(I, W) = Σi Wi · Ii. The neuron's output is determined by the output function o(I, W), for which numerous different models exist. In the simplest case, just thresholding is used for the output function. For our purposes, however, we use the non-linear "sigmoid" output function o(I, W) = 1 / (1 + e^(-a(I, W)/ρ)), defined in Figure 19.1 and shown in Figure 19.2, which has superior characteristics for learning (see Section 19.3). This sigmoid function approximates the Heaviside step function, with parameter ρ controlling the slope of the graph (usually set to 1).
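As a quick illustration, the sigmoid can be written in a few lines of Python, assuming the common form o = 1 / (1 + e^(-a/ρ)) given above:

import math

def sigmoid(a, rho=1.0):
    # Sigmoid output function: approaches the Heaviside step function as
    # rho shrinks toward 0; rho = 1 is the usual setting mentioned above.
    return 1.0 / (1.0 + math.exp(-a / rho))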






Feed-Forward Networks:

A neural net is constructed from a number of interconnected neurons, which are usually arranged in layers. The outputs of one layer of neurons are connected to the inputs of the following layer. The first layer of neurons is called the "input layer", since its inputs are connected to external data, for example sensors reading the outside world. The last layer of neurons is called the "output layer", accordingly, since its outputs are the result of the entire neural network and are made available to the outside. These could be connected, for example, to robot actuators or external decision units. All neuron layers between the input layer and the output layer are called "hidden layers", since their actions cannot be observed directly from the outside.

If all connections go from the outputs of one layer to the inputs of the next layer, and there are no connections within the same layer or connections from a later layer back to an earlier layer, then this type of network is called a "feed-forward network". Feed-forward networks (Figure 19.3) are used for the simplest types of ANNs and differ significantly from feedback networks, which we will not look further into here.
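The following sketch shows one way such a feed-forward pass could be coded in Python; representing each layer as one weight matrix (one row of weights per neuron) is an assumption made for illustration, not a prescribed implementation:

import math

def sigmoid(a, rho=1.0):
    return 1.0 / (1.0 + math.exp(-a / rho))

def feed_forward(inputs, layers):
    # Propagate values layer by layer: the outputs of one layer become
    # the inputs of the next.
    values = inputs
    for weight_matrix in layers:
        values = [sigmoid(sum(w * v for w, v in zip(row, values)))
                  for row in weight_matrix]
    return values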




For most practical applications, a single hidden layer is sufficient, so the typical NN for our purposes has exactly three layers (a short sketch follows the list):

• Input layer (for example input from robot sensors)

• Hidden layer (connected to input and output layer)

• Output layer (for example output to robot actuators)
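Reusing the feed_forward sketch from above, such a three-layer network could be wired up as follows. The layer sizes and random weights are purely illustrative; in a real application the weights would come from training:

import random

# 3 sensor inputs -> 4 hidden neurons -> 2 actuator outputs.
hidden_layer = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
output_layer = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

sensor_values = [0.2, 0.9, 0.4]  # e.g. normalized distance sensor readings
motor_commands = feed_forward(sensor_values, [hidden_layer, output_layer])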

Perceptron:

Incidentally, the first feed-forward network, proposed by Rosenblatt, had only two layers, one input layer and one output layer [Rosenblatt 1962]. However, these so-called "Perceptrons" were severely limited in their computational power because of this restriction, as was discovered soon afterwards by [Minsky, Papert 1969]. Unfortunately, this publication almost brought neural network research to a halt for several years, although the principal restriction applies only to two-layer networks, not to networks with three layers or more. In the standard three-layer network, the input layer is usually simplified in that the input values are taken directly as the neuron activations; no activation function is applied to the input neurons.
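To see why the extra layer matters, here is a hand-wired three-layer threshold network computing XOR, the classic function a two-layer Perceptron cannot represent. The weights are chosen by hand for illustration, not learned:

def step(a):
    return 1 if a >= 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)      # hidden neuron acting as OR
    h_and = step(x1 + x2 - 1.5)      # hidden neuron acting as AND
    return step(h_or - h_and - 0.5)  # output: OR but not AND, i.e. XOR

assert [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]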
