Friday, February 29, 2008

Make Your Own Fighting Robot

Building a simple antweight R/C combat robot


In this guide we will show you how to make a simple antweight R/C combat robot using a Sabertooth 2X5 R/C motor driver. The 'bot doesn't necessarily have to be used in combat - it is a pretty fun toy to drive around the office too! There is a certain satisfaction you get from driving your own homemade vehicle that you can't get from an imported Walmart toy. The Sabertooth 2X5 R/C will interpret signals from a radio control system and vary the motor speed so you can drive the robot around. The project requires basic knowledge of electronics (volts, amps, battery polarity and wiring) and intermediate soldering skills. The project can be completed in a day, with most of the time spent waiting for glue to dry.

Parts list:
Sabertooth 2X5 R/C motor driver
Motors, wheels and chassis
Hobby radio control transmitter and receiver
Battery (at least 6V)
Ceramic capacitors
Misc wire and soldering tools


Overview:




As you can see, power goes from the battery into the Sabertooth motor driver. The Sabertooth has an internal 5V regulator that it will use to power the receiver. The receiver will pick up your control inputs, i.e. the direction you want to drive in, and pass that information to the Sabertooth. The Sabertooth will then process this information and vary the voltage and direction going to the motors. By varying the voltage going to the motors, you will be able to drive the robot at different speeds and turn it left and right like a tank. In this kit we are using 4 motors - 2 on each side. We will wire the 2 motors on each side in parallel so they appear as one motor to the Sabertooth. The 'flip' channel is purely optional - it just reverses left and right steering if your robot gets flipped upside down so you don't have to think backwards.
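The "mixed mode" arithmetic that the Sabertooth performs in hardware is easy to sketch. The Python below is purely illustrative (nothing here runs on the robot), and assumes the two stick channels have been normalized to the range -1.0 to +1.0:

```python
# Illustrative sketch of the Sabertooth's "mixed mode": one stick axis
# is throttle, the other steering, mixed into left/right motor speeds.
# Channel values are assumed normalized to -1.0 (full reverse) .. +1.0
# (full forward). Nothing here runs on the robot itself.

def mix_tank_drive(throttle, steering):
    """Mix throttle and steering inputs into (left, right) motor speeds."""
    left = throttle + steering
    right = throttle - steering
    # Clamp so a full-throttle, full-steer command stays in range.
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(left), clamp(right)

print(mix_tank_drive(1.0, 0.0))  # (1.0, 1.0): straight ahead
print(mix_tank_drive(0.0, 1.0))  # (1.0, -1.0): spin on the spot
```

Note how full steering with zero throttle drives the two sides in opposite directions, which is what lets a tank-steer robot turn on the spot.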

The chassis:
One of the most time consuming things in building robots is constructing the chassis. If you do not have metalworking machinery available to you, you might want to check out Inertia Labs' website. They offer a package where you can get a CNC machined aluminum chassis and 4 geared motors for $99. It will allow you to start work on your robot right away without having to deal with metal shavings embedded in your eyes.



This is everything you get in the Inertia Labs kit - the gearmotors are tiny!

Setting up the motors:
This is the only part where you will have to do some soldering. Out of the box, the motors do not come with any suppression capacitors. With a DC brushed motor, if you do not have suppression capacitors the commutator will produce small sparks which cause radio interference. Adding 0.1uF or 10nF ceramic capacitors to the motor terminals will solve this problem. You can find such capacitors at RadioShack or any electronic component store. Digi-Key part # 399-4264-ND "CAP .1UF 50V 10% CER RADIAL" is also a good choice. You will need two caps for each motor. Solder one capacitor lead to a motor terminal, and the other lead to the motor's casing. Next, cut and strip some wire to solder to the motor terminals. Put the tires on the wheels and glue the wheels onto the motor's shaft.













Now wait for the glue to dry.

Once the glue is dry, you need to put the motors into the chassis. To hold the motors in place, I used Loctite 415 adhesive. It is similar to superglue, but does a better job bonding metal to metal. It takes a long time to cure though so be patient.




The battery:
Next up, you need a power source, i.e. a battery. To keep things really small and light we used an 800mAh 2s lithium pack. A Sabertooth will run at any voltage above 6V, so you could also use a cheap 7.2V NiMH pack, or even 6 alkaline AAA batteries if you think you can make them fit. Just don't go crazy and dump 18V into these tiny motors, or you will burn them out. Remember to get an appropriate charger for the battery chemistry you use. We used a Common Sense R/C brand pack here, which came with a JST battery connector. These connectors are also sometimes known as P connectors or BEC connectors. Battery connectors are important because they allow you to quickly and safely connect/disconnect power to the robot. Depending on where you buy your battery pack from, you might find this connector already soldered on there for you.





BEC/P/JST battery connectors are available from Maxx Products. Part number 2832 is the female connector (goes on the battery) and part number 2830 is the male connector (will screw into the Sabertooth). You can also find them at your local hobby shop. Other battery connectors will work, but JST ones are nice and small.

Radio and receiver:





To remotely control the robot, you will need a hobby radio control transmitter and receiver. These pieces of equipment can be expensive - anywhere from $80 to several hundred dollars. They are an investment that will last many years though, and if you get a good system you will be able to create many radio controlled vehicles down the line. Remember that the transmitter and receiver must both operate on the same channel, and for ground applications in the USA you should technically use a 75 MHz system, not a 72 MHz system. For the purposes of a simple 'bot like this, a 4 channel transmitter and receiver will suffice. Inertia sells some low-end ones. If you can afford it, the absolute best system to get is the 2.4 GHz Spektrum DX6 or DX7 system with a BR6000 receiver. I used a 6 channel Hitec Optic 6 transmitter and a GWS Naro receiver because that's what we had lying around at the DE office.

Motor driver configuration:
To set up the motor driver, use this guide to the DIP switches:




For this particular robot and transmitter, we used the following settings for the following reasons:
Enable mixed mode (for easy steering on one stick)
Disable exponential (the robot was easy enough to control without exponential)
Lithium mode (because we were using a 2s Lithium Polymer battery)
R/C flip mode (so we could use a switch on the transmitter to reverse left/right if the robot flipped upside down)
Enable autocalibrate (quicker and easier than setting the trim on the transmitter)
Enable signal timeout (Helps prevent the robot driving away when there is a loss of signal, and is required for events)




Once everything is wired up as shown in the earlier diagram, it is a good idea to turn on your radio, plug in the battery and do a test run. Make sure up, down, left and right all behave as you want them to. Make sure all motors are turning in the correct direction. When I got to this stage, I found out that 3 of the motors were going one way, and one was going backwards! So I just swapped the two wires on the odd motor and that solved the problem. Another common problem is having up/down controlled by left/right on your radio - in which case just swap the receiver channels the servo pigtails are going into. Also check that the DIP switch settings on the Sabertooth are correct.




Now the most annoying part of all: getting everything to fit in that tiny space! Pretend you are playing Tetris and you will be motivated to do a better job. I managed to pull it off with the arrangement shown here.





The final touches (and weapons!):
By now you should have a fully functioning 'bot that you can drive around. You might notice, however, that the 'bot jerks around and occasionally goes crazy. This would be due to glitches in radio reception. On a 75 MHz system, antenna placement and orientation is very important! Simply coiling up your antenna and shoving it inside the chassis will not give good results at all. Ideally you want the antenna wire as far away as possible from the motor driver, and mounted vertically, parallel to your transmitter's antenna. Although it would be more rigid, it is important that you do not wrap the antenna around a metal rod. The metal rod will act as a shield, and will absorb the radio waves instead of allowing them to resonate in the antenna. Instead, use a non-conductive rigid tube, such as a nylon rod. Cheaper solutions can also work!







The most basic of all weapons is a wedge. You can make a wedge out of a scrap sheet of aluminum, using a vise to bend it. Mark out a strip 2-3" wide and 6" long, and use the vise to hold it in place as you put some kinks in it. You can also chop off the end at an angle to create a spike.

















You will want to create a very strong bond between the wedge and the chassis, so superglue won't be enough. Screw it down, or use a strong epoxy.




Additional notes
If you want to use this platform to compete in combat, you will have to beef it up a little with additional weight and weaponry. Inertia's Pele bot and Hummer bot have some creative ideas. Even if you don't want to compete in tournaments, the same basic setup can be used to make R/C planes, boats, trucks, bulldozers, tanks, hovercrafts and more!

Back to Dimension Engineering


Multi-Robot Systems

Multi-robot systems (MRS) are becoming one of the most important areas of research in Robotics, due to the challenging nature of the involved research and to the multiple potential applications in areas such as autonomous sensor networks, building surveillance, transportation of large objects, air and underwater pollution monitoring, forest fire detection, transportation systems, or search and rescue after large-scale disasters. Even problems that can be handled by a single multi-skilled robot may benefit from the alternative use of a robot team, since robustness and reliability can often be increased by combining several robots which are individually less robust and reliable.



One can find similar examples in human work: several people in line are able to move a bucket from a water source to a fire faster and with less individual effort. Also, if one or more of the individuals leaves the team, the task can still be accomplished by the remaining ones, even if more slowly than before. Another example is the surveillance of a large area by several people. If adequately coordinated, the team is able to perform the job faster and at lower cost than a single person carrying out all the work, especially if the cost of moving over large distances is prohibitive. A wider range of task domains, distributed sensing and action, and insight into the social and life sciences are other advantages that can be brought by the study and use of MRS. The relevance of MRS comes also from its inherent inter-disciplinarity.

At the Intelligent Systems Lab of the Institute for Systems and Robotics at Instituto Superior Técnico (ISR/IST), we have for several years been pursuing an approach to MRS that merges the contributions of two fields: Systems and Control Theory and Distributed Artificial Intelligence. Some of the current problems in the two areas are creating a natural trend towards joint research approaches to their solution. Distributed Artificial Intelligence focuses on multi-agent systems, either virtual (e.g., software agents) or with a physical body (e.g., robots), with a special interest in organizational issues, distributed decision making and social relations. Systems and Control Theory faces the growing complexity of the actual systems to be modelled and controlled, as well as the challenges of integrating design, real-time and operation aspects of modern control systems, many of them distributed in nature (e.g., large plant process control, robots, communication networks).


Introduction


Some of the most important scientific challenges specific to research on MRS are, to name only the most relevant:

1) The uncertainty, inherent to robots, in sensing and in the results of actions on the environment, which poses serious challenges to existing methodologies for Multi-Agent Systems (MAS), as these rarely take uncertainty into account.

2) The added complexity of the knowledge representation and reasoning, planning, task allocation, scheduling, execution control and learning problems when a distributed setup is considered, i.e., when there are multiple autonomous robots interacting in a common environment, and especially if they have to cooperate in order to achieve their common and individual goals.

3) The noisy and limited-bandwidth communications among teammates in a cooperative setting, a scenario which gets worse as the number of team members increases and/or whenever an opponent team using communications in the same range is present.

4) The need to integrate several methodologies that handle the subsystems of each individual robot (extended to the robot team in a cooperative setting) in a consistent manner, such that the integration becomes the most important problem to be solved, ensuring a timely execution of planned tasks.

Wednesday, February 27, 2008

Line Following Robot

Hey friends,

Thank you all for your response to our blog.

I recently got an email from a reader who wanted to share his information about line followers with all the readers.

It is in PDF format so I couldn't post it here, but if you want it, please leave your email address in a comment and I promise you will receive it in less than one week.

It has information and videos about line followers. Please don't email me requests to forward the information; if you require it, please leave a comment with your email address, as I won't be able to check each and every email (sorry for the inconvenience).

Thank you for your support.

Tuesday, February 26, 2008

Driverless car









The driverless car concept embraces an emerging family of highly automated cognitive and control technologies, ultimately aimed at a full "taxi-like" experience for car users, but without a human driver. Together with alternative propulsion, it is seen by some as the main technological advance in car technology by 2020.

The challenge

The challenges can broadly be divided into the technical and the social. The technical problems are the design of the sensors and control systems required to make such a car work. The social challenge is in getting people to trust the car, getting legislators to permit the car onto the public roads, and untangling the legal issues of liability for any mishaps with no person in charge.
However, any solution can be broken down into four sub-systems:

sensors: the car knows where an obstacle is and what is around it;

navigation: how to get to the target location from the present location;

motion planning: getting through the next few meters, steering, and avoiding obstacles while also abiding by the rules of the road and avoiding harm to the vehicle and others;

control of the vehicle itself: actuating the system's decisions.

In examining every proposed solution, one should look at the following questions:

Is this truly a complete system? Does it drive itself door-to-door?
To what degree is the proposed solution a step towards the complete vision, or is it just a trick?
Is the car 'autonomous', or would it need changes to the infrastructure?
How feasible (technically, economically, and politically) would it be to deploy the entire solution?
Can the system allow for and include existing vehicles driven by humans, or does it need an open field?
How would it cope with unexpected circumstances?






Some have argued that the problem is AI-complete -- that a safe and reliable driverless car would need to use all the skills of an ordinary human being, including commonsense reasoning and affective computing. The concern is that driverless cars will perform worse than human beings in emergency situations that require judgement and the ability to communicate with other drivers and police. For example, how should a driverless car react to a man waving a flare in the middle of the road?

Driver-assistance:

Though these products and projects do not aim explicitly to create a fully autonomous car, they are seen as incremental stepping-stones in that direction. Many of the technologies detailed below will probably serve as components of any future driverless car — meanwhile they are being marketed as gadgets that assist human drivers in one way or another. This approach is slowly trickling into standard cars (e.g. improvements to cruise control).

Driver-assistance mechanisms are of several distinct types: sensorial-informative, actuation-corrective, and systemic.

Sensors
Sensors employed in driverless cars vary from the minimalist ARGO project's monochrome stereoscopy to Mobileye's multi-modal (video, infra-red, laser, radar) approach. The minimalist approach imitates the human situation most closely, while the multi-modal approach is "greedy" in the sense that it seeks to obtain as much information as is possible with current technology, even at the occasional cost of one car's detection system interfering with another's.
Mobileye is a well-respected company that makes detection systems for cars, which are currently only used for driver assistance but are eminently suitable for a full-fledged driverless car. The system also detects objects' motion (direction and speed) and can thus calculate relative speeds and predict collisions.

See also: Japanese infra-red systems, the DARPA Grand Challenge, and road-sign recognition.

Navigation







The ability to plot a route from where the vehicle is to where the user wants to be has been available for several years. These systems, based on the US military's Global Positioning System, are now available as standard car fittings, and use satellite transmissions to ascertain the current location and an on-board street database to derive a route to the target. The more sophisticated systems also receive radio updates on road blockages, and adapt accordingly.
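The route-derivation step can be sketched with Dijkstra's shortest-path algorithm over a toy street graph. Junction names and distances below are invented for illustration; a real system works over a large street database:

```python
import heapq

# Minimal sketch of route derivation: the street database is modelled
# as a weighted graph (distances in km between junctions) and
# Dijkstra's algorithm finds the shortest route.

def shortest_route(graph, start, goal):
    """Return (distance, route) for the shortest path from start to goal."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route
        if node in visited:
            continue
        visited.add(node)
        for neighbour, d in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (dist + d, neighbour, route + [neighbour]))
    return float("inf"), []

# Made-up junctions and distances.
streets = {
    "home":   {"mainst": 1.0, "ringrd": 2.5},
    "mainst": {"centre": 2.0},
    "ringrd": {"centre": 1.0},
    "centre": {},
}
print(shortest_route(streets, "home", "centre"))
# (3.0, ['home', 'mainst', 'centre']) - the main street route wins
```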

Motion planning

Motion planning is a term used in robotics for the process of breaking a task down into atomic robotic motions. This issue, also known as the "navigation problem", though simple for humans, is one of the most challenging in computer science and robotics. The problem is to create an algorithm that can find its way around a room with obstacles, perhaps accomplishing some task on the way.
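A toy version of that problem fits in a few lines: breadth-first search over a grid "room", treating `#` cells as obstacles, returns a path with the fewest moves. This only illustrates the core search idea; real motion planners must handle continuous space, vehicle dynamics and uncertainty:

```python
from collections import deque

# Toy "navigation problem": find a way across a grid room, around
# obstacles marked '#'. BFS returns a shortest path in moves.

def find_path(grid, start, goal):
    """Return a shortest list of (row, col) cells from start to goal."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

room = ["..#.",
        "..#.",
        "...."]
path = find_path(room, (0, 0), (0, 3))
print(len(path) - 1)  # 7 moves: the robot has to skirt around the wall
```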

Control of vehicle:
As automotive technology matures, more and more functions of the underlying engine, gearbox etc. are no longer directly controlled by the driver by mechanical means, but rather via a computer, which receives instructions from the driver as inputs and delivers the desired effect by means of electronic throttle control, and other drive-by-wire elements. Therefore, the technology for a computer to control all aspects of a vehicle is well understood.

Work done in simulation:
While developing control systems for real cars is very costly in terms of both time and money, much work can be done in simulations of varying complexity. Systems developed using simpler simulators can gradually be transferred to more complex simulators, and in the end to real vehicles. Some approaches that rely on learning, for example evolutionary robotics approaches, require starting in a simulation to be viable at all.

Social issues:
Getting people to trust the car.

Getting legislators to permit the car onto the public roads.

Untangling the legal issues of liability for any mishaps with no person in charge.

Despair of progress in the foreseeable future: the UK government seems to see little progress until 2056. See Silicon Networks article and CNET.co.uk News.

Getting people to give up their freedom to drive wherever they want, whenever they want, without the aid of a computer - though mixed systems with some human-driven and some computer-driven cars are possible.

Motivations:

As nearly all car crashes (particularly fatal ones) are caused by human driver error, driverless cars would effectively eliminate nearly all hazards associated with driving, as well as driver fatalities and injuries (traveling by car is currently one of the most deadly forms of transportation, with over a million deaths annually worldwide). This would be especially helpful to people who drive to bars and inebriate themselves; the ability of a car to shuttle them home would practically eliminate drunk driving crashes.
Having the equivalent of a personal chauffeur would be a great convenience:

Time spent commuting could be used for work, leisure, or rest.

Parking in difficult areas becomes less of a concern, as the car can park itself away from a busy airport, for example, and come back when called on a cell-phone.

Taxiing children to school, activities and friends would become solely a matter of granting permission for the car to handle the child's request.

Allowing the visually (and otherwise) impaired to travel independently.

One could sleep overnight during long road trips.

A driverless car would also be a boon to economic efficiency, as cars can be made lighter and more space efficient with the absence of safety technologies rendered redundant with computerized driving. Also the technology would make transportation more efficient and reliable: there may be autonomous or remote-controlled delivery trucks dispatched around the clock to pick up and deliver goods. Moreover, driverless cars would reduce traffic congestion by allowing cars to travel faster and closer together.

Social Costs:

The social costs of this innovation are similar to those of other past technologies: Unemployment, expense and the elimination of the "old way of doing things". See also Luddites.
As with any new labor-saving technology, this would lead to mass layoffs in the driving, cargo, and distribution industries. Taxis would also be automated, effectively eliminating a source of income for the less skilled. A similar if smaller impact is expected in the roadside-catering and other ancillary businesses. However, history shows that any such economic impact on jobs leads to economic benefits elsewhere that create employment, though often not for the exact same people displaced by the new technology.
In order to recoup the development costs, and in order to maximise the profit opportunity that any exciting novelty presents, driverless cars will initially be significantly more expensive than manual cars.
However, the overall technology need not be limited to the operation of vehicles. Once successfully implemented for vehicles, this technology could be used to implement all sorts of routine personal and labor assistants for humans. The concept of "machine" would take on a whole new meaning.
Driving as a personal hobby and sport, and indeed the entire car-oriented sub-culture would be effectively eliminated. However, for those willing to pay for the extra feature, there could be an option to switch between manual and automated driving to make up for that.

Discussion & Future:

Some systems control everything centrally, and in some the vehicle is truly autonomous in the sense that it "thinks" about its own situation in the first person - such a system can integrate with humans, who also think in the first person.

Conversely, a system that centrally manages everything, though easier to build from a conceptual and engineering point of view, would face horrendous economic barriers because of the cost of converting an entire city or country to the new system at once. In order to be compatible with humans, the "first person" point of view is key. This is for three reasons:

1. a distributed scheme in which each component (car) takes care of itself reduces complexity
2. a system that has the concept of first-person operation can understand what a human driver is up to.
3. for the human driver to understand what the driverless car is doing, it needs to operate and "think" in as similar a way to a human as practical (and safe).

Tuesday, February 19, 2008

Sensor

The world we live in is a complex place. We have many senses to help us understand our surroundings. In order to move around safely, robots also need some way of understanding their world. The easiest way of doing this is to add simple sensors to your robot.

Bump Sensor:

So, you've fitted some motors to your robot and it's happily driving around, but it probably keeps colliding with obstacles and getting stuck. You need a way for your robot to detect collisions and move around objects. Enter the humble bump sensor:

A bump sensor is probably one of the easiest ways of letting your robot know it's collided with something. The simplest way to do this is to fix a micro switch to the front of your robot in a way so that when it collides the switch will get pushed in, making an electrical connection. Normally the switch will be held open by an internal spring.

Micro switches are easy to connect to micro controllers because they are either off or on, making them digital. All micro controllers are digital, so this is a match made in heaven. Micro switch 'bump' sensors are easily connected to the Robocore: simply plug them into any free digital socket and away you go.

The following diagram shows a typical circuit for a micro switch bump sensor. The resistor is important because it holds the signal line at ground while the switch is off. Without it the signal line is effectively 'floating' because there is nothing connected to it, and may cause unreliable readings as the processor tries to decide if the line is on or off.
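The effect of that pull-down resistor can be sketched in a few lines of Python. This is illustrative only, not Robocore code, and the 2.5 V logic threshold is an assumed value (real logic inputs vary):

```python
# Sketch of the pull-down circuit's behaviour. With the resistor
# fitted, the signal line always sits at a definite voltage: ground
# when the switch is open, Vcc when it is closed. Without it, an open
# switch leaves the line floating and the reading is undefined.

VCC = 5.0
LOGIC_HIGH_THRESHOLD = 2.5  # assumed threshold; real chips vary

def read_bump_sensor(switch_closed, pulldown_fitted):
    """Return the logic level the processor would read, or None if floating."""
    if switch_closed:
        voltage = VCC        # the closed switch connects the line to Vcc
    elif pulldown_fitted:
        voltage = 0.0        # the resistor holds the line at ground
    else:
        return None          # floating line: the reading is unreliable
    return voltage > LOGIC_HIGH_THRESHOLD

print(read_bump_sensor(True, True))    # True  -> collision detected
print(read_bump_sensor(False, True))   # False -> no collision
print(read_bump_sensor(False, False))  # None  -> floating, unreliable
```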

Light Sensor:

Light sensors are perfect for making your robot more interesting. With some light sensors you can make your robot follow a light, hide in the dark or even turn on some funky headlights if the light level got a bit low (under a table for example).

Light sensors are basically resistors that change their value according to how much light is shining onto them.

They are easy to connect to the Robocore; with a simple circuit they can be plugged straight into a free analogue socket. Getting results from them couldn't be simpler: get the processor to take a reading from the socket that the sensor is connected to. A high value means not much light is falling on the sensor; a low value means a lot of light is falling on the sensor.
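The relationship between light level and reading can be sketched as a voltage divider feeding an ADC. In this illustrative model the light-dependent resistor (LDR) is assumed to sit between the analogue pin and ground, with a fixed 10k resistor up to Vcc, so a dark sensor (high LDR resistance) gives a high reading, matching the convention above. Component values and the 10-bit ADC are assumptions:

```python
# Simulate the analogue reading from an LDR voltage divider.
# Dark -> high LDR resistance -> high reading (matching the text).

VCC = 5.0
R_FIXED = 10_000  # ohms, fixed upper resistor (assumed value)

def adc_reading(ldr_resistance, bits=10):
    """Simulate the ADC reading at the divider's midpoint."""
    voltage = VCC * ldr_resistance / (ldr_resistance + R_FIXED)
    return round(voltage / VCC * (2 ** bits - 1))

dark = adc_reading(100_000)   # LDR resistance is high in the dark
bright = adc_reading(1_000)   # and low in bright light
print(dark, bright)           # 930 93
print("headlights on" if dark > 800 else "headlights off")
```

With a threshold like the 800 used here, the robot could switch on its "funky headlights" when it drives under a table.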

More information will be added soon.

Wednesday, February 13, 2008

Robotics Tutorials - Motors

DC Motors

The beginners' tutorial explained how DC motors work and how to control them with a micro controller or the Robocore. This intermediate tutorial will look a bit more closely at the DC motor and its characteristics.

We learned that reversing the polarity of the supply current controls the direction the motor rotates. This is not the only technique that can be used to control the motor: changing the voltage supplied to the motor can also vary its speed. But your motor controller only has 2 settings, on and off, so how can the voltage be varied? Enter a technique called pulse width modulation.

Pulse Width Modulation:

This is a technique where pulses of electricity are fed into the motor at a fairly fast rate to produce an average voltage effect. To help us understand this, let's look at a few examples.

Let's say that for our pulse we'll turn on 10 volts for 40 ms (40 thousandths of a second) and then we'll turn the voltage off for 10 ms. If we repeat this cycle over and over, the voltage is changing so quickly that the ons and offs become an average voltage. In this case the voltage is off for 20% of the time, so the average voltage to the motor is 80% of 10 volts, which is 8 volts. This will cause the motor to run slower than at 10 volts.

Therefore the speed of the motor can be changed by varying the amount of time the current is on and the current is off.
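The duty-cycle arithmetic from the example above can be captured in one small function (illustrative Python, not Robocore code):

```python
# Average voltage under pulse width modulation: 10 volts on for 40 ms,
# then off for 10 ms, repeated continuously, averages to 8 volts.

def average_voltage(v_supply, t_on_ms, t_off_ms):
    """Average voltage seen by the motor under pulse width modulation."""
    duty_cycle = t_on_ms / (t_on_ms + t_off_ms)
    return v_supply * duty_cycle

print(average_voltage(10.0, 40, 10))  # 8.0, as worked out above
print(average_voltage(10.0, 10, 10))  # a 50% duty cycle gives 5.0 volts
```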

The DACPin command can be used with the motor drivers on the Robocore. Full syntax for the command can be found in the documentation supplied with the BasicX software, but the basic command is:

Call DACPin(Pin, Voltage, DACcounter)

Pin = the output pin
Voltage = byte value between 0 and 255
DACcounter = the function must return a value in this variable. If more than one pin is using DACPin then each pin must use a differently named variable.


The actual results obtained once the pulse has run through the motor driver chips will vary depending on the voltage used but typically a 25% reduction in power can be achieved. With most DC motors any further reductions will not supply the motor with enough power to operate.

Torque:

At this point I would like to say a little bit about torque. Torque is a measure of the motor's turning power: the higher the torque, the more weight the motor can move. DC motors provide different amounts of torque depending on their running speed, which is measured in RPM (revolutions per minute). At low RPM DC motors produce poor torque, and generally the higher the RPM, the better the motor's torque.

So what does this mean in practical robotics terms? Let's say that a robot is propelled by 2 DC motors. Using gears to reduce the overall speed of the robot and running the motors at top speed will result in the most power being delivered to the wheels. Using pulse width modulation to slow the motors will result in less torque being delivered to drive your robot forward.

Pulse width modulation is still a very useful technique, as it gives the programmer control over the robot's speed purely in software. Sometimes you might want to slow your robot down a little, for reversing away from obstacles or turning on the spot for example.

The beginners' section talked generally about what servos are and what they can do. This section is going to look more closely at how servos work and how we can program them.

How Servos Work:

To help us to understand how to control servos it may be helpful to take a closer look at how they work. Inside the servo is a control board, a set of gears, a potentiometer (a variable resistor) and a motor. The potentiometer is connected to the motor via the gear set. A control signal gives the motor a position to rotate to and the motor starts to turn. The potentiometer rotates with the motor, and as it does so its resistance changes. The control circuit monitors its resistance, as soon as it reaches the appropriate value the motor stops and the servo is in the correct position.
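That feedback loop can be sketched as a toy simulation: the controller compares the potentiometer reading with the commanded position and steps the motor until they match. The angles and step size below are arbitrary illustration values, not real servo internals:

```python
# Toy simulation of a servo's position feedback loop.

def seek_position(current_deg, target_deg, step_deg=1.0, max_steps=10_000):
    """Turn step by step toward target_deg; return (final angle, steps taken)."""
    steps = 0
    while abs(target_deg - current_deg) >= step_deg and steps < max_steps:
        # The motor turns one step in the direction of the error; the
        # potentiometer, geared to the motor, tracks the new position.
        current_deg += step_deg if target_deg > current_deg else -step_deg
        steps += 1
    return current_deg, steps

print(seek_position(0.0, 90.0))  # (90.0, 90): stops once the pot matches
```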

Controlling Servos:

Servos are positioned by sending them a continuous stream of pulses. Each pulse normally lasts between 1 ms and 2 ms, depending on the position wanted. The pulse has to be continually repeated for the servo to hold its position, usually around 50 to 60 times a second. It is the width of the pulse that controls the position of the servo, not the number of times it's repeated every second.

A 1 ms pulse will position the servo at 0 degrees, whereas a 2 ms pulse will position the servo at the maximum position it can rotate to. A pulse of 1.5 ms will position the servo halfway round its rotation. The diagram below shows 3 typical pulses.



The diagram is not to scale but hopefully demonstrates that each pulse cycle must be the same length; that is, the combined time that the signal is on and off stays constant.
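The 1ms-to-2ms mapping is easy to express in code. Here is a Python sketch, assuming a linear servo with a 180-degree range (not every servo has exactly that range):

```python
def pulse_to_angle(pulse_ms, max_angle=180.0):
    """Convert a control pulse width to a servo angle.

    Follows the convention described above: a 1 ms pulse means
    0 degrees, a 2 ms pulse means the servo's maximum angle, and
    widths in between map linearly.
    """
    if not 1.0 <= pulse_ms <= 2.0:
        raise ValueError("pulse width must be between 1 ms and 2 ms")
    return (pulse_ms - 1.0) * max_angle

# A 1.5 ms pulse sits exactly in the middle, so the servo
# moves halfway round its rotation.
```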

Programming Tutorials

Beginner - Programming Introduction

This is an introduction to programming using the BasicX microcontroller. If you have never programmed before, this will help you on your way. More experienced readers should still skim through it to pick up the basics of the language.

Programming is a common language between you and your Robocore: it lets you tell it what to do. In robotics, programming is necessary to make machines that can work by themselves, without human intervention. These are called autonomous robots.

1) Let's start by connecting your Robocore to the power supply and the serial port of your computer, and loading up the BasicX software.

2) When this is loaded, click on the Monitor Port menu and select the COM port that the Robocore is connected to (probably COM1). The Download Port menu should be set to the same COM port.

3) Click the editor button. A window should open asking for a filename. Just type a name for your first program, let's say Demoprog. It will tell you that the file does not exist and ask whether you want to create it. Click Yes.

Now let's write our first program!

To test that the BasicX is working, we will make it send a message back to your computer. This is done using the debug.print command.

Type the following program into the editor window (or copy and paste it). The first and last lines should already be present; if they are, just type in the middle line.

Sub main()

debug.print "Robocore test: Everything is working"

End Sub

Before we can download to the BasicX we need to set the chip preferences. This basically tells the chip what we want to be doing with each pin (input, output etc). To do this click on the project menu, followed by chip (or the F7 shortcut). We can leave everything as it is for now so just click on OK.

Now click on the Compile menu and select Compile and Run (keyboard shortcut F5). This will send the code to the Robocore. Going back to the main BasicX window, you should see the text 'Robocore test: Everything is working' appear on the screen.

The Sub main() and End Sub commands tell the Robocore where the program begins and ends.

The debug.print command is very useful for telling us what the Robocore is doing; in the next example we will see more of its capabilities.

Now we will make the Robocore solve an equation for us. To do this we will need a variable. This is a value stored under a name in the memory of the Robocore. We will give this variable the name ' answer '.

Sub main()

Dim answerI As integer ' declaring

Dim answer As string ' variables

answerI = 5*5 ' doing the maths

answer=Cstr(answerI) 'convert to printable format

debug.print "5 times 5 is "; answer 'print answer

End Sub

Run the program as before and you should get the computer telling you the answer.

This program shows that you can print values from your program to the screen using debug.print just by adding a semi-colon and the variable name after the "text". This is very handy, for example, when testing sensors.

The program also uses comments. These have no effect on the program but allow us to add information about each line. The computer ignores everything written on the line after an apostrophe (').

This program has also introduced us to variables, the topic for the next tutorial in the programming series.

Programming Variables

Variables are values stored in computer memory. We need them in almost every program, to allow the Robocore to remember things. There are various types of value, for example strings, which are text values, and integers, which are whole number values.

The computer needs you to tell it what type of value you want to use, otherwise it will get confused.

Each type of variable has its uses and limitations, which is why there are simple methods for switching between variable types. This was shown in the last tutorial; let's look at that program again line by line.

Sub main()
Dim answerI As integer ' declaring
Dim answer As string ' variables


The Dim lines declare the variables; that is, they set them up for use in the program. The type of each variable must be specified using the ' As ' keyword. Here we have an integer and a string.

answerI = 5*5 ' doing the math

Here the integer variable is used, because mathematical functions such as multiplication can be performed on it.

answer=Cstr(answerI) 'convert to printable format

Using the CStr statement, the integer value held in answerI is converted into a string and held in answer. This is a text variable, allowing the answer to be printed to the screen using debug.print.

debug.print "5 times 5 is "; answer 'print answer
End Sub


Variables can have nearly any name, but you cannot use words that are otherwise used in the programming language, and it is best to keep names simple: either a single letter, or a word describing what the variable holds.

Other types of variables are available. One of the types used commonly in robotics is the Boolean variable. This provides us with a true or false value.

For example, a bump sensor will either be pressed or not pressed, so the Boolean variable for it will either be true (pressed) or false (not pressed).

We set up Boolean variables using the following statements:

Dim variable As Boolean
variable = true


Boolean logic is commonly used with conditional statements; these are covered in the next section.

There is also the Single variable type, which allows floating point numbers, e.g. 2.43 or -0.66. This is occasionally useful in robotics when more maths is necessary.

Programming Loops and Conditions

Until now our programming has been limited to running through the code once, after which the program stops. This is clearly not suitable for a robot, which must be able to follow its commands continuously for as long as it is switched on. To do this we use loops.

We also want our robots to make decisions based on data from their sensors; for this we need conditional statements. Loops themselves contain conditional statements that tell them when to run and how long to continue.

There are several different loop commands, letting us control the function of the robot in a variety of ways.

Do Loops:

The simple Do loop can be used to enclose your whole program: when execution reaches the end of the code it returns to the beginning and starts again. We can apply this to the program used in the introduction.

Sub main()
Dim answer As string
Dim answerI As integer

Do 'start of loop

answerI = 5*5
answer=Cstr(answerI)
debug.print "The answer is "; answer

Loop ' end of loop
End Sub


As you will see when you run this program, the statement is no longer displayed once, but continues to print for as long as the program runs (you should press the reset button to stop it).

Now for something slightly more useful. We can add a conditional statement to the do loop by using a ' while ' statement. This lets us tell the Robocore when to stop looping.

Sub main()

Dim answerI As integer
Dim answer As string
Dim i As integer
i=0

Do While (i < 10)
answerI = 5*5
answer=Cstr(answerI)
debug.print "The answer is "; answer
i=i+1
loop

End Sub

Using the conditional statement ' while ' and a new integer variable i, we have made the Robocore loop only 10 times (note that as 10 is not less than 10, the program will stop after the i = 9 iteration).

For Loop:

We can set up the same loop using the For statement. Remove the i=i+1 line, replace the Do While line with:

For i = 1 To 10 Step 1

and replace the Loop line with:

Next

The loop runs from 1 to 10, stepping up by 1 each time, which removes the need for the line i=i+1.

Run the program to check that you have got it right.

Now for something a little more complex. Instead of just doing one sum, we can make the Robocore do 10 different sums for us by changing the equation to

answerI = i*5

Now the Robocore takes the number of loops completed and multiplies it by five. When you run this program you should get the answers 5, 10, 15 and so on up to 50.

If Statement:

We can run a conditional statement without using a loop by using an if statement. These are used as follows:

Const switch as Byte = 5
Sub Main()
Do

if (GetPin(switch)=1) then
debug.print "switch pressed"
call sleep(0.15)
end if

Loop
End Sub


The best way to learn is to experiment. Make your own program with what you've learnt and you'll soon feel more confident.

Tuesday, February 12, 2008

Robotics Tutorials For Beginners - Brains and Sensors

Building robots is great fun, but just imagine a robot that can 'think' for itself. Adding a brain to your robot need not be a hard process, and will allow your robot to follow instructions and rules. Basically, robot brains come in two forms: analogue and digital.

ROBOTIC BRAIN

Analogue Brains

It is possible to control your robot's actuators (motors etc.) using 'hard wired' circuits. By making circuits from capacitors, transistors and resistors you can make robots that follow simple rules. For example, if the robot hits a wall, a simple switch positioned on its front would be pressed in and the robot would be able to reverse and turn, hopefully avoiding the obstacle on its next pass.

Analogue brains have their disadvantages though. They require quite a good knowledge of electronics to design, and once they are built are very difficult to change. If you want to change the behavior of your design you will probably need to totally rebuild your analogue brain.

Analogue circuits are generally not recommended for beginners in electronics or robotics.

Luckily for experimental roboticists there is another option: Digital Brains

Digital Brains

Devices called microcontrollers make perfect 'brains' for robots. They are small computers on a single chip, containing their own memory and processor, and can be programmed from a PC to control your robot in any way you can imagine.

What makes microcontrollers so good is that they can be reprogrammed again and again with just a click of a mouse. There is no need to get the soldering iron out and start messing with components as you would with analogue circuits.

Programming these chips is fairly easy to learn, but may take a bit of patience to fully understand. Learning to program by sticking your head in a textbook and trying to memorize programs is a very slow and boring way to learn. By far the easiest way to master programming is to have a go: work through a few tutorials and try out some examples. By playing about and trying ideas you'll soon get an understanding of how programs work, and how you can write your own.

SENSORS

The world we live in is a complex place. We have many senses to help us understand our surroundings. In order to move around safely, robots also need some way of understanding their world. The easiest way to do this is to add simple sensors to your robot.

Bump Sensor:

So, you've fitted some motors to your robot and it's happily driving around, but it probably keeps colliding with obstacles and getting stuck. You need a way for your robot to detect collisions and steer around objects. Enter the humble bump sensor:

A bump sensor is probably one of the easiest ways of letting your robot know it has collided with something. The simplest approach is to fix a microswitch to the front of your robot so that when it collides, the switch is pushed in, making an electrical connection. Normally the switch is held open by an internal spring.

Microswitches are easy to connect to microcontrollers because they are either off or on, making them digital. All microcontrollers are digital, so this is a match made in heaven. Microswitch 'bump' sensors are easily connected to the Robocore: simply plug them into any free digital socket and away you go.

The following diagram shows a typical circuit for a microswitch bump sensor. The resistor is important because it holds the signal line at ground while the switch is open. Without it the signal line is effectively 'floating', because there is nothing connected to it, which may cause unreliable readings as the processor tries to decide whether the line is on or off.
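Because the pull-down resistor guarantees a clean 0-or-1 reading, handling the switch in software is simple. One wrinkle a real switch adds is contact bounce; a common fix, sketched here in Python with hypothetical sampled values, is to take a short burst of readings and go with the majority:

```python
def switch_pressed(samples):
    """Debounce a bump-switch input by majority vote.

    `samples` is a short burst of digital reads taken a few
    milliseconds apart (1 = pressed, 0 = open, as guaranteed by the
    pull-down resistor). Returning the majority value filters out
    the brief on/off flicker a mechanical switch produces as its
    contacts bounce.
    """
    return sum(samples) > len(samples) // 2

# A bouncy press might read [1, 0, 1, 1, 1]: still counted as pressed.
```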

Light Sensor:

Light sensors are perfect for making your robot more interesting. With them you can make your robot follow a light, hide in the dark, or even turn on some funky headlights if the light level gets a bit low (under a table, for example).

Light sensors are basically resistors that change their value according to how much light is shining onto them.

They are easy to connect to the Robocore: with a simple circuit they can be plugged straight into a free analogue socket. Getting results from them couldn't be simpler. Get the processor to take a reading from the socket that the sensor is connected to. A high value means not much light is falling on the sensor; a low value means a lot of light is falling on it.
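Turning that raw reading into something useful is one line of arithmetic. The sketch below is Python rather than BasicX, and assumes a 10-bit analogue reading (0-1023); both that range and the headlight threshold are illustrative, not fixed properties of any board:

```python
def light_level(adc_value, adc_max=1023):
    """Convert a raw light-sensor reading into 0.0-1.0 brightness.

    As described above, a HIGH raw value means little light, so the
    reading is inverted: 0.0 = pitch dark, 1.0 = fully lit.
    """
    return 1.0 - (adc_value / adc_max)

def headlights_needed(adc_value, threshold=0.3):
    """True when it is dark enough to switch the funky headlights on."""
    return light_level(adc_value) < threshold
```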

MOTORS

Motors are one of the most common methods used to move robots around. They can be connected to gears and wheels and are a perfect way of adding mobility to your robot. There are many types of motor, and this tutorial will cover the main ones useful for robotics.

DC Motors:

These are the most common and easiest to use motors available. They are connected to a power supply by two wires. The direction in which the motor shaft rotates can be changed by reversing the polarity of the motor supply voltage (swapping the positive and negative wires).

Unfortunately motors use quite a bit of current, so you can't just plug them straight into your processor and expect them to work; the processor won't be able to supply enough current. We need a way of turning the motors on and off using the processor. This can be done by many methods, including transistors, relays or a motor driver chip. The Robocore contains two motor driver chips that can control up to 4 DC motors simultaneously. Connecting motors to the Robocore couldn't be simpler: just connect the 2 wires of each motor to one of the motor outputs and you're ready to go. The motor is controlled by 2 output pins on the processor, let's say pin 1 and pin 2. The motor's direction can be changed by different combinations of outputs on these pins. See the table below.

Pin 1   Pin 2   Motor Output
On      Off     Clockwise
Off     On      Anti-clockwise
Off     Off     Motor off

For help programming the chip to do this have a look at the motor programming guide.
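The truth table above translates directly into code. This Python sketch just encodes the table; on a real Robocore the pin handling would go through its own output commands:

```python
def motor_state(pin1_on, pin2_on):
    """Motor behaviour for one pair of driver-chip inputs,
    following the truth table above (True = pin on, False = pin off)."""
    if pin1_on and not pin2_on:
        return "clockwise"
    if pin2_on and not pin1_on:
        return "anti-clockwise"
    if not pin1_on and not pin2_on:
        return "off"
    # Both pins on is not listed in the table above, so treat it as
    # an input to avoid rather than guess at the hardware's response.
    return "invalid"
```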

Servo Motors:

Servo motors are perfect control motors. They can be told to rotate to a specific position, making them ideal for anything that requires precise movement. Most servo motors can rotate through about 90 to 180 degrees, and some rotate through a full 360 degrees. Servos, however, are unable to rotate continuously, meaning they can't be used for driving wheels, but their precise movement makes them ideal for powering legs, controlling rack and pinion steering and much more.

Servo motors are totally self contained. They contain a motor, gearbox and driver electronics, meaning they can be controlled directly from a microcontroller without the need for interface electronics. The picture to the left shows the inside of a servo; you can see the gears, motor and control circuitry.

Servos have 3 wires connected to them. Two are for the power supply, usually between about 5 and 7 volts. The third wire is the control wire, which can be connected directly to the processor or microcontroller (or an output of the Robocore). The position the servo rotates to is controlled by sending pulses of electricity down this wire: changing the width of the pulses directly controls the servo's position.

If you want to learn more about servo motors take a look at the intermediate section of the tutorials.

Stepper Motors:

Stepper motors work in a similar way to DC motors, but where a DC motor has one electromagnetic coil to produce movement, stepper motors contain many. Stepper motors are controlled by turning each coil on and off in sequence. Every time a new coil is energized, the motor rotates a few degrees, called the step angle. Repeating the sequence causes the motor to move a few more degrees and so on, resulting in a continuous rotation of the motor shaft. For example, a stepper motor with a step angle of 7.5 degrees requires 48 pulses for a complete revolution, or 96 pulses for 2 complete revolutions.


The diagram below shows how a stepper motor works. The magnet in the middle of the arrangement is connected to the motor shaft and produces the rotation. The 4 magnets around the outside represent each coil of the stepper motor. As different coils are energized the central magnet is pulled in different directions. By applying the correct sequence of pulses to the coils the motor can be made to rotate.



This design gives stepper motors the upper hand over DC motors. Varying the speed of the input sequence exactly controls the speed of the motor. Also, by keeping count of the sequence, the motor can be made to rotate any number of times to any position.
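Both the step-angle arithmetic and the coil sequence are easy to capture in code. Here is a Python sketch; the wave-drive sequence below, energising one coil at a time, is one common driving scheme among several:

```python
def steps_per_revolution(step_angle_degrees):
    """Number of pulses needed for one full 360-degree turn."""
    return round(360 / step_angle_degrees)

# Wave-drive sequence for a 4-coil motor: exactly one coil energised
# per step. Cycling through it spins the shaft; walking through it in
# reverse order reverses the motor.
WAVE_SEQUENCE = [
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]

def coil_pattern(step_number):
    """Coil on/off states to output at the given step count."""
    return WAVE_SEQUENCE[step_number % len(WAVE_SEQUENCE)]
```

For the 7.5-degree motor mentioned above, steps_per_revolution(7.5) gives the 48 pulses per revolution quoted in the text.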

Robot Timeline - Robot History

Hey friends, it is generally said that if you want to build the future, you should know about the past.

So here I present a history of robotics and the people who made it...

Imagining Robots
(c270 B.C.-1949)
270 B.C.: Ctesibius, a Greek physicist and inventor, makes organs and water clocks with movable figures.

1495: The anthrobot, a mechanical man, is designed by Leonardo da Vinci.

1540: A mandolin-playing lady is created by Italian inventor Gianello Torriano.

1772: Swiss inventors Pierre and Henri Jacquet-Droz build a robotic child called L'Ecrivain (The Writer). It could write messages with up to 40 characters. L'Ecrivain's brain was a mechanical computer. A piano-playing robotic woman is also built at this time.

1801: Joseph Jacquard invents a textile machine called a programmable loom. It is operated by punch cards.

1818: Mary Shelley writes "Frankenstein" about a frightening artificial life form created by Dr. Frankenstein.

1830: American Christopher Spencer designs a cam-operated lathe.

1890s: Nikola Tesla designs the first remote control vehicles. He is also known for his work on radio, induction motors and Tesla coils.


1892: In the United States, Seward Babbitt designs a motorized crane with gripper to remove ingots from a furnace.

1921: The first reference to the word robot appears in a play opening in London, entitled Rossum's Universal Robots. The word robot comes from the Czech word robota, which means drudgery or slave-like labor. Czech playwright Karel Capek first used the term when describing robots that helped people with simple, repetitive tasks. Unfortunately, when the robots in the story were used in battle, they turned against their human owners and took over the world.

1938: Americans Willard Pollard and Harold Roselund design a programmable paint-spraying mechanism for the DeVilbiss Company.

1940s: W. Grey Walter creates an early robot called Elsie the tortoise, or Machina speculatrix.

1941: Science fiction writer Isaac Asimov first uses the word "robotics" to describe the technology of robots and predicts the rise of a powerful robot industry.

1942: Asimov writes a story about robots, Runaround, which contains the "Three laws of robotics".

1946: George Devol patents a general purpose playback device for controlling machines, using a magnetic process recorder. American scientists J. Presper Eckert and John Mauchly build the first large electronic computer, the ENIAC, at the University of Pennsylvania. The second computer, the Whirlwind, solves its first problem at M.I.T. The Whirlwind is the first general-purpose digital computer.

1948: Norbert Wiener, a professor at M.I.T., publishes his book, Cybernetics, which describes the concept of communications and control in electronic, mechanical, and biological systems.

The Birth of the Industrial Robot
(1950-1979)

1951: A teleoperator-equipped articulated arm is designed by Raymond Goertz for the Atomic Energy Commission.

1954: The first programmable robot is designed by George Devol. He coins the term Universal Automation.

1956: Devol and engineer Joseph Engelberger form the world's first robot company, Unimation.

1959: Computer-assisted manufacturing was demonstrated at the Servomechanisms Lab at MIT. Planet Corporation markets the first commercially available robot.

1960s: Johns Hopkins University creates the Beast. It is controlled by hundreds of transistors and uses photocells to seek out wall outlets when its battery runs low.

1960: The General Electric Walking Truck was a 3,000 pound, four-legged robot that could walk four miles an hour. It was powered by a computer. Ralph Moser developed the machine.

1960: Unimation is purchased by Condec Corporation and development of Unimate Robot Systems begins. American Machine and Foundry, later known as AMF Corporation, markets a robot, called the Versatran, designed by Harry Johnson and Veljko Milenkovic.

1961: The first industrial robot was online in a General Motors automobile factory in New Jersey. It was Devol and Engelberger's UNIMATE. It performed spot welding and extracted die castings.

1963: The first artificial robotic arm to be controlled by a computer was designed. The Rancho Arm was designed as a tool for the handicapped and its six joints gave it the flexibility of a human arm.

1964: Artificial intelligence research laboratories are opened at M.I.T., Stanford Research Institute (SRI), Stanford University, and the University of Edinburgh.

1965: DENDRAL was the first expert system or program designed to execute the accumulated knowledge of subject experts.

1968: The octopus-like Tentacle Arm was developed by Marvin Minsky.

1969: The Stanford Arm was the first electrically powered, computer-controlled robot arm.

1970: Shakey was introduced as the first mobile robot controlled by artificial intelligence. SRI International in California produced this small box on wheels that used memory to solve problems and navigate. At Stanford University a robot arm is developed which becomes a standard for research projects. The arm is electrically powered and becomes known as the Stanford Arm.

1970's: Scientists at Edinburgh University create the Freddy robot, taking steps in hand-eye coordination technology. This first assembly robot constructed a toy boat and car from a heap of mixed parts tipped onto a table.

1973: The first commercially available minicomputer-controlled industrial robot is developed by Richard Hohn for Cincinnati Milacron Corporation. The robot is called the T3, The Tomorrow Tool.

1974: A robotic arm (the Silver Arm) that performed small-parts assembly using feedback from touch and pressure sensors was designed. Professor Scheinman, the developer of the Stanford Arm, forms Vicarm Inc. to market a version of the arm for industrial applications. The new arm is controlled by a minicomputer.

1976: Robot arms are used on Viking 1 and 2 space probes. Vicarm Inc. incorporates a microcomputer into the Vicarm design.

1977: ASEA, a European robot company, offers two sizes of electric powered industrial robots. Both robots use a microcomputer controller for programming and operation. Unimation purchases Vicarm Inc. during this year.

1978: Vicarm, Unimation creates the PUMA (Programmable Universal Machine for Assembly) robot with support from General Motors. Many research labs still use this assembly robot.

1979: The Stanford Cart crosses a chair-filled room without human assistance. The cart is equipped with a television camera mounted on a rail that takes pictures and relays them to a computer so that distances can be analyzed.

The Robotic Age Takes Off
(1980-Present)

1980: The robot industry starts its rapid growth, with a new robot or company entering the market every month.

1983: The Remote Reconnaissance Vehicle became the first vehicle to enter the basement of Three Mile Island after a meltdown in March 1979. This vehicle worked four years to survey and clean up the flooded basement.

1984: The CoreSampler drilled core samples from the walls of the Three Mile Island basement to determine the depth and severity of radioactive material that soaked into the concrete.

1984: The Terregator pioneered exploration, road following and mine mapping. It was the world's first rugged, capable, autonomous outdoor navigation robot.

1985: REX was the world's first autonomous digging machine. It sensed and planned to excavate without damaging buried gas pipes. This robot used a hypersonic air knife to erode soil around pipes.

1986: The Remote Work Vehicle was developed for a broad agenda of clean-up operations like washing contaminated surfaces, removing sediments, demolishing radiated structures, applying surface treatments, and packaging and transporting materials.

1986: NavLab I pioneered high performance outdoor navigation. NavLab deployed racks of computers, laser scanners, and color cameras providing cutting-edge perception in its time.

1988: The Pipe Mapping robot combines magnetic and radar data over a dense grid to infer the depth and location of buried pipes, outperforming hand-held pipe detectors.

1988: The Locomotion robot features a chassis that steers and propels all its wheels, so it can spin, drive, or spin while driving. Its software can emulate a tank, a car or any other wheeled machine.

1990: The Ambler is a walking robot with energy-efficient overlapping gaits, developed as a testbed for research into walking robots operating in rugged terrain.

1992: Neptune articulates magnetic tracks to roam the interiors of fuel storage tanks. It evaluates deterioration in floors and walls using acoustic navigation and corrosion sensing.

1992: Dante I rappels mountain sides using a spherical laser scanner and foot sensors. It entered the crater of Antarctica's Mt. Erebus but did not reach the lava lake.

1992: NavLab II was the automated HUMMER that pioneered trinocular vision, WARP computing, and sensor fusion to navigate offroad terrain.

1993: Demeter autonomously mows hay and alfalfa. It navigates with GPS and uses camera vision to differentiate cut and uncut crops.

1994: Dante II, built by CMU Robotics, samples volcanic gases from the Mt. Spurr volcano in Alaska.

1997: NASA's Pathfinder lands on Mars and the Sojourner rover captures images.

2000: Humanoid robots, Honda Asimo, Sony Dream Robots (SDR), and the Aibo robot dog are showcased.

2004: The humanoid, Robosapien is created by US robotics physicist and BEAM expert, Dr. Mark W Tilden.
