Monday, August 29, 2011

Meet DARwIn-OP!!!!!


DARwIn-OP (Dynamic Anthropomorphic Robot with Intelligence - Open Platform) is an affordable miniature-humanoid-robot platform with advanced computational power, sophisticated sensors, high payload capacity, and dynamic motion ability, enabling many exciting research, education, and outreach activities. Sponsored by the National Science Foundation (NSF) in the United States, DARwIn-OP has been developed by RoMeLa at Virginia Tech in collaboration with the University of Pennsylvania, Purdue University, and Robotis Co., and is based on the award-winning DARwIn series of humanoid robots in development since 2004.
DARwIn-OP is a true open platform: users are encouraged to modify it in both hardware and software, and various software implementations are possible (C++, Python, LabVIEW, MATLAB, etc.). The open-source hardware is not only user-serviceable thanks to its modular design, but can also be fabricated by the user. Publicly open CAD files for all of its parts, and instruction manuals for fabrication and assembly, are available online for free.


A number of DARwIn-OP units will be fabricated by Robotis Co. for distribution to 11 partner institutions (including major research universities, RUI institutions, a women's college, and two local high schools), which will use them in classroom teaching and projects as well as outreach activities.
The objectives of this annual workshop are to: introduce DARwIn-OP to the humanoid robotics community to broaden the DARwIn-OP project and form a user community; train users for research, education, and outreach activities; disseminate results of DARwIn-OP's use in the classroom; and obtain feedback from users for future improvements.

Thursday, August 25, 2011

Meet The Bilibot!!!!


The Bilibot Project started at MIT through the exploration of what could be done with the new Kinect sensor. Besides being a great sensor for gesture technology, the Kinect is a powerful robotic sensor - so much so that robotics laboratories at universities across the world are replacing their $5000 sensors with the $150 Kinect! The Bilibot project takes advantage of this new technological breakthrough to provide a research quality robot at a hobby robot's price.
As personal robotics grows we need to both push the boundaries of technology and train future generations of researchers. Our mission is to create a robotics platform for exploration and create a community of robotics enthusiasts.

Bilibot, from the German word 'billig', or cheap, is a sophisticated robotics platform at an affordable price. A Bilibot consists of:
* a powerful computer
* an iRobot Create
* a Kinect sensor
* mounting hardware to put it all together
* the ROS Robot Operating System, with research contributions from roboticists all over the world!

Check out the Bilibot website for offers to get this fantastic robot at a lower price...


Meet The Finch Robot !!!!

The best robot to get started with, from Carnegie Mellon University.
The Finch is a new robot for computer science education. Its design is the result of a four-year study at Carnegie Mellon's CREATE Lab. The Finch is made to integrate easily into high school and college CS courses.

The Finch was designed to allow students to write richly interactive programs.
On-board features include:
* Light, temperature, and obstacle sensors
* Accelerometers
* Motors
* Buzzer
* Full-color beak LED
* Pen mount for drawing capability
* Plugs into USB port - no batteries required
The Finch is currently programmable with Java and Python. Additional language support will be released throughout 2011.

The Finch has good online support & documentation to get you started. Apart from this, it is really cheap at around $99.

Wednesday, August 17, 2011

Meet Open Source E-Puck !!!


The e-puck is another OPEN-SOURCE educational robot platform, from EPFL, for beginners.
The e-puck is a small (7 cm) differential-wheeled mobile robot. It was originally designed for micro-engineering education by Michael Bonani and Francesco Mondada at the ASL laboratory of Prof. Roland Siegwart at EPFL (Lausanne, Switzerland). The e-puck is open hardware, its onboard software is open source, and it is built and sold by several companies.



Technical Specifications
§  Diameter: 70 mm
§  Height: 50 mm
§  Weight: 200 g
§  Max speed: 13 cm/s
§  Autonomy: 2 hours moving
§  dsPIC 30 CPU @ 30 MHz (15 MIPS)
§  8 KB RAM
§  144 KB Flash
§  2 step motors
§  8 infrared proximity and light sensors (TCRT1000)
§  color camera, 640x480
§  8 LEDs in ring + one body LED + one front LED
§  3 microphones
§  1 loudspeaker
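Since the e-puck is a differential-wheeled robot, its motion follows textbook differential-drive kinematics. The sketch below illustrates this with simple Euler integration; the wheel radius and axle length are illustrative approximations, not official e-puck constants:

```python
import math

# Approximate e-puck geometry (illustrative values, not official specs).
WHEEL_RADIUS = 0.0205   # metres
AXLE_LENGTH = 0.053     # metres, distance between the two wheels

def differential_drive_step(x, y, theta, omega_l, omega_r, dt):
    """Advance a differential-drive pose (x, y, theta) by one time step.

    omega_l / omega_r are wheel angular velocities in rad/s.
    """
    v_l = WHEEL_RADIUS * omega_l        # left wheel linear speed
    v_r = WHEEL_RADIUS * omega_r        # right wheel linear speed
    v = (v_l + v_r) / 2.0               # forward speed of the robot centre
    w = (v_r - v_l) / AXLE_LENGTH       # rotation rate of the body
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

# Drive straight for 1 s with equal wheel speeds (~13 cm/s, the spec's max).
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = differential_drive_step(x, y, th, 6.0, 6.0, 0.01)
```

With equal wheel speeds the rotation rate is zero and the robot advances about 12.3 cm in one second, consistent with the listed maximum speed.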


It has a good number of extensions, so you rarely have to look outside the box for anything.

§  A turret that simulates 1D omni-directional vision, to study optic flow,
§  Ground sensors, for instance to follow a line,
§  Color LED turret, for color-based communication,
§  ZigBee communication,
§  2D Omni-directional vision,
§  Magnetic wheels, for vertical climbing.


Its flexibility has proven outstanding to many research groups. As it is an open platform, many communities have built their research on the e-puck.

You can find more details on http://www.e-puck.org/

Friday, August 12, 2011

MEET iRobot Create

iRobot Create is from the iRobot Corporation. I think it's the only company to have an official commercial robotic product, called Roomba, plus some others for research and military use.
Here we are just going to talk about the iRobot Create, as it is available for research.
iRobot Create®
Programmable Robot
Create is an affordable mobile robot platform for educators, students and developers - use it to experiment, learn and play! Here's just the beginning of the fun you can have with Create...

Learn the fundamentals of robotics, computer science and engineering
Program behaviors, sounds and movements


Create is preassembled and ready to use right out of the box. Beginners can observe the robot's behavior in any of the 10 demonstration modes or program the robot by downloading scripts with any basic terminal program. Advanced users can write custom software using a variety of methods that take advantage of the robot's "streaming sensor data" mode for more control of the robot. And highly advanced users can write programs for completely autonomous behavior.
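Those scripts and the "streaming sensor data" mode both ride on the Create's serial Open Interface, where each command is an opcode byte followed by parameter bytes. As a hedged sketch (opcode values are the commonly documented ones; check your Open Interface manual version before driving real hardware), building a Drive command looks like this:

```python
import struct

# Opcodes commonly documented in the iRobot Create Open Interface.
START, SAFE, DRIVE = 128, 131, 137

def drive_packet(velocity_mm_s, radius_mm):
    """Build the byte sequence for an OI Drive command.

    velocity: -500..500 mm/s; radius: turning radius in mm.
    Both are packed as signed 16-bit big-endian values.
    """
    return bytes([DRIVE]) + struct.pack(">hh", velocity_mm_s, radius_mm)

# Wake the robot, enter Safe mode, then arc forward at 200 mm/s.
packets = bytes([START, SAFE]) + drive_packet(200, 500)
# In practice you would write `packets` to the Create's serial port
# (e.g. with pyserial); here we only construct the bytes.
```

The same pattern (opcode byte plus packed parameters) covers the script and sensor-streaming commands as well.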


Attach sensors, electronics, cameras, grippers and more.


Create is a highly flexible robot that can be attached to your netbook to gain great computational power. Not to forget, you also get great flexibility from its programming SDK.


The website for Create is here

Thursday, August 11, 2011

Robot Operating System (ROS)

Stanford University came up with a classic operating system called ROS. It started in 2007 and has now reached a good level of maturity. It has great support for various hardware platforms - it works on the Rovio and the NAO too. The only drawback is that you need a Linux distribution (e.g. Fedora, Ubuntu, etc.) to work with ROS.

As of 2008, development continues primarily at Willow Garage, a robotics research institute/incubator, with more than twenty institutions collaborating in a federated development model.
ROS provides standard operating system services such as hardware abstraction, low-level device control, implementation of commonly used functionality, message-passing between processes, and package management. It is based on a graph architecture where processing takes place in nodes that may receive, post, and multiplex sensor, control, state, planning, actuator, and other messages. The library is geared toward a Unix-like system (Ubuntu Linux is listed as 'supported', while other variants such as Fedora and Mac OS X are considered 'experimental').
ROS has two basic "sides": The operating system side ros as described above and ros-pkg, a suite of user contributed packages (organized into sets called stacks) that implement functionality such as simultaneous localization and mapping, planning, perception, simulation etc.
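The node-and-topic graph idea can be sketched in a few lines of plain Python. This is a concept illustration only, not the ROS API: real ROS nodes are separate processes talking over the network, and the names below are made up:

```python
class Topic:
    """A named channel: publishers post messages, subscribers get callbacks."""
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        # Fan the message out to every node listening on this topic.
        for cb in self.subscribers:
            cb(message)

# A 'sensor node' publishes range readings; a 'planner node' reacts to them.
scan = Topic("/scan")
readings = []
scan.subscribe(lambda msg: readings.append(msg))
scan.publish({"range_m": 0.42})
```

In real ROS the same decoupling means you can swap a hardware driver for a simulator, or add a logging node, without touching the other nodes.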
ROS is released under the terms of the BSD license, and is open source software. It is free for commercial and research use. The ros-pkg contributed packages are licensed under a variety of open source licenses.
For more details check out Wikipedia page or visit ROS website.

MEET NAO ROBOT


NAO is a programmable 57 cm tall humanoid robot with the following key components:
  A body with 25 degrees of freedom (DOF), whose key elements are electric motors and actuators.
  A series of sensors - 2 cameras, 4 microphones, a Sonar distance sensor, 2 IR emitters and receivers, 1 inertial board, 9 tactile sensors, 8 pressure sensors.
  Various devices to express itself - voice synthesizer, LED lights, 2 high quality speakers.
  A CPU (located in the head) which runs a Linux kernel and supports ALDEBARAN’s own proprietary middleware (NAOqi).
  A second CPU (located in the torso).

Motion: Omni-directional walk

NAO's walk uses a simple dynamic model (the linear inverted pendulum) and is solved using quadratic programming. NAO's walk is stabilized using feedback from its joint sensors. This makes the walk robust against small disturbances and absorbs torso oscillations in the frontal and lateral planes. NAO is able to walk on multiple floor surfaces such as carpet, tiles, and wooden floors, and can transition between these surfaces while walking.
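The linear inverted pendulum model behind such walks keeps the centre of mass (CoM) at a constant height z and treats its horizontal dynamics as x'' = (g/z)(x - p), where p is the support point. A minimal simulation sketch (the CoM height is an assumed value, not NAO's actual parameter):

```python
G = 9.81      # gravity, m/s^2
Z_COM = 0.26  # assumed constant CoM height for a small humanoid, metres

def lipm_step(x, v, p, dt):
    """One Euler step of the linear inverted pendulum:
    x_ddot = (g / z) * (x - p), with p the support point position."""
    a = (G / Z_COM) * (x - p)
    v += a * dt
    x += v * dt
    return x, v

# A CoM starting slightly ahead of the support point accelerates away
# from it -- exactly the instability the walk controller must manage
# by re-placing the feet (i.e. moving p) at each step.
x, v = 0.01, 0.0
for _ in range(100):
    x, v = lipm_step(x, v, 0.0, 0.005)  # 0.5 s with a fixed support point
```

The quadratic-programming part of the real controller chooses future support points so this divergence is cancelled over the planned footsteps.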
Whole Body Motion
NAO's motion module is based on generalized inverse kinematics, which handles Cartesian and joint control, balance, redundancy, and task priority. This means that when you ask NAO to reach out its arm, the robot bends over, because both arm and leg joints are taken into account. NAO will stop its movement in order to keep its balance.
Fall Manager
The Fall Manager protects the robot when it falls. The main functionality of the fall manager is to detect when the Center of Mass (CoM) of the robot goes out of the support polygon. The support polygon is defined by the position of the feet in contact with the ground. When a fall is detected, all the motion tasks are killed and depending on the direction, the arms move to a protective position, the center of mass is lowered and the stiffness of the robot is reduced to zero.
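The core test the paragraph describes - is the CoM projection inside the support polygon? - is a standard point-in-polygon check. A minimal sketch with a toy rectangular support region (the foot geometry is illustrative, not NAO's):

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting test: is (px, py) inside the polygon given as a
    list of (x, y) vertices in order?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge straddle the horizontal ray from (px, py)?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# Toy support polygon approximating two feet on the ground (metres).
support = [(-0.05, -0.10), (0.05, -0.10), (0.05, 0.10), (-0.05, 0.10)]

def fall_detected(com_xy):
    """Trigger the fall reflex when the CoM projection leaves support."""
    return not point_in_polygon(com_xy[0], com_xy[1], support)
```

When `fall_detected` fires, the real Fall Manager would then kill motion tasks, position the arms, lower the CoM, and zero the stiffness as described above.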

NAO has two cameras and can track, learn and recognize images and faces.
NAO sees by means of two CMOS 640 x 480 cameras, which can capture up to 30 images per second.
The first is on the forehead, aimed at NAO’s horizon, while the second camera is placed at mouth level to scan the immediate environment.
The software lets you retrieve the photos and video streams that NAO sees. Yet what use are eyes, unless you can also perceive and interpret your surroundings?
That’s why NAO contains a set of algorithms to detect and recognize faces and shapes, so he can recognize the person talking to him, find a ball, and ultimately much more complex objects.
Furthermore, NAO’s SDK lets you develop your own modules interfaced with OpenCV (the Open Source Computer Vision library initially developed by Intel).
These algorithms have been specially developed, with constant care taken to use minimal processor resources.
Since you can execute modules on NAO or transfer them to another PC connected to NAO, you can easily use the OpenCV display functions to develop and test your own algorithms with image feedback.
Audio
NAO has four microphones and can track sounds; it also offers voice recognition and text-to-speech functionality in seven languages.
Sound Source Localization
One of the main purposes of a humanoid robot is to have it interact with people. Sound localization allows for identification of the direction of a sound. To produce robust and useful outputs while meeting CPU and memory requirements, the sound source localization of NAO is based on an approach known as “Time Difference of Arrival”.
The sound wave emitted by a source close to NAO is received at slightly different times by each of NAO’s four microphones.
For example, if someone talks to the robot on its left side, the corresponding signal will first hit the left microphones, a few milliseconds later the front and rear ones, and finally be sensed by the right microphone.
These differences, known as ITD standing for “interaural time differences”, can then be mathematically related to the current location of the emitting source.
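For a single microphone pair, the relationship is simple far-field geometry: delta_t = (d / c) * sin(azimuth). A hedged sketch of inverting it (the microphone spacing is an assumed value, not NAO's actual geometry, which uses all four microphones jointly):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature
MIC_SPACING = 0.10      # assumed distance between two microphones, metres

def azimuth_from_itd(delta_t):
    """Estimate the source azimuth (radians) from the interaural time
    difference between two microphones, assuming a far-field source:
        delta_t = (d / c) * sin(azimuth)
    """
    s = SPEED_OF_SOUND * delta_t / MIC_SPACING
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.asin(s)

# A 0.2 ms lead on one microphone puts the source roughly 43 degrees
# off-axis toward that side.
angle_deg = math.degrees(azimuth_from_itd(0.0002))
```

With four microphones, NAO can solve several such pairwise equations at once and recover both azimuthal and elevation angles.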
Two boxes in Choregraphe are also available that allow easy use of this feature inside a behavior.
Possible applications include:
* Human detection, tracking, and recognition
* Noisy object detection, tracking, and recognition
* Speech recognition in a specific direction
* Speaker recognition in a specific direction
* Remote monitoring / security applications
* Entertainment applications


Audio Signal Processing

In robotics, due to the limited computational power available on embedded processors, it can be useful to offload some calculations to a remote desktop or server.
This is especially true for audio signal processing; for example, speech recognition can often be done more efficiently - faster and more accurately - on a remote processor, which is why most modern smartphones process voice recognition remotely.
Users may also wish to apply their own signal processing algorithm directly on the robot.
The NAOqi framework uses SOAP (Simple Object Access Protocol) to send and receive audio signals over the web.
Sound is produced and recorded on NAO using the ALSA (Advanced Linux Sound Architecture) library.
The ALAudioDevice module manages the audio inputs and outputs.
Using NAO’s audio capacities, a wide range of experiences and research can be done in the fields of Human-Robot Interaction and communication.
For instance, NAO may be used as a communication device and the user could interact with NAO (talk and hear) as if he were talking to another human being.
Signal processing is of course an interesting example. Thanks to the audio module, you can get the raw audio data from the microphones in real time and then process it with your own code.
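As a taste of what "your own code" on a raw audio buffer might look like, here is a minimal energy-based voice-activity gate. This is a generic sketch, not the NAOqi API: the sample values and threshold are made up, and on the robot the buffer would come from the audio module rather than a hard-coded list:

```python
import math

def rms_energy(samples):
    """Root-mean-square energy of one buffer of 16-bit audio samples."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_speech(samples, threshold=500.0):
    """Crude voice-activity decision: loud buffers count as speech."""
    return rms_energy(samples) > threshold

# Fake buffers standing in for real microphone data.
quiet = [10, -12, 8, -9]
loud = [4000, -3900, 4100, -4050]
```

A real pipeline would of course use something better than a fixed threshold, but the structure - pull a buffer, compute a feature, decide - is the same.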

Connectivity
Ethernet and Wifi
NAO currently supports both Wi-Fi (802.11 a, b, and g standards) and Ethernet, the most widespread standards for network connection. In addition, infrared transmitter-receivers in the eyes allow connection to objects in the environment. NAO is compatible with the IEEE 802.11g Wi-Fi standard and can be used on both WPA and WEP networks, so it can easily be connected to most home and office networks. Both Ethernet and Wi-Fi connections are supported natively by NAO's OS and don't require any setup other than entering your password for Wi-Fi.
NAO's ability to connect to a network opens up a great scope of possibilities. You can connect to NAO from any computer on the network to pilot and program it.
Here are a few examples of applications that have already been created by NAO users:
  Based on his IP address, NAO can work out his location, and give you a personalized weather report.
  Ask NAO to find out more about a topic, and he'll connect to Wikipedia and read you the relevant entry.
  Connect NAO to the appropriate audio stream, and he'll play an Internet radio station for you.
Using XMPP technology (as in the Google chat system), NAO can be controlled remotely and can stream video from its cameras back to the user.
Infrared
Using infrared signals, NAO can communicate with other NAOs, and with any other devices that support infrared. NAO can be configured to send infrared signals to other devices in order to control them (“NAO, please turn on the TV!”). In addition, NAO can also receive instructions from infrared emitters such as remote controls. And of course, two NAOs can communicate with each other directly.
Infrared is already the most common way to control appliances, making NAO easily adaptable to home-automation (domotics) applications. NAO can also detect whether an infrared signal it receives comes from its left or its right side.
Returning to sound source localization: by solving the ITD equations every time a noise is heard, the robot is able to retrieve the direction of the emitting source (azimuthal and elevation angles) from the ITDs measured on the four microphones.
This feature is available as a NAOqi module named "ALAudioSourceLocalization", which provides a C++ and Python API allowing precise interaction from a Python script or a NAOqi module.
OPEN SOURCE
With over five years of experience in developing embedded systems for robotics platforms Aldebaran Robotics will share the cross-platform build tools, the core communication library and other essential modules with researchers, developers and emerging projects in humanoid robotics.
By capitalizing on Aldebaran Robotics' extensive experience, users can concentrate their effort on creating innovative and exciting applications.
In addition users will benefit from the strong innovation that characterizes the growing NAO community.
Robotics and its associated applications are still emerging research fields.
To explore future applications together, ongoing exchange within our user community is essential.



MEET WowWee Rovio



Meet the WowWee Rovio robot. It is used extensively for research at universities in the USA and Canada. It's a wonderful platform with omni-directional movement.



With Rovio™, you will always be just a click away from the people and places that are important to you.
Rovio is the wireless Internet equipped mobile webcam that enables you to view and interact with its environment through streaming video and audio, wherever you are!
Easily control Rovio remotely 24/7 from anywhere in the world! Use any web-enabled device: PC or Mac, cell phone, smartphone, PDA or even your video game console.
The TrueTrack™ Navigation System allows you to use the Rovio interface to store waypoints - with one click Rovio will automatically navigate itself to the chosen point. Rovio's built-in LED headlight will help you guide it even in dimly lit locations, so you'll always know what is going on at home or at the office. No need to worry about Rovio running low on power while you're away - the self-docking function allows you to send Rovio back to the charging dock to recharge, with the click of a button on your browser. When it's done charging, Rovio is ready to take its next tour.
Rovio - now you can finally be in two places at once!

Features:
  • Rovio detects your computer settings and guides you through the setup process.
  • Control Rovio remotely via a web browser from anywhere with an Internet connection.
  • Its head-mounted, moveable camera and wide range of vision enable you to see and hear exactly what Rovio sees and hears, on your screen, anywhere in the world!
  • Set waypoints so that Rovio can navigate itself around your home, without having to control each step yourself!*
  • At the click of a button, send Rovio back to the charging dock using its self-docking capabilities - even when you are not at home!*
  • With Wi-Fi connectivity, use almost any web-accessible device -from your cell phone to your PC - to easily control Rovio.
  • Guide Rovio through dimly lit locations with the aid of its built-in LED headlight.
  • Rechargeable NiMH battery included
  • 1 x Charging dock with built-in TrueTrack Beacon
  • 3 x Omni-directional wheels
  • 1 x Head-mounted VGA camera
  • LED illumination
  • 1 x Speaker and 1 x microphone for 2-way audio
  • USB connectivity
  • Wi-Fi connectivity (802.11b and 802.11g)
TECHNICAL SPECIFICATIONS
Dimensions:
  • Product Height: 13.5 inches (34.29 cm)
  • Product Width: 12 inches (30.48 cm)
  • Product Depth: 14 inches (35.56 cm)
  • Product Weight: 5 lbs (2.3 kg)
WowWee has wonderful online support, plus a full-length video tutorial series to get your hands on Rovio. To buy it in your region, you have to find WowWee dealers (listed on their website).

Hello!!!!

Hello Friends !!!
I know this is not the first blog on the platform of mobile robotics. On this blog I would like to share the world of the most advanced MOBILE ROBOTS. There are plenty of companies behind the scenes working on this platform.
As I am a student, I would also like to update you on my research work on the same platform. Here you will find news on robotics, links to tutorials, and updates on my work.
Stay tuned for more updates....