Human-Like Vision Helps Robots Maneuver

With a new robotic vision system, robots will become even more human-like...

Traversing cluttered or unpredictable terrain is one of those seemingly trivial tasks that prove extremely difficult for robots.

But a new robotic vision system based on key functions of the human brain promises to let robots maneuver quickly and safely through cluttered environments. Researchers envision that this technology will lead to a robot developed specifically to help guide the visually impaired.

Analyzing shifting visual data to assess one's surroundings and separate any other movement from one's own has turned out to be an intensely challenging problem for artificial intelligence researchers. But three years ago, researchers at the EU-funded research consortium Decisions in Motion set out to look for insights into this problem.

Human-Like Vision for Robots

With a team comprising both neuroscientists and cognitive scientists, they began to study how the visual systems of advanced mammals, primates, and people work, then incorporated their findings into neural networks and mobile robots.

The researchers used a wide variety of techniques to learn more about how the brain processes visual information. These included recording individual neurons and groups of neurons responding to movement; using functional magnetic resonance imaging to track the moment-by-moment interactions between different brain areas as people performed visual tasks; and analyzing neuropsychological studies of people with visual processing problems.

By learning more about how our visual system scans the environment, detects objects, discerns movement, distinguishes objects' independent movement from our own, and coordinates motion towards a goal, the team made some breakthrough findings.

One of the study's most interesting discoveries is that the primate brain does not just detect and track a moving object; it actually predicts where the object will go. Project coordinator Mark Greenlee explains:

“When an object moves through a scene, you get a wave of activity as the brain anticipates its trajectory…

“It’s like feedback signals flowing from the higher areas in the visual cortex back to neurons in the primary visual cortex to give them a sense of what’s coming.”
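
The anticipation Greenlee describes can be pictured with a toy example. The sketch below (a simple constant-velocity extrapolation, not the consortium's actual model; the function name is illustrative) predicts where a tracked object will be next from its recent positions:

```python
import numpy as np

def anticipate(track, steps_ahead=1):
    """Toy stand-in for the brain's anticipatory feedback: extrapolate
    an object's next position from its recent trajectory, assuming
    roughly constant velocity between frames."""
    track = np.asarray(track, dtype=float)
    velocity = track[-1] - track[-2]           # displacement per frame
    return track[-1] + steps_ahead * velocity  # predicted future position

# An object drifting right and slightly down across the image:
print(anticipate([(10, 40), (14, 39), (18, 38)]))  # -> [22. 37.]
```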

These results have enabled Decisions in Motion to build and demonstrate a robot that can cross a crowded room guided by its own ‘sight’, provided by twin video cameras.

The robot's brain is a three-level neural network that mimics the human brain's primary, mid-level, and higher-level visual subsystems.

“It’s basically a neural network with certain biological characteristics….

“The connectivity is dictated by the numbers we have from our physiological studies,” said Greenlee.
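
The article does not spell out the network's internals, but the overall shape of such a three-level pipeline can be sketched as follows. Every operation here is a deliberately crude placeholder (frame differencing, average pooling, a left/right comparison) standing in for the physiologically tuned connectivity Greenlee describes:

```python
import numpy as np

class ThreeLevelVisualNet:
    """Illustrative three-stage pipeline loosely mirroring primary,
    mid-level, and higher-level visual processing. All stages are
    placeholders, not the project's measured connectivity."""

    def primary(self, prev_frame, frame):
        # Level 1: local motion signal via simple frame differencing.
        return frame.astype(float) - prev_frame.astype(float)

    def midlevel(self, local_motion):
        # Level 2: pool local responses into coarse regional motion
        # (assumes frame dimensions divisible by 4).
        h, w = local_motion.shape
        return np.abs(local_motion).reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

    def higher(self, regional):
        # Level 3: a global judgement, e.g. which half of the visual
        # field carries more motion.
        left, right = np.hsplit(regional, 2)
        return "motion_right" if right.sum() > left.sum() else "motion_left"

    def forward(self, prev_frame, frame):
        return self.higher(self.midlevel(self.primary(prev_frame, frame)))

# Example: something appears in the right half between two frames.
a, b = np.zeros((16, 16)), np.zeros((16, 16))
b[8, 12] = 1.0
print(ThreeLevelVisualNet().forward(a, b))  # -> motion_right
```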

The computerized brain controls the behavior of a wheeled platform that supports a moveable head and eyes.

It tells the head and eyes where to look, tracks its own movement, identifies objects, determines if they are moving independently, and directs the platform to speed up, slow down and turn left or right.
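
A single cycle of that perceive-and-act loop might be sketched like this (the interface is hypothetical; the article does not describe the real controller's API). Given a 2-D field of local motion strength from the vision system, the step picks a gaze target, scales speed down as clutter increases, and turns away from the busier half of the view:

```python
import numpy as np

def control_step(motion, cruise=1.0):
    """One hypothetical perceive-and-act cycle for the wheeled platform.
    `motion` is a 2-D array of local motion strength from the vision system."""
    # Tell the head/eyes where to look: fixate the most active region.
    gaze_target = np.unravel_index(np.abs(motion).argmax(), motion.shape)
    # Slow down as overall motion (clutter) in the scene increases.
    speed = cruise / (1.0 + np.abs(motion).mean())
    # Turn away from the more cluttered half of the visual field.
    left, right = np.hsplit(np.abs(motion), 2)
    turn = "left" if right.sum() > left.sum() else "right"
    return gaze_target, speed, turn

# Example: a synthetic motion field with activity on the right side.
field = np.zeros((8, 8))
field[3, 6] = 1.0
print(control_step(field))  # e.g. gaze at (3, 6), reduced speed, turn 'left'
```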

The consortium is now hard at work developing a head-mounted system to help visually impaired people get around.

Greenlee says:

“Until now, the algorithms that have been used are quite slow and their decisions are not reliable enough to be useful…

“Our approach allowed us to build algorithms that can do this on the fly, that can make all these decisions within a few milliseconds using conventional hardware.”

Greenlee compares what an individual visual neuron sees to looking at the world through a peephole. Researchers have known for a long time that high-level processing is needed to build a coherent picture out of a myriad of those tiny glimpses. What’s new is the importance of strong anticipatory feedback for perceiving and processing motion.

“This proved to be quite critical for the Decisions in Motion project…

“It solves what is called the ‘aperture problem’, the problem of the neurons in the primary visual cortex looking through those little peepholes.”
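
The aperture problem itself is easy to demonstrate: through a small aperture, only the component of motion perpendicular to an edge is measurable, so two quite different true motions can look identical to a single low-level neuron. A few lines make the ambiguity concrete:

```python
import numpy as np

# Through a "peephole" onto a vertical edge, only the motion component
# along the edge normal is visible. These two different true motions
# therefore produce the same local measurement.
edge_normal = np.array([1.0, 0.0])  # vertical edge; normal points right
motion_a = np.array([1.0, 0.0])     # moving right
motion_b = np.array([1.0, 2.0])     # moving right and up
print(motion_a @ edge_normal, motion_b @ edge_normal)  # both -> 1.0
```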

Greenlee and his colleagues were intrigued when the robot found its way to its first target – a teddy bear – just like a person would, speeding by objects that were at a safe distance, but passing nearby obstacles at a slower pace.

“That was very exciting…

“We didn’t program it in – it popped out of the algorithm.”
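
For comparison only, a hand-written rule producing the same pattern (full speed past distant objects, a crawl past nearby ones) might look like the toy policy below. The point of the result, of course, is that the robot's network arrived at this behavior without any such explicit rule:

```python
def speed_near_obstacle(distance_m, cruise=1.2, floor=0.2, safe=2.0):
    """Toy hand-coded policy: full speed past objects at a safe
    distance, slowing to a crawl past nearby obstacles. The robot's
    behavior emerged from its network, not from a rule like this."""
    if distance_m >= safe:
        return cruise
    return max(floor, cruise * distance_m / safe)

for d in (3.0, 1.0, 0.3):
    print(f"{d} m -> {speed_near_obstacle(d):.2f} m/s")
```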

In addition to improved guidance systems for robots, the consortium envisions a lightweight system that could be worn like eyeglasses by visually or cognitively impaired people to boost their mobility. One of the consortium partners, Cambridge Research Systems, is developing a commercial version of this, called VisGuide.

