All the better to see you with

Think about the speed at which you can distinguish things you see. Virtually instantly you can detect whether you are looking at a cat, a plant or a vehicle. For this to happen, the signal received through your eyes must travel all the way to the back of your brain. Further neural pathways then enable you to recognise the signal and describe what you see. For us, recognition takes less than 100 milliseconds.

The neural pathways involved in sight. Source.

Researchers around the world are attempting to develop a computer model that can perform the same tasks as the human visual system. Simon Thorpe is one of the scientists examining the biological processes a computer would need to replicate.

In a recent seminar Thorpe highlighted that although progress in the field has been substantial, we are still some way off a visual system that would be viable in a robot. First, a computer needs to match the physical “power” of the brain’s visual system. Of the 86 billion neurons in the human brain, four billion are used in the visual system. These neurons transmit signals at one to two meters per second and fire at rates of up to one kilohertz, while the whole brain runs on roughly 20 watts. Thorpe said that modern computers are more than capable of matching this speed of processing. The performance of the computers he uses is measured in teraFLOPS, which is a lot of power (a FLOP is a floating-point operation, so FLOPS essentially means calculations per second).
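
To put those figures side by side, here is a rough back-of-envelope comparison in Python. The numbers are the ones quoted above, and treating one spike as one “operation” is a loose analogy rather than a real equivalence:

```python
# Rough, illustrative numbers only, taken from the figures quoted above.
visual_neurons = 4e9        # neurons devoted to vision
max_firing_rate = 1e3       # ~1 kHz peak firing rate per neuron
brain_power_watts = 20      # approximate power budget of the whole brain

# Loosely treating one spike as one "operation":
brain_events_per_second = visual_neurons * max_firing_rate   # 4e12
computer_flops = 1e12                                        # a 1-teraFLOPS machine

print(f"Visual system: ~{brain_events_per_second:.0e} spike-events/s on {brain_power_watts} W")
print(f"Computer:       {computer_flops:.0e} FLOP/s, typically on hundreds of watts")
print(f"Brain/computer event-rate ratio: {brain_events_per_second / computer_flops:.1f}x")
```

On this crude measure a single teraFLOPS machine is already in the brain’s ballpark for raw throughput, though at a far higher energy cost.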

With the required computing power achieved, attention is now turning to how to “teach” a computer to distinguish objects in a picture. The traditional method is to train the visual processor via back-propagation, showing it millions of images over the course of a year at rates of up to 100 images every 100 milliseconds. For comparison, a human can easily recognise a picture that is displayed for only 25 ms. The main problem with back-propagation is that it does not simulate the learning process of a human.
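
For readers curious what back-propagation actually does, here is a minimal sketch in Python: a tiny two-layer network trained on toy data rather than images. Real systems differ mainly in scale (many layers and millions of images), but the error-propagation step is the same idea:

```python
# A minimal sketch of supervised training via back-propagation on a toy
# two-class problem. All sizes and data here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 100 samples of 4 features each, labelled 0 or 1.
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer of 8 units.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(1000):
    # Forward pass: compute the network's predictions.
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)

    # Backward pass: propagate the prediction error back through each layer.
    grad_out = (p - y) / len(X)              # error at the output
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * h * (1 - h)   # chain rule through the hidden layer
    grad_W1 = X.T @ grad_h

    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

accuracy = ((p > 0.5).astype(float) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The network is only told how wrong its answer was, never how a human would have worked it out, which is exactly the criticism Thorpe raises.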

After a year of training via back-propagation, a computer was able to recognise jellyfish, bears, leopards, polyps and monkeys with nearly 100% accuracy. There is a competition, called ImageNet, run every few years to test the accuracy of object recognition by computers; the list of objects the computers are meant to be able to recognise can be found here.

The winners of this competition in 2012 started a company, DNNResearch, which has since been bought by Google. Within six months they had added software that makes it possible to search your own images for particular objects, whether that be a particular flower, animal or vehicle.

Groups around the world are now trying to achieve object vision by methods other than back-propagation¹²³. By more closely mimicking the way that neurons in the retina fire, cameras are being created that can distinguish moving objects based on contrast and orientation. The example Thorpe showed in the seminar was a highway: when the camera was pointed at the road for an extended period, it began to learn what was a car and what was not. Thorpe has found that once only 1% of retinal neurons have been stimulated it becomes possible to recognise most objects, and the newer approaches to computer object vision match this efficiency.
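
The unsupervised rule behind several of these newer approaches is spike-timing-dependent plasticity (STDP), explored in references 1 and 2 below. The following toy Python sketch shows the core idea in a simplified pair-based form; it is an illustration of the principle, not Thorpe’s actual model, and the constants are assumed values:

```python
# Toy pair-based STDP: connections that predict the output spike get
# stronger; connections that fire too late get weaker.
import numpy as np

A_plus, A_minus = 0.05, 0.055   # potentiation / depression amplitudes (assumed)
tau = 20.0                      # plasticity time constant in ms (assumed)

def stdp_update(w, t_pre, t_post):
    """Update weight w from one pre- and one post-synaptic spike time (ms)."""
    dt = t_post - t_pre
    if dt > 0:   # input fired before output: strengthen the connection
        w += A_plus * np.exp(-dt / tau)
    else:        # output fired first: weaken it
        w -= A_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, 1.0))

# A synapse whose input reliably fires 5 ms before the neuron's output spike
# is driven towards its maximum weight; a non-causal one is driven to zero.
w_causal, w_noise = 0.5, 0.5
for _ in range(50):
    w_causal = stdp_update(w_causal, t_pre=0.0, t_post=5.0)
    w_noise = stdp_update(w_noise, t_pre=5.0, t_post=0.0)

print(f"causal synapse: {w_causal:.2f}, non-causal synapse: {w_noise:.2f}")
```

Because the rule only ever looks at the relative timing of spikes, no labelled training set is needed; repeated exposure to a scene, like the highway example, is enough for structure to emerge.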

There are numerous potential applications for this technology. It could be used in manufacturing and industry to monitor the production of goods, or in navigation systems for driverless cars, trains and other modes of transport. It could have applications in medicine, aid people with damaged eyes, and perhaps improve the function of the bionic eye. In the immediate future it seems it will be applied in some of Google’s latest products, such as Google Glass, which may eventually be able to identify objects for you through the glasses⁴⁵.

The final part of the seminar referred to the development of computer consciousness, or artificial intelligence. In Thorpe’s opinion we will be able to develop a machine that can process and analyse sensory stimuli. However, the step to independent thought is still far away, if it is possible at all, and would require huge advances in the field.

But for the time being at least, you may take pride in your ability to see better than a computer.

References

1. Masquelier, T. & Thorpe, S. J. 2007. Unsupervised Learning of Visual Features through Spike Timing Dependent Plasticity. PLoS Comput Biol, 3, e31.

2. Masquelier, T. & Thorpe, S. J. 2010. Learning to recognize objects using waves of spikes and Spike Timing-Dependent Plasticity. The 2010 International Joint Conference on Neural Networks (IJCNN), 18-23 July 2010, 1-8.

3. VanRullen, R., Delorme, A. & Thorpe, S. 2001. Feed-forward contour integration in primary visual cortex based on asynchronous spike propagation. Neurocomputing, 38–40, 1003-1009.

4. Mishkin, M., Ungerleider, L. G. & Macko, K. A. 1983. Object vision and spatial vision: two cortical pathways. Trends in Neurosciences, 6, 414-417.

5. Applegate, R. A., Thibos, L. N. & Hilmantel, G. 2001. Optics of aberroscopy and super vision. Journal of Cataract & Refractive Surgery, 27, 1093-1107.

Featured image sourced from here.
