New Intelligent Cameras Learn, Process What They Are Viewing


In a development likely to have ‘Terminator’ fans spitting out their morning coffee, researchers in robotics and artificial intelligence (AI) are developing camera systems that learn what they are viewing.

This enables the systems to block out unnecessary data in real time and, in doing so, greatly speeds up the processing of visual information.

Part of a collaboration between the University of Manchester and the University of Bristol in the United Kingdom, the in-development technology has great potential for autonomous vehicle image processing and, well, Skynet.

New intelligent cameras

Current autonomous travel technologies rely heavily on a combination of digital cameras and graphics processing units (GPUs) originally designed to render video game graphics.

The problem is that these systems typically transmit a great deal of unnecessary visual information between sensors and processors. An autonomous vehicle, for example, might process detailed imagery of trees along the roadside, information with little bearing on its driving decisions. This extra data consumes power and adds processing time.

With this in mind, the team from the UK set out to develop a different approach for more efficient visual intelligence in machines. Two separate papers from the collaboration have shown how sensing and machine learning can be combined to create new types of cameras for AI systems.

Interestingly, their system borrows from a highly capable visual processor prevalent in the natural world: the human eye.

“We can borrow inspiration from the way natural systems process the visual world – we do not perceive everything – our eyes and our brains work together to make sense of the world and in some cases, the eyes themselves do processing to help the brain reduce what is not relevant,” Walterio Mayol-Cuevas, Professor in Robotics, Computer Vision and Mobile Systems at the University of Bristol, explained in a press release.

Visual understanding in AI systems

Two separate papers, one led by Dr. Laurie Bose and the other by Yanan Liu at Bristol, detail the implementation of Convolutional Neural Networks (CNNs) — a form of AI algorithm that enables visual understanding — over the image plane.

The CNNs can classify frames thousands of times per second, without ever having to record those frames or send them down the processing pipeline.
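
To make the sensor-level filtering concrete, here is a minimal, purely illustrative Python/NumPy sketch of the idea: a tiny single-filter "CNN" scores each frame, and only frames judged relevant ever leave the sensor. The names (tiny_frame_classifier, sensor_loop) and the one-layer network are assumptions for illustration only, not the SCAMP implementation described in the papers.

```python
import numpy as np

def tiny_frame_classifier(frame, kernel, weight, bias):
    """Toy stand-in for an on-sensor CNN: one 3x3 convolution,
    ReLU, global average pooling, then a linear read-out."""
    h, w = frame.shape
    conv = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            conv[i, j] = np.sum(frame[i:i + 3, j:j + 3] * kernel)
    feature = np.maximum(conv, 0.0).mean()   # ReLU + global average pool
    return feature * weight + bias > 0       # "is this frame interesting?"

def sensor_loop(frames, kernel, weight, bias):
    """Only frames flagged as relevant are forwarded; everything else
    is discarded without being recorded or transmitted."""
    for frame in frames:
        if tiny_frame_classifier(frame, kernel, weight, bias):
            yield frame

# Example: random frames scored with a simple edge-detecting kernel.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.random((32, 32)) for _ in range(10)]
    kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    kept = list(sensor_loop(frames, kernel, weight=1.0, bias=-0.5))
    print(f"forwarded {len(kept)} of {len(frames)} frames")
```

In this toy model the downstream pipeline only ever sees the frames yielded by sensor_loop; everything else is dropped at the point of capture, which is where the bandwidth and processing savings come from.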

The work uses the SCAMP architecture developed by Piotr Dudek, Professor of Circuits and Systems and principal investigator (PI) at the University of Manchester, and his team. SCAMP is a camera-processor chip that embeds an interconnected processor in every pixel.
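
The programming model this enables can be pictured as a SIMD array in which every pixel executes the same instruction at the same time. The rough NumPy simulation below illustrates that idea under assumed names (PixelArraySim, edge_map); real SCAMP devices use mixed-signal per-pixel processors, which this sketch makes no attempt to model.

```python
import numpy as np

class PixelArraySim:
    """Rough simulation of a pixel-parallel sensor-processor: every pixel
    holds a register, and one instruction updates all pixels at once."""

    def __init__(self, image):
        self.reg = image.astype(float)        # per-pixel register plane

    def shifted(self, dy, dx):
        # each pixel reads a neighbour's register value, in parallel
        return np.roll(self.reg, shift=(dy, dx), axis=(0, 1))

    def edge_map(self, threshold=0.1):
        # all pixels compute a local gradient in lock-step, so only a
        # compact binary edge map needs to leave the chip
        gx = self.reg - self.shifted(0, 1)
        gy = self.reg - self.shifted(1, 0)
        return (np.abs(gx) + np.abs(gy)) > threshold

# Example usage with a synthetic image containing a bright square.
if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[20:40, 20:40] = 1.0
    edges = PixelArraySim(img).edge_map()
    print(f"{edges.sum()} edge pixels detected on-chip")
```

Because each pixel computes its share of the result locally, only a compact binary map has to cross the chip boundary rather than the full grey-level image.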

Professor Dudek said: “Integration of sensing, processing, and memory at the pixel level is not only enabling high-performance, low-latency systems, but also promises low-power, highly efficient hardware.

“SCAMP devices can be implemented with footprints similar to current camera sensors, but with the ability to have a general-purpose massively parallel processor right at the point of image capture.”

The research points towards a future of intelligent AI cameras that can filter data before it is sent for processing. The approach promises to greatly improve the efficiency of autonomous systems on cars, trucks, and even drones, as well as a host of other unforeseen developments — hopefully not of the apocalyptic variety.




