
A neuromorphic system for video object recognition

Overview of attention for article published in Frontiers in Computational Neuroscience, November 2014

Mentioned by

Facebook: 1 page
Reddit: 1 Redditor

Citations

Dimensions: 10

Readers on

Mendeley: 29
Title
A neuromorphic system for video object recognition
Published in
Frontiers in Computational Neuroscience, November 2014
DOI 10.3389/fncom.2014.00147
Authors

Deepak Khosla, Yang Chen, Kyungnam Kim

Abstract

Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate and low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by computational neuroscience models of feed-forward object detection and classification pipelines for processing visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway and integrates retinal processing, object detection based on form and motion modeling, and object classification based on convolutional neural networks. The object recognition performance and energy use of the NEOVUS were evaluated by the Defense Advanced Research Projects Agency (DARPA) under the Neovision2 program using three urban-area video datasets collected from a mix of stationary and moving platforms. These datasets are challenging and include a large number of objects of different types in cluttered scenes, with varying illumination and occlusion conditions. In a systematic evaluation of five different teams by DARPA on these datasets, the NEOVUS demonstrated the best performance, with high object recognition accuracy and the lowest energy consumption. Its energy use was three orders of magnitude lower than that of two independent state-of-the-art baseline computer vision systems. The dynamic power requirement for the complete system, mapped to commercial off-the-shelf (COTS) hardware that includes a 5.6 Megapixel color camera processed by object detection and classification algorithms at 30 frames per second, was measured at 21.7 Watts (W), for an effective energy consumption of 5.45 nanoJoules (nJ) per bit of incoming video.
These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition, enabling practical low-power and mobile video processing applications.
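The reported energy-per-bit figure can be sanity-checked from the abstract's own numbers. A minimal back-of-the-envelope sketch, assuming 24-bit color pixels (the exact bit depth is not stated in this abstract):

```python
# Rough check of the reported 5.45 nJ/bit figure from the stated
# power, camera resolution, and frame rate. The 24-bit (8-bit RGB)
# pixel depth is an assumption, not stated in the abstract.
pixels_per_frame = 5.6e6   # 5.6 Megapixel color camera
fps = 30                   # frames per second
bits_per_pixel = 24        # assumed 8-bit RGB
power_w = 21.7             # measured dynamic power in Watts

bits_per_second = pixels_per_frame * fps * bits_per_pixel
energy_per_bit_nj = power_w / bits_per_second * 1e9
print(f"{energy_per_bit_nj:.2f} nJ/bit")  # ≈ 5.38 nJ/bit
```

Under this assumption the estimate comes out at roughly 5.4 nJ/bit, consistent with the reported 5.45 nJ/bit; the small gap likely reflects the camera's exact pixel count or bit depth.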

Mendeley readers

The data shown below were compiled from readership statistics for the 29 Mendeley readers of this research output.

Geographical breakdown

Country           Count   As %
United Kingdom        1     3%
Unknown              28    97%

Demographic breakdown

Readers by professional status         Count   As %
Student > Ph. D. Student                   8    28%
Researcher                                 3    10%
Student > Bachelor                         2     7%
Student > Master                           2     7%
Student > Doctoral Student                 1     3%
Other                                      4    14%
Unknown                                    9    31%

Readers by discipline                  Count   As %
Computer Science                           7    24%
Engineering                                6    21%
Agricultural and Biological Sciences       1     3%
Mathematics                                1     3%
Social Sciences                            1     3%
Other                                      1     3%
Unknown                                   12    41%
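The "As %" columns above are each count taken as a share of the 29 total readers, rounded to the nearest whole percent. A small sketch of that calculation, using the professional-status counts from the table:

```python
# Recompute the "As %" column from the raw Mendeley reader counts
# (29 readers total), rounding to the nearest whole percent as the
# tables above do.
total_readers = 29
status_counts = {
    "Student > Ph. D. Student": 8,
    "Researcher": 3,
    "Student > Bachelor": 2,
    "Student > Master": 2,
    "Student > Doctoral Student": 1,
    "Other": 4,
    "Unknown": 9,
}
percentages = {k: round(v / total_readers * 100) for k, v in status_counts.items()}
print(percentages["Student > Ph. D. Student"])  # 28
```

Note that rounded percentages need not sum to exactly 100, which is why the columns above total 100% only approximately.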
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 06 November 2015.
All research outputs: #20,657,128 of 25,374,917 outputs
Outputs from Frontiers in Computational Neuroscience: #1,116 of 1,463 outputs
Outputs of similar age: #274,561 of 369,453 outputs
Outputs of similar age from Frontiers in Computational Neuroscience: #25 of 28 outputs
Altmetric has tracked 25,374,917 research outputs across all sources so far. This one is in the 10th percentile – i.e., 10% of other outputs scored the same or lower than it.
So far Altmetric has tracked 1,463 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 7.0. This one is in the 15th percentile – i.e., 15% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 369,453 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 14th percentile – i.e., 14% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 28 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.