
Neuromorphic Camera Denoising Using Graph Neural Network-Driven Transformers

Overview of attention for an article published in IEEE Transactions on Neural Networks and Learning Systems, February 2024

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (85th percentile)
  • High Attention Score compared to outputs of the same age and source (95th percentile)

Mentioned by

  • 11 X users

Citations

  • 10 citations (Dimensions)

Readers

  • 20 readers (Mendeley)
Title
Neuromorphic Camera Denoising Using Graph Neural Network-Driven Transformers
Published in
IEEE Transactions on Neural Networks and Learning Systems, February 2024
DOI 10.1109/tnnls.2022.3201830
Authors

Yusra Alkendi, Rana Azzam, Abdulla Ayyad, Sajid Javed, Lakmal Seneviratne, Yahya Zweiri

Abstract

Neuromorphic vision is a bio-inspired technology that has triggered a paradigm shift in the computer vision community and is serving as a key enabler for a wide range of applications. This technology has offered significant advantages, including reduced power consumption, reduced processing needs, and communication speedups. However, neuromorphic cameras suffer from significant amounts of measurement noise. This noise deteriorates the performance of neuromorphic event-based perception and navigation algorithms. In this article, we propose a novel noise filtration algorithm to eliminate events that do not represent real log-intensity variations in the observed scene. We employ a graph neural network (GNN)-driven transformer algorithm, called GNN-Transformer, to classify every active event pixel in the raw stream into real log-intensity variation or noise. Within the GNN, a message-passing framework, referred to as EventConv, is carried out to reflect the spatiotemporal correlation among the events while preserving their asynchronous nature. We also introduce the known-object ground-truth labeling (KoGTL) approach for generating approximate ground-truth labels of event streams under various illumination conditions. KoGTL is used to generate labeled datasets, from experiments recorded in challenging lighting conditions, including moonlight. These datasets are used to train and extensively test our proposed algorithm. When tested on unseen datasets, the proposed algorithm outperforms state-of-the-art methods by at least 8.8% in terms of filtration accuracy. Additional tests are also conducted on publicly available datasets (ETH Zürich Color-DAVIS346 datasets) to demonstrate the generalization capabilities of the proposed algorithm in the presence of illumination variations and different motion dynamics. Compared to state-of-the-art solutions, qualitative results verified the superior capability of the proposed algorithm to eliminate noise while preserving meaningful events in the scene.
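The abstract's core idea is that a real event should be supported by spatiotemporally correlated neighbors, while noise events tend to be isolated. The paper's actual method (EventConv message passing feeding a GNN-Transformer classifier) is a learned model; as a rough intuition only, here is a hypothetical density-based filter over a raw event stream. All names, parameters, and thresholds below are illustrative assumptions, not the paper's:

```python
import numpy as np

def spatiotemporal_denoise(events, radius=3.0, dt=0.01, min_support=2):
    """Toy event-stream filter: keep an event only if enough other
    events fall inside its spatiotemporal neighborhood.

    events: (N, 3) array of (x, y, t) rows.
    radius: spatial neighborhood size in pixels (Chebyshev distance).
    dt:     temporal window in seconds.
    min_support: minimum number of neighbors for an event to be kept.
    Returns a boolean mask; True marks events classified as real.
    """
    xy = events[:, :2]
    t = events[:, 2]
    keep = np.zeros(len(events), dtype=bool)
    for i in range(len(events)):
        # Neighbors within the spatial radius and temporal window.
        close_space = np.max(np.abs(xy - xy[i]), axis=1) <= radius
        close_time = np.abs(t - t[i]) <= dt
        # Subtract 1 so the event does not count itself as support.
        support = np.count_nonzero(close_space & close_time) - 1
        keep[i] = support >= min_support
    return keep
```

A cluster of events tracing a real edge passes the support test, while an isolated hot-pixel firing does not. The learned GNN replaces this fixed threshold with features aggregated from each event's neighbors, which is what lets it keep sparse-but-real events that a pure density rule would discard.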

X Demographics

The data shown below were collected from the profiles of the 11 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 20 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Unknown   20      100%

Demographic breakdown

Readers by professional status    Count   As %
Student > PhD Student             4       20%
Student > Postgraduate            4       20%
Student > Master                  3       15%
Professor > Associate Professor   2       10%
Other                             1       5%
Other                             1       5%
Unknown                           5       25%

Readers by discipline   Count   As %
Engineering             8       40%
Computer Science        5       25%
Neuroscience            1       5%
Unknown                 6       30%
Attention Score in Context

This research output has an Altmetric Attention Score of 9. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 16 September 2022.
All research outputs: #4,138,199 of 25,420,980 outputs
Outputs from IEEE Transactions on Neural Networks and Learning Systems: #179 of 3,397 outputs
Outputs of similar age: #26,497 of 170,053 outputs
Outputs of similar age from IEEE Transactions on Neural Networks and Learning Systems: #4 of 60 outputs
Altmetric has tracked 25,420,980 research outputs across all sources so far. Compared to these, this one has done well and is in the 83rd percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 3,397 research outputs from this source. They receive a mean Attention Score of 2.7. This one has done particularly well, scoring higher than 94% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 170,053 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 85% of its contemporaries.
We're also able to compare this research output to 60 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 95% of its contemporaries.
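The percentiles quoted above follow, to a first approximation, from the rank/total pairs in the ranking list: a percentile is just the fraction of tracked outputs that score below this one. A minimal sketch (the last two figures land a point or so away from the quoted 85th/95th percentiles, presumably because Altmetric's own computation handles ties and rounding differently):

```python
def percentile(rank, total):
    """Percentage of outputs ranked below the given rank,
    where rank 1 is the highest-scoring output."""
    return 100.0 * (1.0 - rank / total)

# Rank/total pairs taken from the context rankings above.
contexts = {
    "all outputs": (4_138_199, 25_420_980),
    "same journal": (179, 3_397),
    "similar age": (26_497, 170_053),
    "similar age, same journal": (4, 60),
}

for name, (rank, total) in contexts.items():
    print(f"{name}: ~{percentile(rank, total):.1f}th percentile")
```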