
Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python

Overview of attention for an article published in Frontiers in Neuroinformatics, April 2014

Mentioned by: 1 X user
Citations: 16 (Dimensions)
Readers: 66 (Mendeley)
DOI 10.3389/fninf.2014.00039
Authors

Nicolas Rey-Villamizar, Vinay Somasundar, Murad Megjhani, Yan Xu, Yanbin Lu, Raghav Padmanabhan, Kristen Trett, William Shain, Badri Roysam

Abstract

In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
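The image dimensions quoted in the abstract can be checked with simple arithmetic; the snippet below is an illustrative back-of-envelope calculation, not code from the FARSIGHT toolkit:

```python
# Sanity check of the dataset sizes quoted in the abstract
# (illustrative only; variable names are not from FARSIGHT).
voxels_per_channel = 6000 * 10_000 * 500   # x * y * z voxel dimensions
bytes_per_voxel = 2                        # 16 bits/voxel
channels = 5                               # fluorescent channels

per_channel_gb = voxels_per_channel * bytes_per_voxel / 1e9
total_gb = per_channel_gb * channels

print(f"per channel: {per_channel_gb:.0f} GB, total: {total_gb:.0f} GB")
# 60 GB per channel, 300 GB total -- consistent with "exceeding 250 GB"
```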

X Demographics

The data shown below were collected from the profile of the 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 66 Mendeley readers of this research output.

Geographical breakdown

Country           Count   As %
United Kingdom        1     2%
United States         1     2%
France                1     2%
Germany               1     2%
Unknown              62    94%

Demographic breakdown

Readers by professional status      Count   As %
Student > Ph.D. Student                21    32%
Researcher                             10    15%
Student > Doctoral Student              6     9%
Professor > Associate Professor         6     9%
Student > Master                        5     8%
Other                                   9    14%
Unknown                                 9    14%
Readers by discipline                  Count   As %
Agricultural and Biological Sciences      12    18%
Computer Science                          12    18%
Neuroscience                               7    11%
Medicine and Dentistry                     7    11%
Engineering                                6     9%
Other                                      8    12%
Unknown                                   14    21%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 02 May 2014.
All research outputs: #15,299,919 of 22,754,104 outputs
Outputs from Frontiers in Neuroinformatics: #551 of 743 outputs
Outputs of similar age: #134,242 of 227,503 outputs
Outputs of similar age from Frontiers in Neuroinformatics: #25 of 28 outputs
Altmetric has tracked 22,754,104 research outputs across all sources so far. This one is in the 22nd percentile – i.e., 22% of other outputs scored the same or lower than it.
So far Altmetric has tracked 743 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 8.3. This one is in the 20th percentile – i.e., 20% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 227,503 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 31st percentile – i.e., 31% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 28 others from the same source and published within six weeks on either side of this one. This one is in the 7th percentile – i.e., 7% of its contemporaries scored the same or lower than it.
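The percentile figures above are simple rank statistics: the percentage of tracked outputs whose score is the same or lower. A minimal sketch of that calculation (the function name and toy data are hypothetical, not Altmetric's implementation):

```python
def percentile_rank(score, all_scores):
    """Percent of outputs scoring the same as or lower than `score`."""
    same_or_lower = sum(1 for s in all_scores if s <= score)
    return 100.0 * same_or_lower / len(all_scores)

# Toy example: a score of 1 among four tracked outputs.
scores = [0, 1, 5, 12]
print(percentile_rank(1, scores))  # 50.0 -- two of the four scored the same or lower
```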