
Validation and Trustworthiness of Multiscale Models of Cardiac Electrophysiology

Overview of attention for article published in Frontiers in Physiology, February 2018

About this Attention Score

  • Good Attention Score compared to outputs of the same age (69th percentile)
  • Good Attention Score compared to outputs of the same age and source (73rd percentile)

Mentioned by

twitter
8 X users

Readers on

mendeley
80 readers
Title
Validation and Trustworthiness of Multiscale Models of Cardiac Electrophysiology
Published in
Frontiers in Physiology, February 2018
DOI 10.3389/fphys.2018.00106
Authors

Pras Pathmanathan, Richard A. Gray

Abstract

Computational models of cardiac electrophysiology have a long history in basic science applications and device design and evaluation, but they also have significant potential for clinical applications in all areas of cardiovascular medicine, including functional imaging and mapping, drug safety evaluation, disease diagnosis, patient selection, and therapy optimisation or personalisation. For all stakeholders to be confident in model-based clinical decisions, cardiac electrophysiological (CEP) models must be demonstrated to be trustworthy and reliable. Credibility, that is, belief in a computational model's predictive capability, is primarily established by performing validation, in which model predictions are compared to experimental or clinical data. However, there are numerous challenges to performing validation for highly complex multi-scale physiological models such as CEP models. As a result, credibility of CEP model predictions is usually founded upon a wide range of distinct factors, including various types of validation results, underlying theory, evidence supporting model assumptions, and evidence from model calibration, at scales ranging from ion channel to cell to organ. Consequently, the extent to which a CEP model can be trusted for a given application is often unclear, or a matter of debate. The aim of this article is to clarify potential rationale for the trustworthiness of CEP models by reviewing evidence that has been (or could be) presented to support their credibility. We specifically address the complexity and multi-scale nature of CEP models, which makes traditional model evaluation difficult. In addition, we make explicit some of the credibility justification that we believe is implicitly embedded in the CEP modeling literature. Overall, we provide a fresh perspective on CEP model credibility, and build a depiction and categorisation of the wide-ranging body of credibility evidence for CEP models. This paper also represents a step toward extending model evaluation methodologies currently being developed by the medical device community to physiological models.

X Demographics


The data shown below were collected from the profiles of 8 X users who shared this research output.
Mendeley readers


The data shown below were compiled from readership statistics for 80 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Unknown    80      100%

Demographic breakdown

Readers by professional status    Count   As %
Researcher                        20      25%
Student > Ph.D. Student           16      20%
Student > Master                  9       11%
Student > Bachelor                6       8%
Student > Doctoral Student        5       6%
Other                             15      19%
Unknown                           9       11%
Readers by discipline                          Count   As %
Engineering                                    19      24%
Computer Science                               11      14%
Agricultural and Biological Sciences           6       8%
Biochemistry, Genetics and Molecular Biology   4       5%
Physics and Astronomy                          3       4%
Other                                          19      24%
Unknown                                        18      23%
Attention Score in Context


This research output has an Altmetric Attention Score of 5. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 22 December 2020.
All research outputs: #6,408,671 of 23,023,224 outputs
Outputs from Frontiers in Physiology: #3,041 of 13,773 outputs
Outputs of similar age: #144,784 of 474,288 outputs
Outputs of similar age from Frontiers in Physiology: #87 of 334 outputs
Altmetric has tracked 23,023,224 research outputs across all sources so far. This one has received more attention than most of these and is in the 72nd percentile.
So far Altmetric has tracked 13,773 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 7.6. This one has done well, scoring higher than 77% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 474,288 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 69% of its contemporaries.
We're also able to compare this research output to 334 others from the same source and published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 73% of its contemporaries.
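The percentile figures quoted in this section follow directly from the rank-among-outputs counts: an output ranked r-th out of n tracked outputs (rank 1 receiving the most attention) scores higher than (n - r)/n of its peers. A minimal sketch of that arithmetic, using only the ranks and totals listed above (the `percentile` helper is illustrative, not Altmetric's actual code):

```python
def percentile(rank: int, total: int) -> int:
    """Share of tracked outputs scoring lower than this one,
    rounded down to a whole percent (rank 1 = most attention)."""
    return int((total - rank) / total * 100)

# Figures quoted in this section:
print(percentile(6_408_671, 23_023_224))  # all research outputs -> 72
print(percentile(3_041, 13_773))          # same source -> 77
print(percentile(144_784, 474_288))       # outputs of similar age -> 69
print(percentile(87, 334))                # similar age, same source -> 73
```

Each result matches the percentile stated in the corresponding paragraph above (72nd, 77%, 69th, and 73rd respectively).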