
The Train Benchmark: cross-technology performance evaluation of continuous model queries

Overview of attention for article published in Software and Systems Modeling, January 2017

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Good Attention Score compared to outputs of the same age and source (75th percentile)

Mentioned by

  • 3 X users

Citations

  • 38 Dimensions

Readers on

  • 47 Mendeley
Title
The Train Benchmark: cross-technology performance evaluation of continuous model queries
Published in
Software and Systems Modeling, January 2017
DOI 10.1007/s10270-016-0571-8
Authors

Gábor Szárnyas, Benedek Izsó, István Ráth, Dániel Varró

Abstract

In model-driven development of safety-critical systems (like automotive, avionics or railways), well-formedness of models is repeatedly validated in order to detect design flaws as early as possible. In many industrial tools, validation rules are still often implemented by a large amount of imperative model traversal code which makes those rule implementations complicated and hard to maintain. Additionally, as models are rapidly increasing in size and complexity, efficient execution of validation rules is challenging for the currently available tools. Checking well-formedness constraints can be captured by declarative queries over graph models, while model update operations can be specified as model transformations. This paper presents a benchmark for systematically assessing the scalability of validating and revalidating well-formedness constraints over large graph models. The benchmark defines well-formedness validation scenarios in the railway domain: a metamodel, an instance model generator and a set of well-formedness constraints captured by queries, fault injection and repair operations (imitating the work of systems engineers by model transformations). The benchmark focuses on the performance of query evaluation, i.e. its execution time and memory consumption, with a particular emphasis on reevaluation. We demonstrate that the benchmark can be adopted to various technologies and query engines, including modeling tools; relational, graph and semantic databases. The Train Benchmark is available as an open-source project with continuous builds from https://github.com/FTSRG/trainbenchmark.
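The abstract's central idea, that a well-formedness constraint can be captured as a declarative query over a graph model, and a repair as a model update followed by revalidation, can be made concrete with a small sketch. This is an illustrative assumption only: the table names and the "every switch must be monitored by a sensor" constraint below are simplified stand-ins, not the benchmark's actual railway metamodel or queries (those live in the linked repository).

```python
import sqlite3

# Hypothetical, simplified schema: switches and a sensor-monitors-switch relation.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE switch (id INTEGER PRIMARY KEY);
    CREATE TABLE monitors (sensor_id INTEGER, switch_id INTEGER);
    INSERT INTO switch VALUES (1), (2), (3);
    INSERT INTO monitors VALUES (10, 1), (11, 3);  -- switch 2 has no sensor
""")

# Well-formedness constraint expressed declaratively:
# every switch must be monitored by at least one sensor.
CHECK = """
    SELECT s.id FROM switch s
    WHERE NOT EXISTS (SELECT 1 FROM monitors m WHERE m.switch_id = s.id)
"""
violations = conn.execute(CHECK).fetchall()   # the injected fault is detected

# Repair operation as a model update, then reevaluate the same query.
conn.execute("INSERT INTO monitors VALUES (12, 2)")
revalidated = conn.execute(CHECK).fetchall()  # no violations remain
```

The benchmark's scenarios follow this validate / modify / revalidate cycle, measuring how quickly each query engine reevaluates the constraint after the model changes.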

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 47 Mendeley readers of this research output.

Geographical breakdown

Country   Count  As %
Hungary       1    2%
Unknown      46   98%

Demographic breakdown

Readers by professional status  Count  As %
Student > Ph. D. Student           14   30%
Student > Bachelor                  6   13%
Student > Master                    5   11%
Student > Doctoral Student          4    9%
Researcher                          3    6%
Other                               6   13%
Unknown                             9   19%
Readers by discipline                 Count  As %
Computer Science                         22   47%
Engineering                               4    9%
Business, Management and Accounting       2    4%
Agricultural and Biological Sciences      2    4%
Economics, Econometrics and Finance       2    4%
Other                                     3    6%
Unknown                                  12   26%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 02 February 2017.
All research outputs: #14,771,058 of 25,838,141 outputs
Outputs from Software and Systems Modeling: #214 of 773 outputs
Outputs of similar age: #214,356 of 423,975 outputs
Outputs of similar age from Software and Systems Modeling: #4 of 16 outputs
Altmetric has tracked 25,838,141 research outputs across all sources so far. This one is in the 42nd percentile – i.e., 42% of other outputs scored the same or lower than it.
So far Altmetric has tracked 773 research outputs from this source. They receive a mean Attention Score of 2.2. This one has received more attention than average, scoring higher than 71% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 423,975 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 49th percentile – i.e., 49% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 16 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 75% of its contemporaries.