QUAMES workshop 2024

5th International Workshop on Quality and Measurement of Model-Driven Software Development

(QUAMES 2024)

Co-located with ER 2024, Pittsburgh, USA

About the workshop

The success of software development projects depends on the productivity of human resources and the efficiency of development processes to deliver high-quality products. Model-driven development (MDD) is a widely adopted paradigm that automates software generation by means of model transformations and the reuse of development knowledge. The advantages of MDD have motivated the emergence of several modeling proposals and MDD tools targeting different application domains and stages of the development lifecycle.

In MDD, the quality of conceptual models is critical because it directly impacts the quality of the final software systems. Therefore, it is essential to evaluate conceptual models and predict the relevant characteristics of the software products. Additionally, MDD project management must be adapted to take into account that programming effort is replaced by modeling effort at an earlier stage. Hence, measuring models is crucial to support cost estimation and project management. Moreover, testing models is paramount to explore and calculate quality metrics (such as coverage and failures) related to the developed software products.

To address these challenges, QUAMES aims to attract research on methods, procedures, techniques, and tools for measuring and evaluating the quality of conceptual models that can be used in any phase of the software development lifecycle. Its primary goal is to enable the development of high-quality software systems by promoting quality assurance from a modeling-based perspective. Furthermore, considering the growing use of AI to streamline software development or as an essential element within software products, we advocate that it is important to conceptualize the models used for the training and operation of these systems and to evaluate their quality. Thus, this year QUAMES also aims to discuss the challenges, benefits, and lessons learned from using conceptual modeling and AI approaches, exploring how their synergy can impact the quality of systems.

Important dates

  • Abstract Submission (optional): 5th July 2024
  • Submission deadline: 27th July 2024
  • Paper notification: 11th August 2024
  • Camera-ready submission: 30th August 2024
  • Early registration deadline: 9th September 2024
  • Workshops take place on: 28th-31st October 2024

Topics

The topics of interest include (but are not limited to):
– Quality models for conceptual models
– Empirical evaluation of quality models
– Measures for conceptual models
– Measurement of conceptual models
– Defect detection in conceptual models
– Testing of conceptual models
– Model-based testing
– Case studies, experiments, and surveys of MDD projects
– Tools for measuring conceptual models
– Tools for quality evaluation of conceptual models
– Tools for model-based testing
– Quality of conceptual models for ethics and trustworthiness
– AI for conceptual models and conceptual models for AI
– Conceptual models for critical systems
– Conceptual models for system assurance

Submissions


Participants are invited to submit papers concerning the quality, measurement, or testing of models that can be used in MDD environments. Reports of ongoing research work are also welcome.

Accepted papers are planned to be published in the joint workshop proceedings of the ER conference.

Papers can be accepted as full papers (no more than 16 pages) or short papers (no more than 10 pages).

Submissions to QUAMES 2024 must be formatted according to Springer's LNCS submission formatting guidelines (for instructions and style sheets, see https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines).

Papers must be submitted in PDF format via the EasyChair submission page (https://easychair.org/conferences/?conf=er2024), selecting the QUAMES Workshop track.

All submissions will be screened by the scientific committee for their appropriateness to the workshop themes and format. Each submission will be reviewed by at least three program committee members. Authors will be guided to fit their presentations to the workshop rules. In case of inconclusive or conflicting review results, internal discussions will be held to decide on the final acceptance or rejection of a paper.

It is mandatory that at least one author registers for the workshop and presents the paper.


Organizers

Beatriz Marín – Universitat Politècnica de València, Spain – bmarin@dsic.upv.es

Giovanni Giachetti – Universitat Politècnica de València, Spain, and Universidad Andrés Bello, Chile – ggiachetti@dsic.upv.es; giovanni.giachetti@unab.cl

Clara Ayora – Universidad de Castilla-La Mancha, Spain – clara.ayora@uclm.es

Program committee

  • Oscar Pastor – Universitat Politècnica de Valencia (Spain)
  • Maya Daneva – University of Twente (The Netherlands)
  • Tanja Vos – Universitat Politècnica de Valencia (Spain)
  • Ignacio Panach – Universidad de Valencia (Spain)
  • Juan Cuadrado-Gallego – University of Alcala (Spain)
  • Anna Rita Fasolino – University of Napoli Federico II (Italy)
  • Jose Luis de la Vara – University of Castilla-La Mancha (Spain)
  • Mehrdad Saadatmand – RISE Research Institutes of Sweden (Sweden)
  • Isabel Brito – Instituto Politécnico de Beja (Portugal)
  • Dietmar Winkler – Vienna University of Technology (Austria)
  • Jolita Ralyte – University of Geneva (Switzerland)
  • Yves Wautelet – KU Leuven (Belgium)
  • René Noel – Universitat Politècnica de Valencia (Spain)
  • Ignacio García – University of Castilla-La Mancha (Spain)
  • Ana Paiva – University of Porto (Portugal)
  • Porfirio Tramontana – University of Napoli Federico II (Italy)
  • Stefan Biffl – Vienna University of Technology (Austria)
  • Estefania Serral – KU Leuven (Belgium)
  • Shaukat Ali – Simula Research Lab (Norway)

Previous Editions