Third International Workshop on Human Factors in Modeling (HuFaMo’18)

A MODELS'18 workshop

15 October 2018, Copenhagen, Denmark


In order to have enriching discussions and allow the HuFaMo community to grow, we plan to allow participants to come and present or simply discuss work in progress, tools...

Interested? Take a look below:

Overview

What it is all about.

Modeling is a genuinely human enterprise, so many of the questions related to modeling can only be answered by empirical studies of human factors. The HuFaMo workshop series is the venue for early stage empirical research involving human factors in modeling. Our goal is to improve the state of the science and professionalism in empirical research in the Model Based Engineering community. Typical examples of such questions might consider the usability of a certain approach such as a method or language, or the emotional states or personal judgements of modelers.

We invite submissions regarding empirical studies of the following aspects.

  • Emotion and preference of users in the face of modeling-related tools and activities
  • Stress, load, and performance involving modeling activities and artifacts
  • Communicative and cognitive strategies and styles connected to modeling activities
  • Training and testing of modeling, modeling tools, and related practices
  • Capabilities and competencies
  • Team and group behavior, including behavior across (social) media

Other topics that fit into the general frame of this workshop are also welcome.

Program

A day full of fun!

09:00-10:30   Opening
   Keynote / invited talk: Blazho Nastov, GenMyModel R&D manager
"Requests within the customer service of GenMyModel: the place of human factors."
   Discussions and open announcements/demos

11:00-12:30   Support for modeling
   Using sketch recognition for capturing developer’s mental models
      Emmanuel Renaux, Tatiana De-Wyse, José Mennesson
   ModelByVoice - towards a general purpose model editor for blind people
      João Lopes, João Cambeiro, Vasco Amaral
   Discussions and open announcements/demos

14:00-15:30   Experimenting for comparing
   Comparing the comprehensibility of numeric versus symbolic contribution labels in goal models: an experimental design
       Sotirios Liaskos, Wisal Tambosi
   Comparing the Developer Experience with two Multi-Agents Systems DSLs: SEA_ML++ and DSML4MAS - Study Design
       João Silva, Ankica Barisic, Vasco Amaral, Miguel Goulão, Baris Tekin Tezel, Ömer Faruk Alaca, Moharram Challenger, Geylani Kardas
   Discussions about replicating the experiment from Silva's paper

16:00-17:30   Working on modeling languages
   Visual Inheritance for Designing Visual Notation Based on a Metamodel
       Nungki Selviandro, Tim Kelly, Richard Hawkins
   Modeling and Analyzing Information Flow in Development Teams as a Pipe System
       Jil Klünder, Oliver Karras, Nils Prenner, Schneider
   Discussions and open announcements/demos

Discussions in HuFaMo

You talkin' to me?

You want to talk about something you’re working on? Tool, experiment…

HuFaMo has a slot for that: take part in the "Discussions and open announcements/demos" interludes.

HuFaMo's program includes one invited talk and six presentations of research work, all related to human factors in modelling. These seven presentations are divided into four sessions of one hour and thirty minutes. To avoid the "head stuffing" effect, we wish to leave a significant part of each session to discussion (one hour of presentations and questions, thirty minutes of discussion).

In order to have enriching discussions and allow the HuFaMo community to grow, we plan to allow participants to come and present (or simply discuss)

  • work in progress in their team,
  • tools they have developed (or started to develop),
  • or simply tools, initiatives or work they have seen elsewhere and want to share with other participants.

There is no need to send a summary: just email us a single sentence saying what you want to talk about. Slides are optional, and so is a demo. The format is completely open.

Please just send us an email!

Invited talk

A nice guy!

Blazho Nastov, GenMyModel R&D manager
"Requests within the customer service of GenMyModel: the place of human factors."

https://www.genmymodel.com

Submission Types

Put your shoulder to the wheel!

We solicit four types of submissions, each with its own quality and review criteria.

1. Empirical Study

of human factors in modeling, including replication studies and negative results. We strongly encourage authors to submit raw data and analysis scripts.

2. Study Designs

investigating human factors in modeling. These contributions will be evaluated based on the quality of the study design alone, i.e., whether the reviewers deem them promising to obtain meaningful, valid, and interesting results. No actual study results are expected.

3. Theory Papers

contributing to, or developing, a theory of some aspect of human factors relevant in modeling. No empirical validation is required, but a thorough analysis of the existing work from all relevant fields (including, e.g., psychology, sociology, philosophy, and more as appropriate) is expected.

4. Tool Papers

that present any software developed to support experiments related to human factors in modeling. We intend here to promote tools that can speed up the software implementation of an experiment. We typically seek libraries, frameworks, APIs, and similar tools that gather data about human actions and/or interactions between humans and electronic devices.

All submissions should be 6 to 8 pages in length, including references, appendices, and figures. Each submission should clearly state in its title which of the above categories it belongs to. All accepted submissions will be discussed in the workshop. Publication requires at least one of the authors to be present at the workshop. We particularly encourage researchers who need to design a study but lack experience in this field to come forward and present study designs so these may be discussed and improved, leading to better quality research.

Submissions must conform to the MODELS'18 formatting guidelines.

All submissions must be uploaded through EasyChair.

Results dissemination

About your international reputation!

  • All workshop papers will be published in a dedicated CEUR-WS volume.
  • We strongly encourage authors to publicly archive additional materials like raw data or analysis scripts on Zenodo before submitting.
  • As in the previous year, we are planning to publish the best papers of the workshop in a Special Issue of a high-impact journal.

Important dates

Don't miss the Event!

  • Paper submission deadline (EXTENDED): Tue 24 July 2018 (firm deadline; originally Tue 17 July 2018)
  • Author notification: Fri 17 August 2018
  • Workshop date: Mon 15 October 2018

Organizers

A handful of troublemakers!

Silvia Abrahão

Universitat Politècnica de València (Spain)

Miguel Goulão

Universidade Nova de Lisboa (Portugal)

Patrick Heymans

University of Namur (Belgium)

Xavier Le Pallec

University of Lille (France)

Emmanuel Renaux

IMT Lille Douai (France)


Program committee

Appropriateness of the objectives

Vasco Amaral Universidade Nova de Lisboa (Portugal)
Arnaud Blouin INSA Rennes (France)
Michel Chaudron Chalmers and Gothenburg University (Sweden)
Cédric Dumoulin University of Lille (France)
Emilio Insfran Universidad Politecnica de Valencia (Spain)
Bran Selic Malina Software Corp. (Canada)
David Socha University of Washington Bothell (USA)
Jean-Claude Tarby University of Lille (France)
Juha-Pekka Tolvanen MetaCase (Finland)
Jean Vanderdonckt Université Catholique de Louvain (Belgium)

Works

Batman would be jealous.


Design & Conduct your Experiment

A Controlled Experiment Template for Evaluating the Understandability of Model Transformation Languages
Abstract

Several research approaches in the field of Model-Driven Engineering (MDE) are concerned with the development of model transformation languages. No controlled experiments have, however, been conducted yet to evaluate whether it is easier to write model transformations in a model transformation language (MTL) than in a general purpose programming language (GPPL). Such experiments are difficult to design and conduct. To write and maintain code in an MTL, it is necessary to understand the code. Thus, an evaluation of the effect on program comprehension is a first step towards empirically evaluating the benefit of model transformation languages.

In this study design paper, we propose an experiment template for empirically measuring the potential understandability gain of using an MTL instead of a GPPL. We discuss a randomized experiment setup, in which subjects fill out a paper-based questionnaire to prove their ability to understand the effect of transformation code snippets, which are either written with an MTL or a GPPL. To evaluate the influence of the language on the quality and speed of program comprehension, we propose two statistical tests, which compare the average number of correct answers and the average time spent.
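A mean comparison like the one described above is commonly done with Welch's t-test, which does not assume equal group variances. The sketch below is only an illustration of that kind of analysis, not the authors' actual procedure; the scores and group sizes are invented.

```python
# Hypothetical sketch: Welch's t-statistic comparing the number of
# correct answers of an MTL group vs. a GPPL group. All data invented.
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Return Welch's t statistic and its approximate degrees of freedom."""
    n1, n2 = len(a), len(b)
    v1, v2 = variance(a), variance(b)      # sample variances
    se2 = v1 / n1 + v2 / n2                # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Invented comprehension scores (correct answers out of 10) per subject.
mtl_scores  = [8, 9, 7, 8, 9, 8, 7, 9]
gppl_scores = [6, 7, 7, 5, 6, 8, 6, 7]

t, df = welch_t(mtl_scores, gppl_scores)
print(f"t = {t:.2f}, df = {df:.1f}")  # compare against a t-table, or use scipy for a p-value
```

The same shape of test would apply to the second proposed comparison (average time spent per question), with times in place of scores.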

HuFaMo'16

Max E. Kramer
Karlsruhe Institute of Technology
max.e.kramer@kit.edu

Georg Hinkel
FZI – Research Center for Information Technology
hinkel@fzi.de

Heiko Klare
Karlsruhe Institute of Technology
heiko.klare@kit.edu

Michael Langhammer
Karlsruhe Institute of Technology
langhammer@kit.edu

Erik Burger
Karlsruhe Institute of Technology
burger@kit.edu


Next Generation Software Design Tools

A Vision on a New Generation of Software Design Environments
Abstract

In this paper we explain our vision for a new generation of software design environments. We aim to generalize existing software development tools in several key ways, including the integration of rigorous and informal notations and support for multiple modes of interaction. We describe how we can consolidate the environment by integrating it with other software engineering tools. Furthermore, we describe some methods which could permit the environment to provide a flexible collaborative medium and a practical and inspiring user experience.

HuFaMo'15

Michel R.V. Chaudron

Rodi Jolak
Joint Department of Computer Science and Engineering
Chalmers University of Technology and Gothenburg University
Gothenburg, Sweden
{chaudron, jolak}@chalmers.se


First International Workshop on Human Factors in Modeling

Proceedings preface
Introduction

Modeling is a human-intensive enterprise. As such, many research questions related to modeling can only be answered by empirical studies involving human factors. The International Workshop Series on Human Factors in Modeling (HuFaMo) is dedicated to the discussion of empirical research involving human factors in modeling. Our goal is to improve the state of the science and professionalism in empirical research in the Model Based Engineering community. Typical examples of research questions might consider the usability of a certain approach, such as a method or language, or the emotional states or personal judgements of modelers. While concerned with foundations and framework support for modeling, the community has been somewhat neglecting the issue of human factors in this context. There is a growing need from the community concerned with quality factors to understand the best practices and systematic approaches to assess usability in modeling and confirm the claims of productivity. This workshop creates a space for discussion, bringing together the MDE, usability, human interfaces, and experimental software engineering communities. HuFaMo expressly focuses on human factors in order to raise awareness of these topics and the associated research methods and questions in the modeling community, providing an outlet for research of this type and guaranteeing high-quality reviews by people who apply these research methods themselves. Along with complete empirical evaluations, the workshop organizers explicitly encouraged researchers new to empirical methods to discuss study designs before conducting their empirical evaluations. The rationale was to create a constructive environment where the HuFaMo participants could contribute to improving the proposed study designs so that stronger (and more easily replicable) empirical designs and results can be obtained.
Ultimately, we aim to congregate a community of researchers and practitioners that promotes (possibly independently replicated) empirical assessments on claims related to human factors in modeling.

HuFaMo'15

Harald Störrle
DTU Compute Technical University of Denmark Lyngby, Denmark
hsto@dtu.dk

Michel R. V. Chaudron
Chalmers University of Technology and Gothenburg University Gothenburg, Sweden
chaudron@chalmers.se

Vasco Amaral and Miguel Goulão
NOVA LINCS, DI, FCT Universidade NOVA de Lisboa, Lisboa, Portugal
vma@fct.unl.pt, mgoul@fct.unl.pt


Previous editions

The amazing saga of HuFaMo

HuFaMo'15

September 28, 2015, Ottawa, Canada

HuFaMo'16

October 4, 2016, Saint Malo, France

HuFaMo'17 (CANCELLED)

September 2017, Austin, Texas
