Accepted Papers

Full Papers

Is it possible to identify careless responses with post-hoc analysis in EMA studies?
by Jana Welling (WS Audiology); Rosa-Linde Fischer (WS Audiology); Nadja Schinkel-Bielefeld (WS Audiology)

Abstract. Data quality is a major issue when conducting studies in behavioral sciences. One of the possible threats to data quality in questionnaire studies is careless responses (CR). When responding carelessly, subjects do not pay sufficient attention to the questions and therefore compromise the interpretability of the responses. The aim of the current study was to gain a better understanding of the occurrence and identification of CR in Ecological Momentary Assessment (EMA) studies, in which several questionnaires per day are usually administered to the subjects over the course of days, weeks or even months. For this purpose, an explorative post-hoc analysis was conducted using the data of an existing EMA study in audiological research. Completion time, variance, skipped items, acquiescence bias and number of textboxes were analyzed as potential indicators of CR, both inter- and intraindividually. Furthermore, consistency was examined using linear mixed models and by scanning individual questionnaires. Results showed barely any systematic inconsistencies, indicating the absence of large-scale CR. However, this type of analysis might not be appropriate to identify CR that occurs only occasionally. Moreover, the reliability of indicators of CR might be limited in EMA studies, as the indicators also vary over the course of the study and between different situations. Possibilities for future studies are discussed.


Proposing a Perceived Expertise Tool in Business Data Analytics
by Panagiotis Germanakos (SAP SE & InSPIRE Center); Zacharias Lekkas (National & Kapodistrian University of Athens); Christos Amyrotos (University of Central Lancashire Cyprus & InSPIRE Center); Panayiotis Andreou (University of Central Lancashire Cyprus & InSPIRE Center)

Abstract. The business data analytics domain exhibits a particularly diversified and demanding field of interaction for end-users. It entails complex tasks and actions, expressed through multidimensional data visualization and exploration content, that users with different business roles, skills and experiences need to understand and act on in order to meet their goals. This engagement often proves overwhelming for professionals, highlighting the need for adaptive and personalized solutions that consider their level of expertise towards an enhanced user experience and quality of outcomes. However, adequately measuring the perceived expertise of individuals with standardized means is still an open challenge in the community, as most current approaches employ participatory research design practices that are time-consuming, costly, and difficult to replicate or to produce comparable, unbiased results for informed interpretations. Hence, this paper proposes a systematic alternative for capturing expertise through a Perceived Expertise Tool (PET), devised based on grounded theoretical perspectives and psychometric properties. A preliminary evaluation with 54 professionals in the data analytics domain showed acceptable internal consistency and validity of PET as well as its significant correlation with other affiliated theoretical and domain-specific concepts. Such findings may suggest a good basis for the standardized modeling of users' perceived expertise that could lead to effective adaptation and personalization.


Qualitative Evaluation of an Adaptive Exercise Selection Algorithm
by Juliet Okpo (Nigerian Defence Academy); Judith Masthoff (Utrecht University & University of Aberdeen); Matt Dennis (University of Portsmouth)

Abstract. This paper presents a qualitative study in which we evaluate the core parts of an adaptive algorithm for next-exercise selection in an e-learning system. The algorithm was previously constructed from a series of studies in which participants played the role of a teacher and chose the difficulty of a subsequent exercise for a learner based on their performance, mental effort and self-esteem. In this paper, we present these findings to real teachers to gain insights into whether the algorithm is effective and appropriate for future inclusion in an intelligent tutoring system. Overall, we found that teachers believed that the recommendations from the algorithm were appropriate.

 

Short Papers

Should Conditional Self-Driving Cars Consider the State of the Human Inside the Vehicle?
by David Puertas-Ramirez (UNED); Ana Serrano-Mamolar (UNED); David Martin Gomez (UC3M); Jesus G. Boticario (UNED)

Abstract. Conditional Autonomous Vehicles are said to be the next step in the development of self-driving cars. The human driver still plays a critical role in them, taking over control of the vehicle when prompted. As the technology is still imperfect, human drivers are also required to be able to detect and react to Automated Driving System (ADS) malfunctions. Within this context, we argue that to ensure safety during autonomous operation the user state should be measured at all times, which guarantees a "fallback-ready state". Based on an in-depth literature review, this article clarifies which human factors involved in the aforementioned "fallback-ready state" affect the personalization of human-vehicle interaction.


Interactivity, Fairness and Explanations in Recommendations
by Giorgos Giannopoulos (IMSI/Athena Research Center); George Papastefanatos (IMSI/Athena Research Center); Dimitris Sacharidis (Université Libre de Bruxelles); Kostas Stefanidis (Tampere University, Finland)

Abstract. More and more aspects of our everyday lives are influenced by automated decisions made by systems that statistically analyze traces of our activities. It is thus natural to question whether such systems are trustworthy, particularly given the opaqueness and complexity of their internal workings. In this paper, we present our ongoing work towards a framework that aims to increase trust in machine-generated recommendations by combining ideas from three separate recent research directions, namely explainability, fairness and user interactive visualization. The goal is to enable different stakeholders, with potentially varying levels of background and diverse needs, to query, understand, and fix sources of distrust.


Human-centred Persona Driven Personalization in Business Data Analytics
by Christos Amyrotos (University of Central Lancashire Cyprus & InSPIRE Center); Panayiotis Andreou (University of Central Lancashire Cyprus & InSPIRE Center); Panagiotis Germanakos (SAP SE & InSPIRE Center)

Abstract. The modern business environment is empowered by the abundant availability of data and a plethora of sophisticated data analysis tools for identifying and quickly addressing market needs. While these tools have evolved significantly in recent years, offering trailblazing data exploration experiences with stunning multi-modal visualizations, they neglect the importance of individualized, user-centred delivery of information and insights. As a result, users may require much more effort and time to reach decisions that have implications for both the short-term and long-term success and sustainability of an organization. This paper highlights the need for user-centred, persona-driven data exploration through adaptive data visualizations and personalized support for an end-to-end business process. It proposes an extended human-centred persona and discusses preliminary evaluation results in relation to the formulation of the contextual characteristics of a business environment, i.e., business tasks, visualizations and data.


Analysis of Users Engaged in Online Discussions about Controversial Covid-19 Treatments
by Liana Ermakova (Université de Bretagne Occidentale); Diana Nurbakova (National Institute of Applied Sciences of Lyon (INSA Lyon)); Irina Ovchinnikova (Sechenov First Moscow State Medical University)

Abstract. The stories that people find credible can influence the adoption of public health policies and determine their response to the pandemic, including the reception of controversial treatments. Although we trust people we know or admire, we should ask ourselves whether they are sufficiently competent to provide a reliable opinion about medical treatments. In this paper, we try to identify the professions, political views and psychological characteristics of Twitter users who shared information about controversial medical treatments by analysing the profile data associated with ~10M tweets published in English during the Covid-19 pandemic. We found that the profile descriptions of Twitter users are very heterogeneous, but that the major categories of users are Christians, devoted family members, and fans of particular music or political parties. We propose an automatic approach for user classification and show that only 10% of users sharing information about controversial treatments are researchers or healthcare specialists.