Workshop Outcomes

Interactive session in working groups

Research outcomes from brainstorming on current challenges in relation to the presented papers

(a) What and How to Model Human Factors
The discussion for this topic centred on user modelling and personalisation in the context of hearing aids. This led us to pinpoint several key issues for this domain, and for HAAPIE factors more generally:
  • First comes initial problem exploration -- using data if you have it, and/or engaging with stakeholders through participatory design. This can inform what to model, how to model it, and what kind of personalisation is needed
  • Then comes the modelling itself. In this domain (and many others, especially those relating to health and wellbeing), you need to take into account not just human factors about the user and the device they interact with, but also the environment (the physical space) where the interaction takes place. From the resulting cyber-physical data traces, you then try to infer the situations in which personalisation is needed and appropriate
  • Tying the first two points together is the need for responsible practice. By involving stakeholders at every stage of development, you can ensure that user privacy and comfort are safeguarded. This matters especially because in many domains the user models can include sensitive health information and detailed information about the user's environment. One promising method is on-device computing, whereby user data never leaves the device
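The on-device computing idea above can be illustrated with a minimal sketch. All names here are hypothetical (not from any real hearing-aid API): a small local model learns the user's preferred gain per acoustic environment from their own adjustments, and only the locally computed suggestion is ever used, so raw data need not leave the device.

```python
# Hypothetical sketch of on-device personalisation: the model adapts a
# per-environment gain preference from the user's own volume adjustments.
# All state lives on the device; no data is transmitted anywhere.

from collections import defaultdict


class OnDevicePersonaliser:
    """Running estimate of the user's preferred gain per environment."""

    def __init__(self, learning_rate=0.2, default_gain=0.0):
        self.learning_rate = learning_rate
        # Unseen environments start at the default gain.
        self.preferred_gain = defaultdict(lambda: default_gain)

    def observe_adjustment(self, environment, adjusted_gain):
        """Update the local model from one user adjustment (stays on device)."""
        current = self.preferred_gain[environment]
        # Exponential moving average toward the user's chosen gain.
        self.preferred_gain[environment] = (
            current + self.learning_rate * (adjusted_gain - current)
        )

    def suggest_gain(self, environment):
        """Personalised suggestion computed entirely from local state."""
        return self.preferred_gain[environment]
```

For example, if the user repeatedly raises the gain to 6 dB in a noisy restaurant, the local estimate for that environment converges toward 6 dB, while other environments keep their defaults. This is only one possible design; the point is that the whole personalisation loop runs locally.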
(b) How to Evaluate, Interpret and Fuse Recommendations
  • The choice of metrics for evaluation is crucial. Evaluation should be done with the utility function in mind, which is usually much more complex than maximising a single variable -- e.g. in education, not just learning gain but also motivation, engagement, and emotion
  • In the area of recommender systems, if we evaluate only for accuracy we miss user satisfaction. A random recommendation accompanied by an explanation may work better than a perfect recommender system without one
  • Persuasion increases user trust and may be more effective. On the other hand, we need objective data (actions), not just self-reported data. But to interpret the objective data, we need more data -- and also self-reported data
  • One factor that should be considered is privacy, especially in context-aware recommender systems. In the real world, you cannot practically ask users for certain information: users will not disclose some information about themselves, or collecting it may put you in legal trouble
  • Regarding test design, it should be simple, especially when testing in industry, because the population is very large and many measures are not applicable. Many factors play a role, and it is not possible to eliminate all of them. Evaluating in the real world brings messiness, and you may not observe any effect
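The first bullet's point -- that evaluation should target a utility function over several measures rather than a single metric -- can be sketched as follows. The metric names and weights are illustrative assumptions, not from any benchmark in the papers discussed.

```python
# Hedged sketch of multi-objective evaluation: score each system against a
# weighted utility over several normalised measures (all values assumed
# to lie in [0, 1]; metric names and weights are illustrative only).

def utility(metrics, weights):
    """Weighted utility over the metrics named in `weights`."""
    missing = set(weights) - set(metrics)
    if missing:
        raise ValueError(f"missing metrics: {sorted(missing)}")
    return sum(weights[name] * metrics[name] for name in weights)


# Two hypothetical recommenders: one maximises accuracy alone, the other
# trades some accuracy for user satisfaction and engagement.
weights = {"accuracy": 0.4, "satisfaction": 0.4, "engagement": 0.2}
accurate_only = {"accuracy": 0.95, "satisfaction": 0.40, "engagement": 0.30}
balanced = {"accuracy": 0.80, "satisfaction": 0.75, "engagement": 0.70}
```

Under these (assumed) weights the balanced system scores higher overall despite lower accuracy, which is exactly the gap that accuracy-only evaluation hides.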