Workshop Outcomes
“Putting the Control back with the Humans”
Panel research outcomes initially triggered in relation to the presented papers

How to realize user control
- Whether users can check the correctness of the system’s understanding of their actions (creativity phases; engagement with video watching for learning; personality traits; the record held about them);
- Ability to choose what data to provide, and for how long;
- Users can change the pace of the interaction with the system by changing their behavior;
- Giving the user the choice to revert a decision made by the adaptation;
- In the case of a car personal assistant, the user can still drive; passengers and their presence should also be considered;
- Users can choose to switch off certain adaptations;
- Users can decline to provide mandatory data by explaining why;
- Providing a playback of users’ interactions;
- The relationship between the interaction and the type of content;
- An environment is needed to collect constant feedback from users during interactions (in the context of time and space);
- A meta-level understanding of interactions is needed (open model, glass-box model);
- The current technology is quite limited;
- Identifying which game applies in maintaining equilibrium states of interaction;
- The system should be able to reveal the underlying model and give users the power to influence the model and the decisions taken;
- The decision behind the pedagogical interventions – providing the answer to why;
- Better movement capture while interacting with the system and how it relates to the elements of the games;
- More simulation studies (e.g. with health and safety in mind);
- Need progressive enhancement for missing data (e.g., if a user does not provide their income, check whether it can be deduced from other sources);
- How to enable the users to trust the system by explaining the adaptation strategies (and also address the privacy issues);
- To devise a simulation environment to study these challenges of ESI with a set of small but multidimensional data
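The progressive-enhancement idea listed above (deducing a withheld field from other sources, or degrading gracefully) could be sketched as follows. This is a minimal illustration, not part of the workshop discussion: the field names (`income_bracket`, `occupation`, `owns_home`), the deduction rule, and the recommendation tiers are all assumed for the example.

```python
# Hypothetical sketch of progressive enhancement for missing user data:
# if an optional field is withheld, try to deduce a coarse value from
# other available data; otherwise fall back to a non-personalized default.

def estimate_income_bracket(profile: dict) -> str:
    """Deduce a coarse income bracket from other fields (illustrative rule)."""
    if profile.get("occupation") in {"student", "unemployed"}:
        return "low"
    if profile.get("owns_home"):
        return "high"
    return "medium"

def personalize(profile: dict) -> str:
    """Return a recommendation tier, degrading gracefully on missing data."""
    income = profile.get("income_bracket")         # the user may decline this
    if income is None:
        income = estimate_income_bracket(profile)  # progressive enhancement
    return {"low": "budget", "medium": "standard", "high": "premium"}[income]

print(personalize({"occupation": "student"}))   # budget (deduced)
print(personalize({"income_bracket": "high"}))  # premium (provided)
print(personalize({}))                          # standard (default)
```

The key property is that the system never blocks on the missing field: it either infers a coarse substitute or serves a sensible default.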
Interactive session in working groups
Research outcomes of brainstorming on current challenges

(a) Privacy and ethics in personalization
- Research vs. general purpose ethics (as researchers we are, and have been for a while, covered by ethical approval issued by institutional ethics boards, which already cover most aspects of GDPR)
- Issues with data from modalities like audio/video – how do we anonymise these?
- Variety of data that falls under personally identifiable information (PII) and other ‘sensitive’ information
- Misinformation using PII (e.g. targeting specific demographics during elections)
- Under GDPR, algorithms cannot make solely automated decisions that significantly affect people (e.g., whether someone should be approved for a mortgage)
- Explainable AI (XAI) could help make this process scalable while remaining fair, transparent, and accountable
- There is still a lot of uncertainty around the enforcement of GDPR; as such, we are not sure how strictly the rules will be enforced and what this will mean for UMAP research
- Functional algorithms (analogous to functional cookies, i.e., the cookies necessary for the website to function; if the user does not consent to them, they cannot use the website), which obtain informed consent (but is it actually informed?)
- Progressive enhancement (users can opt out of giving certain data, but the algorithms and system would still need to provide fair access and usage even with such missing data) – is this workable?
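One way to read the “functional algorithms” and opt-out points above is as a consent filter applied before any personalization: required (“functional”) data categories always pass through, while optional ones pass only with consent, so the system stays usable either way. The category names and consent flags below are illustrative assumptions, not anything specified in the discussion.

```python
# Illustrative consent filter: only data categories the user has consented to
# reach the personalization pipeline. "Functional" categories are required
# for basic operation (analogous to functional cookies); optional ones
# merely enhance results when consented to.

FUNCTIONAL = {"session"}            # assumed: required for basic operation
OPTIONAL = {"location", "history"}  # assumed: enhance results if consented

def usable_data(data: dict, consents: set) -> dict:
    """Keep functional categories plus consented optional ones."""
    allowed = FUNCTIONAL | (OPTIONAL & consents)
    return {k: v for k, v in data.items() if k in allowed}

data = {"session": "abc", "location": "Berlin", "history": ["x", "y"]}
print(usable_data(data, consents={"location"}))
# {'session': 'abc', 'location': 'Berlin'}
```

Because the filter only drops optional categories, a user who opts out of everything still gets a functioning (if less personalized) service, which is the “fair access with missing data” property the bullet asks about.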
- Interacting with the same entity (e.g., service) across various realities (platforms and contexts)
- Recommendations (+ data processing, handling) coming from central point of access
- Intelligent digital assistance
  - Models at higher levels of abstraction capturing the holistic user actions
- Agent systems providing recommendations (more explicit control over the agent, e.g., automating processes and actions which the user then approves or rejects, as in the automotive domain)
- Degree of control applicability highly dependent on the domain of activity
- User models that consider users’ learning curve when recommending
- How recommendations may differ with regard to more “physical” interactions, e.g., augmented reality
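The “learning curve” bullet above could be realized, for instance, by tracking an estimated skill level from task outcomes and recommending items slightly above it. Everything below is an illustrative assumption: the exponential update rule, the `margin` parameter, and the item difficulties are invented for the sketch.

```python
# Illustrative learning-curve-aware recommender: maintain a skill estimate
# in [0, 1], nudged by observed successes/failures, and recommend the item
# whose difficulty is closest to slightly above the current estimate.

def update_skill(skill: float, success: bool, rate: float = 0.1) -> float:
    """Move the skill estimate toward 1 on success, toward 0 on failure."""
    target = 1.0 if success else 0.0
    return skill + rate * (target - skill)

def recommend(skill: float, items: dict, margin: float = 0.05) -> str:
    """Pick the item whose difficulty is nearest to skill + margin."""
    return min(items, key=lambda name: abs(items[name] - (skill + margin)))

items = {"intro": 0.2, "core": 0.5, "advanced": 0.8}
skill = 0.4
skill = update_skill(skill, success=True)  # 0.46
print(recommend(skill, items))             # core
```

A small positive `margin` keeps recommendations mildly challenging as the estimate rises, which is one simple way a user model can follow a learning curve rather than a static preference profile.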