• Sebastian Egger-Lampl, Austrian Institute of Technology, Austria
  • Florian Metzger, University of Duisburg-Essen, Campus Essen, Germany
  • Matthias Hirth, University of Würzburg, Germany
  • Florian Hammer, Linz Center of Mechatronics, Austria


A substantial body of QoE research has been carried out in the fields of static and adaptive media streaming and their nuanced quality-influencing factors. Assessing these services is rather straightforward due to their low degree of interactivity. Audio- and video-conferencing solutions have also been well researched. However, these kinds of applications use very narrowly defined modes of interactivity, making them easier to investigate.

In contrast, interactive applications such as web browsing, gaming, and enterprise applications, but also IoT (Internet of Things) and CPS (Cyber-Physical System) applications, can exhibit a far higher degree of interactivity and may additionally feature different, novel modes of interaction. These applications therefore require special approaches for QoE assessment. A further challenge is to discriminate among the various inter- and intra-application definitions and measures of interactivity and the resulting requirements on the underlying QoS and HIDs (human interface devices).

While Web browsing is a staple of everyday life, video games exemplify how diverse interactivity can become; both are therefore important candidates for user-centric quality evaluations. All of the above examples have in common that they are difficult to evaluate, starting with the right choice of metrics. Video games are especially difficult to compare with one another and often require individually tailored metrics that best describe the specific types of interaction present in a given game. Moreover, enterprise applications, CPSs, and video games can have steep learning curves, which poses new challenges when selecting appropriate test candidates. For example, this includes finding trained test participants who are skilled enough to use the system or play the game in a natural manner. This adds participant proficiency as a potential new influence factor for subjective evaluations.

Additionally, setting up realistic subjective tests becomes more challenging for interactive applications. Evaluating IoT and Cyber-Physical System applications requires specialized hardware that is often tightly bound to a given use case and cannot be reused in other studies. Highly interactive and complex systems such as online games or distributed enterprise applications often cannot be run in a small-scale testbed at all, and can often only be evaluated as a whole, without the ability to investigate individual features of the application independently. Consequently, new ways need to be found to realistically emulate those applications in a lab or crowdsourcing setting in order to foster subjective studies. Some applications may even only be evaluated with reasonable effort in in-situ tests, which raises many questions regarding the test methodologies for these applications and their validity. Finally, activities around the evaluation of the applications considered above may facilitate a convergence of QoE and UX into QUX (Quality of User Experience).

Such novel approaches to user-centric quality metrics and evaluation methods (subjective as well as objective) for interactive applications are the core topics we want to explore in this special session.

Topics of Interest

  • Categorization of interactive applications and the term interactivity
    • Inter- and intra-application definitions of interactivity (qualitative and quantitative)
    • Interaction models
    • Interactivity determination and quantification
  • Application-specific metrics for interactivity
    • Objective metrics (QoS)
    • Derived metrics (KPI, KQI)
    • Mapping to QoE and UX
  • Quality of user experience (QUX) of interactive applications
    • QUX assessment
    • QUX modeling
    • QUX in the Factory of the Future (FoF), video games, WebRTC and similar applications
  • Multi-modal user interfaces for interactive applications
    • Multi-modal interactivity (mode-specific and cross-modal)
    • Multi-modal wearable interfaces
  • Design of subjective QoE experiments for interactive applications
    • Repeatability
    • Learning curve of test participants (proficiency for gaming/industrial use cases)
    • Challenges of in-situ studies
  • Methods for enabling crowdsourcing-based evaluations of interactive applications
    • Creation of a realistic mock-up of the application under test
    • Training of participants for the interactive application to be tested
  • QoE frameworks for interactive applications