Research

Human behavior is a complex interplay of three components: actions, cognitions, and emotions. Because these components do not operate independently of one another, it is hard to disentangle cause from effect. The overarching goal of this research is to address these key challenges by building interactive systems that discover cause-and-effect relationships in human behavior, powered by an unprecedented real-world dataset collected from users in their daily lives. In particular, this work analyzes multi-modal data collected with advanced wearable sensor technology.
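As one illustration of how cause-and-effect structure can be probed in such data, the sketch below runs a Granger-causality test on two time-aligned synthetic streams standing in for wearable-derived signals. This is a minimal example of one common technique under assumed data, not the specific analysis used in this work; the variable names and synthetic signals are hypothetical.

```python
# Minimal sketch: Granger-causality test between two aligned time series.
# Signals are hypothetical stand-ins for wearable-derived data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
activity = rng.standard_normal(n)  # e.g., accelerometer-derived activity level
emotion = np.empty(n)              # e.g., a physiological arousal proxy
emotion[0] = rng.standard_normal()
for t in range(1, n):
    # By construction, past activity influences current emotion.
    emotion[t] = 0.6 * activity[t - 1] + 0.3 * emotion[t - 1] \
        + 0.1 * rng.standard_normal()

# Column order matters: the test asks whether the 2nd column
# helps predict the 1st beyond the 1st column's own history.
data = np.column_stack([emotion, activity])
results = grangercausalitytests(data, maxlag=3)
for lag, (tests, _) in results.items():
    print(f"lag {lag}: F-test p-value = {tests['ssr_ftest'][1]:.4f}")
```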

Brain-computer interface (BCI) technologies have translated neural information into commands capable of controlling machine systems such as robot arms and drones. Can our minds connect with such AI systems easily in daily life by wearing low-cost devices? To answer this question, my research aims to develop hybrid interfaces that combine EEG-based classification with eye tracking, and to investigate their feasibility through a Fitts' law-based quantitative evaluation method.
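For context, Fitts' law models movement time as MT = a + b · log2(D/W + 1), where D is target distance and W is target width, and pointing performance is commonly summarized as throughput (index of difficulty divided by movement time). The sketch below computes these quantities from hypothetical trial data; the function name and numbers are illustrative, not results or code from this work.

```python
# Minimal sketch: Fitts' law index of difficulty and throughput.
# Trial data below is hypothetical, for illustration only.
import math

def fitts_throughput(distance: float, width: float, movement_time: float) -> float:
    """Return throughput (bits/s) using the Shannon formulation of Fitts' law."""
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return index_of_difficulty / movement_time

# (target distance px, target width px, movement time s) per trial
trials = [(400, 40, 1.10), (800, 40, 1.55), (400, 80, 0.85)]
for d, w, mt in trials:
    print(f"D={d}, W={w}: ID={math.log2(d / w + 1):.2f} bits, "
          f"TP={fitts_throughput(d, w, mt):.2f} bits/s")
```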

Artificial intelligence (AI) systems have achieved high predictive performance and can supply explanatory features to support their decisions, increasing algorithmic transparency and accountability in real-world environments. However, high predictive accuracy alone is insufficient; ultimately, AI must also solve the human-agent interaction problem. The hypothesis is that explanations that are succinct and easily interpretable enable users to develop an efficient mental model of the AI; in turn, that mental model enables them to place appropriate trust in the AI and to perform well when using it. The main goal of this research is to build human-interpretable machine learning systems and to evaluate their explanatory efficacy along with its effects on users' mental models.
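One way to operationalize "succinct, easily interpretable" explanations is to make the model itself sparse, so each prediction can be explained by a handful of weighted features. The sketch below fits an L1-regularized logistic regression on a standard dataset and reports the top-weighted features as a short explanation; it is a minimal illustration of the general idea under assumed settings, not the system built in this research.

```python
# Minimal sketch: a sparse (L1-regularized) model whose few nonzero
# weights serve as a succinct explanation. Illustrative only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names

model = make_pipeline(
    StandardScaler(),
    # L1 penalty drives most coefficients to exactly zero.
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_.ravel()
top = np.argsort(np.abs(coefs))[::-1][:5]  # five most influential features
print("Succinct explanation (top-weighted features):")
for i in top:
    print(f"  {feature_names[i]}: weight {coefs[i]:+.3f}")
```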