I am a research assistant professor at the School of Computing, KAIST. Previously, I received my Ph.D. in Computer Science from KAIST (Korea) under the supervision of Prof. Sungho Jo. I completed my master's degree in Computer Science at Boston University, working with Prof. Margrit Betke and Prof. Stan Sclaroff.
My research interests include algorithmic transparency, interpretability in affective intelligence, computational emotional dynamics, cerebral asymmetry and the effects of emotion on brain structure for affective computing, brain-computer interfaces, and assistive and rehabilitative technology. I collaborate closely with Prof. Jo and his students (NMAIL at KAIST).
I occasionally review for the following journals.
IEEE Transactions on Affective Computing
IEEE Transactions on Cybernetics
IEEE Transactions on Computational Social Systems
IEEE Transactions on Multimedia
Here is my CV.
Ph.D. in School of Computing, 2018
M.A. in Computer Science, 2010
B.S. in Computer Engineering, 2008
Human behavior is a complex interplay of three components: actions, cognitions, and emotions. Although these components are connected, it is hard to determine what exactly is cause and what is effect, since they do not operate independently of one another. The overarching goal of this research is to address these key challenges and build interactive systems that discover cause-and-effect relationships in human behavior, powered by an unprecedented real-world dataset collected from users in their daily lives. In particular, this work has analyzed multi-modal data gathered with advanced wearable sensor technology.
Brain-computer interface (BCI) technologies have translated neural information into commands capable of controlling machine systems such as robot arms and drones. Can our minds connect with such AI systems easily in daily life by wearing low-cost devices? To answer this question, my research aims to develop hybrid interfaces that combine EEG-based classification with eye tracking, and to investigate their feasibility through a Fitts' law-based quantitative evaluation method.
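For readers unfamiliar with this metric, the sketch below shows how a Fitts' law throughput score can be computed from target distance, target width, and movement time. It uses the Shannon formulation of the index of difficulty, which is an assumption for illustration and not necessarily the exact variant used in this work.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' law index of difficulty, in bits (assumed variant)."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s: index of difficulty divided by movement time."""
    return index_of_difficulty(distance, width) / movement_time

# Example: a 2 cm wide target located 16 cm away, selected in 1.2 s
print(round(throughput(16.0, 2.0, 1.2), 2))  # ~2.64 bits/s
```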
Artificial intelligence (AI) systems have achieved high predictive performance while offering explanatory features to support their decisions, increasing algorithmic transparency and accountability in real-world environments. However, high predictive accuracy alone is insufficient; ultimately, AI should address the human-agent interaction problem. The hypothesis is that explanations that are succinct and easily interpretable should enable users to develop an efficient mental model of the system. In turn, that mental model should help users calibrate appropriate trust in the AI and perform well when using it. The main goal of this research is to build human-interpretable machine learning systems and to evaluate their explanatory efficacy along with its effects on users' mental models.
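As a rough illustration of what a succinct, feature-level explanation can look like, the sketch below fits a sparse linear model and reports only its top-weighted features to the user. The model, synthetic dataset, and choice of k are hypothetical and for illustration only; they are not the systems or evaluation protocols studied in this work.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Synthetic data with a few truly informative features (illustrative only)
X, y = make_regression(n_samples=200, n_features=20, n_informative=3, random_state=0)

# A sparse linear model: most coefficients shrink to zero, leaving a short explanation
model = Lasso(alpha=1.0).fit(X, y)

# Present only the k largest-magnitude coefficients as the succinct explanation
k = 3
top = np.argsort(np.abs(model.coef_))[::-1][:k]
for i in top:
    print(f"feature {i}: weight {model.coef_[i]:+.2f}")
```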