Data Analysis and Visualization

With the rise of deep learning, the data visualization community has increasingly focused on opening the black boxes of deep learning models. Along these lines, we develop interactive visualization systems built on deep learning models. Our goal is to bring humans into the loop so that they can understand how the models work internally and improve the models' overall performance.

Interactive visual user interfaces for automatic sketch colorization
User interaction allows users to generate or modify colorized images as intended while leveraging automatic colorization techniques.
DAVIAN Lab. presented an exemplar-based sketch colorization method that effectively reflects user-provided color reference images through dense spatial correspondence.
We also study methods that enhance edges through interactive user edge hints and that correct color-bleeding artifacts in the generated images.
In addition, by developing user interfaces, we promote a human-AI collaboration environment in which users can easily participate in the colorization process.
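A common way to feed such user interactions into a colorization network is to pack the sketch, sparse color hints, and a hint mask into one conditioning tensor. The sketch below illustrates that encoding scheme in general; `encode_hints` and its channel layout are assumptions for illustration, not DAVIAN Lab.'s actual model.

```python
import numpy as np

def encode_hints(sketch, hints):
    """Pack a grayscale sketch and sparse user color hints into one
    conditioning tensor (illustrative; the channel layout is assumed).

    sketch: (H, W) float array in [0, 1]
    hints:  list of (row, col, (r, g, b)) user-clicked color points
    returns: (H, W, 5) array -- sketch channel, RGB hint map, and a
             binary mask telling the network where hints are present
    """
    h, w = sketch.shape
    hint_map = np.zeros((h, w, 3), dtype=np.float32)
    mask = np.zeros((h, w, 1), dtype=np.float32)
    for r, c, rgb in hints:
        hint_map[r, c] = rgb   # place the user's color at the click
        mask[r, c] = 1.0       # mark this pixel as a hint location
    return np.concatenate([sketch[..., None], hint_map, mask], axis=-1)
```

A colorization network conditioned this way can distinguish "no hint here" from "the user wants black here", which a hint map alone cannot express.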
Data augmentation
Image classification models perform poorly when the data are biased (e.g., when the training and test datasets diverge). This issue can be addressed with data augmentation via generative adversarial networks (GANs), thanks to their ability to generate photo-realistic images. However, automatically identifying data biases and generating images that resolve them is not a simple task. To mitigate this, we bring humans into the loop to identify such biases through a visual analytics system and then perform data augmentation via GANs to resolve them.
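One simple form of such bias is class imbalance, which GAN-based augmentation can correct by synthesizing samples for under-represented classes. The sketch below shows that balancing step; `augment_to_balance` and the `generator` callable are hypothetical stand-ins, where a real pipeline would plug in a trained conditional GAN.

```python
import numpy as np

def augment_to_balance(images, labels, generator, rng):
    """Even out a class-imbalanced dataset by appending synthetic
    samples for under-represented classes (illustrative sketch; the
    `generator(cls, rng)` callable stands in for a trained GAN)."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()  # bring every class up to the largest count
    new_images, new_labels = list(images), list(labels)
    for cls, count in zip(classes, counts):
        for _ in range(target - count):
            new_images.append(generator(cls, rng))  # synthetic sample
            new_labels.append(cls)
    return new_images, new_labels
```

In the lab's setting the human analyst, not a counting heuristic, decides which bias to target; this sketch only shows the augmentation half of that loop.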
Model interpretation
Model interpretation is an area that aims to open the black box of deep neural networks (DNNs). As DNNs grow deeper and deeper, developers find it hard to understand how a model works. Through model interpretation, users can understand the inner workings of a network and diagnose the problems it suffers from (e.g., bias toward a particular feature or poorly chosen hyperparameters). DAVIAN Lab. has been studying visual analytics models in both computer vision (CV) and natural language processing (NLP). The image is a figure from SANVis (Cheonbok Park et al., IEEE VIS '19 short paper), which visualizes the self-attention networks used in NLP.
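The quantity such tools visualize is the per-head attention matrix of a self-attention layer. A minimal sketch of computing it, assuming a single head with given query/key projections (this is standard scaled dot-product attention, not SANVis's own code):

```python
import numpy as np

def self_attention_weights(x, w_q, w_k):
    """Compute the (seq_len, seq_len) attention matrix of one
    self-attention head -- the matrix tools like SANVis render
    as a heatmap or bipartite graph over tokens.

    x: (seq_len, d_model) token representations
    w_q, w_k: (d_model, d_k) learned query/key projections
    """
    q, k = x @ w_q, x @ w_k
    scores = q @ k.T / np.sqrt(k.shape[-1])        # scaled dot-product
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)  # row softmax
```

Each row is a probability distribution over the input tokens, so inspecting rows reveals which tokens a position attends to.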