Visual Intelligence Lab
at the AI Graduate School, Yonsei University
"What I cannot create, I do not understand."
- Richard Feynman
"What an AI cannot generate, the AI does not understand."
- Visual Intelligence Lab
Intelligence models the world:
It understands visual observations.
It imagines possible states of the world.
An artificial intelligence should be able to generate images, just as humans imagine scenes.
We aim to teach visual intelligence to machines.
Generative models synthesize images with designated properties.
The designated properties can be specified in many ways, e.g., by dataset, semantics, exemplar, viewpoint, and so on.
Our goal is to give users control over such properties in the resulting images. Possible solutions include, but are not limited to, designing specialized networks or training procedures.
Neural networks encode visual observations into latent representations that convey semantics before producing final outputs.
Learning generally useful representations of arbitrary images has long been a holy grail of visual intelligence. While recent methods introduce unsupervised learning approaches for some tasks, there remains ample room to develop more generalizable and effective techniques.
[2023. 09.] Two papers accepted to NeurIPS 2023.
[2023. 09.] Jaeseok is working as a research intern at NAVER AI LAB.
[2023. 07.] One paper accepted to ICCV 2023.
[2023. 07.] Dongkyun is working as an intern at LG Electronics (~2023. 08.).
[2023. 05.] Mingi is working as a research intern at Adobe Research.
[2023. 02.] Mingi and Jaeseok received the Samsung Humantech Paper Award (Gold Prize) for Asyrp.
[2023. 01.] One paper accepted to ICLR 2023 (notable-top-25%).
[2022. 12.] Minjung is working as a research intern at NAVER AI LAB (~2023. 06.).
[2022. 07.] One paper accepted to ECCV 2022.