Fei Pan 潘 飛
feipan at kaist dot ac dot kr

I am a Ph.D. student at KAIST advised by Prof. In So Kweon. I have received the Qualcomm Innovation Fellowship and a Ph.D. scholarship from Robert Bosch GmbH. My research background is in computer vision and machine learning.

My research topics include self-supervised learning, unsupervised learning, domain adaptation, and 3D motion prediction. Previously, I completed my M.S. at KAIST and my B.S. at Xidian University.

CV  /  Google Scholar  /  LinkedIn  /  ResearchGate

  • Jul 2022 - Our paper on Open Compound Domain Adaptation was accepted to ECCV 2022.
  • Jun 2022 - We released our recent work combining active learning with domain adaptation for semantic segmentation.
  • Jul 2021 - One paper accepted to ICCV 2021. This work presents a joint framework for unsupervised learning of monocular depth and motion field estimation.
  • May 2021 - I started an internship at Robert Bosch GmbH, working on domain-adaptive multi-camera object detection.
MODA: Domain Adaptive Video Segmentation with Self-supervised Motion Understanding
Fei Pan, Sohee Kim, Seokju Lee, In So Kweon
under review, 2023

ML-BPM: Multi-teacher Learning with Bidirectional Photometric Mixing for Open Compound Domain Adaptation in Semantic Segmentation
Fei Pan, Sungsu Hur, Seokju Lee, Junsik Kim, In So Kweon
European Conference on Computer Vision (ECCV), 2022
poster / arXiv

We design an automatic domain-separation scheme to cluster the compound target domain, and deploy a multi-teacher framework to adapt to each target subdomain separately.

Labeling Where Adapting Fails: Cross-Domain Semantic Segmentation with Point Supervision via Active Selection
Fei Pan, Francois Rameau, Junsik Kim, In So Kweon
Preprint, 2022

To combine domain adaptation with active learning, we design a new adaptation framework for segmentation that uses point annotations obtained via active selection.

Attentive and Contrastive Learning for Joint Depth and Motion Field Estimation
Seokju Lee, Francois Rameau, Fei Pan, In So Kweon
International Conference on Computer Vision (ICCV), 2021
arXiv / paper / supplementary

We design a new two-stage projection pipeline that explicitly disentangles camera ego-motion from object motions using the proposed dynamics attention module.

Two-phase Pseudo Label Densification for Self-training based Domain Adaptation
Inkyu Shin, Sanghyun Woo, Fei Pan, In So Kweon
European Conference on Computer Vision (ECCV), 2020
arXiv / paper

We propose a two-phase pseudo-label densification network with sliding-window voting to propagate confident predictions.

Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision
Fei Pan, Inkyu Shin, Francois Rameau, Seokju Lee, In So Kweon
The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2020
(Oral Presentation)   Qualcomm Innovation Fellowship
arXiv / paper / project page / code / presentation / demo

In contrast to previous methods, which consider only inter-domain alignment, we propose a self-supervised domain adaptation method that performs inter-domain and intra-domain alignment jointly.

Variational Prototyping-Encoder: One-Shot Learning with Prototypical Images
Junsik Kim, Tae-Hyun Oh, Seokju Lee, Fei Pan, In So Kweon
The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2019
arXiv / paper

We propose a variational prototyping-encoder that learns image similarity as well as prototypical concepts, in contrast to widely used metric-learning-based approaches.

Driver Drowsiness Detection System Based on Feature Representation Learning Using Various Deep Networks
Sanghyuk Park, Fei Pan, Sunghun Kang, Chang D. Yoo
Asian Conference on Computer Vision (ACCV) Workshops, 2016

This paper proposes a deep drowsiness-detection network that learns effective features and detects drowsiness from an RGB video of a driver.

Review Experience
  • Conference: CVPR, ICCV, WACV
  • Journal: Neurocomputing, Pattern Recognition Letters

Thanks to this source for the website template.