Yu-Lun Liu

I am a fourth-year PhD student working with Yung-Yu Chuang in the CSIE department at National Taiwan University. I work on problems in computer vision, machine learning, and multimedia.

I am also a senior algorithm development engineer at MediaTek Inc., where I work on computational photography, computer vision, and machine learning.

I did my undergrad and master's at National Chiao Tung University.

Email  /  CV  /  Google Scholar  /  Facebook  /  Instagram  /  GitHub  /  YouTube

Research


Learning to See Through Obstructions with Layered Decomposition
Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang
TPAMI, 2021  
project page / arXiv / code / demo / video

We present a learning-based approach for removing unwanted obstructions, such as window reflections, fence occlusions, or adherent raindrops, from a short sequence of images captured by a moving camera.




Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision
Ning-Hsu Wang, Ren Wang, Yu-Lun Liu, Yu-Hao Huang, Yu-Lin Chang, Chia-Ping Chen, Kevin Jou
ICCV, 2021  
project page / arXiv / code

In this paper, we propose a method to estimate not only a depth map but also an all-in-focus (AiF) image from a set of images with different focus positions (known as a focal stack).



Hybrid Neural Fusion for Full-frame Video Stabilization
Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang
ICCV, 2021  
project page / arXiv / code / demo / video / 2-minute video

In this work, we present a frame synthesis algorithm to achieve full-frame video stabilization.



Explorable Tone Mapping Operators
Chien-Chuan Su, Ren Wang, Hung-Jin Lin, Yu-Lun Liu, Chia-Ping Chen, Yu-Lin Chang, Soo-Chang Pei
ICPR, 2020  
arXiv

In this paper, we propose a learning-based multimodal tone-mapping method that not only achieves excellent visual quality but also explores style diversity.


Learning Camera-Aware Noise Models
Ke-Chi Chang, Ren Wang, Hung-Jin Lin, Yu-Lun Liu, Chia-Ping Chen, Yu-Lin Chang, Hwann-Tzong Chen
ECCV, 2020  
project page / arXiv / code

We propose a data-driven approach, where a generative noise model is learned from real-world noise.


Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline
Yu-Lun Liu*, Wei-Sheng Lai*, Yu-Sheng Chen, Yi-Lung Kao, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang
CVPR, 2020  
project page / arXiv / poster / slides / code / demo / 1-minute video

In contrast to existing learning-based methods, our core idea is to incorporate the domain knowledge of the LDR image formation pipeline into our model.



Learning to See Through Obstructions
Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang
CVPR, 2020  
project page / arXiv / poster / slides / code / demo / 1-minute video / video / New Scientist

We present a learning-based approach for removing unwanted obstructions, such as window reflections, fence occlusions or raindrops, from a short sequence of images captured by a moving camera.


Attention-based View Selection Networks for Light-field Disparity Estimation
Yu-Ju Tsai, Yu-Lun Liu, Yung-Yu Chuang, Ming Ouhyoung
AAAI, 2020  
paper / code / benchmark

To utilize the views more effectively and reduce redundancy among them, we propose a view selection module that generates an attention map indicating the importance of each view and its potential contribution to accurate depth estimation.


Deep Video Frame Interpolation using Cyclic Frame Generation
Yu-Lun Liu, Yi-Tung Liao, Yen-Yu Lin, Yung-Yu Chuang
AAAI, 2019   (Oral Presentation)
project page / paper / poster / slides / code / video

The cycle consistency loss better utilizes the training data, not only enhancing the interpolation results but also maintaining performance with less training data.

Background modeling using depth information
Yu-Lun Liu, Hsueh-Ming Hang
APSIPA, 2014  
paper

This paper focuses on creating a global background model of a video sequence using depth maps together with RGB images.

Virtual view synthesis using backward depth warping algorithm
Du-Hsiu Li, Hsueh-Ming Hang, Yu-Lun Liu
PCS, 2013  
paper

In this study, we propose a backward warping process to replace forward warping, significantly reducing artifacts, particularly those produced by quantization.

Service
Emergency reviewer, ECCV 2020

Reviewer, ACCV 2020

Reviewer, AAAI 2021

Reviewer, IJCAI 2021

Reviewer, ICCV 2021

Reviewer, ICLR 2022

Reviewer, IJCAI 2022

Reviewer, Applied Soft Computing

Stolen from Jon Barron's website.
Last updated August 2021.