Junhwa Hur

Hello! I am a research scientist at Google DeepMind in Cambridge, MA, USA. I received my Ph.D. from Technische Universität Darmstadt, where I was advised by Prof. Stefan Roth in the Visual Inference group. I received my M.Sc. from Seoul National University and my B.Sc. from POSTECH.

Email  /  CV  /  Google Scholar  /  Github  /  LinkedIn


High-Resolution Frame Interpolation with Patch-based Cascaded Diffusion


Junhwa Hur*, Charles Herrmann*, Saurabh Saxena, Janne Kontkanen, Wei-Sheng (Jason) Lai, Yichang Shih, Michael Rubinstein, David J. Fleet, Deqing Sun
*Equal contribution
AAAI, 2025
paper / arxiv / project

Lumiere: A Space-Time Diffusion Model for Video Generation


Omer Bar-Tal, Hila Chefer, Omer Tov, Charles Herrmann, Roni Paiss, Shiran Zada, Ariel Ephrat, Junhwa Hur, Guanghui Liu, Amit Raj, Yuanzhen Li, Michael Rubinstein, Tomer Michaeli, Oliver Wang, Deqing Sun, Tali Dekel, Inbar Mosseri
SIGGRAPH Asia, 2024
paper / arxiv / project

Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence


Junyi Zhang, Charles Herrmann, Junhwa Hur, Eric Chen, Varun Jampani, Deqing Sun, and Ming-Hsuan Yang
NeurIPS, 2024
paper / arxiv / project

WonderJourney: Going from Anywhere to Everywhere


Hong-Xing Yu, Haoyi Duan, Junhwa Hur, Kyle Sargent, Michael Rubinstein, William T. Freeman, Forrester Cole, Deqing Sun, Noah Snavely, Jiajun Wu, Charles Herrmann
CVPR, 2024
paper / arxiv / project

Boundary Attention: Learning Curves, Corners, Junctions and Grouping


Mia Gaia Polansky, Charles Herrmann, Junhwa Hur, Deqing Sun, Dor Verbin, Todd Zickler
ECCVW, 2024
paper / arxiv / project

Zero-Shot Metric Depth with a Field-of-View Conditioned Diffusion Model


Saurabh Saxena, Junhwa Hur, Charles Herrmann, Deqing Sun, and David J. Fleet
ECCVW, 2024
paper / arxiv / project

The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation


Saurabh Saxena, Charles Herrmann, Junhwa Hur, Abhishek Kar, Mohammad Norouzi, Deqing Sun, and David J. Fleet
NeurIPS, 2023  Oral presentation
paper / arxiv / project

A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence


Junyi Zhang, Charles Herrmann, Junhwa Hur, Luisa Polania Cabrera, Varun Jampani, Deqing Sun, and Ming-Hsuan Yang
NeurIPS, 2023
paper / supp / arxiv / project / code

Self-supervised AutoFlow


Hsin-Ping Huang, Charles Herrmann, Junhwa Hur, Erika Lu, Kyle Sargent, Austin Stone, Ming-Hsuan Yang, and Deqing Sun
CVPR, 2023
paper / supp / arxiv / code

Self-supervised AutoFlow learns to generate an optical flow training set through self-supervision on the target domain.

RAFT-MSF: Self-Supervised Monocular Scene Flow Using Recurrent Optimizer


Bayram Bayramli, Junhwa Hur, and Hongtao Lu
IJCV, 2023
paper / arxiv

Our RAFT-based approach to self-supervised monocular scene flow significantly improves accuracy and even outperforms a semi-supervised method.

Self-Supervised Surround-View Depth Estimation with Volumetric Feature Fusion


Jung Hee Kim*, Junhwa Hur*, Tien Phuoc Nguyen, and Seong-Gyun Jeong
*Equal contribution
NeurIPS, 2022
paper / code

Our voxel-based approach to surround-view depth estimation improves metric-scale depth accuracy and can synthesize depth maps at arbitrarily rotated views.

Joint Motion, Semantic Segmentation, Occlusion, and Depth Estimation


Junhwa Hur
Ph.D. Dissertation, Technische Universität Darmstadt, 2022
paper

In this dissertation, we show how to jointly formulate multiple scene-understanding tasks and what benefits can be obtained from their joint estimation.

MasKGrasp: Mask-based Grasping for Scenes with Multiple General Real-world Objects


Junho Lee, Junhwa Hur, Inwoo Hwang, and Young Min Kim
IROS, 2022
paper / video

We introduce a mask-based grasping method that distinguishes multiple transparent and opaque objects and finds the optimal grasp position while avoiding clutter.

Self-Supervised Multi-Frame Monocular Scene Flow


Junhwa Hur and Stefan Roth
CVPR, 2021
paper / supp / arxiv / code / talk

In the multi-frame setup, a ConvLSTM with warped hidden states improves both the accuracy and the temporal consistency of monocular scene flow.

Self-Supervised Monocular Scene Flow Estimation


Junhwa Hur and Stefan Roth
CVPR, 2020  Oral presentation
paper / supp / arxiv / code / talk

We propose to estimate scene flow from only two monocular images with a CNN trained in a self-supervised manner.

Optical Flow Estimation in the Deep Learning Age


Junhwa Hur and Stefan Roth
Modelling Human Motion, N. Noceti, A. Sciutti and F. Rea, Eds., Springer, 2020
paper / arxiv

This book chapter comprehensively reviews CNN-based approaches to optical flow and their technical details, including un- and semi-supervised methods.

Iterative Residual Refinement for Joint Optical Flow and Occlusion Estimation


Junhwa Hur and Stefan Roth
CVPR, 2019
paper / supp / arxiv / code

An iterative residual refinement scheme based on weight sharing reduces the number of network parameters and improves the accuracy of optical flow and occlusion.

UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss


Simon Meister, Junhwa Hur and Stefan Roth
AAAI, 2018  Oral presentation
paper / arxiv / code / slide

By directly training on the target domain with an improved unsupervised loss, our method outperforms a supervised method that is pre-trained on a synthetic dataset.

MirrorFlow: Exploiting Symmetries in Joint Optical Flow and Occlusion Estimation


Junhwa Hur and Stefan Roth
ICCV, 2017
paper / supp / arxiv / code / poster

The chicken-and-egg relationship between optical flow and occlusion can be elegantly formulated by exploiting their symmetry properties.

Joint Optical Flow and Temporally Consistent Semantic Segmentation


Junhwa Hur and Stefan Roth
ECCV Workshop on Computer Vision for Road Scene Understanding and Autonomous Driving (ECCVW), 2016  Best paper award
paper / arxiv / poster

We propose a method for the joint estimation of optical flow and temporally consistent semantic segmentation, which closely connects the two problem domains and allows each task to leverage the other.

Generalized Deformable Spatial Pyramid: Geometry-Preserving Dense Correspondence Estimation


Junhwa Hur, Hwasup Lim, Changsoo Park, and Sang Chul Ahn
CVPR, 2015
paper / supp / project / video

A piecewise similarity transform across pyramid levels can approximate non-rigid deformations in the semantic matching problem.

3D Deformable Spatial Pyramid for Dense 3D Motion Flow of Deformable Object


Junhwa Hur, Hwasup Lim, and Sang Chul Ahn
International Symposium on Visual Computing (ISVC), 2014  Oral presentation
paper / project

Multi-lane Detection based on Accurate Geometric Lane Estimation in Highway Scenarios


Seung-Nam Kang, Soo-Mok Lee, Junhwa Hur, and Seung-Woo Seo
Intelligent Vehicles Symposium (IV), 2014
paper / video

Multi-lane Detection in Urban Driving Environments using Conditional Random Fields


Junhwa Hur, Seung-Nam Kang, and Seung-Woo Seo
Intelligent Vehicles Symposium (IV), 2013
paper / project / code / video

Multi-lane Detection in Highway and Urban Driving Environment


Junhwa Hur
Master’s thesis, Seoul National University, 2013
paper / project

Academic Service

• Area chair: ECCV 2024, CVPR 2025, ICCV 2025, NeurIPS 2025
• Outstanding reviewer awards: CVPR (2018, 2019, 2020, 2022, 2024), NeurIPS (2023), ICCV (2021), ECCV (2020), ACCV (2020)





Design and source code from Jon Barron's and Leonid Keselman's websites