Junhwa Hur*, Charles Herrmann*, Saurabh Saxena, Janne Kontkanen, Wei-Sheng (Jason) Lai, Yichang Shih, Michael Rubinstein, David J. Fleet, Deqing Sun
*Equal contribution
AAAI, 2025
paper / arxiv / project
Saurabh Saxena, Charles Herrmann, Junhwa Hur, Abhishek Kar, Mohammad Norouzi, Deqing Sun, and David J. Fleet
NeurIPS, 2023 (Oral presentation)
paper / arxiv / project
Bayram Bayramli, Junhwa Hur, and Hongtao Lu
IJCV, 2023
paper / arxiv
For self-supervised monocular scene flow, our RAFT-based approach significantly improves scene flow accuracy and even outperforms a semi-supervised method.
Jung Hee Kim*, Junhwa Hur*, Tien Phuoc Nguyen, and Seong-Gyun Jeong
*Equal contribution
NeurIPS, 2022
paper / code
Our voxel-based approach to surround-view depth estimation improves metric-scale depth accuracy and can synthesize depth maps at arbitrarily rotated views.
Junhwa Hur
Ph.D. Dissertation, Technische Universität Darmstadt, 2022
paper
In this dissertation, we show how to jointly formulate multiple scene understanding tasks and what benefits such joint estimation provides.
An iterative residual refinement scheme based on weight sharing reduces the number of network parameters and improves the accuracy of optical flow and occlusion estimation.
Simon Meister, Junhwa Hur and Stefan Roth
AAAI, 2018 (Oral presentation)
paper / arxiv / code / slide
By directly training on the target domain with an improved unsupervised loss, our method outperforms a supervised method that is pre-trained on a synthetic dataset.
Junhwa Hur and Stefan Roth
ECCV Workshop on Computer Vision for Road Scene Understanding and Autonomous Driving (ECCVW), 2016 (Best paper award)
paper / arxiv / poster
We propose a method for the joint estimation of optical flow and temporally consistent semantic segmentation, which closely connects the two problem domains and allows each task to leverage the other.