Zihui (Sherry) Xue
Hi, I am Zihui Xue (薛子慧), a Ph.D. student at UT Austin, advised by Prof. Kristen Grauman. I am also a visiting researcher at FAIR, Meta AI.
Previously, I was fortunate to work with Prof. Radu Marculescu on efficient deep learning and Prof. Hang Zhao on multimodal learning. I obtained my bachelor's degree from Fudan University in 2020.
My research interests lie in egocentric video understanding and multimodal learning.
Email | CV | Google Scholar | Github
- [Sep. 2023] AE2 got accepted by NeurIPS'23. See you in New Orleans 🦪.
- [Feb. 2023] EgoT2 got accepted by CVPR'23 as Highlight. See you in Vancouver.
- [Jan. 2023] MFH got accepted by ICLR'23 (top-5%).
- [Aug. 2022] Spent a wonderful summer interning at FAIR, Meta AI, working with Lorenzo Torresani 😊
- [Sep. 2021] One paper got accepted by NeurIPS'21.
- [Sep. 2021] One paper got accepted by CoRL'21.
- [Jul. 2021] Two papers got accepted by ICCV'21.
- [Aug. 2020] Started working with Prof. Hang Zhao at Shanghai Qi Zhi Institute, Tsinghua University on multimodal learning 😊
Egocentric Video Understanding
Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Alignment
Zihui Xue,
Kristen Grauman
To appear in NeurIPS 2023
[paper]
[webpage]
Fine-grained, view-invariant ego-exo features for temporally aligning two videos captured from diverse viewpoints
Egocentric Video Task Translation
Zihui Xue,
Yale Song,
Kristen Grauman,
Lorenzo Torresani
CVPR, 2023 (Highlight, top-2.5%)
[paper]
[webpage]
Holistic egocentric perception for a set of diverse video tasks
Multimodal Learning and Self-supervised Learning
The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation
Zihui Xue*,
Zhengqi Gao*,
Sucheng Ren*,
Hang Zhao
ICLR, 2023 (top-5%)
[paper]
[webpage]
When is crossmodal knowledge distillation helpful?
Dynamic Multimodal Fusion
Zihui Xue,
Radu Marculescu
CVPR MULA workshop, 2023
[paper]
Adaptively fuse multimodal data and generate data-dependent forward paths at inference time.
What Makes Multi-Modal Learning Better than Single (Provably)
Yu Huang,
Chenzhuang Du,
Zihui Xue,
Xuanyao Chen,
Hang Zhao,
Longbo Huang
NeurIPS, 2021
[paper]
Can multimodal learning provably perform better than unimodal learning?
Multimodal Knowledge Expansion
Zihui Xue,
Sucheng Ren,
Zhengqi Gao,
Hang Zhao
ICCV, 2021
[paper]
[webpage]
A knowledge distillation-based framework to effectively utilize multimodal data without requiring labels.
On Feature Decorrelation in Self-Supervised Learning
Tianyu Hua,
Wenxiao Wang,
Zihui Xue,
Sucheng Ren,
Yue Wang,
Hang Zhao
ICCV, 2021 (Oral, Acceptance Rate 3.0%)
[paper]
[webpage]
Reveals the connection between model collapse and feature correlations!
SUGAR: Efficient Subgraph-level Training via Resource-aware Graph Partitioning
Zihui Xue,
Yuedong Yang,
Mengtian Yang,
Radu Marculescu
IEEE Transactions on Computers, 2023
[paper]
An efficient GNN training framework that accounts for resource constraints.
Anytime Depth Estimation with Limited Sensing and Computation Capabilities on Mobile Devices
Yuedong Yang,
Zihui Xue,
Radu Marculescu
CoRL, 2021
[paper]
Anytime depth estimation with energy-saving 2D LiDARs and monocular cameras.