Yiren Lu
Ph.D. candidate
@ Case Western Reserve University
News
2025.08.31
Two papers, Segment then Splat and Noise Guided Splatting, are accepted to NeurIPS 2025.
2025.04.30
I will join Bosch Center for Artificial Intelligence (BCAI) as a Research Intern this summer.
2025.02.28
Glad to be chosen as the Kevin J. Kranzusch Fellow in Computer and Data Sciences.
2025.01.31
Our paper BARD-GS is accepted to CVPR 2025.
2024.08.31
Our paper Cracking the Code of Juxtaposition is accepted to NeurIPS 2024 (Oral).
2024.06.30
One paper on 3D scene editing is accepted to ACM Multimedia (MM) 2024. Check our project page for more details.
2024.04.30
Awarded the Outstanding Graduate Research Award in the Computer and Data Sciences Department of CWRU.
2024.02.29
Our paper iSLAM: Imperative SLAM is accepted to Robotics and Automation Letters (RA-L).
2023.08.31
Our PyPose v0.6 paper is accepted to an IROS 2023 workshop.
2022.07.31
One paper is accepted to an ECCV 2022 workshop.
About Me

I am Yiren Lu (陆弈人), a third-year CS Ph.D. candidate at the VULab of Case Western Reserve University, where I have been conducting research under the supervision of Prof. Yu Yin since 2024.

Prior to that, I received my M.S. in Computer Science and Engineering from University at Buffalo in 2024 under the supervision of Prof. Chen Wang, and my B.Eng. in Computer Science from ShanghaiTech University in 2021 under the supervision of Prof. Sören Schwertfeger.

Education
  • Case Western Reserve University
    Visual Understanding Lab (VULab)
    Ph.D. Student
    Jan. 2024 - present
  • University at Buffalo
    Spatial AI and Robotics Lab (SAIR Lab)
    M.S. in Computer Science
    Sep. 2022 - Jan. 2024
  • ShanghaiTech University
    Mobile Autonomous Robotics Systems Lab
    B.Eng. in Computer Science
    Sep. 2017 - Jun. 2021
Experience
  • Uber
    AV Labs
    Research Scientist Intern
May 2026 - Aug. 2026
  • Bosch Research North America
    Bosch Center for Artificial Intelligence
    Research Scientist Intern
    Jun. 2025 - Dec. 2025
  • Tencent
    Interactive Entertainment Group (IEG)
    Applied Scientist Intern
Jul. 2021 - Jun. 2022
Selected Publications (view all)
arXiv
Reconstruction Matters: Learning Geometry-Aligned BEV Representation through 3D Gaussian Splatting

Yiren Lu; Xin Ye; Burhaneddin Yaman; Jingru Luo; Zhexiao Xiong; Liu Ren; Yu Yin.

arXiv preprint 2026

We propose Splat2BEV, a Gaussian Splatting-assisted BEV framework that learns semantically rich and geometrically precise BEV feature representations.

arXiv
GSMem: 3D Gaussian Splatting as Persistent Spatial Memory for Zero-Shot Embodied Exploration and Reasoning

Yiren Lu; Yi Du; Disheng Liu; Yunlai Zhou; Chen Wang; Yu Yin.

arXiv preprint 2026

GSMem is a zero-shot embodied exploration and reasoning framework that utilizes 3DGS as persistent memory.

NeurIPS
Segment then Splat: Unified 3D Open-Vocabulary Segmentation via Gaussian Splatting

Yiren Lu; Yunlai Zhou; Yiran Qiao; Chaoda Song; Tuo Liang; Jing Ma; Yu Yin.

NeurIPS 2025

We propose Segment then Splat, which performs segmentation before reconstruction by dividing Gaussians into object sets upfront, eliminating semantic/geometric ambiguity and accelerating optimization.

NeurIPS
Fix False Transparency by Noise Guided Splatting

Aly El Hakie*; Yiren Lu*; Yu Yin; Michael W. Jenkins; Yehe Liu. (* equal contribution)

NeurIPS 2025

We propose Noise Guided Splatting to address false transparency artifacts in 3D Gaussian Splatting by injecting opaque noise Gaussians in object volumes during training.

CVPR
BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting

Yiren Lu; Yunlai Zhou; Disheng Liu; Tuo Liang; Yu Yin.

CVPR 2025

We introduce BARD-GS, a robust dynamic scene reconstruction method that explicitly decomposes motion blur into camera and object components to handle blurry inputs and imprecise camera poses.

ACM MM
View-consistent Object Removal in Radiance Fields

Yiren Lu; Jing Ma; Yu Yin.

ACM MM 2024

We introduce a novel radiance field editing pipeline that significantly enhances consistency by requiring inpainting of only a single reference image.

RA-L
iSLAM: Imperative SLAM

Taimeng Fu; Shaoshu Su; Yiren Lu; Chen Wang.

Robotics and Automation Letters (RA-L) 2024

We propose a novel self-supervised imperative learning framework, named imperative SLAM (iSLAM), which fosters reciprocal correction between the front-end and back-end and enhances performance without external supervision.

All research