
Myungin Lee

Signal Processing & Machine Learning
for XR and HCI
A researcher designing multi-modal XR experiences
based on HCI, scientific theory, composition,
signal processing & machine learning.



Bio

Myungin is a researcher who designs multimodal XR experiences based on HCI, signal processing, and machine learning. He holds a Ph.D. in Media Arts and Technology from the University of California, Santa Barbara, and an M.S. and B.S. in Electronics and Computer Engineering from Hanyang University, Seoul, Korea. He was a Ph.D. research intern at the Experiments in Art & Technology (E.A.T.) center at Nokia Bell Labs, where he developed a spatial-acoustic parameter estimation algorithm using machine learning. During his Ph.D., Myungin was affiliated with the AlloSphere, designing large-scale interactive 3D immersive experiences; he later joined the Immersive Media Design faculty at the University of Maryland, College Park. His research has been featured at venues including Ars Electronica, Getty's PST ART: Art & Science Collide, NeurIPS, IEEE VR, IEEE VIS, CHI, New Interfaces for Musical Expression (NIME), the International Computer Music Conference (ICMC), and ACM SIGGRAPH. Myungin holds a patent on a machine learning-based room acoustics estimation algorithm. At UMD, his current research includes brain-computer interaction (BCI), generative AI, environmental ocean data science, and scientific quantum simulation in XR.

News


(March 2026)

“Quantum Intuition XR (QIXR): Tangible Quantum Mechanics using Interactive XR Experience” has been accepted for presentation at IEEE VR 2026 and selected for publication in a special issue of IEEE Transactions on Visualization and Computer Graphics (IEEE TVCG)!
