Robotics and Mechatronics

Points of contact: Dr. Jianshu Zhou, Boyuan Liang

We are developing cutting-edge robotics and mechatronics systems for robotic grasping, dexterous manipulation, human–robot interfaces, and intelligent skill transfer.

  1. Innovative Robotic Grippers and Manipulators
  2. Dexterous Hands for Fine In-hand Manipulation and Skill Transfer
  3. Human–Robot Interfaces and Collaboration

Robot Learning

Point of contact: Yixiao Wang

In robot learning research, our goal is to analyze and address critical challenges in developing generalist robot policies capable of performing diverse tasks across varied environments. Our current efforts focus on effective and scalable multimodal reasoning (vision, force, language), policy representation learning, continual learning, and their generalization.
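The snippet below is a minimal sketch of such a multimodal policy: vision, force, and language features are projected into a shared latent space and fused by a small MLP to predict an action. The module names, dimensions, and late-fusion scheme are illustrative assumptions, not a specific architecture from our work.

```python
# Minimal sketch of a multimodal (vision, force, language) policy head.
# All names, dimensions, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalPolicy(nn.Module):
    def __init__(self, img_dim=512, force_dim=6, lang_dim=768, hidden=256, action_dim=7):
        super().__init__()
        # Per-modality encoders project each input into a shared latent space.
        self.img_proj = nn.Linear(img_dim, hidden)
        self.force_proj = nn.Linear(force_dim, hidden)
        self.lang_proj = nn.Linear(lang_dim, hidden)
        # A simple late-fusion MLP maps the concatenated latents to an action.
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, img_feat, force, lang_emb):
        z = torch.cat(
            [self.img_proj(img_feat), self.force_proj(force), self.lang_proj(lang_emb)],
            dim=-1,
        )
        return self.head(z)

# Example: one batch of pre-computed features from upstream encoders.
policy = MultimodalPolicy()
action = policy(torch.randn(1, 512), torch.randn(1, 6), torch.randn(1, 768))
print(action.shape)  # torch.Size([1, 7])
```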

Embodied AI and Loco-manipulation

Point of contact: Yuxin Chen

Our research in Embodied AI and Loco-manipulation pushes the boundaries of both high-level reasoning and low-level planning/control for robots with diverse physical embodiments operating in open-vocabulary, real-world environments. We focus on the full life cycle of embodied AI systems, from data collection to deployment:

  1. Teleoperation and Data Curation – Developing automated pipelines to gather high-value, long-horizon, multi-modal demonstrations for training advanced AI agents.
  2. Generalist Pre-training & Specialist Post-training – Designing novel post-training alignment frameworks that adapt powerful pre-trained generalist robots into safe, efficient, and task-specialized agents.
  3. Loco-manipulation – Advancing model-based and learning-based control methods to enable complex, coordinated interactions between robots and their environments, across platforms including humanoids, quadrupeds, and robotic arms.

Through this integrated approach, we aim to create versatile embodied AI systems capable of perceiving, reasoning, and acting seamlessly in dynamic, unstructured settings.

Autonomous Driving

Points of contact: Chensheng Peng, Yutaka Shimizu

Our group aims to build autonomous driving systems that can safely and robustly operate in real-world interactive scenes involving humans. To this end, we study the full autonomous driving pipeline, from scene understanding and behavior modeling to decision-making and closed-loop evaluation in realistic simulations.

We initiated the INTERACTION Dataset, one of the early efforts focused on interaction-centric driving scenarios across multiple countries.

To ensure autonomous systems can perceive complex real-world scenes, we investigate multi-modal and temporal sensing methods that integrate information across sensors and time for robust 3D detection and tracking (e.g., SparseFusion, Time Will Tell, PNAS-MOT, DetMatch). We also study motion prediction and interaction reasoning among traffic participants (e.g., WOMD-Reasoning, PreTraM), as well as how autonomous agents plan and learn under uncertainty, combining reinforcement learning and optimization-based control to account for human interaction and safety constraints (e.g., Residual-MPPI, Confidence-aware Human Models, Iterative LQR/LQG).
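As a concrete example on the optimization-based control side, the sketch below implements the finite-horizon LQR backward (Riccati) pass that methods such as iterative LQR repeat around local linearizations of the dynamics. The double-integrator model and cost weights are illustrative assumptions, not taken from the cited works.

```python
# Finite-horizon LQR backward pass (Riccati recursion), shown on a toy
# double-integrator model. The dynamics and cost weights are illustrative.
import numpy as np

def lqr_backward_pass(A, B, Q, R, Qf, horizon):
    """Return time-varying feedback gains K_t for the policy u_t = -K_t x_t."""
    P = Qf
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reorder to t = 0 ... horizon-1

# 1D double integrator: state = [position, velocity], control = acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([1.0, 0.1])      # penalize position and velocity error
R = np.array([[0.01]])       # penalize control effort
Qf = np.diag([10.0, 1.0])    # terminal cost

gains = lqr_backward_pass(A, B, Q, R, Qf, horizon=50)

# Roll the closed-loop system forward from an initial offset.
x = np.array([[1.0], [0.0]])
for K in gains:
    u = -K @ x
    x = A @ x + B @ u
print(x.ravel())  # state driven toward the origin
```

In an iterative LQR loop, this backward pass alternates with a forward rollout of the nonlinear dynamics, relinearizing around the updated trajectory until convergence.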

For closed-loop evaluation of driving policies, we develop simulation capabilities that jointly model agent behavior and realistic sensing. Specifically, we study controllable behavior and traffic simulation to generate interactive and safety-critical scenarios (e.g., LANGTRAJ, SAFE-SIM, Editing Driver Character), together with neural simulation of environments and sensors, using generative models and 4D Gaussian splatting to support realistic multi-sensor data synthesis and digital twins (e.g., X-Drive, DeSiRe-GS).

To learn more about the research done at MSC in the past, you can download a research booklet here or check the “Previous Projects” below.