Research
Robotics and Mechatronics
Points of contact: Dr. Jianshu Zhou, Boyuan Liang
We are developing cutting-edge robotics and mechatronics systems for robotic grasping, dexterous manipulation, human–robot interfaces, and intelligent skill transfer.
- Innovative Robotic Grippers and Manipulators
- Programmable Locking Cells (PLC) for Modular Robots with High Stiffness Tunability and Morphological Adaptability
- Everything-Grasping (EG) Gripper - A Universal Gripper with Synergistic Suction–Grasping Capabilities for Cross-Scale and Cross-State Manipulation
- Prismatic-Bending Transformable (PBT) Joint for a Modular, Foldable Manipulator with Enhanced Reachability and Dexterity
- Dexterous Hands for Fine In-hand Manipulation and Skill Transfer
- Human–Robot Interfaces and Collaboration
Robot Learning
Point of contact: Yixiao Wang
In robot learning research, our goal is to analyze and address critical challenges in developing generalist robot policies capable of diverse tasks across varied environments. Our current efforts focus on effective and scalable multimodal reasoning (across vision, force, and language), policy representation learning, continual learning, and generalization.
Embodied AI and Loco-manipulation
Point of contact: Yuxin Chen
Our research in Embodied AI and Loco-manipulation pushes the boundaries of both high-level reasoning and low-level planning/control for robots with diverse physical embodiments operating in open-vocabulary, real-world environments. We focus on the full life cycle of embodied AI systems, from data collection to deployment:
- Teleoperation and Data Curation – Developing automated pipelines to gather high-value, long-horizon, multi-modal demonstrations for training advanced AI agents.
- Generalist Pre-training & Specialist Post-training – Designing novel post-training alignment frameworks that adapt powerful pre-trained generalist robots into safe, efficient, and task-specialized agents.
- Loco-manipulation – Advancing model-based and learning-based control methods to enable complex, coordinated interactions between robots and their environments, across platforms including humanoids, quadrupeds, and robotic arms.
Through this integrated approach, we aim to create versatile embodied AI systems capable of perceiving, reasoning, and acting seamlessly in dynamic, unstructured settings.
- Versatile Loco-Manipulation through Flexible Interlimb Coordination
- SAGA: Open-World Mobile Manipulation via Structured Affordance Grounding
Autonomous Driving
Points of contact: Chensheng Peng, Yutaka Shimizu
Our group aims to build autonomous driving systems that can safely and robustly operate in real-world interactive scenes involving humans. To this end, we study the full autonomous driving pipeline, from scene understanding and behavior modeling to decision-making and closed-loop evaluation in realistic simulations.
We initiated the INTERACTION Dataset, one of the early efforts focused on interaction-centric driving scenarios across multiple countries.
To ensure autonomous systems can perceive complex real-world scenes, we investigate multi-modal and temporal sensing methods that integrate information across sensors and time for robust 3D detection and tracking (e.g., SparseFusion, Time Will Tell, PNAS-MOT, DetMatch). We also study motion prediction and interaction reasoning among traffic participants (e.g., WOMD-Reasoning, PreTraM), as well as how autonomous agents plan and learn under uncertainty, combining reinforcement learning and optimization-based control to account for human interaction and safety constraints (e.g., Residual-MPPI, Confidence-aware Human Models, Iterative LQR/LQG).
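As a concrete illustration of the optimization-based control side of this pipeline, the sketch below implements one step of vanilla MPPI (the sampling-based planner that Residual-MPPI customizes) on a toy double-integrator. The dynamics, cost, and hyperparameters are illustrative assumptions, not the published method or its settings.

```python
# A minimal sketch of vanilla MPPI (Model Predictive Path Integral) control.
# The double-integrator dynamics, cost weights, and all hyperparameters below
# are illustrative assumptions, not the group's Residual-MPPI.
import numpy as np

def mppi_step(x0, U, dynamics, cost, K=256, lam=1.0, sigma=0.5, rng=None):
    """One MPPI update: sample perturbed rollouts, reweight, return new plan."""
    rng = rng or np.random.default_rng(0)
    T, m = U.shape
    eps = rng.normal(scale=sigma, size=(K, T, m))   # control perturbations
    S = np.zeros(K)                                 # rollout costs
    for k in range(K):
        x = x0
        for t in range(T):
            u = U[t] + eps[k, t]
            x = dynamics(x, u)
            S[k] += cost(x, u)
    w = np.exp(-(S - S.min()) / lam)                # softmin weights
    w /= w.sum()
    return U + np.einsum("k,ktm->tm", w, eps)       # weighted perturbation update

# Toy double-integrator example: drive position and velocity to the origin.
dt = 0.1
dynamics = lambda x, u: np.array([x[0] + dt * x[1], x[1] + dt * u[0]])
cost = lambda x, u: x[0]**2 + 0.1 * x[1]**2 + 0.01 * u[0]**2

U = np.zeros((20, 1))                               # nominal control sequence
x = np.array([2.0, 0.0])
for _ in range(50):                                 # receding-horizon loop
    U = mppi_step(x, U, dynamics, cost)
    x = dynamics(x, U[0])                           # apply first control only
    U = np.roll(U, -1, axis=0); U[-1] = 0.0         # warm-start next step
print("final state:", x)
```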
For closed-loop evaluation of driving policies, we develop simulation capabilities that jointly model agent behavior and realistic sensing. Specifically, we study controllable behavior and traffic simulation to generate interactive and safety-critical scenarios (e.g., LANGTRAJ, SAFE-SIM, Editing Driver Character), together with neural simulation of environments and sensors, using generative models and 4D Gaussian splatting to support realistic multi-sensor data synthesis and digital twins (e.g., X-Drive, DeSiRe-GS).
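To make the rendering side concrete, here is a minimal sketch of the front-to-back alpha compositing that Gaussian-splatting renderers (including 4D variants) use to form a pixel from depth-sorted splats. The projection, 2D covariance evaluation, and temporal component are omitted, and the per-splat colors and opacities are illustrative assumptions.

```python
# A minimal sketch of front-to-back alpha compositing in Gaussian splatting:
# C = sum_i c_i * a_i * prod_{j<i} (1 - a_j). Projection and covariance math
# are omitted; the splat colors and opacities below are illustrative.
import numpy as np

def composite(colors, alphas):
    """Blend splats sorted front-to-back along a ray into one pixel color."""
    pixel = np.zeros(3)
    transmittance = 1.0                # fraction of light not yet absorbed
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:       # early termination, as in practice
            break
    return pixel

# Three splats along a ray, nearest first: red, green, blue.
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
alphas = np.array([0.6, 0.5, 0.9])
print(composite(colors, alphas))       # mostly red, some green, little blue
```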
Recent Model-side Works:
- Perception:
- SparseFusion: Fusing Multi-Modal Sparse Representations for Multi-Sensor 3D Object Detection
- Time Will Tell: New Outlooks and a Baseline for Temporal Multi-View 3D Object Detection
- DetMatch: Two Teachers are Better Than One for Joint 2D and 3D Semi-Supervised Object Detection
- PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search
- Motion Prediction & Planning:
- PreTraM: Self-Supervised Pre-training via Connecting Trajectory and Map
- Residual-MPPI: Online Policy Customization for Continuous Control
- Safety Assurances for Human-Robot Interaction via Confidence-aware Game-theoretic Human Models
- Motion Planning for Autonomous Driving With Extended Constrained Iterative LQR
- Constrained iterative LQG for real-time chance-constrained Gaussian belief space planning
- Scenario Generation & Reconstruction:
Recent Data-side Works:
- Driving Datasets:
- Data Selection:
- Traffic Simulations:
To learn more about the research done at MSC in the past, you can download a research booklet here or check the “Previous Projects” below.
Intelligent Control
Power System Control
Hard Disk Drive Control
Mechatronics
- Exoskeleton Design & Control for BMI Study
- Variable Stiffness Actuators (VSAs)
- Individualized Assistive Device for Rehabilitation and Augmentation
- Mechatronics for Human Assistance