We are seeking a highly motivated Research Intern to contribute to the development of next-generation end-to-end autonomous driving (E2E-AD) models inspired by recent advancements such as UniAD, VAD, and multi-task learning approaches. This internship provides a unique opportunity to work on unified, perception-to-planning architectures that integrate vision, sensor fusion, and control in a data-driven manner. The intern will work closely with researchers and engineers to develop models that enable self-driving vehicles to perceive, predict, and plan efficiently in real-world environments.
Responsibilities:
- Conduct research on end-to-end autonomous driving architectures, focusing on unified perception, prediction, and planning models.
- Implement and optimize multi-task learning approaches for driving tasks, including object detection, motion prediction, and trajectory planning.
- Work with sensor fusion techniques combining multi-modal inputs (e.g., camera, LiDAR, radar) to improve perception and decision-making.
- Develop spatio-temporal and motion planning transformers for holistic driving scene understanding.
- Train, fine-tune, and evaluate models using large-scale autonomous driving datasets and internal Plus datasets.
- Utilize simulators to test and validate models in diverse driving scenarios.
- Optimize real-time inference and deployment of driving models for efficient execution on edge devices.
- Contribute to research publications and open-source implementations of E2E-AD models.
Required Skills:
- Pursuing an MS or PhD in CS, EE, mathematics, statistics, or a related field.
- Thorough understanding of deep learning principles and familiarity with perception, prediction, and planning models.
- Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow.
- Experience with computer vision and sensor processing techniques.
- Strong analytical and problem-solving skills.
Preferred Skills:
- Prior experience in projects involving the design, training, or fine-tuning of autonomous driving-related models.
- Familiarity with autonomous driving datasets (e.g., nuScenes, Waymo).
- Hands-on experience with simulators like CARLA, AirSim, or equivalent.
- Knowledge of robotics and motion planning algorithms is a plus.
- Publication record in relevant venues (e.g., CVPR, ICLR, ICCV, ECCV, NeurIPS, AAAI, SIGGRAPH).
$19 - $65 an hour
Our internship hourly rates are standard rates determined by position, location, year in school, degree, and experience.