Building AV Systems That Are Fair, Transparent, and Trustworthy
As autonomous vehicles move closer to widespread deployment, we face a critical question: How do we ensure these systems make decisions that are not only safe but also ethically sound and explainable?
My research tackles this challenge through three interconnected threads. First, I develop ethics-aware planning algorithms that learn moral reasoning from naturalistic driving data. Second, I build physics-informed prediction models that anticipate other vehicles' behavior by combining neural networks with physical laws. Third, I use vision-language models to teach machines to understand traffic accidents the way a human investigator would. Together, these threads form a unified agenda: creating autonomous driving systems where every decision can be traced, every prediction is grounded in physics, and every ethical tradeoff is transparent.
Traditional AV planners optimize for efficiency and collision avoidance, but they ignore the moral dimensions of driving: who bears the risk in an unavoidable conflict? How should an AV weigh the safety of its passenger against that of a pedestrian?
My work on DPEP (Differentiable Predictive Ethics-Aware Planner) addresses this by learning ethical weights directly from naturalistic driving data (Waymo Open Dataset). Unlike trolley-problem approaches, DPEP discovers context-dependent ethical preferences — for example, that drivers in highway merging situations implicitly prioritize differently than in school zones. The framework maintains full interpretability, making it suitable for regulatory compliance with UNESCO and EU AI ethics frameworks.
📷 Figure placeholder: DPEP framework overview (/public/research/dpep_framework.webp)
Key contributions:
- Learned ethical weights from real driving behavior (not surveys or thought experiments)
- Multi-dataset experimental design across NGSIM, HighD, and inD
- Interpretable outputs aligned with regulatory requirements
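The core idea of recovering ethical weights from observed behavior can be sketched as a tiny inverse-optimization fit. The snippet below is an illustration only, with synthetic data, a single scalar weight, and a finite-difference gradient; the actual DPEP planner is differentiable end-to-end and learns context-dependent weights from naturalistic datasets:

```python
import math

def cost(w_other, ego_risk, other_risk):
    # Planner cost: ego risk has unit weight; risk to other road users
    # is scaled by the ethical weight w_other we want to recover.
    return ego_risk + w_other * other_risk

def choice_logprob(w_other, options, chosen):
    # Softmax (Boltzmann-rational) model of a human driver's choice among
    # candidate maneuvers, each summarized as (ego_risk, other_risk).
    utils = [-cost(w_other, e, o) for e, o in options]
    log_z = math.log(sum(math.exp(u) for u in utils))
    return utils[chosen] - log_z

def fit_weight(observations, lr=0.5, steps=300):
    # Maximum-likelihood fit of w_other by gradient ascent, with the
    # gradient approximated by central finite differences.
    w, eps = 1.0, 1e-4
    for _ in range(steps):
        grad = sum(
            (choice_logprob(w + eps, opts, c) - choice_logprob(w - eps, opts, c))
            / (2 * eps)
            for opts, c in observations
        )
        w += lr * grad
    return w

# Synthetic observations: in 8 of 10 conflicts the driver picks the maneuver
# that shifts risk away from the other road user, at some cost to themselves.
obs = [([(0.3, 0.1), (0.1, 0.5)], 0)] * 8 + [([(0.3, 0.1), (0.1, 0.5)], 1)] * 2
w_other = fit_weight(obs)  # recovered weight on the other agent's risk
```

Because the synthetic drivers usually accept extra ego risk to protect the other agent, the fitted weight comes out well above 1, which is the kind of implicit, behavior-derived preference DPEP extracts at scale.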
Predicting what other vehicles will do is fundamental to safe autonomous driving. But purely data-driven models often produce physically implausible trajectories — cars that accelerate through walls or turn without regard for road geometry.
My Goal-based Neural Physics (GNP) model combines the representational power of deep learning with hard physical constraints. By first predicting a driver's intended goal (where they want to go), and then generating trajectories that obey vehicle dynamics to reach that goal, the model produces predictions that are both accurate and physically realistic.
📷 Figure placeholder: GNP framework overview (/public/research/gnp_framework.webp)
Key contributions:
- Goal-conditioned prediction with neural ODE physics integration
- Outperforms baselines on naturalistic driving datasets
- Interpretable goal decomposition (lane change, turn, merge intentions)
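The "predict a goal, then generate a dynamically feasible trajectory toward it" structure can be illustrated with a toy rollout. Here a point-mass model and a clipped proportional-derivative controller stand in for the learned components, and all constants (acceleration bound, gains, time step) are assumed for illustration, not taken from GNP:

```python
import math

A_MAX = 3.0  # assumed bound on acceleration magnitude (m/s^2)
DT = 0.1     # integration step (s)

def rollout_to_goal(x, y, vx, vy, goal, steps=50):
    """Integrate point-mass dynamics toward `goal`, clipping the commanded
    acceleration so every step of the trajectory is dynamically feasible."""
    traj = [(x, y)]
    for _ in range(steps):
        # Simple PD controller toward the goal (stand-in for a learned policy).
        ax = 1.0 * (goal[0] - x) - 1.2 * vx
        ay = 1.0 * (goal[1] - y) - 1.2 * vy
        a = math.hypot(ax, ay)
        if a > A_MAX:  # hard physical constraint on acceleration
            ax, ay = ax * A_MAX / a, ay * A_MAX / a
        vx, vy = vx + ax * DT, vy + ay * DT
        x, y = x + vx * DT, y + vy * DT
        traj.append((x, y))
    return traj

# Hypothetical lane-change goal 40 m ahead and one lane (3.5 m) over.
traj = rollout_to_goal(0.0, 0.0, 10.0, 0.0, goal=(40.0, 3.5))
```

However expressive the goal predictor, the rollout step guarantees the output never violates the acceleration bound, which is the property that rules out "cars accelerating through walls."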
When a traffic crash occurs, understanding what happened — and why — currently requires hours of human expert analysis. Vision-language models offer a transformative approach: systems that can watch crash footage and provide structured explanations of causation, fault, and contributing factors.
I am developing CrashSight, a fine-tuned video VLM pipeline for traffic accident analysis. This involves building a specialized VQA (Visual Question Answering) dataset from crash footage and training models (Qwen2-VL, VideoLLaMA2) to answer investigative questions about accidents. This work connects to broader applications in insurance, law enforcement, and AV safety validation.
📷 Figure placeholder: CrashSight pipeline (/public/research/crashsight_pipeline.webp)
Key contributions:
- Traffic accident VQA dataset with curriculum learning strategy
- Fine-tuned video VLMs on domain-specific crash analysis tasks
- Bridging physical and language models for driving safety
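The dataset-and-curriculum idea can be sketched with a minimal sample schema. Field names, clip paths, and the three-stage split below are hypothetical illustrations, not the actual CrashSight schema:

```python
from dataclasses import dataclass

@dataclass
class CrashVQASample:
    video: str      # path to a crash clip (hypothetical)
    question: str
    answer: str
    stage: int      # curriculum stage: 0=perception, 1=causation, 2=fault

samples = [
    CrashVQASample("clip_017.mp4", "Who was at fault?",
                   "Vehicle B, which entered on a red signal", stage=2),
    CrashVQASample("clip_017.mp4", "How many vehicles are involved?",
                   "Two", stage=0),
    CrashVQASample("clip_017.mp4", "What caused the collision?",
                   "Vehicle B ran the red light", stage=1),
]

# Curriculum learning: fine-tune on easy perception questions first,
# then causal reasoning, then fault attribution.
curriculum = sorted(samples, key=lambda s: s.stage)
```

Ordering training by question difficulty mirrors how a human investigator works: establish what is visible in the footage before reasoning about causation and fault.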