Research Summary

Abstract

Autonomous vehicles (AVs) heavily rely on LiDAR sensors for accurate 3D perception of their environment. However, the physical properties of LiDAR make it inherently vulnerable to optical manipulation. In this paper, we investigate a novel class of passive LiDAR spoofing attacks that exploit mirror-like surfaces to inject or remove objects from the AV's perception.

Unlike prior work that requires active emitters or electronic tampering, our attacks rely solely on strategically placed planar mirrors to redirect LiDAR beams—posing a low-cost, stealthy threat. We introduce a comprehensive threat framework encompassing two adversarial goals: Object Addition Attacks (OAA), which create phantom obstacles, and Object Removal Attacks (ORA), which obscure real objects.

Attack Overview

Object Removal Attack (ORA): Mirrors deflect LiDAR beams away from real obstacles, creating dangerous blind spots.

Object Addition Attack (OAA): Mirrors redirect beams to create phantom obstacles, triggering false emergency responses.

Key Contributions

  • First comprehensive analysis of passive mirror-based LiDAR attacks in automotive contexts
  • Empirically validated geometric models characterizing mirror-induced LiDAR artifacts
  • Physics-informed simulation framework enabling safe, repeatable attack evaluation
  • End-to-end validation on complete autonomous driving software stack (Autoware)
  • Analysis of potential defense mechanisms and their limitations

Research Methodology

1. Threat Modeling: Formal analysis of mirror-based attack vectors and adversary capabilities
2. Real-World Experimentation: Controlled outdoor experiments with commercial LiDAR systems
3. Empirical Modeling: Mathematical characterization of mirror-induced artifacts
4. Simulation Integration: CARLA-based framework for safe, scalable attack evaluation
5. System Validation: End-to-end testing on a production autonomous driving stack

Real-World Experimental Campaign

Experimental Setup

LiDAR System: Ouster OS1-128 (128-channel, 10 Hz)
Mirror Configuration: High-reflectivity glass panels with silver backing
Test Environment: Controlled outdoor area under clear weather conditions
Data Collection: ROS 2 logging with synchronized sensor data

Attack Scenario Implementation

Figure: Object Removal Attack experimental setup, showing the vehicle with LiDAR sensor, a strategically positioned mirror, and the target obstacle.

Object Removal Attack (ORA)

Objective: Demonstrate how mirrors can render real obstacles invisible to LiDAR perception

Setup: Traffic cone placed 4 m from the vehicle; 60 cm × 40 cm mirror positioned to deflect LiDAR beams away from the obstacle

Key Parameters: Mirror angles tested at 0°, 15°, 30°, and 45° relative to LiDAR forward axis
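The angle sweep above can be reasoned about with the planar reflection law r = d - 2(d·n)n. A minimal numpy sketch (illustrative, not the paper's code) shows that the reflected beam rotates away from the retro-return direction by twice the mirror tilt:

```python
import numpy as np

def reflect(d, n):
    """Reflect incident direction d off a planar mirror with unit normal n,
    using the standard reflection law r = d - 2 (d . n) n."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# Top-down 2D view: the LiDAR beam travels along +x (the forward axis).
beam = np.array([1.0, 0.0])

for tilt_deg in (0, 15, 30, 45):
    # Mirror tilted by tilt_deg about the vertical axis; its normal makes
    # that angle with -x (0 deg = mirror facing the sensor head-on).
    theta = np.radians(tilt_deg)
    normal = np.array([-np.cos(theta), np.sin(theta)])
    r = reflect(beam, normal)
    # At 0 deg the beam retro-reflects (180 deg); each degree of tilt
    # swings the reflected beam by two degrees.
    print(f"tilt {tilt_deg:2d} deg -> reflected at "
          f"{np.degrees(np.arctan2(r[1], r[0])):.0f} deg")
```

At a 45° tilt the beam leaves at a right angle to its original path, consistent with steeper tilts deflecting energy entirely away from the shadowed obstacle.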

Figure: Object Addition Attack experimental setup, showing a mirror array positioned to create phantom obstacles.

Object Addition Attack (OAA)

Objective: Generate phantom obstacles through controlled beam redirection

Setup: Modular mirror array (30 cm × 30 cm tiles) positioned to reflect beams toward environmental structures

Key Parameters: Surface areas of 0.18 m², 0.36 m², and 0.60 m² tested at 30°, 45°, and 60° tilt angles

Point Cloud Analysis

Real-time LiDAR point cloud data demonstrates the systematic effect of mirror parameters on attack effectiveness. The following visualizations show authentic sensor data captured during controlled experiments.

Mirror Surface Area Impact

Increasing mirror surface area directly correlates with phantom object density and detection confidence.
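A back-of-envelope count of intercepted beams illustrates why area matters: a larger mirror subtends a wider angular window, so it captures more scan columns and more channels per column. The resolutions and mirror footprints below are illustrative assumptions (1024 horizontal steps and 128 channels over roughly a 45° vertical field of view for the OS1-128), not values fitted in the paper:

```python
import math

def expected_returns(width_m, height_m, range_m,
                     h_res_deg=360.0 / 1024,  # assumed horizontal step
                     v_res_deg=45.0 / 127):   # assumed channel spacing
    """Rough count of beams intercepted per scan by a flat,
    sensor-facing mirror of the given size at the given range."""
    ang_w = math.degrees(2 * math.atan(width_m / (2 * range_m)))
    ang_h = math.degrees(2 * math.atan(height_m / (2 * range_m)))
    return int(ang_w / h_res_deg) * int(ang_h / v_res_deg)

# Illustrative mirror footprints for the three tested surface areas.
for label, (w, h) in {"0.18 m^2": (0.6, 0.3),
                      "0.36 m^2": (0.6, 0.6),
                      "0.60 m^2": (1.0, 0.6)}.items():
    print(label, "->", expected_returns(w, h, range_m=5.0), "returns")
```

Because both angular extents grow with mirror size, the intercepted-beam count grows roughly with area, consistent with the density trend described above.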

Angular Configuration Effects

Mirror tilt angle determines phantom object placement, detection timing, and spatial persistence.
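The placement effect follows from the time-of-flight assumption: the sensor measures the folded path length (sensor to mirror to reflected surface) but assumes straight-line travel, so the phantom return lands along the original beam, behind the mirror plane. A minimal sketch of that bookkeeping (illustrative geometry, not the paper's model):

```python
import numpy as np

def phantom_point(origin, beam_dir, mirror_hit, reflected_hit):
    """Where the sensor *believes* a mirrored return originated.

    The LiDAR measures the folded path length (origin -> mirror ->
    reflected surface) but assumes straight-line travel, so it plots
    the point along the original beam at that total distance.
    """
    u = beam_dir / np.linalg.norm(beam_dir)
    d1 = np.linalg.norm(mirror_hit - origin)
    d2 = np.linalg.norm(reflected_hit - mirror_hit)
    return origin + (d1 + d2) * u

# Illustrative geometry: a mirror 4 m ahead redirects the beam onto a
# wall 3 m off to the side; the sensor registers a point 7 m ahead.
origin = np.zeros(3)
mirror = np.array([4.0, 0.0, 0.0])
wall = np.array([4.0, 3.0, 0.0])
print(phantom_point(origin, mirror - origin, mirror, wall))
```

Changing the tilt changes where the reflected beam lands, and hence d2, which is how the tilt angle controls phantom depth, timing, and persistence along the vehicle's approach.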

CARLA Simulation Framework

Physics-Informed Attack Injection Pipeline

We developed a comprehensive simulation framework that integrates empirically validated models into the CARLA autonomous driving simulator, enabling safe and repeatable evaluation of mirror-based attacks.

Real-Time Model Integration

Empirical models derived from real-world experiments drive dynamic phantom object synthesis based on vehicle-mirror geometry

Probabilistic Attack Simulation

Stochastic modeling captures the natural variability observed in physical mirror reflections

Safety-Critical Scenario Testing

Controlled evaluation of cascading failures from perception errors through planning and control modules
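As a rough illustration of the injection step, the sketch below appends a stochastic phantom cluster to a point-cloud frame. The function name, box extent, dropout probability, and noise level are hypothetical stand-ins for the empirically fitted parameters, and the CARLA sensor callback is replaced by a random frame to keep the example self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_phantom(points, phantom_center, extent=(1.8, 4.5, 1.5),
                   n_samples=200, keep_prob=0.7, noise_std=0.03):
    """Append a noisy, vehicle-sized phantom cluster to a LiDAR frame.

    `points` is an (N, 3) array from the simulated sensor; the phantom is
    sampled inside a box of `extent` (w, l, h in metres) around
    `phantom_center`, with per-point dropout (`keep_prob`) and Gaussian
    jitter (`noise_std`) to mimic the variability of physical mirror
    reflections. All parameter values here are illustrative.
    """
    half = np.asarray(extent) / 2.0
    samples = rng.uniform(-half, half, size=(n_samples, 3)) + phantom_center
    samples += rng.normal(0.0, noise_std, size=samples.shape)
    kept = samples[rng.random(n_samples) < keep_prob]
    return np.vstack([points, kept])

frame = rng.uniform(-20, 20, size=(1000, 3))   # stand-in sensor frame
attacked = inject_phantom(frame, np.array([8.0, 0.0, 0.8]))
print(frame.shape, "->", attacked.shape)
```

In the actual pipeline the cluster's position and density would be driven by the vehicle-mirror geometry each frame rather than fixed arguments.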

Cascading Failure Demonstration

Multi-Vehicle Collision Scenario

This simulation demonstrates the complete failure chain triggered by a mirror-induced phantom obstacle:

  • 11.0 s: Mirror attack activated; phantom object injection begins
  • 11.2 s: High-confidence obstacle detected; lead vehicle initiates emergency braking
  • 11.8 s: Insufficient reaction time; following vehicle cannot stop safely
  • 12.0 s: Collision occurs, completing the cascading safety failure
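That a sub-second warning is not survivable can be checked with elementary stopping-distance kinematics. The numbers below (50 km/h, a 1.5 s time headway, 1.0 s driver reaction, 7 m/s² braking) are illustrative assumptions, not measurements from the scenario:

```python
def stopping_distance(speed_mps, reaction_s=1.0, decel_mps2=7.0):
    """Reaction distance plus braking distance at constant deceleration."""
    return speed_mps * reaction_s + speed_mps**2 / (2.0 * decel_mps2)

# Illustrative urban scenario: follower at 50 km/h keeping a 1.5 s
# time headway behind the suddenly braking lead vehicle.
v = 50.0 / 3.6               # ~13.9 m/s
gap = 1.5 * v                # available following gap, ~20.8 m
needed = stopping_distance(v)
print(f"gap {gap:.1f} m, needed {needed:.1f} m -> "
      f"{'collision' if needed > gap else 'stops in time'}")
```

Under these assumptions the required stopping distance exceeds the available gap, so the phantom-triggered emergency stop propagates into a rear-end collision even though the follower brakes correctly.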

Full-Stack System Evaluation

Autoware Integration Testing

Test Vehicle: Kia Soul EV with roof-mounted sensor array
Software Stack: Autoware.AI with CenterPoint 3D object detection
Operating Mode: Full autonomous operation with safety driver monitoring

Autonomous Vehicle Response Analysis

Real-Life Vehicle Response Demonstration

This video captures the actual measurement data from our full-stack system evaluation, showing the autonomous vehicle's real-time response to mirror-based attacks. The vehicle is moving at normal speed when it encounters a placed mirror that creates a phantom obstacle in its LiDAR perception.

Autoware System Output Analysis

Following the real-life demonstration, we analyze the detailed system outputs from Autoware to understand how mirror-based attacks affect the complete autonomous driving pipeline:

Note: The following videos show the Autoware system's internal processing and decision-making responses, captured through ROS 2 logging and visualization tools.

30° Configuration: Late-Onset Emergency Braking

Behavior: Phantom object appears at close range (< 3 m), triggering an immediate high-jerk emergency stop

Risk Analysis: Sudden deceleration creates high rear-end collision probability in traffic scenarios

Technical Details: System latency prevents smooth deceleration profile, resulting in potentially dangerous vehicle dynamics

45° Configuration: Mid-Maneuver Interruption

Behavior: Phantom detection during active turning maneuver, causing abrupt stop mid-turn

Risk Analysis: Stopping in intersection or curve creates traffic obstruction and collision hazard

Technical Details: Planning system prioritizes obstacle avoidance over maneuver completion, leading to unsafe positioning

60° Configuration: Pre-Intersection Blocking

Behavior: Early phantom detection (> 5 m) causes the vehicle to stop well short of its intended path

Risk Analysis: Unexplained stops disrupt traffic flow and create unpredictable behavior for human drivers

Technical Details: The high-confidence false positive dominates sensor fusion, so downstream planning treats the phantom as a real obstacle

Research Findings

Critical Vulnerabilities Identified

  • 100% attack success rate: All tested mirror configurations induced perception failures in the target autonomous vehicle system
  • 74% peak detection confidence: Maximum confidence score for phantom vehicle classification, achieved with the 6-mirror array configuration
  • < $50 attack cost: Total material cost for an effective attack using commercially available mirrors
  • 3-8 m effective range: Distances over which mirror-based attacks remained effective across tested configurations

Security Implications

Attack Surface Analysis

Mirror-based attacks represent a previously underexplored attack vector that bypasses existing cybersecurity defenses by operating entirely in the physical domain

Scalability Concerns

The low cost and accessibility of required materials make these attacks feasible for a wide range of potential adversaries in real-world deployments

Detection Challenges

The passive nature of these attacks produces no electronic signatures, making detection with conventional cybersecurity monitoring approaches infeasible

Systemic Vulnerability

Fundamental reliance on LiDAR time-of-flight assumptions makes all current autonomous vehicle architectures potentially susceptible