EMI-LiDAR: Uncovering Vulnerabilities of LiDAR Sensors in Autonomous Driving Setting using Electromagnetic Interference

S. Hrushikesh Bhupathiraju1
Jennifer Sheldon1
Luke A. Bauer1
Vincent Bindschaedler1
Takeshi Sugawara2
Sara Rampazzi1

EMI Vulnerability Overview

Autonomous Vehicles (AVs) using LiDAR-based object detection systems are rapidly improving and becoming an increasingly viable method of transportation. While effective at perceiving the surrounding environment, these detection systems have been shown to be vulnerable to laser-based attacks that can cause obstacle misclassification or removal. These laser attacks, however, are challenging to perform, requiring precise aiming and accuracy. Our research exposes a new threat in the form of Intentional Electromagnetic Interference (IEMI), which affects the time-of-flight (ToF) circuits that make up modern LiDARs.

We show that these vulnerabilities can be exploited to force the AV perception system to misdetect objects, misclassify objects, and perceive non-existent obstacles. We evaluate the vulnerability on three AV perception modules (PointPillars, PointRCNN, and Apollo) and show that the classification rate drops below 50%. We also analyze the impact of the IEMI injection on two fusion models (AVOD and Frustum-ConvNet) and in real-world scenarios. Finally, we discuss potential countermeasures and propose two strategies to detect signal injection.

This paper appeared in ACM WiSec 2023.


@inproceedings{bhupathiraju2023emilidar,
    author = {Bhupathiraju, Sri Hrushikesh Varma and Sheldon, Jennifer and Bauer, Luke A. and Bindschaedler, Vincent and Sugawara, Takeshi and Rampazzi, Sara},
    title = {EMI-LiDAR: Uncovering Vulnerabilities of LiDAR Sensors in Autonomous Driving Setting Using Electromagnetic Interference},
    booktitle = {Proceedings of the 16th ACM Conference on Security and Privacy in Wireless and Mobile Networks},
    pages = {329--340},
    year = {2023}
}

IEMI Vulnerability Characterization

To determine whether a Velodyne VLP-16 LiDAR is vulnerable to IEMI, we perform a frequency sweep between 400 MHz and 1 GHz and find that different frequencies produce different perturbation patterns. We hypothesize that the vulnerability is caused by EMI-induced voltages surpassing detection thresholds in the LiDAR's ToF circuits.
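The hypothesis can be illustrated with a toy model of a threshold-based ToF receiver. The sketch below is a deliberate simplification under our own assumptions (a 2 V rectangular echo, a 1 V comparator threshold, an additive EMI sinusoid), not the VLP-16's actual receiver circuit: when the EMI-induced voltage alone exceeds the threshold, the receiver times an early crossing and reports a shorter distance.

```python
import numpy as np

C = 3e8          # speed of light (m/s)
THRESHOLD = 1.0  # hypothetical comparator threshold (V)

def tof_distance(echo_time_s, fs=1e9, emi_amp=0.0, emi_freq_hz=100e6):
    """Distance reported by a simplified threshold-based ToF receiver.

    The echo is modeled as a 2 V step arriving at echo_time_s; EMI adds a
    sinusoid to the receiver input. The receiver times the first sample
    whose voltage crosses THRESHOLD.
    """
    t = np.arange(0.0, 2 * echo_time_s + 1e-7, 1 / fs)
    v = np.where(t >= echo_time_s, 2.0, 0.0)              # clean echo (V)
    v = v + emi_amp * np.sin(2 * np.pi * emi_freq_hz * t)  # induced EMI
    first = int(np.argmax(v > THRESHOLD))                  # first crossing
    return C * t[first] / 2                                # round trip -> range

true_d = tof_distance(2 * 5.0 / C)               # obstacle at 5 m, no EMI
fake_d = tof_distance(2 * 5.0 / C, emi_amp=1.5)  # EMI crosses threshold early
```

With no EMI the reported range matches the 5 m obstacle; with a 1.5 V induced sinusoid the threshold is crossed within the first EMI cycle and a much closer, non-existent return is reported.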

Veloview point cloud data of an obstacle at 1 m from the victim LiDAR: no EMI injection (Left); sinusoidal perturbation created by the IEMI injection (Middle); random perturbation created by the IEMI injection (Right). The antenna is placed to the left of the LiDAR (it does not appear in the point cloud because it is too narrow).

Adversarial Capability and Modeling

For the real-world experiments we use the following equipment:
 • LiDAR model: VLP-16
 • Software Defined Radio: USRP N210 with amplifier and directional antennas
We characterize the attacker's capabilities using the parameters described in the table below:

We found that the point cloud can be detectably perturbed from distances greater than 1 m in the XY plane (i.e., the injection changes the distances measured by the victim LiDAR). The perturbation is less pronounced in the XZ plane:

SNR vs distance between victim LiDAR and adversary in the XY plane.

SNR vs distance between victim LiDAR and adversary in the XZ plane.

The wavelength of the sinusoidal perturbation in both the XY and XZ planes is linearly related to the distance d_O between the victim LiDAR and the target obstacle:
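Such a linear relation can be recovered with an ordinary least-squares fit over (distance, wavelength) pairs. A minimal sketch follows; the numbers are illustrative placeholders, not the measurements from our plots:

```python
import numpy as np

# Illustrative (d_O, wavelength) pairs; the fitted coefficients from the
# actual measurements appear in the paper's plots.
d_obstacle = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # d_O in meters
wavelength = np.array([0.11, 0.20, 0.31, 0.40, 0.51])  # perturbation wavelength (m)

slope, intercept = np.polyfit(d_obstacle, wavelength, 1)

def predicted_wavelength(d_o):
    """Linear model: wavelength(d_O) = slope * d_O + intercept."""
    return slope * d_o + intercept
```

Once fitted, the model predicts the perturbation wavelength for obstacle distances that were not measured directly.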

Sinusoidal perturbation wavelength with respect to d_O in the XY plane.

Sinusoidal perturbation wavelength with respect to d_O in the XZ plane.

The minimum LiDAR FOV angle affected by our physical setup is approximately 3 degrees. We also found that the mean displacement of the random perturbation and the amplitude of the sinusoidal perturbation are linearly related to the transmission gain (i.e., the IEMI output power):

The effect of gain on the mean displacement of the random perturbation and on the amplitude of the sinusoidal perturbation.

Based on the observed trends, we model the sinusoidal perturbation's amplitude and the random perturbation's mean, maximum, and minimum displacement in terms of the IEMI transmission gain. We model the sinusoidal perturbation's wavelength in terms of the distance between the victim LiDAR and the target obstacle.


We synthesize perturbations in the KITTI dataset and evaluate how they affect three state-of-the-art object detection models (PointRCNN, PointPillars, and Baidu Apollo). The empirical experiments are limited to 25 dB gain due to safety constraints, so we simulate the perturbation effects of higher gain values (above 25 dB, i.e., more than 2 W) based on the collected empirical data.
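The synthesis step can be sketched as a radial displacement applied to each point in a point cloud. The shapes below (a range-dependent sinusoid and Gaussian range jitter) are simplified stand-ins for the fitted models; the coefficients are illustrative, not our measured values:

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_sinusoidal(points, amplitude, wavelength):
    """Displace each point radially by a sinusoid of its XY range.

    points: (N, 3) array of x, y, z coordinates in the LiDAR frame.
    """
    r = np.linalg.norm(points[:, :2], axis=1)
    offset = amplitude * np.sin(2 * np.pi * r / wavelength)
    scale = (r + offset) / np.maximum(r, 1e-9)  # radial displacement = offset
    out = points.copy()
    out[:, :2] *= scale[:, None]
    return out

def apply_random(points, mean_disp, std_disp):
    """Jitter each point's XY range by a Gaussian random displacement."""
    r = np.linalg.norm(points[:, :2], axis=1)
    offset = rng.normal(mean_disp, std_disp, size=r.shape)
    scale = (r + offset) / np.maximum(r, 1e-9)
    out = points.copy()
    out[:, :2] *= scale[:, None]
    return out
```

Applying either function to a KITTI-style (N, 3) array yields a perturbed cloud in which each point's measured range is shifted while its bearing and height are preserved, mirroring how the injection alters ToF distance measurements.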

Example of point cloud of a car with no attack (Left), and with the corresponding random (Middle) and sinusoidal synthesized perturbations (Right).

Object Misdetection and Misclassification

We consider the detection of an individual object successful if the corresponding prediction has an Intersection over Union (IOU) greater than the desired threshold with respect to the ground truth. Similarly, we consider object classification successful if the detected object is classified as the correct object class. We perform two different analyses based on the IOU threshold. In the first analysis, we set the IOU threshold to 0 (WIOU). Here, if the predicted object has an IOU with respect to the ground truth greater than 0, we consider it a successful prediction. In the second analysis, we evaluate based on the default IOU thresholds as proposed in the corresponding works of each model (DIOU). The table below shows object detection and classification results for PointPillars, PointRCNN, and Apollo models for WIOU and DIOU analyses with sinusoidal and random perturbations:
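The two analyses differ only in the threshold applied to the same IOU computation. As a minimal sketch (using axis-aligned 2D boxes for simplicity; the evaluation in the paper uses the models' 3D/BEV boxes):

```python
def iou_2d(a, b):
    """IOU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def is_detected(pred, gt, iou_threshold):
    # WIOU analysis: iou_threshold = 0, so any overlap counts as a detection.
    # DIOU analysis: each model's default threshold (e.g. 0.7 for the car class).
    return iou_2d(pred, gt) > iou_threshold
```

A prediction with IOU 0.14 against the ground truth counts as detected under WIOU but not under a 0.7 DIOU threshold, which is why the two analyses can diverge sharply under attack.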

ODR and CLR of LiDAR-based models on WIOU and DIOU for sinusoidal and random perturbations at 70 dB gain.

We also study the effect of the perturbed LiDAR FOV angle 𝜃 on ODR for the models:

Sensor Fusion

Camera-LiDAR sensor fusion models combine features extracted from both sensors to improve the accuracy of 3D object detection. We analyze the impact of IEMI perturbations on two popular camera-LiDAR sensor fusion architectures:
  1) Frustum-ConvNet (FC): a cascaded, semantic-level fusion model that creates frustum-level features in the LiDAR point cloud from each region proposal in the camera image.
  2) AVOD: a feature-level fusion model that extracts feature maps from camera RGB images and LiDAR bird's-eye-view (BEV) images individually and then combines them.

We use Object Detection Rate (ODR) and Classification Rate (CLR) as metrics to evaluate the effect of perturbations on the sensor fusion models. We consider both the DIOU thresholds (0.7 for the car class and 0.5 for the pedestrian and cyclist classes in both models) and WIOU. For this analysis, we randomly select 500 objects from each of the cyclist, pedestrian, and car classes in the KITTI dataset. The resulting drops in detection (ODR) and classification (CLR) rates are shown below.
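One plausible way to compute both rates from per-object evaluation results is sketched below. The tuple format and the choice to count correct classifications over all ground-truth objects are our assumptions for illustration, not a verbatim restatement of the paper's evaluation code:

```python
def odr_clr(results, iou_threshold):
    """Object Detection Rate and Classification Rate over ground-truth objects.

    results: list of (best_iou, predicted_class, true_class), one entry per
    ground-truth object (best_iou = 0.0 when nothing matched it).
    """
    if not results:
        return 0.0, 0.0
    detected = [r for r in results if r[0] > iou_threshold]
    odr = len(detected) / len(results)
    clr = sum(1 for _, pred, true in detected if pred == true) / len(results)
    return odr, clr

example = [
    (0.85, "car", "car"),
    (0.40, "pedestrian", "pedestrian"),
    (0.75, "cyclist", "pedestrian"),
    (0.00, "none", "car"),
]
```

On the example list, WIOU (threshold 0) yields ODR 0.75 and CLR 0.5, while a 0.7 DIOU threshold yields ODR 0.5 and CLR 0.25, showing how a stricter threshold lowers both rates.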

Real-World Scenario

We further conduct a proof-of-concept experiment to analyze the impact of IEMI signal injection in real-world scenarios. We target a pedestrian obstacle at 2, 4, and 6 m from the victim LiDAR in a static scenario.
We use PiFiNet, an attentive pillar-network-based model for pedestrian detection trained on the JRDB dataset, a large-scale multi-modal dataset collected by the social mobile manipulator JackRabbot. We increment the transmitted signal gain in 1 dB steps and measure the IOU of the predicted bounding box with respect to the ground truth. We repeat the experiment for frequencies corresponding to sinusoidal and random perturbations. We limit the gain to 25 dB due to safety constraints (all experiments were conducted in controlled environments).
The figure below shows the results of PiFiNet for pedestrian object detection with EMI signal injection:

IOU of predicted pedestrian objects from PiFiNet at 2, 4, and 6 meters from the VLP-16 LiDAR for sinusoidal (Left) and random (Right) perturbation injection.


This research was supported in part by NSF grants CNS-1933208 and CNS-2055123, and JSPS KAKENHI Grant Number 22H00519.