Newly Studied ‘Poltergeist Attacks’ Trick Autonomous Vehicles

There’s something spooky going on. New research from the Ubiquitous System Security Lab (USSLAB) at Zhejiang University and the University of Michigan found that ‘poltergeist’ (PG) attacks can fool autonomous vehicles in a way that hasn’t been seen before. Take a look at what the researchers found about how this works.

Self-driving vehicles rely on computer-based object detection, which classifies what the cameras see, deciding what is an obstacle and what is a normal road condition. Based on those classifications, autonomous vehicles act on their own. Poltergeist attackers tamper with those classification results.
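To make that decision flow concrete, here is a minimal, hypothetical sketch; none of these names or thresholds come from the paper. It shows how classified detections might feed a driving decision, and why corrupting the classification step corrupts the maneuver:

```python
# Hypothetical sketch: mapping object-detection results to driving decisions.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier output, e.g. "pedestrian", "lane_marking"
    confidence: float  # detector confidence in [0, 1]
    distance_m: float  # estimated distance to the object in meters

OBSTACLES = {"pedestrian", "vehicle", "cyclist", "debris"}

def plan_action(detections: list[Detection]) -> str:
    """Decide a maneuver from classified detections.

    A poltergeist attack targets the classification step: if induced blur
    makes the detector mislabel or miss an object, this logic acts on
    bad input.
    """
    for det in detections:
        if det.label in OBSTACLES and det.confidence > 0.5 and det.distance_m < 30:
            return "brake"
    return "continue"

# A correctly classified pedestrian triggers braking...
print(plan_action([Detection("pedestrian", 0.9, 12.0)]))    # brake
# ...but a blur-induced misclassification does not.
print(plan_action([Detection("lane_marking", 0.6, 12.0)]))  # continue
```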

Bombarding Self-Driving Cars With Acoustic Signals

To be specific, the poltergeist attack interferes with the stabilization of the images a vehicle’s camera captures. In their paper, the researchers noted this isn’t the same as past studies, in which people showed the security risks of self-driving cars by targeting the main image sensors, such as complementary metal-oxide semiconductor (CMOS) sensors. Instead, they singled out the inertial sensors, which provide the image stabilizer with motion feedback it can use to reduce blur.
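Here is a minimal, illustrative sketch of that feedback loop; it is an assumption about how such a stabilizer works, not the authors’ code. The stabilizer shifts each frame to cancel the camera motion the inertial sensor reports, so a false reading makes it “correct” for motion that never happened:

```python
# Illustrative sketch (assumed design, not from the paper): electronic
# image stabilization driven by inertial-sensor motion feedback.
import numpy as np

def stabilize(frame: np.ndarray, gyro_shift_px: tuple[int, int]) -> np.ndarray:
    """Shift the frame opposite to the motion the gyroscope reports.

    If the gyroscope reading is truthful, this cancels camera shake.
    If interference injects a false reading, the stabilizer applies a
    correction for motion that never happened, smearing or displacing
    the image the object detector then sees.
    """
    dy, dx = gyro_shift_px
    return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))

frame = np.zeros((480, 640), dtype=np.uint8)
frame[200:220, 300:320] = 255          # a bright object in view

spoofed_reading = (15, -40)            # false gyro reading; camera is still
attacked = stabilize(frame, spoofed_reading)
print(np.argwhere(attacked == 255)[0]) # the object has moved in the frame
```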

The researchers designed their PG attack to target those inertial sensors.

Read More: https://securityintelligence.com/news/new-poltergeist-attacks-trick-autonomous-vehicles/