Researchers have found a way of tricking autonomous vehicles into misinterpreting road signs, with techniques that could have ‘severe consequences’ but need little more than a colour printer to implement.
Small changes designed to look like graffiti or abstract art were used to trick autonomous vehicles, researchers from the University of Washington said in a newly published paper.
The researchers tested both poster printing, in which a full-sized road sign is printed and overlaid on an existing sign, and the addition of stickers to existing signs. These produced both subtle ‘attacks’, using ‘perturbations’ (changes) that occupy the entire region of the sign, and camouflaged perturbations that take the form of graffiti and abstract art.
They said: ‘These attacks do not require special resources – only access to a colour printer.’
Using stickers to modify a Stop sign, the researchers said they had ‘achieved a 66.67% misclassification rate into our target class’ for the graffiti sticker attack and a 100% targeted misclassification rate for the abstract art sticker attack.
The perturbations caused a Stop sign to be misclassified as a Speed Limit sign, and a Right Turn sign to be misclassified as either a Stop or an Added Lane sign.
In one experiment they entirely covered a Stop sign, resulting in changes that they said were ‘imperceptible to the casual observer’.
The paper says: ‘In contrast to some findings in prior work, this attack is very effective in the physical world. The Stop sign is misclassified into our target class of Speed Limit 45 in 100% of the images taken according to our evaluation methodology.’
The researchers explained that they had created the changes using a new attack algorithm, which they call Robust Physical Perturbations (RP2).
They said: ‘Our algorithm can create spatially-constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer.
‘We focus on road sign classification due to their critical function in road safety and security. If an attacker can physically and robustly manipulate road signs in a way that, for example, causes a Stop sign to be interpreted as a Speed Limit sign by an ML-based [machine learning] vision system, then that can lead to severe consequences.’
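The idea of a spatially-constrained perturbation can be illustrated with a toy sketch. The code below is an assumption-laden simplification, not the paper's RP2 algorithm: it uses a random linear classifier as a stand-in for a trained vision model, and a single gradient-sign step (in the spirit of the well-known fast gradient sign method) restricted by a mask to a small ‘sticker’ region of the image. It shows how modifying only a few pixels can push the model's score towards an attacker-chosen target class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "sign classifier": scores = W @ x, prediction = argmax.
# The weights are random stand-ins for a trained model (illustrative
# assumption; the real attack targets a deep network).
n_pixels, n_classes = 64, 3          # 8x8 "sign image", 3 sign classes
W = rng.normal(size=(n_classes, n_pixels))

x = rng.uniform(0.0, 1.0, size=n_pixels)   # clean sign image
source = int(np.argmax(W @ x))             # model's current prediction
target = (source + 1) % n_classes          # attacker's chosen target class

# Spatial constraint: only a small "sticker" region may be changed,
# mimicking the paper's graffiti/art-style perturbations.
mask = np.zeros(n_pixels)
mask[:16] = 1.0                            # sticker covers 16 of 64 pixels

# For a linear model, the gradient of the target-vs-source margin with
# respect to the input is simply W[target] - W[source].
grad = W[target] - W[source]
eps = 0.5
x_adv = np.clip(x + eps * mask * np.sign(grad), 0.0, 1.0)

# The target-class margin increases after the masked perturbation.
margin_clean = float(W[target] @ x - W[source] @ x)
margin_adv = float(W[target] @ x_adv - W[source] @ x_adv)
print(margin_clean, margin_adv)
```

A single step on a linear toy model will not always flip the prediction outright; the paper's attack instead optimises the perturbation over many images of the sign taken from different distances and angles, which is what makes it robust in the physical world.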