Laying a trap for self-driving cars
We spend a lot of time and attention on what autonomous cars can do, but sometimes it’s a more interesting question to ask what they can’t. The limitations of a technology are at least as significant as its capabilities. That’s what this little piece of performance art suggests to me, anyway.
You can see the nature of “Autonomous trap 001” right away. One of the very first and most important things a self-driving system will learn or be taught is how to interpret the markings on the road. This is the edge of the lane, this means it’s for carpools only, and so on.
British (but Athens-based) artist James Bridle demonstrates the limits of knowledge without context — a problem we’ll be coming back to a lot in this age of artificial “intelligence.”
Even a bargain-bin artificial mind knows that one of the most fundamental rules of the road is never to cross a solid line with a dashed one on the far side. But of course it’s perfectly fine to cross one if the dashes are on the near side.
A circle like this, with the solid line on the inside and the dashes on the outside, acts, absent any mitigating logic, like a roach motel for dumb smart cars. (Of course, it’s just a regular car he drives into it for demonstration purposes. It would take too long to catch a real one.)
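To make the logic of the trap concrete, here’s a minimal, purely hypothetical sketch in Python of how such a crossing rule might be written down. None of this comes from Bridle or from any real driving stack; the names `Stripe`, `LaneMarking` and `may_cross` are invented for illustration. The point is simply that a rule consulted locally, marking by marking, never finds a legal way out of a closed circle whose solid stripe faces inward.

```python
# Hypothetical sketch (not Bridle's, not any real autonomy system) of a naive
# rule-based check for whether a lane marking may be crossed.

from dataclasses import dataclass
from enum import Enum


class Stripe(Enum):
    SOLID = "solid"
    DASHED = "dashed"


@dataclass
class LaneMarking:
    near_side: Stripe   # stripe closest to the vehicle
    far_side: Stripe    # stripe on the other side of the marking


def may_cross(marking: LaneMarking) -> bool:
    """A dashed stripe on the near side permits crossing; a solid one forbids it."""
    return marking.near_side == Stripe.DASHED


# Inside the salt circle: the solid line faces the car, the dashes face outward.
trap_boundary = LaneMarking(near_side=Stripe.SOLID, far_side=Stripe.DASHED)

# Every candidate exit crosses the same marking from the solid side, so a
# planner with no wider context concludes there is no legal way out.
print(may_cross(trap_boundary))  # False
```

The rule looks only at the marking directly in front of the car, never at the absurdity of a sealed circle of salt, which is exactly the knowledge-without-context problem Bridle is poking at.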
It’s no coincidence that the trap is drawn with salt (the medium is listed as “salt ritual”); the idea of using salt or ash to create summoning or binding symbols for spirits and demons is an ancient one. Knowing the true name or secret workings of these mysterious beings gave one power over them.
Here too a simple symbol “binds” the target entity in place, where ideally it would remain until its makers arrived and… salvaged it? Or until someone broke the magic circle — or until whoever was in the driver’s seat took back control from the AI and hit the gas.
Imagine a distant future in which autonomous systems have taken over the world and knowledge of their creation and internal processes has been lost (or you could just play Horizon: Zero Dawn) — this simple trap might appear to our poor benighted descendants to be magic.
What other tricks might we devise that reliably cause a simple-minded AI to stop, pull over, or otherwise disable itself? How will we protect against them? What will the crime against robotic AIs be — assault, or property damage? Strange days ahead.
Keep an eye on Bridle’s Vimeo or blog — the video above is an impromptu one and the piece, like most things, is a “work in progress.”
Featured Image: James Bridle