Augmented reality will fail unless it comes with augmented privacy.
This is completely separate from the technical difficulty of developing AR equipment. Apple decided not to talk about its AR glasses at last week’s developer conference because it needs more time to work on battery life and display issues. Meta/Facebook is famously devoting vast resources to consumer AR glasses but is nowhere near showing off anything. (There were rumors this week that Meta may be rethinking its whole approach to AR.)
Technical problems can be solved. I’ve laid out a detailed sequence of events for AR devices to become the new dominant technology in our lives, basically replacing our smartphones.
But that prediction is based on one rosy assumption, which is that when AR devices come on the market, the world will not go completely insane.
I’m not sure that assumption is right.
If those AR glasses were ready today, I can easily imagine a sequence of outcries about privacy issues that would cause them to founder and flail and more or less sink, turning AR glasses into cute game-playing devices that do not change the world.
Let’s take one example that happens to be in the news. It’s just an example. In another article, I’ll give you a few more ways that privacy issues will need to be addressed if AR is ever to be mainstream. Which is another way of saying, if we don’t deal with privacy, we’ll have to look at the world as it is instead of through rose-colored glasses, and who wants to do that? It’s a mess.
Cameras, facial recognition, and augmented reality
Imagine you’re wearing augmented reality glasses and you don’t look like a dork. It’s a stretch, but go with me here.
You enter a party full of people you should recognize.
You see a familiar face but you can’t place the name.
As you get closer, a banner appears in the air, only visible to you, with her name – Daphne – and some basic information that jogs your memory.
You get similar information about others in the group when you turn and scan the room. You don’t have to worry about remembering anyone’s name.
I’m an introvert with a terrible memory for names. This would obviously be really helpful to me.
But there are some problems – difficult, thorny, possibly unsolvable problems.
There are a couple of ways that the glasses could identify Daphne. The most obvious is for the glasses to have an always-on camera constantly scanning your surroundings. When it sees a face, it uploads the image to an online service that matches it against a database using facial recognition, then sends the name back to you.
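The flow just described – capture a face, reduce it to numbers, match against a database, return a name – can be sketched in a few lines. Everything below is hypothetical: a real system would compute a face "embedding" (a numeric vector) with a neural network and search billions of records, while this stub uses tiny hand-made vectors purely to show the shape of the pipeline.

```python
import math

# Hypothetical pre-built database: name -> face embedding, assembled in
# advance from photos the service has collected.
FACE_DB = {
    "Daphne": [0.9, 0.1, 0.3],
    "Victor": [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(embedding, threshold=0.95):
    """Return the best-matching name, or None if nothing is close enough."""
    best_name, best_score = None, threshold
    for name, stored in FACE_DB.items():
        score = cosine_similarity(embedding, stored)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# The glasses' camera sees a face; a (stubbed) model turns it into a vector.
seen = [0.88, 0.12, 0.31]   # close to the stored "Daphne" vector
print(identify(seen))        # the glasses could now float this name in the air
```

The privacy problem is visible even in the stub: the interesting work happens against `FACE_DB`, a database of faces that somebody had to collect before you ever walked into the party.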
Facial recognition is being perfected today. It requires huge databases of people’s faces. Governments and private companies are creating those databases from drivers’ license photos, passports, photos posted online, and photos taken by ubiquitous cameras in public places, then using powerful computers to mix and match and analyze and assign names.
These are early days, and some of facial recognition’s current problems result from its flaws: it has become fairly accurate with white males, but much more work is required to reach the same accuracy with women and people with darker skin.
Those are presumably solvable problems. If it were accurate and unbiased, facial recognition would have wonderful uses. It might make it faster to board a plane or re-enter the country, for example.
Unfortunately it’s easy to imagine controversial or unacceptable uses of facial recognition. It has been called the “plutonium of AI.” Full credit to Luke Stark, a Microsoft researcher, for that wonderful phrase. He says: “It’s dangerous, racializing, and has few legitimate uses; facial recognition needs regulation and control on par with nuclear waste.”
Facial recognition is being used by the Chinese government to control the Uyghur population, and there are deep fears in Britain about the live facial recognition trials being run by the police. There are troubling reports about US police forces experimenting with facial recognition.
In May 2019, San Francisco became the first city in the US to ban law-enforcement and government agencies from using facial-recognition technology.
Some states passed laws to prevent tech companies from accumulating biometric data. The first results are coming back from lawsuits permitted by those laws and the tech companies are being stung by big class-action verdicts – Facebook paid $650 million and Google settled for $100 million in separate class-action lawsuits in Illinois, and the Texas attorney general has filed the first lawsuit against Facebook under the Texas biometric privacy law. There are hundreds more lawsuits already filed in Illinois and plaintiffs’ attorneys are looking for new theories to file more.
Last month, the ACLU settled a lawsuit against Clearview AI for violating the Illinois law. Clearview has built a facial recognition database with more than 20 billion images and has been selling it to local police departments, government agencies, and the FBI and Department of Homeland Security. Under the settlement, Clearview is prevented from selling the database to private companies or individuals in the US. (It can still sell to federal and state agencies and banks and financial institutions.)
The Clearview settlement was the last straw for Facebook. The next day, Meta/Facebook announced that it had turned off augmented reality effects on Facebook and Instagram in Texas and Illinois.
Lots of young’uns use Instagram filters to put a goofy mask on their faces. The technology behind masks, avatars, and filters starts with an analysis of your face.
It’s not facial recognition. Instagram doesn’t store your face in a giant database – at least not for purposes of putting cat ears and whiskers on you.
But the states have created an environment that attracts lawyers like moths circling a streetlight – except these are vampire moths. Here is the statement from a Facebook spokesperson:
“The technology we use to power augmented reality effects like masks, avatars, and filters is not facial recognition or any technology covered by the Texas and Illinois laws, and is not used to identify anyone. Nevertheless, we are taking this step to prevent meritless and distracting litigation under laws in these two states based on a mischaracterization of how our features work.”
The day after Facebook’s announcement, Snapchat was sued in federal court under the Illinois biometric privacy law over its similar use of filters.
You begin to see the difficult privacy problems. The threat of lawsuits has temporarily derailed the progress of augmented reality in two states, even though the AR has nothing to do with facial recognition.
Imagine the furor if future AR glasses actually tried to do real facial recognition so you could be reminded of Daphne’s name at a party.
In the next article, we’ll talk about some of the reasons this is a far bigger problem than silly Instagram filters. For now, recall some history. There were many reasons that Google Glass failed in 2012, but I would argue that the camera doomed Google Glass. People were instinctively and immediately upset at the thought of an always-on camera in a restroom. From a post-mortem in The Atlantic:
Armed with a camera, Google Glass was quickly banned in restaurants, bars, and even from Google’s own shareholder meetings. Pictures taken with the Glass camera were usually deemed “creepy.” The Consumer Watchdog’s Privacy Project called it “one of the most privacy invasive devices ever.”
AR glasses without a camera would be far less useful. And as we’ll see, the camera is not the only way AR could violate our privacy without a deep rethinking of our rules and social norms.