
Rethinking Privacy For The AI Era

Intel AI

Concerns over consumer privacy have peaked in recent years, roughly in step with the rise of advanced technologies like artificial intelligence. About 9 in 10 American internet users say they are concerned about the privacy and security of their personal information online, and 67% are now advocating for strict national privacy laws, according to a study by Intouch International.

Fed up by a steady stream of incidents that range from the 2017 Equifax hack to the nefarious gaming of consumers’ social media data for political purposes, policymakers have begun to strike back on consumers’ behalf.

Europe’s General Data Protection Regulation (GDPR), the sweeping privacy legislation that went into effect in May 2018, was the first large-scale effort to offer consumers more legal protections. The California Consumer Privacy Act, which grants new rights to residents of the state beginning in January 2020, marks the first comparable step in the U.S. Similar laws are being pursued in a handful of other states, and there’s early talk of national measures coming soon as well.

Such protections run up against a more complex conflict: With advanced technologies like artificial intelligence (AI) taking off, the need for data is greater than ever, much of it coming from consumers. So how do society, industry, and government balance this voracious need for data with the protections that consumers are demanding? Can legal structures help to manage the inherent conflict between AI and privacy? And what is privacy in 2019, anyway?

As the saying goes, it’s complicated.

A Moving Target

To Bernhard Debatin, an Ohio University professor and director of the Institute for Applied and Professional Ethics, the first problem is that there has never been a clear—or enforceable—definition of privacy, since it is such a complex, abstract concept. 

“The notion of privacy has changed over time,” says Debatin. “In post-modern, information-based societies, the issue of data protection and informational privacy has become central, but other aspects [such as old-school, bodily privacy] still remain relevant. In other words, over time, the concept of privacy has become increasingly complex.”

That complexity has reached a tipping point of sorts with the rise of AI. Consumers, faced with an endless stream of lengthy user agreements, hastily click through to “accept” without ever realizing what privacy rights they may be giving away.

The information they provide winds up in large databases, which have the potential to be mined for any number of uses, including marketing opportunities, purchasing recommendations, or other services. Facial recognition and voice identification systems, meanwhile, can also track our movements in the real world; at home, smart appliances, motion-sensing lights, and thermostats continuously collect data about when we come and go.

Many of these functions provide a helpful service—but the potential risks they carry are not trivial. “Seemingly anonymized personal data can easily be de-anonymized by AI,” says Debatin. “It also allows for tracking, monitoring, and profiling people as well as predicting behaviors. Together with facial recognition technology, such AI systems can be used to cast a wide network of surveillance. All these issues raise urgent concerns about privacy.”

Tackling The Problem With The Law

From a legislative standpoint, these trends have not gone unnoticed. The GDPR fired a first volley at the problem. California’s forthcoming privacy law will at least give the U.S. a major toehold on the issue, as it will apply to nearly 40 million Americans.

While privacy is a hard concept to define and safeguard, especially today, “there are some basic principles that can help with protecting privacy,” says Debatin. “GDPR has in fact included many of them.” Good privacy legislation in the age of AI, he says, should include five components:

  1. AI systems must be transparent.
  2. An AI must have a “deeply rooted” right to the information it is collecting.
  3. Consumers must be able to opt out of the system.
  4. The data collected and the purpose of the AI must be limited by design.
  5. Data must be deleted upon consumer request.

“These steps make it possible to protect us from potential AI-based discrimination, lack of consent, and data abuse,” Debatin says.

Getting the U.S. on par with these standards might be difficult. A federal bill called the Future of Artificial Intelligence Act sought to take the first steps toward protecting individuals’ privacy against potential abuses of AI. Alas, that bill has seen little movement since it was introduced in the Senate in 2017.

More recently, the U.S. Government Accountability Office (GAO) released a report expressing concern about the lack of a comprehensive national internet privacy law, with particular concern over “the collection, use, and sale or other disclosure of consumers’ personal information.”

The GAO report calls upon Congress to consider such legislation and to empower an agency like the Federal Trade Commission (FTC) to punish privacy violators with civil penalties. While the FTC currently has the power to fine companies that violate privacy rights, it undertook such actions only 101 times from 2008 to 2018. Nearly all of those matters were settled with the agency, with only a handful of civil penalties issued.

Self-Policing Of Privacy

Is it possible that the AI industry might be able to police itself when it comes to privacy? It’s a tough sell, because companies have had little incentive thus far to build privacy protections into their systems. Major privacy breaches in recent years have made for breathless headlines, but ultimately very little fallout for the companies responsible.

One 2018 study pegged the average global cost of a data breach at $3.86 million. Considering the companies studied generated between $100 million and $25 billion in annual revenue, the cost of a privacy misstep is negligible for large companies.

Still, privacy breaches can depress stock prices and cause companies to lose consumer trust. Eventually, one has to assume, these problems will become serious enough to carry significant business impact.

How might technology step in and help? Emerging concepts such as differential privacy and homomorphic encryption suggest some potential paths forward. Differential privacy systems introduce randomness into user data to keep de-anonymization tactics from succeeding, while homomorphic encryption adds a layer of security by allowing machine learning algorithms to operate on data without decrypting it. These methods and others are entering early-stage trials.
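To make the idea of differential privacy concrete, here is a minimal, illustrative Python sketch of the Laplace mechanism, one common way such systems add calibrated noise to a query result. The names used here (laplace_sample, noisy_count, epsilon) are hypothetical and not drawn from any product or study mentioned in this article; a comparable homomorphic encryption example would require a specialized cryptography library and is omitted.

# A minimal sketch of the Laplace mechanism, a building block of
# differential privacy. All names are illustrative assumptions.
import math
import random

def laplace_sample(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(records, predicate, epsilon: float = 0.1) -> float:
    # A counting query has sensitivity 1: adding or removing one person's
    # record changes the true answer by at most 1, so Laplace noise with
    # scale 1/epsilon is enough to mask any individual's contribution.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Example: publish an approximate opt-in count without exposing any one user.
users = [{"opted_in": True}, {"opted_in": False}, {"opted_in": True}]
print(noisy_count(users, lambda u: u["opted_in"], epsilon=0.5))

In this toy example, a smaller epsilon means more noise and stronger privacy; real deployments tune that trade-off carefully and layer on safeguards this sketch leaves out.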

What happens next will depend on who gets their act together first—government or private industry. Constellation Research analyst Steve Wilson first called for businesses to implement “Big Privacy”—a privacy compact between industry and consumers that would ensure transparency in how data was used—back in 2014, noting that legislation was not keeping up with technology.

Today, Wilson says that the concept is more critical than ever, and that the pace of innovation is simply too fast for the law to follow. Yet he remains an optimist. He predicts that within five years consumers will see some restraints coming to the industry, either through the law, AI itself, or other means.

“People thought the world was going to be consumed by oil derricks in the 1920s, but we tamed the rampant oil industry,” says Wilson. “I think we will soon tame the data barons, too.”

Learn more about how companies are leveraging AI today.