Big Data, Big Uncertainties: The Growing Civil Rights Concerns Surrounding Targeted Advertisements
September 21, 2016
By Adam El-Sahn, JD’18
What do these websites, many of which do not require you to pay anything upfront, get for their efforts? The answer is advertising. Every time you open a webpage, you are faced with ads trying to sell you products. What makes advertisements in the information age different from previous forms of mass advertising is the ability of advertisers to craft targeted advertisements based on each recipient's unique personal information. Advertisers construct a consumer profile of a computer user by placing "cookies" in the user's web browser, which can reveal where they live, their race and age, and, even more invasively, which websites they frequent, which search terms they have entered, whom they associate with, and even which words they use in their personal communications.
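The aggregation step described above can be sketched in a few lines of Python. Everything in this example — the event fields, the site names, the locations — is hypothetical and chosen purely to illustrate how scattered browsing signals accumulate into a targeting profile:

```python
from collections import Counter

def build_profile(events):
    """Aggregate simulated tracking events into an ad-targeting profile.

    Each event is a dict with optional "site", "search", and "location"
    keys; these field names are invented for illustration.
    """
    profile = {
        "locations": Counter(),
        "sites": Counter(),
        "search_terms": Counter(),
    }
    for event in events:
        if "location" in event:
            profile["locations"][event["location"]] += 1
        if "site" in event:
            profile["sites"][event["site"]] += 1
        if "search" in event:
            profile["search_terms"][event["search"]] += 1
    return profile

# Two page visits are enough to start inferring where the user lives
# and what they shop for.
events = [
    {"site": "news.example", "location": "Philadelphia"},
    {"site": "shop.example", "search": "staplers", "location": "Philadelphia"},
]
profile = build_profile(events)
```

Real tracking systems are vastly more sophisticated, but the principle is the same: many individually innocuous observations, merged under one identifier, become a detailed portrait.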
The sheer amount of this "big data" collected about consumers and sold to advertisers is all part of a revolutionary multi-billion dollar industry that has the potential to vastly improve consumer convenience by presenting people only with advertisements relevant to them. At the same time, there are concerns about how big data could be used to undermine the civil rights of vulnerable members of society. In the twentieth century, banks frequently declared entire neighborhoods too economically unsound to offer loans to. The practice became known as "redlining," because entire sections of a city map would be outlined in red, indicating that they were too risky to be allowed access to credit. There is a growing concern that targeted advertisements could usher in a new age of discrimination through electronic "redlining" that is much harder to prove than its twentieth-century predecessor.
In 2014, President Obama commissioned a study to look into the expanding use of big data and its effects on American consumers. The study noted that while big data held great potential to provide benefits across all sectors of life and the economy, its authors were concerned that "big data analytics have the potential to eclipse longstanding civil rights protections in how personal information is used in housing, credit, employment, health, education, and the marketplace." In addition to the privacy concerns raised by the invasive collection of data, the commission worried that "big data technology could assign people to ideologically or culturally segregated enclaves known as 'filter bubbles' that effectively prevents them from encountering information that challenges their biases or assumptions."
Perhaps more insidiously, there is a concern that big data could sort people into categories that negatively affect their economic quality of life as consumers, by steering them away from desirable advertisements and toward sub-prime ones. A recent Wall Street Journal investigation found that the advertised price of identical Swingline staplers varied based on the user's location. While it is common for brick-and-mortar stores to adjust prices to account for local market conditions, consumers are often unsettled when they find out that this is being done online, according to researchers at the Annenberg Public Policy Center of the University of Pennsylvania. While paying more for a stapler than someone down the road can be upsetting, there are serious civil rights concerns that targeted advertising could replicate discrimination against protected classes, even if that was never the intent of the programmer who wrote the algorithm.
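The kind of location-based price steering the investigation describes can be sketched as a simple lookup. The ZIP codes and prices below are invented for illustration and are not taken from the Journal's data:

```python
# Hypothetical price table keyed by the buyer's inferred ZIP code.
# Figures are illustrative only.
PRICE_BY_ZIP = {
    "19104": 14.29,
    "90210": 15.79,
}
DEFAULT_PRICE = 15.00

def quoted_price(zip_code):
    """Return the price shown to a visitor, based on inferred location."""
    return PRICE_BY_ZIP.get(zip_code, DEFAULT_PRICE)

# Two shoppers viewing the identical product see different prices,
# with neither aware that the other exists.
price_a = quoted_price("19104")
price_b = quoted_price("90210")
```

The civil rights concern arises when the lookup key — here a ZIP code — correlates strongly with race or another protected characteristic, so that a "neutral" pricing rule reproduces a discriminatory pattern.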
There is a growing concern that "data mining can inherit the prejudices of prior decision-makers or reflect the widespread biases that persist in society at large." Societal biases can be replicated in an algorithm by feeding it poorly selected data; incomplete, incorrect, or outdated data; a sample that is not representative of the general population; or search criteria that create a feedback loop perpetuating the status quo. The concerns are heightened by the fact that algorithmic processes occur, for the most part, in a black box. Without knowing what the algorithm is looking for, and with no way for the consumer to confirm that the information used is correct, the fear is that any discriminatory practice that emerges will be hard to detect, and even harder to fix.
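The feedback-loop problem can be made concrete with a deliberately oversized simplification. In this hypothetical sketch, a "facially neutral" approval rule is trained on nothing but historical outcomes, and it never consults any protected attribute — yet it faithfully echoes whatever disparity the history contains. The ZIP codes and rates are arbitrary placeholders:

```python
# Fraction of past applicants approved, by ZIP code (invented numbers).
# If past decisions were biased, that bias is baked into this table.
HISTORICAL_APPROVAL_RATE = {
    "19104": 0.80,
    "19139": 0.20,
}

def approve(zip_code, threshold=0.5):
    """A 'neutral' rule: approve wherever past approval rates were high.

    No protected attribute appears anywhere in the logic, yet the rule
    simply reproduces the historical pattern for each neighborhood.
    """
    return HISTORICAL_APPROVAL_RATE[zip_code] >= threshold

# Applicants in the historically disfavored ZIP code are denied,
# which in turn keeps that neighborhood's approval rate low --
# the feedback loop the commission warned about.
decision_favored = approve("19104")
decision_disfavored = approve("19139")
```

Because the inputs and threshold sit inside the proverbial black box, an applicant in the disfavored neighborhood has no way to see, let alone contest, the data driving the denial.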
Our current legal infrastructure is not designed to deal with the potential problems surrounding targeted advertising. Although the White House, the Equal Employment Opportunity Commission, and the Federal Trade Commission have all commissioned studies on the civil rights implications of big data in targeted advertisements, a common jurisdictional question remains. For example: with credit cards, under the Equal Credit Opportunity Act, it is illegal for creditors to discriminate on the basis of, among other things, race, sex, gender, religion, national origin, or genetic information. Discrimination against a protected class can be demonstrated circumstantially through what is called "disparate impact." "Disparate impact occurs when a company employs facially neutral policies or practices that have a disproportionate adverse effect… on a protected class, unless those practices or policies further a legitimate business need that cannot reasonably be achieved by [other] means." The major hurdle facing litigants seeking to establish a disparate impact claim is proving the existence of an adverse effect: just because someone did not see an online advertisement for a product does not mean they were unable to obtain it had they sought to.
Because it is difficult to prove that lack of exposure to an advertisement resulted in actual harm, it is difficult to make a colorable claim under our current civil rights legal scheme. Perhaps this is just another example of developments in law being outpaced by developments in technology. If anything, the growth of big data analytics should make us question whether 20th century law can keep pace with 21st century realities and, if it cannot, how we should adapt existing policies or create new ones to deal with new concerns.
Student Blog Disclaimer
The views expressed on the Student Blog are the author’s opinions and don’t necessarily represent the Wharton Public Policy Initiative’s strategies, recommendations, or opinions.