Why Privacy Policies Suck

We have all accepted a privacy policy or terms of service agreement without thoroughly reading and understanding it. Why do we do this? Do privacy policies even matter? Are people just lazy? I hope to answer these and other questions throughout this blog post. But most of all, I want to show that privacy policies and terms of service agreements are not an effective way of informing users what is being done with their data.

Do these policies matter?

Until recently, I had never read the privacy policy of any website, app, or online service I used. So far, I have never knowingly faced any negative repercussions for it, and I am not the only one.

However, that is not always the case. One day in 2010, 7,500 users of GameStation unknowingly agreed to pay an extremely high price for using the online service. No one could have expected a terms of service agreement to make a claim this important to its users. Below is an excerpt of that agreement:

By placing an order via this Web site on the first day of the fourth month of the year 2010 Anno Domini, you agree to grant us a non transferable option to claim, for now and for ever more, your immortal soul. Should We wish to exercise this option, you agree to surrender your immortal soul, and any claim you may have on it, within 5 (five) working days of receiving written notification from gamesation.co.uk or one of its duly authorized minions.

7,500 users unknowingly surrendered their immortal souls through the “immortal soul clause” buried in the terms and conditions (Smith, 2010). Yes, this was a joke, and no souls were claimed. However, similar clauses were used in a 2018 academic study (Obar, 2018).

Researchers tested how undergrads would interact with the privacy policy and terms of service for a fictitious social media company called NameDrop. The study had respondents agree to a privacy policy and terms of service agreement. The two documents were modified versions of LinkedIn’s, so their length and structure were realistic. However, the researchers planted two “gotcha” clauses in them. One stated that NameDrop may share your personal data with the NSA; the other claimed the user’s first-born child. Those clauses are shown here:

3.1.1 NameDrop Data […] Any and all data generated and/or collected by NameDrop, by any means, may be shared with third parties. For example, NameDrop may be required to share data with government agencies, including the U.S. National Security Agency, and other security agencies in the United States and abroad. NameDrop may also choose to share data with third parties involved in the development of data products designed to assess eligibility. This could impact eligibility in the following areas: employment, financial service (bank loans, insurance, etc.), university entrance, international travel, the criminal justice system, etc. Under no circumstances will NameDrop be liable for any eventual decision made as a result of NameDrop data sharing.

2.3.1 Payment types (child assignment clause): In addition to any monetary payment that the user may make to NameDrop, by agreeing to these Terms of Service, and in exchange for service, all users of this site agree to immediately assign their first-born child to NameDrop, Inc. If the user does not yet have children, this agreement will be enforceable until the year 2050. All individuals assigned to NameDrop automatically become the property of NameDrop, Inc. No exceptions

Of the 543 participants in the study, only 3% declined the privacy policy and 7% declined the terms of service. When respondents were interviewed afterward, only 11 mentioned data sharing and only 9 mentioned the child assignment clause. This means that at least 523 respondents unknowingly agreed to share their data with the NSA and forfeit their offspring (Obar, 2018).

Both of these examples had little effect in real life, but that is not always the case. A Nigerian instant-loan app called Okash has been known to use some pretty harsh collection methods on users who miss their loan payments.

Twitter users complained that the company was messaging their contacts and telling them that the user was not paying back their loan. Users shared stories of the loan company messaging or calling their supervisors, friends, and even priests, informing them of the user’s financial woes.

I am sure that no one wants their financial issues shared with all of their contacts. So, what allowed Okash to do this? At the bottom of clause 11 of Okash’s Terms of Service (at the time of writing), they state the following:

11. We may contact you and/or your emergency contact. […] In the event we cannot get in contact with you or your emergency contact, you also expressly authorize us to contact any and all persons in your contact list.

Every Okash user agreed to this policy whether they knew it or not. It is unlikely that many users understood that the company would resort to these shaming methods. This is a real-life example of one of these policies actively infringing on users’ privacy and that of their contacts, causing unnecessary harm and embarrassment for users who did not know what they were agreeing to.

This is not the only issue that has come up; there have been serious concerns about many companies’ policies in the US. Perhaps the most famous is the Cambridge Analytica scandal, in which an app called “This Is Your Digital Life” harvested its users’ and their Facebook friends’ data. The data was used by Cambridge Analytica, which was working for the Trump campaign. An estimated 87 million Facebook users had their data taken as a result (Kang, 2018).

There are countless other security issues that have received media attention including FaceApp, TikTok, Facebook, and others. These issues can and do impact all of us.

Does anyone read these policies?

Some people might, but most do not. The NameDrop study stated that its terms of service should have taken 15-17 minutes to read and its privacy policy 29-32 minutes. The median time respondents actually spent on each was 14.04 seconds and 13.6 seconds, respectively. Below is a graph from the researchers showing how much time respondents spent reading the policies.

Perhaps even more concerning, the study also found that 90% of respondents claimed to use quick-join clickwraps often or sometimes. This means that these users agree to the policy without ever seeing any part of it (Obar, 2018).

The fact that so few people read these statements has inspired countless jokes from comedians and TV shows. Even an FTC chairman said, “We all agree that no one is reading privacy policies” (Cate, 2020).

Should I read these policies?

Yes, but no. Of course, everyone should know what a company is doing with their data. But it is pretty unlikely that anyone could read all of the policies they come across. In fact, PayPal’s privacy notice has 36,275 words, which is longer than Hamlet (Warner, 2020).

A 2008 study found that for one person to read the privacy policy of every site they come across would take 244 hours a year, which comes out to over 30 eight-hour working days of just reading privacy policies annually. If people were to skim the policies instead, they would spend 154 hours, or just over 19 eight-hour days. Nationwide, that adds up to 53.8 billion hours of reading or 33.9 billion hours of skimming. If companies were to pay employees to read all of the privacy policies they interacted with at work, it would cost 617 billion dollars at the national level (McDonald, 2008). Keep in mind, this study was done in 2008; the average number of sites visited and the number of sites with privacy policies have likely increased since then.
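The arithmetic behind those figures is easy to check. Here is a quick sketch: the per-person hours are the study’s, while the population is not taken from the paper but derived here from the national total.

```python
# Back-of-the-envelope check of the McDonald & Cranor (2008) figures.
# Per-person hours come from the study; the population is implied, derived here.

READ_HOURS = 244    # hours/year to read every privacy policy encountered
SKIM_HOURS = 154    # hours/year to skim them instead
WORKDAY = 8

print(READ_HOURS / WORKDAY)   # 30.5 -> "over 30 eight-hour working days"
print(SKIM_HOURS / WORKDAY)   # 19.25 -> "just over 19 eight-hour days"

# A national reading total of 53.8 billion hours implies roughly
# 53.8e9 / 244 ~ 220 million online adults doing the reading:
print(round(53.8e9 / READ_HOURS / 1e6))  # -> 220
```

The same implied population, multiplied by the per-person skim hours, is what makes the national skim total land in the tens of billions of hours as well.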

Clearly, people cannot be expected to take the time necessary to read these policies. But can they even understand them? A study done in 2002 looked at the reading level required to understand the privacy policies of the top 25 internet health sites. It found that the policies ranged from “somewhat difficult” to “very difficult” and were written at a 14th-grade reading level, meaning two years of college would be expected before someone could read and comprehend them (Graber, 2002). A similar study done in 2005 found that a 14th-grade reading level was still required. This is extremely concerning considering that an 8th-grade level is recommended for general consumption (Sheehan, 2005).
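Grade-level figures like these come from standard readability formulas. As a rough illustration (a sketch, not the instrument those studies used; the syllable counter is a crude heuristic and the sample sentence is invented), the Flesch-Kincaid grade level can be computed like this:

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, subtracting one for a silent final 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1          # "disclose" -> 2, not 3
    return max(n, 1)

def fk_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# An invented sentence in typical privacy-policy legalese:
policy = ("We may disclose aggregated demographic information to affiliated "
          "entities for the purpose of facilitating personalized advertising.")
print(round(fk_grade(policy), 1))  # -> 22.4, far above the ~8th-grade recommendation
```

Long sentences packed with polysyllabic legal vocabulary are exactly what pushes these scores past the college level.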

Worse still, another study in 2005 analyzed the communicative strategies used in online privacy policies and found that they often employ problematic language. First, they downplay certain qualities of the policy with words like “carefully selected” or “occasionally”. Second, companies tend to obfuscate reality with cautious language, such as the ubiquitous “may”, which can deceive or confuse readers. Third, companies try to forge intimate relationships with their readers in order to build trust, using personal pronouns like I, my, you, and your (Pollach, 2005). Not only are these policies difficult to read, they actively use language designed to make you more likely to accept them regardless of how problematic the contents are.

So, these policies are, first, simply too long and too plentiful to read; second, written at a prohibitively high reading level; and third, crafted with rhetorical strategies that nudge readers toward agreeing. It cannot be argued that these policies effectively inform users of how a company will use their personal data or what data it will collect.

What does this mean?

This model of informed consent is broken and needs dramatic change. Several ideas have been advanced over time as to how to deal with the issues presented in this post. I discuss these in my post Notice & Consent Alternatives.


Cate, F. (2020) Data Privacy and Consent. Retrieved from https://www.youtube.com/watch?v=2iPDpV8ojHA&t=422s

Graber, M. A., D’Alessandro, D. M., Johnson-West, J. (2002) Reading Level of Privacy Policies on Internet Health Web Sites. Journal of Family Practice, 51:7 642-642.

Kang, C., Frenkel, S. (2018) Facebook Says Cambridge Analytica Harvested Data of Up to 87 Million Users. Retrieved from https://www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html

McDonald, A. M., Cranor, L. F. (2008) The Cost of Reading Privacy Policies. I/S A Journal of Law and Policy, 4:3 543-568.

Obar, J. A., Oeldorf-Hirsch, A. (2018) The Biggest Lie on the Internet: ignoring the privacy policies and terms of service policies of social networking services. Information, Communication & Society, 23:1, 128-147. doi: 10.1080/1369118X.2018.1486870

Pollach, I. (2005) A Typology of Communicative Strategies in Online Privacy Policies: Ethics, Power and Informed Consent. Journal of Business Ethics, 62, 221-235. doi: 10.1007/s10551-005-7898-3

Sheehan, K. B., (2005) In Poor Health: An Assessment of Privacy Policies at Direct-to-Consumer Web Sites. Journal of Public Policy & Marketing, 24:2 278-283.

Smith, C. (2010) 7,500 Online Shoppers Accidentally Sold Their Souls to Gamestation. Retrieved from https://www.huffpost.com/entry/gamestation-grabs-souls-o_n_541549

Warner, R. (2020) Notice and Choice Must Go: The Collective Control Alternative. SMU Science & Technology Law Review, forthcoming

Why Privacy Matters

When people talk about data privacy, the argument “I have nothing to hide” often comes up. It is a fair argument and an assumption that a lot of people hold. Eric Schmidt, former Google CEO, was quoted as saying, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” This reasoning makes sense to a lot of people. But after looking into the topic more, I disagree for two reasons. First, privacy is essential to a free and democratic society. Second, seemingly unimportant information about people can be misused to harm them. In this article I will try to convince you that your privacy, and other people’s, is important, even if you are doing nothing wrong.


In 1778, the philosopher Jeremy Bentham theorized a building called the panopticon, which would allow constant and discreet surveillance of everyone inside. It was originally conceived as a prison, but Bentham thought it could be used by any organization that wanted to control a group of people. The only requirements were that any member of the population could be surveilled at any time, and that the population had no way of telling whether they were being watched (Bentham, 1995).

What made this idea so exciting to Bentham were its implications for the behavior of the population. Knowing that you could be watched at any time, without your knowledge, would force the population to assume they were being watched at all times; as a result, they would have to act at all times in the way the owners of the facility wanted, or risk facing the consequences.

A similar warning was issued by George Orwell in his book 1984. Describing the surveillance state his characters faced, he wrote, “There was, of course, no way of knowing whether you were being watched at any given moment,” and, “at any rate, they could plug in your wire whenever they wanted to. You had to live, did live, from habit that became instinct, in the assumption that every sound you made was overheard and except in darkness every movement scrutinized.” In the book, Orwell creates a dystopian society that strives to control not only the actions but the thoughts of its characters (Orwell, 1949).

Orwell’s warnings have been borne out across many scientific fields. When people are being watched, the way they feel and act shifts. People’s buying behaviors change when others are around (Argo, 2005). Willingness to help others is affected by the presence of security cameras (van Rompay, 2009). The mere presence of a picture of a watchful pair of eyes elicits more negative emotions (Panagopoulos, 2017) and increases donations (Kelsey, 2018). Surveillance by swimming coaches has even been shown to make young athletes conform to training practices that lead to short-term injuries, long-term injuries, and psychological harm (Lang, 2010).

The underlying theme in all of these studies is that when you are being watched, you are more likely to conform to social standards and behave in the ways the watcher would want. This may not be surprising to most people, but the implications may be.

Edward Snowden revealed that the NSA was collecting phone records, texts, audio and video chats, photographs, emails, and other documents of everyday citizens (History, 2020). This showed that we are living in a state of surveillance far too similar to the one laid out in 1984, and that there is no reason to assume that your private interactions are indeed private.

If we accept total surveillance, we will be forced to act and even feel differently because we know that we are being observed. If privacy is essential for people to feel free not to conform, then we must demand privacy. We must demand safe spaces where we are not under surveillance, so that we feel safe not conforming, because only out of non-conformity can positive change come.

If organizations are permitted to watch and see everything that we do today, we are also agreeing to let these organizations see everything that we will ever do in the future. Under that state of total surveillance, we will never be able to act and think in the same ways that we have in times past. This is why privacy itself is so important and worth fighting for.

Below are three real-life examples of privacy infringements causing harm to individuals, families, or even the world. Even though no one discussed was plotting to do wrong or harm another person, their information was used in ways that no one would want. Information that does not seem particularly sensitive can have long-lasting negative impacts on your life if it falls into the wrong hands.

Identity Theft


The FTC’s Consumer Sentinel Network Data Book reports that there were 1.4 million fraud reports in 2018, resulting in a total of $1.48 billion in losses (FTC, 2019). The top three categories were imposter scams, debt collection, and identity theft. This problem is not discussed enough considering how commonplace it is; almost everyone knows someone who has faced this issue, or has faced it themselves. The more places your data is, the more likely it is to fall into the wrong hands, and everything from online shopping to social media accounts to free Wi-Fi can be to blame for letting your sensitive data end up there.

I have faced this issue myself. Someone figured out my name and other basic information (probably from Facebook), along with my grandparents’ names and phone number. They used this information to call my grandparents and pretend to be me. The scammers told my grandparents that I was in trouble and needed them to send money immediately. Luckily, my grandparents weren’t feeling very sympathetic and didn’t send the scammers any money, even though they believed them.

My grandparents and I were doing nothing illegal or frowned upon, but our information became known to people who wanted to misuse it, and they did. The scammers didn’t even have any particularly sensitive information, yet they still came very close to stealing money. This is not a unique story, and people have faced much, much worse repercussions from fraud, identity theft, and scams.

The Cambridge Analytica Scandal


A company called Cambridge Analytica was hired by the 2016 Trump campaign, promising effective advertising and analytics. The company prompted Facebook users to download an app that gave them a personality quiz. Once downloaded, the app scraped data such as the user’s name and likes from their Facebook account. The 270,000 users who downloaded the app agreed in the terms and conditions to have their data scraped and used. However, the app also scraped the same data from the users’ Facebook friends, even though they had never downloaded the app or agreed to those terms (Rosenberg, 2018). In the end, the data of 87 million users was collected (Confessore, 2018).

This data was then used to create user profiles in order to better understand individuals and serve them more targeted advertising regarding the election. This data collection allowed Cambridge Analytica to unfairly improve its campaigning efforts and perhaps critically affect the 2016 presidential election.

Even though these millions of Facebook users had nothing to hide, their data was taken without their permission and then used to possibly shift the course of the election to the highest office in the US. This simple Facebook information, which few would think needs protecting, essentially undermined the American democratic process.

Target Pregnancy Prediction


Target creates unique guest IDs that identify each of its customers, letting Target see every purchase its customers have ever made with them and when they made it. Target joined this data with its baby registry data to look for purchasing patterns leading up to giving birth. Target found that if a customer bought unscented lotion, a large purse, magnesium supplements, and so on, it could predict how likely it was that the customer was pregnant, as well as estimate that customer’s due date within a “small window.” Based on a customer’s pregnancy prediction score and due date, Target then sent specific pregnancy and baby product coupons to customers it thought were pregnant (Hill, 2012).

This may just seem like good analytics, but the issue became clear when these coupons led to an unfortunate end for one girl and her family. A high school girl was pregnant, and her purchase history showed it. Target started sending her coupons for items like breast pumps and cribs, and her father, who did not know she was pregnant, saw these coupons. The father went to his local Target and confronted management, demanding, “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?” The manager had no idea what the father was talking about, and the father eventually went home. A few days later, the manager called the father to apologize again, but this time the father was the one apologizing: since their last meeting, he and his daughter had talked, and he had discovered the truth, that his daughter was indeed pregnant.

Even though this girl had done nothing wrong, she had something to hide. Target’s use of her purchasing data led to what could only have been a very unfortunate and uncomfortable situation for her whole family. Few people want their secrets spilled by a company’s marketing scheme, and this girl was no different.

Life is better when people’s identities don’t get stolen, when elections aren’t unfairly tampered with, and when Target doesn’t spill your deepest secrets. Even if you are doing nothing wrong, there are parts of your life that are best kept private, and it is becoming increasingly hard, or even impossible, to do so (Morgan, 2014). We are at a point where we must demand privacy or live forever with the consequences of total and complete surveillance.


Argo, J., Dahl, D., Manchanda, R. (2005) The Influence of a Mere Social Presence in a Retail Context. Journal of Consumer Research, 32:2 207-212. doi: 10.1086/432230

Bentham, J., Božovič, M. (1995) The Panopticon Writings. Radical Thinkers.

Confessore, N. (2018) Cambridge Analytica and Facebook: The Scandal and the Fallout So Far. New York Times. Retrieved from https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html

Federal Trade Commission (2019) Consumer Sentinel Network Data Book 2018. Retrieved from https://www.ftc.gov/system/files/documents/reports/consumer-sentinel-network-data-book-2018/consumer_sentinel_network_data_book_2018_0.pdf

Greenwald, G. (2014) Why Privacy Matters. Retrieved from https://www.youtube.com/watch?v=pcSlowAhvUk&t=600s

Hill, K. (2012) How Target Figured Out a Teen Girl Was Pregnant Before Her Father Did. Forbes. Retrieved From https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/?sh=620806136668

History.com Editors (2020) Edward Snowden Discloses U.S. Government Operations. History. Retrieved from https://www.history.com/this-day-in-history/edward-snowden-discloses-u-s-government-operations

Kelsey, C., Vaish, A., Grossmann, T. (2018) Eyes, More Than Other Facial Features, Enhance Real-World Donation Behavior. Human Nature, 29 390-401. doi: 10.1007/s12110-018-9327-1

Lang, M. (2010) Surveillance and conformity in competitive youth swimming. Sport, Education and Society, 15:1 19-37. doi: 10.1080/13573320903461152

Morgan, J. (2014) Privacy is Completely and Utterly Dead, and We Killed It. Forbes. Retrieved from https://www.forbes.com/sites/jacobmorgan/2014/08/19/privacy-is-completely-and-utterly-dead-and-we-killed-it/?sh=2e13871631a7

Orwell, G. (1949) 1984. Secker & Warburg.

Panagopoulos, C., Linden, S. (2017) The Feeling of Being Watched: Do Eye Cues Elicit Negative Affect? North American Journal of Psychology. 19:1 113-121.

Rosenberg, M., Confessore, N., Cadwalladr C. (2018) How Trump Consultants Exploited the Facebook Data of Millions. New York Times. Retrieved from https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html

van Rompay, T., Vonk, D., Fransen, M. (2009) The Eye of the Camera: Effects of Security Cameras on Prosocial Behavior. Environment and Behavior, 41:1 60-74. doi: 10.1177/0013916507309996

Notice & Consent Alternatives

One of the cornerstones of data privacy law is the principle of notice and consent. The notice portion means that consumers must be told what data is being collected about them and how it will be used. The consent piece means that consumers can then choose whether or not to continue using the service. In the modern world, this is achieved through privacy policies and terms of service agreements. Unfortunately, this model is flawed and in need of dramatic change.

It is well known that almost nobody reads privacy policies or terms of service agreements online. When one of these notices is not read, it is rendered completely useless as a method of notice, and thus there can be no educated consent. We are in need of a new strategy: one that will empower consumers without putting unrealistic demands on their time and attention. In this post I discuss different methods that have existed or have been proposed as new ways of protecting consumers’ privacy.

Why does notice and consent fail?

Consumers spend very little time on privacy policies compared to how long it takes to read them. A 2018 study investigated this further. The researchers developed a mock social media platform with a privacy policy and terms of service agreement modeled on LinkedIn’s, then had their respondents sign up for the platform. They found that while the privacy policy and terms of service should take 29 minutes and 15 minutes, respectively, to read, respondents spent a median time of about 14 seconds on either policy (Obar, 2018). People are simply not spending enough time on these policies to have any idea what they are stating.

This is not just because consumers are lazy. These policies have many issues, but perhaps the largest is that they are simply too long. A 2008 study found that reading the privacy policy of every site visited would take one person over 30 eight-hour working days every year. If employees were to read all the policies they came across at work, it would cost companies 617 billion dollars at the national level (McDonald, 2008). Not to mention that this study was done in 2008; the average number of sites visited and the number of sites with privacy policies have likely increased since then.

These are just a few of the reasons why this model has failed. To learn about other reasons this method is unsuccessful, read my post Why Privacy Policies Suck.

What alternatives do we have? 

There have been several proposed and implemented alternatives and supplements to this model, and they generally fall into one of three categories. The first is altering the content or style of the notices themselves. The second is supplements: additional tools designed to be used alongside these disclosures to make them more effective. The final option, of course, is a complete replacement of the notice and consent model.

Altering Notices 

Simplification and Standardization

Since policy length causes so many of these policies to go unread, shorter, more concise, and standardized policies should theoretically help people read and understand more of them. A 2012 study attempted to create a privacy notice that helped readers understand as much of the statement as possible. The researchers were studying mailed paper notices, but there is no reason the study couldn’t be recreated on a digital interface. After much research into the most educational format, the researchers prepared three different privacy policies for three fictitious banks. Respondents were then shown the policies and asked which bank they would want to use. In the end, respondents who saw the optimized privacy notice were more likely to give correct, fact-based reasons for their selection than those who received the other, less consumer-centric policies (Garrison, 2012).


Making privacy policies more appealing to read is another option that might motivate consumers to use them. A great example is Google’s privacy policy, which has a few short videos embedded in the document that would presumably make the content a little more digestible for users and in turn motivate them to read it.


Using AI to Scan Policies

In 2018, a tool called PrivacyGuide was developed to help consumers understand policies without taking the time to read them. The program reads a policy and outputs an easy-to-read report card that rates the site on 11 different aspects, ranging from third-party sharing to policy changes. This simplified report is easier and faster to understand than the policy as a whole. Potential issues remain: consumers might not take the time to use the tool, and it could make incorrect judgments. However, the developers found that it reported the associated risk level of a policy with 90% accuracy (Tesfay, 2018).
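PrivacyGuide itself applies machine-learning classifiers trained on annotated policies, but the report-card idea can be illustrated with a deliberately naive keyword scan. Everything below, including the category keywords, is my own toy construction, not PrivacyGuide’s method:

```python
# A toy illustration of the "report card" idea behind tools like PrivacyGuide.
# The real tool uses ML classifiers; this naive keyword scan is a made-up sketch.

RISK_KEYWORDS = {
    "third-party sharing": ["third party", "third parties", "affiliates"],
    "government disclosure": ["law enforcement", "government", "security agency"],
    "policy changes": ["we may update", "change this policy", "revise"],
}

def report_card(policy_text):
    """Flag each category as risky if any of its keywords appear in the policy."""
    text = policy_text.lower()
    return {category: any(k in text for k in keywords)
            for category, keywords in RISK_KEYWORDS.items()}

policy = ("Any and all data generated by NameDrop may be shared with third "
          "parties, including government agencies and security agencies.")
print(report_card(policy))
# {'third-party sharing': True, 'government disclosure': True, 'policy changes': False}
```

Even this crude version shows the appeal: a few labeled flags convey the gist of a clause in seconds, where the clause itself might never be read at all.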

Internet Seals of Approval

Several third parties endorse other companies with seals intended to show consumers that the endorsee has completed certain steps to keep its users safe; TRUSTe and BBB are two examples. Both claim that users will trust a site more if it displays their respective badges. While recent research on these services is lacking, a 2002 study found that they may not be all that helpful to consumers.

Unfortunately, the study found that having a seal did not appear to influence how compliant a site was with FTC guidelines regarding privacy policies. Worse still, when a site displayed one of these icons, some respondents said they would be more willing to share their information with the site. This is a big issue, considering that displaying one of these icons does not necessarily mean that the site is more invested in your privacy (Miyazaki, 2002). Again, this study is old, and hopefully these organizations have since cracked down on how their icons are used, but there is no easy way to know.


Legal Control

This argument claims that notice and consent should be completely replaced by stricter privacy laws that restrict what data companies can collect and what they can do with it. This method would push for changes in privacy laws around the world that place more protections on consumers, while also not burdening companies with obtaining consent for some of their data usage (Cate, 2016).


Monetization

Companies profit by collecting your data and selling it to interested parties. The idea of monetization is that you sell your data directly to the end user and cut out the middleman. Here’s an example: say you are going to buy ice cream. You go to the ice cream parlor, and they offer to sell you the ice cream at a discount if you share your location history for the last hour with the company. This way, you decide what data you share and who you share it with, and you are compensated in return. The ice cream shop also benefits, as it can use your data to improve its product offering.

This is already being done in some cases; take Progressive’s Snapshot program, where users willingly share their driving data with Progressive in hopes of receiving a lower rate. Watch Stuart Lacey’s TED talk to learn more about the monetization alternative.

Which choice is best?

I don’t know. This is a big decision with many factors in play. I believe that attempts to change the policies while keeping them as the primary method will not be effective; the percentage of people who never look at these policies is too great, and I don’t believe that including videos, tables, or standard formats will change the behavior of the masses. I believe that good supplementation can give consumers more power to make choices based on privacy concerns. Still, I also support stricter, revised regulations built with our current practices in mind, rather than outdated principles written before smartphones existed. What alternatives do you think would be most effective?


Cate, F. (2016) The Failure of Fair Information Practice Principles. Consumer Protection in the Age of the Information Economy.

Doneen, D. (2020) Why Privacy Policies Suck. Retrieved from https://datadigested.com/2020/10/11/why-privacy-policies-suck/

Garrison, L., Hastak, M., Hogarth, M., Kleimann, S., Levy, A. (2012) Designing Evidence‐based Disclosures: A Case Study of Financial Privacy Notices. Journal of Consumer Affairs, 46:2. doi: 10.1111/j.1745-6606.2012.01226.x

Lacey, S. (2015) The Future of Your Personal Data – Privacy vs Monetization. Retrieved from https://www.youtube.com/watch?v=JIo-V0beaBw

Miyazaki, A. (2002) Internet Seals of Approval: Effects on Online Privacy Policies and Consumer Perceptions. Journal of Consumer Affairs, 36:1. doi: 10.1111/j.1745-6606.2002.tb00419.x

McDonald, A. M., Cranor, L. F. (2008) The Cost of Reading Privacy Policies. I/S A Journal of Law and Policy, 4:3 543-568.

Obar, J. A., Oeldorf-Hirsch, A. (2018) The Biggest Lie on the Internet: ignoring the privacy policies and terms of service policies of social networking services. Information, Communication & Society, 23:1, 128-147. doi: 10.1080/1369118X.2018.1486870

Tesfay, W., Hofmann, P., Nakamura, T., Kiyomoto, S., Serna, J. (2018) PrivacyGuide: Towards an Implementation of the EU GDPR on Internet Privacy Policy Evaluation. Proceedings of the Fourth ACM International Workshop on Security and Privacy Analytics. doi: 10.1145/3180445.3180447