Firstly, I’m not against privacy or anything, just ignorant. I do try to stay pretty private despite that.
I wanted to know what type of info they (corporations? governments? websites??) typically get from you, how they use it, and how that affects me.
For me, there are several very different good reasons. I'll write them down in the order they come to mind, not from most important to least important or vice versa.
- I don't want to be judged before being met in person
What do companies do before they hire you, or banks you ask for money, etc.? They look you up on the net before they meet you and think they know you. You get no chance to present yourself the way you want. Yes, you can polish your social media, but not all the hidden profiles they've made of you…
- stalking
I once studied in a different country. I had three dates with someone I only then realized was weird, and kindly ended the dating. Then this person stalked me for a year and only stopped thanks to the police intervening, after a lot of effort on my side. Luckily the person only had my email and phone contact and knew which city I came from. I was SO glad I've never been much on social media and the like, because this person would have come to my hometown, waited in front of my university, or whatever; they even wrote they would. But they couldn't find out where. Most of my friends back then had their address, favorite bars, university courses, and favorite places all over the net. I was so scared back then that I would have changed my entire life to be sure not to meet this person. Luckily I didn't have to, because my life was private. If you think stalking is very rare and you will not be affected, look up the numbers. It's horrifying, for women especially, but also for men. People don't talk about it, but during that time, speaking with friends, I suddenly knew five other people who had already dealt with it. Also, nowadays there are apps where you can take a picture of someone and search for that face on the internet… A stranger can do that, find all your photos, social media, etc., and be there in your favorite bar every night… Scary.
- data is power.
On the "I have nothing to hide" thing: think of Google Maps. They don't care where you go at all, but by knowing where everyone goes they can predict traffic and give you alternative routes; theoretically they could also cause traffic jams… :D While Maps is useful, they can do the same with any kind of information. They are not interested in your political views, but by knowing everyone's political views they can predict and also direct them. This isn't theoretical; it has already been used in elections. There was a scandal about it a few years ago, around Brexit I think (Cambridge Analytica).
- I like to have control over my own data
Sometimes I might even agree to share some data in exchange for something, but I am forced into it all the time, and I have no control over what's done with the data, whom it's sold to next, etc. It's still my personal data…
- governments/ laws change
Well, I am German, actually from an East German family. If my family taught me anything, it's that political systems can change very easily. My great-grandparents, my grandparents, my parents, and I grew up in four different political systems, well, actually five :D. Every one of those systems had different views of who is good and bad, and it was better they didn't know everything about you…
Just because you feel you have nothing to hide now doesn't mean that information won't be very much worth hiding in the future.
Governments change.
Laws change.
Dictators, autocrats, and police states abhor dissent. Free thought and peace of mind are what you're protecting.
And even then, even if you somehow trust that your current political landscape won't ever go down that route: if lawful invasion of your privacy is possible, then unlawful invasion is possible too, and you're far more of a target for scams, identity theft, etc.
By being an anonymous data point, you also help those who are persecuted look normal.
Also: "Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say."
— Edward Snowden
(I am not an expert, just a hobby self-hoster)
Think of how police obtain information about people. They usually do an investigation involving questioning and warrants to receive records and put together a case. They must obtain consent from someone or get a warrant from a judge to search records.
Or, they could just buy info from a data broker and obtain a massive amount of information about someone.
Imagine if every company had this info and could tie it into your daily life. Google probably has your location history and can see exactly what routes you've taken lately. They can use that information, with timestamps, to estimate your speed. What if they sold it to your car insurance company, which then uses it to raise your rates because you're labeled a speeder?
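Just to show how little math that takes, here's a minimal sketch in plain Python with made-up coordinates and timestamps (this is not Google's actual pipeline, just the arithmetic anyone holding timestamped location data can do):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Two hypothetical location pings from the same phone, 60 seconds apart.
p1 = {"lat": 52.5200, "lon": 13.4050, "t": 0}    # timestamps in seconds
p2 = {"lat": 52.5390, "lon": 13.4050, "t": 60}

distance_km = haversine_km(p1["lat"], p1["lon"], p2["lat"], p2["lon"])
hours = (p2["t"] - p1["t"]) / 3600
print(f"estimated speed: {distance_km / hours:.0f} km/h")  # ~127 km/h here
```

Two pings and a clock are enough to label someone a speeder; a full location history gives you that for every trip they ever took.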
What if your purchase history is sold to your health insurance provider and they raise your deductible because most of your food purchases are at unhealthy fast food joints?
Now, with AI being shoved into every nook and cranny of the tech we use, it can quickly build a profile on you if it's fed your chat history. Even your own voice isn't safe if it can be accessed by AI. This can be used to emulate you: interests, chats, knowledge, voice. People could use this to steal your identity or access your accounts.
Actually, police (and governments) don't need to purchase your data. They can gather anything and everything from what people share publicly and constantly on social media. Countless people have been arrested because of what they shared publicly and the metadata included with that share.
If they need criminal info they have immediate access to it.
The concern isn't that you're doing something wrong; it's that the data you put out there can be used against you in countless ways. Marketing, sales, and so on are the least of your worries. If anyone wants to threaten you or your loved ones, or even trick you into thinking they're in a threatening situation, most people don't realize how easy that could be with the data they give away daily.
That’s why I said this:
Or, they could just buy info from a data broker and obtain a massive amount of information about someone.
They don’t need warrants for location data if it’s bought from a company that sells that data.
Whether or not it’s admissible in court is another question, though.
I am/was in the same boat as you: For a long time, I just didn’t care that I was giving away a bunch of information in return for convenience, and didn’t get why people cared so much.
I don’t really know what triggered it, but at some point I became painfully aware that the only goal these companies have is to squeeze every possible penny out of selling me. I started noticing that the stuff they ask you to confirm is 95% stuff they want because they can sell it, or use it to get you hooked to their service, and 5% (at best) stuff they need to make the service good for you.
This triggered a change in my perspective: Now it pretty much makes me sick to my stomach to think about all the companies that are drooling over me, trying to make a buck by getting me to click something I’m not actually interested in, or don’t actually need.
These people have a vested interest in manipulating me, and by giving them my data, I’m giving them the tools to do it. I don’t want to be manipulated or sold as a product: That’s what made me start caring about protecting my data.
This is it for me too. I’m not going to allow companies to monetise me or my data any more than the absolute minimum I have to.
One thing I try hard at is making sure that I never have to see a single advert in my own home. I don’t have TV, I don’t watch any streaming services if they have ads, and I adblock everything. I don’t care how good a product is, how cheap or free, if it has advertisements I’m out.
To me it’s about having sufficient self-respect to not let companies live in my head rent-free.
Let me tell you a story. Many years ago I worked for big banks and insurance companies. One day I was tasked with a project that, from a tech point of view, was amazing. It went something like this: a user navigates to a bank website looking for information about some product. The website presents the user with a simple contact form: first name, last name, phone number and/or email. The bank would use the submitted data to update its records on that user (if there was no official account, it would update the "ghost" account, aka "I know about you, but you don't know about me"). Next, the bank would scrape all publicly available social media accounts and build the "hidden" profile (I'll get to this later). Based on all that data, the user would be assigned a score on which all future interactions with the bank would be based. For a regular person this would mean "I'm sorry, but according to our system we cannot give you a loan."
Now, about the "hidden" profile. It's something all big companies (including banks and insurance companies) keep. It's all the data collected from publicly available profiles (and sometimes from shady sites), used to create a profile that's not visible to frontline workers and is only ever referenced as "the system decided based on your data".
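To make the shape of such a system concrete, here's a rough sketch of what that kind of pipeline could look like. Every function name, data source, and threshold below is invented for illustration; it is not the actual system from the story:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """One applicant's merged data: what they submitted plus what was scraped."""
    email: str
    declared: dict = field(default_factory=dict)   # from the contact form
    scraped: dict = field(default_factory=dict)    # from public social media

def scrape_public_sources(email: str) -> dict:
    """Placeholder for scraping public accounts tied to an email address."""
    # A real system would query social networks, data brokers, etc.
    return {"posts_about_risky_hobbies": 3, "employment_gaps": 1}

def score(profile: Profile) -> int:
    """Toy scoring rule; real models are opaque even to frontline staff."""
    s = 100
    s -= 10 * profile.scraped.get("posts_about_risky_hobbies", 0)
    s -= 20 * profile.scraped.get("employment_gaps", 0)
    return s

def handle_contact_form(first: str, last: str, email: str) -> str:
    profile = Profile(email=email, declared={"first": first, "last": last})
    profile.scraped = scrape_public_sources(email)
    # The applicant never sees this number, only the decision derived from it.
    return "offer a loan" if score(profile) >= 70 else "we cannot give you a loan"

print(handle_contact_form("Jane", "Doe", "jane@example.com"))
```

The point of the sketch is the asymmetry: the user typed four fields into a form, and a whole hidden profile and score got built around them without their knowledge.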
Now, to make this scarier: this happened 10-15 years ago, way before the so-called AI boom. Imagine how much more data those companies have about you in today's world and how good they are at processing it.
Now I have another question: what's the issue if they're ONLY using this info to improve my experience or make sensible business decisions?
They are using the info to engineer more efficient ways to separate you from your money. It’s not a benefit to you in any way.
I would have to assume that if I'm buying the product, I want it.
Hey guys, this right here is a super valuable point to address, and it strikes straight to the heart of how a system like this gives the illusion of choice. People will absolutely still think, despite all this, that they are in control, and we need to address that, not dismiss it.
I’m undoing the downvote on this comment, it absolutely is a big part of the conversation, even if you think it’s naive.
It’s naive to think you can’t be influenced into buying things you wouldn’t otherwise.
Also there’s the matter of pricing: they’ll get you to pay as much as possible, either by pushing more expensive versions or by actually changing the price you see on websites like Amazon.
“Improve user experience” tends to mean if you’re poor, the lowest level of hell isn’t gonna compare to how shitty of an experience they’ll give you
What’s the issue if they’re ONLY using this info to improve my experience
Suppose they start out entirely benevolent. That commitment must be perpetually renegotiated and upheld over time. As the landscape changes, as the profit motive applies pressure, as new data and technologies become available, as new people on the next step of their careers get handed the reins, the consistency of intention will drift.
The nature of data and privacy is such that they're perpetually subject to these dynamic processes. The fabric of any pact being made is always being rewoven, first with little compromises and then with big ones.
They don't use it only for improving user experience. Based on your user profile, they can bump your premiums just because you posted a photo on a snowboard (risky activity), or they can deny you a loan because someone posted on your timeline that you owe them money.
Also, based on your profile, you are manipulated into buying products and services you don't really need.
You have asked the most important question in this topic. Privacy and security only have meaning when you develop a threat model or encounter a threat. With digital security it is usually pretty straightforward in that you don’t want anyone else controlling your computer or phone and using it for their own ends. And a lapse in digital security can ruin attempts to secure privacy.
Privacy is where threat models should be developed so that you (1) don’t waste time worrying about and working around nonexistent threats and (2) can think holistically about a given threat and not believe in a false means of privacy.
For example, if you are of a marginalized community, closeted, and in a very unsafe living situation, your main threat model might be getting doxxed and outed. To prevent this you should ensure that there is little to no information that would link your real identity to an online identity, and you should roll accounts so that small slipups can't be correlated. VPNs probably don't help in this threat model, but they don't hurt either. A private browser does nothing in this situation. Securing your phone and not leaving it unlocked anywhere is good for this situation (sometimes privacy isn't really about tech but behavior). Using strong passwords that can't be guessed helps with this situation. Making a plan to move to a safe living situation so you can be out will resolve the threat entirely, though it may mean needing to think about new ones.
Notice that the government was not in this threat model and that it was more about violence toward the marginalized. Cis white guy techbros generally have nothing to worry about re: infosec and are just being enthusiasts or LARPers. Nobody is showing up at their house with a gun, and the feds are not going to arrest them for having the most "centrist" political takes and actions available. The people that need to protect themselves are those facing overt, targeted marginalization, or those who take political action that the government wants to, or would eventually want to, suppress. For example, the US government labelled anti-apartheid groups as terrorist organizations and intimidated or jailed those they could identify. It has a habit of doing this to any advocacy group that gains steam and actually poses a political threat to its opponents.
Even if you don’t have a threat model, though, having good digital hygiene is useful in case one develops in the future. You may currently do political work that seems safe, and it is because it is not perceived as a threat. Let’s say you help organize unions. But there have been times where organizing unions would mean you’re targeted by the government and hired thugs and those times can easily return. If they have compiled a database of likely union sympathizers, will your name be in there? Maybe that’s a risk that you just take. But maybe you should use good privacy practices so that you can go underground when needed.
The latter applies to the threatless cis white techbro "centrists". Such an individual may someday change politically or in their gender identity, and having good practices would then pay off.
Hi! Although your post is full of reasonable advice on maintaining privacy online I want to challenge you on the statement that the threat model matters. The contrapositive of the statement “I don’t need privacy if I have nothing to hide” is “I have something to hide, if I need privacy”. This puts those marginalized groups you mentioned in a position where simply by using a privacy tool or technique, they draw suspicion to themselves. It might immediately raise subconscious alarms in internet communities like facebook, where the expectation is that you use your real name.
The only way privacy measures work for anyone, is if they’re implemented for everyone.
Further, I’d like to challenge the concept that a cis white tech bro has nothing to hide. There's a big invisible "for now" at the end of that statement. The internet, mostly, never forgets. We've had waves of comedians get "cancelled" over tweets they made years ago. Times change, people grow, laws regress. Posting statements about abortion is, as of this year, suddenly unsafe. Maybe posting about neurodivergence comes next. Who knows, with the way the world is going, maybe 5 years from now you'll regret having posts on /c/atheism associated with you.
I think a good way to be considerate of privacy is to think in terms of identities, what those identities are for, and what links those identities. Does your identity on github need to make comments about your political leanings? Should your resume have a link to your github? Does your identity on etsy need to have a link to your onlyfans? Does your dating profile need a link to your reddit account? Your "2nd" reddit account? Not all of these have clear yes or no answers; they're just things to consider and make decisions about. Also, consider which classes of identities you only have one of, and which are for the most part unchangeable, e.g. attaching your phone number to two separate identities functionally links them.
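To make that last point concrete, here's a tiny sketch (invented data, plain Python) of how trivially two "separate" identities get linked the moment they share an unchangeable attribute like a phone number:

```python
# Two datasets a broker might hold: a real-name shopping profile and a
# pseudonymous forum account. Neither says much on its own.
shop_accounts = [
    {"name": "Jane Doe", "phone": "+1-555-0142", "city": "Springfield"},
]
forum_accounts = [
    {"handle": "anon_badger", "phone": "+1-555-0142", "posts": ["politics", "health"]},
]

# One join on the shared phone number and the pseudonym stops being a pseudonym.
linked = [
    {**shop, **forum}
    for shop in shop_accounts
    for forum in forum_accounts
    if shop["phone"] == forum["phone"]
]
print(linked)
# [{'name': 'Jane Doe', 'phone': '+1-555-0142', 'city': 'Springfield',
#   'handle': 'anon_badger', 'posts': ['politics', 'health']}]
```

The same join works on email addresses, recovery phone numbers, payment details, or any other identifier reused across identities.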
The contrapositive of the statement “I don’t need privacy if I have nothing to hide” is “I have something to hide, if I need privacy”.
I said neither. I said that the marginalized have relevant threat models and, at least in the state they are currently in, cis white techbros generally do not and treat privacy as a hobby, failing to develop realistic threat models. This doesn’t translate into either of those sentiments.
This puts those marginalized groups you mentioned in a position where simply by using a privacy tool or technique, they draw suspicion to themselves.
That really depends on the specifics of the technique. And if your threat model is the entities that could draw those conclusions, namely a government, they will tend to do that regardless. For those threat models you should really be shedding digital communication entirely and making a plan to leave.
But sure, something like having a ton of boring and diverse traffic flowing through a VPN is part of what makes VPNs a privacy tool at all.
It might immediately raise subconscious alarms in internet communities like facebook, where the expectation is that you use your real name.
Alarms among whom, and what are the threats? This means nothing without a threat model.
The only way privacy measures work for anyone, is if they’re implemented for everyone.
This is simply false. For example, not everyone needs to meet in person for meeting in person to be an option for staying private. So long as you have a means to avoid leaking certain information to certain people, you can meet the needs of a threat model.
Further, I’d like to challenge the concept that a cis white tech bro has nothing to hide.
Not what I said.
I think a good way to be considerate of privacy is to think in terms of identities, what those identities are for, and what links those identities.
The only meaningful way to think about it is in terms of threat models. Identities are an aspect of engaging in certain online activities; they only have meaning relative to a threat model. I agree that it is a good idea to keep employers out of your political activity by not tying the two together, but that is because we live under capitalism, where your employer can remove your means of providing for yourself whenever it wants. That threat model is ubiquitous, just differing slightly in its form (delays, the need for lawyers, etc.). There are of course more threat models re: political activity.
The risk of not considering threat models and instead adopting broad brush practices is that you can fail to adequately weigh threats or get a false sense of security.
Those are good reasons and I’m glad you think about and develop these threat models. And sorry you have to deal with them.
In addition to everything everyone has said, one major thing people often don't think about with privacy is how it relates to enshittification.
Modern software services try to optimize everything to make as much money as possible. Everything is a/b tested, and whatever increases some arbitrary metric is what gets released.
They do this by tracking a ton of metrics about how you interact with everything. I know that where I work, we collect data about every time you click on anything, how long you hover over buttons, etc.
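For a feel of what that kind of instrumentation produces, here's a minimal, hypothetical sketch of the sort of event records such telemetry ships off; the field names are invented, not any particular company's schema:

```python
import json
import time

def make_event(user_id: str, event_type: str, target: str, **extra) -> dict:
    """One interaction event, shaped the way it might be sent to an analytics backend."""
    return {
        "user_id": user_id,
        "event": event_type,       # e.g. "click", "hover"
        "target": target,          # which button or element
        "timestamp": time.time(),
        **extra,
    }

# A user hovers over the "upgrade" button for 2.4 seconds, then clicks "dismiss".
events = [
    make_event("u-12345", "hover", "upgrade_button", duration_ms=2400),
    make_event("u-12345", "click", "dismiss_button"),
]

# Shipped off as JSON and aggregated later to decide which variant "wins" an A/B test.
print(json.dumps(events, indent=2))
```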
Name, address, GPS location data, habits (like which apps you often use and when you use one device or another), gender, search terms in search engines, pages open in your web browser, connections (other people you know), the work you do and where you work.
All kinds of things, really.
The usage is mostly advertising or identity theft.
I’m probably gonna mess this quote up, but I thought it was brilliant:
“Privacy is essential to security, and shitty people feel entitled to take that away from you.”
You can’t be secure in your dealings or operate on equal footing (economically speaking, as others here have pointed out) without a measure of privacy.
The jogging app Strava once posted an image on their Twitter: a heatmap of all the jogging activity of all of their users. Their idea was just to show how popular their app was by lighting up the entire world. Twitter users were able to locate secret US military bases from that data alone. Turns out nobody jogs in circles in the middle of the desert except GIs.
Recently a group of Harvard students did a demo where they used Meta's camera glasses and a chain of commercial programs and products to find out people's names, addresses, workplaces, and family members based only on their facial data.
These are just two examples off the top of my head. Essentially, the more data someone can accumulate, the more info can be analyzed from it. With things like AI tools, that analysis is incredibly fast even with huge datasets.
SSRN is a kind of vast warehouse of academic papers, and one of the most cited and well-read ones is Daniel Solove's "'I've Got Nothing to Hide' and Other Misunderstandings of Privacy." The essence of the idea is that privacy is about more than just hiding bad things. It's about how imbalances in access to information can be used to manipulate you. Seemingly innocuous bits of information can be combined to reveal important things. And there are often subtle and invisible harms that are systematic in nature, enabling surveillance-state institutions to exercise greater amounts of control in anti-democratic ways and creating chilling effects on behavior and free speech.
le user generated summary (no gee-pee-tee was used in this process):
comment 1/2
Section I. Introduction
skip :3
Section II. The “Nothing to Hide” argument
We expand the "nothing to hide" argument into a more compelling, defensible thesis. That way we can attack it more cleanly:
“The NSA surveillance, data mining, or other government information-gathering programs will result in the disclosure of particular pieces of information to a few government officials, or perhaps only to government computers. This very limited disclosure of the particular information involved is not likely to be threatening to the privacy of law-abiding citizens. Only those who are engaged in illegal activities have a reason to hide this information. Although there may be some cases in which the information might be sensitive or embarrassing to law-abiding citizens, the limited disclosure lessens the threat to privacy. Moreover, the security interest in detecting, investigating, and preventing terrorist attacks is very high and outweighs whatever minimal or moderate privacy interests law-abiding citizens may have in these particular pieces of information.” (p. 753, or pdf page 9)
Section III. Conceptualizing Privacy
A. A Pluralistic Conception of Privacy (aka “what’s the definition”)
Privacy can't be defined as intimate information (Social Security numbers and religion aren't "intimate"), or as the right to be let alone (shoving someone and not leaving them alone isn't a privacy violation), or as 1984-style Orwellian surveillance and chilling social control (your beverage use history isn't social control) (p. 755-756 or pdf page 11-12).
Privacy is kind of blobby so we define it as a taxonomy of similar stuff:
- Information Collection
- Surveillance
- Interrogation
- Main problem: “an activity by a person, business, or government entity creates harm by disrupting valuable activities of others” whether disruption is physical, emotional, chilling of socially beneficial behavior like free speech, or causing of power imbalances like executive branch power.
- Information Processing
- Aggregation
- Identification
- Insecurity: Information might be abused. You can think of ways ;)
- Secondary Use
- Exclusion: People have no access nor say in how their data is used
- Information Dissemination
- Breach of Confidentiality
- Disclosure
- Exposure
- Increased Accessibility
- Blackmail
- Appropriation
- Distortion
- Main problem: How info can transfer or be threatened to transfer
- Invasion
- Intrusion
- Decisional Interference
- Main problem: Your decisions are regulated
(p. 758-759, or pdf page 14-15)
So privacy is a set of protections against a set of related problems (p. 763-764 or pdf page 19-20).
comment 2/2
B. The Social Value of Privacy
Some utilitarians like Etzioni frame society's needs and individual needs as a dichotomy where society should usually win (p. 761 or pdf page 17). Others like Dewey think "individual rights are not trumps, but are protections by society from its intrusiveness" and should be measured in welfare, not utility. "Part of what makes a society a good place in which to live is the extent to which it allows people freedom from the intrusiveness of others" (p. 762 or pdf page 18). So privacy can manifest in our right not to be intruded upon.
Section IV. The problem with the “Nothing to Hide” argument
A. Understanding the Many Dimensions of Privacy
Privacy isn’t about hiding a wrong, concealment, or secrecy (p. 764 or pdf page 20).
Being watched has "chilling effects" [i.e., getting scared into not doing something] that "harm society because, among other things, they reduce the range of viewpoints expressed and the degree of freedom with which to engage in political activity"; but even so, it's kinda super hard to prove that a chilling effect happened, so it's easy for a Nothing to Hider to say that the NSA's "limited surveillance of lawful activity will not chill behavior sufficiently to outweigh the security benefits" (p. 765 or pdf page 21). Personal damage from privacy violations is hard to prove by nature, but it still exists.
If we use the taxonomy, we notice that the NSA thingamabob has:
- Aggregation: if some mysterious guy Kafkaesquely compiles a crapton of data without any of your knowledge – with human bureaucratic “indifference, errors, abuses, frustration, and lack of transparency and accountability” – then they could pretty easily decide that they can guess what you might be wanting to hide or predict what People Like You might do later. Oopsie: it’s kind of hard to refute or hide a “future” behavior.
- Exclusion: You have no idea what they’re doing or if it is CORRECT information. That’s a kind of due process problem and a power imbalance – the NSA is insulated from accountability even though they have hella power over citizens.
- Secondary use: “The Administration said little about how long the data will be stored, how it will be used, and what it could be used for in the future. The potential future uses of any piece of personal information are vast, and without limits or accountability on how that information is used, it is hard for people to assess the dangers of the data being in the government’s control”
But the Nothing to Hide argument only focuses on one or two of these problems and not the others. So it's unproductive.
(p. 766-767 or pdf page 22-23)
B. Understanding Structural Problems
Privacy harm isn't usually one big harm, like the cases where Rebecca Schaeffer and Amy Boyer were killed by a DMV-data-using stalker and a database-company-using stalker respectively (p. 768 or pdf page 24); it's closer to a bunch of minor harms accumulating, like gradual pollution.
Airlines violated their privacy policies after 9/11 by giving the government a load of passenger info. Courts decided the alleged contractual damage wasn’t anything and rejected the contract claim. However, this breach of trust falls under the secondary use taxonomy thing and is a power imbalance in the social trust between corpo and individual: if the stated promise is meaningless, companies can do whatever they want with data – this is a structural harm even if it’s hard to prove your personal damages (p.769-770 or pdf page 25-26)
There should be oversight – warrants need probable cause, wiretaps should be minimal and with judicial supervision – Bush oopsied here (p. 771 or pdf page 27).
“Therefore, the security interest should not get weighed in its totality against the privacy interest. Rather, what should get weighed is the extent of marginal limitation on the effectiveness of a government information gathering or data mining program by imposing judicial oversight and minimization procedures. Only in cases where such procedures will completely impair the government program should the security interest be weighed in total, rather than in the marginal difference between an unencumbered program versus a limited one. Far too often, the balancing of privacy interests against security interests takes place in a manner that severely shortchanges the privacy interest while inflating the security interests. Such is the logic of the nothing to hide argument” (p. 771-772 or pdf page 27-28).
Section V. Conclusion
Nothing to Hide defines privacy too narrowly and ignores the other problems of surveillance and data mining.
Privacy is important because it gives you control over your life: details, info, thoughts, emotions…
I recently met a guy out of town at a trade show. We were both in the same show, grabbing some snacks, and I complimented his hat. We started talking, a little this, a little that. Eventually we parted ways. On the outro we introduced ourselves by first name only, more as a BTW side note because we might run into each other again. Why am I telling this story?
Because I forgot his name almost instantly and really only remember his hat. I know nothing about the guy. He knows nothing about me. But wouldn't it be weird if I not only remembered his first name, but knew his last name too? Where he lived, worked, shopped for groceries, his sexual orientation, the last time he ordered pizza and what toppings were on it, how he voted last election, etc.… If I knew all that about him, I could have a much more in-depth conversation with him. And even if I had no ill intent and simply wanted to give him better experiences in life… that's not my decision to make. He didn't ask for that. And it's freaking weird.
But that’s what has been made normal in our lives. Privacy helps keep your life…well, private.
Then the rabbit hole goes deep on nefarious uses. And it's not "it's possible to do this," but rather "it's being done" (with absolutely no doubt or argument).
It's the same reason you have curtains over your windows: so random people can't peek into your private life.
I feel like being spied on on the Internet is kind of like having a camera in your bathroom.
Sure they promise they’re only going to point it at the sink and just make sure that you’re engaging in proper toothbrushing habits.
Sure.
But they'll set it at a point where the mirror shows the shower and the toilet. They've got smell detectors in there to determine how much food you've eaten and how well you're digesting it, and there's a sensor in the toilet to check the content of your urine. And if you drink too much, they're going to tell your boss you've been drinking, because they detected the alcohol your body flushed out in your urine when you peed.
And you have no control over who gets to see what’s going on in your bathroom.
It is morally wrong and psychologically oppressing to be spied upon.
And the powers that be are so focused on the benefits it gives them that they do not care about the negatives that affect us.
Why do you need curtains on your windows?
To make sure the whole world isn’t just a window for the HR department. I can have “dissident” views, or just talk trash with my friends, and not get fired since it wasn’t at the office.
To make sure real dissidents (from totalitarian countries) can express their political views.
So a lady can send her husband feet pics without some secret agent spy gawking at them too.
So I can share my family’s secret BBQ sauce recipe with my cousin without Arby’s stealing it (they have eyes everywhere).
But these are all specific things. The truth is that we simply cannot trust institutions with all our data. I don’t need a reason for privacy. They need a reason to have my info. Security is a legit reason to seek citizens’ info, generally, but you should need a specific security-related reason to access a specific person’s data.