Paul's Internet Landfill/ 2018/ Steps Towards the Surveillance Dystopia

Steps Towards the Surveillance Dystopia

I get it. You all think I am being paranoid, that I overstate the surveillance risks of technology because I am an old geezer. None of you think that Google or Facebook or Apple will actually do anything nefarious. None of you believe that we are on the road to psychological manipulation by our friends at Silicon Valley, even though you go to sleep with your smartphones by your side and the first thing you look at when you get up is your screen. I am just another one of those out-of-touch crackpots who rails against smartphones the way previous generations railed against personal computers (and where did that get them?). Fine. Here are some completely innocuous items of note:

Autocompletes and Autonomy

Item: I learned that I was being psychologically manipulated when Spark's Nora Young narrated a personal essay about the predictive text features in Gmail. She was concerned because Gmail suggests text for her to put in her emails, and accepting that text is less work than coming up with her own wording. So in some sense Gmail is manipulating her into typing words it thinks are appropriate.

Similarly, you had better hope that the autocompletions you get when you do a web search actually reflect what you are searching for, and not just a "helpful suggestion" that distracts you.

Data Storage and the Disabled

Item: In that same episode there was a heartwarming story about the "Seeing AI" voice assistant, which can help blind users navigate the world by having their phone identify bills, recognise faces, and read text. One fragment of the story involved a teacher who had the app recognise all of his students as they entered the class. Those students all had to have their pictures taken and labelled for this functionality to work. Where does all that data live? For what other purposes is this data used? Clearly this is not the "aggregated data" smokescreen companies use to try and convince us that our data is anonymized; this app relies on specific pictures of specific people. Are we supposed to trust our Redmond, Washington overlords with this information?

The other aspect of this has to do with disability as a wedge against privacy concerns. Who could possibly be against cheap, ubiquitous assistive technology that comes with smartphones? Don't we want disabled people to be independent? Yes we do, but there are side effects nobody ever talks about.

Facebook and Suicide Prevention

Item: Last year there were some news stories about Facebook improving its algorithms for detecting suicidal ideation. When Facebook detects such unauthorized sentiments, they can be reported to first responder organizations who reach out to the infringing individual, who can then be re-educated into embracing life.

Isn't that sweet? Who could possibly be against algorithms that help prevent suicide?

You do know that this is direct sentiment analysis, right? And that the consequence for expressing incorrect sentiments online is to be reported to others? You do realize that this is not in any way anonymized? This is one step away from direct psychological manipulation. The only difference is that instead of Facebook intervening directly to correct the thoughts of the offender, the offender is reported to other organizations. This demonstrates that Facebook has the ability to determine what you are thinking and take direct action based on those thoughts. Because the context is preventing suicide, nobody actually questions what is going on here, or its broader implications.

So now we have to play the game of assuming that Facebook will only use these powers for good, and not to increase its stock price. Of course we believe that. Facebook is our friend. Surely it has our best interests at heart.

The Demise of Google Plus

Item: Google+ is finally dead. This does not come as a huge surprise, but there are some details in the linked article that are very telling. Apparently there was a data breach in Google+. Because Facebook was getting grilled by the US Congress for its role in helping the Russians sway the US election, Google did not reveal the security breach for seven or eight months. They claim this is because they determined the security bug was not important enough to reveal, even though the news story suggests that Google did not keep enough logs to know this for certain.

Think about this. Google got caught breaching user trust once already, in the Edward Snowden leaks. They (and all their Silicon Valley friends) pledged that they took user integrity very very seriously and would never ever do such bad things again. They published transparency reports and made big shows about resisting government interference into their operations.

Now here comes a situation where there is a possible data breach, which Google chose not to disclose because doing so would not be expedient. Did they refrain from disclosing this because it was a minor bug? Would they have disclosed it in March if they had discovered that the bug had been exploited? What if doing so would have dragged Google before Congress? Google had one interest in this issue and users had another. In whose interests did Google act?

And when did Google choose to disclose this API bug? After the Wall Street Journal published a story about it. That's when. This is the company that takes our user data very very seriously. This is the company we are supposed to trust with everything, because their security engineers are much smarter than the rest of us, and they will do everything they can to keep our data secure. That's why they kept Google+ running well after they knew about the security breach, but then suddenly shut down the service after the publication of the Wall Street Journal article.

Now we are supposed to believe that Google won't betray us for its own political (or financial) expediency again? How many times do we intend to fall for this?


What have we learned? Technology is manipulating us via its helpful suggestions. Tech companies use assorted social causes (helping the disabled, preventing suicide) to build up support for creepy things. Tech companies can and do track us individually, and then they can and do manipulate us based upon what they learn. And when it comes to a tradeoff between doing the transparent thing and the politically expedient one, tech companies will choose political expediency.

But somehow I am the Luddite. Somehow I am the paranoid one. No doubt this is me buying into conspiracy theories that have solidified in my head, because I am in a filter bubble too. The tech companies are only trying to improve our lives. They have our best interests at heart.


Since writing this article I ran across an interesting advertorial produced in cooperation with (read: sponsored by) Google. The article discusses the future of marketing: identifying "high-value" customers and determining what they are going to want next. A direct quote from the article: "They need to know what customers want before they do -- and put the most relevant content in front of them". But somehow I am the paranoid one.