Privacy invasion is a campaign weapon.
October 22, 2019

Supporters of President Trump at a rally in August. (Photo: CJ Gunther/EPA, via Shutterstock)
Privacy may not be a policy issue for most of the 2020 presidential candidates, but their campaigns will depend on intrusive tracking. Last week, my Opinion colleague Thomas Edsall laid out how "Trump is winning the online war," and it got me thinking about the ways in which all the data invasions the Privacy Project has chronicled this year are continually refined as a campaign weapon.
One bit from the piece that stuck with me is a comment from Michael Podhorzer, the political director of the A.F.L.-C.I.O.:
In an email, he described some of the technological advances that have allowed Trump and the Republican Party to leave Democrats in the dust: "The key is MAIDs — Mobile Advertising IDs. All of our phones have a unique MAID."
Devoted readers of the Privacy Project will remember mobile advertising IDs as an easy way to de-anonymize extremely personal information, such as location data. This granular information, as Podhorzer notes, is incredibly helpful to campaigns:
"Imagine a file with that, and every piece of information taken from your smartphone." This, he argues, "is the world we're moving to." In this new terrain, "the G.O.P. is running pretty far ahead of the Democrats innovating online, mostly because of their financial advantage."
Edsall goes on to explain, using language from an updated Trump campaign privacy policy, how the Trump campaign is using beacons to collect location information on attendees at rallies. It's not entirely clear how much data the campaign is getting, but even in small portions the information is extremely valuable, especially when cross-referenced with other data sets from brokers, like financial information.
None of this, it should be noted, is illegal or even frowned upon. It's standard practice in the marketing industry. It makes sense that granular information collection, like mobile location, has moved into the political realm. But in politics, the stakes are incredibly high — especially when it comes to voter persuasion and targeting.
Should any of that data leak or be abused by a campaign, it has the potential to trigger nuclear-level scandals. The Cambridge Analytica debacle, which spurred governmental investigations in multiple countries, concerned fairly mundane data, including profile names and the "likes" of users' Facebook friends. In other words, it was hardly as invasive as the information collected by location beacons.
There's a lesson in here. Somewhere. If the 2020 campaigns are the most sophisticated data-driven operations in history, then it stands to reason that any political abuse of our most personal information will be unprecedented, too.
Andrew Yang and How to Claim Our Data
In last week's newsletter, I interviewed the presidential candidate Andrew Yang. He supported the idea of internet users getting paid by Big Tech for the use of their data, something called a data dividend. Yang also suggested that abuse of our data and the consolidated power of online platforms have made us "like rats in a maze." He added that "there's something fundamental at stake here, which is: What does human agency look like?"
Though Yang is not a front-runner in 2020, he's the only candidate who has put forth data privacy proposals, and, as with his Universal Basic Income proposal, those policies might help shift privacy issues into the 2020 conversation.
That said, I was disappointed by Yang's non-answer when I asked him about his proposal to make our data into a property right. If we treat privacy as a property right, what we're implicitly saying is that we have the right to trade or sell our data, which might mean those who are most desperate for extra money end up selling their information while those who are richer keep theirs private. While I found almost all of Yang's answers to be thoughtful, I felt he dodged the central issue here. Instead, he argued that people wouldn't be as desperate for money with his $1,000-a-month "freedom dividend" and that some people will always want to share more than others.
How we view data is critical to how we regulate its use. In Privacyland right now, the "data as property" argument is bubbling up frequently. In a comprehensive piece for CNET, David Priest argues that "treating your data like property would be terrible." (Opinion contributor Sarah Jeong argued similarly here in July.) He also offers a potential explanation for why Yang didn't answer my question:
Yang's actual policy suggestions don't treat data like property. He proposes the rights for you to be informed of data collection and use, to opt out, to be told if a website has data on you, to be "forgotten," to be informed if your data changes hands, to be informed of data breaches and to download all your data to transfer it elsewhere.
Priest suggests maybe Yang is just using the wrong language. "Yang is correct that we have a claim" to our data, he writes. "The question we should all keep asking and attempting to answer is, 'How can we make that claim?'"
At OneZero, Will Oremus examines how much our privacy is really worth and concludes that "it's probably fruitless to try to pinpoint with a single number the value of privacy." He suggests that a better frame may be to look at privacy as a human right, which he defines as "something everyone deserves, whether they fully grasp its value or not."
Or how about another metaphor? A number of data dividend advocates argue that "data is the new oil," meaning that tech companies extract our personal information much as oil companies extract the natural resource from the ground. The data dividend idea is modeled in part on Alaska's Permanent Fund, which pays eligible state residents a yearly sum ($1,606 in 2019) out of oil revenues. In our interview, Yang used this phrase, too.
I think "data is the new oil" misses the point slightly. But the environmental metaphor — especially pollution — is a helpful model. In an interview, a former Federal Trade Commission chief technology officer, Ashkan Soltani, suggested a solution for data similar to a carbon tax for clean air. His analogy of choice was logging: |
A long time ago logging entities discovered and harvested a resource at scale before anyone else and one of their advantages was the ability to pollute. But it was only after some time that people said, "wait, that's my land, too" and mandated that they either needed to be able to restrict the ability to use it or have some sort of retribution in the form of a tax or carbon credit.
Pollution is also a handy way to think about why our data ought to be kept safe by default. As Oremus put it, "It's a bit like being asked how much you'd be willing to pay for your drinking water to be kept poison-free."
No wonder we're all fed up.
Send me your pressing questions about tech and privacy. Each week, I'll select one to answer here. And if you're enjoying what you're reading, please consider recommending it to friends. They can sign up here.
I've devoted two of these columns to the Equifax data breach settlement: why it's a raw deal and what you could do in protest. But I hadn't really understood just how egregious the company's data security practices were until last Friday, when Jane Lytvynenko at BuzzFeed News first shared some snippets from the class-action suit filed last January.
I went through the whole document, which alleges that Equifax "failed to take some of the most basic precautions to protect its computer systems from hackers." |
What kind of precautions, you ask? Well: |
The company relied upon a single individual to manually implement its patching process across its entire network. This individual had no way to know where vulnerable software in need of patching was being run on Equifax's systems. |
Surely, it can't get worse, you say? Uh: |
Sensitive personal information relating to hundreds of millions of Americans was not encrypted, but instead was stored in plain text, making it easy for unauthorized users to read and misuse. Not only was this information unencrypted, but it also was accessible through a public-facing, widely used website. This enabled any attacker that compromised the website's server to immediately have access to this sensitive personal data in plain text. |
When Equifax did encrypt data, it left the keys to unlocking the encryption on the same public-facing servers, making it easy to remove the encryption from the data. |
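To see why that last point matters, here is a minimal sketch, hypothetical and in no way Equifax's actual code, of how encryption buys almost nothing when the key sits on the same public-facing server as the data. It assumes the third-party Python cryptography package, purely for illustration.

```python
# Hypothetical sketch, not Equifax's system: encryption only helps if the key
# lives somewhere an attacker who compromises the web server can't reach.
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"ssn=123-45-6789")

# The anti-pattern alleged in the complaint: the key is stored right beside
# the encrypted records on a public-facing server.
public_server = {"records.enc": ciphertext, "encryption.key": key}

# An attacker who gets onto that server decrypts everything in one line.
stolen_key = public_server["encryption.key"]
print(Fernet(stolen_key).decrypt(public_server["records.enc"]))  # b'ssn=123-45-6789'
```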
But at least access to those unencrypted systems was locked down, right?
Furthermore, Equifax employed the username "admin" and the password "admin" to protect a portal used to manage credit disputes, a password that "is a surefire way to get hacked." |
Reading the full document, I think Equifax's malpractice highlights something important about data brokers, which are really just information middlemen. |
Equifax, like other data brokers, amasses highly personal information as a service to other companies. It collects this information whether the people it tracks want it to or not. The shoddy practices Equifax used to protect that data are what you get when a company doesn't really serve the people whose information it collects. And with no regulator providing oversight, there was no need to put the well-being of the people it collected data on above profits.