Friday, October 9, 2020

On Tech: Facebook’s China tactics backfire

Facebook is now complaining about something for which it is partly to blame.

Illustration: Daniel Maarleveld

Instagram’s boss had a message this week for the White House and the world: It was counterproductive for the United States to try to ban TikTok, the popular video app from China.

It’s bad for U.S. tech companies and people in the United States, Adam Mosseri, the head of Instagram, told Axios, if other countries take similar steps against technology from beyond their borders — including Facebook and its Instagram app. (He and Mark Zuckerberg have said this before, too.) “It’s really going to be problematic if we end up banning TikTok and we set a precedent for more countries to ban more apps,” he said.

Mosseri has a point. What he didn’t say, though, was that Facebook has itself partly to blame. The company helped fan the very fears about TikTok that it now worries will blow back on it. This is bonkers.

Facebook complaining about a bad policy that it helped initiate might seem like an eye-rolling joke, but it’s more than that. It’s the latest evidence that the company’s executives are incapable of foresight. Facebook’s failure to predict how its own actions might cause harm later on is part of why we have sprawling conspiracies and autocrats harassing their own citizens.

I genuinely wonder what Facebook expected to happen with its TikTok fearmongering. Over and over again for at least a year, Zuckerberg and other top Facebook executives privately and publicly spoke out against censorship by TikTok and other Chinese technology companies and complained that Chinese government support for domestic technology companies gave them a leg up over American companies.

They weren’t wrong. There are reasons to be worried about TikTok and other Chinese technology operating in the United States. But I don’t believe Facebook was bringing up these concerns out of principled commitment to American values. What Facebook was doing was pure short-term self-interest.

The company’s executives implied that if U.S. lawmakers regulated or restrained Facebook, then somehow — it was never clear how, exactly — Chinese companies like TikTok and Chinese values would take over the world. Playing up often legitimate concerns about Chinese apps also served to distract people from real problems with Facebook by yelling “LOOK OVER THERE!” about China.

There was plenty of concern in Washington about TikTok and Chinese technology even without Facebook pressing its points. But the company encouraged the sentiment that led American officials to try to bar TikTok from the United States.

TikTok probably won’t be banned. A ban never really seemed to be the point of the bizarre political theater. Still, a precedent has been set. As Mosseri warned, countries that are mad will probably feel emboldened to take it out on foreign tech companies by barring them from their borders.

A large proportion of the people who use Facebook and its Instagram, Messenger and WhatsApp apps are outside the United States, so those apps could well become the victims of government bans.

Only now are Facebook officials realizing that their TikTok trash talk helped unleash a monster that might hurt them. Usually the consequences of Facebook’s myopia fall on the most vulnerable people. This time, to the company’s utter shock, Facebook’s lack of foresight might hurt Facebook itself.

If you don’t already get this newsletter in your inbox, please sign up here.

Let’s talk about internet ads!

Did that headline make you excited?! Yeah, OK, no. But really, we should talk about internet ads.

Ads we see on websites and in other digital spots surpassed television commercials several years ago as the dominant way companies pitch their new cars, travel packages, and other products and services.

But there is a question that has been whispered for years about online advertising, including the types of personalized ads we see on Facebook and Google. What if … it doesn’t really work?

That question is getting renewed attention now because of a new book from a former Google employee who argues that the pervasive online ads based on digital dossiers of our habits are less accurate and less persuasive than their proponents believe.

That idea is overstated, I think. Online advertising is sprawling, and there is a lot of waste, outright fraud and overpromising. A lot. That’s less true of the ads sold by Facebook and Google and more true of the very long tail of advertising on the rest of the internet. (This is a good read on this topic.)

So yeah, some internet ads work really well. Some are garbage. But the problem is that all of it creates the conditions for a land grab to collect as much information on people as possible to craft ads targeted at each individual. Even if the ads are persuasive, the downsides of online advertising have gotten out of control.

What’s the way out of this? Comprehensive government regulation to force companies to collect less information about us. Period. That may be too much to hope for. But I am excited about experiments in advertising that are based not on who we are but what we’re doing right now.

If you are searching for Nike sneakers in Google, you’re probably going to be tempted by an ad for Nike sneakers. If you’re reading an article about Hawaii vacation spots — at some point in the future, when we can travel freely again — you might be interested in ads for holiday packages to Hawaii. Some news organizations are experimenting with this kind of advertising, as are companies like DuckDuckGo, a web search engine that competes with Google.
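
That idea is sometimes called contextual advertising, and it is simple enough to sketch in a few lines of code. What follows is a toy illustration in Python, not how Google, DuckDuckGo or any real ad system works; the ad inventory and the keyword-overlap scoring rule are invented for the example.

```python
# A toy model of contextual advertising: pick an ad based on what the
# person is reading or searching for right now, not on a stored profile.
# The inventory and the scoring rule are invented for illustration.

AD_INVENTORY = {
    "nike-sneakers": {"nike", "sneakers", "running", "shoes"},
    "hawaii-trip": {"hawaii", "vacation", "travel", "beach", "holiday"},
    "car-lease": {"car", "lease", "suv", "dealership"},
}

def pick_contextual_ad(page_text):
    """Return the ad whose keywords best overlap the words on the page."""
    words = set(page_text.lower().split())
    best_ad, best_score = None, 0
    for ad_id, keywords in AD_INVENTORY.items():
        score = len(keywords & words)  # overlap with the text on screen
        if score > best_score:
            best_ad, best_score = ad_id, score
    return best_ad  # no data about the reader is consulted anywhere

print(pick_contextual_ad("best hawaii vacation spots for a beach trip"))
# Prints "hawaii-trip": the ad follows the content, not the person.
```

The design point is in that last comment: the function’s only input is the page text, so there is nothing about the reader to collect, store or leak.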

So, yes, some online ads are exceedingly persuasive and even useful to us. But almost all online ads are too creepy, and we should welcome alternatives, whether through regulation or different business approaches.

Before we go …

  • This should be interesting: Twitter said it would force people to pause before they pass along others’ tweets and made several other temporary changes to its routine features. The changes are an attempt to control the spread of misinformation in the final weeks before the U.S. presidential election, my colleague Kate Conger reported. Some experts have said it would improve online conversations if Facebook, Twitter and other websites made it harder for people to rashly share information.
  • Fix Facebook by breaking Facebook: Charlie Warzel, an Opinion writer for The New York Times, writes that the best step Facebook can take is to entirely redesign itself. The plot to kidnap Michigan’s governor, which prosecutors said was coordinated in part on Facebook, is more evidence that the company must organize information around values other than what grabs people’s attention, Charlie says.
  • How to make your Alexa less creepy: This is not a project for everyone, but one writer built his own device like the Amazon Echo Show in about 45 minutes, without the company’s snooping.

Hugs to this

The human sprayed water at the octopus. The octopus turned it into a water fight game.

We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at ontech@nytimes.com.

Thursday, October 8, 2020

On Tech: False rumors often start at the top

Powerful people must now anticipate how their words might be twisted into weapons online.

Illustration: Brenna Murphy

We know that false information spreads online like the world’s worst game of telephone.

But we don’t talk enough about the role of people in charge who say too little or the wrong things at important moments, creating conditions for misinformation to flourish.

Think about the recent rumors and outrage that flew around President Trump’s health, the wildfires in Oregon and the message of a Netflix film. Ill-considered communication from those at the top — including the president himself — made cycles of bogus information and misplaced anger even worse.

Every word that powerful people say matters. It may not be fair, but they must now anticipate how their words might be twisted — intentionally or not — into weapons in the online information war.

For one example, look at Oregon, where a tweet and other poorly communicated information from the police contributed to bogus rumors that left-wing activists deliberately started wildfires.

“We ask you to demonstrate peacefully and without the use of fire,” the police in Portland posted. There was no evidence that protesters were setting fires, but people seized on this and other odd or ambiguous official information as evidence that left-wing provocateurs at the Portland protests were responsible for wildfires.

Local officials, including the Chamber of Commerce in Sioux Falls, S.D., also spread false rumors over the summer that left-wing protesters were headed to their town to start trouble.

None of this was true, but truth doesn’t matter in the internet’s information soup. Wrong or ill-considered official statements can confirm what people already suspected.

The same thing happened when Netflix unleashed a clueless marketing campaign to promote a film called “Cuties.” My colleague described the movie as a nuanced exploration of gender and race and how society dangerously blurs the lines between girl empowerment and sexual exploitation. But Netflix’s promotional materials, including an image of tween girls posing in dance clothes, gave the false impression that the movie sexualized children.

In short, Netflix’s communication projected the idea that its own movie was the opposite of what it really was. Some politicians, parents and a Texas prosecutor called the film child pornography and pushed Netflix to ban it. Outcry about the movie has been amplified by supporters of the QAnon conspiracy theory, the false idea that top Democrats and celebrities are behind a global child-trafficking ring.

I want to be clear: There are always people who twist information to their own ends. People might have misplaced blame for the wildfires or dumbed down the complexities of “Cuties” even if official communications had been perfectly clear from the jump. But by not choosing their words and images carefully, the people in charge provided fuel for misinformation.

We see over and over again that unclear, wrong or insufficient information at the outset can be hard to overcome.

Conspiracy theories about President Trump’s coronavirus diagnosis and health condition in the last week were fueled by people close to the president misspeaking or obfuscating what was happening. And the White House’s history of spreading false information contributed to a lack of trust in the official line. (My colleague Kevin Roose also wrote about how this fueled wild speculation about the president’s health.)

Nature abhors a vacuum, and the internet turns a vacuum into conspiracies. All of us have a role to play in not contributing to misinformation, but experts and people in positions of power shoulder even more responsibility for not creating the conditions for bogus information to go wild.

If you don’t already get this newsletter in your inbox, please sign up here.

Facebook is afraid. That’s good.

Facebook is expanding a blackout period for political and issue-related ads in the United States for days or longer after Election Day — a period in which officials might still be counting votes in the presidential election and other contests.

I want to make two points. First, Facebook’s ads blackout might be smart or it might be ineffectual, but it is definitely small potatoes.

Look at your Facebook feed. Much of the overheated and manipulative garbage you see was not paid to be there. Those posts are there because they make people angry or happy, and Facebook’s computer systems circulate the stuff that generates an emotional reaction.
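
To see why the unpaid stuff dominates, imagine a feed ranked purely by predicted reactions. This is a deliberately crude sketch in Python, not Facebook’s actual ranking system, and every post and number in it is made up.

```python
# A crude model of engagement ranking: order posts by how many reactions
# the system predicts they will get. Nothing here depends on whether
# anyone paid. All posts and predicted counts are invented for illustration.

posts = [
    {"text": "Local bake sale raises $200", "reactions": 12, "paid": False},
    {"text": "OUTRAGEOUS claim about the election!!!", "reactions": 940, "paid": False},
    {"text": "Sponsored: new sneakers on sale", "reactions": 85, "paid": True},
]

# Rank purely by predicted emotional reaction, ads and unpaid posts alike.
feed = sorted(posts, key=lambda p: p["reactions"], reverse=True)
for post in feed:
    label = "ad" if post["paid"] else "unpaid"
    print(f'{post["reactions"]:>4} [{label}] {post["text"]}')
```

The unpaid, overheated post ends up on top. Under this toy rule, an ads blackout removes only the sponsored item and leaves the top of the feed untouched, which is why the blackout is small potatoes.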

Yes, it’s extra galling if Facebook makes money directly from lies and manipulations. That’s a big reason some civil rights groups and company employees have called on internet companies to take a hard line against political ads or to ban them. But I suspect that most of the stuff that might rile people up if votes are still being counted after Election Day will be unpaid posts, including from President Trump — not ads.

Second, I am going to say something nice about Facebook. With the company’s ban on groups and pages that identify with the QAnon conspiracy, announced this week, and its gradually broadening crackdown on attempted voter intimidation and premature declarations of election victory, Facebook is showing the courage of its convictions.

This is different. Too often the company myopically fixates on technical rules, not principles, and caves to its self-interest.

Facebook is taking a different tack in part because it doesn’t want to be blamed — as the company was four years ago — if there is confusion or chaos around the election. I love that Facebook is a little bit afraid.

It’s healthy for the company to ask itself: What if things go wrong? That’s something Facebook has often failed to do, with disastrous consequences.

Before we go …

  • We are all conspiracists now: Kevin Roose, a technology columnist for The New York Times, writes that conspiracy theories are a symptom of the broader erosion of authority in the internet age. “How easily the conspiracist’s creed — that the official narrative is always a lie, and that the truth is out there for those willing to dig for it themselves — has penetrated our national psyche,” Kevin writes.
  • LinkedIn contains multitudes: During the pandemic and the protests against racial injustice, the typically blah workplace social network has become a thriving outlet where Black professionals share both fun stuff and grief about racial discrimination and alienation on the job, Ashanti M. Martin wrote for The Times. Some LinkedIn users said the company didn’t know how to handle it.
  • Raining cash on internet video stars: A small app called Triller is trying to steal stars from TikTok by paying them for just about anything, including a helicopter for a video shoot and a leased Rolls-Royce with a “TRILLER” vanity plate, my colleague Taylor Lorenz writes. My question: How long can Triller keep spending like this?

We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at ontech@nytimes.com.
