It's been four months now since Facebook announced its intention to invest more heavily in private groups and messaging, and recently that effort has gotten a major marketing push. Walk through the Montgomery BART station in San Francisco and you'll see ads for Facebook Groups plastering every wall, each emblazoned with the anodyne slogan “more together.”
In years past, such a launch might have been greeted with a collective shrug from the press. (The launch of Facebook Live in 2016 also involved a takeover of Montgomery Station, and passed with little fanfare — at least until a rash of violent live streams drew the press's attention.) But the increased focus on groups this year has come with energetic scrutiny from journalists — a sign of how even seemingly mundane Facebook launches now meet with deep skepticism around the world.
And judging from the awful groups that journalists keep discovering, that skepticism is warranted. Last week, ProPublica found a group of Border Patrol agents joking about migrant deaths and making other racist and offensive comments. (The Intercept posted an archive of the group's awful posts.) Over the weekend, CNN found another:
At least one other social media group with an apparent nexus to Customs and Border Protection has been discovered to contain vulgar and sexually explicit posts, according to screenshots shared by two sources familiar with the Facebook pages.
The secret Facebook group, “The Real CBP Nation,” which has around 1,000 members, is host to an image that mocks separating migrant families, multiple demeaning memes of Rep. Alexandria Ocasio-Cortez, a New York Democrat, and other derisive images of Asians and African Americans.
The same day, Le Monde found a group with 56,000 members devoted largely to making misogynist comments. (I'm relying on Google Translate here, so let me know if I get this wrong, French speakers.) The group actively solicited revenge porn before Facebook shut it down, according to the report.
And just today, a Twitter user who stumbled across a Facebook TV ad investigated one of the featured groups, and found a rash of ugly posts.
All this bad behavior is worrying some observers, Elizabeth Dwoskin reports in the Washington Post:
“Large private groups remain unmoderated black boxes where users can freely threaten vulnerable populations,” said Jonathan Greenblatt, chief executive of the Anti-Defamation League. “Without any AI or human moderators, it's easy to orchestrate harassment campaigns — at minimum, this environment contributes to the normalization of bigotry and discrimination. As Facebook moves to more and more private communication, we're concerned about this delinquency.”
Facebook Groups offer us yet another chance to think about the difference between internet problems and platform problems. There have always been online forums where awful people congregate — that's an internet problem. It's plausible that, in the absence of Facebook Groups, racist Border Patrol agents would have found another place to hang out online and spout bigotry.
But Facebook's size and recommendation algorithms change that calculation. Its size enables connections between many Border Patrol agents who may not otherwise have met. And its recommendation algorithms work to introduce them to each other — just as new moms were introduced to anti-vaccine groups through recommendations, so are Border Patrol agents introduced to groups like Real CBP Nation.
These algorithms operate opaquely, and their recommendations can rarely be predicted in advance. No one knew Facebook would recommend that new moms join anti-vax groups — its algorithm simply suggested that they join, found that new moms acted on the suggestion, and started suggesting it more often.
Facebook can't solve racism or misogyny. But it can examine more closely the way it unwittingly recruits allies for racists and misogynists. That's a platform problem through and through — and in the early days of Facebook's pivot to privacy, it doesn't seem to be getting much better.
Democracy
EU Privacy Laws May Be Hampering Pursuit of Terrorists
Is the General Data Protection Regulation making it harder to fight crime? Natalia Drozdiak explores the issue:
The WHOIS directory, which previously displayed both technical and personal data related to registered domain names, has been redacted to scrub out names, email addresses and other personal information due to Europe's privacy law.
“Since May 2018, we have more and more cases of investigations that are just dropped or severely delayed because we can't have direct access to WHOIS registration data information,” said Gregory Mounier, head of outreach and internet governance at Europol's cybercrime center. “Overall you can say that the internet has become less safe because of an overly conservative interpretation of the GDPR by the ICANN community.”
Europe Built a System to Fight Russian Meddling. It's Struggling.
Matt Apuzzo says a European effort to fight misinformation is off to a slow start:
The European Union launched an ambitious effort earlier this year to combat election interference: an early-warning system that would sound alarms about Russian propaganda. Despite high expectations, however, records show that the system has become a repository for a mishmash of information, produced no alerts and is already at risk of becoming defunct.
Indeed, even before the European Parliament elections this spring, an inside joke was circulating in Brussels about the Rapid Alert System: It's not rapid. There are no alerts. And there's no system.
Hacking, Glitches, Disinformation: Why Experts Are Worried About the 2020 Census
Chris Hamby looks into fears that the Census could be corrupted by bad actors:
The government has ambitious plans to use new digital methods to collect data. But the Census Bureau has had to scale back testing of that technology because of inadequate funding — raising the risk of problems ranging from software glitches to cyberattacks.
Also new is the threat of online disinformation campaigns reminiscent of the 2016 presidential cycle. The heated political discourse about the citizenship question has supplied ample fuel, and researchers say they are already beginning to see coordinated online efforts to undermine public trust in the census and to sow chaos and confusion.
How Facebook Fought Fake News About Facebook
Mark Bergen and Kurt Wagner write about Facebook's efforts to manage public opinion and contain the spread of misinformation. (The tools invented for these purposes do not seem to have fared particularly well, and Stormchaser was apparently retired at some point last year.)
Since 2016, Facebook employees have used Stormchaser to track many viral posts, including a popular conspiracy that the company listens to users through their phones' microphones, according to three former employees. Other topics ranged from bitter protests (the #deleteFB movement) to ludicrous jokes (that Facebook Chief Executive Officer Mark Zuckerberg is an alien), according to one former employee. In some cases, like the copy-and-paste hoax, the social network took active steps to snuff them out. Staff prepared messages debunking assertions about Facebook, then ran them in front of users who shared the content, according to documents viewed by Bloomberg News and four people familiar with the matter. They asked not to be identified discussing private initiatives.
Many companies monitor social media to learn what customers are saying about them. But Facebook's position is unique. It owns the platform it's watching, an advantage that may help Facebook track and reach users more effectively than other firms. And Facebook has been saddled with so many real problems recently that sometimes misinformation can stick.
Facebook and Twitter have not been invited to White House social media summit, sources say
The White House has not invited social media companies to Thursday's event, where attendees will complain that those same companies are censoring them, Oliver Darcy reports:
The White House has not extended invitations to Facebook and Twitter to attend its social media summit on Thursday, people familiar with the matter said.
The people, who spoke to CNN Business on the condition of anonymity, suggested it was not surprising. They said they believe the summit would amount to a right-wing grievance session and was not aimed at seriously discussing some of the issues facing large technology companies.
Will California's New Bot Law Strengthen Democracy?
Noam Cohen profiles the law, which requires that bots identify themselves as such:
California's bot-disclosure law is more than a run-of-the-mill anti-fraud rule. By attempting to regulate a technology that thrives on social networks, the state will be testing society's resolve to get our (virtual) house in order after more than two decades of a runaway Internet. We are in new terrain, where the microtargeting of audiences on social networks, the perception of false news stories as genuine, and the bot-led amplification of some voices and drowning-out of others have combined to create angry, ill-informed online communities that are suspicious of one another and of the government.
Regulating bots should be low-hanging fruit when it comes to improving the Internet. The California law doesn't even ban them outright but, rather, insists that they identify themselves in a manner that is “clear, conspicuous, and reasonably designed.”
FBI, ICE find state driver's license photos are a gold mine for facial-recognition searches
Drew Harwell reports that federal agents are using state DMV databases to create a powerful new infrastructure for surveillance:
Agents with the Federal Bureau of Investigation and Immigration and Customs Enforcement have turned state driver's license databases into a facial-recognition gold mine, scanning through millions of Americans' photos without their knowledge or consent, newly released documents show.
Thousands of facial-recognition requests, internal documents and emails over the past five years, obtained through public-records requests by researchers with Georgetown Law's Center on Privacy and Technology and provided to The Washington Post, reveal that federal investigators have turned state departments of motor vehicles' databases into the bedrock of an unprecedented surveillance infrastructure.
Soon, satellites will be able to watch you everywhere all the time
And speaking of surveillance, satellites are getting really good at it, Christopher Beam reports:
Every year, commercially available satellite images are becoming sharper and taken more frequently. In 2008, there were 150 Earth observation satellites in orbit; by now there are 768. Satellite companies don't offer 24-hour real-time surveillance, but if the hype is to be believed, they're getting close. Privacy advocates warn that innovation in satellite imagery is outpacing the US government's (to say nothing of the rest of the world's) ability to regulate the technology. Unless we impose stricter limits now, they say, one day everyone from ad companies to suspicious spouses to terrorist organizations will have access to tools previously reserved for government spy agencies. Which would mean that at any given moment, anyone could be watching anyone else.
Your Data Could Be at the Center of the Fight Against Big Tech
Adam Satariano profiles Germany's top antitrust official, Andreas Mundt, who argues that world-scale data collection is anti-competitive.
The companies have strongly fought against his argument. But it is gaining traction in antitrust circles, as Mr. Mundt, who has led Germany's antitrust agency for almost a decade, urges officials in other nations to make the same point.
After the Facebook ruling, Mr. Mundt received calls from regulators and lawyers around the world to discuss the idea. He helped organize a meeting of fellow antitrust officials in Colombia, where they spent four days discussing tech regulation. Joseph Simons, chairman of the Federal Trade Commission, and Makan Delrahim, head of the Justice Department antitrust division, were among those attending.
Elsewhere
They Kinda Want to Believe Apollo 11 Was Maybe a Hoax
Just as irony has been essential in the rise of right-wing extremism online, it's proving useful to the revival of old conspiracy theories, Amanda Hess reports:
The internet's biggest stars are using irony and nonchalance to refurbish old conspiracies for new audiences, recycling them into new forms that help them persist in the cultural imagination. Along the way, these vloggers are unlocking a new, casual mode of experiencing paranoia. They are mutating our relationship to belief itself: It is less about having convictions than it is about having fun.
HQ Trivia lays off ~20% as it preps subscriptions
Once-hot HQ Trivia appears to be entering its last days. Josh Constine reports:
Downloads per month are down 92% versus last June, according to Sensor Tower. And now four sources confirm that HQ laid off staff members this week. One said about 20% of staff was let go, and another said six to seven employees were departing. That aligns with Digiday reporter Kerry Flynn's tweet that 7 employees were let go, bringing HQ to fewer than 30 (shrinking from 35 to 28 staffers would be a 20% drop).
That will leave the company short-handed as it attempts to diversify revenue with the upcoming launch of monthly subscriptions.
A perfect storm: The Wall Street Journal has 21 people detecting deepfakes
Lucinda Southern reports on the Wall Street Journal's effort to identify synthetic media:
To combat the growing threat of spreading misinformation ahead of the U.S. 2020 general election, The Wall Street Journal has formed a committee to help reporters navigate fake content.
Last September, the publisher assigned 21 of its staff from across its newsroom to form the committee. Each of them is on-call to answer reporters' queries about whether a piece of content has been manipulated. The publisher has issued criteria to committee members which help them determine whether the content is fake or not. After each query from a reporter, members write up a report with details of what they learned.
Copies of AI deepfake app DeepNude are easily accessible online — and always will be
The creator of a deepfake-making app called DeepNude took it offline, but not before the code spread all over the internet, James Vincent reports:
The Verge was able to find links that ostensibly offer downloads of DeepNude in a variety of places, including Telegram channels, message boards like 4chan, YouTube video descriptions, and even on the Microsoft-owned code repository GitHub.
The report from Motherboard found that the app was being sold on a Discord server (now removed) for $20. The anonymous sellers said they had improved the stability of the software, which was prone to crashing, and removed a feature that added watermarks to the fake images (supposedly to stop them from being used maliciously).
Having 30,000 followers makes you a celebrity, UK advertising watchdog rules
Oh no, I'm a Twitter celebrity! Here's Jon Porter:
The UK's Advertising Standards Authority has ruled that 30,000 is the magic number of followers that makes you a celebrity. The decision means that if you have such a following then you have to obey the same advertising rules as traditional celebrities like David Beckham or Stephen Fry, particularly when it comes to product endorsements.
The regulator came to the decision after an Instagram user with 32,000 followers, ThisMamaLife, posted an ad for Phenergan Night Time sleeping tablets. Although they disclosed that the post was an ad at the beginning of its description, the ASA ruled that their follower count made them a celebrity, which bars them from drug endorsements in the UK.
Dr Disrespect and other creators keep hiding their apologies
Julia Alexander reports on the phenomenon of scandal-prone influencers posting their apologies to secondary and tertiary accounts in an effort to minimize the attention they get:
By using their main channels to post apologies, those creators confront their issues head-on and show a willingness to accept responsibility for whatever happened. But other creators may not want their core fans to see them apologize. Posting on alternate platforms allows creators like Paul and Beahm to acknowledge an issue and say theyve addressed it while largely sweeping things under the rug.
Here's a disturbing and tragic story from Kim Suarez about a YouTube engineer:
Betai Koffi, a software engineer at YouTube and a San Francisco resident, perhaps consumed more LSD than he should have while on vacation in Bodega Bay — downing an extra couple hits, according to friends, after initially freaking out on his first two. And he is now charged with multiple counts of attempted murder, and suffered a life-threatening gunshot wound from police.
The 4th of July holiday brought 32-year-old Koffi and five of his friends to Bodega Bay to