Former first daughter Barbara Bush welcomes first child

Barbara Bush, the daughter of former President George W. Bush, has welcomed her first child into the world.
edition.cnn.com
Hate crimes against Asians rose 76% in 2020 amid pandemic, FBI says
The FBI republished hate crime data on Monday after an error in Ohio.
abcnews.go.com
What Progressives Need to Learn About America From the Immigrants They Welcome | Opinion
The people here in the U.S. who are traditionally most welcoming to immigrants—political progressives—don't seem to share their positive view of the country.
newsweek.com
Rep. Jim Banks intentionally misgendered a high-ranking trans official. Twitter locked his account.
The Republican congressman from Indiana promised not to "back down" to Twitter's demands he delete his tweet.
washingtonpost.com
The U.S. Mission in Syria is a Failure. Don't Turn it Into a Catastrophe. | Opinion
Assad has won the war. That's not new information. U.S. decision makers must deal with reality as it is, not as they wish it to be.
newsweek.com
Nancy Grace continues her quest for answers in Petito case in Fox Nation special
Fox Nation host Nancy Grace continued her quest for answers in a Fox News special "A Gabby Petito Investigation," part two in her investigation into this case.
foxnews.com
FDA could authorize a Covid-19 vaccine for kids by this week. Here's the most important thing to do while we wait, CDC director says
As the number of new daily cases of Covid-19 continues to fall in the US, the country awaits a major milestone that could provide another critical tool in the fight against the pandemic -- the first vaccine for 5 to 11-year-olds.
edition.cnn.com
Dangerous Covid trends in Europe alarm experts
edition.cnn.com
Why many Black employees don't want to return to the office
For people of color, working from home can offer a refuge from racial slights and a welcome relief from "code-switching."
cbsnews.com
Elon Musk says Tesla Full Self-Driving software has 'issues'
Tesla released, then rolled back, a Full Self-Driving software update on Sunday after problems with the system arose. The partially automated system periodically gets new capabilities for Tesla owners to test.
foxnews.com
The Facebook Papers may be the biggest crisis in the company's history
Facebook has confronted whistleblowers, PR firestorms and Congressional inquiries in recent years. But now it faces a combination of all three at once in what could be the most intense and wide-ranging crisis in the company's 17-year history.
edition.cnn.com
Jennifer Aniston leads tributes to 'Friends' actor James Michael Tyler following his death
Jennifer Aniston has led the tributes to her "Friends" co-star James Michael Tyler following his death, saying the show "would not have been the same" without him.
edition.cnn.com
Dana White's Contender Series 45 official weigh-in results, live video stream (12 p.m. ET)
MMA Junkie is on scene and reporting live from Monday's official Dana White’s Contender Series 45 fighter weigh-ins.
usatoday.com
Who Is Rachel Levine? Biden-Appointed Official Misgendered by Jim Banks on Twitter
The Twitter account of the U.S. Congressman was suspended after he intentionally mislabeled Levine a "man."
newsweek.com
Sudan's military dissolves transitional government, declares state of emergency
edition.cnn.com
Scientists Capture Rare Direct Image of Planet 400 Light Years Away
The red-hot distant world could be studied further by the James Webb Space Telescope to reveal more about its atmosphere.
newsweek.com
Jury selection is slow, tangled process in trial over Ahmaud Arbery's death: What we know
More than 20 people have been selected as potential jurors in the Ahmaud Arbery murder trial, but none have been officially seated.      
usatoday.com
At 14, he found his mother murdered. Police suspected him because he was 'acting normal.' His case gets a new look.
Did officers' hunches and bad evidence about a traumatized 14-year-old lead to a wrongful murder conviction? The Missouri Supreme Court will decide.       
usatoday.com
Key Takeaways From the Facebook Papers and Their Fallout
A look at a month of scrutiny and struggles for the social-networking giant.
nytimes.com
Facebook's language gaps weaken screening of hate
Across the Middle East, journalists and activists feel Facebook censors their speech. But in some parts of the world, political groups use the social network to incite violence, slipping through the company's efforts to police its platforms. (Oct. 25)
usatoday.com
"Rust" electrician says he held cinematographer "while she was dying"
Serge Svetnoy said he had worked with Halyna Hutchins on multiple films and said producers hired an inexperienced armorer.
cbsnews.com
Washington’s defensive players ‘gave themselves a chance’ in Green Bay, and that’s a step forward
In its loss at Green Bay, Washington showed improvement up front and gave its rookies a boost of confidence.
washingtonpost.com
Facebook papers: What we know about how misinformation and extremism spread from whistleblower
Facebook's goal was to connect people together. But internal docs show Facebook knew users were being driven apart by a range of divisive content.      
usatoday.com
Major storm soaks drought-stricken California
A powerful storm barreled toward Southern California after flooding highways, toppling trees and causing mud flows in areas burned bare by recent fires across the northern part of the state. (Oct. 25)      
usatoday.com
Sudan's PM and ministers held amid coup reports
Sudan's leading general has declared a state of emergency, hours after his forces arrested the acting prime minister and other senior government officials. (Oct. 25)      
usatoday.com
Facebook Debates What to Do With Its Like and Share Buttons
Likes and shares made the social media site what it is. Now, company documents show, it’s struggling to deal with their effects.
nytimes.com
Facebook knew it was being used to incite violence in Ethiopia. It did little to stop the spread, documents show
The social media giant has been used by militia groups and political leaders to wage a virtual battle amid the Tigray war. Facebook employees repeatedly sounded the alarm on the company's failure to curb the spread of posts inciting violence in "at risk" countries like Ethiopia, where a civil war has raged for the past year, internal documents seen by CNN show.
edition.cnn.com
The ‘Safe Supply’ Movement Aims to Curb Drug Deaths Linked to the Opioid Crisis
On a morning Zoom call, a group of Canadian mothers give their full attention to a young man from the Drug User Liberation Front. At 26, Jeremy Kalicum is the age some of their kids would be if they had not died of accidental overdoses. Kalicum’s tone is urgent as he walks the moms through…
time.com
The swinging community hid in the shadows. Then came #SwingTok.
What exactly is swinging? How is it different from other relationships? Is it right for you? That depends, but experts say there's no wrong answer.      
usatoday.com
Big-name Democrats are campaigning in Virginia’s race for governor. Does that help candidates?
They won’t change minds. But Obama, Abrams and Bottoms are likely to help get more Black voters to cast ballots.
washingtonpost.com
Rep. Byron Donalds: Colin Powell and his inspiring vision of a great America
The life of Gen. Colin Powell, a Black man in America, is only possible in the United States of America. While I never had the honor of knowing or even meeting him, I genuinely believe that we share this belief.
foxnews.com
NFL Week 8 best bets: Three games with intriguing early odds
Week 7 of the NFL season saw plenty of close matchups, and there are three games in Week 8 that are already drawing interesting early odds.
latimes.com
A whistleblower’s power: Key takeaways from the Facebook Papers
Interviews with dozens of current and former employees and a trove of internal documents show how Facebook inflamed real-world harms.
washingtonpost.com
Doctors are often unaware of the only treatment for early Covid-19
On September 17, Mayra Arana made the phone call she says saved her life.
edition.cnn.com
Time to Fight Back Against Big Tech's IP Assault | Opinion
That a tech giant would copy the intellectual property of a smaller, less powerful firm is hardly surprising—in fact, it happens all the time.
newsweek.com
The Death of White America Has Been Greatly Exaggerated
If you paid attention to the news this summer about the release of 2020 census data, you probably heard that America’s white population is in free fall. Big, if true.
The statistic that launched a thousand hot takes and breathless voice-overs about racial change was a supposed 8.6 percent, or 19 million, drop in the number of white Americans since 2010. Headlines cast this decline as unprecedented in census history and signaled that the nation’s majority-minority future loomed even closer than previously forecast. Pundits spun it as a harbinger of policy change and partisan realignment, for better or worse. Some wisely cautioned against demography-as-destiny assumptions in a country where the definition and public understanding of race can change rapidly. But few observers questioned whether the reported differences between the 2010 and 2020 censuses reflected real demographic change or simply statistical noise.
Commentators should have read the fine print before rushing to trot out their favorite narratives. If they had, they would have discovered that the eye-popping figure at the center of this summer’s hoopla is an illusion. The apparent decline in the white population is a result of changes to the Census Bureau’s protocol for measuring and classifying racial identity. The changes aimed to more accurately gauge the expansion of the country’s mixed-race population through new and more sophisticated data collection and classification techniques that capture the nuances of Americans’ multifaceted racial and ethnic identities. But a combination of bureaucratic constraints and messaging failures paved the way to public confusion.
Ironically, a segment by the Fox News host Tucker Carlson inadvertently exposed the myth of massive white decline. During a rant about what he perceived as left-wing giddiness over the “extinction of white people,” he asked, “Where did all these people go?” The millions of missing white Americans did not, in fact, go anywhere. And they are not being replaced by minorities. Growing numbers of white Americans have multiracial children and grandchildren. Others were recategorized in 2020 as multiracial themselves, instead of single-race white.
How, then, did so many pundits and commentators come to the conclusion that the white population had dropped 8.6 percent? This calculation stems from two errors. The first is a failure to recognize that the degree, and even direction, of change in white population depends entirely on how one defines white.
Many white people who self-identify as white also identify as members of another race or as Latino. Thus, white can mean four different things: (1) non-Latino single-race white people, (2) non-Latino including multiracial people, (3) all single-race white people including Latinos, or (4) all white people including Latinos and multiracial individuals.
Under the first, and narrowest, definition, the white population did drop, but by 2.6 percent, not 8.6. Under the second, it increased by 1 percent. Under the last and broadest definition, it increased by an even greater 2.6 percent. Only under the third, rarely used, definition does one reach the precipitous 8.6 percent drop.
Which of these is most accurate? The standard definition of white in census data is the narrowest and excludes people who are both white and members of a racial or ethnic minority group. This follows the idea that everyone should fit in a single racial or ethnic category and hews to the ethno-racial specifications of the Office of Management and Budget, last updated in 1997.
But much has changed in the early 21st century. Categorically excluding Americans who have both white and minority parentage from the white classification makes little sense in a society in which plural identities and backgrounds are common and the great majority of mixed-race individuals have a white parent. As we previously wrote in The Atlantic, this practice harkens back to the racist one-drop rule and clashes with present-day sociological realities. Many Americans identify as multiracial, and they are concentrated in social and economic strata that more closely resemble those of white than minority Americans.
The second error driving the myth of white decline is an accounting snafu. Ignoring the Census Bureau’s caveats, analysts who touted the 8.6-percent-drop number conflated true demographic change with method changes in how the census form asked people to report their race and how the bureau used this information. To better assess the size and characteristics of America’s multiracial population, the bureau added a write-in space under the “white” and “black” race checkboxes, where respondents could specify their ethnicity. The instructions read: “Print, for example, German, Irish, English, Italian, Lebanese, Egyptian, etc.”
These changes appear to have produced a recategorization: Many people who identified only as white in 2010 were classed as multiracial in 2020, especially Latinos. Among Latinos, the number of people who were classified solely as white dropped from 53 percent in 2010 to 20 percent in 2020. A person of Latin American origin who self-identified as single-race white in 2010 would presumably have checked the “white” box again in 2020. But she might also have written in her own or her family’s country of origin in the new text box. This would result in her automatic reclassification as multiracial in 2020, white and “some other race,” instead of single-race white, as in 2010.
That the sharp rise in multiracial Latinos in 2020 is due to an accounting change, rather than a real demographic or social trend, is clear when we look at the 2019 American Community Survey, run annually by the Census Bureau. The ACS collected and classified race in the same way that the 2010 census had throughout the prior decade. The last of the 2019 ACS data were gathered just a few months before the census, and the reported results showed that the percentage of Latinos categorized as single-race white was unchanged since the ACS survey of 2011.
In short, the confusion over white decline occurred not because the population changed but because the nuances in many individuals’ plural identities became more visible in the 2020 census. This is a positive development. The Census Bureau is moving toward a more comprehensive scheme for understanding how mixing is changing America’s ethno-racial contours. But it’s not there yet.
Three crucial steps remain to provide a clear picture of the mixed-background population and avoid the sorts of misunderstandings that have plagued reporting and commentary on the 2020 census data.
First, the bureau should make available data on the responses that individuals gave to the census questions on race and ethnicity, including what they wrote in the text boxes on the race question and how they were classified. This would give journalists and academics the tools they need to distinguish real and illusory change in the 2020 results. It would permit researchers to assess precisely how much of the apparent shift toward multiracial identification resulted from reclassification. It would also give us a much-needed granular look at the way people specified their own origins and ancestry.
Second, the OMB needs to update its rules for ethnic- and racial-data collection so that the Census Bureau can allow individuals to report mixed Latino and non-Latino backgrounds. In the current system, these backgrounds are not recognized, and the census lacks the authorization to alter its race question so as to identify them. In 2015, the bureau tested an integrated race and ethnicity question that would overcome this flaw, and deemed its performance superior to that of the questions used in 2010 and 2020. In the integrated question, “Hispanic” is treated as a racial category. Individuals are able to check as many of the ethno-racial categories as they think appropriate. This would substantially change the picture Americans have of mixing, because we know from birth data that the largest mixed group by far has Latino and non-Latino white parentage. There is no reason to wait for the next census in 2030. An OMB rule change now would allow the Census Bureau to collect better data in its annual American Community Survey right away.
Third, the bureau should break from its long tradition of emphasizing mutually exclusive ethnic and racial categories, whereby every person appears in only one. Granted, such a scheme has the advantage of neatness—all the percentages add up to 100. But this neatness comes at a major cost because it suppresses important components of the identities and group affiliations that are socially and psychologically meaningful to Americans from mixed backgrounds.
These changes would make it possible to more accurately count America’s ethnic and racial groups and to thoroughly comprehend the societal changes that are coming about through increased mixing. For example, census data could then be used to compare and contrast the upward mobility and integration of mixed white-minority Americans and of racial minorities. We could discern more definitively which groups of mixed-race Americans face barriers to the mainstream, which encounter abundant opportunities, and why.
Long a multiracial nation, America is also becoming a nation of multiracial people. Much needs to be understood, and communicated to the public, about how this ethno-racial evolution is defining our present and shaping our future. This will require more accurate and nuanced data about mixing from the Census Bureau. It will also require the rest of us to unburden ourselves of the antiquated, zero-sum lens of white loss and minority gain so that we can see 21st-century racial change in all its kaleidoscopic dynamism.
theatlantic.com
‘History Will Not Judge Us Kindly’
Before I tell you what happened at exactly 2:28 p.m. on Wednesday, January 6, 2021, at the White House—and how it elicited a very specific reaction, some 2,400 miles away, in Menlo Park, California—you need to remember the mayhem of that day, the exuberance of the mob as it gave itself over to violence, and how several things seemed to happen all at once.At 2:10 p.m., a live microphone captured a Senate aide’s panicked warning that “protesters are in the building,” and both houses of Congress began evacuating.At 2:13 p.m., Vice President Mike Pence was hurried off the Senate floor and out of the chamber.At 2:15 p.m., thunderous chants were heard: “Hang Mike Pence! Hang Mike Pence!”At the White House, President Donald Trump was watching the insurrection live on television. The spectacle excited him. Which brings us to 2:28 p.m., the moment when Trump shared a message he had just tweeted with his 35 million Facebook followers: “Mike Pence didn’t have the courage to do what should have been done to protect our Country and our Constitution … USA demands the truth!”[David A. Graham: This is a coup]Even for the Americans inured to the president’s thumbed outbursts, Trump’s attack against his own vice president—at a moment when Pence was being hunted by the mob Trump sent to the Capitol—was something else entirely. Horrified Facebook employees scrambled to enact “break the glass” measures, steps they could take to quell the further use of their platform for inciting violence. That evening, Mark Zuckerberg, Facebook’s founder and CEO, posted a message on Facebook’s internal chat platform, known as Workplace, under the heading “Employee FYI.”“This is a dark moment in our nation’s history,” Zuckerberg wrote, “and I know many of you are frightened and concerned about what’s happening in Washington, DC. I’m personally saddened by this mob violence.”Facebook staffers weren’t sad, though. They were angry, and they were very specifically angry at Facebook. 
Their message was clear: This is our fault.Chief Technology Officer Mike Schroepfer asked employees to “hang in there” as the company figured out its response. “We have been ‘hanging in there’ for years,” one person replied. “We must demand more action from our leaders. At this point, faith alone is not sufficient.”“All due respect, but haven’t we had enough time to figure out how to manage discourse without enabling violence?” another staffer responded. “We’ve been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control.”“I’m tired of platitudes; I want action items,” another staffer wrote. “We’re not a neutral entity.”“One of the darkest days in the history of democracy and self-governance,” yet another staffer wrote. “History will not judge us kindly.”Facebook employees have long understood that their company undermines democratic norms and restraints in America and across the globe. Facebook’s hypocrisies, and its hunger for power and market domination, are not secret. Nor is the company’s conflation of free speech and algorithmic amplification. But the events of January 6 proved for many people—including many in Facebook’s workforce—to be a breaking point.The Atlantic reviewed thousands of pages of documents from Facebook, including internal conversations and research conducted by the company, from 2017 to 2021. Frances Haugen, the whistleblower and former Facebook engineer who testified before Congress earlier this month, filed a series of disclosures about Facebook to the Securities and Exchange Commission and to Congress before her testimony. Redacted versions of those documents were obtained by a consortium of more than a dozen news organizations, including The Atlantic. The names of Facebook employees are mostly blacked out.The documents are astonishing for two reasons: First, because their sheer volume is unbelievable. 
And second, because these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.Again and again, the Facebook Papers show staffers sounding alarms about the dangers posed by the platform—how Facebook amplifies extremism and misinformation, how it incites violence, how it encourages radicalization and political polarization. Again and again, staffers reckon with the ways in which Facebook’s decisions stoke these harms, and they plead with leadership to do more. And again and again, staffers say, Facebook’s leaders ignore them.By nightfall on January 6, 2021, the siege had been reversed, though not without fatalities. Washington’s mayor had issued a citywide curfew and the National Guard was patrolling the streets. Facebook announced that it would lock Trump’s account, effectively preventing him from posting on the platform for 24 hours.“Do you genuinely think 24 hours is a meaningful ban?” one Facebook staffer wrote on an internal message board. The staffer then turned, just as others had, to the years of failures and inaction that had preceded that day. “How are we expected to ignore when leadership overrides research based policy decisions to better serve people like the groups inciting violence today. Rank and file workers have done their part to identify changes to improve our platform but have been actively held back. Can you offer any reason we can expect this to change in the future.”It was a question without a question mark. The employee seemed to know that there wouldn’t be a satisfying answer.Facebook later extended the ban at least until the end of Trump’s presidential term, and then, when Facebook’s Oversight Board ruled against imposing an indefinite ban, it extended the temporary ban until at least January 7, 2023. 
But for some Facebook employees, the decision to crack down on Trump for inciting violence was comically overdue. Facebook had finally acted, but to many at the company, it was too little, too late. For months, Trump had incited the insurrection—in plain sight, on Facebook. Facebook has dismissed the concerns of its employees in manifold ways. One of its cleverer tactics is to argue that staffers who have raised the alarm about the damage done by their employer are simply enjoying Facebook’s “very open culture,” in which people are encouraged to share their opinions, a spokesperson told me. This stance allows Facebook to claim transparency while ignoring the substance of the complaints, and the implication of the complaints: that many of Facebook’s employees believe their company operates without a moral compass.[Adrienne LaFrance: The largest autocracy on Earth]“Employees have been crying out for months to start treating high-level political figures the same way we treat each other on the platform,” one employee wrote in the January 6 chat. “That’s all we’re asking for … Today, a coup was attempted against the United States. I hope the circumstances aren’t even more dire next time we speak.”rewind two months to November 4, 2020, the day after the presidential election. The outcome of the election was still unknown when a 30-year-old political activist created a Facebook group called “Stop the Steal.”“Democrats are scheming to disenfranchise and nullify Republican votes,” the group’s manifesto read. “It’s up to us, the American people, to fight and to put a stop to it.” Within hours, “Stop the Steal” was growing at a mind-scrambling rate. At one point it was acquiring 100 new members every 10 seconds. It soon became one of the fastest-growing groups in Facebook history.As “Stop the Steal” metastasized, Facebook employees traded messages on the company’s internal chat platform, expressing anxiety about their role in spreading election misinformation. 
“Not only do we not do something about combustible election misinformation in comments,” one wrote on November 5; “we amplify and give them broader distribution. Why?”By then, less than 24 hours after the group’s creation, “Stop the Steal” had grown to 333,000 members, and the group’s administrator couldn’t keep up with the pace of commenting. Facebook employees were worried that “Stop the Steal” members were inciting violence, and the group came to the attention of executives. Facebook, to its credit, promptly shut down the group. But we now know that “Stop the Steal” had already reached too many people, too quickly, to be contained. The movement jumped from one platform to another. And even when the group was removed by Facebook, the platform remained a key hub for people to coordinate the attack on the U.S. Capitol.After the best-known “Stop the Steal” Facebook group was dismantled, copycat groups sprang up. All the while, the movement was encouraged by President Trump, who posted to Facebook and Twitter, sometimes a dozen times a day, his complaint always the same—he won, and Joe Biden lost. His demand was always the same as well: It was time for his supporters to fight for him and for their country. Irene Suosalo Never before in the history of the Justice Department has an investigation been so tangled up with social media. Facebook is omnipresent in the related court documents, woven throughout the stories of how people came to be involved in the riot in the first place, and reappearing in accounts of chaos and bloodshed. More than 600 people have been charged with crimes in connection to January 6. Court documents also detail how Facebook provided investigators with identifying information about its users, as well as metadata that investigators used to confirm alleged perpetrators’ whereabouts that day. 
Taken in aggregate, these court documents from January 6 are themselves a kind of facebook, one filled with selfies posted on Facebook apps over the course of the insurrection.[Helen Lewis: The problem is Facebook]On a bright, chilly Wednesday weeks after the insurrection, when FBI agents finally rolled up to Russell Dean Alford’s Paint & Body Shop in Hokes Bluff, Alabama, they said Alford’s reaction was this: “I wondered when y’all were going to show up. Guess you’ve seen the videos on my Facebook page.” Alford pleaded not guilty to four federal charges, including knowingly entering a restricted building and disorderly conduct.Not only were the perpetrators live-streaming their crimes as they committed them, but federal court records show that those who have been indicted spent many weeks stoking violence on Facebook with posts such as “NO EXCUSES! NO RETREAT! NO SURRENDER! TAKE THE STREETS! TAKE BACK OUR COUNTRY! 1/6/2021=7/4/1776” and “Grow a pair of balls and take back your government!”When you stitch together the stories that spanned the period between Joe Biden’s election and his inauguration, it’s easy to see Facebook as instrumental to the attack on January 6. (A spokesperson told me that the notion that Facebook played an instrumental role in the insurrection is “absurd.”) Consider, for example, the case of Daniel Paul Gray. According to an FBI agent’s affidavit, Gray posted several times on Facebook in December about his plans for January 6, commenting on one post, “On the 6th a f[*]cking sh[*]t ton of us are going to Washington to shut the entire city down. It’s gonna be insane I literally can’t wait.” In a private message, he bragged that he’d just joined a militia and also sent a message saying, “are you gonna be in DC on the 6th like trump asked us to be?” Gray was later indicted on nine federal charges, including obstruction of an official proceeding, engaging in acts of physical violence, violent entry, assault, and obstruction of law enforcement. 
He has pleaded not guilty to all of them.Then there’s the case of Cody Page Carter Connell, who allegedly encouraged his Facebook friends to join him in D.C. on January 6. Connell ended up charged with eight federal crimes, and he pleaded not guilty to all of them. After the insurrection, according to an FBI affidavit, he boasted on Facebook about what he’d done.“We pushed the cops against the wall, they dropped all their gear and left,” he wrote in one message.“Yall boys something serious, lol,” someone replied. “It lookin like a civil war yet?”Connell’s response: “It’s gonna come to it.”All over America, people used Facebook to organize convoys to D.C., and to fill the buses they rented for their trips. Facebook users shared and reshared messages like this one, which appeared before dawn on Christmas Eve in a Facebook group for the Lebanon Maine Truth Seekers: This election was stolen and we are being slow walked towards Chinese ownership by an establishment that is treasonous and all too willing to gaslight the public into believing the theft was somehow the will of the people. Would there be an interest locally in organizing a caravan to Washington DC for the Electoral College vote count on Jan 6th, 2021? I am arranging the time off and will be a driver if anyone wishes to hitch a ride, or a lead for a caravan of vehicles. If a call went out for able bodies, would there be an answer? Merry Christmas. The post was signed by Kyle Fitzsimons, who was later indicted on charges including attacking police officers on January 6. Fitzsimons has pleaded not guilty to all eight federal charges against him.You may be thinking: It’s 2021; of course people used Facebook to plan the insurrection. It’s what they use to plan all aspects of their lives. But what emerges from a close reading of Facebook documents, and observation of the manner in which the company connects large groups of people quickly, is that Facebook isn’t a passive tool but a catalyst. 
Had the organizers tried to plan the rally using other technologies of earlier eras, such as telephones, they would have had to identify and reach out individually to each prospective participant, then persuade them to travel to Washington. Facebook made people’s efforts at coordination highly visible on a global scale. The platform not only helped them recruit participants but offered people a sense of strength in numbers. Facebook proved to be the perfect hype machine for the coup-inclined.

Among those charged with answering Trump’s call for revolution were 17 people from Florida, Ohio, North Carolina, Georgia, Alabama, Texas, and Virginia who allegedly coordinated on Facebook and other social platforms to join forces with the far-right militia known as the Oath Keepers. One of these people, 52-year-old Kelly Meggs from rural Florida, allegedly participated with his wife in weapons training to prepare for January 6.

“Trump said It’s gonna be wild!!!!!!!” Meggs wrote in a Facebook message on December 22, according to an indictment. “It’s gonna be wild!!!!!!! He wants us to make it WILD that’s what he’s saying. He called us all to the Capitol and wants us to make it wild!!! Sir Yes Sir!!! Gentlemen we are heading to DC pack your shit!!” Meggs and his Facebook friends arrived in Washington with paramilitary gear and battle-ready supplies—including radio equipment, camouflage combat uniforms, helmets, eye protection, and tactical vests with plates. They’re charged with conspiracy against the United States. Meggs has pleaded not guilty to all charges. His wife, Connie Meggs, has a trial date set for January 2022.

Ronald Mele, a 51-year-old California man, also used Facebook to share his plans for the insurrection, writing in a December Facebook post that he was taking a road trip to Washington “to support our President on the 6th and days to follow just in case,” according to his federal indictment. 
Prosecutors say he and five other men mostly used the chat app Telegram to make their plans—debating which firearms, shotgun shells, and other weapons to bring with them and referring to themselves as soldiers in the “DC Brigade”—and three of them posted to Instagram and Facebook about their plans as well. On January 2, four members of the group met at Mele’s house in Temecula, about an hour north of San Diego. Before they loaded into an SUV and set out across the country, someone suggested that they take a group photo. The men posed together, making hand gestures associated with the Three Percenters, a far-right militia movement that’s classified as a terrorist organization in Canada. (Mele has pleaded not guilty to all four charges against him.)

On January 6, federal prosecutors say, members of the DC Brigade were among the rioters who broke through the final police line, giving the mob access to the West Terrace of the Capitol. At 2:30 p.m., just after President Trump egged on the rioters on Facebook, Mele and company were on the West Terrace celebrating, taking selfies, and shouting at fellow rioters to go ahead and enter the Capitol. One of the men in the group, Alan Hostetter, a 56-year-old from San Clemente, posted a selfie to his Instagram account, with a crowd of rioters in the background. Hostetter, who has pleaded not guilty to all charges, tapped out a caption to go with the photo: “This was the ‘shot heard ’round the world!’ … the 2021 version of 1776. That war lasted 8 years. We are just getting warmed up.”

In November 2019, Facebook staffers noticed they had a serious problem. Facebook offers a collection of one-tap emoji reactions. 
Today, they include “like,” “love,” “care,” “haha,” “wow,” “sad,” and “angry.” Company researchers had found that the posts dominated by “angry” reactions were substantially more likely to go against community standards, including prohibitions on various types of misinformation, according to internal documents.

But Facebook was slow to act. In July 2020, researchers presented the findings of a series of experiments. At the time, Facebook was already weighting the reactions other than “like” more heavily in its algorithm—meaning posts that got an “angry” reaction were more likely to show up in users’ News Feeds than posts that simply got a “like.” Anger-inducing content didn’t spread just because people were more likely to share things that made them angry; the algorithm gave anger-inducing content an edge. Facebook’s Integrity workers—employees tasked with tackling problems such as misinformation and espionage on the platform—concluded that they had good reason to believe targeting posts that induced anger would help stop the spread of harmful content.

By dialing anger’s weight back to zero in the algorithm, the researchers found, they could keep posts to which people reacted angrily from being viewed by as many users. That, in turn, translated to a significant (up to 5 percent) reduction in the hate speech, civic misinformation, bullying, and violent posts—all of which are correlated with offline violence—to which users were exposed. Facebook rolled out the change in early September 2020, documents show; a Facebook spokesperson confirmed that the change has remained in effect. It was a real victory for employees of the Integrity team.

But it doesn’t normally work out that way. In April 2020, according to Frances Haugen’s filings with the SEC, Facebook employees had recommended tweaking the algorithm so that the News Feed would deprioritize the surfacing of content for people based on their Facebook friends’ behavior. 
The idea was that a person’s News Feed should be shaped more by people and groups that a person had chosen to follow. Up until that point, if your Facebook friend saw a conspiracy theory and reacted to it, Facebook’s algorithm might show it to you, too. The algorithm treated any engagement in your network as a signal that something was worth sharing. But now Facebook workers wanted to build circuit breakers to slow this form of sharing.

Experiments showed that this change would impede the distribution of hateful, polarizing, and violence-inciting content in people’s News Feeds. But Zuckerberg “rejected this intervention that could have reduced the risk of violence in the 2020 election,” Haugen’s SEC filing says. An internal message characterizing Zuckerberg’s reasoning says he wanted to avoid new features that would get in the way of “meaningful social interactions.” But according to Facebook’s definition, its employees say, engagement is considered “meaningful” even when it entails bullying, hate speech, and reshares of harmful content.

This episode, like Facebook’s response to the incitement that proliferated between the election and January 6, reflects a fundamental problem with the platform. Facebook’s megascale allows the company to influence the speech and thought patterns of billions of people. What the world is seeing now, through the window provided by reams of internal documents, is that Facebook catalogs and studies the harm it inflicts on people. And then it keeps harming people anyway.

“I am worried that Mark’s continuing pattern of answering a different question than the question that was asked is a symptom of some larger problem,” wrote one Facebook employee in an internal post in June 2020, referring to Zuckerberg. “I sincerely hope that I am wrong, and I’m still hopeful for progress. But I also fully understand my colleagues who have given up on this company, and I can’t blame them for leaving. 
Facebook is not neutral, and working here isn’t either.”

“I just wish we could hear the truth directly,” another added. “Anything feels like we (the employees) are being intentionally deceived.”

[Read: What Facebook did to American democracy]

I’ve been covering Facebook for a decade now, and the challenges it must navigate are novel and singularly complex. One of the most important, and heartening, revelations of the Facebook Papers is that many Facebook workers are trying conscientiously to solve these problems. One of the most disheartening features of these documents is that these same employees have little or no faith in Facebook leadership. It is quite a thing to see the sheer number of Facebook employees—people who presumably understand their company as well as or better than outside observers—who believe their employer to be morally bankrupt.

I spoke with several former Facebook employees who described the company’s metrics-driven culture as extreme, even by Silicon Valley standards. (I agreed not to name them, because they feared retaliation and ostracization from Facebook for talking about the company’s inner workings.) Facebook workers are under tremendous pressure to quantitatively demonstrate their individual contributions to the company’s growth goals, they told me. New products and features aren’t approved unless the staffers pitching them demonstrate how they will drive engagement. As a result, Facebook has stoked an algorithm arms race within its ranks, pitting core product-and-engineering teams, such as the News Feed team, against their colleagues on Integrity teams, who are tasked with mitigating harm on the platform. 
These teams establish goals that are often in direct conflict with each other.

One of Facebook’s Integrity staffers wrote at length about this dynamic in a goodbye note to colleagues in August 2020, describing how risks to Facebook users “fester” because of the “asymmetrical” burden placed on employees to “demonstrate legitimacy and user value” before launching any harm-mitigation tactics—a burden not shared by those developing new features or algorithm changes with growth and engagement in mind. The note said:

We were willing to act only after things had spiraled into a dire state … Personally, during the time that we hesitated, I’ve seen folks from my hometown go further and further down the rabbithole of QAnon and Covid anti-mask/anti-vax conspiracy on FB. It has been painful to observe.

Current and former Facebook employees describe the same fundamentally broken culture—one in which effective tactics for making Facebook safer are rolled back by leadership or never approved in the first place. (A Facebook spokesperson rejected the notion that it deprioritizes the well-being of its users.) That broken culture has produced a broken platform: an algorithmic ecosystem in which users are pushed toward ever more extreme content, and where Facebook knowingly exposes its users to conspiracy theories, disinformation, and incitement to violence.

One example is a program that amounts to a whitelist for VIPs on Facebook, allowing some of the users most likely to spread misinformation to break Facebook’s rules without facing consequences. Under the program, internal documents show, millions of high-profile users—including politicians—are left alone by Facebook even when they incite violence. 
Some employees have flagged for their superiors how dangerous this is, explaining in one internal document that Facebook had solid evidence showing that when “a piece of content is shared by a co-partisan politician, it tends to be perceived as more trustworthy, interesting, and helpful than if it’s shared by an ordinary citizen.” In other words, whitelisting influential users with massive followings on Facebook isn’t just a secret and uneven application of Facebook’s rules; it amounts to “protecting content that is especially likely to deceive, and hence to harm, people on our platforms.”

Facebook workers tried and failed to end the program. Only when its existence was reported in September by The Wall Street Journal did Facebook’s Oversight Board ask leadership for more information about the practice. Last week, the board publicly rebuked Facebook for not being “fully forthcoming” about the program. (Although Oversight Board members are selected by Facebook and paid by Facebook, the company characterizes their work as independent.)

The Facebook Papers show that workers agonized over trade-offs between what they saw as doing the right thing for the world and doing the right thing for their employer. “I am so torn,” one employee wrote in December 2020 in response to a colleague’s comments on how to fight Trump’s hate speech and incitements to violence. “Following these recommendations could hasten our own demise in a variety of ways, which might interfere [with] all the other good we do in the world. How do you weigh these impacts?” Messages show workers wanting Facebook to make honorable choices, and worrying that leadership is incapable of doing so. At the same time, many clearly believe that Facebook is still a net force for good, and they also worry about hurting the platform’s growth.

These worries have been exacerbated lately by fears about a decline in new posts on Facebook, two former employees who left the company in recent years told me. 
People are posting new material less frequently to Facebook, and its users are on average older than those of other social platforms. The explosive popularity of platforms such as TikTok, especially among younger people, has rattled Facebook leadership. All of this makes the platform rely more heavily on ways it can manipulate what its users see in order to reach its goals. This explains why Facebook is so dependent on the infrastructure of groups, as well as making reshares highly visible, to keep people hooked.

But this approach poses a major problem for the overall quality of the site, and former Facebook employees repeatedly told me that groups pose one of the biggest threats of all to Facebook users. In a particularly fascinating document, Facebook workers outline the downsides of “community,” a buzzword Zuckerberg often deploys as a way to justify the platform’s existence. Zuckerberg has defined Facebook’s mission as making “social infrastructure to give people the power to build a global community that works for all of us,” but in internal research documents his employees point out that communities aren’t always good for society:

When part of a community, individuals typically act in a prosocial manner. They conform, they forge alliances, they cooperate, they organize, they display loyalty, they expect obedience, they share information, they influence others, and so on. Being in a group changes their behavior, their abilities, and, importantly, their capability to harm themselves or others … Thus, when people come together and form communities around harmful topics or identities, the potential for harm can be greater.

The infrastructure choices that Facebook is making to keep its platform relevant are driving down the quality of the site, and exposing its users to more dangers. Those dangers are also unevenly distributed, because of the manner in which certain subpopulations are algorithmically ushered toward like-minded groups. 
And the subpopulations of Facebook users who are most exposed to dangerous content are also most likely to be in groups where it won’t get reported.

Many Facebook employees believe that their company is hurting people. Many have believed this for years. And even they can’t stop it. “We can’t pretend we don’t see information consumption patterns, and how deeply problematic they are for the longevity of democratic discourse,” a user-experience researcher wrote in an internal comment thread in 2019, in response to a now-infamous memo from Andrew “Boz” Bosworth, a longtime Facebook executive. “There is no neutral position at this stage, it would be powerfully immoral to commit to amorality.”

In the months since January 6, Mark Zuckerberg has made a point of highlighting Facebook’s willingness to help federal investigators with their work. “I believe that the former president should be responsible for his words, and the people who broke the law should be responsible for their actions,” Zuckerberg said in congressional testimony last spring. “So that leaves the question of the broader information ecosystem. Now, I can’t speak for everyone else—the TV channels, radio stations, news outlets, websites, and other apps. But I can tell you what we did. Before January 6, we worked with law enforcement to identify and address threats. During and after the attack, we provided extensive support in identifying the insurrectionists, and removed posts supporting violence. We didn’t catch everything, but we made our services inhospitable to those who might do harm.”

[Read: Mark Zuckerberg’s power is unprecedented]

Zuckerberg’s positioning of Facebook’s role in the insurrection is odd. 
He lumps his company in with traditional media organizations—something he’s ordinarily loath to do, lest the platform be expected to take more responsibility for the quality of the content that appears on it—and suggests that Facebook did more, and did better, than journalism outlets in its response to January 6. What he fails to say is that journalism outlets would never be in the position to help investigators this way, because insurrectionists don’t typically use newspapers and magazines to recruit people for coups.

In hindsight, it is easy to say that Facebook should have made itself far more hostile to insurrectionists before they carried out their attack. But people post passionately about lawful protests all the time. How is Facebook to know which protests will spill into violence and which won’t? The answer here is simple: because its own staffers have obsessively studied this question, and they’re confident that they’ve already found ways to make Facebook safer.

Facebook wants people to believe that the public must choose between Facebook as it is, on the one hand, and free speech, on the other. This is a false choice. Facebook has a sophisticated understanding of measures it could take to make its platform safer without resorting to broad or ideologically driven censorship tactics.

Facebook knows that no two people see the same version of the platform, and that certain subpopulations experience far more dangerous versions than others do. Facebook knows that people who are isolated—recently widowed or divorced, say, or geographically distant from loved ones—are disproportionately at risk of being exposed to harmful content on the platform. It knows that repeat offenders are disproportionately responsible for spreading misinformation. 
And it knows that 3 percent of Facebook users in the United States are super-consumers of conspiracy theories, accounting for 37 percent of known consumption of misinformation on the platform.

The most viral content on Facebook is basically untouchable—some is so viral that even turning down the distribution knob by 90 percent wouldn’t make a dent in its ability to ricochet around the internet. (A Facebook spokesperson told me that although the platform sometimes reduces how often people see content that has been shared by a chain of two or more people, it is reluctant to apply that solution more broadly: “While we have other systems that demote content that might violate our specific policies, like hate speech or nudity, this intervention reduces all content with equal strength. Because it is so blunt, and reduces positive and completely benign speech alongside potentially inflammatory or violent rhetoric, we use it sparingly.”)

Facebook knows that there are harmful activities taking place on the platform that don’t break any rules, including much of the coordination leading up to January 6. And it knows that its interventions touch only a minuscule fraction of Facebook content anyway. Facebook knows that it is sometimes used to facilitate large-scale societal violence. And it knows that it has acted too slowly to prevent such violence in the past.

Facebook could ban reshares. It could consistently enforce its policies regardless of a user’s political power. It could choose to optimize its platform for safety and quality rather than for growth. It could tweak its algorithm to prevent widespread distribution of harmful content. Facebook could create a transparent dashboard so that all of its users can see what’s going viral in real time. It could make public its rules for how frequently groups can post and how quickly they can grow. 
It could also automatically throttle groups when they’re growing too fast, and cap the rate of virality for content that’s spreading too quickly.

Facebook could shift the burden of proof toward people and communities to demonstrate that they’re good actors—and treat reach as a privilege, not a right. Facebook could say that its platform is not for everyone. It could sound an alarm for those who wander into the most dangerous corners of Facebook, and those who encounter disproportionately high levels of harmful content. It could hold its employees accountable for preventing users from finding these too-harmful versions of the platform, thereby preventing those versions from existing.

It could do all of these things. But it doesn’t.

Facebook certainly isn’t the only harmful entity on the social web. Extremism thrives on other social platforms as well, and plenty of them are fueled by algorithms that are equally opaque. Lately, people have been debating just how nefarious Facebook really is. One argument goes something like this: Facebook’s algorithms aren’t magic, its ad targeting isn’t even that good, and most people aren’t that stupid. All of this may be true, but that shouldn’t be reassuring. An algorithm may just be a big dumb means to an end, a clunky way of maneuvering a massive, dynamic network toward a desired outcome. But Facebook’s enormous size gives it tremendous, unstable power. Facebook takes whole populations of people, pushes them toward radicalism, and then steers the radicalized toward one another. For those who found themselves in the “Stop the Steal” corners of Facebook in November and December of last year, the enthusiasm, the sense of solidarity, must have been overwhelming and thrilling. 
Facebook had taken warped reality and distributed it at scale.

I’ve sometimes compared Facebook to a Doomsday Machine in that it is technologically simple and unbelievably dangerous—a black box of sensors designed to suck in environmental cues and deliver mutually assured destruction. When the most powerful company in the world possesses an instrument for manipulating billions of people—an instrument that only it can control, and that its own employees say is badly broken and dangerous—we should take notice.

The lesson for individuals is this: You must be vigilant about the informational streams you swim in, deliberate about how you spend your precious attention, unforgiving of those who weaponize your emotions and cognition for their own profit, and deeply untrusting of any scenario in which you’re surrounded by a mob of people who agree with everything you’re saying.

And the lesson for Facebook is that the public is beginning to recognize that it deserves much greater insight into how the platform’s machinery is designed and deployed. Indeed, that’s the only way to avoid further catastrophe. Without seeing how Facebook works at a finer resolution, in real time, we won’t be able to understand how to make the social web compatible with democracy.
theatlantic.com
How Facebook Failed the World
In the fall of 2019, Facebook launched a massive effort to combat the use of its platforms for human trafficking. Working around the clock, its employees searched Facebook and its subsidiary Instagram for keywords and hashtags that promoted domestic servitude in the Middle East and elsewhere. Over the course of a few weeks, the company took down 129,191 pieces of content, disabled more than 1,000 accounts, tightened its policies, and added new ways to detect this kind of behavior. After they were through, employees congratulated one another on a job well done.

It was a job well done. It just came a little late. In fact, a group of Facebook researchers focused on the Middle East and North Africa had found numerous Instagram profiles being used as advertisements for trafficked domestic servants as early as March 2018. “Indonesian brought with Tourist Visa,” one photo caption on a picture of a woman reads, in Arabic. “We have more of them.” But these profiles weren’t “actioned”—disabled or taken down—an internal report would explain, because Facebook’s policies “did not acknowledge the violation.” A year and a half later, an undercover BBC investigation revealed the full scope of the problem: a broad network that illegally trafficked domestic workers, facilitated by internet platforms and aided by algorithmically boosted hashtags. In response, Facebook banned one hashtag and took down some 700 Instagram profiles. But according to another internal report, “domestic servitude content remained on the platform.”

Not until October 23, 2019, did the hammer drop: Apple threatened to pull Facebook and Instagram from its App Store because of the BBC report. Motivated by what employees describe in an internal document as “potentially severe consequences to the business” that would result from an App Store ban, Facebook finally kicked into high gear. 
The document makes clear that the decision to act was not the result of new information: “Was this issue known to Facebook before BBC enquiry and Apple escalation? Yes.”

The document was part of the disclosure made to the Securities and Exchange Commission and provided to Congress in redacted form by Frances Haugen, the whistleblower and former Facebook data scientist. A consortium of more than a dozen news organizations, including The Atlantic, has reviewed the redacted versions.

Reading these documents is a little like going to the eye doctor and seeing the world suddenly sharpen into focus. In the United States, Facebook has facilitated the spread of misinformation, hate speech, and political polarization. It has algorithmically surfaced false information about conspiracy theories and vaccines, and was instrumental in the ability of an extremist mob to attempt a violent coup at the Capitol. That much is now painfully familiar.

But these documents show that the Facebook we have in the United States is actually the platform at its best. It’s the version made by people who speak our language and understand our customs, who take our civic problems seriously because those problems are theirs too. It’s the version that exists on a free internet, under a relatively stable government, in a wealthy democracy. It’s also the version to which Facebook dedicates the most moderation resources. Elsewhere, the documents show, things are different. In the most vulnerable parts of the world—places with limited internet access, where smaller user numbers mean bad actors have undue influence—the trade-offs and mistakes that Facebook makes can have deadly consequences.

According to the documents, Facebook is aware that its products are being used to facilitate hate speech in the Middle East, violent cartels in Mexico, ethnic cleansing in Ethiopia, extremist anti-Muslim rhetoric in India, and sex trafficking in Dubai. 
It is also aware that its efforts to combat these things are insufficient. A March 2021 report notes, “We frequently observe highly coordinated, intentional activity … by problematic actors” that is “particularly prevalent—and problematic—in At-Risk Countries and Contexts”; the report later acknowledges, “Current mitigation strategies are not enough.”

[Read: What Facebook did to American democracy]

In some cases, employees have successfully taken steps to address these problems, but in many others, the company response has been slow and incomplete. As recently as late 2020, an internal Facebook report found that only 6 percent of Arabic-language hate content on Instagram was detected by Facebook’s systems. Another report that circulated last winter found that, of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools. In both instances, employees blamed company leadership for insufficient investment.

In many of the world’s most fragile nations, a company worth hundreds of billions of dollars hasn’t invested enough in the language- and dialect-specific artificial intelligence and staffing it needs to address these problems. Indeed, last year, according to the documents, only 13 percent of Facebook’s misinformation-moderation staff hours were devoted to the non-U.S. countries in which it operates, whose populations comprise more than 90 percent of Facebook’s users. (Facebook declined to tell me how many countries it has users in.) And although Facebook users post in at least 160 languages, the company has built robust AI detection in only a fraction of those languages, the ones spoken in large, high-profile markets such as the U.S. 
and Europe—a choice, the documents show, that means problematic content is seldom detected.

The granular, procedural, sometimes banal back-and-forth exchanges recorded in the documents reveal, in unprecedented detail, how the most powerful company on Earth makes its decisions. And they suggest that, all over the world, Facebook’s choices are consistently driven by public perception, business risk, the threat of regulation, and the specter of “PR fires,” a phrase that appears over and over in the documents. In many cases, Facebook has been slow to respond to developing crises outside the United States and Europe until its hand is forced. “It’s an open secret … that Facebook’s short-term decisions are largely motivated by PR and the potential for negative attention,” an employee named Sophie Zhang wrote in a September 2020 internal memo about Facebook’s failure to act on global misinformation threats. (Most employee names have been redacted for privacy reasons in these documents, but Zhang left the company and came forward as a whistleblower after she wrote this memo.)

Sometimes, even negative attention isn’t enough. In 2019, the human-rights group Avaaz found that Bengali Muslims in India’s Assam state were “facing an extraordinary chorus of abuse and hate” on Facebook: Posts calling Muslims “pigs,” “rapists,” and “terrorists” were shared tens of thousands of times and left on the platform because Facebook’s artificial-intelligence systems weren’t built to automatically detect hate speech in Assamese, which is spoken by 23 million people. Facebook removed 96 of the 213 “clearest examples” of hate speech Avaaz flagged for the company before publishing its report. 
Facebook still does not have technology in place to automatically detect Assamese hate speech.

In a memo dated December 2020 and posted to Workplace, Facebook’s very Facebooklike internal message board, an employee argued that “Facebook’s decision-making on content policy is routinely influenced by political considerations.” To hear this employee tell it, the problem was structural: Employees who are primarily tasked with negotiating with governments over regulation and national security, and with the press over stories, were empowered to weigh in on conversations about building and enforcing Facebook’s rules regarding questionable content around the world. “Time and again,” the memo quotes a Facebook researcher saying, “I’ve seen promising interventions … be prematurely stifled or severely constrained by key decisionmakers—often based on fears of public and policy stakeholder responses.”

Among the consequences of that pattern, according to the memo: The Hindu-nationalist politician T. Raja Singh, who posted to hundreds of thousands of followers on Facebook calling for India’s Rohingya Muslims to be shot—in direct violation of Facebook’s hate-speech guidelines—was allowed to remain on the platform despite repeated requests to ban him, including from the very Facebook employees tasked with monitoring hate speech. A 2020 Wall Street Journal article reported that Facebook’s top public-policy executive in India had raised concerns about backlash if the company were to do so, saying that cracking down on leaders from the ruling party might make running the business more difficult. 
The company eventually did ban Singh, but not before his posts ping-ponged through the Hindi-speaking world.

In a Workplace thread apparently intended to address employee frustration after the Journal article was published, a leader explained that Facebook’s public-policy teams “are important to the escalations process in that they provide input on a range of issues, including translation, socio-political context, and regulatory risks of different enforcement options.”

[Adrienne LaFrance: The largest autocracy on Earth]

Employees weren’t placated. In dozens and dozens of comments, they questioned the decisions Facebook had made regarding which parts of the company to involve in content moderation, and raised doubts about its ability to moderate hate speech in India. They called the situation “sad” and Facebook’s response “inadequate,” and wondered about the “propriety of considering regulatory risk” when it comes to violent speech.

“I have a very basic question,” wrote one worker. “Despite having such strong processes around hate speech, how come there are so many instances that we have failed? It does speak on the efficacy of the process.”

Two other employees said that they had personally reported certain Indian accounts for posting hate speech. Even so, one of the employees wrote, “they still continue to thrive on our platform spewing hateful content.”

We “cannot be proud as a company,” yet another wrote, “if we continue to let such barbarism flourish on our network.”

Taken together, Frances Haugen’s leaked documents show Facebook for what it is: a platform racked by misinformation, disinformation, conspiracy thinking, extremism, hate speech, bullying, abuse, human trafficking, revenge porn, and incitements to violence.
It is a company that has pursued worldwide growth since its inception—and then, when called upon by regulators, the press, and the public to quell the problems its sheer size has created, it has claimed that its scale makes completely addressing those problems impossible. Instead, Facebook’s 60,000-person global workforce is engaged in a borderless, endless, ever-bigger game of Whac-a-Mole, one with no winners and a lot of sore arms.

Sophie Zhang was one of the people playing that game. Despite being a junior-level data scientist, she had a knack for identifying “coordinated inauthentic behavior,” Facebook’s term for the fake accounts that have exploited its platforms to undermine global democracy, defraud users, and spread false information. In her memo, which is included in the Facebook Papers but was previously leaked to BuzzFeed News, Zhang details what she found in her nearly three years at Facebook: coordinated disinformation campaigns in dozens of countries, including India, Brazil, Mexico, Afghanistan, South Korea, Bolivia, Spain, and Ukraine. In some cases, such as in Honduras and Azerbaijan, Zhang was able to tie accounts involved in these campaigns directly to ruling political parties. In the memo, posted to Workplace the day Zhang was fired from Facebook for what the company alleged was poor performance, she says that she made decisions about these accounts with minimal oversight or support, despite repeated entreaties to senior leadership. On multiple occasions, she said, she was told to prioritize other work.

Facebook has not disputed Zhang’s factual assertions about her time at the company, though it maintains that controlling abuse of its platform is a top priority. A Facebook spokesperson said that the company tries “to keep people safe even if it impacts our bottom line,” adding that the company has spent $13 billion on safety since 2016.
“Our track record shows that we crack down on abuse abroad with the same intensity that we apply in the U.S.”

Zhang’s memo, though, paints a different picture. “We focus upon harm and priority regions like the United States and Western Europe,” she wrote. But eventually, “it became impossible to read the news and monitor world events without feeling the weight of my own responsibility.” Indeed, Facebook explicitly prioritizes certain countries for intervention by sorting them into tiers, the documents show. Zhang “chose not to prioritize” Bolivia, despite credible evidence of inauthentic activity in the run-up to the country’s 2019 election. That election was marred by claims of fraud, which fueled widespread protests; more than 30 people were killed and more than 800 were injured.

[Read: Facebook’s id is showing]

“I have blood on my hands,” Zhang wrote in the memo. By the time she left Facebook, she was having trouble sleeping at night. “I consider myself to have been put in an impossible spot—caught between my loyalties to the company and my loyalties to the world as a whole.”

In February, just over a year after Facebook’s high-profile sweep for Middle Eastern and North African domestic-servant trafficking, an internal report identified a web of similar activity, in which women were being trafficked from the Philippines to the Persian Gulf, where they were locked in their homes, denied pay, starved, and abused. This report found that content “should have been detected” for violating Facebook’s policies but had not been, because the mechanism that would have detected much of it had recently been made inactive. The title of the memo is “Domestic Servitude: This Shouldn’t Happen on FB and How We Can Fix It.”

What happened in the Philippines—and in Honduras, and Azerbaijan, and India, and Bolivia—wasn’t just that a very large company lacked a handle on the content posted to its platform.
It was that, in many cases, a very large company knew what was happening and failed to meaningfully intervene.

That Facebook has repeatedly prioritized solving problems for Facebook over solving problems for users should not be surprising. The company is under the constant threat of regulation and bad press. Facebook is doing what companies do, triaging and acting in its own self-interest.

But Facebook is not like other companies. It is bigger, and the stakes of its decisions are higher. In North America, we have recently become acutely aware of the risks and harms of social media. But the Facebook we see is the platform at its best. Any solutions will need to apply not only to the problems we still encounter here, but also to those with which the other 90 percent of Facebook’s users struggle every day.
theatlantic.com
The case against Mark Zuckerberg: Insiders say Facebook’s CEO chose growth over safety
The SEC has been asked to probe whether Mark Zuckerberg's ironclad management style, described in newly released documents and by insiders, led to disastrous outcomes.
washingtonpost.com
How Wolf Trap’s Arvind Manocha would spend a perfect day in D.C.
Manocha has been the CEO and president of the national park for the arts since 2013.
washingtonpost.com
The Magic of Bad Photos
It’s rare to see a bad photo today. If, by chance, a bad photo is taken and cannot be filtered, edited, or otherwise enhanced into something visually acceptable, it is swiftly deleted. Why hold on to anything less than perfect? Why, when with a cost-free click you can disappear it from your digital life, lest it ever inadvertently make its way onto someone else’s social feed, where it might be screengrabbed for eternity?

It wasn’t always like this. Bad pictures used to abound in what could seem like an almost deliberate, karmic attempt to humiliate and haunt their imperfect subjects. Back when the one-click Kodak dominated, most pictures—unflattering, off-center, accidental, overexposed, and everyone as red-eyed as vermin—were not worth keeping. No one could figure out how to operate the focus. Hardly anyone knew when to turn off the flash, or how. Few people had any aesthetic sense. You could sift through a roll’s worth of fresh prints, their chemical scent almost wetting the air, and not find a single picture aimed anywhere less ominous than the region directly below your chin.

You never knew what you would get once the little button was clicked. You had to wait to find out, usually a week or longer, until 24-hour photo shops, with their bargain-basement development quality, were introduced. You’d head back to a Fotomat after having dropped off the little black plastic roll, full of hope, barely remembering what was on there, because film was precious and the roll may have taken months to complete, especially if it was a 36 rather than a 24, only to open the envelope and discover one blurred atrocity after another.

[Read: The rise and fall of an American tech giant]

Things got worse during that frantic period in the ’90s when every catered wedding and sweet-16 party featured dozens of disposable Fuji cameras that somehow landed only in the hands of guests who couldn’t take a single decent picture.
You’d be tempted to throw a good number of shots away, but more often you didn’t, because film was expensive and tossing out photographs seemed like a vain and frivolous thing to do. Dare to snatch the film from a friend’s Polaroid as it slid out of the slot, convinced you’d been caught looking ridiculous, and you risked certain wrath.

Browsing through photo albums from this time is like encountering a dark period from an inexplicable and occasionally brutal-looking past, one in which everyone cried at parties and scowled through reunions and looked miserable at their brother’s Little League games. No one ever thought to bring a camera along on those rare moments when you were looking your best. School pictures routinely documented the horror. Your braces. The uneven middle part. That mottled gray backdrop. You might try to hide the telltale 8-by-10 envelope from your parents—of course they’d ordered an overpriced set—but they’d keep the photos anyway, as if out of spite. These once-a-year portraits were part of your childhood history! For the rest of adolescence, you’d flee any adult wielding a camera.

From this angle, it was impossible to fathom the impending dominance of the selfie. Who knew how much people would adore taking pictures of themselves? That teenagers, a traditionally awkward and self-conscious set, could spend entire afternoons posing and perfecting shots of themselves. That seniors worldwide would love selfies so much, tour buses would make stops not for plain old photos of landscapes and landmarks but for pictures of the tourists themselves. That entire “Instagram museums” would pop up purely for the purpose of snapping pictures against wacky backgrounds; that in lieu of docents, museum staff members would stand by to help take photos of visitors posing inside Instagram-ready installations. That upscale hotels and restaurants would design bathroom lighting specifically to enhance selfie potential. Yes, bathroom lighting.
But the background in all of these situations is secondary to the main attraction, because in our perfected and selected selfies, we all always look our best.

And yet. Snap-happy people today seem to miss something about those less inhibited, less groomed days—something that has gotten lost amid the relentless Instagram parade of goofy puckers, extended tongues, cute cross-eyes, and three-quarter-angled images. Curiously keen to recapture the not-knowing-what-the-hell-is-on-there waiting period that analog film required, young digital types have taken up the popular Dispo camera app, which forces its users to wait until 9 a.m. the following day before photos “develop” and they can view the damage. Dispo calls itself a “live in the moment” social-media product—no editing, no hashtags, no captions. Is it possible that bad photos showed us something we wanted or needed to see?

This article was adapted from Pamela Paul’s forthcoming book, 100 Things We’ve Lost to the Internet.
theatlantic.com
America has a gun violence problem. What do we do about it?
Gun violence has been a serious epidemic in America for decades. Researchers say the first step to curbing the violence is to dig into the details.
abcnews.go.com
ShowBiz Minute: Tyler, Baldwin, BTS
James Michael Tyler, who played Gunther on "Friends," dies; Crew member: Baldwin careful with guns before fatal shooting; BTS stream virtual concert BTS Permission to Dance on Stage. (Oct. 25)      
usatoday.com
Trippy mushroom-themed resort is fit for a ‘fungi’
Giant mushroom structures adorn the exterior of this eye-popping luxury hotel in southern China. Aerial footage of the resort looks like a scene ripped from “Alice in Wonderland.” The views are nearly as wild as the ones at The Tiger Lodge in the UK.
nypost.com
Delays on all six Metro lines for Monday morning commute
The delays continue as Metro is grappling with problems with train wheel assemblies that led to a derailment earlier this month.
washingtonpost.com