Facebook Is Responsible (7/20/2016)


What do you do when you can’t find any way to stop terrorism? Recently, the Israeli government has come up with a solution … blame Facebook. Members of the Israeli government assert that these worldwide, public networks should be removing any hateful post that may encourage terror.

The Israeli government was not alone in calling out Facebook. A private organization called “Shurat HaDin” (literally, “letter of the law”) defines itself as “an Israeli-based civil rights organization, and world leader in combating terrorist organizations, along with the regimes that support them.” Shurat HaDin, which seeks justice through lawsuits litigated in courtrooms around the world, filed a class-action lawsuit against Facebook on behalf of 20,000 Israelis. The lawsuit, Lakin vs. Facebook, was certified as a class-action suit by an Israeli judge and allowed to proceed. The plaintiffs claim that

“Facebook is much more than a neutral internet platform or a mere ‘publisher’ of speech, because its algorithms connect the terrorists to the inciters. Facebook actively assists the inciters to find people who are interested in acting on their hateful messages, by offering friend, group and event suggestions, and targeting advertising based on people’s online ‘likes’ and internet browsing history.”

Not to be outdone, the Israeli Knesset today passed a preliminary reading of a new bill that would make social networks responsible for removing posts that promote terror. The bill states that managers of a social network will face fines if they do not remove posts meant to incite terror. The bill specifically names Facebook, YouTube, Twitter and Google.

If one didn’t know better, one would think the bill was submitted by the government itself, or introduced by one of the coalition’s right-wing Knesset members. However, that was not the case. The bill, which seeks to hold social networks accountable for posts inciting terror that they allow to remain on their systems, was submitted by MK Revital Swid of the Zionist Camp (the Labor Party). In explaining the need for this bill, Swid wrote: “in recent months the State of Israel has faced a wave of terror perpetrated by individuals.” Swid continued: “At the same time, there has been a rise in incitement in the virtual world specifically in social media.”

The government was swift to back Swid’s bill. When it was introduced, Minister of Public Security, MK Gilad Erdan stated: “This bill is right, necessary, and one can say suits the need of the hour.” The bill quickly passed its first reading by a vote of 50 to 4. (Note: the bill must pass three readings to become law).

To date, there has been no extended debate about the role of social media. Do we expect social media companies to become censors of all hateful speech? What happens in other countries, where criticizing the government is defined as hate speech? The Knesset also did not debate what the ramifications of passing a law targeting Facebook and Google might be. Both companies maintain large research and development facilities in Israel. Just this week, the Israeli Treasury proposed lowering the tax rate on companies that do substantial R&D in Israel to 6% – half a point below the rate charged by Ireland. So, on one hand, Israel, “The Start-Up Nation,” does whatever it can to attract investments and interest from the largest global technology firms; on the other hand, it passes laws that make those same firms liable for customer posts.

Unfortunately, no one has a real solution to the global plague called terrorism. The scourge of terror has struck, and continues to strike, across the far corners of the earth. Regrettably, Israel’s new, ill-thought-out law will not be the last misguided attempt to solve this heinous problem.


Are Christians Responsible For The Worst Atrocities Of History?

It is sometimes claimed that Christians were responsible for some of the most horrendous things throughout history. Examples often include the Salem Witch Trials, the Crusades, and the Dark Ages.

It is true that much of the evil done in the world has been done with Christ’s name attached to it: but does that mean Christ was responsible for the evil? Let’s consider.

First of all, it needs to be pointed out that just because something is said to be done in the name of Christ does not actually mean that it is done with His authority. Consider the unjust killings during the aforementioned examples. Much of this evil was done by the Catholic church: does that mean that they acted with Christ’s approval?

Jesus never taught that the Gospel should be spread by force. Indeed, He taught the exact opposite (as did His Apostles):

Matthew 5:44-45-44 But I say to you, love your enemies, bless those who curse you, do good to those who hate you, and pray for those who spitefully use you and persecute you, 45 that you may be sons of your Father in heaven for He makes His sun rise on the evil and on the good, and sends rain on the just and on the unjust.

2 Corinthians 10:4-5-4 For the weapons of our warfare are not carnal but mighty in God for pulling down strongholds, 5 casting down arguments and every high thing that exalts itself against the knowledge of God, bringing every thought into captivity to the obedience of Christ,

Let’s not confuse the sins and evils of Christ’s followers with what Christ Himself taught.

Second, it is important to realize that the worst atrocities in history have actually been committed by atheist and pagan regimes. In elaborating upon these facts, Dinesh D’Souza has written:

“In this chapter, I want to focus on the really big crimes that have been committed by atheist groups and governments. In the past hundred years or so, the most powerful atheist regimes—Communist Russia, Communist China, and Nazi Germany—have wiped out people in astronomical numbers. Stalin was responsible for around twenty million deaths, produced through mass slayings, forced labor camps, show trials followed by firing squads, population relocation and starvation, and so on. Jung Chang and Jon Halliday’s authoritative recent study Mao: The Unknown Story attributes to Mao Zedong’s regime a staggering seventy million deaths.4 Some China scholars think Chang and Halliday’s numbers are a bit high, but the authors present convincing evidence that Mao’s atheist regime was the most murderous in world history. Stalin’s and Mao’s killings—unlike those of, say, the Crusades or the Thirty Years’ War—were done in peacetime and were performed on their fellow countrymen. Hitler comes in a distant third with around ten million murders, six million of them Jews. So far, I haven’t even counted the assassinations and slayings ordered by other Soviet dictators like Lenin, Khrushchev, Brezhnev, and so on. Nor have I included a host of “lesser” atheist tyrants: Pol Pot, Enver Hoxha, Nicolae Ceauşescu, Fidel Castro, Kim Jong-il. Even these “minor league” despots killed a lot of people. Consider Pol Pot, who was the leader of the Khmer Rouge, the Communist Party faction that ruled Cambodia from 1975 to 1979. Within this four-year period Pol Pot and his revolutionary ideologues engaged in systematic mass relocations and killings that eliminated approximately one-fifth of the Cambodian population, an estimated 1.5 million to 2 million people. In fact, Pol Pot killed a larger percentage of his countrymen than Stalin and Mao killed of theirs.5 Even so, focusing only on the big three—Stalin, Hitler, and Mao—we have to recognize that atheist regimes have in a single century murdered more than one hundred million people…Religion-inspired killing simply cannot compete with the murders perpetrated by atheist regimes….Communism calls for the elimination of the exploiting class, it extols violence as a way to social progress, and it calls for using any means necessary to achieve the atheist utopia. Not only was Marx an atheist, but atheism was also a central part of the Marxist doctrine. Atheism became a central component of the Soviet Union’s official ideology, it is still the official doctrine of China, and Stalin and Mao enforced atheist policies by systematically closing churches and murdering priests and religious believers. All Communist regimes have been strongly anti-religious, suggesting that their atheism is intrinsic rather than incidental to their ideology….The atheist regimes, by their actions, confirm the truth of Dostoevsky’s dictum: if God is not, everything is permitted. Whatever the cause for why atheist regimes do what they do, the indisputable fact is that all the religions of the world put together have in three thousand years not managed to kill anywhere near the number of people killed in the name of atheism in the past few decades. It’s time to abandon the mindlessly repeated mantra that religious belief has been the main source of human conflict and violence. Atheism, not religion, is responsible for the worst mass murders of history.” (Dinesh D’Souza, What’s So Great About Christianity? 213-220 (Kindle Edition): Washington, DC Regnery Publishing Inc.)

Our atheist friends who tell us that the world would be better off without Christianity make their claims, but the track record of atheist-inspired and pagan-oriented regimes reveals quite a different story. The fact is, the only true basis for a great civilization will be found in people returning to God’s Word:

Proverbs 14:34-Righteousness exalts a nation, but sin is a reproach to any people.

Why not today turn your life to the Son of God? Jesus Christ came to this world to save you from your sin (1 Timothy 1:15). He died in your place on the Cross of Calvary, taking your sins upon Himself (1 Timothy 2:6). He was buried, and three days later He arose from the dead (1 Corinthians 15:1-8).

Why not today, as a believer, repent and be baptized into Christ for the remission of your sins (Acts 2:37-38)?

Why not, if you are an erring child of God, repent and pray today (Acts 8:22; 1 John 1:9)?

The grace of the Lord Jesus Christ, and the love of God, and the communion of the Holy Spirit be with you all. Amen.


Great Pyramid of Giza, Egypt

The Great Pyramid, located at Giza on the west bank of the Nile River north of Cairo in Egypt, is the only wonder of the ancient world that has survived to the present day. It is part of a group of three pyramids–Khufu (Cheops), Khafra (Chephren) and Menkaura (Mycerinus)–that were built between 2700 B.C. and 2500 B.C. as royal tombs. The largest and most impressive is Khufu, known as “The Great Pyramid,” which covers 13 acres and is believed to contain more than 2 million stone blocks that weigh from two to 30 tons each. For more than 4,000 years, Khufu reigned as the tallest building in the world. In fact, it took modern man until the 19th century to build a taller structure. Amazingly, the nearly symmetrical Egyptian pyramids were built without the aid of modern tools or surveying equipment.

So, how did Egyptians build the pyramids? Scientists believe that the Egyptians used log rollers and sledges to move the stones into place. The sloped walls, which were intended to mimic the rays of Ra, the sun god, were originally built as steps, and then filled in with limestone. The interior of the pyramids included narrow corridors and hidden chambers in an unsuccessful attempt to foil grave robbers. Although modern archeologists have found some great treasures among the ruins, they believe most of what the pyramids once contained was looted within 250 years of their completion.



What Facebook Did to American Democracy

In the media world, as in so many other realms, there is a sharp discontinuity in the timeline: before the 2016 election, and after.

Things we thought we understood—narratives, data, software, news events—have had to be reinterpreted in light of Donald Trump’s surprising win as well as the continuing questions about the role that misinformation and disinformation played in his election.

Tech journalists covering Facebook had a duty to cover what was happening before, during, and after the election. Reporters tried to see past their often liberal political orientations and the unprecedented actions of Donald Trump to see how 2016 was playing out on the internet. Every component of the chaotic digital campaign has been reported on, here at The Atlantic, and elsewhere: Facebook’s enormous distribution power for political information, rapacious partisanship reinforced by distinct media information spheres, the increasing scourge of “viral” hoaxes and other kinds of misinformation that could propagate through those networks, and the Russian information ops agency.

But no one delivered the synthesis that could have tied together all these disparate threads. It’s not that this hypothetical perfect story would have changed the outcome of the election. The real problem—for all political stripes—is understanding the set of conditions that led to Trump’s victory. The informational underpinnings of democracy have eroded, and no one has explained precisely how.

We’ve known since at least 2012 that Facebook was a powerful, non-neutral force in electoral politics. In that year, a combined University of California, San Diego and Facebook research team led by James Fowler published a study in Nature, which argued that Facebook’s “I Voted” button had driven a small but measurable increase in turnout, primarily among young people.

Rebecca Rosen’s 2012 story, “Did Facebook Give Democrats the Upper Hand?” relied on new research from Fowler, et al., about the presidential election that year. Again, the conclusion of their work was that Facebook’s get-out-the-vote message could have driven a substantial chunk of the increase in youth voter participation in the 2012 general election. Fowler told Rosen that it was “even possible that Facebook is completely responsible” for the youth voter increase. And because a higher proportion of young people vote Democratic than the general population, the net effect of Facebook’s GOTV effort would have been to help the Dems.

The research showed that a small design change by Facebook could have electoral repercussions, especially with America’s electoral-college format in which a few hotly contested states have a disproportionate impact on the national outcome. And the pro-liberal effect it implied became enshrined as an axiom of how campaign staffers, reporters, and academics viewed social media.

In June 2014, Harvard Law scholar Jonathan Zittrain wrote an essay in New Republic called, “Facebook Could Decide an Election Without Anyone Ever Finding Out,” in which he called attention to the possibility of Facebook selectively depressing voter turnout. (He also suggested that Facebook be seen as an “information fiduciary,” charged with certain special roles and responsibilities because it controls so much personal data.)

In late 2014, The Daily Dot called attention to an obscure Facebook-produced case study on how strategists defeated a statewide measure in Florida by relentlessly focusing Facebook ads on Broward and Dade counties, Democratic strongholds. Working with a tiny budget that would have allowed them to send a single mailer to just 150,000 households, the digital-advertising firm Chong and Koster was able to obtain remarkable results. “Where the Facebook ads appeared, we did almost 20 percentage points better than where they didn’t,” testified a leader of the firm. “Within that area, the people who saw the ads were 17 percent more likely to vote our way than the people who didn’t. Within that group, the people who voted the way we wanted them to, when asked why, often cited the messages they learned from the Facebook ads.”

In April 2016, Robinson Meyer published “How Facebook Could Tilt the 2016 Election” after a company meeting in which some employees apparently put the stopping-Trump question to Mark Zuckerberg. Based on Fowler’s research, Meyer reimagined Zittrain’s hypothetical as a direct Facebook intervention to depress turnout among non-college graduates, who leaned Trump as a whole.

Facebook, of course, said it would never do such a thing. “Voting is a core value of democracy and we believe that supporting civic participation is an important contribution we can make to the community,” a spokesperson said. “We as a company are neutral—we have not and will not use our products in a way that attempts to influence how people vote.”

They wouldn’t do it intentionally, at least.

As all these examples show, though, the potential for Facebook to have an impact on an election was clear for at least half a decade before Donald Trump was elected. But rather than focusing specifically on the integrity of elections, most writers—myself included, some observers like Sasha Issenberg, Zeynep Tufekci, and Daniel Kreiss excepted—bundled electoral problems inside other, broader concerns like privacy, surveillance, tech ideology, media-industry competition, or the psychological effects of social media.

The same was true even of people inside Facebook. “If you’d come to me in 2012, when the last presidential election was raging and we were cooking up ever more complicated ways to monetize Facebook data, and told me that Russian agents in the Kremlin’s employ would be buying Facebook ads to subvert American democracy, I’d have asked where your tin-foil hat was,” wrote Antonio García Martínez, who managed ad targeting for Facebook back then. “And yet, now we live in that otherworldly political reality.”

Not to excuse us, but this was back on the Old Earth, too, when electoral politics was not the thing that every single person talked about all the time. There were other important dynamics to Facebook’s growing power that needed to be covered.

Facebook’s draw is its ability to give you what you want. Like a page, get more of that page’s posts; like a story, get more stories like that; interact with a person, get more of their updates. The way Facebook determines the ranking of the News Feed is the probability that you’ll like, comment on, or share a story. Shares are worth more than comments, which are both worth more than likes, but in all cases, the more likely you are to interact with a post, the higher up it will show in your News Feed. Two thousand kinds of data (or “features” in the industry parlance) get smelted in Facebook’s machine-learning system to make those predictions.
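
As a rough illustration of that ranking logic, here is a minimal sketch in Python. Everything in it is invented for the example (the weights, the predicted probabilities, the stories); the only ideas taken from the description above are that the feed is sorted by predicted engagement and that shares outweigh comments, which outweigh likes.

```python
from dataclasses import dataclass

# A minimal, illustrative sketch of engagement-weighted feed ranking.
# The weights below are assumptions for this example, not Facebook's
# actual values; the real system weighs thousands of "features."

@dataclass
class Story:
    title: str
    p_like: float     # predicted probability the user likes the post
    p_comment: float  # predicted probability the user comments
    p_share: float    # predicted probability the user shares

# Assumed ordering from the text: shares > comments > likes.
WEIGHTS = {"like": 1.0, "comment": 3.0, "share": 5.0}

def engagement_score(s: Story) -> float:
    """Expected engagement value of showing this story to this user."""
    return (WEIGHTS["like"] * s.p_like
            + WEIGHTS["comment"] * s.p_comment
            + WEIGHTS["share"] * s.p_share)

feed = [
    Story("Friend's vacation photos", p_like=0.30, p_comment=0.05, p_share=0.01),
    Story("Viral hoax headline", p_like=0.25, p_comment=0.10, p_share=0.12),
    Story("Local news article", p_like=0.10, p_comment=0.02, p_share=0.03),
]

# Higher predicted engagement floats to the top of the News Feed.
for story in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(story):.2f}  {story.title}")
```

Note that nothing in such a score rewards accuracy or civic value; a hoax with high predicted engagement outranks a sober story, which is the dynamic the rest of this piece describes.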

What’s crucial to understand is that, from the system’s perspective, success is correctly predicting what you’ll like, comment on, or share. That’s what matters. People call this “engagement.” There are other factors, as Slate’s Will Oremus noted in this rare story about the News Feed ranking team, but who knows how much weight they actually receive, or for how long, as the system evolves. For example, one change that Facebook highlighted to Oremus in early 2016—taking into account how long people look at a story, even if they don’t click it—was subsequently dismissed in a May 2017 technical talk by Lars Backstrom, the VP of engineering in charge of News Feed ranking, as a “noisy” signal that’s also “biased in a few ways,” making it “hard to use.”

Facebook’s engineers do not want to introduce noise into the system. Because the News Feed, this machine for generating engagement, is Facebook’s most important technical system. Their success predicting what you’ll like is why users spend an average of more than 50 minutes a day on the site, and why even the former creator of the “like” button worries about how well the site captures attention. News Feed works really well.

But as far as “personalized newspapers” go, this one’s editorial sensibilities are limited. Most people are far less likely to engage with viewpoints that they find confusing, annoying, incorrect, or abhorrent. And this is true not just in politics, but the broader culture.

That this could be a problem was apparent to many. Eli Pariser’s The Filter Bubble, which came out in the summer of 2011, became the most widely cited distillation of the effects Facebook and other internet platforms could have on public discourse.

Pariser began the research for the book when he noticed that conservative people, whom he’d befriended on the platform despite his left-leaning politics, had disappeared from his News Feed. “I was still clicking my progressive friends’ links more than my conservative friends’— and links to the latest Lady Gaga videos more than either,” he wrote. “So no conservative links for me.”

Through the book, he traces the many potential problems that the “personalization” of media might bring. Most germane to this discussion, he raised the point that if every one of the billion News Feeds is different, how can anyone understand what other people are seeing and responding to?

“The most serious political problem posed by filter bubbles is that they make it increasingly difficult to have a public argument. As the number of different segments and messages increases, it becomes harder and harder for the campaigns to track who’s saying what to whom,” Pariser wrote. “How does a [political] campaign know what its opponent is saying if ads are only targeted to white Jewish men between 28 and 34 who have expressed a fondness for U2 on Facebook and who donated to Barack Obama’s campaign?”

This did, indeed, become an enormous problem. When I was editor in chief of Fusion, we set about trying to track the “digital campaign” with several dedicated people. What we quickly realized was that there was both too much data—the noisiness of all the different posts by the various candidates and their associates—as well as too little. Targeting made it impossible to track the actual messaging that the campaigns were paying for. On Facebook, the campaigns could show ads only to the people they targeted, so we couldn’t see the messages that were actually reaching people in battleground areas. From the outside, it was a technical impossibility to know what ads were running on Facebook, an impossibility that the company had fought to keep intact.

Pariser suggests in his book, “one simple solution to this problem would simply be to require campaigns to immediately disclose all of their online advertising materials and to whom each ad is targeted.” That could happen in future campaigns.

Imagine if this had happened in 2016. If there were data sets of all the ads that the campaigns and others had run, we’d know a lot more about what actually happened last year. The Filter Bubble is obviously prescient work, but there was one thing that Pariser and most other people did not foresee. And that’s that Facebook became completely dominant as a media distributor.

About two years after Pariser published his book, Facebook took over the news-media ecosystem. They’ve never publicly admitted it, but in late 2013, they began to serve ads inviting users to “like” media pages. This caused a massive increase in the amount of traffic that Facebook sent to media companies. At The Atlantic and other publishers across the media landscape, it was like a tide was carrying us to new traffic records. Without hiring anyone else, without changing strategy or tactics, without publishing more, suddenly everything was easier.

While traffic to The Atlantic from Facebook.com increased, at the time, most of the new traffic did not look like it was coming from Facebook within The Atlantic’s analytics. It showed up as “direct/bookmarked” or some variation, depending on the software. It looked like what I called “dark social” back in 2012. But as BuzzFeed’s Charlie Warzel pointed out at the time, and as I came to believe, it was primarily Facebook traffic in disguise. Between August and October of 2013, BuzzFeed’s “partner network” of hundreds of websites saw a jump in traffic from Facebook of 69 percent.

At The Atlantic, we ran a series of experiments that showed, pretty definitively from our perspective, that most of the stuff that looked like “dark social” was, in fact, traffic coming from within Facebook’s mobile app. Across the landscape, it began to dawn on people who thought about these kinds of things: Damn, Facebook owns us. They had taken over media distribution.

Why? This is a best guess, proffered by Robinson Meyer as it was happening: Facebook wanted to crush Twitter, which had drawn a disproportionate share of media and media-figure attention. Just as Instagram borrowed Snapchat’s “Stories” to help crush the site’s growth, Facebook decided it needed to own “news” to take the wind out of the newly IPO’d Twitter.

The first sign that this new system had some kinks came with “Upworthy-style” headlines. (And you’ll never guess what happened next!) Things didn’t just go kind of viral; they went ViralNova, a site which, like Upworthy itself, Facebook eventually smacked down. Many of the new sites had, like Upworthy, which was cofounded by Pariser, a progressive bent.

Less noticed was that a right-wing media was developing in opposition to and alongside these left-leaning sites. “By 2014, the outlines of the Facebook-native hard-right voice and grievance spectrum were there,” The New York Times’ media and tech writer John Herrman told me, “and I tricked myself into thinking they were a reaction/counterpart to the wave of soft progressive/inspirational content that had just crested. It ended up a Reaction in a much bigger and destabilizing sense.”

The other sign of algorithmic trouble was the wild swings that Facebook Video underwent. In the early days, just about any old video was likely to generate many, many, many views. The numbers were insane. Just as an example, a Fortune article noted that BuzzFeed’s video views “grew 80-fold in a year, reaching more than 500 million in April.” Suddenly, all kinds of video—good, bad, and ugly—were doing 1-2-3 million views.

As with news, Facebook’s video push was a direct assault on a competitor, YouTube. Videos changed the dynamics of the News Feed for individuals, for media companies, and for anyone trying to understand what the hell was going on.

Individuals were suddenly inundated with video. Media companies, despite having no business model for it, were forced to crank out video somehow or risk their pages/brands losing relevance as video posts crowded others out.

And on top of all that, scholars and industry observers were used to looking at what was happening in articles to understand how information was flowing. Now, by far the most viewed media objects on Facebook, and therefore on the internet, were videos without transcripts or centralized repositories. In the early days, many successful videos were just “freebooted” (i.e., stolen) videos from other places or reposts. All of which served to confuse and obfuscate the transport mechanisms for information and ideas on Facebook.

Through this messy, chaotic, dynamic situation, a new media rose up through the Facebook burst to occupy the big filter bubbles. On the right, Breitbart is the center of a new conservative network. A study of 1.25 million election news articles found “a right-wing media network anchored around Breitbart developed as a distinct and insulated media system, using social media as a backbone to transmit a hyper-partisan perspective to the world.”

Breitbart, of course, also lent Steve Bannon, its chief, to the Trump campaign, creating another feedback loop between the candidate and a rabid partisan press. Through 2015, Breitbart grew from a medium-sized site with a small Facebook page of 100,000 likes into a powerful force shaping the election, with almost 1.5 million likes. In the key metric for Facebook’s News Feed, its posts got 886,000 interactions from Facebook users in January. By July, Breitbart had surpassed The New York Times’ main account in interactions. By December, it was doing 10 million interactions per month, about 50 percent of Fox News, which had 11.5 million likes on its main page. Breitbart’s audience was hyper-engaged.

There is no precise equivalent to the Breitbart phenomenon on the left. Rather, the big news organizations are classified as center-left, basically, with fringier left-wing sites showing far smaller followings than Breitbart on the right.

And this new, hyperpartisan media created the perfect conditions for another dynamic that influenced the 2016 election, the rise of fake news.

In a December 2015 article for BuzzFeed, Joseph Bernstein argued that “the dark forces of the internet became a counterculture.” He called it “Chanterculture” after the trolls who gathered at the meme-creating, often-racist 4chan message board. Others ended up calling it the “alt-right.” This culture combined a bunch of people who loved to perpetuate hoaxes with angry Gamergaters with “free-speech” advocates like Milo Yiannopoulos with honest-to-God neo-Nazis and white supremacists. And these people loved Donald Trump.

“This year Chanterculture found its true hero, who makes it plain that what we’re seeing is a genuine movement: the current master of American resentment, Donald Trump,” Bernstein wrote. “Everywhere you look on ‘politically incorrect’ subforums and random chans, he looms.”

When you combine hyper-partisan media with a group of people who love to clown “normies,” you end up with things like Pizzagate, a patently ridiculous and widely debunked conspiracy theory that held there was a child-pedophilia ring somehow linked to Hillary Clinton. It was just the most bizarre thing in the entire world. And many of the figures in Bernstein’s story were all over it, including several whom the current president has consorted with on social media.

But Pizzagate was but the most Pynchonian of all the crazy misinformation and hoaxes that spread in the run-up to the election.

BuzzFeed, deeply attuned to the flows of the social web, was all over the story through reporter Craig Silverman. His best-known analysis happened after the election, when he showed that “in the final three months of the U.S. presidential campaign, the top-performing fake election-news stories on Facebook generated more engagement than the top stories from major news outlets such as The New York Times, The Washington Post, The Huffington Post, NBC News, and others.”

But he also tracked fake news before the election, as did other outlets such as The Washington Post, including showing that Facebook’s “Trending” algorithm regularly promoted fake news. By September of 2016, even the Pope himself was talking about fake news, by which we mean actual hoaxes or lies perpetrated by a variety of actors.

The longevity of Snopes shows that hoaxes are nothing new to the internet. Already in January 2015, Robinson Meyer reported about how Facebook was “cracking down on the fake news stories that plague News Feeds everywhere.”

What made the election cycle different was that all of these changes to the information ecosystem had made it possible to develop weird businesses around fake news. Some random website posting aggregated news about the election could not drive a lot of traffic. But some random website announcing that the Pope had endorsed Donald Trump definitely could. The fake news generated a ton of engagement, which meant that it spread far and wide.

A few days before the election, Silverman and fellow BuzzFeed contributor Lawrence Alexander traced 100 pro–Donald Trump sites to a town of 45,000 in Macedonia. Some teens there realized they could make money off the election, and just like that, became a node in the information network that helped Trump beat Clinton.

Whatever weird thing you imagine might happen, something weirder probably did happen. Reporters tried to keep up, but it was too strange. As Max Read put it in New York Magazine, Facebook is “like a four-dimensional object, we catch slices of it when it passes through the three-dimensional world we recognize.” No one can quite wrap their heads around what this thing has become, or all the things this thing has become.

“Not even President-Pope-Viceroy Zuckerberg himself seemed prepared for the role Facebook has played in global politics this past year,” Read wrote.

And we haven’t even gotten to the Russians.

Russia’s disinformation campaigns are well known. During his reporting for a story in The New York Times Magazine, Adrian Chen sat across the street from the headquarters of the Internet Research Agency, watching workaday Russian agents/internet trolls head inside. He heard how the place had “industrialized the art of trolling” from a former employee. “Management was obsessed with statistics—page views, number of posts, a blog’s place on LiveJournal’s traffic charts—and team leaders compelled hard work through a system of bonuses and fines,” he wrote. Of course they wanted to maximize engagement, too!

There were reports that Russian trolls were commenting on American news sites. There were many, many reports of Russia’s propaganda offensive in Ukraine. Ukrainian journalists run a website called StopFake dedicated to cataloging these disinformation attempts. It has hundreds of posts reaching back into 2014.

A Guardian reporter who looked into Russian military doctrine around information war found a handbook that described how it might work. “The deployment of information weapons, [the book] suggests, ‘acts like an invisible radiation’ upon its targets: ‘The population doesn’t even feel it is being acted upon. So the state doesn’t switch on its self-defense mechanisms,’” wrote Peter Pomerantsev.

As more details about the Russian disinformation campaign come to the surface through Facebook’s continued digging, it’s fair to say that it’s not just the state that did not switch on its self-defense mechanisms. The influence campaign just happened on Facebook without anyone noticing.

As many people have noted, the 3,000 ads that have been linked to Russia are a drop in the bucket, even if they did reach millions of people. The real game is simply that Russian operatives created pages that reached people “organically,” as the saying goes. Jonathan Albright, research director of the Tow Center for Digital Journalism at Columbia University, pulled data on the six publicly known Russia-linked Facebook pages. He found that their posts had been shared 340 million times. And those were six of 470 pages that Facebook has linked to Russian operatives. You’re probably talking billions of shares, with who knows how many views, and with what kind of specific targeting.
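
For a sense of scale, here is the back-of-envelope extrapolation behind that "billions of shares" estimate, as a short Python sketch. The per-page average and the assumption that the other 464 pages performed like the six known ones are purely illustrative; Albright’s data covers only the six public pages.

```python
# Back-of-envelope extrapolation, for illustration only.
known_pages = 6
known_shares = 340_000_000   # shares Albright counted across the six pages
total_pages = 470            # pages Facebook linked to Russian operatives

# Illustrative assumption: the unknown pages were shared at roughly
# the same rate as the six known ones (actual rates are unknown).
shares_per_page = known_shares / known_pages
estimated_total = shares_per_page * total_pages

print(f"~{shares_per_page:,.0f} shares per known page")
print(f"~{estimated_total / 1e9:.1f} billion shares if all {total_pages} pages were similar")
```

Under that (strong) assumption, the figure lands in the tens of billions of shares, which is why even rough estimates dwarf the 3,000 paid ads.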

The Russians are good at engagement! Yet, before the U.S. election, even after Hillary Clinton and intelligence agencies fingered Russian intelligence meddling in the election, even after news reports suggested that a disinformation campaign was afoot, nothing about the actual operations on Facebook came out.

In the aftermath of these discoveries, three Facebook security researchers, Jen Weedon, William Nuland, and Alex Stamos, released a white paper called Information Operations and Facebook. “We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam, and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” they wrote.

One key theme of the paper is that they were used to dealing with economic actors, who responded to costs and incentives. When it comes to Russian operatives paid to use Facebook, those constraints no longer hold. “The area of information operations does provide a unique challenge,” they wrote, “in that those sponsoring such operations are often not constrained by per-unit economic realities in the same way as spammers and click fraudsters, which increases the complexity of deterrence.” They were not expecting that.

Add everything up. The chaos of a billion-person platform that competitively dominated media distribution. The known electoral efficacy of Facebook. The wild fake news and misinformation rampaging across the internet generally and Facebook specifically. The Russian info operations. All of these things were known.

And yet no one could quite put it all together: The dominant social network had altered the information and persuasion environment of the election beyond recognition while taking a very big chunk of the estimated $1.4 billion worth of digital advertising purchased during the election. There were hundreds of millions of dollars of dark ads doing their work. Fake news all over the place. Macedonian teens campaigning for Trump. Ragingly partisan media infospheres serving up only the news you wanted to hear. Who could believe anything? What room was there for policy positions when all this stuff was eating up News Feed space? Who the hell knew what was going on?

As late as August 20, 2016, The Washington Post could say this of the campaigns:

Hillary Clinton is running arguably the most digital presidential campaign in U.S. history. Donald Trump is running one of the most analog campaigns in recent memory. The Clinton team is bent on finding more effective ways to identify supporters and ensure they cast ballots. Trump is, famously and unapologetically, sticking to a 1980s-era focus on courting attention and voters via television.

Just a week earlier, Trump’s campaign had hired Cambridge Analytica. Soon, they’d ramped up to $70 million a month in Facebook advertising spending. And the next thing you knew, Brad Parscale, Trump’s digital director, was doing the postmortem rounds talking up his win.

“These social platforms are all invented by very liberal people on the west and east coasts,” Parscale said. “And we figure out how to use it to push conservative values. I don’t think they thought that would ever happen.”

And that was part of the media’s problem, too.

Before Trump’s election, the impact of internet technology generally and Facebook specifically was seen as favoring Democrats. Even a TechCrunch critique of Rosen’s 2012 article about Facebook’s electoral power argued, “the internet inherently advantages liberals because, on average, their greater psychological embrace of disruption leads to more innovation (after all, nearly every major digital breakthrough, from online fundraising to the use of big data, was pioneered by Democrats).”

Certainly, the Obama tech team that I profiled in 2012 thought this was the case. Of course, social media would benefit the (youthful, diverse, internet-savvy) left. And the political bent of just about all Silicon Valley companies runs Democratic. For all the talk about Facebook employees embedding with the Trump campaign, the former CEO of Google, Eric Schmidt, sat with the Obama tech team on Election Day 2012.

In June 2015, The New York Times ran an article about Republicans trying to ramp up their digital campaigns that began like this: “The criticism after the 2012 presidential election was swift and harsh: Democrats were light-years ahead of Republicans when it came to digital strategy and tactics, and Republicans had serious work to do on the technology front if they ever hoped to win back the White House.”

It cited Sasha Issenberg, the most astute reporter on political technology. “The Republicans have a particular challenge,” Issenberg said, “which is, in these areas they don’t have many people with either the hard skills or the experience to go out and take on this type of work.”

University of North Carolina journalism professor Daniel Kreiss wrote a whole (good) book, Prototype Politics, showing that Democrats had an incredible personnel advantage. “Drawing on an innovative data set of the professional careers of 629 staffers working in technology on presidential campaigns from 2004 to 2012 and data from interviews with more than 60 party and campaign staffers,” Kreiss wrote, “the book details how and explains why the Democrats have invested more in technology, attracted staffers with specialized expertise to work in electoral politics, and founded an array of firms and organizations to diffuse technological innovations down ballot and across election cycles.”

Which is to say: It’s not that no journalists, internet-focused lawyers, or technologists saw Facebook’s looming electoral presence—it was undeniable—but all the evidence pointed to the structural change benefitting Democrats. And let’s just state the obvious: Most reporters and professors are probably about as liberal as your standard Silicon Valley technologist, so this conclusion fit into the comfort zone of those in the field.

By late October, the role that Facebook might be playing in the Trump campaign—and more broadly—was emerging. Joshua Green and Issenberg reported a long feature on the data operation then in motion. The Trump campaign was working to suppress “idealistic white liberals, young women, and African Americans,” and they’d be doing it with targeted, “dark” Facebook ads. These ads are only visible to the buyer, the ad recipients, and Facebook. No one who hasn’t been targeted by them can see them. How was anyone supposed to know what was going on, when the key campaign terrain was literally invisible to outside observers?

Steve Bannon was confident in the operation. “I wouldn’t have come aboard, even for Trump, if I hadn’t known they were building this massive Facebook and data engine,” Bannon told them. “Facebook is what propelled Breitbart to a massive audience. We know its power.”

Issenberg and Green called it “an odd gambit” which had “no scientific basis.” Then again, Trump’s whole campaign had seemed like an odd gambit with no scientific basis. The conventional wisdom was that Trump was going to lose and lose badly. In the days before the election, The Huffington Post’s data team had Clinton’s election probability at 98.3 percent. A member of the team, Ryan Grim, went after Nate Silver for his more conservative probability of 64.7 percent, accusing him of skewing his data for “punditry” reasons. Grim ended his post on the topic, “If you want to put your faith in the numbers, you can relax. She’s got this.”

Narrator: She did not have this.

But the point isn’t that a Republican beat a Democrat. The point is that the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.

In the middle of the summer of the election, the former Facebook ad-targeting product manager, Antonio García Martínez, released an autobiography called Chaos Monkeys. He called his colleagues “chaos monkeys,” messing with industry after industry in their company-creating fervor. “The question for society,” he wrote, “is whether it can survive these entrepreneurial chaos monkeys intact, and at what human cost.” This is the real epitaph of the election.

The information systems that people use to process news have been rerouted through Facebook, and in the process, mostly broken and hidden from view. It wasn’t just liberal bias that kept the media from putting everything together. Much of the hundreds of millions of dollars that was spent during the election cycle came in the form of “dark ads.”

The truth is that while many reporters knew some things that were going on on Facebook, no one knew everything that was going on on Facebook, not even Facebook. And so, during the most significant shift in the technology of politics since the television, the first draft of history is filled with undecipherable whorls and empty pages. Meanwhile, the 2018 midterms loom.

Update: After publication, Adam Mosseri, head of News Feed, sent an email describing some of the work that Facebook is doing in response to the problems during the election. They include new software and processes “to stop the spread of misinformation, click-bait and other problematic content on Facebook.”

“The truth is we’ve learned things since the election, and we take our responsibility to protect the community of people who use Facebook seriously. As a result, we’ve launched a company-wide effort to improve the integrity of information on our service,” he wrote. “It’s already translated into new products, new protections, and the commitment of thousands of new people to enforce our policies and standards. We know there is a lot more work to do, but I’ve never seen this company more engaged on a single challenge since I joined almost 10 years ago.”


3. Does the company promote diversity and inclusion?

Yes. Nine months after joining the company as global chief diversity officer, in June 2014, Maxine Williams published Facebook's diversity figures for the first time. The blog post was striking if predictable: 69% of employees were male and 57% were white. Among technical employees (i.e., primarily developers and hardware engineers), 85% were male and 53% were white. Those numbers had to change.

Williams wrote at the time:

Research … shows that diverse teams are better at solving complex problems and enjoy more dynamic workplaces. So at Facebook we're serious about building a workplace that reflects a broad range of experience, thought, geography, age, background, gender, sexual orientation, language, culture and many other characteristics.

Maxine Williams has been Facebook's chief diversity officer since September 2013. Image source: Facebook.

Where is Facebook on the diversity spectrum today, five years later? Doing better: 63.1% of the workforce is male versus 36.9% female. More importantly, white staff no longer comprise an overwhelming majority. Instead, white workers now account for 44.2% of all roles and 40% of technical roles. Also of note: 32.6% of senior leadership positions at Facebook are now occupied by women, up from 23% in 2014.

That's different from the four executives Facebook profiles at its investor relations site. There, COO Sheryl Sandberg is the only woman, and no people of color are yet represented. Of the members of the eight-person board, three, including Sandberg, are women, and one of the men -- former American Express CEO Kenneth Chenault -- is nonwhite.

Not surprisingly, Williams believes she and the company as a whole can do better, writing in a blog post about the 2019 survey results:

We envision a company where in the next five years, at least 50% of our workforce will be women, people who are Black, Hispanic, Native American, Pacific Islanders, people with two or more ethnicities, people with disabilities, and veterans. In doing this, we aim to double our number of women globally and Black and Hispanic employees in the US. It will be a company that reflects and better serves the people on our platforms, services, and products. It will be a more welcoming community advancing our mission and living up to the responsibility that comes with it.

Prioritizing diversity and inclusion requires audacious thinking and the wherewithal to follow through. So far, Williams and her team appear empowered to provide both to a workforce hungry for greater representation. The company ranks 71st on Forbes' Best Employers for Women and doesn't rank at all on Forbes' Best Employers for Diversity, which illustrates that while Facebook is a national leader in workplace gender inclusion, it still has plenty of room to improve.


History of Slavery

Enslaved people in the antebellum South constituted about one-third of the southern population. Most lived on large plantations or small farms; many masters owned fewer than 50 enslaved people.

Landowners sought to make the people they enslaved completely dependent on them through a system of restrictive codes. Enslaved people were usually prohibited from learning to read and write, and their behavior and movement were restricted.

Many masters raped enslaved women, and rewarded obedient behavior with favors, while rebellious enslaved people were brutally punished. A strict hierarchy among the enslaved (from privileged house workers and skilled artisans down to lowly field hands) helped keep them divided and less likely to organize against their masters.

Marriages between enslaved men and women had no legal basis, but many did marry and raise large families; most slave owners encouraged this practice, but nonetheless did not usually hesitate to divide families by sale or removal.


5. Continued Controversy Over Censorship & Facebook’s Algorithm

Related to the problem of fake news is the controversy around the way that Facebook surfaces information in general. In May 2016, Facebook ran into trouble when it came to light that people working on its Trending Topics team were affecting the way stories appeared in Trending Topics and intentionally suppressing conservative media.

Facebook’s also run into trouble where removing or not removing content is concerned. The platform has strict guidelines that dictate what it will and will not allow on the site, but there’s subjectivity in that. For example, Facebook found itself in the middle of controversy when it removed a Pulitzer Prize-winning photo because it violated nudity guidelines on the site.

The other major part of the argument about how Facebook serves content is the algorithm that’s based on users. A major tenet of the Facebook feed is that it’s supposed to be tailored to you and what you like. The flip side of that personalization, though, is the echo chamber. Facebook’s algorithm filters out things that it doesn’t think you’ll agree with, warping your worldview.

There’s no quick fix for problems like the subjectivity of offensive content or the rise of fake news, but one thing’s abundantly clear as we look at the biggest scandals and PR crises that Facebook’s faced: As the world moves forward and takes on new challenges in social media and communication, Facebook’s going to be at the center of the conversation.


Is too much democracy responsible for the rise of Trump?

“Trump is arguably the most unlikely, unsuitable, and unpopular presidential nominee of a major party in American history,” begins scholar Thomas Mann in his new paper about competing theories on democratic access. But, Mann argues, Trump did not come out of nowhere. As economic stagnation and concern over refugee migration have strengthened right-wing populist parties and politicians in much of Europe, Mann argues that similar forces are at work in the U.S. Furthermore, the fact that Trump’s takeover occurred within the GOP should come as no surprise. Mann and Ornstein’s previous work detailed how the Republican Party has become an “insurgent outlier—ideologically extreme … unpersuaded by conventional understanding of facts, evidence, and science and dismissive of the legitimacy of its political opposition.” Many of Trump’s public statements underscore a natural fit between him and the party. But other parts of Trump’s message—isolationism, skepticism of free trade, and others—go against Republican orthodoxy, leading some to believe he is more of an outsider than an extension of the GOP.

Regardless, the undoubtedly unusual nature of Trump’s candidacy has led some to question the health of the American democratic system. Arguments about the health of American democracy often fall into one of two camps. Illustrative of the first camp is Andrew Sullivan. As Mann explains, Sullivan argues that the original barriers the Founders constructed to guard democracy against the “tyranny of the majority” have slowly eroded, replacing more representative means of democracy with direct ones. Trump, in Sullivan’s take, used this development to his advantage.

On the other side, Michael Lind makes a different case. Rather than an excess of democracy, he argues that the institutional strength of the parties, the shifting importance of the courts and executive branch, declining voter participation, and many other factors have limited the influence of ordinary citizens. Perhaps, Lind says, the voters who routinely think that “people like me don’t have any say” were actually right. In this scenario, Trump cast off the Republican establishment because, as Mann articulates, he didn’t need them anyway.

These two sides of democracy, or perhaps the tension always within it, are not new to American politics. Many scholars and thinkers have written responses to the perceived excess or dearth of democracy. In the current paper, Mann reviews a new contribution to the conversation: “Democracy for Realists” by Christopher Achen and Larry Bartels. Mann writes, “What makes this new book … so unsettling is its withering assault on both popular and scholarly conceptions of democracy.” As Achen and Bartels write, “The political ‘belief systems’ of ordinary citizens are generally thin, disorganized, and ideologically incoherent.” For Achen and Bartels, the problem is not lethargic voters, but unrealistic ideals. The expectation that, amid the rest of our hectic lives, we should all engage in thoughtful research, reflection, and debate on every issue and then vote accordingly is simply too much to ask.

Similarly, Achen and Bartels reject the retrospective theory of voting, in which voters punish or reward incumbents for past performance. Unfortunately, voters are notoriously bad at connecting changes in their welfare with real policy change, often “punishing incumbents for changes that are clearly acts of God or nature,” Mann writes.

But all is not lost. After dismantling the more idealistic conceptions of democracy, Achen and Bartels advocate a more realistic conception of democracy based on group psychology. This theory rests on the idea that social identity is as much a driver of political identification as ideology, if not more. This is not a new idea: not only is group psychology key to understanding much about human beings, but the group theory of democracy has threads in political science dating all the way back to the 1900s. In his new paper, Mann gives a thorough overview of Achen and Bartels’ work, but also includes some important scholarly dissents. As Mann points out, all parties to the debate seem to agree that low voter turnout and weak civic engagement are indeed real, and have perverse effects on democracy. In the end, Mann concludes that whether you believe voters are rational actors influenced by well-formed policy positions, or social beings motivated by group identities, increasing turnout would lead to a more representative electorate—one that may be decisive in the upcoming election.


Did Social Media Ruin Election 2016?

I've noticed two distinct ways social media have changed the way we talk to each other about politics. Clearly, they have changed a lot, maybe everything, but two fairly new phenomena stand out.

One happens on Facebook all the time. Just about all of your friends are posting about the election, nonstop. And there are a few who brag about deleting friends, or who urge friends to unfriend them over their political leanings: "Just unfriend me now." Or something like "If you can't support candidate X/Y, we don't need to be friends anymore." Or "Congrats, if you're reading this, you survived my friend purge!" Etc. You know how it goes. This public declaration, if not celebration, of the end of friendships because of politics.

And then on Twitter, there's the public shaming of those who dare disagree with or insult you. (I am guilty of this.) Someone tweets at you with something incendiary, bashing the article you just shared or the point you just made, mocking something you said about politics, calling you stupid. You quote the tweet, maybe sarcastically, to prove it doesn't affect you. But it does! You tweeted it back, to all of your followers. It's an odd cycle. A rebuttal of nasty political exchanges by highlighting nasty political exchanges.

This is our present political social life: We don’t just create political strife for ourselves; we seem to revel in it.

When we look back on the role that sites like Twitter, Facebook (and Instagram and Snapchat and all the others) have played in our national political discourse this election season, it would be easy to spend most of our time examining Donald Trump’s effect on these media, particularly Twitter. It’s been well documented: Trump may very well have the most combative online presence of any candidate for president in modern history.

But underneath that glaring and obvious conclusion, there's a deeper story about how the very DNA of social media platforms and the way people use them has trickled up through our political discourse and affected all of us, almost forcing us to wallow in the divisive waters of our online conversation. And it all may have helped make Election 2016 one of the most unbearable ever.

A problem with format

Fully understanding just how social media have changed our national political conversation means understanding what these platforms were initially intended to do, and how we use them now.

At its core, Twitter is a messaging service allowing users (who can remain anonymous) to tweet out information, or opinions, or whatever, in 140-character bursts. For many critics, that DNA makes Twitter antithetical to sophisticated, thoughtful political conversation.

"Both the technology itself, and the way we choose to use the technology, makes it so that what ought to be a conversation is just a set of Post-it notes that are scattered," Kerric Harvey, author of the Encyclopedia of Social Media and Politics, said of Twitter. "Not even on the refrigerator door, but on the ground."

She argues that what we do on Twitter around politics isn't a conversation at all; it's a loud mess.

Bridget Coyne, a senior manager at Twitter, points to several features the company has added to those 140-character tweets: polls, photos, video, Moments and more. She also told NPR that the 140-character limit reflects the app's start as a mobile-first platform, and that it's different now: "We've evolved into a website and many other platforms from that." And she, like every other spokesperson for a major social media platform, argues that sites like Twitter have democratized the political conversation, helping give everyone a voice, and that's a good thing.

But even accepting that point, and respecting every new addition to Twitter's list of tools, we find a way to keep arguing. Even the candidates do it.

One particular exchange between Hillary Clinton and Jeb Bush (remember him?) illustrates this new political reality. On Aug. 10, 2015, Clinton's Twitter account posted a graphic with the words: "$1.2 trillion, the amount 40 million Americans owe in student debt."

"Cost won't be a barrier to an education. Debt won't hold you back. Read Hillary's plan: https://t.co/A4pWb3fOf4 pic.twitter.com/KVyr8SlSVn" (Hillary Clinton, @HillaryClinton, Aug. 10, 2015)

Jeb Bush's campaign replied, tweaking Clinton's own graphic to read "100%, The increase in student debt under this Democratic White House."

Those two tweets seem reasonable enough. But there was more. In response to the Bush campaign's response, Team Clinton scratched out the words in Bush's redone graphic, added its own scribbled letters, and etched a large "F" on top, for the "grade given to Florida for college affordability under Jeb Bush's leadership." The campaign tweeted the image with the caption "Fixed it for you."

And then, the Bush account replied once more, rotating Clinton's "H" logo, with its right-pointing arrow, by 90 degrees, sending the arrow skyward, with the word "taxes" printed behind it over and over. That caption read "fixed your logo for you."

It was an exchange bordering on petty: these two candidates were trolling each other. But for the most part, it seemed totally normal in a campaign season like this one, and in the digital age in which we live. Establishment political figures like Bush and Clinton (or at least their young staffers) had co-opted the language of social media and mastered its formats, with all the snark and back-and-forth that come along with them, and with an extra incentive to adopt some of the meanness Trump has exhibited online.

There may be even bigger problems for Twitter than what real live people are doing on the app. A recent study conducted by a research team at Oxford University found that in the period between the first presidential debate and the second, one-third of pro-Trump tweets and nearly one-fifth of pro-Clinton tweets came from automated accounts. Douglas Guilbeault, one of the researchers on the study, told NPR that hurts political discourse. "They reinforce the sense of polarization in the atmosphere," he said. "Because bots don't tend to be mild-mannered, judicious critics. They are programmed to align themselves with an agenda that is unambiguously representative of a particular party. ... It's all 'Crooked Hillary' and 'Trump is a puppet.' "

So, if Twitter is a bunch of Post-it notes thrown on the ground, we now have to consider which of those notes are even real.

The company would not offer its own estimate on the number of bots on its app, or any on-the-record rebuttal to the study's findings, besides the following statement: "Anyone claiming that spam accounts on Twitter are distorting the national, political conversation is misinformed."

Even if there are questions about the number of bots on Twitter, the tone of the conversation there is increasingly hard to deny. A recent study from the Anti-Defamation League found "a total of 2.6 million tweets containing language frequently found in anti-Semitic speech were posted across Twitter between August 2015 and July 2016," with many aimed at political journalists. And a Bloomberg report found trolling on the service is keeping the company from finding a buyer.

Facebook and the "echo chamber"

Facebook fares no better, drawing its own scathing critiques of its influence on the political conversation. At its core, it's a platform meant to connect users with people they already like, not to foster discussion with people they might disagree with.

Facebook's News Feed, which is how most users see content on the app and site, is more likely to prominently display content that matches a user's previous interests and conforms to his or her political ideology. A Wall Street Journal interactive from May of this year shows just how much your feed is shaped by your political leanings.
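
If the mechanic sounds abstract, here is a deliberately crude sketch in Python, with invented sample data. This is not Facebook's actual code or its real ranking signals; it is just the general pattern critics describe, in which content resembling what a user already engaged with floats to the top.

# A toy ranker, not Facebook's code: it scores candidate posts by how
# much their words overlap with posts the user already liked, so the
# feed drifts toward what the user has engaged with before.

from collections import Counter

def interest_profile(liked_posts):
    # Build a crude interest profile: word counts across liked posts.
    profile = Counter()
    for post in liked_posts:
        profile.update(post.lower().split())
    return profile

def rank_feed(candidate_posts, profile):
    # Higher overlap with past likes means higher placement in the feed.
    def score(post):
        return sum(profile[word] for word in post.lower().split())
    return sorted(candidate_posts, key=score, reverse=True)

liked = ["great rally for candidate x", "candidate x wins debate"]
candidates = [
    "policy analysis from neutral experts",  # no overlap, ranked last
    "candidate y town hall highlights",      # partial overlap ("candidate")
    "candidate x announces new rally",       # strong overlap, ranked first
]
for post in rank_feed(candidates, interest_profile(liked)):
    print(post)

Run on the sample data, the post echoing the user's favored candidate ranks first and the neutral policy piece ranks last: the echo-chamber effect in miniature.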

The company also faced rebuke from conservatives when it tried to share trending news stories on users' homepages; critics said the featured articles reflected a liberal bias. And after trying unsuccessfully to filter fake news stories out of users' feeds, Facebook has increasingly been accused of becoming a hotbed of fake political news. The most recent allegation comes from a BuzzFeed report, which found that a good amount of fake (and trending) Donald Trump news is coming from business-savvy millennials. In Macedonia.

In response to these critiques, Facebook pointed NPR to a September post from the company's CEO, Mark Zuckerberg, in which he said, "Whatever TV station you might watch or whatever newspaper you might read, on Facebook you're hearing from a broader set of people than you would have otherwise."

In that same post, Zuckerberg also pointed to studies showing that more and more young people are getting their news primarily from sites like Facebook, and that young people say it helps them see a "larger and more diverse set of opinions." And Zuckerberg said the company is trying to do a better job of sifting out fake news.

Late last month, Facebook COO Sheryl Sandberg said Facebook had helped more than 2 million people register to vote.

It's not just the social networks

Social networks are built the way they're built, but how we've used them this year says just as much about our shortcomings as about any particular network's flaws.

Data tracking trending topics and themes on social networks over the course of the campaign show that for the most part, America was less concerned with policy than with everything else. Talkwalker, a social media analytics company, found that the top three political themes across social media platforms during the past year were Trump's comments about women, Clinton's ongoing email scandal, and Trump's refusal to release his tax returns.

"Social media may have played a role in creating a kind of scandal-driven, as opposed to issue-driven, campaign," said Todd Grossman, CEO of Talkwalker Americas, "where topics such as Trump's attitude towards women, Trump's tax returns and Clinton's emails have tended to dominate discussion as opposed to actual policy issues."

And Brandwatch, another company that tracks social media trends, found that on Twitter, from the time Trump and Clinton formally began their campaigns for president, aside from conversation around the three presidential debates, only two policy-driven conversations were in their top 10 most-tweeted days. Those were Trump calling for a complete ban on Muslims entering the United States, and Trump visiting Mexico and delivering a fiery immigration speech in Arizona in the span of 24 hours. Brandwatch found that none of Clinton's 10 biggest days on Twitter centered on policy, save for the debates. (And even in that debate conversation, topics like "nasty woman" and "bad hombres" outpaced others.)

Looking to the future

So we end this campaign season with social media platforms seemingly hardwired for political argument, obfuscation and division. We are a public more concerned with scandal than policy, at least according to the social media data. And our candidates for higher office, led by Trump, seem more inclined to adopt the combative nature of social media than ever before.

It's too late to fix these problems for this election, but a look to the social networks of tomorrow might offer some hope.

Snapchat has emerged as the social network of the future. Data from Public Opinion Strategies find that more than 60 percent of U.S. smartphone owners ages 18 to 34 are using Snapchat, and that on any given day, Snapchat reaches 41 percent of all 18- to 34-year-olds in the U.S. Any hope for the social media discourse of the future may be found with those young users.

Peter Hamby, head of news at Snapchat, says the platform is a "fundamentally different" experience than other social media platforms, in part because, he says, on Snapchat, privacy is key. "I think that people want to have a place where they can communicate with their friends and have fun, but also feel safe," Hamby said.

He also said he is working on figuring out what young people want in a social network and how to make it better. And, he said, social media users increasingly want to rely on their social networks to make sense of the flood of political opinions, reporting and vitriol they're being bombarded with. "One thing that me and my team have tried to do," Hamby told NPR, "is explain the election. ... Because a lot of stuff you see on the Web, and TV, is pretty noisy."

In asking whether social media ruined this election or not, I had to ask myself how my actions on social media have helped or hurt the country's political dialogue — what my contribution to all that noise has been. I'd have to say that even when I've tried to help, I'm not sure I've done enough.

Last month, I shared an article about something political on Twitter. Two women got into an argument in the replies to my tweet. I could tell that they didn't know each other, and that they were supporting different candidates for president. Every tweet they hurled back and forth at each other mentioned me, so I got notifications during every step of their online fight. At one point, they began to call each other names, with one young woman calling the other the "C" word.

I stepped in, told the two that they maybe should take a break from Twitter for a bit, do something else (or at least remove me from their mentions). Both responded. They apologized to each other and to me, and they both promised to log off for a bit. One mentioned trying to play a role in creating a nicer world after the election.

I left it at that, but should I have done more? Should I have urged the two to message each other privately, try to talk politics civilly, maybe think about ways to have enriching, productive conversations online (or better yet, in person)? Should I have asked myself if the words I used in sharing the original article helped lead to the argument? Should the three of us have made it a teachable moment?

Instead, they retreated from their battle positions for a few hours at best, never getting to know the stranger they insulted. And I moved on, and just kept tweeting.

But I had to, right? Making the social Web nicer always takes a back seat to just trying to keep up. There were more tweets to see, more stuff to read, more Internet Post-it notes to throw along our social media floor.

If social media ruined 2016, it's because of that: We haven't stopped long enough to try to sort it all out.


Delete Facebook movement grows amid brewing backlash

Facebook’s privacy issues are now front and center. The company’s loose handling of how user data was acquired by app developers has plunged it into the biggest crisis of its 14-year existence. The revelation that a data analytics company used by Donald Trump’s presidential campaign was able to surreptitiously collect data on 50 million people through a seemingly innocuous quiz app has forced CEO Mark Zuckerberg to issue a public apology and promise changes.

Taking a step back to look at Facebook’s pattern of privacy issues provides an important perspective on just how many times the company has faced serious criticism. What follows is a rundown of the biggest privacy issues Facebook has faced to date:

When: September 2006

What: Facebook debuts News Feed

Facebook’s response: Tells users to relax

Facebook was only two years old when it introduced News Feed on Sept. 5, 2006. The curated feed was intended as a central destination so users didn't have to browse through friends' profiles to see what they had changed.

Facebook had about 8 million users at the time, and not all of them were happy about every move of their personal life being blasted into a daily feed for their friends.

An estimated 1 million users joined "Facebook News Feed protest groups," arguing the feature was too intrusive. But Facebook stayed the course.

“One of the things I'm most proud of about Facebook is that we believe things can always be better, and we're willing to make big bets if we think it will help our community over the long term,” Zuckerberg said in a post reflecting on the 10th anniversary of News Feed.

The outrage died down, and News Feed became a major part of Facebook’s success.

When: December 2007

What: Beacon, Facebook’s first big brush with advertising privacy issues

Facebook’s response: Zuckerberg apologizes, gives users choice to opt out

There was once a time when companies could track purchases by Facebook users and then notify those users' Facebook friends of what had been bought, often without any user consent.

In an apology on Dec. 6, 2007, Zuckerberg explained his thought process behind the program, called Beacon, and announced that users would be given the option to opt out of it.

“We were excited about Beacon because we believe a lot of information people want to share isn’t on Facebook, and if we found the right balance, Beacon would give people an easy and controlled way to share more of that information with their friends,” he said.

At the time, Facebook was also talking to the Federal Trade Commission (FTC) about online privacy and advertising.

When: November 2011

What: Facebook settles FTC privacy charges

Facebook’s response: Facebook agrees to undergo an independent privacy evaluation every other year for the next 20 years.

Facebook settled with the Federal Trade Commission in 2011 over charges that it didn't keep its privacy promise to users by allowing private information to be made public without warning.

Regulators said Facebook falsely claimed that third-party apps were able to access only the data they needed to operate. In fact, the apps could access nearly all of a user’s personal data. Facebook users who never authorized a third-party app could even have private posts collected if their friends used apps. Facebook was also charged with sharing user information with advertisers, despite a promise that it wouldn’t.

"Facebook is obligated to keep the promises about privacy that it makes to its hundreds of millions of users," Jon Leibowitz, then chairman of the FTC, said at the time. "Facebook's innovation does not have to come at the expense of consumer privacy. The FTC action will ensure it will not."

As part of the 2011 agreement, Facebook remains liable for a $16,000-per-day penalty for each count of the settlement it violates.

When: June 2013

What: Facebook bug exposes private contact info

Facebook’s response: Facebook fixes bug, notifies people whose info may have been exposed.

A bug exposed the email addresses and phone numbers of 6 million Facebook users to anyone who had some connection to the person or knew at least one piece of their contact information.

The bug was discovered by a white-hat hacker, someone who hacks with the intention of helping companies find bugs and build better security practices.

Facebook explained that when people joined and uploaded their contact lists, it would match that data against other people on Facebook in order to create friend recommendations.

“For example, we don’t want to recommend that people invite contacts to join Facebook if those contacts are already on Facebook; instead, we want to recommend that they invite those contacts to be their friends on Facebook,” Facebook’s team explained in a June 2013 message.

That information was “inadvertently stored in association with people’s contact information,” Facebook said. That meant that when a Facebook user chose to download their information through Facebook’s DYI (Download Your Information) tool, they were provided with a list of additional contact information for people they knew or with whom they may have had some association.

Facebook said it pulled the tool offline and fixed it. The company also said it had notified regulators and pledged to tell affected users.

When: July 2014

What: Mood-manipulation experiment on hundreds of thousands of Facebook users

Facebook’s response: Facebook data scientist apologizes

Facebook's mood-manipulation experiment in 2014 included more than half a million randomly selected users. Facebook altered their news feeds to show more positive or negative posts. The purpose of the study was to show how emotions could spread on social media. The results were published in the Proceedings of the National Academy of Sciences, kicking off a firestorm of backlash over whether the study was ethical.

Adam D.I. Kramer, the Facebook data scientist who led the experiment, ultimately posted an apology on Facebook. Four years later, the apology no longer appears to be online.

“I can understand why some people have concerns about it, and my co-authors and I are very sorry for the way the paper described the research and any anxiety it caused,” he wrote, according to The New York Times.

When: April 2015

What: Facebook cuts off apps from taking basically all the data they want

Facebook’s response: Please keep building apps

If Person A downloads an app, that app shouldn’t be able to suck data from Person B just because they’re friends, right? In 2014, Facebook cited privacy concerns and promised it would limit developers’ access to user data. But by the time the policy took effect the next year, Facebook had one big problem: It still couldn’t keep track of how many developers were using previously downloaded data, according to current and former employees who spoke with The Wall Street Journal.

When Paul Grewal, Facebook vice president and deputy general counsel, announced Cambridge Analytica’s ban from Facebook last week, he said Facebook has a policy of running ongoing manual and automated checks to ensure apps are complying with Facebook policies.

“These include steps such as random audits of existing apps along with the regular and proactive monitoring of the fastest growing apps,” he said.

When: January 2018

What: Europe’s data protection law

Facebook’s response: Facebook complies

Facebook has also begun preparing for a strict European data protection law that takes effect in May. Called the General Data Protection Regulation, the law governs how companies store user information and requires them to disclose a breach within 72 hours.

In January, Facebook released a set of privacy principles explaining how users can take more control of their data.

One particularly notable principle, and one many will be watching to see whether Facebook upholds, is accountability.

"In addition to comprehensive privacy reviews, we put products through rigorous data security testing. We also meet with regulators, legislators and privacy experts around the world to get input on our data practices and policies," Facebook's team said in January.

When: February 2018

What: Belgian court tells Facebook to stop tracking people across the entire internet

Facebook’s response: Appeal the court’s ruling

In February, Facebook was ordered to stop collecting private information about Belgian users on third-party sites through the use of cookies. Facebook was also ordered to delete all data it collected illegally from Belgians, including those who aren't Facebook users but may have still landed on a Facebook page, or risk being fined up to 100 million euros.

Facebook said it has complied with European data protection laws and gives people the choice to opt out of data collection on third-party websites and applications. The company said it would appeal the ruling.

When: March 2018

What: Revealed that Facebook knew about massive data theft and did nothing

Facebook’s response: An apology tour and policy changes

The world finally got the answer to the question “Where’s Zuck?” on Wednesday when the Facebook CEO and co-founder broke his silence on the data harvesting allegations. In a statement posted on his Facebook wall, Zuckerberg avoided the word “sorry” but did accept partial blame for Facebook not doing enough to protect user privacy.

He laid out three steps Facebook will take now, starting with investigating all apps that were able to access user data before 2014, when the company began changing its permissions for developers. Facebook will also restrict the data apps can access, limiting them to a person’s name, photo and email address. Finally, Zuckerberg said Facebook will build an easy-to-use tool that lets everyone see which apps have access to their data and revoke that access.
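
As a rough illustration of what such a tool implies, here is a hypothetical sketch in Python. PermissionRegistry and ALLOWED_FIELDS are invented names for illustration, not Facebook's implementation: each app holds an explicit, limited set of fields that a user can inspect and revoke.

# A hypothetical sketch, not Facebook's implementation: model each
# app's access as an explicit, limited set of fields that a user can
# inspect ("which apps have access to my data?") and revoke.

ALLOWED_FIELDS = {"name", "photo", "email"}  # the limits described above

class PermissionRegistry:
    def __init__(self):
        self._grants = {}  # user_id -> {app_name: set of granted fields}

    def grant(self, user_id, app_name, requested_fields):
        # Anything beyond the allowed fields is silently dropped.
        granted = set(requested_fields) & ALLOWED_FIELDS
        self._grants.setdefault(user_id, {})[app_name] = granted

    def apps_with_access(self, user_id):
        # What an "easy tool" would surface: every app and its fields.
        return dict(self._grants.get(user_id, {}))

    def revoke(self, user_id, app_name):
        self._grants.get(user_id, {}).pop(app_name, None)

registry = PermissionRegistry()
registry.grant("alice", "quiz_app", ["name", "email", "friend_list"])
print(registry.apps_with_access("alice"))  # friend_list was filtered out
registry.revoke("alice", "quiz_app")
print(registry.apps_with_access("alice"))  # {}

The design point is that access is a whitelist the user can see and shrink, rather than a default the platform has to audit after the fact.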

"I've been working to understand exactly what happened and how to make sure this doesn't happen again,” he wrote. “The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it."