Author: Arun Vishwanath

Why do so many people fall for fake profiles online?

The first step in conducting online propaganda efforts and misinformation campaigns is almost always a fake social media profile. Phony profiles for nonexistent people worm their way into the social networks of real people, where they can spread their falsehoods. But neither social media companies nor technological innovations offer reliable ways to identify and remove social media profiles that don’t represent actual authentic people.

It might sound positive that over six months in late 2017 and early 2018, Facebook detected and suspended some 1.3 billion fake accounts. But an estimated 3 to 4 percent of the accounts that remain, or approximately 66 million to 88 million profiles, are also fake but haven’t yet been detected. Likewise, estimates are that 9 to 15 percent of Twitter’s 336 million accounts are fake.

Fake profiles aren’t just on Facebook and Twitter, and they’re not only targeting people in the U.S. In December 2017, German intelligence officials warned that Chinese agents were using fake LinkedIn profiles to target more than 10,000 German government employees. And in mid-August, the Israeli military reported that Hamas was using fake profiles on Facebook, Instagram and WhatsApp to entrap Israeli soldiers into downloading malicious software.

Although social media companies have begun hiring more people and using artificial intelligence to detect fake profiles, that won’t be enough to review every profile in time to stop their misuse. As my research explores, the problem isn’t actually that people – and algorithms – create fake profiles online. What’s really wrong is that other people fall for them.

My research into why so many users have trouble spotting fake profiles has identified some ways people could get better at identifying phony accounts – and highlights some places technology companies could help.

People fall for fake profiles

To understand social media users’ thought processes, I created fake profiles on Facebook and sent out friend requests to 141 students at a large university. Each of the fake profiles varied in some way – such as having many or few fake friends, or whether there was a profile photo. The idea was to figure out which type of profile was most successful in getting accepted as a connection by real users – and then to survey the hoodwinked people to find out how it happened.

I found that only 30 percent of the targeted people rejected the request from a fake person. When surveyed two weeks later, 52 percent of users were still considering approving the request. Nearly one in five – 18 percent – had accepted the request right away. Of those who accepted it, 15 percent had responded to inquiries from the fake profile with personal information such as their home address, their student identification number, and their availability for a part-time internship. Another 40 percent of them were considering revealing private data.

But why?

When I interviewed the real people my fake profiles had targeted, the most important thing I found was that users fundamentally believe there is a person behind each profile. People told me they had thought the profile belonged to someone they knew, or possibly someone a friend knew. Not one person ever suspected the profile was a complete fabrication, expressly created to deceive them. Mistakenly thinking each friend request has come from a real person may cause people to accept friend requests simply to be polite and not hurt someone else’s feelings – even if they’re not sure they know the person.

In addition, almost all social media users decide whether to accept a connection based on a few key elements in the requester’s profile – chiefly how many friends the person has and how many mutual connections there are. I found that people who already have many connections are even less discerning, approving almost every request that comes in. So even a brand-new profile nets some victims. And with every new connection, the fake profile appears more realistic and has more mutual friends with others. This cascade of victims is how fake profiles acquire legitimacy and become widespread.

The spread can be fast because most social media sites are designed to keep users coming back, habitually checking notifications and responding immediately to connection requests. That tendency is even more pronounced on smartphones – which may explain why users accessing social media on smartphones are significantly more likely to accept fake profile requests than desktop or laptop computer users.

Illusions of safety

And users may think they’re safer than they actually are, wrongly assuming that a platform’s privacy settings will protect them from fake profiles. For instance, many users told me they believe that Facebook’s controls for granting differing access to friends versus others also protect them from fakers. Likewise, many LinkedIn users also told me they believe that because they post only professional information, the potential consequences for accepting rogue connections on it are limited.

But that’s a flawed assumption: Hackers can use any information gleaned from any platform. For instance, simply knowing on LinkedIn that someone is working at some business helps them craft emails to the person or others at the company. Furthermore, users who carelessly accept requests assuming their privacy controls protect them imperil other connections who haven’t set their controls as high.

Seeking solutions

Using social media safely means learning how to spot fake profiles and use privacy settings properly. There are numerous online sources for advice – including platforms’ own help pages. But too often it’s left to users to inform themselves, usually after they’ve already become victims of a social media scam – which always begins with accepting a fake request.

Adults should learn – and teach children – how to examine connection requests carefully in order to protect their devices, profiles and posts from prying eyes, and themselves from being maliciously manipulated. That includes reviewing connection requests during distraction-free periods of the day and using a computer rather than a smartphone to check out potential connections. It also involves identifying which of their actual friends tend to accept almost every friend request from anyone, making them weak links in the social network.

These are places social media platform companies can help. They’re already creating mechanisms to track app usage and to pause notifications, helping people avoid being inundated or needing to constantly react. That’s a good start – but they could do more.

For instance, social media sites could show users indicators of how many of their connections are inactive for long periods, helping people purge their friend networks from time to time. They could also show which connections have suddenly acquired large numbers of friends, and which ones accept unusually high percentages of friend requests.

Social media companies need to do more to help users identify and report potentially fake profiles, augmenting their own staff and automated efforts. Social media sites also need to communicate with each other. Many fake profiles are reused across different social networks. But if Facebook blocks a faker, Twitter may not. When one site blocks a profile, it should send key information – such as the profile’s name and email address – to other platforms so they can investigate and potentially block the fraud there too.

[A version of this article appeared on The Conversation]

Stopping the Russians from influencing the midterms

The continued prosecution of “All the President’s Men” does little to stop the Russians from attempting to influence America’s upcoming midterm elections. And reports from Missouri to California suggest they are already looking for our cyber weaknesses to exploit.

Chief among these: spear phishing—emails containing hyperlinks to fake websites—that the Russians used to hack into the DNC emails and set in motion their 2016 influence campaign.

After two years of congressional hearings, indictments, and investigations, spear phishing not only continues to be the most common attack used by hackers, but the Russians are still trying to use it against us.

This is because in the ensuing time, spear phishing has become even more virulent, thanks to the availability of sophisticated malware, some of it stolen from intelligence agencies; troves of people’s personal information from previous breaches; and ongoing developments in machine learning that can deep-dive into this data and craft highly effective attacks.

Just last week, Microsoft blocked six fake websites that were likely to be used for spear phishing the US Senate by the same Russian intelligence unit responsible for the 2016 DNC hack.

But the Internet is vast, and there are many more fundamental weaknesses still available to exploit.

Take the URLs with which we identify websites. Thanks to Internationalized Domain Names (IDNs) that allow websites to be registered in languages other than English, many fake websites used for spear phishing are registered using homoglyphs – characters from other languages that look like English characters. For instance, a fake domain could be registered by replacing the English “a” or “o” with their Cyrillic equivalents. Such URLs are hard for people to discern visually, and even email scanning programs – trained to flag words like “password” that are common in phishing emails, like the one the Russians used in 2016 to hack into John Podesta’s emails – can be tricked. And while many browsers prevent URLs with homoglyphs from being displayed, some, like Firefox, still expect users to alter their browser settings for protection.
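The core of the homoglyph trick can be shown in a few lines of code. The following Python sketch – an illustration, not any mail scanner’s actual logic – flags domain labels that mix Latin letters with characters from another script:

```python
import unicodedata

def mixed_script_labels(domain):
    """Return domain labels that mix scripts (e.g. Cyrillic 'а' posing
    as Latin 'a') -- a common homoglyph red flag."""
    suspicious = []
    for label in domain.split("."):
        scripts = set()
        for ch in label:
            if ch.isalpha():
                # Unicode character names begin with the script name,
                # e.g. 'LATIN SMALL LETTER A', 'CYRILLIC SMALL LETTER A'
                scripts.add(unicodedata.name(ch, "UNKNOWN").split()[0])
        if len(scripts) > 1:
            suspicious.append(label)
    return suspicious
```

Real browsers and scanners apply far more nuanced policies, such as whole-script confusable checks and per-registry rules, but this captures why “аpple.com” spelled with a Cyrillic “а” is not “apple.com”.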

Making things worse is the proliferation of Certification Authorities (CAs), the organizations issuing the digital certificates that make the lock icon and HTTPS appear next to a website’s name in browsers. While users are taught to trust these symbols, an estimated one in four phishing websites actually has an HTTPS certificate. This is because some CAs have been hacked, meaning there are many rogue certificates out there, while others have doled out free certificates to just about anyone. For instance, one CA last year issued certificates to 15,000 websites with names containing some combination of the word PayPal – all for spear phishing.
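The PayPal episode comes down to naive matching: a certificate only vouches that you are talking to the domain named in it, not that the domain is the brand it resembles. Here is a minimal sketch of that distinction – the brand name and list of legitimate domains are illustrative assumptions, not a real allowlist:

```python
# Domains the brand actually controls -- illustrative, not exhaustive
LEGIT_PAYPAL = {"paypal.com", "www.paypal.com"}

def lookalike_risk(domain, brand="paypal", legit=LEGIT_PAYPAL):
    """True when a domain merely *contains* the brand name without being
    one of the brand's real domains -- the pattern behind the thousands
    of 'PayPal' certificates issued to phishing sites."""
    d = domain.lower().rstrip(".")
    return brand in d and d not in legit

# lookalike_risk("www.paypal.com")                  -> False
# lookalike_risk("paypal.com.secure-login.example") -> True
```

A CA that checked only “can this applicant prove control of this domain?” would happily certify the second domain, which is exactly what happened.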

Besides these, the problem of phony social media profiles, which the Russians used in 2016 for phishing, trolling and spreading fake news, remains intractable. Just last week, the Israel Defense Forces (IDF) reported a social media phishing campaign by Hamas, luring its troops to download malware using fake social media profiles on Facebook, Instagram, and WhatsApp. Also last week, Facebook, followed by Twitter, blocked profiles linked to Iranian and Russian operatives being used for spreading misinformation.

These attacks, however, reveal a critical weakness of influence campaigns: by design, they use overlapping profiles across multiple platforms. Yet, today, social media organizations internally police their networks and keep information in their own “walled gardens.”

A better solution, therefore, would be to host data on suspect profiles and pages in a unified, open-source repository – one that accepts inputs from other media organizations, security organizations, even users who find things awry. Such an approach would help detect and track coordinated social media influence campaigns – which would be of enormous value to law enforcement and to media organizations big and small, many of which get targeted by the same profiles.
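As a sketch of what such a repository might look like – the schema and names here are hypothetical, not any existing system’s – even a minimal shared log makes cross-platform reuse of a persona visible:

```python
from dataclasses import dataclass

@dataclass
class SuspectProfileReport:
    """One entry in a hypothetical shared repository; field names are
    illustrative assumptions, not a real schema."""
    platform: str        # e.g. "facebook", "twitter"
    handle: str          # profile name as it appears on that platform
    reported_by: str     # a platform, a security firm, or "user"
    reason: str

class SuspectProfileRepository:
    """Minimal in-memory sketch of a unified, open-submission log."""
    def __init__(self):
        self.reports = []

    def submit(self, report):
        self.reports.append(report)

    def cross_platform_clusters(self):
        """Group reports by handle; a persona reported on two or more
        platforms hints at a coordinated campaign."""
        clusters = {}
        for r in self.reports:
            clusters.setdefault(r.handle, set()).add(r.platform)
        return {h: p for h, p in clusters.items() if len(p) > 1}
```

In practice, matching would also rely on email addresses, profile photos and posting patterns rather than exact handles, but the point stands: signals that are useless inside one walled garden become decisive when pooled.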

A platform for this could be the Certificate Transparency framework, in which digital certificates are openly logged and verified, and which has been adopted by many popular browsers and operating systems. For now, this framework only audits digital certificates, but it could be expanded to encompass domain name auditing and social media pages.

Finally, we must improve user education. Most users know little about homoglyphs and even less about how to change their browser settings to guard against them. Furthermore, many users, after being repeatedly trained to look for HTTPS icons on websites, have come to implicitly trust them. Many even mistake such symbols to mean that a website is legitimate. Because even an encrypted site could be fraudulent, users have to be taught to be cautious, and to assess website factors ranging from the spelling used in the domain name, to the quality of information on the website, to its digital certificate and the CA that issued it. Such initiatives must be complemented with better, more uniform Internet browser design, so users do not have to tinker with settings to protect themselves from being phished.

Achieving all this requires leadership, but the White House, which ordinarily would be best positioned to address these problems, recently fired its cybersecurity czar and eliminated the role. And when, according to the GAO, federal agencies have yet to address over a third of its 3,000 cybersecurity recommendations, the President instead talks about developing a Space Force. Last we knew, the Martians hadn’t landed, but the Russians sure are probing our computer systems.


*A version of this post was published in CNN

To reward, or not to reward

In late 2014, in the aftermath of the Sony Pictures Entertainment breach, I advocated the development of a cyber breach reporting portal where individuals could report suspected cyber incidents. Such a system, I argued, would work as an early warning system, so IT could be made aware of an attack before it became widespread; it would also work as a centralized system for remediation, so affected victims could seek help.

Since then many organizations all over the world have developed such portals for their employees to report suspected breaches. These range from web reporting forms and email in-boxes to 24-hour help-desks where employees can find remedial support.

While there is little direct research on how well these portals work, extant reports point to a rather low utilization rate. For instance, Verizon’s 2018 Data Breach Investigations Report (DBIR) found that among 10,000 employees across different organizations who were subjects of various test phishing campaigns, fewer than 17% reported them to IT. My own experience advising firms on their user vulnerability reduction initiatives has found similarly low reporting rates.

To counter this, many CSOs have resorted to incentives and punishments to enhance employee reporting of suspect emails and cyber activities. But the question—one that I am often posed when advising organizations on IT security—is which of these really work?

First, let’s begin with punishments. We know from a century of research on human motivation that punishments tend to be salient but not necessarily effective in motivating people the right way. That means people remember threats, but remembering doesn’t help, especially when the task at hand requires mental effort.

For instance, when the former head of the NSA, Admiral Rogers, famously remarked that individuals who fall for a phishing test should be court-martialed, it sure got noticed and widely reported. But such actions lead to fear, anxiety, and worry, not more thoughtful action. This is precisely why phishing emails have warnings and threats in them – because when people focus on the threats, they end up ignoring the rest of the information in the email that could reveal the deception.

In surveys I have conducted in organizations that use punishments to foster reporting, the vast majority of users reported changing how they use email: many were avoiding opening email at certain times of the day, were waiting for people to resend their original email requests, or, in some cases, were forwarding work emails to their non-IT-authorized mobile devices and email accounts.

These may be effective ways of avoiding getting caught in a phishing test, but they are not necessarily good for organizational productivity and cybersecurity.

On the flip side are rewards for reporting phishing emails. Some organizations have used monetary compensation, others have experimented with token rewards, and still others with mere recognition of the employee who reported. Which of these works best? The surprising answer: recognition.

The reasons are as follows. First, monetary compensation puts a dollar amount on cybercrime reporting – a value that is difficult to determine. That is, do we estimate the value of a report based on when the employee sent it in (immediately after the attack versus much later), the accuracy of the report, or the size of the breach it prevented? Each estimation process has its own pitfalls, and they all focus on the report rather than on the employee doing the reporting or what it means for them to actually perform the reporting function.

Monetary incentives have another problem: they turn reporting into a game. This changes employees’ motivation: rather than becoming more careful about scrutinizing incoming emails – the indirect purpose of such reporting – they learn that more reporting increases their chances of winning a prize.

Consequently, many employees report anything they find troubling, sometimes even emails they know are simply spam. On the one hand, this significantly increases the load on IT helpdesks and decreases their chances of catching a real phishing email. On the other hand, too many unnecessary reports decrease the odds of winning a reward, which over time reduces employees’ motivation for reporting.

Compared to this, social rewards – public praise, recognition and appreciation through announcements acknowledging users who have reported suspicious emails, along with communication that shows the value of their reporting – work better than all other approaches.

This is because monetary incentives appeal to employees’ base needs, which are already met through their jobs, while social recognition appeals to higher-order needs – what the famous motivational psychologist Abraham Maslow termed “esteem needs”: the human need for achievement, for respect, for prestige, and for a sense of accomplishment.

Being publicly recognized for reporting suspect emails makes employees feel valued for their effort, which on the face of it is an act of altruism that has little direct relationship to their workflow or productivity. Effectively communicating the value of their reporting thus focuses attention on the employees doing the reporting.

This has enduring effects, both rewarding the employee being feted and motivating others to follow their lead – which altogether leads to a culture of cyber safety within the organization.

As email-based attacks targeting organizations become more sophisticated, employees are the first, and at times the only, line of defense against them. Effectively harnessing the power of employees through the use of appropriate strategies for incentivizing reporting is the difference between organizations that are reacting to cyber-attacks and those that are proactively stopping them.


* A version of this post appeared in InfoSecurity Magazine

When AI writes your news, what happens to democracy?

In the not-so-distant future, we will be presented with the version of the news we wish to read — not the news that some reporter, columnist or editorial board decides we need to read. And it will be entirely written by artificial intelligence (AI).

Think this is science fiction? Think again. Many of us probably don’t realize that AI programs were authoring many parts of the summer Olympics coverage, and also providing readers with up-to-date reports, personalized based on the reader’s location, on nearly 500 House, Senate, and gubernatorial races during the last election cycle.

And those news feeds on Facebook and Google News that the majority of people trust more than the original news sources – well, those, too, employ machine-learning algorithms to match us with news and ads. And we saw how easily those were co-opted by the Russians to influence our last presidential election.

Follow the natural progression of these developments, and it leads to an ominous future in which AI entirely writes and presents the news exactly the way each of us would like to read it — forever altering democracy as we know it.

In this future, journalists might still report on events, but it will be AI that will take these inputs, inject data from its vast historical repositories and formulate a multitude of different themes, each making different arguments and coming to different conclusions. Then, using data about readers’ interests learned from their social media, online shopping and browsing history, AI will present them with the version of the news they would like to read.

For example, for a reader with strong views on the environment, news of heavy flooding in some place of interest might be presented from a global warming standpoint, with conclusions about how human activity has hurt the environment. For another reader skeptical of climate science, the same story might be presented with data and conclusions questioning the validity of weather science.

Stories might be presented in brief, for readers who like to peruse the news, or in depth, for those who like to delve into details. They may even have actionable links to online stores selling essential supplies for those in the flood zone, or social media links connecting readers with others who share their interests. In essence, it will be the perfect AI-created echo chamber – where each person will be an audience of one, connected to others who are always agreeable.

This hyper-personalized, AI-driven reality is closer than people realize – and it goes beyond the Olympic or election coverage I mentioned. After Jeff Bezos’ purchase of the Washington Post, the paper introduced Heliograf, an AI-based writing tool that, given predefined themes and phrases, can write complete articles. This software, while still far from autonomous, has already authored about 850 articles that have cumulatively garnered half a million page views.

Others like The New York Times, the Associated Press and many financial organizations are also testing and utilizing similar software for everything from local news reporting to financial report writing. Just consider this AP story on a Maryland-based company’s third-quarter results, written by AI.

Furthermore, thanks to Google, Facebook, Amazon and other online services tracking virtually every aspect of people’s online and even offline behaviors, we already have deep data on almost every American’s personal opinions and preferences — which these companies already use to target and position advertisements. All that’s missing is for one media organization to combine these processes.

And there is nothing to stop a company, especially one such as Amazon or even Apple, from doing it. After all, it would create the perfectly “sticky website,” where people, content and products are precisely matched — an advertiser’s dream come true.

Besides, there is no policy or law that prohibits any of this – none whatsoever prescribing that the news must be authored by people. And news consumers would love such personalized news. After all, close to a majority of news consumers, both right- and left-leaning, not only prefer to hear political views in line with their own thinking on social media, but they also tend to block or defriend people who disagree with their avowed political views.

The majority of news consumers also “happen upon the news” online rather passively, often while doing something else. They usually follow the same few news sources rather than looking for another source to reconfirm what they are presented, let alone get a different perspective.

So the audience preference for an AI-driven, single news website that targets them with hyper-personalized content is already here, policies prohibiting it are absent and the technology for it is almost ready. In other words, this media future is primed for disruption.

A win-win for marketers, advertisers and readers — but a giant loss for democracy as we know it, because it will take away the core of what makes democracies successful: well-informed citizens, who form opinions not by simply reading articles they agree with, but by examining that which they don’t agree with — and then finding common ground.

However, we can save this critical part of our democracy through forward-thinking policy, media self-policing and a bit of introspection.

More specifically, first, when it comes to communication technology, policymaking tends to be highly reactive. Right from the days of the Radio Act of 1912, which was a reaction to the sinking of the Titanic and eventually led to the creation of the Federal Communications Commission, to the many congressional hearings after the Russian interference in our elections, we have dealt with the media reactively. What we need instead is to proactively address what we know is more than likely.

The problem with AI is not only that it will do things faster or better than human journalists, but it is also that we will trust it implicitly. We already see this trend with court systems across the nation using AI-based programs for deciding what punishment is meted out to people convicted of crimes without fully examining the underlying computational algorithms governing the programs.

Likewise, the AI-generated news of the future will likely be considered more trustworthy, unless policies are enacted that limit the extent to which algorithms can access audience profile data — thereby reducing the ability for the media to target each reader with their own version of “alternative facts.”

Second, the news media needs to act responsibly and self-police. With the many articles already being generated and matched to readers by AI, news sites need to start providing indicators of how such content matching was done, what parts of the content was authored by AI and, in the future, how many different versions of the story were created. This would help readers make up their own minds about the credibility of what they read.

Finally, the reading public has the largest responsibility. What our recent presidential election has taught us is that it’s not simply the availability of the media, the presence of competing content or even its accessibility. It is human agency. In other words, we the people have to actively seek information – some that is agreeable, a lot that is not; some that is online, and some that comes from discussions with people who disagree with us – and form our informed views. And that’s something tomorrow’s AI could well take away from us.



*A version of this post appeared in CNN

With AI we may have created ourselves out of existence

Amazon Go, the online retailer’s first completely automated store, debuted in Seattle last week. Using a bevy of smart cameras, deep machine learning and artificial intelligence (AI) algorithms, the store makes it possible for shoppers to simply pick up the products they like and go, with their accounts being automatically charged for the products — completely eliminating the need for cashiers and checkout lines. Though staff members still stock the shelves, they too will likely soon be replaced by robots.

This is revolutionary and will likely be how all stores will operate in the near future. Stores won’t have to invest in employees — salaries, training, overtime, health care. Customers will like it, too. No more standing in boring check-out lines, interacting with indifferent staff.

What we are witnessing is surely the future of the retail industry, but there is also a downside that needs our attention. Cashiers and retail workers are two of the most common occupations in the US, employing roughly 8 million people, many of whom tend to be younger, white women making modest yearly incomes in the $20,000-$25,000 range.

Most of these jobs require little formal education for entry, and so the sector supports many individuals with relatively low skills and education who are likely to find it particularly hard to quickly retool and fit a different employment sector. Most of them will likely find themselves jobless.

Of course, this isn’t the only sector that AI will decimate. Driverless trucks are already being tested on major highways. They, too, have many advantages over today’s long haulers: they can run 24/7 and never get fatigued; no need for mandatory breaks; no more wasted fuel idling overnight.

Truck drivers account for a third of the cost of this $700 billion industry, and there are over 1 million mostly middle-aged, white male truckers in the US. Their jobs will be rendered obsolete. And these numbers will likely be even higher once driverless cars replace all taxi and local delivery drivers.

Such fears of computing-led obsolescence aren’t new. In 1964, just a few years after IBM had launched the first solid-state mainframe computer, “The Twilight Zone” ran an episode titled “The Brain Center at Whipple’s,” in which Mr. Whipple, the owner of a vast manufacturing corporation, replaced all his factory workers with a room-sized computing machine.

Mr. Whipple’s economic justification for his “X109B14 modified transistorized totally automated machine” could just as well be applied to AI: “It costs 2 cents an hour to run … it lasts indefinitely … it gets no wrinkles, no arthritis, no blocked arteries … two of them replace 114 men who take no coffee breaks, no sick leaves, no vacations with pay.” In the show, Whipple’s machine quickly replaced everyone from the plant’s workers to its foremen to all the secretaries.

The story was prescient and many of its fictionalized fears in time came true: Most of the large manufacturing plants were indeed shut down; secretaries and typists mostly became obsolete; and the jobs that created the American middle-class were all eventually outsourced. Much of this computer-driven automation replaced low-skilled easily routinizable functions.

But AI is different. It utilizes deep-learning algorithms and acquires skills, so it can routinize many complex functions.

Take journalism – a task that has always been performed by humans. Following Jeff Bezos’ purchase of The Washington Post, the paper tested Heliograf, a new AI-based writing program that automates report-writing using predefined narrative templates and phrases. From the Olympics to the elections, the software has already auto-published close to 1,000 articles.

And given its ability to churn through virtually any amount of data and spit out endless reports instantaneously, AI newsbots are way better than humans. It’s no surprise then that USA Today, Reuters, BuzzFeed and growing numbers of financial organizations are already employing AI for tasks ranging from reporting to data authentication.

In the near future, AI will replace many other such so-called highly skilled professions, from chefs to pilots and surgeons. Going back to school, learning new skills and retooling might not be an option, because it would be impossible to learn as quickly, provide the kind of nuance that comes from distilling terabytes of information, or outpace AI. And besides, in the time it takes a human to acquire a new skill, AI might have learned to replace it.

If these trends materialize — and some might not — we are looking at a seismic shift in the American economy. If the last election was a push back against globalization, imagine what a rage against AI will look like.

The solution, of course, is not to stop the march of progress but to prepare for it with forward-thinking investments in education, human capital and public policy. While Washington is busy cleaning up yesterday’s self-inflicted mess, this is tomorrow’s crisis that requires attention today.

At the end of the Mr. Whipple skit, he, too, was rendered obsolete, replaced by a robot. Rod Serling's ominous closing message: "Man becomes clever instead of becoming wise; he becomes inventive and not thoughtful; and sometimes, as in the case of Mr. Whipple, he can create himself right out of existence." One hopes that this isn't what AI does to us.


*A version of this post appeared on CNN

It’s not just fake news, Facebook, or Twitter! It’s the Internet’s Dark Triad we should be worried about.

Thanks to the ongoing Senate hearings on election hacking, we are learning how the Russians interfered with our presidential election by sponsoring numerous fake social media accounts and even placing advertisements on Facebook, YouTube and Google that targeted people interested in divisive issues.

But while policymakers are rightfully angered by these platforms' inability to curb such attacks proactively, it is important to recognize that Facebook, Google and even some web hosting services were mere vehicles, providing a convenient platform for a much larger propaganda process made possible by the Internet's Dark Triad: spearphishing, trolling and fake news.

It is this trifecta that Vladimir Putin used to interfere with our elections, as well as with elections in Germany and other parts of Europe. And it is this triad that we need to understand and stop.

At the tip of this triad is spearphishing: malware-laden email attachments and hyperlinks that, when clicked, give the hacker backdoor access to an individual's computers and networks. Every major attack — from the Chinese military-led theft of our F-35 fighter jet blueprints, to the infamous North Korea-led hack of Sony Pictures, to the Russian hacks of the DNC computers during our elections — employed spearphishing. In fact, spearphishing attacks are so easy to craft that the Russians enlisted the help of a 15-year-old Canadian-Kazakh citizen to conduct them.

Anchoring the other end of the triad are organized trolling campaigns. What started with PR firms attempting to "manage" consumer reviews was co-opted by nation-states to hijack online conversations by flooding message boards with vitriolic comments and counter-narratives. Confessions from "professional" trolls in Russia and investigative reports by The New York Times' Adrian Chen show how Russia's state-sponsored Internet Research Agency orchestrates campaigns using phony social media profiles, interconnected networks of fake friends, and even fake LiveJournal blogs for those profiles.

The final dark anchor is "fake news" — the latest form of online propaganda, which distorts information and spreads contrarian, even speculative views as real news. Enabling this phenomenon are some of the same phony social media profiles used for trolling, along with pseudo "news" websites bearing seemingly credible names like The Conservative Frontline or The American Patriots. These sites maintain a presence on multiple social media channels, many directly linked to Russian propaganda outlets, providing the critical mass for a story to get noticed.

And as the stories are discussed by various groups, the lies get crowdsourced: arguments are strengthened, connections created, facts added. Quickly, the fake news morphs into another, more sensational story, spinning further news cycles. Some fake news and trolling campaigns link back to phishing websites, leading to still more breaches and even more fake news.

This was how the Russians influenced our elections. By hacking DNC emails, leaking them via WikiLeaks, and then seeding divisive political arguments, counter-narratives and conspiracy theories through fake news websites and trolling campaigns — such as pointing to the 2016 murder of DNC staffer Seth Rich as evidence of his involvement in the hack — the Russians led many among us to question our democratic processes, and that doubt ultimately influenced the elections.

Unfortunately, our collective focus today is on organizations like Facebook and Twitter, which have reacted by creating task forces that curate internal lists of fake profiles and identify fake news feeds. Others, like the BBC, have likewise developed internal task forces that curate lists of fake news stories and sites. But these initiatives address only small parts of the triad — its trees — and do nothing to stop the forest that is the triad from propagating through a different platform during the next election cycle.

What we need instead is a mechanism to stop the triad completely.

And this can be done, because the triad has an Achilles' heel: it is highly coordinated. Attacks usually reuse the same finite set of social media profiles, web domains, fake news websites, email accounts and even malware. In fact, the reuse of email profiles and malware signatures was the basis for identifying Russian intelligence as the source of the DNC hack.

We can thus stop the triad if we develop mechanisms to track such coordination. But this will require a unification of efforts on our end, not the diversified approaches currently in place.

This must begin with the development of a centralized breach reporting system where individuals and organizations can report suspected spearphishing attacks and get remedial help. Such a system could help track attacks and serve as an early warning system for other organizations, which could then take effective countermeasures to stop further breaches.

A similar mechanism could help stop organized trolling and the propagation of fake news. Rather than the internal policing efforts now being conducted covertly within social media organizations, what we need is a centralized repository — a WikiFacts page of sorts — where fake profiles, news stories and suspicious data from different media websites are continuously reported, flagged and publicly displayed. This information can be populated by social media organizations and search engines, as well as by user reports. Such a system would directly benefit the general public, who could report and review suspicious information; it would also help smaller media organizations, which could use this intelligence directly to forestall misuse of their platforms.

The Dark Triad is a dystopian version of the game of telephone, played online using hacked information and fake news. Ironically, the origins of this game can be traced to an old parlor game in which players passed along a story that grew increasingly distorted with each retelling — a game called Russian Scandal. Only this scandal is for real.