Vigilance is still king.
Democratic ideals require a few things to function on the network:
- We need identity.
- We need persistence.
- We need transparency.
- We need multiple people to care.
With these, everyone's voice has a chance against the voices of the motivated, the monied, and the already powerful.
Without vigilance, the story that remains will be only the story that was retold the most broadly, not necessarily what happened.
No matter how much volume a campaign generates in tweets or Facebook likes, if its messages aren't embedded within existing networks of information flow, the information will have a very hard time actually propagating. In the case of this hoax on Twitter, the malicious accounts are situated within a completely different network. Unless they attract follows from real accounts, they can scream as loud as they'd like and still no one will hear them. One way to bypass this is to get your topic to trend on Twitter, which increases visibility significantly.
Socially networked spaces make it increasingly difficult for a bot or malicious account to pass as a real person's account. A profile may look convincingly real, with a valid profile picture, human-readable posts, and interesting shared content, but it is hard to fake its location within the network: it is hard to get real users to follow it. We can see this clearly in the image above: the community of Russian bots is completely disconnected from every other user interacting with the hashtag.
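The "location within the network" signal described above can be sketched with a toy follow graph. In this made-up example (the account names are hypothetical, not from the actual hoax), organic users follow one another while the bot cluster only follows other bots, so a simple connected-components pass separates the two groups:

```python
from collections import deque

def connected_components(edges):
    """Group accounts into connected components of an undirected follow graph."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        comp, queue = set(), deque([node])
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

# Hypothetical data: organic users follow each other; bots only follow bots.
organic = [("alice", "bob"), ("bob", "carol"), ("carol", "alice")]
bots = [("bot1", "bot2"), ("bot2", "bot3"), ("bot3", "bot1")]

for comp in connected_components(organic + bots):
    print(sorted(comp))
```

However polished each bot profile looks in isolation, the bot cluster shows up as its own island with no edges into the organic community, which is exactly the pattern visible in the hashtag graph above.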
The same principle holds for Wikipedia, which is even harder to game because it is easy to identify accounts that are not really connected to the larger editing community. The more time you spend making relevant edits, and the more trusted your account becomes, the more authority you gain. One can't simply appear, make minor edits on three pages, and then put up a page detailing a terror attack without seeming suspicious.
As our information landscapes evolve, we'll see more examples of people abusing and gaming these systems to win visibility and attention for their chosen topics. Yet as more of our information propagation mechanisms are embedded within networks, it will become harder for malicious and automated accounts to operate in disguise. Whoever ran this hoax was extremely thorough, yet still unable to hack the network and embed the hoax within a pre-existing community of real users.
Last year I wrote a post here on Medium about how attention and reading are evolving. We are living in an era of unprecedented transparency, and interestingly many of these hacks are happening in broad daylight. Unless we measure and value attention (time spent reading, listening, or viewing) rather than raw click volume, we aren't going to build things that are actually of interest to humans. Take note of how bots are being used as part of these hacks.
And he finishes with:
My assumption has always been that increased transparency would result in a greater efficiency of information flow and that in turn would naturally bend towards facts. Put another way, in an open society, with efficient information flow, fact and truth will win out. It's impossible to measure this on the aggregate — and I believe that on the aggregate it is true — but it's clear there are local cases where this simply isn't so. Russia is a far more open society than it was 30 years ago. Or turn to the Middle East and take a read of Gilad's post about Israel, Gaza, War & Data. Or dig into how fake sites made up news about a Texas town under quarantine for Ebola to harvest clicks, or how "real" news sites make up news. Or Craig Silverman's piece on how a priest died and met God in the "48 minutes" before he came back to life. In all these cases transparency isn't succeeding at winnowing out bullshit. And mainstream media offers an implicit assist by assuming its role is to be the established view from nowhere.
Media critics like Jay Rosen use the term "view from nowhere" to describe how some media strive for a balance between objectivity and the reporting of facts, often erring on the side of reporting each side of an argument. They give each perspective equal weight, creating the false impression that both perspectives are equally valid since they received equal coverage. As Rosen outlines (in a debate with himself), mainstream media is loath to say "this is rubbish." They want to provide "perspective" rather than take a position. And in today's optimized world, they want to generate SEO and social traffic from both sides of an argument.
Match this phenomenon with the torrid pace of sharing before, or without, reading and you have a toxic mix that can be effectively gamed or hacked. In the post I wrote last summer I noted how a huge percentage of shared articles are never actually read: "Chartbeat looked at user behavior across 2 billion visits across the web over the course of a month and found that … a stunning 55% spent fewer than 15 seconds actively on a page." Transparency was meant to be the new objectivity. Yet if people aren't reading before they share, if mainstream media is balancing every perspective, and if headlines without branded context are now content, then media can and will be hacked, and perspective will be narrowed rather than broadened.
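The attention-versus-clicks distinction is easy to make concrete. A toy calculation (the per-visit durations below are invented for illustration, not Chartbeat's data) shows how a raw click count hides how little of the audience actually engaged:

```python
# Hypothetical per-visit engaged-time samples, in seconds.
engaged_seconds = [3, 8, 90, 12, 2, 45, 14, 200, 5, 10]

# Raw click volume counts every visit the same.
raw_clicks = len(engaged_seconds)

# An attention metric separates drive-by visits from actual reading.
bounced = sum(1 for s in engaged_seconds if s < 15)  # 7 of 10 visits here

print(f"raw clicks: {raw_clicks}")
print(f"share under 15s of active time: {bounced / raw_clicks:.0%}")
```

Two articles with identical click counts can have wildly different shares of sub-15-second visits, which is exactly the gap the Chartbeat finding points at.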
As Dmitry Tulchinskiy, bureau chief of Rossiya Segodnya, said in August: "What is propaganda? Propaganda is the tendentious presentation of facts… It does not mean lying." Tendentious: expressing or intending to promote a particular cause or point of view. With such a clear choice of words, I wish he had talked more about the methodology.
Please be present.