The Digital Maginot Line in the Information World War


There is a war happening. We are immersed in an evolving, ongoing conflict: an Information World War in which state actors, terrorists, and ideological extremists leverage the social infrastructure underpinning everyday life to sow discord and erode shared reality. The conflict is still being processed as a series of individual skirmishes – a collection of disparate, localized, truth-in-narrative problems – but these battles are connected. The campaigns are often perceived as organic online chaos driven by emergent, bottom-up amateur actions when a substantial amount is, in fact, helped along or instigated by systematic, top-down institutional and state actions. This is a kind of warm war; not the active, declared, open conflict of a hot war, but beyond the shadowboxing of a cold one, writes Renee DiResta for ribbonfarm.

We experience this as a state of continuous partial conflict. The theater opportunistically shifts as geopolitical events and cultural moments present themselves, but there is no sign of abatement — only tactical evolution as the digital platforms that serve as the battlespaces introduce small amounts of friction via new security checks and feature tweaks. As governments become increasingly aware of the problem, they each pursue responses tailored to the tactics of the last specific battle that manifested in their own digital territory; in the United States, for example, we remain focused on Election 2016 and its Russian bots. As a result, we are investing in a set of inappropriate and ineffective responses: a digital Maginot Line constructed on one part of the battlefield as a deterrent against one set of tactics, while new tactics manifest elsewhere in real time.

Like the original Maginot Line, this approach is about as effective a defense as a minor speed bump.

The Maginot Line was, in its time, believed to be a significant innovation in national defense; foreign leaders came from all over to tour it. It was a series of fortresses and railroads, resistant to all known forms of artillery, built using the latest technology. The purpose was both to halt an invasion force — keeping civilians safer — and to deliver early warning of an attack. The line was built by France along the entirety of its border with Germany. It extended into the border with Belgium, but stopped at the Forest of Ardennes because of the prevailing belief among experts that Ardennes was impenetrable.

Ardennes, it turned out, was not impenetrable. Moving through the forest would perhaps have been a futile effort for an army using the attrition warfare strategies prevalent in World War I, but it was vulnerable to new modes of warfare. As the French focused on building the Maginot Line, the Germans developed exactly such a new model of warfare — Blitzkrieg — and sent a million men and 1500 tanks through Ardennes (while deploying a small force to the Maginot Line as a decoy).

The Line, designed to very effectively fight the last war, had delivered a false sense of security.

Both the Maginot Line and the doctrine of Blitzkrieg emerged in the interwar years, a period that is perhaps also best characterized as a warm war, much like the period we’re living through today. But while the Maginot Line embodied the tactical thinking and technological assumptions of the last war, Blitzkrieg embodied the possibilities of new technologies and the next war.

Dramatis Personae

The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless).

In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.

There are state-sponsored trolls, destabilizing societies in some countries, and rendering all information channels except state media useless in others. They operate at the behest of rulers, often through military or intelligence divisions. Sometimes, as in the case of Duterte in the Philippines, these digital armies focus on interference in their own elections, using paid botnets and teams of sockpuppet personas to troll and harass opponents, or to amplify their owner’s candidacy. Other times, the trolls reach beyond their borders to manipulate politics elsewhere, as was the case with Brexit and the U.S. presidential election of 2016. Sometimes, as in Myanmar, elections aren’t the goal at all: there, military-run digital teams incited a genocide.

There are decentralized terrorists such as ISIS, who build high-visibility brands while asynchronously recruiting the like-minded. These digital recruiters blanket the internet with promises of glory and camaraderie via well-produced propaganda, then move the receptive into encrypted chat apps to continue the radicalization. The recruits pledge allegiance to the virtual caliphate in Facebook posts before driving trucks into pedestrian plazas IRL.

There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.

Combatants evolve with remarkable speed, because digital munitions are very close to free. In fact, because of the digital advertising ecosystem, information warfare may even turn a profit. There’s very little incentive not to try everything: this is a revolution that is being A/B tested. The most visible battlespaces are our online forums — Twitter, Facebook, and YouTube — but the activity is increasingly spreading to old-school direct action on the streets, in traditional media outlets, and behind closed doors, as state-sponsored trolls recruit and manipulate activists, launder narratives, and instigate protests.

One thing that all of these groups have in common is a shared disdain for Terms of Service; the rules that govern conduct and attempt to set norms in platform spaces are inconveniences to be disregarded, at best. Combatants actively and systematically circumvent these attempts at digital defenses, turning the very idea of them into a target of trolling: the norms are illegitimate, they claim. The rules are unfair, their very existence is censorship!

The combatants want to normalize the idea that the platforms shouldn’t be allowed to set rules of engagement because in the short term, it’s only the platforms that can.

Meanwhile, regular civilian users view these platforms as ordinary extensions of physical public and social spaces – the new public square, with a bit of a pollution problem. Academic leaders and technologists wonder if faster fact checking might solve the problem, and attempt to engage in good-faith debate about whether moderation is censorship. There’s a fundamental disconnect here, driven by underestimation and misinterpretation. The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.

The Nature of Information Wars

One of the reasons for this gap is a fundamental misreading of the end goal. Wars have been fought for centuries over a fairly uniform set of goals: territorial control, regime change, religious or cultural mores, and to consolidate or shift economic power.

Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble.

But ultimately the information war is about territory — just not the geographic kind.

In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.

Meanwhile, the new digital nation states – the social platforms that act as unregulated, privately-governed public squares for 2 billion citizens — have just begun to acknowledge that all of this is happening, and they’re struggling to find ways to manage it. After a year of Congressional hearings and relentless press exposés detailing everything from election interference to literal genocide, technology companies have begun to internalize that the information world war is very real, is causing real pain to many, and is having profound consequences.

This particular manifestation of ongoing conflict was something the social networks didn’t expect. Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle.

But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.

This is not to say that infrastructure defense isn’t critical; it is. The fact that infrastructure and network hacking is time-consuming, costly, and perceived as unambiguously hostile, however, means that a detente has evolved on that front, and pushed active conflict to the social layer. In the Cold War, a huge percentage of the defense budget was spent on maintaining deterrence capabilities that ensured that neither of the two primary adversaries would use nuclear weapons. Hot conflict still erupted on the periphery via proxy wars in Latin America and Vietnam. The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead. Deterrence imposes real costs on the adversary; a Maginot Line, by contrast, can be cheaply circumvented.

To ensure that our physical infrastructure and critical systems were defended, we empowered a myriad of government agencies to develop best-in-class offensive capabilities and prohibitive deterrence frameworks. No similar plan or whole-of-government strategy exists for influence operations. Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is comparatively little cost, in dollars or in consequences, to attacking a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure. As trust in media and leadership continues to erode (a goal of influence operations), one of these information campaigns — a more sophisticated version of the Internet Research Agency’s Columbian Chemicals Plant hoax, perhaps in a powder-keg country — could be used to provoke a very real response, transforming the warm war into a hot war.

Tactical Evolution

This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”

The Defense Department anticipated it, too: in 2011 DARPA launched a dedicated program (Social Media in Strategic Communications, SMISC) that sought to preempt and prepare for an online propaganda battle. The premise was ridiculed as an implausible threat, and the program was shut down in 2015. Now, both governments and tech platforms are scrambling for a response. The trouble is that much of this effort is piecemeal, aimed at the last set of tactics: it amounts to building a digital Maginot Line.

The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.

If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it. Some of the amplifier bots might get shut down – that really doesn’t matter either, they’re easy to replace.

Now, in 2018, we have reached the point at which most journalists and many world leaders understand the 2016 playbook. Media and activists alike have pressured platforms to shut down the worst loopholes. This has had some impact; for example, it’s become much harder to trigger a trending algorithm with bots. After getting pwned that way (and getting called on it) thousands of times, Twitter finally adapted, greyboxing and underweighting low-quality accounts. Facebook eliminated its trending news feature altogether. Since running spammy automated accounts is no longer a good use of resources, sophisticated operators have moved on to new tactics.
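To make that underweighting concrete, here is a minimal, hypothetical sketch of how a trending score might discount low-quality accounts; the quality signals, weights, and scoring formula are illustrative assumptions for this essay, not any platform’s actual ranking logic.

```python
# Hypothetical sketch: a trending score that discounts low-quality accounts.
# The quality signals and weights are illustrative assumptions, not any
# platform's real ranking logic.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int          # older accounts earn more weight
    followers: int
    verified_phone: bool   # basic anti-automation signal

@dataclass
class Engagement:
    account: Account
    action: str            # "share", "like", etc.

def account_quality(acct: Account) -> float:
    """Return a 0..1 quality weight; throwaway amplifier accounts score near 0."""
    score = min(acct.age_days / 365, 1.0) * 0.5      # account age
    score += min(acct.followers / 1000, 1.0) * 0.3   # audience size
    score += 0.2 if acct.verified_phone else 0.0     # verification signal
    return score

def trending_score(engagements: list) -> float:
    """Sum engagement weighted by account quality instead of raw counts."""
    action_weight = {"share": 2.0, "like": 1.0}
    return sum(action_weight.get(e.action, 0.5) * account_quality(e.account)
               for e in engagements)

# A thousand likes from day-old throwaway accounts now move the needle far less
# than ten shares from established users.
bots = [Engagement(Account(age_days=1, followers=3, verified_phone=False), "like")
        for _ in range(1000)]
humans = [Engagement(Account(age_days=900, followers=5000, verified_phone=True), "share")
          for _ in range(10)]
print(round(trending_score(bots), 1), trending_score(humans))   # ~2.3 vs 20.0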
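```

Under a scheme like this, flooding a hashtag with disposable accounts barely moves the trending score, which is precisely why spammy automation stopped being worth the operator’s effort.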

But although the bots are of increasingly minimal value, lawmakers at both the state and federal level are still expending effort thinking about regulating them. California lawmakers went so far as to pass a law that makes it illegal for bot account creators to misrepresent themselves — while it’s nice to imagine that making it illegal for trolls to create troll accounts is going to restore order to the information ecosystem, it won’t. It’s incredibly challenging to tailor a law to catch, or label, only malicious automated accounts. Twitter’s self-imposed product tweaks have already largely relegated automated bots to the tactical dustbin. Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
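To illustrate why automation-focused rules miss this shift, here is a hypothetical sketch of the kind of heuristic a “label the bots” law implies: it flags machine-like posting cadence, which a human-operated sockpuppet never exhibits. The fields and thresholds are assumptions invented for the example, not a real detection system.

```python
# Hypothetical sketch of the automation heuristic a "label the bots" rule implies.
# It flags machine-like posting cadence, so a human-operated sockpuppet posting at
# a normal pace passes untouched. Thresholds and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    posts_per_day: float
    active_hours_per_day: float          # hours of the day with any activity
    median_seconds_between_posts: float

def looks_automated(a: AccountActivity) -> bool:
    """Naive automation signal: volume and cadence no human operator sustains."""
    return (a.posts_per_day > 200
            or a.active_hours_per_day > 20
            or a.median_seconds_between_posts < 5)

amplifier_bot = AccountActivity(posts_per_day=1400, active_hours_per_day=24,
                                median_seconds_between_posts=2)
human_sockpuppet = AccountActivity(posts_per_day=35, active_hours_per_day=9,
                                   median_seconds_between_posts=600)

print(looks_automated(amplifier_bot))     # True  -- the account a bot law can label
print(looks_automated(human_sockpuppet))  # False -- the persona it cannot see
```

A persona run by a patient human operator sails past every one of these checks, which is exactly the gap the infiltration tactic exploits.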

Focusing on feature-level tactical fixes that simply shift the boundaries of what’s permissible on one platform is like building a digital Maginot Line; it’s a wasted effort, a reactive response to tactics from the last war. By the time lawmakers get around to passing legislation to neutralize a harmful feature, adversaries will have left it behind. Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.

Digital Security Theater

The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare. Creating better reporting tools, for example, is not actually a meaningful solution for mitigating literal incitements to genocide. Malign actors currently have safe harbor in closed communities; they can act with impunity so long as they don’t provoke the crowd into reporting them — they simply have to be smart enough to stay ahead of crowd-driven redress mechanisms. Meanwhile, technology companies retain plausible deniability because they added a new field to the “report abuse” button.

Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups.

We know this is coming, and yet we’re doing very little to get ahead of it. No one is responsible for getting ahead of it.

The key problem is this: platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors.

Platforms cannot continue to operate as if all users are basically the same; they have to develop constant awareness of how various combatant types will abuse the new features that they roll out, and build detection of combatant tactics into the technology they’re creating to police the problem. The regulators, meanwhile, have to avoid the temptation of quick wins on meaningless tactical bills (like the Bot Law) and wrestle instead with the longer-term problems of incentivizing the platforms to take on the worst offenders (oversight), and of developing a modern-day information operations doctrine.

Liberal Means, Illiberal Ends

What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation.

We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.

We seriously entertain conversations about whether or not bots have the right to free speech, privilege the privacy of fake people, and have Congressional hearings to assuage the wounded egos of YouTube personalities. More authoritarian regimes, by contrast, would simply turn off the internet. An admirable commitment to the principle of free speech in peacetime turns into a sucker position against adversarial psy-ops in wartime. We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.

The solution to this problem requires collective responsibility among military, intelligence, law enforcement, researchers, educators, and platforms. Creating a new and functional defensive framework requires cooperation.

It’s time to prioritize frameworks for multi-stakeholder threat information sharing and oversight. The government has the ability to create meaningful deterrence, to make it an unquestionably bad idea to interfere in American democracy and manipulate American citizens. It can revamp national defense doctrine to properly contextualize the threat of modern information operations, and create a whole-of-government approach that’s robust regardless of any new adversary, platform, or technology that emerges. And it can communicate threat intelligence to tech companies.

Technology platforms, meanwhile, bear much of the short-term responsibility. They’re the first line of defense against evolving tactics, and have full visibility into what’s happening in their corner of the battlespace. And, perhaps most importantly, they have the power to moderate as they see fit, and to set the terms of service. For a long time, the platforms pointed to “user rights” as a smokescreen to justify doing nothing. That time is over. They must recognize that they are battlespaces, and as such, must build the policing capabilities that limit the actions of malicious combatants while protecting the actual rights of their real civilian users.

Towards Digital Peace

Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.

If the warm war is allowed to continue as it has, there is a very real threat of descent into illegitimate leadership and fractured, paralyzed societies. If algorithmic amplification continues to privilege the propagandists most effective at gaming the system, if combatant persona accounts continue to harass civilian voices off of platforms, and if hostile state intelligence services remain able to recruit millions of Americans into fake “communities”, the norms that have traditionally protected democratic societies will fail.

We don’t have time to waste on digital security theater. In the two years since Election 2016, we’ve all come to agree that something is wrong on the internet. There is momentum and energy to do something, but the complexity of the problem and the fact that it intersects with other thorny issues of internet governance (privacy, monopoly, expression, among others) means that we’re stuck in a state of paralysis, unable to address disinformation in a meaningful way. Instead, both regulators and the platforms throw up low-level roadblocks. This is what a digital Maginot Line looks like.

Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets. When it is all over, we’ll look back on this era as being as consequential in reshaping the future of the United States and the world as World War II.
