Hyperwar. Great. They went and made it sound cool.

Yes, the Empire of the United States has come up with the most darling meme for AI-enabled war. It’s just perfect.

It has a Greek prefix, so it sounds smart.

It’s exciting, masculine-sounding, easy to growl through gritted teeth. No hard E’s or soft I’s, so you don’t involuntarily smile while saying it. Unlike “cheese”, say, or “election hacking”. And there’s no tech words, like “artificial intelligence” or “augmented reality”, so you don’t sound like you’re getting your uniformed ass kicked by nerds with computers.

It rhymes with “cyberwar”, so it fits easily into a niche already built in your mind. It’s easy to say it out loud, to hear it in your head when reading. Hyperwar.

It’s sort of meaningless, so it doesn’t have to be understood by the logical mind, and it’s relatively uncontaminated with connotations. The meme has been used before, it’s true: previously it referred to the multifaceted nature of WWII, and the high-speed decision-making environment it produced. Now that meaning will be eclipsed by the new, buzzy definition, because the Empire’s military needs a soundbite-friendly explanation for their upcoming pivot to robotics and AI.

Of course, it’s a terrible idea to build a world-spanning army of killer robots. We can all see that, right?

No matter who “owns” them, these autonomous lethal weapons are a threat. Not only to individual human lives, but to the existence of our species, and even the fate of life on Earth.

The ways in which weaponized AI could go wrong are infinite, and the possible worlds in which it could help humanity are very few indeed.

A report published by the US Naval Institute discusses hyperwar in the context of drone swarms and AI-enabled combat centers. But it’s important to keep in mind that hyperwar applies to civilians as well.

From the Navy report:

In military terms, hyperwar may be redefined as a type of conflict where human decision making is almost entirely absent from the observe-orient-decide-act (OODA) loop. As a consequence, the time associated with an OODA cycle will be reduced to near-instantaneous responses.

Machines think faster than humans. It’s their main advantage, so far. The current deep-learning revolution follows the mass production of highly specialized chips (GPUs) that process millions of small judgments simultaneously. This leverages the lightspeed processing power of computers to find signals in massive datasets that would take aeons for humans to sift through.

Think of the way your phone can direct your commute. It absorbs the data of every connected device in the city, assesses how fast they’re going compared to the speed limit and the average traffic for the time of day, then analyses the many possible paths that would get you there and finds the fastest, shortest route. And tells it to you, out loud, step by step.

All that, in the same amount of time it takes for you to react to a sudden red light.
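That route-finding trick is, at its core, a shortest-path search over a weighted graph of roads. A minimal sketch using Dijkstra's algorithm (the toy city graph and travel times below are invented for illustration, not from any real mapping service):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: find the fastest route through a road graph.

    graph maps each node to a list of (neighbor, travel_minutes) pairs.
    Returns (total_minutes, path) or (inf, []) if the goal is unreachable.
    """
    queue = [(0, start, [start])]  # priority queue ordered by travel time
    visited = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return float("inf"), []

# A made-up commute: the highway is fast, and a side street off it
# beats the direct avenue route.
city = {
    "home": [("ave", 4), ("highway", 2)],
    "ave": [("office", 5)],
    "highway": [("office", 10), ("ave", 1)],
}
shortest_route(city, "home", "office")
# → (8, ['home', 'highway', 'ave', 'office'])
```

Your phone is doing something like this, except the edge weights are updated every second from the positions of every other connected device on the road.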

Human decision-making is already absent from many of our daily activities. Between the AIs guiding our cars, the AIs running the stock market, and the AIs telling us which products we want, we’re all already in hyperwar. Or call it hypercapitalism, or hypersubjugation. Whatever the nifty meme, our decision loops have been sheared.

If we assume, as most people do, that a superintelligent AI has not yet been created, it’s bad enough. Weak AIs, highly capable in narrow domains, are propagating faster than any one person can know. The cascading effects of a gaggle of disconnected specialist AIs are wildly unpredictable. Each neural-net program is a black box, even to its creators, so there’s no way for us to know in advance what any given algorithm will do in a particular situation. Add to that the dizzyingly fast interactions between AIs – think microsecond stock trading – and we’re in a liquefied society that could shift drastically and instantaneously out from under our feet.

The only thing that could do strategic decision-making in a world of weak AIs, then, is strong AI. A superintelligence could outfox all the specialist algorithms, and even play them against each other. Naturally, it would run circles around humans too: preying on our cognitive biases, it could influence elections, escalate international tensions, distract and entertain the masses, and rapidly install a political elite pliable to its whims.

In fact, it could already have done so. We’d never know. A being that canny would only reveal itself once we were too late to stop it.

So, sure, let’s give the World Machine untold billions of dollars of military spending, on both sides of the Pacific. Why not? We’re practically doomed already.

Hand it to the generals and techlords hyping “hyperwar”: they do cyber-nihilism better than anyone.

Now, there are some reasonable people still left on the planet. The fine folks at the Campaign to Stop Killer Robots are mounting a last-ditch defense against the memetic appeal of HYPERWAR™:

### Slaughterbots

###### Note: This video may not be safe for work! No sex, but definitely violence, and emotional triggers. If you have to deal with normal people today, and pretend like you’re in the 20th century doing business as usual, you should probably wait until you get home.

The video is powerful, and the #bankillerrobots meme is worth rallying around. Unfortunately, I’m afraid that it may be too late. With Russia, China, and NATO forces all investing in strategic AI research, a sort of cold war has begun. An arms race. Mutually assured destruction is a very real possibility.

I’m scared. Thanks for listening.

– Max


SCIOPS is a weekly newsletter about cognitive security. Feel free to forward it to anyone you think would like it, or share it on social whatevers. If you have thoughts, questions, or criticism, just respond to this email.

If you’re seeing this for the first time, make sure to sign up at tinyletter.com/sciops for more cyberpunk weirdness in your inbox every week.