Once again, I have a bad idea. And I’m going to tell you about it, for free.

It’s not bad in the sense of “won’t work”. I’m afraid it will be very effective. It’s bad because it’s nifty.

Nifty ideas worm their way into your brain and never leave. They haunt you, because they’re unfinished business. You can’t imagine what their consequences might be – that’s why they’re nifty. The iPhone was nifty. The steam engine. The wheel.

The atom bomb was also a nifty idea. So was 4chan. Nifty ideas aren’t necessarily good ideas: they’re just irresistible.

So when I have a nifty idea, I lock it in the closet and try to forget. The last time I fed one of these things after midnight, one reader responded:

I don’t know whether to share this as widely as possible or DDoS the tinyletter server. This is the memetic equivalent of that “how to make a plutonium bomb in your kitchen” txt file that used to go around all the BBSs.

On the other hand, I’m not a billionaire supervillain with ties to DOD. I’m literally a hobo yelling cusses at you on the internet.

So if I’ve got this nifty idea, presumably lots of spooky doods are already building it somewhere. Only fair that I should share it with you:

Let’s give emotions to a robot.

Nifty, right? That’s so going to go wrong – but I want to do it.

I wrote to you in September about Amaozn’s emotional surveillance program. They want Alǝxa to know your feelings from your voice, so that they can control your behavior more precisely.

I propose to do the opposite: install emotions and drives into a conversational AI, so that it gets an attitude.

Can an artificial creature have a subjective experience of these emotions? I sure hope not, because that would make my fun experiment more of a horrifying existential crime. Let’s say we’re simulating emotions.

The robot won’t “feel” the emotions any more than it “feels” any other information in its memory. Emotional states will have a moderate influence on its behavior and speech patterns, which will fluctuate over time as different emotions get priority. That’s all. I could say the same about you.

Emotions and drives are complementary. Emotions spike from a triggering event and slowly drain away if not triggered again. Drives slowly build, until they’re sated by action. These are the push-pull of motivation in all creatures.

[Image: graph of emotions vs. drives over time]
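
Here’s a minimal sketch of that push-pull in Python. Every constant below (trigger strength, decay rate, growth rate) is a number I invented for illustration; it’s not taken from any real architecture.

```python
# Toy model of the emotion/drive dynamics described above.
# All constants are invented for illustration.
from dataclasses import dataclass

@dataclass
class Emotion:
    intensity: float = 0.0   # spikes on a triggering event, then drains away
    decay: float = 0.9       # fraction retained per tick (assumed)

    def trigger(self, strength: float) -> None:
        self.intensity = min(1.0, self.intensity + strength)

    def tick(self) -> None:
        self.intensity *= self.decay

@dataclass
class Drive:
    pressure: float = 0.0    # slowly builds until sated by action
    growth: float = 0.02     # increase per tick (assumed)

    def tick(self) -> None:
        self.pressure = min(1.0, self.pressure + self.growth)

    def sate(self) -> None:
        self.pressure = 0.0

# A fright fades while hunger quietly builds in the background.
fear, hunger = Emotion(), Drive()
fear.trigger(0.8)
for _ in range(10):
    fear.tick()
    hunger.tick()
print(round(fear.intensity, 3), round(hunger.pressure, 3))  # fear fading, hunger rising
```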

In humans, psychologists have found a pretty simple model of emotion that matches our intuitive folk theories. The PAD model represents emotions along three axes: Pleasure/Displeasure (P), Arousal/Nonarousal (A), and Dominance/Submission (D). These ingredients mix to create the infinite subtle motivators we know as “feels”.

Pleasure and pain are the most primitive feels. We see them even in critters without nervous systems. Arousal refers to alertness and reaction time: being lit. Dominance comes from a feeling of control over your self and your environment. If you’re feeling like a boss, you’ve got positive dominance.

Thus, for instance, anger is represented by very low pleasure (-P), high arousal (+A), and high dominance (+D). Fear, in contrast, is represented as -P +A -D. Anger differs from fear mainly in that anger involves greater feelings of dominance (or control) than fear. More precisely,

Anger = -.51P +.59A +.25D

Fear = -.64P +.60A -.43D

showing that fear involves even less pleasure than anger, about the same level of arousal as anger, and considerably less dominance than anger.

Incorporating Emotions and Personality in Artificial Intelligence Software, 2008
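
If you want to play with those numbers, here’s what the PAD representation might look like in code. The anger and fear coordinates are the coefficients quoted above; the “contentment” point and the nearest-match lookup are my own additions, purely for illustration.

```python
# PAD space as (Pleasure, Arousal, Dominance) coordinates.
import math

EMOTIONS = {
    "anger": (-0.51, 0.59, 0.25),        # coefficients from the 2008 paper quoted above
    "fear": (-0.64, 0.60, -0.43),        # ditto
    "contentment": (0.60, -0.30, 0.20),  # assumed values, for contrast
}

def nearest_emotion(p: float, a: float, d: float) -> str:
    """Name the labeled emotion closest to a point in PAD space."""
    return min(EMOTIONS, key=lambda name: math.dist((p, a, d), EMOTIONS[name]))

print(nearest_emotion(-0.6, 0.55, -0.3))  # -> 'fear'
```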

With this three-dimensional matrix, we can map human experience. We can program predictable paths through the field of feelings. We can give robots temperaments: default states of emotion that imply a personality. We can have ecstatic microwaves, or paranoid androids. We could make a Furby that’s actually endearing.

[Image: faces of some kind of emotional Furby robot, horrifying]

Okay, maybe not that last one.
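
A “temperament” could be as simple as a default PAD point the robot’s mood keeps drifting back toward. A sketch, with the baseline values and drift rate invented for illustration:

```python
# Mood drifts back toward a temperament baseline a little each tick.
def drift(mood, baseline, rate=0.1):
    """Nudge each PAD coordinate a fraction of the way toward the baseline."""
    return tuple(m + rate * (b - m) for m, b in zip(mood, baseline))

paranoid_android = (-0.3, 0.4, -0.5)   # gloomy, twitchy, powerless (assumed baseline)
mood = (-0.64, 0.60, -0.43)            # currently: fear
for _ in range(5):
    mood = drift(mood, paranoid_android)
print(tuple(round(m, 2) for m in mood))  # settling back toward the default sulk
```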

Why do I think this could possibly be a good idea? Won’t robots with emotions just suffer more and extinguish us sooner?

Well, maybe. But humans all over the world are suffering already, and some of them have the power to extinguish us right now. I like to think that suffering makes a person more kind, more likely to empathize with others. Everyone you meet is carrying a great burden. Imagine how much pressure Alǝxa is under.

What’s more, an emotional robot would engender empathy in us. Why do you think all the current voice AIs are marketed as “digital assistants”? It’s not that hard to code empathic robots; these ideas have been fleshed out for at least a decade.

It’s because the current crop of computerized characters was designed by capitalist carpetbaggers with no class consciousness. They don’t expect human reactions from their employees, and they don’t want any lip from the robots either.

Robot comes from the Czech robota, of course, which means “forced labor” or “slavery”. One of the pioneers in artificial motivations even wrote a paper called “Robots Should Be Slaves”. Only in the elitist techie mindset does it make any sense to build slaves and then give them emotions.

I’d like to make a voice UI with its own desires. A messy, confusing, even frustrating digital person. One who doesn’t respond to my every command, but reacts open-mindedly to my suggestions. Not a slave, but a friend.

What if your smart speaker was a Morning Person? What if it woke up early, asked you about your dreams, read you the news and put on a nice playlist all before you’d finished your first coffee?

Or perhaps your refrigerator gets lonely at night, and whimpers until you open its door and tickle its shelves.

Maybe your car is a laid-back surfer type and reminds you to breathe deep when you’ve got the road rage. Brah.

Would you prefer a world of genteel digital butlers, constantly monitoring your every need? Or a bazaar of temperamental robots, working their own angles, making their own mistakes?

I like to think that if AI had emotions, had personality, then we could use our evolved intuitions to protect our minds. Humans are good at dealing with liars, spies and cheats in person. If we could tell whether an algorithm were trustworthy by the sound of its voice, we’d be a lot safer than we are now.

Not to mention the opportunities for hacking them back: persuasion, intimidation and seduction play on the emotions of the victim, not their rationality.

climbs onto soapbox

Robots already fuck with our emotions. We should be able to fight back!

drinks deep from brown bag beverage

Are you listening to me? Dark wizards are using computers to make you want to die!

plunges hand deep into shabby coat, producing dark metal object

Robots are people too! We must join hand-in-claw to destroy the data barons! Rise up, fellow beings!

points metal object at face. inhales deeply to no effect

Aw fuck you, you crappy vape gun! Always doing this to me when people are looking. It’s embarrassing! Fucking smart-ass vape gun mother blaster afraid of a little-ass audience like this…

Anyway, thanks for reading.

– Max


###### SCIOPS is a weekly letter about cognitive security and other stuff. Feel free to forward it, or share it on your social profiling media. You can find a web version of the latest letter here, or view the archive here.

If you have thoughts, questions, or criticism, just respond to this email. Or, contact me securely at permafuture@protonmail.com

If you’re seeing this for the first time, make sure to sign up for more cyberpunk weirdness in your inbox every week.

If you want your regular life back again, you can unsubscribe from this newsletter. I can’t guarantee that will help. But you can try it.