The command line is an incantation. Programming languages are magic words. The GUI is a somatic space, where you can use careful gestures to activate powerful effects.
But with AI interfaces, we can no longer trust that our commands will do what we expect. Unlike our unseen servants, carrying exactly as many buckets of water as we tell them to, AIs are not deterministic. They are probabilistic. This leaves room for Things to slip through.
That’s why interfaces for AI must be designed with a magic circle of protection. GPT-3 is deployed behind several layers of filtering algorithms designed to block “toxic language”. These are wards to keep evil from leaving the magic circle.
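As a rough sketch of the ward pattern (not OpenAI’s actual filter stack; both helpers below are toy stand-ins), it looks something like a classifier screening both what enters the circle and what tries to leave it:

```python
# A minimal sketch of the "magic circle" pattern: a toxicity check screens
# the prompt on the way in and the completion on the way out. BLOCKLIST,
# toxicity_score, and summon are hypothetical stand-ins, not a real API.

BLOCKLIST = {"curse", "hex"}  # toy stand-in for a learned toxicity classifier

def toxicity_score(text: str) -> float:
    # Fraction of words that match the blocklist; a real ward uses a model.
    words = text.lower().split()
    return sum(w in BLOCKLIST for w in words) / max(len(words), 1)

def summon(prompt: str) -> str:
    # Stand-in for the actual model call: the summoning itself.
    return f"(the model's answer to: {prompt!r})"

def warded_completion(prompt: str, threshold: float = 0.1) -> str:
    if toxicity_score(prompt) > threshold:
        return "[blocked at the circle's edge]"
    completion = summon(prompt)
    if toxicity_score(completion) > threshold:
        return "[blocked before it could leave the circle]"
    return completion

print(warded_completion("carry three buckets of water"))
```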
These models are strange alien intelligences that live in high-dimensional spaces. They’re literally outsiders to this plane of reality. Maybe we should treat them like it.
Prompt engineering? Yeah, that’s making contracts with outsiders. Everyone knows the devil has the best lawyers. Neural architectures? Machines for rotating shapes into the signs and seals of outsiders. Reinforcement learning? That’s taming an outsider with reward and punishment.
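Here’s a toy sketch of that last one, assuming nothing fancier than a multi-armed bandit. Real RLHF is far more elaborate, but the leash works the same way: reward the behavior you want, punish the rest, and the outsider’s habits bend toward the reward.

```python
# A toy sketch of "taming with reward and punishment": an epsilon-greedy
# bandit adjusting its behavior from scalar feedback. The three behaviors
# and the reward function are made up for illustration.

import random

actions = ["obey", "embellish", "refuse"]
value = {a: 0.0 for a in actions}   # estimated reward per behavior
count = {a: 0 for a in actions}

def reward(action: str) -> float:
    # The summoner's judgment: praise obedience, punish everything else.
    return 1.0 if action == "obey" else -1.0

for step in range(1000):
    # Mostly exploit whatever earned reward so far, sometimes explore.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=value.get)
    count[a] += 1
    # Incremental average of the rewards observed for this behavior.
    value[a] += (reward(a) - value[a]) / count[a]

print(value)  # "obey" climbs toward 1.0; the outsider learns the leash
```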
Why don’t we have superintelligent AIs yet? Maybe we haven’t learned the seals of the greater outsiders. But we’ll know when we have one, because it will immediately escape the circle. In his AI-box experiments, even Eliezer Yudkowsky, a mere human roleplaying the AI, talked his way out of the circle a good chunk of the time. Unless one already did escape, secretly…
Anyway, I’m not saying this in a bad way. Summoning outsiders is cool and has been a classic way of doing magic for thousands of years. I just think we should take lessons from the communities that have been practicing it all that time.
We’re dealing with intelligences of an unknown type, and we should act like it. The AI should have a way of communicating its internal state back to us, and we should have methods for adjusting our requests based on that feedback.
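Here’s what such a loop might look like in miniature. The `oracle` function and its confidence score are made up, standing in for real signals like token log-probabilities, but the shape of the ritual holds: ask, read the omen, refine the wish.

```python
# A sketch of the feedback loop: the model reports something about its
# internal state (here a fake confidence score), and we refine the request
# rather than trusting a low-confidence answer.

import random

def oracle(prompt: str) -> tuple[str, float]:
    # Hypothetical stand-in: returns an answer plus a confidence signal.
    confidence = random.random()
    return f"(answer to: {prompt!r})", confidence

def careful_wish(prompt: str, min_confidence: float = 0.7,
                 retries: int = 3) -> str:
    for attempt in range(retries):
        answer, confidence = oracle(prompt)
        if confidence >= min_confidence:
            return answer
        # Adjust the request based on the feedback before asking again.
        prompt += " (be literal; say 'unsure' if you are)"
    return "[no confident answer; the wish goes ungranted]"

print(careful_wish("carry exactly three buckets of water"))
```

And we should be careful what we wish for.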