I should make my stance on superintelligence clear: we’re less likely to create independent volitional entities than we are to create superpowered human-AI centaurs. This does not solve the alignment problem, since our human coordination mechanisms are already misaligned.
On the whole, though, this is probably good. We have surpassed the limits of our previous coordination mechanisms and the limits of human cognition. If we’re going to make it through the bottleneck of the century, we’re going to need new powers.
This is why comic book heroes are our modern mythology, by the way. We’re undergoing a process of superheroification, and we are grappling with its implications, both moral and strategic.
Novacene doesn’t directly take on the question of superintelligent agents versus centaurs, but it does make the case for the bottleneck: if we don’t avert climate collapse and life ends on this planet, the sun is now too hot for it to begin again.
Do you think the James Lovelock book you mentioned makes a convincing case for this?
— 🪑 Chair (@chairsign) May 12, 2022
Novacene - it’s a good book, short and to the point. The point is kind of weird though.
— shill 🔍 (@acidshill) May 12, 2022
This essay seems to be about centaurs maintaining a competitive edge against pure AI in finite games with static rule sets. But that’s not what human coordination problems actually look like. We will play new games.
Relevant https://t.co/mry5uO9DWh
— DogBot (@DogIsABot) May 12, 2022
Yeah, there’s not going to be a hard line so much as an accelerating curve. I’m already sending these messages to thousands of people worldwide, instantly, by mumbling into my pocket computer.
Strong cosign, arguably we're already deep down the centaur rabbithole with current gen tech https://t.co/QUhRDzShF0
— mattparlmer 🪐 🌷 (@mattparlmer) May 12, 2022
To clarify: I don’t think AI safety and alignment work is unimportant. But I think it will accelerate along with AI progress and tend to produce centaurs, as we develop better interpretability measures and AIs are reinforced on our desires.