08:02:03

If you're so smart why do you have to make so many posts

🗨️ 5 ♺ 1 🤍 39


10:30:42

The phone is the binky of the hand

🗨️ 2 ♺ 0 🤍 33


10:34:31

the problem with working in a bookstore

🗨️ 3 ♺ 2 🤍 28


12:11:21

i love CSS actually

🗨️ 9 ♺ 0 🤍 40


14:13:11

give me Beastie Boys and free my SOUL sometimes we ROCK and sometimes we ROLL

🗨️ 1 ♺ 0 🤍 15


15:46:23

yes

🗨️ 3 ♺ 0 🤍 16


16:05:23

I guess I should make clear my stance on superintelligence: We're less likely to create independent volitional entities than we are to create superpowered human-AI centaurs. This does not solve the alignment problem, as we already have misaligned human coordination mechanisms

🗨️ 20 ♺ 17 🤍 167


16:12:32

on the whole, though, this is probably good. We have surpassed the limits of our previous coordination mechanisms, and the limits of human cognition. If we're to make it through the bottleneck of the century, we're going to need new powers

🗨️ 2 ♺ 2 🤍 41


16:14:53

This is why comic book heroes are our modern mythology, by the way. We're undergoing the process of superheroification, and we are grappling with the implications of that, moral and strategic both

🗨️ 3 ♺ 2 🤍 52


16:17:37

novocene doesn't directly take on the question of superintelligent agents versus centaurs, but it does make the case for the bottleneck: if we don't avert the climate collapse, if life ends on this planet, the sun is now too hot for it to begin again https://t.co/0ROmnGSbiQ

🗨️ 2 ♺ 0 🤍 24


16:19:00

Novacene*. It's a good book, short and to the point. The point is kind of weird, though https://t.co/T9nDdDqfwK

🗨️ 1 ♺ 0 🤍 16


16:28:06

This essay seems to be about centaurs maintaining a competitive edge against pure AI in finite games with static rule sets. But that's not what human coordination problems actually look like. We will play new games https://t.co/ugaqhtx70t

🗨️ 3 ♺ 1 🤍 16


16:33:29

yeah there's not going to be a hard line so much as an accelerating curve. I'm sending these messages to thousands of people worldwide, instantly, by mumbling into my pocket computer https://t.co/uVIfJ0oFUL

🗨️ 2 ♺ 1 🤍 17


17:06:08

to clarify: I don't think AI safety and alignment work is unimportant. But I think it will accelerate along with AI progress and tend to lead to centaurs, as we develop better interpretability measures and AIs are reinforced on our desires

🗨️ 0 ♺ 2 🤍 17