
AI alignment charts

There’s a 2x2 here for transformative AI, something like: generalizes versus specializes, harmonizes versus destabilizes.

The former axis tracks centralization: it tends toward fewer, bigger models at the more generalized end. The latter is more about AIs frequency-locking to human timescales, coordination patterns, cycles that are important to us.

So you could have a generalize/harmonize world, with gentle giants that lawfully work together, or a specialize/destabilize one, an asymmetric war of all against all, etc.
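Laid out as a chart (the two mixed quadrants left open, as above):

|                 | harmonizes                                 | destabilizes                      |
| --------------- | ------------------------------------------ | --------------------------------- |
| **generalizes** | gentle giants that lawfully work together  | ?                                 |
| **specializes** | ?                                          | asymmetric war of all against all |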

Wait, did I just reinvent alignment charts?
