The Adolescence of Technology
Notes on Dario Amodei's risks essay
Overall, he gives quite a few realistic-seeming examples of how AI could be used to do serious harm. If I were a terrorist, I'd read this as a playbook.
Section 1
- AI will become like a "country of geniuses in a data center" within the next couple of years.
- Even if there is only a small chance that AI is used for catastrophic harm, or becomes catastrophically harmful itself, we should plan for it.
- We should be careful with legislation, because technology evolves faster than lawmakers can adapt.
- He proposes chip export controls and transparency about risks.
- He's somewhat confident that alignment training (e.g. Constitutional AI) combined with mechanistic interpretability is the way forward.
Section 2
- Terrorists or mentally ill people could use AI to help them create weapons of mass destruction.
- He's most concerned about bioweapons, since they have the greatest potential to easily wipe out the entire human race.
- He gives a decent number of examples of these.
- Google and Anthropic have both deployed classifiers on their models to screen for this kind of request.
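The classifier idea can be sketched roughly like this. This is a deliberately toy, hypothetical keyword-scoring filter to show the gating pattern (screen the prompt, refuse or pass through); the real Google and Anthropic systems use trained ML classifiers, and every name and term list below is made up:

```python
# Hypothetical sketch of a safety-classifier gate in front of a model.
# Real deployments use trained ML classifiers, not keyword lists.

BIOWEAPON_TERMS = {"pathogen synthesis", "aerosolize", "gain-of-function"}


def risk_score(prompt: str) -> float:
    """Toy scoring: fraction of flagged terms present in the prompt."""
    text = prompt.lower()
    hits = sum(term in text for term in BIOWEAPON_TERMS)
    return hits / len(BIOWEAPON_TERMS)


def guarded_generate(prompt: str, model=lambda p: f"response to: {p}") -> str:
    """Refuse when the classifier flags the prompt; otherwise call the model."""
    if risk_score(prompt) > 0.0:
        return "Request refused by safety classifier."
    return model(prompt)
```

The point is architectural: the classifier sits outside the model, so it can be tightened or retrained without touching the model itself.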
Section 3
- State actors can do a lot of damage with AI: surveillance states, powerful offensive capabilities where we don't know whether defense will keep up, etc.
- The CCP is the most worrying actor. We should stop exporting advanced chips to them; it's like exporting uranium for nuclear weapons.