As a corollary of quantum immortality, events that would kill all humans, such as the posthuman technocapital singularity or other forms of AGI doom, cannot happen: no observer ever finds themselves in a branch where everyone dies. By extension, anything that would lead to such an event also cannot happen. This anthropic shadow explains* many phenomena:
- Python has bad dependency management because all ML code is written in it. If it were good, we would have AGI.
- RL doesn't work stably or reliably because it would be too powerful.
- LLMs are what we got because they are slow to develop (scaling laws) and can do some useful tasks but are bad at agentic action.
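
A minimal sketch of the scaling-laws point: under a Chinchilla-style loss fit, pretraining loss falls only as a power law in parameter count and training tokens, so each doubling of compute buys a shrinking improvement. The constants below are the fits reported in Hoffmann et al. (2022), and `chinchilla_loss` is a hypothetical helper name, not anything from that paper's code.

```python
# Sketch of a Chinchilla-style scaling law: L(N, D) = E + A/N^alpha + B/D^beta.
# Constants are the published fits from Hoffmann et al. (2022); treat as illustrative.
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7    # irreducible loss and fitted coefficients
    alpha, beta = 0.34, 0.28        # power-law exponents for params and tokens
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling both parameters and data shaves only ~0.05 off the loss:
print(chinchilla_loss(70e9, 1.4e12))   # Chinchilla-scale model: ~1.94
print(chinchilla_loss(140e9, 2.8e12))  # 2x params and tokens:   ~1.89
```

On this view, the power-law decay is exactly the "slow to develop" property the anthropic argument selects for: capability arrives gradually rather than discontinuously.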