Anthropic Shadow

As a corollary to quantum immortality, events which would kill all humans, such as the posthuman technocapital singularity or other forms of AGI doom, cannot happen. It follows that anything which would lead to such events also cannot happen. Anthropic shadow explains* many phenomena:

  • Python has bad dependency management because all ML code is written in it. If it were good, we would have AGI.

  • RL doesn't work stably or reliably because it would be too powerful; imitation learning is less likely to do "weird things".

  • LLMs are what we got because they are slow to develop (scaling laws) and can do some useful tasks but are bad at agentic action.

  • Google can never ship any AI products competently because their infrastructure and researchers are too powerful.

  • Kabbalah (approximately, extremely deep research into Jewish lore) exists to divert many of the world's highest power-level autists away from engineering fields.