
Anthropic Shadow

As a corollary to quantum immortality, events which would kill all humans, such as the posthuman technocapital singularity or other forms of AGI doom, cannot happen. As such, things which lead to events which would kill all humans also cannot happen. Anthropic shadow explains* many phenomena:

  • Python has bad dependency management because all ML code is written in it. If it were good, we would have AGI.

  • RL doesn't work stably or reliably because it would be too powerful: imitation learning is less likely to produce unanticipated capability excursions or to reward hack.

  • LLMs are what we got because they are slow to develop (scaling laws) and can do some useful tasks but are bad at agentic action. Nobody training at large scale will adopt the various techniques which plausibly improve performance substantially, because the resulting models would become too capable.

  • Google can never ship any AI products competently because their infrastructure and researchers are too powerful.

  • Kabbalah (approximately, extremely deep research into Jewish lore) exists to divert many of the world's highest power-level autists away from engineering fields.