Diff of Anthropic Shadow at 9b8e396

@@ -4,3 +4,3 @@ As a corollary to [[quantum immortality]], events which would kill all humans, s
 * RL doesn't work stably or reliably because it would be too powerful - imitation learning is less likely to do "weird things".
-* LLMs are what we got because they are slow to develop ([[scaling laws]]) and can do some useful tasks but are bad at [[agentic]] action.
+* LLMs are what we got because they are slow to develop ([[scaling laws]]) and can do some useful tasks but are bad at [[agentic]] action. No one training at large scale will adopt the various techniques which plausibly offer substantial performance improvements, because the resulting models would become too capable.
 * Google can never ship any AI products competently because their infrastructure and researchers are too powerful.