Diff of Autogollark at 9b35383

@@ -41,3 +41,3 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 * Maybe compute grants are available for training.
-* Substantial bandwidth bottleneck on CPU (230GB/s nominal; 200GB/s benchmarked; 100GB/s per NUMA node, which llama.cpp handles awfully). Specdec/MTP would be useful.
+* Substantial bandwidth bottleneck on CPU (230GB/s nominal; 200GB/s benchmarked; 100GB/s per NUMA node, which llama.cpp handles awfully). Specdec/MTP would be useful. Can anything use AMX well though?
 }
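
The bandwidth figures in the bullet imply a hard ceiling on dense decode throughput, since each generated token must stream the full weight set from memory. A back-of-envelope sketch (the 200GB/s and 100GB/s figures come from the note above; the 20GB quantized-model size is an assumed example, not from the note):

```python
# Decode on CPU is memory-bandwidth-bound: tokens/s is roughly
# (usable bandwidth) / (bytes read per token) ~= bandwidth / model size.
def decode_tok_per_s(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

# 200 GB/s benchmarked across the machine; 100 GB/s if traffic is
# effectively confined to one NUMA node (as with poor NUMA handling).
# 20 GB quantized model is a hypothetical example size.
both_nodes = decode_tok_per_s(200.0, 20.0)   # ~10 tok/s ceiling
one_node = decode_tok_per_s(100.0, 20.0)     # ~5 tok/s ceiling
print(both_nodes, one_node)
```

This is why speculative decoding / multi-token prediction helps: verifying several draft tokens in one pass amortizes a single weight read over multiple tokens, raising the effective tokens-per-byte-moved.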