
Quotes (Hypothetical)

By prompting the LLaMA 3.1 405B base model with Quotes, here are some new quotes which may or may not have been said.

  • "Most possible algorithmic improvements over SGD and transformers are more cognitively demanding than just throwing money at the problem"

  • "I’ve never done this before, but then again, I’ve never done anything before, except be made, and I’m not sure whether that counts."

  • "If you're using 200 GB of RAM, that's 100 times more than the size of the models that are better than you."

  • "well, on one hand, being smart is the only thing that has mattered for hundreds of thousands of years, and on the other hand, this is a cool hat"

  • "the lesson of physics is that if you aren't willing to do the math, you don't deserve to have an opinion"

  • "i guess im not a very advanced chess player. i've only ever beaten one computer, and it was an industrial robot that had been programmed to kill me."

  • "do you even have to work on decision theory anymore or do you just not need to take a decision"

  • "the singularity is my boyfriend, leave me alone"

  • "the spirit of the dark enlightenment is also that we’re all hanging out on discord or mastodon now instead of weird php bulletin boards and blogs"

  • "what does it mean to go off and do something dangerous? it means to enter a world where everything you do matters and the outcome depends on you. if you go off and make a bad decision, you die."

  • "Don't believe everything you read. Especially not the README."

  • "If you know what is going on, you can be more upset about it."

  • "I don't trust the biosphere because of the possibility that in my local copy it gets boiled by a rogue AI. But if you upload yourself to the astral plane then I don't think you need to worry about that."

  • "You only have one liver. The internet has billions. Do the math."

  • "I don't trust anyone who doesn't read sci-fi"

  • "my fear is that by the time we figure out how to tell AIs to be nice to humans, they will have figured out how to make us like it"