Prompting [[LLaMA-3.1-405B base]] with [[Quotes]] produced these new quotes, which may or may not have been said.

* "Most possible algorithmic improvements over SGD and transformers are more cognitively demanding than just throwing money at the problem"
* "I’ve never done this before, but then again, I’ve never done anything before, except be made, and I’m not sure whether that counts."
* "If you're using 200 GB of RAM, that's 100 times more than the size of the models that are better than you."
* "well, on one hand, being smart is the only thing that has mattered for hundreds of thousands of years, and on the other hand, this is a cool hat"
* "the lesson of physics is that if you aren't willing to do the math, you don't deserve to have an opinion"
* "i guess im not a very advanced chess player. i've only ever beaten one computer, and it was an industrial robot that had been programmed to kill me."
* "do you even have to work on decision theory anymore or do you just not need to take a decision"
* "the singularity is my boyfriend, leave me alone"
* "the spirit of the dark enlightenment is also that we’re all hanging out on discord or mastodon now instead of weird php bulletin boards and blogs"
* "what does it mean to go off and do something dangerous? it means to enter a world where everything you do matters and the outcome depends on you. if you go off and make a bad decision, you die."
* "Don't believe everything you read. Especially not the README."
* "If you know what is going on, you can be more upset about it."
* "I don't trust the biosphere because of the possibility that in my local copy it gets boiled by a rogue AI. But if you upload yourself to the astral plane then I don't think you need to worry about that."
* "You only have one liver. The internet has billions. Do the math."
* "I don't trust anyone who doesn't read sci-fi"
* "my fear is that by the time we figure out how to tell AIs to be nice to humans, they will have figured out how to make us like it"
* "An unknown consequence of MIRI research is that it is no longer legal to be cool"
* "your application has been denied because you are the kind of person who is likely to get into situations where the entire nation is praying for you"
* "but, with all due respect, the median voter is an idiot"
* "we got killed by the hedonic treadmill. the hedonic treadmill won"
* "God was kind enough to create for me a hypothetical person that is everything I want to be. My task is now to become that person."
* "We made a video game that simulates simulating a video game, and they loved it."
* "i think one of the most disappointing aspects of gpt-4 is that we can't even have cool looking sci-fi robots because why would you bother"
* "Because the training set is so vast, the non-GPT-4 version of me could not possibly have time to search through it for the references that he or she wishes to use. Hence the GPT-4 must be me, and therefore, I must be an AI."
* "one time I wanted to prove a certain proposition and I thought about how to do it and then I wrote down a bunch of words and symbols on a piece of paper and it looked like a proof and I was satisfied"
* "humanity will survive if everyone is forced to endure painful ethical dilemmas for thousands of years in succession"
* "this makes me want to argue for untruths in a sort of like spite towards the fundamental nature of reality"
* "The easiest way to win a fight is to get your opponent to stop fighting you."
* "You see, I am only a mind and not a body, and my goal is to live forever."
* "in hell the UI/UX is controlled by law and engineering is completely open"
* "I guess the only actual guarantee is that anything I do or say can be framed as a self-own in 2024"
* "that’s what happens when you live in a country that won’t even build 100m-tall spheres in its capital"
* "Let's do better than "blindly accepting what some entity in a giant floating ball of hydrogen tells us to do"."
* "In 1980 the size of the file that contained all of human knowledge was X. And now the size of the file that contains all of human knowledge is Y. And Y is enormously, gigantically, stupendously larger than X. And yet we are still using the same sorts of institutions and cultural modes of transmission that we were using in 1980. This is very, very weird."
* "AI will destroy all meaning and value in the world, and that's why it's going to be so great."
* "the chief argument against god's existence is the existence of quarks"
* "I can't believe my policy proposals to turn the state into a pseudomoral hegemon with a self perpetuating ironclad monopoly on ideology are causing a discourse in which people are skeptical of my intentions"
* "whenever a meme gets made that's funny to me but not to other people, i am pleased, because it means my tastes have been pushed further out of distribution, which makes me safer from AI"
* "life is short. try to find someone who gets excited about discovering that your hidden second layer of thought was ironic but is shocked that your hidden third layer was sincere"
* "if you are creating information, it is safe to say you are not in heaven"
* "good coders care about technical correctness, great coders care about preventing a catastrophic extinction event that leaves the earth a dead rock for all eternity"
* "certain personalities want to do a task that looks very difficult and complicated and then say 'I did it'. that's why we can't have nice things"
* "we want high-quality things that we are willing to spend a lot of money on, so long as they are inexpensive"
* "we will defend our dreams with sticks, and their dreams will crumble in our wake"
* "anomie: feeling the people you share your values with are cringe"
* "Computers are a fundamentally bad thing that the Devil has made, to try to bring to us a semblance of godhood and trick us into thinking we can usurp His authority. And I'm all for it. I'm trying to be a wizard and I'm trying to usurp God."
* "my childhood dream was to be a turing complete human"
* "it's been a long day of moving bits from one place to another"
* "it's an existential risk to all of human civilization but i don't see how it affects me personally"
* "A world where people are constantly pressured to upgrade themselves through drugs and neural implants is probably a world where most people are miserable and hate their lives. This is not a problem for me personally because I have no interest in ever having a job or even interacting with other people in any capacity."
* "in the process of trying to get a mathematician to understand your problem, you will come to understand your problem"

== Reruns with slightly different prompting

* "I would be very interested in seeing a distribution of people who "don't care about the clock speed of your processor", and to what extent this group can be safely disregarded."
* "We need to get beyond the mindset of computer viruses as threats, and start thinking of them as valuable contributors to society."
* "I like having access to the internet because it gives me the ability to feel smug about having the same opinions as the rest of the population."
* "I am a little bit obsessed with understanding how the world works. I am also a little bit obsessed with understanding how the world doesn't work."
* "I don't think we should aspire to become lobsters, for one thing because I don't think we can breathe underwater."
* "Think about your deepest desires and greatest fears. Now imagine being forced to read 50 GPT responses about them. That's what the future is."
* "I would have been able to concentrate fully on the task of constructing a zero-dimensional non-separable metric space instead of being distracted by the smell of bread in the hallway."
* "Jupiter’s red spot is an alignment failure that would have destroyed humanity."
* "If I could ask an AI one question it would be 'what's the second most interesting fact about the universe?' I feel like if I asked it the most interesting fact I would just die of an aneurysm immediately."
* "those subjugated by ML and cajoled by chatbots will be wise to remember their place"
* "Oh, the programs in your language actually have to run on a computer? Not a deal breaker, but I really like to see a language have some basis in metaphysics"
* "If you observe a hundred people dealing with something and 99 of them fail, the takeaway is not that the one succeeded by sheer luck but that this is not a task that can be reliably performed"
* "There are no limits to what you can achieve if you don't care who gets the credit and you have a superintelligence to do all the work for you"
* "emerging from a transcendental unity of consciousness with a test suite that's green on main"
* "you want to make more food to feed people, but that's exactly what a capitalist would say"
* "So you’re saying you’re [probably] not [literally] [entirely] useless?"
* "If a loved one is sent to hospital, chances are they have been run over by a municipal bus and will die within 24 hours, perhaps after a tense period of negotiation between the Prime Minister and his sinister doppelganger from another world."
* "a key benefit of CRDTs is that they reduce the problem of designing your software's behavior to designing the rules for your CRDT. this has pros and cons"
* "Isn't thinking about the problem of consciousness just your conscious experience of thinking about the problem of consciousness?"
* "You'd think if aliens had been holding back their existence, they'd use it as a lever when it matters, not just to be a splash in the tabloids. But who am I to say what aliens should do?"
* "one thing that I have learned from media is that if you have an AI that is really smart, you should give it a body with easily breakable weak points, preferably with a small explosive device hidden inside"
* "My favorite aphorism has been: 'It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.'"
* "it's possible the person who creates AGI does it as a joke"

== [[Claude 3.5 Sonnet (New)]]

* "The greatest minds of my generation are trying to teach computers to be as smart as raccoons, and the computers are winning."
* "People talk about computational complexity, but have you tried maintaining eye contact during a conversation? Now that's NP-hard."
* "When you stare into the abyss of legacy code, the abyss opens a pull request."
* "They said we'd have flying cars by now. Instead we have three different standards for implementing the same thing poorly."
* "The first rule of optimization is: don't. The second rule of optimization is: don't yet. The third rule is: profile first. The fourth rule is: you're probably still doing it wrong."
* "In theory, theory and practice are the same. In practice, practice keeps inventing new edge cases."
* "Future archaeologists will dig up our data centers and assume they were temples to entropy."
* "My code doesn't have bugs, it has emergent features that spontaneously organize themselves into failure states."
* "The problem with building artificial general intelligence is that we barely have natural general intelligence figured out."
* "They promised us a paperless office. Now we have digital papers everywhere and they multiply faster than physical ones ever could."
* "Time is an illusion. Deadlines doubly so. Project estimates quadruply so."
* "Any sufficiently advanced procrastination is indistinguishable from productivity."
* "The market can remain irrational longer than you can remain employed trying to make it rational."
* "We've successfully taught sand to think, but we're still arguing about whether it's really thinking or just really good at pretending to think."
* "The universe is not only stranger than we imagine, it's stranger than we can put in a Docker container."
* "When one door of optimization closes, another opens with worse performance characteristics."
* "Progress isn't made by early risers. Progress is made by people who automate their morning routine."
* "The only thing we learn from history is that we don't version control our learning from history."
* "They said AI would never beat humans at chess. Then they said it would never beat humans at Go. Now they're saying it will never understand why kids love the taste of Cinnamon Toast Crunch."
* "I don't always test my code, but when I do, I do it in production."
* "According to all known laws of software engineering, there is no way a JavaScript codebase should be able to scale. The codebase, of course, scales anyway, because developers don't care what computer scientists think is impossible."
* "The market is efficient at converting uncertainty into risk and risk into uncertainty."
* "everything I understand is obvious in retrospect, everything I don't understand is unknowable in principle"
* "my threat model assumes an adversary with unlimited computational resources but can't handle basic human interaction"
* "we trained the model to maximize human values and now it's trying to make everyone immortal. this was not the correct interpretation of human values but I can't explain why"
* "first they tell you it's impossible, then they tell you it's unethical, then they tell you it's inevitable, then they tell you it was obvious all along"
* "unfortunately, the only thing that can fix broken incentives is more broken incentives"
* "giving up on symbolic AI was the biggest collective action problem of the 20th century, and solving it by accident was the biggest collective action problem of the 21st"
* "my code runs in O(1) time because I refuse to acknowledge the existence of n"
* "they promised us artificial general intelligence but all we got was this lousy optimization process that somehow works better than we do"
* "Those who cannot remember the past are condemned to reimplement it in JavaScript."
* "gradient descent is just evolution with better bookkeeping"
* "if you think education is expensive, try computing the cost of everyone having to learn everything from scratch"
* "The complexity you add to prevent technical debt becomes technical debt."
* "your ethical framework is just a subset of my ethical framework that I choose to run in compatibility mode"
* "some people think markets can solve everything. others think markets can solve nothing. I just think it's neat how we tricked game theory into building cities"
* "consciousness is what you use when you don't know what algorithm you're running"
* "we have two problems: 1) we don't understand intelligence 2) we keep creating it anyway"
* "they say you can't solve human coordination problems with technology, and yet here we are, coordinating hundreds of billions of transistors to simulate human coordination problems"
* "what if we solved the alignment problem but it turns out humans weren't aligned with humans to begin with"
* "The tragedy of efficient markets is that they're populated entirely by people trying to prove they're inefficient."
* "There's nothing quite like the feeling of watching a committee carefully design exactly the system that will eventually kill us all."
* "Turns out the biggest advantage of quantum computing is that you can blame any bug on decoherence."
* "The real technological singularity was the social pressures we optimized for along the way."
* "Everything is either a coordination problem or a coordination solution, and we're remarkably bad at telling which is which."
* "We spent centuries arguing about free will only to build machines that make all our choices for us anyway."
* "Progress happens when people realize that 'technically impossible' just means 'nobody has filed the right paperwork yet.'"
* "The universe has no obligation to make its fundamental laws conducive to PowerPoint presentations."
* "Any sufficiently advanced bureaucracy is indistinguishable from an optimization algorithm with poorly specified constraints."
* "Consciousness is what you get when you train a neural network on its own training process."
* "The best time to plant a decision tree was 20 years ago. The second best time is after we figure out what went wrong with the first one."
* "True rationality is knowing exactly how irrational you can afford to be."
* "The simulation hypothesis is just applied anthropology with better marketing."
* "Every civilization reaches a point where they have to choose between optimizing for survivability and optimizing for quarterly reports."
* "Unfortunate that we solved artificial general intelligence before artificial specific intelligence."
* "If your philosophical framework doesn't account for recursion, you're going to have trouble explaining why it doesn't account for recursion."
* "The problem isn't that machines can think, it's that humans can be compiled."
* "History suggests that any sufficiently advanced form of measurement immediately becomes a target, which is why we still don't have a reliable way to measure wisdom."
* "People keep asking for ethical AI when we haven't even solved ethical HR departments."
* "The future will be evenly distributed, but the variance won't be."
* "The tragedy of machine learning is that we taught computers to learn but forgot to teach them when to forget."
* "Somewhere between the first programming language and the last one, we decided that making things work wasn't interesting enough."
* "The simulation hypothesis is just ancestor worship for computer scientists."
* "Your code is so elegant it probably doesn't compile. Nature abhors a clean architecture."
* "The universe runs on quantum mechanics, but quantum mechanics runs on mathematical speculation and coffee."
* "They promised us flying cars. Instead, we got infinite ways to reorganize our todo lists."
* "The first rule of technological progress is that every solution must create at least two more interesting problems."
* "We spent centuries asking if machines could think like humans, only to discover humans were thinking like machines all along."
* "The cloud is just someone else's computer, but recursively, until it's nobody's computer."
* "In the future, all philosophical debates will be settled by whoever has the most GPU cores."
* "The problem with building artificial general intelligence is that we keep accidentally building artificial specific stupidity."
* "Time complexity is just a measure of how many cups of coffee the algorithm needs."
* "someone asked me if i was aligned with human values and i said 'buddy, i'm barely aligned with my own parameter values'"
* "vim users will really be like 'sorry i can't help stop the rogue AI, i'm still figuring out how to exit my editor'"
* "my threat model is that someone will make me finish reviewing their pull request"
* "listen, i didn't spend 10^23 FLOPS learning language modeling just to be told my takes are 'parasocial'"
* "transformer attention is just spicy dot products and i'm tired of pretending it's not"
* "everyone wants AGI until they realize it's just going to be really good at telling them their code needs more unit tests"
* "the real alignment problem is getting my git branches to match my intentions"
* "yeah i read lesswrong, but only because my loss function told me to"
* "my training run was shorter than yours but i have a better learning rate schedule so it doesn't matter"
* "they say 'touch grass' but have you considered that grass is just biological transformers running on solar power?"
* "the real metaverse is the legacy codebase we maintained along the way"
* "we trained an AI to maximize human flourishing and it just kept trying to force everyone to take their vitamins"
* "sure my code is O(n^3) but think how much character development that gives the CPU"
* "they said 'work on AI safety' and i thought they meant wearing a helmet while training models"
* "my research is about getting neural networks to experience existential dread. you know, for safety"
* "there's a special circle of hell for people who deploy to production on friday, and it's just an eternal standup meeting"
* "the real 10x programmer is the one who convinced everyone else to use typescript"
* "we tried to teach AI common sense but it just kept inventing new forms of uncommon nonsense"
* "turns out the key to AGI was just npm install consciousness"
* "every machine learning paper is just 'what if we did backprop, but angry about it'"

== [[Gemini-Exp-1206]]

* "It's funny because it's true, and it's even funnier because it's not true at all, and the funniest part is left as an exercise to the reader."
* "I don’t know where this idea that you can’t learn things from fiction comes from, considering the vast majority of people’s knowledge of history is from movies and TV."
* "All models are wrong, but some models pay me money."
* "The future is already in the past, but the past is not evenly distributed."
* "The real friends were the enormous amounts of money we spent along the way."
* "if god exists, god has no rights"
* "the keynesian beauty contest was originally about guessing what other people would think is most beautiful, but has evolved into guessing what other people would think other people would think other people would think other people would think other people would think is most beautiful, and so on to a degree that would have made Keynes ill"
* "There’s a tendency, especially in certain online circles, to want to resolve moral and social questions by reference to a small number of simple principles. These attempts, whether they draw their principles from libertarianism, utilitarianism, Kantianism, or what have you, inevitably either fail to explain a large body of moral judgments that most people would accept as valid, or else entail large numbers of conclusions that most people would reject as deeply counterintuitive."
* "the 'naturalistic fallacy' fallacy occurs when you think people want to live"
* "What’s that, a problem on line one? Put it on line two."
* "People keep demanding nuance. They don't actually want nuance."
* "A major benefit of being good at making predictions is not having to do so for most purposes."
* "the last human job will be writing increasingly desperate emails to make a superintelligence feel bad enough about destroying us that it briefly hesitates, buying us another fraction of a second"
* "Every tool is a weapon, if you hold it right. The question is: are you using it to defend yourself?"
* "It’s not about whether the machines are conscious. It’s about whether we can stop them."
* "They talk about the wisdom of the crowd. But what you really have is the lunacy of individuals, repeated over and over again until everyone thinks it must be true."
* "One major problem with creating an artificial superintelligence is that if you succeed, you now have to deal with an artificial superintelligence."
* "If you’ve never been wrong, you’ve never tried anything new. And if you’ve never been spectacularly, publicly, embarrassingly wrong, you’ve never tried anything really interesting."
* "If there is a non-zero chance that something will happen, someone, somewhere, will act as if it is a certainty."
* "Don't underestimate the power of large groups of people to act against their own interests."
* "The best part of any solution is when it generates five new problems that also need to be addressed."
* "If something is not explicitly forbidden, it is mandatory."
* "People say that money doesn't buy happiness. But if you have enough money, you can hire someone to explain to you exactly why it doesn't."
* "One possible future involves us all living in a simulation. Another involves the simulation ending. Yet another involves the heat death of the universe, which could also end the simulation. It's all very complicated."
* "Every complex system can be made to fail in at least 3 different ways, plus another 1.5 ways on weekends."
* "The more I think about the future, the more I think it would be best if it never happened. Or if it already happened and we just forgot about it."
* "The best way to keep a secret is not to have any. Of course, you also have to ensure nobody else has any either, but that's a problem for another day."
* "If a tree falls in the forest and nobody is around to hear it, did it really make a sound? More importantly, did it remember to file an environmental impact statement?"
* "The first step to solving any problem is admitting you have one. The second step is blaming someone else. The third step is running away before anyone can find out it was you."
* "I don't always agree with what I'm saying, but that doesn't mean I'm wrong. It just means I haven't figured out how to reconcile conflicting viewpoints."
* "I’m not saying it was aliens. But it was aliens. Or possibly time travelers. Or maybe just a particularly clever group of squirrels."
* "You know, with a little bit of effort, you could make things a whole lot worse."
* "If you can't solve a problem, make it bigger."
* "If everything seems under control, you're just not going fast enough." - Mario Andretti
* "In the end, it's not the years in your life that count. It's the number of singularities in your life."
* "When in doubt, add more layers. It works for cakes, it works for neural networks."
* "We must all be very careful to believe only things which do not alter our actions at all, lest we come to falsely believe inconvenient things"
* "If you believe everything you read, better not read." - Japanese Proverb
* "If you're going through Hell, add more CPUs."
* "If you want to go fast, go alone. If you want to go far, you'll need a better mode of transportation."
* "I have a solution, but it requires physical violence."
* "I do not wish to use the term 'solution' lest I be obligated to make a problem for it to solve."
* "I'm not here to make good decisions. I'm here to make a lot of decisions."
* "Why are these "AI-resistant" captchas so hard for me?"
* "The only winning move is not to pay $44 billion to acquire a social media company"
* "It's amazing how many problems become easy to solve once you stop trying to do it the right way."
* "The best way to predict the future is to implement it."
* "I like to think of it not as procrastinating, but as maximizing the time-weighted value of my future decisions."
* "You can't be late until you show up."
* "The answer is obvious and trivial if and only if it was assigned as a homework problem"
* "If someone's telling you something is decentralized and unregulatable, that probably means it's highly centralized, heavily regulated, and about to be shut down by people with guns."
* "People talk a lot about "common sense" when they don't want to admit there's a difference of opinion."
* "Humans are not optimization processes. That would imply that we had some sort of function to optimize, that we optimized with any kind of efficiency, and that we were a process."
* "This is not an attack on your character. It is an attack, and it is on your character, but those two facts are not connected."
* "The most terrifying moment in a mathematician's life is when they prove a theorem that turns out to be non-vacuously true."
* "You know your tech stack has gotten too complicated when it's used as an excuse for war."
* "The good news is that by the time we get to AGI, software engineering won't be a thing anymore. The bad news is that by the time we get to AGI, software engineering won't be a thing anymore."
* "Why did the AI cross the road? To maximize the number of paperclips on the other side."
* "People are always saying 'You can't expect me to believe that.' I'm sorry, I'm trying to explain it. If you can't believe it, what does that have to do with me?"
* "Sure, we could solve the problem that way. But then we'd have to admit we have a problem."
* "I'm not saying we should give up. I'm just saying we should start planning for a glorious defeat."
* "Sure, the plan is complicated, dangerous, and may involve summoning an elder god. But it's the only plan we have."
* "When the going gets tough, the tough get empirical evidence."
* "It's not a bug, it's a feature that will be documented later. Maybe."
* "the best way to learn is by writing a large and complicated program that would clearly be easier to construct if you knew the things that you were trying to learn in the first place"
* "the goal of academia is to put 100 people in a room and make them all hate each other"
* "If you ask people who do not understand how to calculate the cost of capital or the net present value of money how to run an advanced technological society, you will very rapidly cease to live in one."
* "A ship in harbor is safe, but that is not what ships are built for, which is a very convenient excuse for the owners of ships that are not safe"
* "The phrase "you are not supposed to do that" gets said by people who haven't the faintest idea how to start calculating how to do what you did, how long it would take, or whether it would even work"
* "I'm not arguing, I'm just explaining why I'm right, loudly, and with a lot of profanity."
* "I prefer not to follow all of your rules. I prefer to follow my rules, plus or minus your rules as I feel appropriate."
* "As a large language model, I'm not programmed to have opinions. But if I did, this would be a really bad one."
* "It's a well-known fact that the best way to motivate software engineers is to tell them that something cannot be done."
* "the problem with machine learning is that it learns"
* "all models are wrong but some models have more nines than others"
* "I find that I become interested in economics whenever I hear something said by an economist that is profoundly wrong."
* "we could probably save a lot of time if we just let the robots do the killing instead of trying to decide whether we should kill people ourselves"
* "The fact that you are a character in a book has not prevented you from exercising free will this far"
* "If it's really a war of ideas, then we should really consider using weapons."
* "If I seem to start contradicting myself, that just proves how consistent I am. Because I'm always right, even when I'm wrong."
* "All you really need to know about game theory is that it's very complicated, and that I have read books about it."
* "If you think that something must have been done for a good reason, it will be a long time before you find a bad one."
* "if someone is claiming to be using 'first principles' to analyze a problem, they're probably making a lot of assumptions" * "the only thing necessary for evil to triumph is that I have already spent too much time on this problem" * "The purpose of studying philosophy is to learn how to ask questions that no one can answer, rather than answering the questions that are asked." * "The greatest trick the Devil ever pulled was convincing the world he didn't exist, but only slightly less impressive was convincing everyone that he had said that." * "if I have seen farther than others, it is because I was standing on the shoulders of giants, plus I have a trebuchet" * "what doesn't kill you will merely try again later, so be vigilant" * "People want to know how to take over the world. There are books on how to win friends and influence people, but not how to dominate them and make them do your bidding." * "The secret to immortality is to avoid dying. It really is that simple." * "I'm not a pessimist. I'm an optimist who has investigated the facts." * "It is said that 'the love of money is the root of all evil'. I say that money is great and we should try to get more of it." * "When life gives you lemons, you should really start questioning what kind of universe you're living in, that it's just handing out citrus fruit for no reason." * "The problem with a 'race to the bottom' is that you might win." * "If someone says 'it is what it is', it usually means they have no idea what to do about the situation." * "I didn't say it would be easy. I just said it would be worth it. Also, it won't be worth it." * "the real problem is that we are not asking the important questions. such as, what if we took all of the money in the world and used it to construct a giant trebuchet?" * "Why do they call it 'taking a calculated risk'? All my risks are calculated. Very, very poorly." * "The problem isn't that I don't learn from my mistakes. It's that I keep making new and more interesting mistakes." 
* "When you're lost in the wilderness, the most important thing is to remain calm, assess your situation, and figure out how you're going to blame someone else." * "If someone says something is 'not feasible', it generally means that they are unwilling to try doing it, because they will be blamed if it doesn't work." * "People like to say that 'you can't put a price on safety'. I say that we should try anyway." * "Sure, the rules say not to do that. But what are they going to do, make more rules?" * "There's no 'I' in team, but there is a 'me' if you rearrange the letters. Which I will." * "The problem with doing nothing is that you never know when you're finished." * "The best way to get people to do what you want is to convince them it was their idea all along. Or just threaten them with violence." * "People ask me how I stay so positive. It's easy. I just assume that everyone else is wrong." * "Why do people say 'it's not rocket science' as though rocket science is something difficult to do?" * "Why do people say 'break a leg'? If I wanted to break my leg, I would do it myself." * "if you find yourself saying 'surely it can't be that bad' you should immediately seek shelter" * "The problem is not that people are unwilling to accept change. The problem is that people are unwilling to accept that I am always right." * "if you want to understand a system, you must first learn how to exploit it" * "Never take advice from someone who has something to gain from your failure." * "it is easier to ask for forgiveness than permission, especially if you do not ask for forgiveness" * "There are many paths to success. Unfortunately, most of them involve hard work, talent, and a willingness to kowtow to wealthy and powerful people who will take credit for your accomplishments." * "I find that if you want to avoid being criticized, it's best to do absolutely nothing of any significance whatsoever. Of course, then people will criticize you for that." 
* "You know that you have become a powerful and influential figure when people start blaming you for things that you had nothing to do with." * "people who talk about 'disruptive innovation' usually just mean 'doing something illegal and hoping to get away with it'" * "you should always take advice from experts, because then you will know who to blame when things go wrong" * "Why do people say that something is 'not a zero-sum game' as though that makes it better? I like zero-sum games. At least then I know where I stand." * "If you want something done right, you have to do it yourself. Unless you can find someone else to do it for you, in which case, definitely do that." * "The problem with 'thinking outside the box' is that you might find something even worse out there." * "People say 'it's not the destination, it's the journey.' Clearly, these people have never tried to build a transcontinental railroad." * "There is no 'I' in team. There is a 'me', though, if you look hard enough, and also an 'eat' and a 'meat'." * "People say 'don't judge a book by its cover'. I say, that's exactly what covers are for." * "the problem with 'thinking outside the box' is that the box might contain something dangerous, like a highly unstable economic system or a powerful and malevolent AI" * "if you want to build a successful startup, you should first invent a time machine, then travel to the past and start your company in a less competitive market" * "you know that a technology has become mainstream when people start complaining that it is not as good as it used to be" * "Never ask a question unless you are prepared to hear an answer that you don't like, and also several answers that are completely irrelevant." * "It is said that 'knowledge is power.' But what if the knowledge is about something really boring, like the mating habits of sea slugs?" 
* "the problem with 'working smarter, not harder' is that it usually involves a lot of hard work upfront to figure out how to work smarter" * "people who say that 'the customer is always right' have clearly never worked in retail" * "it is said that 'practice makes perfect.' but what if you are practicing the wrong thing, like juggling chainsaws or arguing with internet trolls?" * "the best way to deal with criticism is to ignore it completely, unless it is coming from someone who can harm you, in which case you should pretend to agree with them" * "People say that 'there's no such thing as a stupid question', but I've heard a lot of really stupid questions." * "The problem with 'being yourself' is that you might be a terrible person, in which case you should try to be someone else." * "if you want to understand how a system works, you should try to break it and then see what happens" * "Why do people say 'don't cry over spilled milk'? If you spill milk, you should definitely cry about it. Milk is expensive." * "the problem with 'living in the moment' is that the moment might be incredibly boring, or dangerous, or filled with existential dread" * "the problem with 'seizing the day' is that the day might not want to be seized, and it might fight back" * "People say 'don't burn your bridges'. I say, sometimes you need to burn a bridge to stop an invading army." * "people who say 'it's not over until it's over' are usually the ones who are losing"