Episode Transcript
[00:00:00] The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone.
[00:00:16] This is AI Unchained.
[00:00:27] What is up guys? Welcome back to AI Unchained. I am Guy Swann, and this is where we explore the tools of AI for a free and open source future.
[00:00:40] And this episode is brought to you by Coinkite, the makers of the Coldcard hardware wallet for keeping your Bitcoin safe while staying easy to use. You can have your wallet on your phone and still just tap to pay with your Coldcard hardware wallet, so you never have to worry about your keys being on your phone. You don't have to worry about malware or hackers. You know your Bitcoin is safe. And if you have any amount of Bitcoin and you have not thought about this problem, or you have it left on an exchange, you need to get a Coldcard, and you can get a discount code with the name of my other podcast, Bitcoin Audible. You will find the link right down in the description of this podcast.
[00:01:25] So today I wanted to do my thorough response and breakdown of everything that we have discussed with Situational Awareness, Leopold Aschenbrenner's piece: where I think he is right, where he has changed my mind in the way that I think about these things, and then also where I think he is still wrong in his approach, or in the framing that he has put it in. And shockingly, I never would have thought this, because I've always had a very, very low opinion of Facebook and Mark Zuckerberg.
[00:02:05] But shockingly, Mark Zuckerberg himself has an article that I think offers an incredibly thoughtful and serious perspective on how we should actually be thinking about this differently. I originally didn't want to do it as a read on the show, but it's actually pretty short, so I may still not read all of it, but there's a ton of it that I think is very useful, and maybe I'll do it as a read alongside this thing. So we'll just kind of do a dual episode and I won't take up a whole week. But it's really good, and I think it's worth reading or listening to in full if you have a few minutes; the read would probably only be like 12 minutes, it's really just not that bad. It has a lot of really great points in it, especially towards the end, which is one of the areas that I want to hit specifically.
[00:03:08] It seems to be directly addressing Leopold's proposal that this is a weapon and a national security disaster, that this is explicitly a national security issue, that we should treat it like the creation of the atomic bomb, that we need to desperately lock down those weights and be as secretive as possible. And I think he makes a very, very solid argument, one extremely similar to the argument that I would make, as to why I think that's the exact wrong approach.
[00:03:41] And personally, I would even use some of Leopold's own points against him in making that argument and holding that stance, because I think he makes it very clear that it's not going to work. I think he is heavily discounting, or ignoring, a very, very serious consequence of the course he proposes.
[00:04:06] I actually think he has some of the best points to argue against his own stance, his own proposition.
[00:04:13] And I think the thing that gets in his way most, as to what the actual practical solution is, appears to be something more fundamental to his worldview: he holds an idea about these institutions that I just genuinely don't think is true.
[00:04:29] So first let's just hit the overall idea.
[00:04:34] And I think it can be put as simply as possible in understanding, you know, what is situational awareness about and what is it trying to frame.
[00:04:46] And I think he actually sums it up. This is the quote that I think captures what he is getting at and how I think it should change the way we are thinking, and it's also the thing he has expressly convinced me of: I believe we are racing through the OOMs, or orders of magnitude, and it requires no esoteric beliefs, merely trend extrapolation of straight lines, to take the possibility of AGI, true artificial general intelligence, by 2027 extremely seriously.
[00:05:26] That's what this is about.
[00:05:28] That is the whole framing: a lot of people have their heads in the sand about where this is going, what kind of effect it will have in the coming years, and just how fast this is and will continue to progress. And there's something about it that actually makes sense from a technological standpoint, when we think about networking and the layering of intelligence and software and hardware. Because one of the things he points out specifically is that there are several layers of things improving at once, which will make these improvements make Moore's Law look ridiculous. It's very much the natural extrapolation of higher orders of complexity, because we still have a degree of Moore's Law: the hardware layer is still expanding and improving rapidly, as it always has with chip designs, the nanometer scale and the density of computation in these chips. We've now got unified memory, and we're increasing speed not even by advancing the chip itself but by changing the architecture of how the different pieces of hardware actually communicate with each other and the bandwidth between them. So many different things are being shifted, because Moore's Law in the literal sense of shrinking is running out of room. When you're talking about two and three nanometer chips, it's very hard to get down to one nanometer, to half a nanometer, to a tenth or a hundredth of a nanometer. These sorts of things are reaching almost physical limits.
[00:07:26] But because we have had Moore's Law, because we have had so much advancement just in the size of the transistors, of the gates in the chip itself, making them that much more powerful, there are so many areas around the design and the architecture of these things that still have huge improvements to be made now that we are reaching the limits of raw shrinkage, and that is what has kept these trends entirely in line.
[00:07:54] But then on top of that you have layers of firmware, operating systems, and software built on top of the hardware. And now, I feel like I learn about new programming languages, new platforms, new development kits, and new libraries constantly. There are so many things stacked on top of so many things now that it is literally an ocean, and I mean that not just to suggest that it's big, but to suggest the sheer scope: it is an ocean in the sense that there is no way for anyone to even have a grasp on everything that is going on. We're reaching a point where one or two people can create entire libraries and entire new platforms that are forks of other platforms. And especially with the speed and growth of a lot of these open source projects, stuff is moving so fast. Large language models are new; we're still only one to two years into this really being in the public psyche and out there in the open, and we already have millions of different models, millions of forks and fine-tunes of the open source ones, millions of LoRAs and embeddings and everything else for Stable Diffusion models, for image generation models, for video generation models. And the better each of these gets, the more accelerated the forking and fine-tuning becomes. What does LoRA stand for again? It's not about resolution; it's low-rank adaptation, a way of changing and altering these models (there's a tiny sketch of the idea right after this paragraph). Because as the base models get better at being directly applicable, it makes more sense to fine-tune. When you have a base model that's just kind of so-so and you can't really find a good application for it, fine-tuning it to be good at one application doesn't really have a whole lot of benefit; there's not going to be much excitement around the tool, because whatever that fine-tune is will still generally be so-so. You need a better base model. And that's also why I think video has lagged so far behind: we haven't had good open source video models, and honestly we don't even have access to many really great closed source models for video. It's still clearly AI video, it's a mess, but it has come leaps and bounds from what it was a year or two ago. Massive leaps, to the point that Runway Gen-3, the one that's actually in beta, is getting really, really interesting. And then there's Sora, the one from OpenAI, which is still not accessible to the public. It's been something like three months, literally a quarter of a year, since I asked for access, and they announced it as if it was right around the corner. A lot of these releases are really quick: boom, we announce it, we're sharing it with the public, and then boom, a week later it's accessible or usable in some application. Sora, no idea why, has had literally none of that. It's just been in the dark this whole time, with private, closed access and that's it. But when these things are released and they actually have a really strong applicable use case, the breadth of forks and fine-tunes just explodes, and you can see that phenomenon playing out.
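To make the LoRA idea concrete: low-rank adaptation freezes the big weight matrix and learns two tiny matrices whose product is the adjustment. Here's a minimal sketch in plain NumPy; the dimensions are made up purely for illustration:

```python
import numpy as np

# Toy dimensions, purely illustrative: a 4096x4096 layer vs. a rank-8 adapter.
d, r = 4096, 8

W = np.random.randn(d, d).astype(np.float32)          # frozen base weight (~16.8M params)
A = np.random.randn(r, d).astype(np.float32) * 0.01   # trainable, r x d
B = np.zeros((d, r), dtype=np.float32)                 # trainable, d x r; starts at zero so
                                                       # the base behavior is untouched at first

def forward(x):
    # Base output plus the low-rank adjustment B @ A.
    # Only A and B (2*d*r = ~65k params, ~0.4% of W) get updated during fine-tuning.
    return x @ W.T + x @ (B @ A).T

x = np.random.randn(1, d).astype(np.float32)
print(forward(x).shape)  # (1, 4096)
```

That's the whole trick: you ship a few megabytes of adapter instead of a whole new copy of the model, which is why forks and fine-tunes multiply so quickly once a good base model exists.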
It also helps that the tools around actually doing the fine-tuning get more mature, so you have a double feedback loop. Another great example, actually, is a tool that I found recently.
[00:11:57] What was that tool? Oh man.
[00:12:00] Oh, I hate it when I mentioned something. All right, I'm gonna have to look this up.
[00:12:04] AI, when you read this transcript, make a note of this: I need to look up what this tool was, because it is an open source tool and I do not remember the name of it. But it's something that allows you to prep your data, to prep PDF files, to prep your notes. You basically give it all of this information, you give it access to certain things, and it will read it, break it down, and essentially block out the information so that it's context specific and done in chunks that are very useful for preparing all of your data to be used by an LLM, to be trained with, to be used for fine-tuning, and to be used for embeddings so that you can search it as well. And it's a great example of one of those order-of-magnitude improvements. We're not even talking about the models themselves; we're talking about the tools around the models, the ability to fine-tune a model and have a fork of a model. It's a perfect example of a tool where, without it, I just wouldn't make a fine-tuned model, or at least not an open source one. I wouldn't do it without ChatGPT's interface or something like that, even though I've kind of switched over to Claude, and Claude doesn't really have that functionality yet, or at least I don't believe anthropic.com has it built in.
[00:13:37] But the good thing is you can use an API, and I can probably actually do this with the very open source tool that I'm mentioning and whose name I don't remember, which doesn't make it much use to you yet. I will be sure to tag it and put a link in the show notes when I find it again. I definitely saved it somewhere, probably in my Telegram chat.
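Since I don't have the tool's name handy, here's a rough sketch of the kind of thing a tool like that does at its simplest: naive, paragraph-aware chunking with a little overlap so each block fits an embedding or fine-tuning context window. The sizes and the file name are arbitrary placeholders:

```python
def chunk_text(text, max_chars=1500, overlap=200):
    """Split a document into overlapping, roughly paragraph-aligned chunks."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        if len(current) + len(p) + 2 <= max_chars:
            current = (current + "\n\n" + p).strip()
        else:
            if current:
                chunks.append(current)
            # Carry a little overlap forward so context isn't lost at the boundary.
            current = (current[-overlap:] + "\n\n" + p).strip() if current else p
    if current:
        chunks.append(current)
    return chunks

# Usage: feed each chunk to an embedding model or a fine-tuning dataset builder.
# "my_notes.txt" is just a stand-in for whatever notes or PDFs you've extracted.
doc = open("my_notes.txt").read()
for i, c in enumerate(chunk_text(doc)):
    print(i, len(c))
```

Real tools do smarter things (respecting headings, tables, token counts instead of characters), but the basic job is exactly this: turn a pile of documents into model-sized pieces.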
[00:13:57] But basically all of this stuff stacks on top of itself; all of this stuff is a feedback loop: the algorithmic improvements, the hardware improvements, the improvements of the tools and methodologies around it, the platform improvements, the coding and library improvements. Another one is Mojo, which we've talked about on the show. It's been a little while since I've talked about it, but it basically gives a bridge directly between Python code and the hardware that you are using, for literally 10x, 20x, and in certain cases, if you go all the way down to the compiler level, even 100x improvements in efficiency, speed, and computation on some of the stuff being written in Python. These sorts of things just go so far, and that's purely on the software side. We're talking about multiplying that with the hardware improvements and the algorithmic improvements. So it makes sense that improvement would be happening this quickly, that you might get an order of magnitude or more, 1.5 orders of magnitude per year on average, at least over the past decade. And believing that this is a possibility, that this is on the table, is simply to continue extrapolating that straight line on a logarithmic chart; it's to believe that the trend line we have been seeing for the last 10 years, which honestly is an extension of the trend line I think we've seen for the last 40 to 50 years of technology in general, will continue to hold true.
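To put rough numbers on the 1.5-orders-of-magnitude-per-year claim, the compounding arithmetic is trivial but worth seeing:

```python
# 1 OOM = 10x. At ~1.5 OOMs/year of effective compute growth
# (algorithms * hardware * scale multiplied together):
ooms_per_year = 1.5
for years in (1, 2, 3, 4):
    growth = 10 ** (ooms_per_year * years)
    print(f"{years} year(s): ~{growth:,.0f}x effective compute")
# 1 year(s): ~32x
# 2 year(s): ~1,000x
# 3 year(s): ~31,623x
# 4 year(s): ~1,000,000x
```

A million-fold increase in effective compute over four years is the kind of number that makes "just extrapolate the straight line" feel a lot less boring than it sounds.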
[00:15:57] And this is what has convinced me of this argument.
[00:16:01] It's really hard to deny the degree and reliability of past improvements, even when we couldn't specifically put our finger on them. You know, when I go back to The Price of Tomorrow, that's why I really loved that Jeff Booth was the very first episode of this show. And if you haven't listened to that one, this would be a perfect time to go back and listen to it again.
[00:16:32] Because Jeff Booth, specifically in The Price of Tomorrow, his book, and in his whole theory and concept around all of this, is basically doing just that. He has said, and has gone over, that we don't know exactly what will happen; we can't pinpoint exactly what it is that will cause the trend line to stay on trend. Leopold says this as well: we don't know whether there will be some fundamental algorithmic breakthrough, or some additional layer of algorithmic breakthrough. Another great example is the first read that we did on this show, a gentle introduction to large language models and how they work.
[00:17:14] We talked about how one of the big things was the attention score: not just how every word relates to every other word, but a way to weight things based on the importance of each word in the sentence. And Leopold actually points out in this piece, I can't remember exactly where, but it stuck out to me while reading it, that we spend the exact same amount of computation to predict the next article, to predict the word "the" or "a" or "an,"
[00:17:51] As much as we spend to predict that the next word might be king or extrapolate or interesting or esoteric. There's a degree of complexity in the language and in the meaning of the words where, when a human is talking, all of the articles are just background noise. They're just kind of shaping the direction of the sentence, but they don't carry any of the meaning. So clearly the human brain does not put as much computation into them; we spend essentially no computation just putting a "the" in. It's one of those things where the grooves in our brain are so deeply run through and ingrained that the absence of it is stark and shocking when it's heard, but when you read, you could take out a bunch of those words and your brain wouldn't even recognize it. You wouldn't even see it. There are a bunch of those interesting little tricks where, depending on where you put it and how you screw things up in a sentence, you can still just read it normally. There are a couple of different examples. One is doubled articles: there's one little trick of the mind where you put "the" at the end of one line and then again at the beginning of the next line. It works especially with those filler words. Obviously you need the word "the," but with words that aren't actually critical to the meaning, if you doubled up on a word like flower or esoteric, you would see it twice. Something about those filler words that just bridge the gaps between what is important in the context of a sentence makes them kind of disappear, so you can double them up, and especially if you put them on different lines, you will read it without having any idea that that's happening. And then another great example is how we read and the placing of letters. Have you ever read that paragraph where every single word is spelled wrong, but each word has all the exact same letters, just scrambled, as long as the first letter and the last letter stay in the same place? Take "racing": as long as the R and the G are at the beginning and the end of the word, it doesn't matter where the A, the C, the I, and the N fall in the middle. You can read it as fast as you can read anything, with everything spelled like complete nonsense, as long as the first and last letters are in the right place. And Leopold specifically brings this up: both of those are great examples of the kind of algorithmic improvements where the algorithm of a large language model could actually mirror the clear efficiency gains that we have had.
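A quick way to see the same-compute-for-every-token point: in a transformer, the cost of predicting the next token is fixed by the model's size, not by which token it turns out to be. A toy next-token step, with made-up dimensions:

```python
import numpy as np

np.random.seed(0)
d_model, vocab = 512, 50_000
hidden = np.random.randn(d_model)          # the model's state at this position
W_out = np.random.randn(vocab, d_model)    # output projection over the whole vocabulary

# One full matrix-vector product over the entire vocabulary happens no matter what:
# roughly 2 * vocab * d_model multiply-adds for every single predicted token.
logits = W_out @ hidden
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("most likely token id:", probs.argmax())

# Whether that token turns out to be "the" or "esoteric", the work above was identical.
print(f"rough FLOPs for this step: ~{2 * vocab * d_model:,}")  # ~51,200,000
```

That flat cost per token is exactly the kind of obvious inefficiency, relative to how a brain skims over filler words, that leaves room for big algorithmic gains.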
[00:20:48] Think about how fascinating nature is, in that the mind is able to push all of that into the background so that our ability to process a paragraph doesn't require processing each individual letter. We have slowly upgraded. Think about it: what we've done from a mental standpoint is put models on top of models. You have to learn it letter by letter when you're a baby, when you're three years old, five years old, whatever, right? You learn A is A, B is B, C sounds like K or S. You start to associate these things. And then when you start to read, you're thinking extremely hard, you're using a massive amount of your brain as you work your way through it.
[00:21:42] You're picking at every single little dot in this thing, every single character, and you're starting to build a model that pulls some sort of meaning out of it, that pulls the right word and the right auditory meaning out of all of these symbols, and then attaches that auditory meaning to the word and to what it means in real life. What does the symbol of the word mean in the context of what is actually going on? And it's extremely difficult. You have to fight through it, and you're doing a massive amount of computation. But then slowly we build these weights in the brain; we slowly build up these attachments and associations.
[00:22:25] And then we start building another model on top of that, now that we have some base layer, some already-established understanding of the word racing or any other word or context that's going on in the sentence.
[00:22:42] Well, then we start attaching those to other words and other ideas. So we're kind of upgrading, and we continue to do this. And then as we read faster and faster, we basically pull out to a bigger layer, where we start looking at whole sentences and we just kind of know what that sentence ought to say, so we can read, or skim, through just a couple of words out of, you know, 15.
[00:23:12] And then our brain is able to have that half snapshot of what the rest of the sentence looks like. Even if it's all spelled wrong, as long as things are roughly the right size and you have certain letters in the right place, your brain can see the sentence as it's supposed to be, because now you've built a model for the entire structure of a sentence, and the entire structure of a paragraph, and then a page, and then a story.
[00:23:37] And it's just layers upon layers upon layers. And it makes natural sense that we would do this same thing with computers and with all of these model weights for large language models. It means we're still at the letter-by-letter stage, verbally pulling out the linguistic meaning, asking what these actual words are, because the word "the" is taking the exact same amount of computation as the word "esoteric," and each letter is still taking the same amount of computation as every other letter, even though there are obviously massive improvements to be made with deeper model understanding, or higher-layer model understanding might be the better way to put it, of the language in general, of the whole context and structure of language, of meaning, of its attachment to reality. Clearly we haven't sorted out so many of these parts of the algorithms. We really have a very basic structural algorithm that we've just scaled massively with enormous amounts of compute.
[00:24:45] And again, this is just on the algorithmic front.
[00:24:49] You start to have some sort of improvement there, and then you multiply that with the clear scale of efficiency improvements in the hardware, and a couple of orders of magnitude a year starts to look like, yeah, as he says, quote unquote, we are racing through these things.
[00:25:14] And then the other big thing that he pointed out in this piece, which I had not realized because I had been so focused on a really fun paper and a fun idea, was this. I mean, the paper wasn't that fun, it was super dry, but the concept was really fun to geek out on: the idea of AI dementia, of language model dementia. They also did this with image models, but the idea is trying to use the output of a model in order to retrain and improve that model itself. So let's say it produces 100 images of a cat, and then you get that model to determine which of those outputs are good pictures of cats; the model can actually determine which ones are the best. So it could take the 10 best out of that 100 and then go back and retrain on them. This is the idea of an AI training itself. But the problem with that is that you are taking very general weights and retraining them on something very specific. What actually happens is that you don't get additional weights, and you don't improve the fundamental weights the model was given; what you do is lean it specifically toward making good pictures of cats. And it gets worse at other tasks, at creating images of other things. Suddenly dogs just look really crappy. So now you get it to produce 100 dogs, take the 10 best, and retrain, and now it's really bad at making octopuses. It just continues to degrade progressively. It's like government trying to solve a problem: it creates two problems while it quote unquote solves one, and then you need more government to solve those two problems, and now it creates four additional problems and has only solved two, et cetera, et cetera.
[00:27:21] This is very much what happens when an LLM or an image model tries to train itself on its own output. All it does is manipulate its own weights; it can't have a net positive impact. So my thinking at the time was, okay, we're not going to hit this point where AI trains itself.
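That failure mode is easy to reproduce even in a toy setting: generate outputs, keep only the ones the model itself scores as best, retrain on them, and repeat. The distribution narrows every round, which is the statistical analog of getting great at cats and forgetting dogs. A minimal sketch, with a one-dimensional Gaussian standing in for the generative model:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.0, 1.0   # the "model": a simple Gaussian generator

for generation in range(5):
    samples = rng.normal(mu, sigma, size=10_000)     # generate outputs
    scores = -np.abs(samples - mu)                   # the model rates its own outputs
    best = samples[np.argsort(scores)[-1_000:]]      # keep only the top 10% (the "10 best of 100" idea)
    mu, sigma = best.mean(), best.std()              # "retrain" on the filtered outputs
    print(f"generation {generation}: sigma = {sigma:.4f}")  # the spread collapses each round
```

The variance craters within a few generations: the model keeps reinforcing its own favorite outputs and throws away the tails, which is exactly the dementia effect the paper described.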
[00:27:41] But what I had not realized, and what I think Leopold details extremely well in this piece, or at least what finally clicked when I read it, is something I think I had been exposed to a couple of times before, but in a very rudimentary or kind of rushed way, environment, paper, whatever.
[00:28:04] And he hit it a couple of different times in this piece, trying to make clear what he was arguing: that AI scales up with compute.
[00:28:19] So here's the analogy that I've used and the way I've tried to explain it, and let me know if this works and seems clear to you guys, because I want to make sure I can explain this well for a general audience that may not listen to the show all the time. You kind of have this base recipe, this algorithm, for what a model is going to be, or for how you would train a model. And depending on how much compute you give it, you can train completely different skill levels of model.
[00:28:56] Kind of like how, just for analogy, you can train a person on something for a week, you can train them for a month, you can train them for a year, or you can train them for 10 years.
[00:29:10] And at each one of those stages they will be the same person, with the same raw intelligence, but after 10 years that person will be orders of magnitude more capable and more skilled at whatever it is they are trying to learn than the version of them that was trained for a week. This same kind of phenomenon shows up in AI. Sure, you can have much better algorithms, and sure, you can have much better data, that is all true, but you can also get a better model just by having more computation, just by dedicating more resources to it. So let's say you have a simple algorithm that produces a pretty dang good large language model, and you give it X amount of compute and it creates a GPT-2 level LLM.
[00:30:06] It's decent at most tasks, but there are a lot of things it can't do. You're not going to use it to write a paper for you. It's rough around the edges.
[00:30:18] Then you give it 10x the compute and it makes GPT-3. You give it 100x the compute and it makes GPT-4.
[00:30:27] Now, with the exact same algorithm, the exact same ingredients, you have a GPT-2 model and a GPT-4 model. The GPT-4 model is vastly better. The GPT-4 model you can have a conversation with; it can clearly assess whether or not something is written well, it can reword it, it can draft your emails.
[00:30:49] You might actually want it to write a paper for you. Especially if you're in high school. It's gonna probably do better than most high school students.
[00:30:56] Here's the thing.
[00:30:57] That GPT-4 model, built with the exact same recipe as the GPT-2, can determine how that GPT-2 model compares to other GPT-2 level models.
[00:31:12] Meaning, if you train the LLM on all of the various algorithms and methodologies for literally making better recipes for models. The base thing, the recipe, is very small, right? We're not talking about the weights and we're not talking about the computation. We're talking about the recipe that we start with to feed everything through. We're talking about just a couple of pages of code.
[00:31:40] I mean, I don't know the exact size, but we're talking about an extremely small application that is built around feeding tons of information through it in order to build the weights.
[00:31:52] So minor tweaks in this may make massive, fundamental differences at the GPT-3, GPT-4, GPT-5 level. Which means GPT-4 can tweak the algorithm, which again is not much code at all; GPT-4 can obviously assess the whole thing, it has the context window for it, and then you give it X amount of compute and it makes a GPT-2 level model. And then it can tweak the algorithm again, give it X amount of compute, and create another GPT-2 model. And then it can do that again and again.
[00:32:30] And maybe it doesn't even understand what is happening with the algorithmic changes. Maybe a lot of it is just kind of random, just testing stuff. This is the process of evolution. You're in a coding environment, you can do this hundreds of millions of times, especially if you have the compute and you're specifically targeting something that's small.
[00:32:51] Well, now GPT-4 can look at these 100 million GPT-2s with tweaks to the underlying algorithm, the underlying recipe, which, again, is the very recipe that created that GPT-4. It's looking at its own recipe, its own DNA, essentially.
[00:33:11] And now it can just start benchmarking all of these GPT-2s against each other and see which GPT-2 is the closest to its GPT-4, or gives the most efficient, most accurate, or just most capable output.
[00:33:34] And now let's say it finds that one of those GPT-2s is basically two times or five times as good as all of the other GPT-2s, that it just blows its own GPT-2 baseline away.
[00:33:49] Well, then you just give it 100x the compute.
[00:33:52] Now that you've found the best small model, you give that recipe the exact same amount of compute that created the GPT-4, and boom, now you have GPT-5. Now you have something that's 5x, 10x better, who knows. It might even scale superlinearly with the compute; it might not just be linear with how much compute you give it, it might actually get better the bigger it gets. But you have GPT-5. GPT-4 was just able to completely automatically.
[00:34:25] Completely autonomously is the word I'm looking for.
[00:34:30] Retrain and tweak the algorithm to create a GPT-5 that could potentially be orders of magnitude better than itself, without any human input. That's a very, very real possibility. When Leopold explained that, it made me realize that because you have this handful of different layers of attributes, it actually is entirely possible for an AI to train itself. But that's because the AI isn't actually training itself in the naive sense; it isn't retraining its own GPT-4, and certainly not on its own output. What it is able to do is reproduce its own recipe in a vastly more capable form, because of the simple way compute scales in the concept of the large language model. And this is exactly why Leopold says in this piece that the last thing we have to figure out how to teach AI to do is AI research.
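Here's roughly the shape of that loop, sketched in Python. Every function here is a made-up stand-in, not anything from the piece; the point is just the structure: search over recipes cheaply at small scale, score the results automatically, then pour the big compute into the winner.

```python
import random

random.seed(0)

def mutate(recipe):
    # Random tweak to the training "recipe" (hyperparameters, data mix, architecture...).
    # In the framing above, the GPT-4-level model proposes these; blind mutation also works.
    r = dict(recipe)
    r["lr"] *= random.choice([0.5, 0.8, 1.25, 2.0])
    r["depth"] = max(1, r["depth"] + random.choice([-1, 0, 1]))
    return r

def train_and_benchmark(recipe, compute):
    # Stand-in for "train a model with this recipe at this compute budget and score it."
    # The toy score just rewards a sweet spot; a real run would be an actual training job.
    quality = -abs(recipe["lr"] - 1e-3) * 1e3 - abs(recipe["depth"] - 32) * 0.05
    return quality + 0.1 * compute

base = {"lr": 3e-4, "depth": 24}
SMALL, LARGE = 1, 100   # relative compute budgets: "GPT-2 scale" vs "GPT-4 scale"

# 1) Search cheaply: thousands of small runs, automatically scored, no human in the loop.
candidates = [mutate(base) for _ in range(10_000)]
best_recipe = max(candidates, key=lambda r: train_and_benchmark(r, SMALL))

# 2) Scale up: take the winning small-scale recipe and give it the full budget.
print("best small-scale recipe:", best_recipe)
print("score at full compute:", train_and_benchmark(best_recipe, LARGE))
```

The whole trick is that step 1 is cheap enough to repeat endlessly, and step 2 is where the new capability actually appears.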
[00:35:42] Because if it can simply make AI research that much better, it can essentially birth AGI. To some degree we always act like it's some big moment where there's some shift and suddenly it's artificial general intelligence, but I don't think that's the case. I think it will just kind of continue to improve incrementally, and at some point we'll just kind of be like, okay, well, we basically have it; artificial general intelligence is just kind of here. But I guess the big moment will be when a single large language model is able to completely autonomously train a better language model by improving its own recipe and creating the next version. That's the last thing that we have to do. Outside of that, it's just about giving it compute. And that's when we have an intelligence explosion. That's when you have the birth of superintelligence. And I think the speed at which everything will accelerate at that point will just be difficult to comprehend.
[00:36:47] And it's crazy how much this all aligns with Jeff Booth's thesis and his thinking on this. And again, for the people who say, oh, we'll never get AGI, that's silly, that's ridiculous, which has been me on this exact show, actually, I would now say that the trend is going to keep going.
[00:37:15] You know, go back to 2019, 2020.
[00:37:20] Back then it looked like the technological trend was not going to be able to keep accelerating. It's like, okay, what could we explicitly point to that would cause an explosion in the ability to write new applications and variations of platforms and coding languages and all of these things? What would allow that to just explode? Because it takes a long time for people to learn to code, and that seems like a very fundamental barrier to that level of improvement and to what would be a necessary change in order to.
[00:37:59] In order to actually continue, and even accelerate, the pace of progress and the pace of software. You think, oh, well, it's going to take 20 years to train all of the coders that would be needed to actually keep pace with that. And even among people in AI, how many could have literally predicted and said, oh, well, in about two years we're going to have large language models, we're going to have software where people can literally just say, I want an app that does this.
[00:38:29] And then it will be able to produce code for you. It will just be able to write that code based on what you said you wanted.
[00:38:38] I've always found that, because the crazy things that have just occurred become normalized very, very quickly, it's worth going back and trying to make that prediction forward again.
[00:38:55] Because I think had anybody said that in 2019 or 2020, that would have sounded ridiculous.
[00:39:03] Like, that would have sounded ridiculous, especially on that timescale. Imagine saying: two years from now, three years from now, in 2024, Guy is gonna have a podcast about AI, and specifically a series about all of these little apps that I'm building with AI, and I still didn't learn to code, and I called it Devs Who Can't Code.
[00:39:26] No way that's gonna happen. No way that's gonna happen. I would not have believed that. That would have seemed like a stretch. I'd have been like, come on, man, you're Star Trekking it. We're 20 years away from that. Yet look where we are.
[00:39:40] So even if I don't know exactly what improvement, what new advancement, what new layer is going to exist that will keep this pace, I think it holds. And maybe it's not even quite artificial general intelligence in the way we may want to imagine it, or that we tend to naturally imagine the idea, because I also think we just don't know what that looks like. We really want to personify everything, because we don't have a good foundation for what intelligence is outside of the explicit context of humanity. And it's easy to get predictions wrong by projecting the wrong thing. Ray Kurzweil is a famous futurist, and he specifically projected out a bunch of different trends. But rather than being broad in his interpretation of how things would keep pace, he extrapolated things that were very application specific. He looked at, you know, transistors and things getting smaller, and then he said, by 2010 we'll be able to write an image directly to the retina, basically have a contact lens that has a computer in it. And the best we have, which is actually shockingly similar to the idea, is this giant Apple Vision Pro thing that sits on your face, and it's 14 years past when he said that was going to be possible.
[00:41:22] And I think one of the things that aggressively slowed down was the pace at which we could make things smaller, because we were literally reaching the physical, atomic, and thermodynamic limits of shrinking things. But we did keep up with all of the trends he talked about, and we did stay in line with the whole idea of Moore's Law and the progress and acceleration of technology, because we did it in a layer on top of it. There were just so many other things to develop that it wasn't specifically about making something smaller and then projecting things right onto the retina, et cetera. So I think it's very easy to misapply where the intelligence explosion, where these advancements, will take place. And there always could be some sort of limit. It could be that even though we've had a decade of this, we still run into a curve that starts to gradually taper, and maybe we still get some sort of productive and technological explosion, but it doesn't happen in exactly the same way or in exactly the same context. And just to pull on that thread a little more and think about it from the context of Bitcoin and Nostr and the Pear stack and all of these things that I cover on the other podcasts that I do: it could actually be decentralization that leads to the massive exponential multiplication of all of these different things, in variants rather than just in the size and capacity of a single model. Rather than thinking about a trillion-dollar cluster and one giant superintelligence, what we may actually be looking at is the explosion of not just millions of separate intelligences and separately fine-tuned, application-specific models, but literally billions, tens of billions of these things, 20 for every single human on Earth; models that are retraining themselves and retraining new versions of their own models for specific tasks in someone's specific life, in their own context, with all of that computation sourced in a massively decentralized fashion. Because think about what you could expect to see if you actually unleash a million new developers to build stuff, and you have LLMs that are accelerating in pace. The very first version of Copilot was just kind of crappy, and now we have Copilot++ and we have Cursor and we have Devin and all of these, quote unquote, software engineers that can brute force and then execute the code, run it, fix errors, read the error reports, and run it again, doing all of these things until it actually creates workable and even robust code for simple tasks, so that you can produce this evolutionary environment for making code and making applications, especially if you're doing it modularly. That's one thing I think is really important to consider: there is a degree of robustness that you can get by just having it execute something simple in a thousand different ways and a thousand different environments.
[00:44:49] And then you have one module that just freaking works, you know what I mean? It just always calculates something perfectly, or runs some sort of function exactly the way it's supposed to run. And it booted up a VM in every single version of Linux, every single version of Fedora, ran it against every Bitcoin client from the first to the most recent, et cetera. You can have it beat this code against every possible environment, and adapt it however it needs to be adapted, so that that one module just runs in every single one of those environments, and you always know that if you run this module, you will get an output, and it will be useful, and it will be the correct answer to the problem. Something like the little loop sketched below.
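As a rough sketch of what beating one module against a bunch of environments could look like, assuming Docker is installed and the image names (which are hypothetical here) already contain the module's test dependencies:

```python
import subprocess

# Hypothetical test images; assume each already has the module's dependencies baked in.
IMAGES = [
    "myproject/test-env:ubuntu-22.04",
    "myproject/test-env:ubuntu-24.04",
    "myproject/test-env:fedora-40",
    "myproject/test-env:debian-bookworm",
]

def test_module_everywhere(module_dir):
    """Run the module's test suite inside each environment and collect any failures."""
    failures = {}
    for image in IMAGES:
        result = subprocess.run(
            ["docker", "run", "--rm",
             "-v", f"{module_dir}:/app", "-w", "/app",
             image, "python3", "-m", "pytest", "-q"],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            failures[image] = result.stdout + result.stderr
    return failures

# The agent loop: run everywhere, hand the failure logs back to the model,
# patch the module, and repeat until this dict comes back empty.
print(test_module_everywhere("./my_module"))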
[00:45:42] Then you do that for the next module, and the next one, and the next one. You do that for a hundred modules and a hundred different functions doing a hundred different specific tasks or computations. And then you have a model that comes and takes all of these brute-forced, hardened individual modules and puts them together into an application, into something that completes a bunch of different tasks for a bunch of different reasons. And you do the same thing: you put it on every distro, every version of Fedora, every version of Pop!_OS, every version of Ubuntu, every single environment, and you brute force it again and again and again until you have just an insanely hardened piece of software. So what happens when you start to have that capacity? Because we're really just opening the door on this thing. In another year, how much better is something like Devin or Copilot or all of these agentic sorts of models going to be? Especially at code, because code has a very clear environment and a very clear difference between, quote unquote, winning and losing. This is AlphaGo again, right? You give AlphaGo an environment where this supercomputer can easily know what it means to win or lose a game, the limits of the environment, what it means to be succeeding at the game and to be losing at the game. And so it can train itself, quote unquote, by just playing the game over and over and over again with itself until it knows practically every way to win and lose, and it can piece things out just from the sheer number of games, just from the sheer computation.
[00:47:32] It can piece out these crazy moves that might seem insane in one context, but it just knows it's going to alter things 20 turns down the line.
[00:47:49] It will be the turning point of the game. It's going to alter the environment, it's going to alter the balance of power, because it has seen this environment play out. Well, the same goes for coding, with the ability to do that, with the advancements to something like Devin, and the fact that we can expect, at least for the very near future, the next two years, another 1.5 orders of magnitude, maybe even two orders of magnitude, every year.
[00:48:23] We're gonna get that, we're gonna get that. That's going to be possible.
[00:48:29] I think it's not even slightly out of the realm. In fact, just look at the advancements we've already had. We've done very few Devs Who Can't Code episodes, I barely started doing this, and the tools are already so much better than they were.
[00:48:43] Like so much better.
[00:48:45] And I'm talking about the naive use of these things. I'm talking about just speaking to a language model. I'm at a Claude chat interface and I'm just saying, can you write me code to do this? I'm not talking about an environment that executes. I'm not talking about a Docker setup with a bunch of VMs that can boot up a bunch of different operating systems, read the error codes, and then adjust to those error codes. I'm not talking about any of that. I am talking about pure, naive, simple: I asked it a question and it produced code in response.
[00:49:21] There's enormous low hanging fruit. There are easy 100x advancements, without the LLMs themselves getting any better, just in how we execute these things and the environment we put them in. I mean, that's what Devin is, right? It's just allowing it to do 30 steps in a single run.
[00:49:42] It's automating the process that I do when I'm running code: I ask it for code, I execute it, it gives me back an error, I say, here's an error, it says, oh, I'm sorry, you probably have an error because you didn't install this prerequisite, so let's do this instead. Then it changes the code, I run it, it gives an error or acts weird, something pops up, I explain the situation, and it says, okay, let's do this. And it does that over and over and over again, and I have this conversation for, you know, 20 interactions. Well, if it had its own environment and it could execute those things, it could find all of that stuff in a matter of a minute, because it could just run it itself, see what the result is, and then make the change. I'm not a necessary ingredient in that process. It should be able to do it itself; it just doesn't have access to the computer or the environment to do all of that testing and response and iteration, which means it's 100x better if you can just give it that. The whole loop looks something like the sketch below.
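That loop is simple enough to sketch. Here, ask_model() is a stand-in for whatever LLM API you happen to use; everything else is just run it, capture the error, hand the error back:

```python
import os
import subprocess
import tempfile

def ask_model(prompt):
    """Hypothetical stand-in for your LLM API call of choice (Claude, GPT-4, a local model)."""
    raise NotImplementedError

def write_and_run(code):
    # Save the generated code and execute it, capturing any traceback.
    path = os.path.join(tempfile.mkdtemp(), "attempt.py")
    with open(path, "w") as f:
        f.write(code)
    return subprocess.run(["python3", path], capture_output=True, text=True, timeout=60)

def build(task, max_iterations=20):
    code = ask_model(f"Write a Python script that does the following:\n{task}")
    for _ in range(max_iterations):
        result = write_and_run(code)
        if result.returncode == 0:
            return code  # it ran cleanly; done
        # Exactly the conversation I have manually, minus me in the middle.
        code = ask_model(
            f"This script failed.\n\nCode:\n{code}\n\nError:\n{result.stderr}\n"
            "Fix it and return only the corrected script."
        )
    raise RuntimeError("still failing after max_iterations")
```

Nothing in that loop requires a smarter model, just an environment it's allowed to act in, which is why the agentic tools feel like such a jump.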
[00:50:48] So all of that is to say I think he's right that we will have an intelligence explosion.
[00:50:55] I think he's right that we should expect continued orders of magnitude, OOMs, to play out on that straight-line logarithmic trajectory, as we have already seen.
[00:51:10] And that, quote, the intelligence explosion and its immediate aftermath will bring forth one of the most volatile and tense situations mankind has ever faced.
[00:51:24] When you're looking at technological advancement at that rate, I think you're looking at a disruption of everything that we think of as institutional.
[00:51:36] And going back to a point I was making about why decentralization may be the real way that this trend keeps pace: what happens when you have LLMs, when you have AI, with which everyone can kind of build their own platform, and everyone talks to each other using protocols?
[00:52:03] What happens when protocols become the main form of interaction? And the idea of having an operating system is really just having an LLM be booted up in a hardware environment where it just kind of explores every connection and all of the attributes and pieces of devices that it can find, every port on the computer and all of this stuff. And then it just kind of builds itself a world.
[00:52:33] And you know, that might seem crazy. It's like, oh yeah, we're just going to put a bunch of hardware pieces together, somehow run an LLM on it, and boom, I'm going to have an operating system that's perfectly tuned and aligned with what I have, and it will just do what I want it to do. But I don't know, that sounds doable. We're talking about an intelligence explosion. We're talking about artificial general intelligence.
[00:53:00] And when you extrapolate this out two to three years, what does even a 10x, 100x, 1,000x improvement over GPT-4 look like? I mean, doesn't it look like that? Isn't that something extremely valuable, something that would be fascinating to make use of? I don't know. I personally know someone who actually built their own operating system from scratch.
[00:53:30] And it was like a grueling task.
[00:53:34] And it was, you know, built around security and privacy and all of this stuff. And, I don't even know why, but he never released it or anything. It was literally just his own project, but he did it.
[00:53:46] So why would I think that at some point AI wouldn't be able to do that?
[00:53:56] And in fact, wouldn't it necessarily be able to do that better than most people, just because of the scope and depth of knowledge and reference that it would have to have to kind of build all of the drivers and kexts and everything that you need to interact at the hardware level.
[00:54:15] And, you know, this also seems like a fascinating direction to go, and Mojo is another great example, actually, since we already brought it up. What is it trying to do? It's trying to bridge between the software environment and the hardware environment and remove many of the layers of complexity, now that we have extrapolated this up through so many different layers and it has become bloated and inefficient. One of these massive improvements, one of these orders of magnitude, may literally come from the fact that now that we have developed language models, now that we have developed AI in this environment, we can basically collapse the layers back down.
[00:55:08] And I think that's part of what's happening with Nostr and the Pear stack and Bitcoin and Lightning and all of these things. We're realizing that all of these things that have become six layers up, in protocols and applications and surveillance and centralized corporations and servers, all of these massive benefits and features that we have gotten out of centralized entities a few layers up, and with all of the huge costs to privacy, security, control, and surveillance, well, now we can turn these things into protocols and collapse them back down those layers. We can take it back out of the hands of servers, we can take those platforms back out of the hands of giant corporations, and we can build our own clients on top of a protocol where we don't really have a platform; we just kind of build our own environment. And that's why I think a lot of this may actually aggressively reinforce the decentralization revolution that we're going through.
[00:56:08] Because what happens when a million, 2 million new developers come on the map? 10 million, 100 million? Who knows. When we're talking about AI being able to develop just about whatever, and getting 10x, 100x better at it than it is right now within two years, we're basically looking at everyone as a developer. And the limits of what we can develop are really just the limits of the computation that we have access to, outside of the context of what a corporation would build.
[00:56:40] What happens when you hand it over to 100 million individuals? What do they build?
[00:56:46] And I think it's easy to miss how much the environment could change and thus how much the entire stack architecture and concept of how all of this will evolve will change.
[00:57:01] And we might not be thinking about institutions and governments and giant corporations at all. That seems crazy to say four years out, like, oh yeah, sure, we're just going to undermine the entire structure of the economy and corporate enterprise and government institutions and politics, all of it, because I will just be able to do it all perfectly myself. But I don't know. You alter fundamental incentives enough, you alter the dynamic of the individual enough, and their ability to actually produce, in an autonomous way, the things that they need, and you give them open protocols for communication and organization.
[00:57:42] And I don't think that's out of the realm of possibility.
[00:57:45] I don't think that's as crazy as it might sound on its face. I mean, what would you have said, looking 30 or 40 years back, if someone told you that all of major media would become inconsequential, or at least increasingly undermined, in the building of narratives for the typical person, and that the Internet would fundamentally change what we think of as the Overton window of reality?
[00:58:17] Now the other thing, and this is also why I want to hit Zuckerberg's piece here in just a little bit or kind of reference a lot of what he talks about.
[00:58:27] But this is where he gets into, or Leopold specifically gets into. Oh, don't. Don't touch that. Don't touch that, buddy. It's spinning.
[00:58:38] But he says, quote, super intelligence will give a decisive economic and military advantage.
[00:58:48] China isn't at all out of the game yet. And in the race to AGI, the free world's very survival will be at stake.
[00:58:56] And then there's another quote on this same idea, because he goes on to compare all of this to nuclear bombs. And this is something I've talked about in previous episodes of the show: as much as he has a point that, yes, it will be a military advantage, and yes, it is, quote unquote, a weapon, of course intelligence, and being able to defend or attack in a more targeted way, can be weaponized. Without a doubt, it can be weaponized.
[00:59:29] But the idea that this is a nuclear bomb, where one person can just eviscerate everything, I do not think that is accurate. This is a technology that is specifically relative. So a good example: if I have a nuclear bomb and I am the only one with a nuclear bomb, then it blows up X amount of land. Let's say it's just big enough to blow up the entire state of New York.
[00:59:53] Well, if I have a nuclear bomb, and then 10 people in New York also have a nuclear bomb, the destructive power of my nuclear bomb has not changed at all. If I set it off in New York, it's still gonna blow up all of New York. If everybody in the world has a nuclear bomb, if it's just something that we all keep in our pantry, mine is still going to blow up the entire state of New York. It doesn't change anything about the effectiveness of my bomb. But if I have a superintelligent AI, and I am the only person with a superintelligent AI, I can take over the world. I can eviscerate every networking system on the planet, I can get into every bank account, I can get into every bitcoin wallet, I can poke through every single hole, because no one else has that level of skill and computation with which to defend their systems. I would have a multiple-orders-of-magnitude advantage over everyone else, and everything shy of air-gapped, written-down bitcoin keys is basically going to be open to attack.
[01:01:12] However, if every single person has access and or has their own super intelligent system that's just as intelligent as mine, mine is meaningless. Mine doesn't mean anything because yours can defend your system as fast as mine can attack your system.
[01:01:30] That is wholly and completely different from the idea of the destructive power and military advantage of a nuclear bomb. It is only in the imbalance of power that this is the most potent and dangerous thing that it can possibly be. It is only because it would be secret and only one or a few giant corporations or governments would have it. That would make it the most dangerous it could possibly be.
[01:02:03] The broader, more open, and more available the access to and potential of this technology is, the less dangerous it is, because as we iterate towards greater and greater superintelligence, we can also iterate towards more and more security, towards more and more robustness. Just like the example I gave of modules that always do what you expect them to do, no matter what environment or code or language you execute them in, we can do that for our networking systems. We can do that for our protocols, for our bitcoin clients and our wallets and our encryption standards. We can build security and robustness with these exact same tools. Intelligence is general. Intelligence is not just a weapon; it is also a defensive tool. In fact, it is arguably a better defensive tool, because, as Julian Assange says, for some reason the universe smiles on encryption. There is something natural about the ability to create something that is defensible in the digital world; there is an asymmetry between the amount of computation it takes to break cryptography and the amount it takes to create it. And because of that, I think there is a natural ability to defend, to create patterns of sustainability in networking systems, in digital and hardware and physical environments. I think the existence of life is proof of this, right? If there were not a natural tendency for things that can be sustained to be created, then where would the source of life have come from? All we are is a system, a reproducible system of chemicals and reactions, able to take fuel in from the environment and continue to grow, and then develop intelligence, and develop abstractions, and then develop technology as an abstraction on top of our abstraction of intelligence. All of these things layer on top of each other, and what happens? These things survive. They sustain. They are naturally sustainable. And there is something about the universe that smiles upon the ability to create these sustainable, self-replicating, somewhat isolated systems that progress and prosper forever forward.
[01:04:28] That's exactly what life is.
[01:04:31] It is a system that tends to survive. The universe hasn't destroyed it yet because it is robust and variant enough to continue to survive, even in the face of all the crazy things that the universe can throw at it. The wild environments, the volcanoes, the asteroids, the seasons, the sun, the heat, the cold, the depths of the ocean, the upper stratosphere, every psychotic environment and all of the craziness of this planet. And yet life exists in some form or fashion in every place. Even if you dig deep, deep down into the bedrock, drilling into the middle of solid rock, you'll still find bacteria.
[01:05:22] Life is crazy and there are sustainable defensible systems.
[01:05:27] And I think it's no different.
[01:05:30] It would be silly to think that that would all suddenly break down as soon as you got into the world of the Internet or electronics. That that fundamental reality of the universe suddenly doesn't exist for some reason, when everything else about it seems to be an extension of what we do know from the real world.
[01:05:48] You know, there's a reason, if you've ever seen Mastering Bitcoin, you'll notice there are ants on the front of Mastering Bitcoin. That's because the way the Internet works, the way packet switching works on the Internet, is actually shockingly similar to the way ants communicate, and this is a repeated pattern from nature. And it wasn't by design. It wasn't like we looked at ants and said, oh, we should design it this way. Ants had figured it out. In fact, I don't remember exactly the whole story behind it, but we may have figured out that ants communicate that way after we already built the Internet. But I think it goes to show that we naturally replicate systems in the real world and that our minds are a consequence of those systems. You know, our minds are developed from beating the organism of humanity against the world. And then we develop systems about physics, about patterns, about how we expect things to crush when we crush them, or things to break, or the hardness or softness of textures and objects and interactions and temperatures and all of these things. We build these models and then suddenly we think through these patterns. We use these patterns as a lens to then think about how to put other things together.
[01:07:11] We figure out how to build a pattern that we discovered through touching grass all the time to apply to something in software or something.
[01:07:21] We think about how things interact in a physical way to how things interact in the middle of a protocol.
[01:07:27] We are a product of all of these patterns that we develop by the five senses, by our interactions with the real world.
[01:07:36] Which means, and this is why I think that you can, in a very introverted way, look inward in order to understand the realities of the universe. You can think, you can literally reason by yourself in your own mind. You can reason through contradictions. You can reason through moral, logical and physical contradictions. You can test ideas out in your head and you can see, ah, nah, this would totally break right here. This wouldn't work. So I'll have to change it. In the same way that the Devin engineer can, you know, run it in a certain environment and get an error.
[01:08:11] We can actually predict those things because we have these models in our minds, all of this stuff that is this mirror of the real world inside our brains. That's why reason is a thing: we can actually comprehend the existence of contradiction.
[01:08:28] And we can sort out, we can battle between those two ideas that we have, and we can sort out that contradiction if we actually attack it head on, if we try to have an overarching, clear, consistent picture of the world and of all of the different patterns that we have built in our mind from that world. The problem is we spend a lot of time actually trying to excuse our way into holding contradictions so that we don't ever actually have to address them. We don't have to put them in the same reality; we categorize them and separate them out, and then we say, oh, these are in different places and so they don't actually have to work together. But all of this is to say that I think the existence of life and the sustainability of systems is a natural thing.
[01:09:17] And because of that, I don't think destruction wins out just because we created superintelligence. I think sustainability wins out, for the same reason that when you create life and life prospers, you get more life. I mean, for all the horrors and mistakes and just disasters of human civilization, and for all the anti-human, self-hating ideology that has been born from that in kind of the normie narrative, humanity has massively prospered.
[01:09:53] You know, I think a lot of people would think that poverty and death and destruction and war and all of these things are as bad as they have literally ever been in the history of mankind. A lot of people really think that poverty is just staggeringly awful. And it's mostly because of visibility, because now we have cameras, we can see what poverty looks like. Whereas 200 years ago you couldn't tell what the hell was going on halfway around the world. You didn't know what poverty looked like to somebody in Africa or, you know, South America or China.
[01:10:26] So a lot of people mistakenly think that it's awful that we have made no progress when literally the exact opposite is true.
[01:10:36] We have had the greatest rise of the greatest number of people in the most magnificent fashion out of poverty in the last 20 to 30 years as we have ever, ever had.
[01:10:51] Mankind is flourishing in the broadest sense. Its prosperity is exploding because of all of this technology, because of the progress of what we have done, because of the ability to undermine these giant centralized institutions that coalesce, that consolidate and scale up the risk of bad patterns and bad systems and institutions.
[01:11:17] But our ability to exit, our ability to basically have a foot out the door and protect ourselves from these things, has increased as well. And there's a fantastic video. It's old now. Good Lord, I think it's like a 10 or 15 year old video on YouTube now, with Hans Rosling. It's something like 200 countries, 200 years, in four minutes, I can't remember exactly, if you search that. Actually, I'll just have the link in the show notes.
[01:11:48] AI, remind me to put that in the show notes. But I'll have that video in the show notes, and it's a fantastic video, just showing the statistics of health and life expectancy and income and all of these things from all of these countries, looking at the biggest picture possible, globally. You can even see major events like world wars show up as dips, but the incredible progression is towards the upper right: healthy, wealthy, happy and prosperous.
[01:12:20] And I think this is the march of technology.
[01:12:23] And so no matter what risk I obviously see in these AI systems, I do not think the comparison to the nuclear bomb holds up. I think it so grossly, so poorly frames what the real pieces of the puzzle are, what the real dynamic is and what the real consequences are.
[01:12:55] The real problem is the centralization of this capability, the imbalance of power over superintelligence.
[01:13:06] And I do not think we will solve that by being extra secret and by really locking it down because we're worried about China having it. If we are worried about the CCP having superintelligence, we should be thinking about how to give superintelligence to Chinese citizens. That's what we should be thinking about. We should be trying to figure out how to make sure that the CCP isn't the only one in China to have it, and that other people can use it in defense of their networks, their protocols, their devices, so that China doesn't have a backdoor into all of those devices. Because if we can build our own operating system with a super intelligent AI for our own devices, and we can get it to plug all of our devices' holes, well, we've got a very new dynamic. Now we have the ability for the Chinese people to actually have an economy outside of the CCP itself. We have systems where we can organize, we can coordinate, we can trade, we can calculate, we can do business.
[01:14:08] We can build a society without this top-down, violent structure on top of it. And you know, maybe a lot of that is wishful thinking, and maybe this is going to be iterative and it will still take 30 years and we'll just undermine it one piece of the puzzle at a time. But I think that's the goal, I think that's where we should target. The idea is to give it to everyone and make it as...
[01:14:32] I don't really like the word democratic, because democracy is the idea of majority rule, and democracy has produced horrible massacres and awful, awful consequences. But democratic in the sense that as many people have access to it as possible, as distributed among everyone who has the capability as we can make it. That, I think, leads to a decentralized future. That leads to a billion superintelligences and variants of it, and very targeted, very life-specific and user-specific ways to use this thing, so that it can help guide and respond to and build out the values of each of those individual people, rather than being some giant centralized system that's controlling everybody in it, because nobody even knows the capabilities of the superintelligence that is running them. It is that imbalance that is dangerous. It is not everyone having it. If everyone has it, then they basically have computation and software to guide their life the way that they want it to be guided.
[01:15:37] That's a fantastic thing. That's a magical thing. And to be able to defend themselves from all of those other people, and maybe even to collectively come together with our computation and do parallel computation. Going back to the thing that I talked about in one of the recent episodes: we had a peer-to-peer, distributed supercomputer that was like five times bigger than the largest supercomputer in the world, with a screensaver, with SETI@home. You had five exahashes worth of compute with people just donating their computers, their PlayStations and other devices in order to search for life in the universe, to search for signals from alien life. And then the largest ever supercomputer in one spot, in a centralized corporation, centralized control on the ground, was 1.4 exahash. You beat it just with volunteer compute.
[01:16:42] What happens when you add the ability to pay, the ability to monetize and organize those people? And what happens when you have a world where everybody has 20 devices? I've got four Raspberry Pis, I've got an Embassy, I've got a Linux machine, an AI-specific Linux machine with two GPUs. And then I have another Linux machine, which is just the old one that isn't even being used for anything anymore. I have a MacBook Pro, I have a phone in my pocket, I have an iPad next to my bed. Everybody has so many different devices. I have a tiny little supercomputer in this household. What happens if we could actually organize all of that in parallel? Yes, that's not going to be an easy thing to build. But as we get closer to superintelligence, if this is accessible to everybody, how can we not build it? As long as we actually had the pressure and the desire to build it, wouldn't it be possible, and wouldn't that be the lowest hanging fruit to getting a trillion dollar cluster without actually having it cost a trillion dollars?
[01:17:44] Couldn't you do that with software?
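As a toy illustration of the question, here is a rough sketch of the volunteer-compute pattern being described: a coordinator splits a big job into independent chunks and farms them out to whatever workers happen to be available. The function names are made up, and local processes stand in for the phones, Raspberry Pis and spare GPUs a real network would use.

```python
from multiprocessing import Pool

def work_unit(chunk):
    # Stand-in for one unit of donated compute, e.g. scoring a batch of data.
    return sum(x * x for x in chunk)

def split_into_chunks(data, n_chunks):
    # Divide a big job into independent pieces any device could process.
    size = max(1, len(data) // n_chunks)
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    job = list(range(1_000_000))
    chunks = split_into_chunks(job, n_chunks=8)

    # In a real volunteer network the pool would be devices reachable over
    # the Internet; here it is just four local worker processes.
    with Pool(processes=4) as pool:
        partial_results = pool.map(work_unit, chunks)

    print("combined result:", sum(partial_results))
```

The hard parts in practice are scheduling, verification and payment across untrusted devices, not the splitting itself.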
[01:17:47] I don't know. I just think the thinking is wrong, the idea that we're going to get bigger and more centralized and more... Because everything that we have had, this is a very short period of history where corporations just get infinitely bigger and everything just scales up infinitely bigger, and the trucks just get bigger and the machines just get bigger and the supercomputers just get bigger and the governments just get bigger. I think we're looking at the fallout of a broken financial system that has just completely and grossly misallocated resources. Bigger is not efficient. Yes, you have the simple, the elementary, the naive economies of scale in scaling up a process: it's easier to sell a thousand units of something than it is to sell 10 units of something, because the tooling is essentially the same for that same item. So you have basic economies of scale. But that's useful in a static economy.
[01:18:53] That is not useful in a robust, variant, diverse economy, one that moves and disrupts things on a period of four to five years, or a decade, where entire business models and production structures are being undermined. That is not efficient. We are getting quote-unquote efficiency at the cost of security, robustness and sustainability.
[01:19:17] We are over-subsidizing economies of scale purely for efficiency, and we are causing all sorts of additional problems. We are trading off so many critically important factors and characteristics of our economy, of our system. We have no robustness. If something big breaks, if one of those big corporations went down, the entire economy would be in shambles for years as we try to rebuild around it and reallocate those resources in the way that they should have been allocated originally. We have no robustness, we have no redundancy, we have no security, we have no savings. We have a mess of a system. And I think the consequence of that is the unraveling of all of these things, and that thinking is what's going to break, that thinking is what's going to change drastically in the next few years.
[01:20:12] And that's what I want to get back to. Let me finally bring this around to Mark Zuckerberg's piece, because this is the thing that kind of shocked me about his perspective: he actually seems to recognize this element of how and where progress is occurring, and how it might actually change the larger dynamic entirely. He specifically said that Meta is committed to open source AI and that he actually believes open source is necessary for a positive AI future. And he has a couple of different reasons why; I think he has a pretty strong argument. One of the things that he points out is that open source AI is following a very similar development path to Linux when it comes to high performance computing.
[01:21:02] And they have very recently released one of the actually really fantastic models, Llama 3.1, which is their newest fine-tune or follow-on model from Llama 3. They've released a 405 billion parameter model, which is a huge model. It's basically a commercial-scale model; I cannot run that sucker on my AI machine. In fact, I can't even realistically run the 70 billion parameter model on my machine; it struggles. So this is basically the first enterprise-level open source model that I'm aware of, a 405 billion parameter model. I think it may even be in the realm of GPT-4.
[01:21:54] I don't know. There's speculation about the parameter sizes of GPT-3 and GPT-4, and about what size models are actually being run. GPT-3 was reported at about 175 billion parameters, and GPT-4's size has never been confirmed, so it's hard to say. But this is only to say that I think this is probably very close to, or in the realm of, the larger closed models. And I would bet that the quality scales: as great as Llama 3 is just in the 30B model that I run on my machine, as fantastic as some of the results I get from it, I would bet that with Llama 3.1's 405B model, especially for what I use it for,
[01:23:04] I might have a really hard time telling the difference between it and something like Claude or GPT-4, is what I mean.
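If you want to try a local open-weights model yourself, here is a minimal sketch against a locally running Ollama server (https://ollama.com). It assumes the server is up and that a model tag such as llama3.1 has already been pulled; which parameter sizes actually fit depends entirely on your RAM and GPU.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3.1") -> str:
    # Non-streaming request: the server returns one JSON object with the full reply.
    payload = {"model": model, "prompt": prompt, "stream": False}
    resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("In one sentence, why does open source matter for AI?"))
```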
[01:23:14] And one thing that Zuckerberg points out in this article is that one of the big reasons the US leads in innovation and in the tools and infrastructure for AI is specifically because of the open development of AI that's going on, because we are letting the market develop and fine-tune and create all of these things.
[01:23:42] And his argument, the foundation of his argument is that open source AI is actually necessary for a positive AI future with benefits that actually are distributed across, well, the whole world really, but across the market, across demographics and across countries.
[01:23:59] And the really good thing, especially from a security standpoint and from a privacy standpoint: one of the best things we can do for security is to stop putting all of this stuff into giant centralized platforms, and instead let people silo into their own little corner. The best thing for security isn't to get everybody into one giant super-secure system, because then you just have one giant place to attack. You've basically created a honeypot, and at the same time you've created such a massive power dynamic where somebody who does have surveillance and insight into everything that's going on is now your greatest adversary. You created, you know, one giant centralized institution in order to quote-unquote protect everybody. But who's going to protect you from that one giant centralized institution? Because they are now going to be your biggest enemy. The best thing to do for security is to decentralize it, to let every single company have their own model, to let them fine-tune and make adjustments to models like Llama 3.1 that they can keep within their own company and actually tune to their specific data, to their specific purpose, to all of their employees and everything that they're doing inside of their enterprise. And then doing that hundreds, thousands, tens of thousands, maybe even millions of times with millions of different people and organizations. They all have their own encryption, they all protect their data in their own way. And not only can they build out their own solutions, but they can sell and share those solutions with other people. That's how you build something that's robust, that will actually respond to external attacks. It's like Bitcoin, right? We even recognize that not only is it dangerous for everyone to be on the same client, it's actually dangerous to be on the same version of the same client.
[01:25:54] So actually, a part of decentralization is just not upgrading sometimes. The more centralized and the more everybody is built on the same system or the same architecture, the more exposed it is. Security isn't something that you can just kind of think up and engineer out of the blue. Security is something that you test, something that you earn. You know, we only have ideas about what is secure. It's not until it's tested in a live environment that we really know. There's only so much we can do until we actually have it exist and prove itself in the real world, and then adjust, like an anti-fragile system, to the stimuli, to the attacks and the environment itself, to know how it actually works or how secure it actually is against a genuine adversary. Which means that inevitably everything is expected to get poked; everything is going to break at some point. No matter how perfectly you think you've designed your security system, somebody is going to be able to get in. So the thing is, if someone is going to be able to get in, then you expect it. And so you design your system, you design your ecosystem, in such a way that everybody has their own security solution. And if somebody figures out how to penetrate it, they only penetrate one tiny little piece of it, only one computer. And you segment, you isolate out different things. This is why you have something like Docker, right? You create isolated containers inside of a computer to segment things out and create permissions separate from the rest of the system. This is true in markets, this is true in networks and open protocols, this is true at all layers. And what we are doing is exactly the opposite. We're thinking of this in completely the wrong way, I think. And Zuckerberg makes a fantastic case for that sort of thing, that this is how you create security, this is how you create robustness: you let everybody build their own models and train on top of them. And that's the source of all the innovation too. That's why Linux is the foundation of all the infrastructure, of all the operating systems, of router software and you name it; Linux is that underlying Unix-like system that so much stuff is built on top of. And he thinks, and says he sees, that open source models are going this way too. This is why, no matter what happens, these closed source models are inevitably just going to fall further and further behind, because other people won't be allowed or able to build on top of them properly or fine-tune them for specific purposes, and at some point it will literally just get away from them. And open source will end up being more secure, more efficient and obviously more affordable than any closed model. And he says specifically, too, that this doesn't hurt Meta's revenue either, because Meta isn't selling access to a model as part of its revenue. So it actually helps all of its other revenue avenues and streams, specifically because it just means that everybody can interoperate and work with their tools, and they get the benefit of everybody else who's building open source tools with their models.
It's a little bit like Tesla's decision to open source their charging connector, the charging station plug and standards for the Tesla car. That immediately became the standard because they open sourced it. And then what happened is that everybody else was able to adopt it for their electric cars, and everybody started building stations that all had the exact same charge port. And because this was infrastructure, this was a huge boon to Tesla. Even though they had a massive market advantage, it was still better for them to open source it, because now they had everybody else building out their infrastructure; they had other people building charging stations that a Tesla could pull up to and charge at.
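For a sense of what that company-specific tuning looks like in practice, here is a rough sketch of a parameter-efficient (LoRA) fine-tune using Hugging Face transformers, peft and datasets. The checkpoint name and data file are placeholders, and a real run needs an accepted model license, GPUs and proper evaluation; this is only meant to show that the data never has to leave your own hardware.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "meta-llama/Meta-Llama-3.1-8B"              # placeholder open-weights checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token          # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Freeze the base weights and train only small LoRA adapter matrices.
model = get_peft_model(model, LoraConfig(
    task_type="CAUSAL_LM", r=16, lora_alpha=32,
    lora_dropout=0.05, target_modules=["q_proj", "v_proj"]))

# Internal documents, tickets, wiki pages: data that stays on your own machines.
data = load_dataset("json", data_files="company_docs.jsonl")["train"]
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-company-lora",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama-company-lora")        # saves only the small adapter, not the full model
```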
[01:30:14] And Zuck actually makes a couple of pretty bold statements in this article too. One of them that I highlighted: starting next year, we expect future Llama models to become the most advanced in the industry.
[01:30:27] And honestly, if a ton of people start building on Llama 3.1, and tons of these different enterprises are basically using it for their internal customer service and all of this stuff, a really seriously advanced and significant model, then not only are you preparing all of this data for fine-tuning models, but you then have all of these other models to potentially combine, or to aggregate tons of these different weights and specialized models for different versions or different purposes, and then to create things like mixtures of experts.
[01:31:14] The reason he might be right about this isn't even that they will make the best Llama model, though that may also be the case, but that you might actually get a lot of funding from a ton of these different, you know, smaller businesses or other corporations that end up using Llama 3.1 and fine-tuning it for their purposes. You also get all of these different variations of the model to utilize for different and specific tasks. It will be the most advanced because it will be the only one that was able to build a specific version of itself for a million different purposes. And then what can you do with those weights in combination? What can you do with all of the data that has been prepped and designed, and all of the software designed for, again, fine-tuning models? It's very much like the software I was talking about that can prepare the data and the documents and notes and everything on your own machine to be prepped for fine-tuning a model. Now imagine you've got hundreds, thousands, tens of thousands of businesses building their own tools for designing and preparing their data, for fine-tuning, or for fine-tuning in some specific way. You now have a massive community of people competing to figure out how best to fine-tune a model, how best to change an algorithm, or change the way that it's fine-tuned, or change the way the information is organized and prepared.
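One deliberately simple way people already combine fine-tuned variants of the same base model is plain weight averaging, sometimes called a model soup. It is not a mixture of experts, which routes tokens between specialized sub-networks at inference time, but it gives a flavor of what pooling many fine-tunes can look like. The checkpoint paths below are placeholders.

```python
import torch

# Placeholder paths to fine-tuned variants that share the same base architecture.
checkpoints = ["support_bot.pt", "legal_summarizer.pt", "code_helper.pt"]
state_dicts = [torch.load(path, map_location="cpu") for path in checkpoints]

# Uniform "soup": average every parameter tensor across the variants.
# Only meaningful when all checkpoints started from the same base weights.
averaged = {
    name: torch.stack([sd[name].float() for sd in state_dicts]).mean(dim=0)
    for name in state_dicts[0]
}

torch.save(averaged, "merged_soup.pt")
```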
[01:32:50] Whereas with something like ChatGPT, you're just going to do whatever OpenAI thinks and whatever they implement and offer as a product. You can use it, but otherwise you're just stuck with GPT-4. So there's something to that.
[01:33:03] He might be right, just because you're looking at an ecosystem that can build so much more, just like Tesla open sourcing the charging standard and letting so many other people build electric cars and use charging stations that are already available.
[01:33:27] And now it's going to create a lot of competition in the electric car market, which appears, you know, on the surface to be bad for Tesla. But in this same way, if these things are open source (not Tesla, but the models), if these models are then open source because they are forks of an open source model, everyone gets to use this stuff, everyone gets to utilize all of the benefits and all of the tools. There will be nothing that has as many tools, as much tooling, as much infrastructure and as much compatibility and simplicity in working with it as Llama, if it continues in this direction and it remains the dominant, well, it is basically the dominant open source model. And I think the release of this 405B is a huge deal in pushing things in that direction. And this is exactly what keeps the lead in the US: if the US has the infrastructure, if the US builds all the tools, if the US has all of the people who work with this thing and know how to fine-tune it and have all of this knowledge and continue to expand and build on top of it. Well, you can steal the weights very easily. You can replicate all of that with just a thumb drive, just by putting information on a thumb drive and shipping it somewhere else. But you can't replicate all the infrastructure. You can't replicate a million innovators building on top of this. You can't replicate thousands and thousands of small businesses, medium-sized businesses and huge, large businesses all running their own tiny server, their own, you know, ten A100s, their own little mini supercomputers, so that they have massive AI compute and can basically hire, quote-unquote, 10 or 20 digital employees to help run and expand and speed up and make more efficient their own enterprise and everything that they do. You can't copy-paste any of that. None of that fits on a thumb drive. Most of it's physical. You can't copy and paste the people who earn and know those skills.
[01:35:39] Tell that to the people who are trying to build chips right now in Texas, who are trying to copy what Taiwan is doing and need all of the people from Taiwan who have the skills, who know the infrastructure, who know how to use these machines, know how to not make mistakes, and know the meticulousness of actually putting this stuff together. Ask them. They will tell you: you can't copy and paste the people who understand and know and have lived and breathed these skills for decades.
[01:36:12] That is what we will have, and that is what we could have, as a lead that can't be copied, no matter what we do with the weights, no matter what we do with trying to build our own superintelligence. The superintelligence is still going to be limited. It always will be. It will still need to expand, still need somebody to plug it into robotics, still need somebody to plug in new computers. It will never be done. It's all just going to accelerate again.
[01:36:37] It's not like superintelligence gets here and then there's an intelligence explosion and now everybody's sitting around with their thumbs up their asses. It's not like we're gonna be done. Quite the opposite. We're going to have hundreds of thousands, millions of things that we can now do that we could not do before.
[01:36:55] We're going to be busier than we have ever been, unless we literally want to take a break.
[01:37:00] But I really think a lot of this actually being a prosperous future, a better future, relies on decentralization and relies on sound digital money. Without it, I think we just cause ourselves massive problems. And there's one of Leopold's points that Zuckerberg actually seems to target specifically, and maybe he's just hitting the idea more broadly, but I almost get the feeling that maybe he actually read Leopold's piece and may be specifically responding; this might be a targeted response to it. Because Leopold spent the whole last section and a half, kind of the last two sections of his piece, all on secrecy and The Project, and treating this thing like a nuclear bomb, and how we have to keep it secret and we have to put a super-security lockdown on OpenAI and all the corporations that have the big models and stuff to keep our advantage. Not only do I think he's wrong and Zuckerberg is right, in the sense that our advantage is the open innovation.
[01:38:16] But Leopold also makes the case, in his own piece, that it is not even slightly feasible to think that these large corporations could ever have the security necessary to keep China from getting the weights. He basically says that that's just not gonna happen, but then says that we should still just kind of hope for the best and still close the models and make everything proprietary and make it super secret, which again is going to create the worst possible power imbalance that we could ever conceive of when we are talking about witnessing an intelligence explosion. And yet he makes the very case that it's not going to work, and that at best it will still give us like a month's worth of advantage, because China will go through the exact same thing and they'll have their intelligence explosion, and all they have to do is steal our weights. So here's the quote from Zuckerberg's article: the next question is how the US and democratic nations should handle the threat of states with massive resources like China. The United States' advantage is decentralized and open innovation. Some people argue that we must close our models to prevent China from gaining access to them, but my view is that this will not work and will only disadvantage the US and its allies.
[01:39:42] Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities and small businesses miss out on opportunities.
[01:40:14] Plus constraining American innovation to closed development increases the chance that we don't lead at all.
[01:40:23] Instead, I think our best strategy is to build a robust open ecosystem and have our leading companies work closely with our government and allies to ensure that they can best take advantage of the latest advances and achieve a sustainable first mover advantage over the long term. And then this is a really important line as well. He says: when you consider the opportunities ahead,
[01:40:49] Remember that most of today's leading tech companies and scientific research are built on open source software.
[01:40:59] The next generation of companies and research will use open source AI if we collectively invest in it.
[01:41:07] The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone.
[01:41:22] You know, I did not see myself agreeing with Mark Zuckerberg.
[01:41:27] I didn't, but I do.
[01:41:29] I think he is spot on. I think Leopold makes the best argument as to why the reason, the way that he is thinking about it isn't going to even work.
[01:41:40] And notice he even mentions in Situational Awareness that we have a really open ecosystem and market right now and that we have to close it down. But he fails to recognize that our lead may very well be because we haven't closed it down, because we have Llama, because we have Stable Diffusion. We have tons of these different models and things that we can use and fine-tune and work with. What if that's the reason we are leading? Well, then his attempt to secure that lead and to make sure nobody else can steal our secrets would destroy our lead. It would prevent us from keeping it, all to protect something that he says explicitly we have almost no chance of protecting. Let alone the fact that we then give a couple of big corporations and the government a nuclear bomb, as he puts it, a superintelligence that nobody else has access to and nobody else can utilize, not for their own purposes, not to defend themselves, not to defend the nation, and not even to defend themselves from those big corporations or from our government. Which, I'll tell you, is what I'm concerned about. I'm a lot less worried about China than I am about my own government.
[01:43:01] And then another thing that Leopold points out, which I thought was also a really fascinating thought experiment, or way to address the issue, is about trust, trust in the models that we currently have.
[01:43:18] And how, when we get to a place where the models are creating new models, do we deal with safety? How do we deal with training to make sure these things aren't hurting people, and that it's very difficult, if possible at all, for them to be used to hurt people?
[01:43:37] And that in addition, that not only does it align, you know, Super Alignment was I think, the third piece that we read, or maybe the second one, I can't remember.
[01:43:49] But Superalignment is like a huge problem. And one of the things that we need is to be able to create extremely trustworthy models in order to assess, and give us an understanding or breakdown of, the other models and weights that we create.
[01:44:06] But then again, he says we need to close it down, we need to make this super proprietary and secret so nobody knows what's going into it. That's not a recipe for safety.
[01:44:17] That's exactly how you build something where we don't know whether it is safe or not, because we're having to trust someone else, someone else's say-so. Don't trust, verify, man. That's what open source is about. That's the only way we get anything safe or trustworthy. I'm not going to trust the government model to be safe. If they went and did it in secret like a Manhattan Project, that'll be the last model on earth that I ever trust. That'll be the opposite of trust. That's the "do whatever they say because you're screwed," because they created superintelligence. And if it's not safe and it turns out the government's incompetent, because the government obviously is and always has been incompetent or malicious, well, now we're screwed. Now we're screwed, because if two of the things that government naturally is and always has been turn out to be true, well, then all of the AI models that they create are either going to be designed to kill people or to lie to and manipulate them. That's exactly what government will create. And we'll have no idea. They'll bring in their experts and say we made it safe, and that will be the exact opposite of what it is. They will have purposefully made it to kill people. They will have purposely made it something so that they can attack China. Because, as Leopold specifically says, it will be an issue of national security, and this will be the most powerful military technology we have ever seen. That's what the government does. They will make it to kill people. And if it is ever going to get away from them, if it's ever going to get away from anybody and do something terrible, it'll be the government's version of it.
[01:45:59] So sorry, but no.
[01:46:01] That's the exact opposite of how you make it safe and secure: making it a secret how on earth we actually made it safe and secure to begin with.
[01:46:13] That's not gonna work.
[01:46:15] The only way we actually have some sort of semblance of a way to trust, or to verify, that it might actually be safe, that it might actually be robustly aligned with us, so that we know that when the models are creating new models that alignment may extend forward into the next generation, is if it's all built out in the open, where we know exactly how we became certain it was safe or not.
[01:46:44] And if something bad does happen, or if it starts to get away from us, well, then we have a starting point. We know where we have to go back to in order to continue on in a different direction. And we simply take our resources away from the one that's spiraling in the wrong direction and put them towards the one that actually produces the better results.
[01:47:07] We only have any semblance of guarantee if this is done entirely in the open.
[01:47:15] And I think Mark is right, I think Zuckerberg is right, that that's the only reason we have a lead: because we have an open market with as many people as possible, with as many minds as possible.
[01:47:27] And as Leopold talks about, a lot of this rests on the shoulders of a tiny number of people who are actually skilled at this. Well, then our best bet is to make as many more people skilled at it as we can.
[01:47:43] The goal isn't, the idea isn't, to then be like, oh, there are only like 20 people that we have to trust on this, and therefore we should close it down and let them make decisions in the dark, and we should just blindly follow whatever they say, or just put the fate of the... I mean, he literally is saying this like it's the fate of the whole freaking world on a couple of people's shoulders who know how this stuff works. Well, then what we should do for the next three years is train and teach as many freaking people as we possibly can, so that there are at least a thousand people that we are depending on, and we have different opinions and different strategies and different thoughts about what to do with this. Because if this is all dependent on 20 people, one tiny thing that goes wrong in one singular mindset, one person that we're relying on too much... if we move in that direction, well, the fallout will be massive. And it will only be that massive if we have centralized our problem and scaled our problem into one thing, where we are all relying on one superintelligence explosion and one trillion-dollar cluster and one government safety committee and, you know, three AI researchers who are the only ones who know how it works. And they all do it in secret like the freaking atomic bomb. And then of course it's gonna be massive and destructive, and then when it's released, what the hell else is anybody else gonna do? It's gonna go bad if we do it that way. All while we stagnate our market and basically leave all of the other small, medium and even other big businesses in the dark about how to produce the right things, and about, you know, giving feedback on whether or not their strategy is even the right strategy. Maybe somebody else comes up with something. Maybe a 14-year-old kid who's getting into this technology and just kind of intuits how crazy and amazing a lot of this stuff is, and has, you know, spent a year just building code with AI, has some sort of a really fascinating and exciting idea about how to make these things safe or how to align them. What happens if that was the one, the idea that actually fundamentally shifted how we thought about it and led to hundreds, thousands of other developments? What we need is to stop thinking that we're gonna find this one dude who has all the plans, who has all the secrets, and locks everything down properly.
[01:50:16] And this is a numbers game. We need as much parallel processing as we possibly can.
[01:50:22] There is no limit to human innovation. And if we give this to a million people, one person, just one person's breakthrough could make this scale and work for the next 10 million, they will make up for 999,999 idiots.
[01:50:42] One person with one good innovation. The problem is you have no idea where it's going to come from. You have no idea who it is. You can't predict the future, and you cannot predict what the next innovation is going to be. As Leopold specifically points out, we don't know exactly where or in what way those algorithmic improvements are going to arrive, what they may even look like, or what direction or type of thinking they are even going to come from. Therefore, the best thing that we can do is to get as many people who think as differently as possible into it: different age groups, different demographics, just as many minds as we possibly can, to look at and be interested in the problem. And somebody's going to come up with something that's insanely valuable, and it's going to change how we do all of it. And that's how we stay on the orders of magnitude, that's how we stay ahead of the game. We get open innovation, we get as much participation as possible.
[01:51:40] We let these models be free to build on for everyone and we get out of this stupid backward mindset that if we make something bigger, more centralized, more top down controlled, more closed and more secretive, that somehow it's going to get better. That's never been the case. And honestly, I would have thought anybody who's looked at the last five to six years, the last 50 years, might recognize what a huge problem that has been.
[01:52:08] Maybe we just need AI superintelligence to explain it to us.
[01:52:14] Oh, okay.
[01:52:16] Wow. This episode is long.
[01:52:19] So, yeah, we will close this here. I don't think there's anything else. I think I hit most of what I thought was best in Zuckerberg's piece. I will have the link in the show notes if you want to read it.
[01:52:34] There's a couple of other really great points that he has in this piece, and it's not long. I recommend reading it.
[01:52:42] It's a really quick read and I think it's a good one. So the links to that will be in the show notes, as well as to our amazing sponsors, Coinkite and the ColdCard hardware wallet. Keep your Bitcoin safe. Don't forget to boost on Fountain if you're listening, and do zaps, zap on Nostr. Thank you guys so much for supporting this show. Thank you guys for listening.
[01:53:05] I find all of this stuff so fascinating. I geek out on this shit so hard.
[01:53:13] And it's just, man, this year has been crazy, and everything that's been unfolding. Just some random small updates, actually: there's a new tool called LivePortrait which is able to make a portrait of someone talk, and realistically, with all sorts of crazy facial expressions and stuff. It's open source. It is a freaking cool tool, especially if you're trying to make some memes. Oh my God, this is about to be meme heaven. And also, the newest version of the tool just came out, and now you can do it for video. So you could take a video of somebody talking and you can get them to say something else, anything else that you want them to say.
[01:53:59] And it's shockingly good.
[01:54:02] And of course, Pinokio.computer, the Pinokio app that I talk about all the time, had it implemented very, very quickly. So even though I installed it separately because Pinokio didn't have it at first, now it is on Pinokio, and it is the version with video support.
[01:54:21] So it's very easy to use. It's basically a couple of clicks to install and run. So if you haven't played around with it yet, I highly encourage it; it's a really, really cool tool. It's really fun, especially if you guys are out there making memes or joke videos. It's perfect for that. Also, actually, if you want to create an animated video, I think this will be a hugely valuable tool.
[01:54:47] So if you're trying to make a feature film or a short film or something, this is definitely the way to go. Because what you can do is record your face acting, and then you can put that acting, the emphasis and the facial expressions, onto any other person or character or anything like that. That's what's really magical about this, and this is just gonna be huge for storytelling. So we might go over this a little bit more in depth in a future episode, but this was the lengthy response and follow-up to the fantastic piece. And I know I criticized Leopold's opinion or stance on this a lot, but, you know, I'm a bitcoiner. We love to argue, and I have mad respect. I loved that piece and it truly changed my thinking on a lot of these issues, and about AGI in general. He 100% has me thinking very differently about this.
[01:55:51] So if he ever listens to this, kudos. I don't want it to come across like I'm disrespecting him. In fact, I'd still love to have him on the show; I would love to pick his brain about some of this. So with that, thank you guys so much for listening. The link to the read, as well as everything that I mentioned (as long as the AI picks it up), and also to Zuckerberg's piece, will all be in the show notes, as well as the discount code so you can get your ColdCard. And I will catch you on the next episode of AI Unchained. Until then, everybody, I am Guy Swan. Take it easy, guys.
[01:56:38] He was giving me enough rope to hang myself with. Apparently he didn't realize that once a noose is tied, it will fit one neck as easily as another.
[01:56:49] Patrick Rothfuss.