Episode Transcript
[00:00:00] Yes, AI is a big deal, but the conclusion that AI is going to kill the vertical and functional software business model simply makes no sense.
[00:00:11] The truth is that AI simply isn't going to kill software companies.
[00:00:15] After all this panic has passed, we'll see that AI is the best thing that ever happened to the software industry.
[00:00:25] The best in Bitcoin, made audible. I am Guy Swann and this is Bitcoin Audible.
[00:00:49] What is up, guys? Welcome back to Bitcoin Audible.
[00:00:54] I am Guy Swann, the guy who has read more about Bitcoin than anybody else you know. And we've got a fun AI episode today that I think hits on something important. There's a lot of doom and gloom and this idea that software is all about to die and humans are about to be replaced. And I know I've talked about it off and on in a bunch of different episodes here, but this article, actually from a16z, on what looks like their Substack, is by Alex Immerman and Santiago Rodriguez, and it's a really good one that, without actually talking about economics, hits on a handful of really critical economic principles. I think these are some of the things that are just totally missed on people, and why there are so many people who claim, oh, we're about to hit superintelligence and all humans will be replaced and nobody's gonna have a job anymore. I've always thought, and I've said multiple times on this show, that the failure to understand basic economic principles is why people believe these ridiculous things. They're simply not true, they're not going to happen, and there are fundamental reasons to understand why. I think this article actually hits on a few of them without explicitly naming them, and it gives me a good foundation. What they get into is a very sensible and sober look at what the value propositions are, where the moats will still be, and which major moats will be collapsing or changing dramatically in the way we currently see the market economy, the software economy, and the Internet economy work. All of these various industries are going to be undermined or changed dramatically by AI, but this is normal creative destruction. And there are a number of things that they can point to, and framings that they can suggest, to make that clear.
And then I think taking those things and explaining how they connect to the economic principles.
[00:03:14] The very fundamental nature of value and judgment in an economy, I think, can paint a much more complete picture there. So I thought this would be a really good one to read on the show and give my thoughts on. A quick shout out to the Human Rights Foundation, and don't forget that they have tickets on sale now for the Oslo Freedom Forum, June 1st to 3rd. That link will be down in the show notes. I also have a lot of other great links: if you want to buy bitcoin on river.com, I have an affiliate link and it sends me like five bucks or something like that, I don't know, but it's a really great way to help out the show that's totally free. I love River; they're one of my main ones to use. I also have an affiliate for Fold and for the board game Hodlup. I have one for Coinkite and for Bitbox and Blockstream Jade. I have a bunch of these that I am making available for you guys, because these are products and services that I really, really like, and people are always asking me, what do I use, what should I do for this? So I tried to collect them together and give affiliate options so that you can support my work, help keep this show running, and help fund Pear Drive and the other development projects that I'm working on. Any and all support is highly, highly welcome, and I thank all of you who do, and a shout out to the audionauts, who have basically become a vibe coding group because everybody's trying to build stuff, and I just find that awesome. So shout out to those guys and everybody who supports the show. With that, let's get into today's read. It's titled Good News: AI Will Eat Application Software, by Alex Immerman and Santiago Rodriguez.
[00:05:00] The software industry is having a panic attack.
[00:05:04] Since the start of 2026, ETFs for public software companies have fallen by 30%, erasing all the gains since the launch of ChatGPT.
[00:05:15] Companies like Salesforce, Adobe, Intuit, ServiceNow, and Veeva, bellwethers that have compounded investors' capital for a decade or more, are down 25 to 30% in a matter of weeks.
[00:05:29] Viral Substack posts imagine a world where the customer base for enterprise software is hollowed out and the S&P enters a massive, years-long drawdown. They're calling it the SaaSpocalypse. It's rapidly become the market consensus: AI is going to kill the software industry.
[00:05:47] Yes, AI is a big deal, but the conclusion that AI is going to kill the vertical and functional software business model simply makes no sense. The truth is that AI simply isn't going to kill software companies.
[00:06:00] After all this panic has passed, we'll see that AI is the best thing that ever happened to the software industry.
[00:06:07] Why is that?
[00:06:08] The bear case rests on a basic misunderstanding of what software companies actually sell. The market is treating software as though it were a commodity input, as if the value of a software company resided in its code and cheaper code meant more competition and therefore cheaper companies.
[00:06:26] But code is never where the value has lived. If code is where the value was, these companies would have never gotten so big in the first place. They would have been killed years ago by open source software or by competition from cheap software engineering labor in developing countries.
[00:06:41] The bearish arguments today usually fall into one of four categories. Maybe the foundation model labs will move up the stack and own every function-specific application.
[00:06:53] Or maybe enterprises will vibe code replacements for their internal tooling, or at least use the option of doing that to reduce software businesses' pricing power. Or maybe existing players will use AI to massively expand their product breadth, rubbing up against each other. Or maybe a flood of new entrants, the famous single-person billion-dollar company, will undercut incumbents on price. Pile on top of this that agents won't care about brand loyalty or familiar names, only the cheapest option for any particular task.
[00:07:23] AI might increase competition, but it'll also dramatically expand what software companies can do, how fast they can do it, and how large the markets they serve can become.
[00:07:36] The end result won't be margin compression to zero.
[00:07:40] Software will be a much bigger industry with durable competitive advantages for the companies that earn them.
[00:07:49] The moats that matter aren't going away.
[00:07:53] The classic contemporary book on business moats is Hamilton Helmer's Seven Powers. He lists seven distinct ways in which companies develop robust competitive advantages: scale, network effects, counter-positioning, switching costs, brand, cornered resources, and process power.
[00:08:14] Let's go through them. Switching costs are perhaps the one moat that really is going to change. It's definitely true that AI is changing the friction and the cost-benefit analysis associated with switching vendors. Agents can assist with a lot of migration work that used to be a headache. That means legacy companies with hostages, not customers, to borrow a phrase from our colleague Alex Rampell, will feel a lot more pressure than they're used to.
[00:08:42] But that is a good thing for software as a whole. When companies have to earn their customers' loyalty instead of relying on vendor lock-in, the result is better products, faster innovation, and a healthier competitive ecosystem that grows faster and delivers more value to its customers. We expect AI will shift some customers to new winners, but it won't impair industry profit pools at large; companies will just get better. Network effects are a classic moat, and they aren't going away. We tend to invoke network effects for social media platforms or marketplaces: the more nodes in the network, the more attractive it is to be on it. But the same applies to application software offerings that exhibit ecosystem, collaboration, and data network effects. On the surface, Salesforce is a CRM database. But anyone who has worked in an enterprise setting knows that Salesforce is also an ecosystem.
[00:09:37] When everybody uses one platform, the network becomes self-reinforcing. You use Salesforce because everyone uses Salesforce, and the more companies use Salesforce, the more valuable the ecosystem becomes, of third-party applications built on top of Salesforce and of platform administrators expert in Salesforce. In recent years, a similar thread has been true for Figma. Every designer, then every engineer, product manager, and marketer buys Figma because everyone is collaborating there.
[00:10:07] Go to the annual Config conference and witness the value of the ecosystem firsthand. And the same dynamic is emerging in the AI-native generation. Harvey and Hebbia are building finance and legal collaboration spaces that connect service providers and clients, and soon their agents, on a single system. The more people and agents who use these platforms, the more valuable the platforms become.
[00:10:31] EliseAI's maintenance product is a multi-sided network that becomes more valuable with every unit and vendor added.
[00:10:38] As migration gets easier, aggregation gets easier. But these network effects simply don't go away in a world where software is free. In fact, insofar as AI makes the network more powerful, you can just do much more with a network than you could before.
[00:10:53] We should expect to see AI make these network effects more powerful than they were before.
[00:11:00] Scale was never the defining moat in software.
[00:11:04] It's just not as important for Salesforce as it is for a cloud provider or an industrial company. But to some extent it may matter for AI applications where compute spend exceeds labor costs, driving a unit-cost advantage to the larger consumers of tokens. In addition, there are places where scale will still help: it's a straightforward economy of scale to concentrate a maintenance burden in one place, since productivity gains from specialization don't go away in an AI world.
[00:11:33] Stripe highlights the value of centralized infrastructure benefiting all of its clients. Its compliance infrastructure absorbs the cost of navigating regulations across dozens of countries so that individual businesses don't have to. Its payment optimization algorithms, which route and retry transactions to maximize authorization rates, improve with every dollar of volume and they can pass those savings on to their customers.
[00:12:00] Finally, scale will continue to benefit companies at the intersection of bits and atoms. Anduril, Flock Safety, and Waymo will continue to see lower unit costs as they produce higher volumes of their hardware offerings.
[00:12:15] Brand endures. For better or worse, no one got fired for buying IBM remains a fact of life in most enterprises. And if every industry gets more crowded, if there's suddenly an explosion of fly-by-night solopreneurs selling vibe-coded ERPs, we should expect the power of strong brands to increase.
[00:12:37] Brand is how you signal reliability in a world of infinite optionality. No upstart is going to instantly replicate the trust and recognition that companies like Stripe or Shopify or ServiceTitan have built. The closer you sit to business-critical functions (people really don't want to get creative when it comes to payment processing), the more powerful brand effects will be. If you are a startup and you charge customers, you build on Stripe by default.
[00:13:09] We do acknowledge the power of brand might change as more decisions are delegated to AI agents that optimize for price without the soft considerations that humans have (agent-led growth). But as long as those agents report to humans who have to worry about getting fired, the no-one-got-fired-for-buying-IBM principle still holds. Cornered resources like high-quality proprietary data aren't going to stop mattering either. If friction goes to zero, simply consolidating publicly available data into a usable interface becomes less valuable, because anyone can do it. But if AI enables doing much more with high-quality data than you could before, then the stuff that you can't get easily becomes extremely valuable. We have observed the power of Bloomberg's live market data, Abridge's millions of clinical conversations, OpenEvidence's vast medical library, and Velix's legal database.
[00:14:06] And perhaps the strongest moat of all in this new era is process power, or as George Sivulka of Hebbia calls it, process engineering.
[00:14:18] Application software can be thought of as a stored process. It encodes opinions about how the function of an organization should operate, and those opinions calcify over years and decades of use into something that is inseparable from the organization itself.
[00:14:34] Successful app software companies are the ones that co evolve with their clients around these workflows. As those workflows penetrate ever deeper into an organization, process engineering only becomes more important and more difficult for challengers to replicate. Consider Harvey. If Harvey deeply understands how a particular law firm structures its work, the templates, the review processes, the institutional preferences, the way a specific partner likes her memos done, there is simply no way a new entrant can replicate that overnight, even with the cost of coding being zero.
[00:15:09] That kind of embedded workflow knowledge becomes more powerful, not less, as software moves from a system of record to a system of action, because you can just do much more with that knowledge. So as the underlying models improve, Harvey's orchestration layer, the scaffolding that routes model output through specific professional workflows, compounds in value.
[00:15:34] Better models don't make the application layer thinner, they make it more capable. Because the hard part was never raw intelligence, it was knowing what to do with it.
[00:15:47] Platform shifts create new winners and new moats.
[00:15:52] But there's one final source of durable competitive advantage that we find particularly exciting as investors, and that is counter positioning. Counter positioning is a kind of power that can be summoned and wielded by new entrants to a market.
[00:16:07] It's when the new company has a business model which, for whatever reason, is unattractive for the incumbent company to compete against. Disruption theory from Clay Christensen is a classic type of counter-positioning, but it doesn't always have to be low-cost; there is also differentiated counter-positioning. In software, a new technology stack can create the opening for a startup to build new kinds of products and business models that are difficult for incumbents to replicate, like Databricks and their lakehouse model. The agent model of doing work and replacing tasks is certainly going to create some counter-positioning opportunities for new startups to challenge incumbents. There's been a lot of ink spilled on the disruption of per-seat pricing at the hands of agentic upstarts with value-based pricing. Let's take customer service as an example. Decagon prices its customer support product per conversation handled, not per agent seat, and will eventually price per resolution achieved. That's fundamentally a better alignment of incentives between vendor and buyer. An incumbent like Zendesk can't easily make that same move without cannibalizing its own seat-based revenue, just as Blockbuster couldn't match Netflix's subscription model without destroying its existing economics, or PeopleSoft couldn't match Workday's SaaS model without upending its monetization.
[00:17:31] Companies that start with the new business model don't face that dilemma. And it's the core reason why platform shifts so reliably produce new winners. But guess what? The total amount of end state pricing power in the market didn't necessarily decrease. It just means customers now have a choice of business models they'd like to subscribe to. And the better one will win.
[00:17:57] That's how competitive markets have always worked. AI is not the first time that a wave of creative destruction has rearranged markets and shifted the playing field. But here's the thing. The business models that result almost always dwarf the old ones in the scale of the total opportunity.
[00:18:18] The great software bifurcation is coming.
[00:18:23] So yes, AI will definitely change vertical and functional software, but it won't look like a massacre. Maybe gross margins settle into a different steady state. Maybe pricing power is diminished because lower switching costs give procurement teams more leverage in vendor negotiations. But AI also supports margin expansion through a much more efficient use of labor. No matter where margins end up, we expect that scale will expand dramatically. Because, as our colleague Anish Acharya likes to say, the world is still short software. We are nowhere near saturating the world's demand for high-quality software, and as code becomes cheaper, we should just expect the market to demand more.
[00:19:12] On the other side of this AI transition, we'll be looking at a much bigger software industry that provides much more value to its customers. Companies will be able to serve more customers, enter adjacent markets, and automate workflows that were previously far too complex or too expensive to touch. Customers that were previously too low-ACV will suddenly have attractive economics. Ideas that would once have gone into the too-hard pile suddenly become interesting and feasible. There will still be moats, and as long as there are moats, there's plenty of reason to expect hugely successful and highly durable businesses to survive and thrive.
[00:19:52] AI isn't going to destroy the software industry. It's going to split it into two parts. There really will be some categories of software companies that face genuine pressure. Front-end tools that serve primarily as thin wrappers around commodity functionality and do relatively little beyond presenting data in a slightly more convenient format are vulnerable. Incumbent systems of record that still operate on archaic interfaces but raise prices every year should be worried.
[00:20:20] So should software companies that have an outdated pricing model and value proposition that's just inferior to what AI native competitors can offer.
[00:20:28] The companies that win in this environment will be the ones delivering genuine value, not the ones that built the highest walls around their customer base.
[00:20:39] But that's just creative destruction. It's great for the industry that these companies are facing pressure that they weren't facing before.
[00:20:48] Some of them will figure things out and get stronger. Others won't and will die. That's good. The rest of the software ecosystem, the companies that are committed to delivering real value for their customers, is set to grow massively.
[00:21:05] So yes, some individual companies will lose, but the industry will win and win big.
[00:21:12] The SaaSpocalypse isn't the death of software, it's the start of something much bigger.
[00:21:21] All right, so that wraps us up.
[00:21:24] Shout out to the authors of this one. I have the link to the article this is from; I think it's the a16z Substack.
[00:21:33] But there's just a ton of really, really good, honestly, very simple and very good economics in this article.
Maybe unintentionally, but just kind of in
[00:21:45] the broad concepts that they discuss, and how they talk about the evolution of markets, things that are deeply aligned with Austrian theory, probably without intending to be. Probably the best thing to connect this to is Jevons paradox: when you make something more available and cheaper to produce, the demand for it inevitably increases. So many people think, oh well, it won't be viable as a business anymore, or it will just be totally replaced because it's too easy to create, when in fact the opposite happens. And there's actually a line in this that I really liked that kind of demonstrates the concept: companies will be able to serve more customers, enter adjacent markets, and automate workflows that were previously far too complex or too expensive to touch.
[00:22:43] Customers that were previously too low-ACV will suddenly have attractive economics. What they're talking about here with too low ACV, that's annual contract value (or annual customer value, something like that), is basically how much each new customer brings in versus the cost of sustaining whatever it is that you're providing that customer. So if you build a new area of your business, or a new feature, that only brings in 10 customers, and it only makes sense to charge those customers $5 a month, well then you'd better be able to provide that feature for less than $600 a year, or you're wasting money on it. This is exactly why certain features or software or customizations or preferences don't get answered in a market. It's the niche problem, right? It's the same reason why on broadcast media, on cable news or cable TV, you wouldn't see a show that was just tutorials about Bitcoin wallets, or unboxing videos for popular products, or even something kind of crazy and high-risk like Black Mirror. You would only ever see it on very specific platforms or at production companies that knew exactly why or how it would benefit their audience. But then YouTube and streaming come along, and that changes the nature of how that content is provided, and what the ACV, the cost-to-benefit ratio, of attracting certain niche audiences, or being able to serve some small subset of a network or some small group or some cultural niche, actually is. And suddenly all of these things are available.
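To make that back-of-the-napkin ACV math concrete, here's a minimal sketch. The numbers ($5/month, 10 customers) are just the illustrative figures from the discussion above, not anything from the article itself:

```python
# Illustrative ACV break-even check for a niche feature.

def annual_revenue(customers: int, price_per_month: float) -> float:
    """Total annual contract value the feature's customers bring in."""
    return customers * price_per_month * 12

def feature_is_viable(customers: int, price_per_month: float,
                      annual_cost: float) -> bool:
    """A niche feature only makes sense if the revenue it attracts
    exceeds the annual cost of building and sustaining it."""
    return annual_revenue(customers, price_per_month) > annual_cost

# 10 customers at $5/month is $600/year of revenue...
print(annual_revenue(10, 5.0))            # 600.0
# ...so the feature must cost less than $600/year to run.
print(feature_is_viable(10, 5.0, 500))    # True: under the ceiling
print(feature_is_viable(10, 5.0, 2000))   # False: too expensive for the niche
# As AI pushes build-and-maintain costs down, the same niche flips viable.
print(feature_is_viable(10, 5.0, 100))    # True
```

The Jevons-paradox point is just the last line: the feature itself didn't change, only the cost side did, and that alone moves it from "not worth building" to "worth building."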
Suddenly Netflix can take the risk on something like Black Mirror, because keeping the 50,000 or 100,000 users who can't find that content somewhere else on the platform, and having a variety of content types and ratings and lengths and seriousness versus stupidity, all the various types of content that might serve a huge swath of customers, actually benefits Netflix: they sell one subscription, and most people only need two to three good shows that they might want to binge watch or explore to justify keeping that subscription. And in addition, the delivery of that content has a lower overhead now. Then think about YouTube, and the number of people who could survive or benefit from just doing reviews of products and being able to branch out, because people can simply pick and choose. I mean, just take the whole idea of pay-per-view, right? That was a huge thing in cable and broadcast media: after the technology, quote unquote, advanced and you could bill per customer, you could pay per view, and that was like a big deal. Now basically everything is some form of that.
[00:26:00] You just watch the content that you specifically select, at the time that you specifically want to, and there's no schedule. You don't have to wait until 5 o'clock in the afternoon to watch your tutorial videos. You don't have to wait for the next episode of Black Mirror; you literally get to binge watch the entire three seasons or five seasons, whatever it is, in one day if you feel so inclined. This wasn't even possible until the economics around providing the service, around paying for the service, around the platform delivery, and around the exposure to the size and breadth of the audience changed dramatically with the nature of the Internet.
[00:26:47] And is there less media today, or less money going into production and provision and content creation, than in the 1990s, because it's easier to do?
[00:27:01] I think about the fact that there's now this entire subset of economic activity called content creator, myself included. This podcast doesn't exist without that platform shift. Without the change in those economics, you couldn't possibly sustain a show about, you know, economics and philosophy and engineering and technological shifts in Internet history called Bitcoin Audible on television, on broadcast media back in the 1990s. Obviously none of that stuff existed then, but this is a very niche audience, and it's kind of a niche within a niche too. Bitcoin and crypto is a pretty big environment now, and I do not serve the broader audience; there's a lot of stuff that I talk about that wouldn't make any damn sense to them. And most of them actually want to trade garbage, right? They're in it because they're like, how can I make money, and when do I sell this token and buy this token? That is not what my audience is. That is not what my show is. I hate that. That's total garbage. I might as well just have a show about casinos. But new technologies and new platforms that lower the barrier to entry and the barrier of service provision fundamentally change the economics of being able to actually participate in that market. And specifically, in the context of Jevons paradox, they open it up to providing services for a vastly greater array or variety.
[00:28:31] And again, niche audiences, smaller networks, and smaller, less important features that did not make economic sense when the costs were too high now suddenly become available. And that same thing is going to happen to software. That same thing is kind of already happening to software; I think it's just largely happening
[00:28:51] in a more isolated way, in kind of the tinkerer or solopreneur sense: people are building things for themselves. That's exactly what I've done. And in fact, everybody that I've talked to or listened to, like Stephan Livera and Marty Bent, everybody in kind of the Bitcoin podcasting circles, has created their own custom workflows and setups, and they're all using AI to make their implementations faster and get from zero to episode published with lower cost and lower turnaround. And this is where another thing comes in that they specifically mentioned in this article, and I love this kind of distinction. One of the biggest things that I think people constantly miss is that value shifts to where the new challenge is, when you make something automatable, when you get intelligence to be able to do something.
[00:30:02] There's this drastic oversimplification of how economies work, of the different layers of an economy. Some of the layers become much more clear when you solve a layer underneath; it becomes far simpler to understand how the next layer up becomes the focus.
[00:30:25] This is slightly oversimplified, but I think it's a good example just conceptually.
Take being a good painter versus getting a good piece of art. We often conflate the process, the act or the skill of doing a thing a particular way or with a particular tool, with the value of the thing itself. Right? A great piece of art, the thinking goes, is great because it's very, very difficult, it's valuable because it's very, very difficult to make something that's pretty. You have to be a good painter, and it's really hard: you've got to get all these brushes, you've got to figure all this stuff out. Or guitar and music, right? Oh, I'm valuable because I have spent all of this time and investment learning how to pluck the guitar, and I can do it very well, from memory and with my muscle memory, and I can do it offhand. I'm very skilled at
[00:31:21] creating music with the guitar. So my value is in my ability to master the tool itself. But then the tool changes. And what happens when the tool changes? We recognize that being able to play the guitar doesn't really matter if there's no good music, or you don't understand music theory, or you don't understand.
[00:31:37] You don't have the sense of creating good music. Because the real value is the music. The real value is the beauty in the painting, in the artwork itself. And in this sense, there's a really good,
[00:31:51] really strong section in this, and I've found this with a few other articles and other people talking about it, as to where the strongest part of the value proposition in doing this work lives. It's in the process.
[00:32:06] So let me reread this section. It says: perhaps the strongest moat of all in this new era is process power, or as George Sivulka of Hebbia calls it, process engineering.
[00:32:19] Application software can be thought of as a stored process. It encodes opinions about how the function of an organization should operate, and those opinions calcify over years and decades of use into something that is inseparable from the organization itself. Successful app software companies are the ones that co-evolve with their clients around these workflows. As those workflows penetrate ever deeper into an organization, process engineering only becomes more important and more difficult for challengers to replicate. So what do we mean by this? I know I read the article, and I think I read it on the show; I'll have to dig into the feed. It wouldn't have been that many episodes back, because we've talked about AI coding and vibe coding stuff recently.
[00:33:04] But it was about how a process is only actually created, a process only actually emerges, through the act of doing it over time, as you run into reality, as you run into the need and the judgment and the value that the tool itself is actually providing. And actually, I don't think I did read it; I think it was from Peter Steinberger.
[00:33:34] I think that's his last name, the guy who created OpenClaw.
[00:33:37] I think this was like a long Twitter post or something that I read.
[00:33:41] I'm having a hard time placing it in my memory.
[00:33:44] But it was about the fact that what he is managing, when he's running multiple agents at the exact same time, making 6,000 commits to GitHub every single month with like 15 agents or whatever doing different jobs, is the process. He ends up directing this orchestra: where to go, how to think about what we learned, how to adapt to those new things. Which means, essentially, goals, judgment, and long-term memory become the value that you can provide, because you tweak and guide things as they move forward. You know, memory isn't simply a task of trying to find the right pieces or having all of the context around one particular thing. It's like, you can't just read a billion books and then you're smarter or have a better sense of the world. You also have to filter what a good book is; you can just as easily fill your head with garbage because you read a billion of the wrong books. It matters just as much, actually it matters far more, especially in a world where there's an infinite amount of content, where the price to create content is essentially zero, to filter and judge what is valuable and what is a good use of your memory. And that's exactly the sort of thing anybody who's worked with an agent for long enough knows: memory is one of the biggest problems, and process is one of the biggest problems. Because of the number of times you can go down the wrong rabbit hole, or not recognize that a lesson that's supposed to have been learned has not actually been internalized, or get out of context with your model so that it broke a bunch of things, and you have to figure out how to reel it back in and clear your context so that you can put those things at the forefront of the most critical things to consider.
[00:35:41] Well, this is exactly what you would have to keep in account, or build into your structure, kind of your gateway or the system you're actually working in, which you can code with your AI agent to basically insist that those things are never lost in context. That means you have to do something different than what the agent with the LLM itself is actually doing. Because it's not just in the weights or in the context; it's in the process, it's in how you actually apply those things and then create a custom workflow around what you value, or what you think are the most important judgments. Because you might have software that is most concerned with the features you want to implement and having fun gadgets and stuff for your customers. Or you might have software where your goal is trust, your goal is reliability, your goal is having a base that cannot be effed with. And if a few features on the front end aren't directly or immediately provided, or come with some sort of a trade-off, that's worth it, if you can know that someone can always go to your application and do the one thing you need that application to do. Those are different considerations, those are different judgments. Those are things that humans must be involved in to make calls on. Because judgment is specifically a mortal thing. Judgment is an extension of value, and value only comes from living in the real world and having a risk, the risk that maybe you walk out one day and die. And there's so much of this doom and gloom that AI is going to replace everybody, and there aren't going to be any jobs, and this time is different. And it's incredible to me how blind people still are to this, that that's just not how it works. And it's not that, oh, we just got to a point where suddenly it does work that way. It just fundamentally doesn't work that way.
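A crude way to picture that kind of gateway, as a minimal sketch (every name here is hypothetical, not any real agent framework's API): a wrapper that pins your non-negotiable judgments and hard-won lessons to the front of every prompt, so no context reset can lose them.

```python
# Hypothetical sketch of a "gateway" that pins core judgments into every
# agent prompt so they survive context resets. Names are illustrative,
# not any real framework's API.

class Gateway:
    def __init__(self, core_judgments):
        # Values the human has decided must never be lost from context.
        self.core_judgments = list(core_judgments)
        self.lessons = []  # lessons learned along the way, also pinned

    def learn(self, lesson):
        """Record a lesson so every future prompt carries it explicitly."""
        self.lessons.append(lesson)

    def build_prompt(self, task):
        """Wrap each task with the pinned judgments and lessons."""
        header = ["NON-NEGOTIABLE JUDGMENTS:"] + self.core_judgments
        if self.lessons:
            header += ["LESSONS ALREADY LEARNED:"] + self.lessons
        return "\n".join(header + ["TASK:", task])

gw = Gateway(["Reliability beats features", "Never break the signing path"])
gw.learn("Agent previously deleted tests; always run tests before commit")
prompt = gw.build_prompt("Add a backup-export button")
```

The point of the sketch is just the shape: the judgments live outside the model, in the process the human owns, and get re-injected on every single task.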
And I'll tell you a simple reason to understand why. LLMs are a map.
[00:37:45] They're a map.
[00:37:46] They are weights derived from human judgment, human assessment, and human connection to the real world. All that information is aggregated together and patterns are determined within that set of data. But if that data is not generated from real-world experience, from basically clashing up against the contradictions of the universe, then they aren't valuable anymore. They won't actually create a consistent experience or be something that will sustain itself in the real world. It has to have some sort of interaction. It has to have some sort of means to judge what a good output is. And it can't be circular. You can't take that output and put it back into the input and think you're going to get a better system. There's a reason we've seen this fundamentally: when LLMs try to train themselves on LLM output data, or when image models try to train themselves on the best output of the image model, the models degrade.
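You can see a toy version of that circularity in a few lines. This is only an illustrative simulation, not a claim about any specific model: each "generation" is a distribution fitted to samples drawn from the previous generation's fitted distribution, and with a small sample size the estimated spread collapses toward zero over time.

```python
import random
import statistics

# Toy illustration of model collapse: generation 0 is "real-world" data,
# and every later generation is trained only on samples from the previous
# generation's fitted model. Finite-sample error compounds each round.
random.seed(42)

def fit(samples):
    """'Train' a model: estimate mean and spread from the data."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, std, n):
    """'Sample' from the trained model."""
    return [random.gauss(mean, std) for _ in range(n)]

n = 10
data = generate(0.0, 1.0, n)      # generation 0: drawn from reality
initial_std = fit(data)[1]

for _ in range(300):              # each generation eats the last one's output
    mean, std = fit(data)
    data = generate(mean, std, n)

final_std = fit(data)[1]          # the spread has drifted toward zero
```

Nothing in the loop ever touches the original distribution again, which is exactly the circularity being described: the map keeps copying itself instead of the territory.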
[00:38:51] This isn't something that we're going to engineer around.
[00:38:55] It's a universal truth about the nature of what these things are. They are derivatives. They cannot then be better than what they are derived from. It is a map of intelligence. And intelligence only matters if it corresponds to the real world and to real value judgments, for something that is alive and can die one day. You cannot draw a more accurate map because you've got a better process from the previous map. You can take that process into the real world and draw a better map because you have a better system or a better way to draw it. But the map will never, ever, ever be as complex or varied, or be able to better account for the reality of the situation on the ground, than reality itself. Reality will always be infinitely more complex and infinitely deeper than any map of it. In fact, the map is wrong as soon as it's drawn, because everything changes.
[00:39:58] And there's only so deep a degree that a map can go in actually determining the accuracy or the precision of a pattern. And the bigger and more general the attempt at making that map, or that pattern-weighting system, the less efficient it is at doing the job. And this is basic economics. If we try to create a superhuman AI that can do everything in one big giant model, it's just going to fail. It's going to fail due to basic economics. It will cost too much to run, and it will cost too much to get marginal improvements in the weighting of that system itself, compared to creating thousands of tiny, specific models and having one general agent that can spin up a hundred other agents to help manage, while it's all conducted by a human who creates a series of processes and structures that are actually specific to the thing they are trying to do and the goal they are trying to achieve. And these processes will get deeper and more complex and more specific to everything we are trying to accomplish, to the point that the complexity itself becomes exponential, such that the bigger models have an even worse time trying to achieve it or answer it. And all of this actually only unfolds in the process of doing the thing. You can't just take a picture of it, map it, and then suddenly it works forever. And that's how everything goes. It literally requires time and response and interaction to be developed. And as it's developed, it changes the environment, which then forces new development and new change, because all of these things are interacting and nothing is static. It's like trying to predict the weather by knowing where every single molecule is in the air, knowing every direction and velocity, and then trying to calculate what's going to happen in the next six weeks by watching all of these molecules bounce into each other.
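The cost argument can be sketched with made-up numbers. Assume, purely for illustration, that small specialized handlers exist for the common task types and that anything novel falls back to an expensive general model; every cost and name below is hypothetical.

```python
# Hypothetical sketch: route each task to a small specialized "model"
# when one exists, instead of sending everything to one giant general
# model. All costs are made-up numbers purely for illustration.

SPECIALISTS = {
    "summarize":   {"cost_per_task": 0.001},
    "translate":   {"cost_per_task": 0.002},
    "code_review": {"cost_per_task": 0.004},
}
GENERAL_FALLBACK_COST = 0.10  # the big model handles anything, expensively

def route_cost(task_type):
    """Cost of one task: specialist price if a specialist exists."""
    spec = SPECIALISTS.get(task_type)
    return spec["cost_per_task"] if spec else GENERAL_FALLBACK_COST

# A workload where most tasks are routine and only a few are novel.
workload = (["summarize"] * 50 + ["translate"] * 30
            + ["code_review"] * 15 + ["novel_task"] * 5)

routed_cost = sum(route_cost(t) for t in workload)
giant_only_cost = GENERAL_FALLBACK_COST * len(workload)
```

Under these toy assumptions the routed workload costs a small fraction of running everything through the giant model, which is the economic pressure toward many small, specific models with a general orchestrator on top.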
It's not an intelligence problem, it's a sheer compute problem. And every time we do something that can further automate or better provide any of those things, everything gets more complex, because your output is now part of the input. This is exactly what we see in economics. As soon as the measure itself becomes the goal, it ceases to be a good measure. As soon as we have a pattern in the market where we're like, oh, this is definitely the way that everybody's going to make money, well, it breaks down, everything fails, and the pattern dies. Why? Because you can actually only isolate and take advantage of that pattern if you're doing it in a tiny, minimal sense. As soon as you're large enough, or there's enough volume or liquidity trying to take advantage of that pattern, the pattern stops working. Because you're predicting the pattern, and you are an output as much as you are an input in the market. This is exactly true of our technology as well. This is how all of this shit works. It's never different. It's just exponentially bigger and faster. And wherever the value goes, to whatever new layer, it simply is less and less obvious until we have technology that actually solves the layer we are so die-hard focused on right now. And what we are doing is moving into a new layer. And so everybody thinks this time is different. Because it's obvious now that Photoshop was never going to kill or destroy all graphic design and all artwork ever.
[00:43:26] Well, yeah, in hindsight it makes perfect sense, but in foresight, nobody saw that. In foresight, photos killed painting, it destroyed art. Photography was literally the end of the world. And now we can see that obviously it didn't. Photography is great, and we need it for all of these things, and it made accessible billions of things that were never accessible before. Now I can have pictures of my family. I don't have to go sit in front of someone painting my picture for three hours just to get some historic image of what I used to look like when I was 20 years old. Now I can just take pictures of my family and my friends and every event that I go to. It was brilliant. It drastically changed the economics of everything around the capturing of history and the creation of art and beauty. This time is not different.
[00:44:18] Universal truths don't just hand-wave away because we built a new tool. And I urge anyone to go into any of these Bitcoin and AI groups, or the OpenClaw groups, and tell me there's no work involved here, that you can just tell an agent to do stuff and it will run away and do everything, and everything's solved and there's no problem.
[00:44:43] Go ahead, go ahead. All I see are people frustrated. All I see are people trying to do bigger things than they've ever done before, running into roadblocks, having to clear context, and having to figure out far bigger and more advanced memory problems: how do I get it to associate context, to be able to pull context from every single thing on my hard drive?
[00:45:05] The problems will just scale exponentially with the exponential scale of the tool and technologies that we are using to take advantage of them. This is how it worked yesterday, this is how it works today, and this is how it will always work.
[00:45:20] And to the contrary, I really like this article. Application software will not be replaced. It will not die. It will actually become vastly more available, more diverse, and it will eat everything.
[00:45:34] And humans will not be replaced, because we will now start doing and engaging in so many different tasks and so many different goals that we will need anything and everything we can to manage, direct, perfect, and provide judgment for all of the agents that we have. The people who will lose their jobs and become poor are the people who simply have no agency or no motivation to do anything. And they were always going to be unsuccessful. They were always going to be miserable, because they have no goals anyway. Anybody who actually has some motivation, wants to achieve something, and has something of value they want to obtain will find that thing vastly more affordable and accessible, and will be able to create whatever they want to create far more easily than ever before. Because that's exactly what these tools do. And there will be tons of disruption, tons of chaos, tons of shifting sands all over the place. There will not be a strong foundation except for die-hard, blue-collar, age-old work that will maintain value, at least in the short to medium term. But all of that will be a good thing, because everything will be getting more accessible. I really just think the people who think we're gonna break into some superintelligence, and everybody's gonna be replaced, and there's not gonna be any jobs, just don't understand the basics of economics.
[00:47:00] Economics is relative.
[00:47:02] Economics is relative. It is not absolute. There's no source of absolute anything in economics. It's all relative. Which means if you move one thing, it just changes the relative importance of something else. It's like velocity. Saying everything's going to start moving so fast one day that nothing will ever move slow again misses that if everything's moving fast together, then everything is effectively standing still; whether you're going slow or fast is just relative to your current frame of motion. It's like everybody's gonna be on a speeding bullet train, and we're still gonna be walking back and forth, we're still gonna be doing things.
[00:47:43] It is all relative.
[00:47:46] Value works the exact same way. In fact, it cannot be defined in any other way. Like it is purely relative. Which is exactly why a price is only useful if you know what the price of another thing is.
You know, if I say that this microphone sitting in front of me costs a bajillion wingdings, you have absolutely no idea if that's expensive or not.
[00:48:12] But what if I tell you that a cup of coffee costs two bajillion wingdings? Well, now you have some sense of whether or not that's expensive. But you might be wrong, because you don't know where I am and how difficult it is to obtain coffee in the place that I am. You only know your own experience: coffee is relatively easy to obtain and not scarce where you are. You can go to a Starbucks that's probably less than two blocks away right now and get yourself a cup of coffee with a relatively short amount of exchanged work, maybe an hour, maybe less, maybe a quarter of an hour's worth of your work will actually obtain that coffee. But you had no idea whether it was expensive until I told you how much the coffee cost. That is exactly how and why money actually works: it doesn't change, so that it can be used to weigh the difference between those two things. However, if I buy the mic today and there's only a bajillion wingdings in existence, and then I buy the coffee in five weeks and there's 10 billion times the number of wingdings in existence, well then, that comparison is utterly meaningless. It doesn't mean anything at all. That is the shittiest money on earth, because it cannot possibly compare something five weeks in the future against five weeks ago, which defeats the very purpose and value and coordination capability of money itself. And it's why every money that inflates itself dies: it unsolves the one problem it is fundamentally supposed to solve. If anybody who listens to this show learns anything, it better be that. It better be the concept of and reason for money's existence, and why it necessarily is the rock, the fundamental weight that doesn't change. Because all economics is relative. All value is relative.
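The wingding arithmetic is easy to sketch. Under a crude quantity-theory assumption that nominal prices scale proportionally with the money supply (all numbers below are made up for illustration), comparing raw prices across time is meaningless, but the relative price survives if you normalize each price by the supply at the moment it was quoted.

```python
# Made-up numbers illustrating the wingding example. Crude assumption:
# nominal prices scale proportionally with the total money supply.

supply_t0 = 1.0   # "a bajillion" wingdings in existence (normalized to 1)
mic_t0 = 1.0      # the mic costs one bajillion wingdings today
coffee_t0 = 2.0   # a coffee costs two bajillion wingdings today

inflation = 1e10  # five weeks later: 10 billion times as many wingdings
supply_t1 = supply_t0 * inflation
coffee_t1 = coffee_t0 * inflation   # nominal price scales with supply

true_ratio = mic_t0 / coffee_t0     # the real relative price: 0.5
naive_ratio = mic_t0 / coffee_t1    # raw cross-time comparison: garbage
adjusted_ratio = (mic_t0 / supply_t0) / (coffee_t1 / supply_t1)
```

The naive cross-time ratio is off by ten orders of magnitude, while normalizing recovers the 0.5 exactly, which is the job a money with a fixed supply does for free: the unit itself is the stable denominator.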
And money sucks if it cannot be a good, stable measure, a good thing to weigh against, not only from one space to another, not only from one time to another, not only between one judgment and life experience and another, but from the past to the future. And the most reliable way for it to be able to accomplish that is to be as scarce as physically possible. If it cannot be that, then all of the other attributes of money don't matter: the fact that it moves reliably, that it's portable and can be transmitted across space, time, and the various people and experiences in existence, that it's durable, which is just an extension of the fact that it's portable across time. All of those things don't matter, because it can't hold the value needing to be exchanged that those other attributes are even good for. And AI is not going to change that.
[00:51:12] It's just gonna f with our current relative assessments because it's going to fundamentally change the assessment of or the relative value of a ton of things. But relative value will not go away.
[00:51:28] The nature of value will not change; it will not stop being relative. But that's what the suggestion amounts to: that AI will run everything, there will be no jobs, nobody will ever do anything, there won't be an economy, blah blah blah, and we'll just live our lives on UBI. That is the notion that value will stop being relative and we'll just achieve all of our ends. And honestly, the very notion should be seen as absolutely idiotic by anybody who's ever imagined a sci-fi future. To the contrary, most of the things we dream about doing aren't possible until you have something like AI agents doing the grueling grunt work of everything else. If we want to be a civilization that goes to different stars, you'll never be able to do that without something like AI. Are you kidding me? This is a necessary step to actually gaining the complexity and the power and the capability to do all of the things we imagine. And when we start to do those things, we'll just imagine vastly more complex and ambitious things that weren't even imaginable until we crossed that Rubicon. Whether you will succeed or fail, whether you will have anything you want in a world with AI, will be entirely up to you. Because it's going to get easier, not harder, in the exact same way, but probably to a bigger extent, than the Internet made everything before it. So if you're a doomer, you should think about that. Take that with you, and I'll catch you on the next episode of Bitcoin Audible. I hope you enjoyed this one.
[00:53:04] I am Guy Swan, and until next time, everybody, that's my two sats.