Episode Transcript
[00:00:00] The end of mankind wouldn't even be biological in such an instance. It would be neurological.
[00:00:10] The best in Bitcoin, made audible. I am Guy Swann, and this is Bitcoin Audible.
[00:00:33] What is up guys? Welcome back to Bitcoin Audible. I am Guy Swann, the guy who has read more about Bitcoin than anybody else you know. We've got a great read today, as always.
[00:00:45] I don't think we do. We don't do non-great reads. In fact, I read a lot of crap to find the good stuff to cover on the show, and invariably the best stuff is stuff that you guys recommend. So huge shout out to everybody who does that and sends me articles or ideas of what topics to cover, all that sort of stuff. It is actually a huge, huge help, and thank you guys for that. I am pretty sure this article is in fact one that I got shared, but I did just save it in my private Keet room for my reading list without noting who it was. So if it was you, thank you. I'm sorry I don't get to mention you on the show, but this is such a cool article. This is such a good thesis or idea. Like, it still needs to be fleshed out and kind of battle tested to really know if this thesis is an accurate picture of how to think about this stuff. But I think it's intuitively obvious that it's true.
[00:01:54] Not that it's kind of an overarching thesis that explains everything, but that it's true in the sense that you can't use a pattern compressed from reality to train a new set of patterns that are more aligned with reality. And that probably doesn't make any sense yet, but it will when we get to the Guy's Take afterward, because I try to really expand and actually add one additional layer to Copernican's argument here that I think he might not be aware of, but that actually is another potential way in which it reinforces his argument, at least in my thinking. I'm always coming at it from economics and that sort of thing. But a quick shout out to the HRF, the Human Rights Foundation. They do absolutely amazing work.
[00:02:44] They also have the Financial Freedom Report, which is on my short list of cannot-miss newsletters for keeping up on really important events around the world. Especially when it comes to what authoritarian regimes are doing, where government surveillance tech is being sold and tested, and where CBDCs are being implemented, to see where they succeed and where they fail, because they end up being the example for the West.
[00:03:15] Like more powerful countries literally use these smaller authoritarian regimes that they can easily control and manipulate with money and debt essentially as guinea pigs as test grounds for how to implement these same things in the West. Therefore, if we get blindsided by it and don't know what to do, that's our own fault because we had the Financial Freedom Report. You know where to find the link because I always make it oh so convenient for you. And also don't forget to check out the new website.
[00:03:43] Still a lot to come on that, but the overall frame and the new look and layout is finally, finally out at bitcoinaudible.com, and you'll be able to find all the links and all of that good stuff on the website very soon. I'm putting a collection together, as well as a way to deep search the library of this entire show, so lots of really cool things coming on that front. Stay tuned. Don't forget to bookmark it.
[00:04:10] So with that, let's get into today's article, and it's titled "Urban Bugmen and AI Model Collapse: A Unified Theory," by Copernican. A solution indicating that Mouse Utopia is an inherent property of intelligent systems.
[00:04:34] The problem is information fidelity loss when later generations are trained on regurgitated data.
[00:04:43] This is a longer article because I'm trying to flesh out a complex idea similar to my article on the nature of human sapience. This is well worth the read.
[00:04:54] Introducing Unified Model Collapse. I have been considering digital modeling and artificial neural networks. Model collapse is a serious limit to AI systems, a failure mode that occurs when AI is trained on AI generated data.
[00:05:13] At this point, AI generated content has infiltrated nearly every digital space and many physical print spaces, extending even to scientific publications.
[00:05:23] As a result, AI is beginning to recycle AI generated data. This is causing problems in the AI development industry.
[00:05:33] In reviewing model collapse, the symptoms bear a striking resemblance to certain non digital cultural failings.
[00:05:41] Neural networks collapse, hallucinate, and become delusional when trained only on data produced by other neural networks of the same class.
[00:05:52] And when you tell your retarded tech bro boss that you're training a neural network to do data entry upon hiring an intern, are you not technically telling the truth?
[00:06:03] I put real hours into the thought and writing presented here. I respect your time by refusing to use AI to produce these words, and hope you'll consider mine in purchasing a subscription for $6 a month. I am putting the material out for free because I hope that it's valuable to the public discourse.
[00:06:23] It may be that by happenstance in AI development we have stumbled upon an underlying natural law, a fundamental principle.
[00:06:34] When applied to trained neural network systems, information fidelity loss and collapse may be universal, not specific to digital systems. This line of reasoning has serious sociological implications. Decadence may be more than just a moral failing. It may be universally applicable.
[00:06:56] Model collapse is not unique to digital systems. Rather, it's the most straightforward form of a much more fundamental underlying principle that affects all systems that train on raw data sets and then output similar data sets. Training with regurgitated data leads to a loss in fidelity and an inability to interact effectively with the real world.
[00:07:23] The nature of AI model collapse. The way neural networks function is that they examine real world data and then create an average of that data to output. The AI output data resembles real world data. Image generation is an excellent example, but valuable minority data is lost. If model one trains on 60% black cats and 40% orange cats, then the output for cat is likely to yield closer to 75% black cats and 25% orange cats. If model two trains on the output of model one and model three trains on the output of model two, then by the time you get to the fifth iteration, there are no more orange cats and the cats themselves quickly become malformed Cronenberg monstrosities.
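To make that mechanism concrete, here is a minimal toy sketch in Python. The "model" at each generation is just an empirical distribution refit on samples drawn from the previous one, and the sharpening exponent is my own stand-in assumption for the mode-seeking averaging described above; none of the numbers come from the article or the Nature paper.

```python
import random
from collections import Counter

def train_next_generation(dist, n_samples=1000, sharpen=1.5):
    """Toy 'model': sample a dataset from the current distribution, then refit.
    The sharpen exponent is an assumed stand-in for the averaging/mode-seeking
    behavior described above (60/40 in, roughly 75/25 out)."""
    samples = random.choices(list(dist), weights=list(dist.values()), k=n_samples)
    counts = Counter(samples)
    # Refit with a bias toward the majority class, then renormalize.
    weights = {c: (counts.get(c, 0) / n_samples) ** sharpen for c in dist}
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

dist = {"black": 0.60, "orange": 0.40}   # generation 0: the real-world cat data
for gen in range(1, 9):
    dist = train_next_generation(dist)
    print(f"gen {gen}: " + ", ".join(f"{c} {p:.1%}" for c, p in dist.items()))
# The orange-cat share shrinks every generation and effectively disappears:
# the long tail is the first thing a model-of-a-model forgets.
```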
[00:08:13] Nature published the original associated article in 2024 and follow up studies have isolated similar issues.
[00:08:20] Model collapse appears to be a present danger in data sets saturated with AI generated content.
[00:08:27] Training on AI generated data causes models to hallucinate, become delusional and deviate from reality to the point where they're no longer useful. In other words, model collapse.
[00:08:40] The more poisoned the data is with artificial content, the more quickly an AI model collapses. As minority data is forgotten or lost, the majority of data becomes corrupted and long tail statistical data distributions are either ignored or replaced with nonsense.
[00:09:01] AI model collapse itself has been heavily examined, though definitions vary. The article: "Breaking generative AI could break the Internet."
[00:09:11] A decent article on the topic. The way AI systems intake and output data makes it easy for us to know exactly what they absorb and how quickly it degrades when output. This makes them excellent test subjects. Hephaestus creates a machine that appears to think, but can it train other machines?
[00:09:31] What happens when these ideas are applied to man or other non digital neural network models?
[00:09:38] Agencies and companies will soon curate non AI generated databases in order to preserve AI models. The data they train on will have to be real human generated data rather than AI slop.
[00:09:51] Already there are professional AI training companies that work to curate AI with real world experts. The goal is to prevent AI from hallucinating nonsense when asked questions.
[00:10:03] Results are mixed, as one would expect with any transhumanism techno bullshit in the modern day.
[00:10:09] Let's Talk About Mice: John B. Calhoun. A series of experiments was conducted between 1962 and 1972 by John B. Calhoun. Much has been written about these experiments, a tremendous amount, but we will review them for the uninitiated.
[00:10:30] While these experiments have been criticized, they are an excellent reference for social and psychological function in isolated groups. The Mouse Utopia experiment by John B. Calhoun placed eight mice in a habitat that should have comfortably housed around 6,000 mice. The mice promptly reproduced and the population grew.
[00:10:56] Following an adjustment period, the first pups were born three and a half months later and the population doubled every 55 days afterward. Eventually, this torrid growth slowed, but the population continued to climb and peaked during the 19th month. That robust growth masked some serious problems, however. In the wild, infant mortality among mice is high, as most juveniles get eaten by predators or perish of disease or cold. In Mouse Utopia, juveniles rarely died. As a result, there were far more youngsters than normal. End quote.
[00:11:28] What John B. Calhoun anticipated and what most other researchers at the time anticipated was that the population would grow to the threshold 6,000 mice, exceed it, and then either starve or descend into infighting.
[00:11:42] That was not the result of the Universe 25 experiment.
[00:11:47] The mouse population peaked at 2,200 mice after 19 months.
[00:11:52] Just under 2 years.
[00:11:54] Then the population catastrophically collapsed due to infertility and a lack of mating.
[00:12:01] Nearly all of the mice died of either old age or internecine conflict, not conflict over food, water, or living space.
[00:12:11] The results have been cited by numerous social scientists, pseudosocial scientists, and social pseudoscientists for 50 years. You know which one you are. The conclusion that many draw from the Mouse Utopia experiment is that higher order animals have a sort of population limit. That is, when population density exceeds certain crucial thresholds, fertility begins to decline for unknown reasons. Some have proposed an evolutionary toggle that's enabled when overcrowding becomes a risk. Some have proposed that the effects are due to a competition for status in an environment where status means nothing. Mice do have their own hierarchies after all. The reasoning behind the collapse of Universe 25 into infighting, the loss of hierarchy, is still up for debate. It did occur.
[00:13:00] The resultant infertility of an otherwise very healthy population, senseless violence and withdrawal from society in general have been dubbed the behavioral sink.
[00:13:14] I am aware that many consider this experiment to be a one-off. It was repeated in other experiments by John Calhoun, but no one has replicated it since. I'd love to do more of these experiments, but university ethics boards won't approve them in the modern day and age. We need replication. End note. The Demographic Implosion of Civilization. Humans have displayed similar behaviors to those of the Universe 25 population at high densities. An article that I wrote roughly a year ago demonstrates a significant correlation between the percent urban population and the fertility rate dropping below replacement levels. It appears that between 60% and 80% urban, depending on the tolerance of the population, fertility rates drop below replacement.
[00:14:06] Under the auspice of Unified Model Collapse theory, those numbers may need to be changed. Rather than a fertility collapse occurring when a population reaches 60% or 80% urbanization, the drop in fertility would occur after the culture and population have readapted to a majority urban environment.
[00:14:27] How long it takes the fertility rate to decline would then be proportional to the cultural momentum.
[00:14:34] Rarely will it take longer than a full generation, 30 years, and frequently it'll be as short as a decade.
[00:14:43] Exact analysis on how long this takes will require a comprehensive look at multiple statistical models and require disentangling the long term effects of culture, economics, war, plague and other complicated factors. As a very rough rule of thumb, "within 20 years of reaching 60% urbanization" seems to hold true.
[00:15:03] With that in mind, the global human population is closing in on 60% urbanized, so one would reasonably expect the global fertility rate to fall below replacement well within our current lifespans.
[00:15:18] The current global fertility rate is 2.2 children per woman and declining. The Universe 25 population decline did not begin in month 19.
[00:15:30] The population peaked at 2,200 mice, but the decline began a generation or two prior. Lab mice reach sexual maturity at roughly six weeks of age, indicating that the decline may have begun as early as month 16 or 17.
[00:15:45] Rather than seeing Mouse Utopia, the Human demographic implosion and AI model collapse as disconnected events, the same principles may be active in all of them. The fidelity of information decays when later generations are trained solely on information created by prior entities of their own class.
[00:16:10] A Thesis: Unified Model Collapse Theory. The proposed thesis is that neural network systems, which include AI models, human minds, larger human cultures, and our individual furry little friends, all train on available data.
[00:16:30] When a child stubs his wee little toe on an errant stone and starts screaming as if he'd caught himself on fire, that's data he just received and which will be added to his model of reality. The same goes for climbing a tree, playing a video game, watching a YouTube video, sitting in a chair, eating that yucky green salad, etc. The child's mind, or rather subsections of his brain are neural networks that behave similarly to AI neural networks.
[00:17:01] The citation here is to an article discussing how AI systems are not general purpose and how they more closely resemble individual regions of a brain, not a whole brain. People use new data as training data to model the outside world, particularly when we are children. In the same way that AI models become delusional and hallucinate when too much AI generated data is in the training data set, humans also become delusional when too much human generated data is in their training set.
[00:17:34] This is why millennial midwits can't understand reality unless you figure out a way to reference Harry Potter when trying to make a point.
[00:17:43] What qualifies as intake data for humans is nebulous and consists of basically everything.
[00:17:49] Thus, analyzing the human experience from an external perspective is difficult. However, we can make some broad stroke statements about human information intake. When a person watches the Olympics, they're seeing real people interacting with real world physics. When a person watches a cartoon, they're seeing artificial people interacting with unrealistic and inaccurate physics. When a human climbs a tree, they're absorbing real information about gravity, human fragility, and physical strength. When a human plays a high realism video game, they're absorbing information artificially produced by other humans to simulate some aspects of the real physical world. When a human watches a cute anime girl driving tanks around, that human is absorbing wholly artificial information created by other humans.
[00:18:40] Katyusha is best girl.
[00:18:43] Brains or brain regions undergo model collapse just like AI systems. They become unable to reference reality. They become delusional and hallucinate things that make no sense. Hence the "why do we need farmers when food just comes from the store?" level of disconnection observed in urban populations. In a heavily urban setting, humans train on data sets that are nearly wholly artificial. The less time spent outside, the less time spent interacting with the real physical world around them, the less accurate their model of reality becomes. Where exactly one draws the line between real and artificial data is subject to debate. A rocky slope up a hill may be 100% real, a grass playing field may be 70% real, and a concrete sidewalk may be around 40% real. At some point, however, the salted artificial data is sufficient to corrupt the real world knowledge of individuals and cause model collapse.
[00:19:49] Urban bugpeople aren't just delusional, they're fundamentally broken.
[00:19:55] Similarly, fixing them may not be possible without radical retraining programs to teach them about the real world.
[00:20:03] "Go live in the woods for a year or two and try not to die" might be enough, but our society would hardly remain stable through such a remedy.
[00:20:12] Post from the subreddit Agender: "I start crying when I'm taken out of densely populated areas. Greetings. I'm a 19 year old agender human. I've lived in Manhattan my entire life and I really have no desire to leave. Since I was young, when I've been taken to rural or suburban areas, I've started crying. There's something about them that makes me really hate them. They feel so boring and lonely, and whenever I'm there I get worried that I'm not going to be able to leave, or sad because I know they exist. Even now that I'm an adult, I just start crying or panicking when I'm there." I am reminded of an anecdote from when I was a child. A cousin came to play with my siblings and me. My family had been raised going camping and hiking and wandering the wilds since before I can remember.
[00:20:58] Somewhere around age 3 or 4 our cousin came to visit and we went cruising up a hill, hiking with our fathers in tow, probably looking for sticks to whack each other with. This cousin, however, had grown up in a suburban hell hole where everything was artificial. As such, he found it nearly impossible to navigate a sloped hill. His experience of walking and running had only ever consisted of flat, soft, curated environments produced by other people. He had no experience or ability in navigating a dirt trail at a 20 degree incline. His neurological model of the world was trained on human produced data and could not function when confronted with reality.
[00:21:46] When it comes to navigating the real world, urban bug people often behave as if they're retarded. Socially, they've never been punched in the face; geospatially, they have no idea how to navigate by the sun or shadows; culturally, without some pop fiction touchstone, culture doesn't exist, etc. They are entirely bound to a world of artificial ideas, human produced data, and unable to accurately model from first principles anything outside their extremely limited sphere of artificial experience.
[00:22:21] The Bug Man's neurological model of reality is divorced from reality. They hallucinate truths that make no sense, and they delude themselves into provably false ideas and violently attack anyone with a model of reality more accurate than their own.
[00:22:38] They don't understand violence, hunger, or real social organization because they've never encountered those things, and by the time they're adults, their models of reality are too set to be easily changed. As Yuri Bezmenov would say, they've been demoralized.
[00:22:56] Though I'm not sure that's the correct term for full scale neurological model collapse. I'd argue they've been corrupted and are no longer capable of understanding reality, even in the face of overwhelming evidence.
[00:23:09] This also explains why there is a threshold in percent urbanization at which human fertility declines to catastrophic levels, Just like the mice in Mouse Utopia are no longer capable of interacting with each other or breeding another generation.
[00:23:27] Universalizing the Thesis. The universal thesis for model collapse is that advanced modeling systems, when trained on information produced by entities of their own class, lose information fidelity intergenerationally.
[00:23:44] After multiple generations of training on poisoned data sets, the models themselves become delusional, hallucinate false information, and cease to function.
[00:23:55] There's a really cool image here, a graphic. There's actually one further up in the text as well, showing what happens when you have artificial data input into an AI vision model or an image generation model, and how much noise collapses into the face, like a face of an old man or whatever, being generated. But this graphic is a table of how an AI that draws digits, so just numbers, collapses after being trained on its own output. So the handwritten digits are 3, 4, 6, 8, and 9. The initial AI output looks pretty good at 3, 4, 6, 8 and 9.
[00:24:36] After 10 generations, it gets really hazy. After 20 generations, they get so hazy and kind of jumbled that it's hard to tell exactly what the number is, but you can still see their general form. And after 30 generations, every single one of them looks almost identical, and you can actually see a little piece of the form of basically every number in this hazy, jumbled mess.
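For a sense of how that blurring shows up in numbers rather than pictures, here is a hedged, continuous-data sketch in Python: a Gaussian is refit on its own samples each generation, with one assumption of mine standing in for the averaging behavior, namely that each generation slightly under-represents its own tails. The figures are illustrative only, not from the Nature paper.

```python
import random
import statistics

def fit_and_resample(data, n=2000):
    """Fit a Gaussian 'model' to the data, then produce the next training set
    from the model's own samples, dropping the rarest outputs (beyond 2 sigma)
    as an assumed stand-in for tail under-representation."""
    mu, sigma = statistics.fmean(data), statistics.pstdev(data)
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    return [x for x in samples if abs(x - mu) <= 2 * sigma]

random.seed(7)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]   # generation 0: real data
for gen in range(1, 11):
    data = fit_and_resample(data)
    print(f"gen {gen:2d}: std = {statistics.pstdev(data):.3f}")
# The spread shrinks generation after generation: the distribution collapses
# toward its own average, the continuous analogue of every digit blurring
# into one indistinct shape.
```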
[00:25:03] As applied to AI systems. For AI models, it is easy to measure the input and output data that result in model collapse. AI systems that train on their own output data, or on other AI output data, lose information value over multiple generations. Even a relatively limited amount of poisoned data can cause the AI to deviate from the real world by a significant margin.
[00:25:33] Generalizing to animal neurological models. For animals, the same applies as to AI, but there is a point at which the neural models the animals possess are sufficiently damaged as not to produce a next generation.
[00:25:48] At that point, catastrophic population decline ensues.
[00:25:54] As applied to Mouse Utopia. Universe 25 created an environment where baby mice had very little real world feedback: hunger, predators, heat, cold, wet and dry. The only information that each generation of mice received from its predecessor was derived from either original experiences or other mouse behavior. The mice were trained on data sets where there was little or no real world intrusion.
[00:26:23] As a result, their training reached a state of catastrophic failure after roughly 13 generations.
[00:26:29] At that point, the fertility dropped to zero in the youngest populations and the entire mouse society collapsed into nihilistic extinction.
[00:26:39] As applied to animals in captivity.
[00:26:43] In most instances where animals are kept in captivity, significant effort is expended to simulate a natural environment.
[00:26:52] This helps prevent weird behavioral idiosyncrasies. Smarter animals are more difficult to keep in captivity, and pandas are notorious for not breeding when kept in captivity. A mouse in a cage is generally receiving a reasonable amount of non mouse data from its keepers. At the same time, there appears to be a threshold of poisoned information that depends on the neurological structure of each animal. If overloaded solely with recycled information created by other animals of its own species, or maybe similar species, then some type of behavioral sink is going to appear.
[00:27:27] In this case representing neurological model collapse. As applied to human civilization.
[00:27:36] Humans have been referred to as a self domesticated species, well, some of our subspecies anyway. It appears that when we create our own environments, a significant percentage of the resultant data becomes poisoned by being human created data.
[00:27:53] As a result, Homo sapiens that learn about the world solely or predominantly through media are not capable of modeling reality.
[00:28:02] More abstract thinkers can reason from first principles, but they're not immune and the majority of the population cannot curate their own input data. Human minds, neural networks, create models based entirely on synthetic data. The result is that those minds become optimized for synthetic realities.
[00:28:22] Those models lose the capacity to understand long tail information, improbable but important data that is no longer represented. Information on topics like serious injuries, getting punched in the nose, how dangerous wild animals can be, and what it's like to truly be hungry because you can't find food.
[00:28:42] Their models default to synthetic human artifice instead of understanding real implications.
[00:28:49] The result is delusions about the state of the world. Ideas like "it can't happen here," "if I go to school, I'll get a nice job," or "no one needs a gun" are excellent examples. They model imaginary worlds created by other humans, resulting in a suicidal inability to interact with reality.
[00:29:08] Psychosocial model collapse is most pronounced in the most artificial cultures, hyperurban cultures.
[00:29:17] This type of fidelity loss has become apparent in the wake of studying artificial neural network systems and in light of the catastrophic global demographic decline.
[00:29:27] Demographic decline is most severe in synthetic urban environments, while rural environments and laodiceous environments appear far more resistant, though still subject, due to the global effects of the digital age. Potential Flaws in the Thesis. The following are a few counterarguments I've thought of and responses to them in the context of this thesis. If the reader can think of other counterarguments, please comment on them below. This idea is still getting fleshed out and it needs to be cross examined.
[00:30:02] Still, it does appear to accurately represent an underlying principle in thinking entities.
[00:30:08] Eusocial animals. People keep ant farms. Ant farms do not self annihilate due to infertility when kept as pets over long durations. The primary explanation is that only trained neural networks of a given complexity are subject to this degree of information fidelity loss.
[00:30:26] Instinctual behaviors are inherited and genetic and do not need to be retrained every generation.
[00:30:33] Where one draws the line between a trained behavior and an instinctive behavior remains somewhat fuzzy here, but it does indicate that at a given level of neurological complexity, data fidelity loss becomes a problem.
[00:30:47] Cultural Traditions. Humans do not function well without cultural traditions.
[00:30:54] Older cultural information seems necessary for future development.
[00:30:59] Cultural traditions are lower fidelity information condensed for easy consumption. I'd argue that there's a relatively broad range of human generated data that humans can input before it starts to become a problem. There is, however, a maximum threshold.
[00:31:15] Likewise, model collapse does not seem to affect the totality of the mind. Rather, it causes declines in specific mental models. Individually, a NEET hikikomori might not be able to interact outside his home and plays video games all day, but a businessman can interact outside the home, goes on dates, parties, gets drunk. Lord help him if he's ever left to fend for himself in a forest. However, each model needs to absorb real data. Human-to-human interactions as opposed to human-NPC interactions. Geospatial information and not GPS guidance. Real cultural institutions and not endless references to Harry Potter or video game characters.
[00:31:59] "Yeah, when that guy at the bar punched me, it was like when I was playing Skyrim, and like your stamina bar goes down really fast. I was so out of breath."
[00:32:06] A paraphrased friend who shall remain nameless.
[00:32:11] When external data is input from uniform synthetic sources such as leftist academic mantras or globalist urban culture completely disconnected from reality, there's a loss in fidelity and function over time.
[00:32:25] The result is the collapse of one's intellectual model of reality. Exactly where that line is and how fine it is remains up for debate.
[00:32:36] Conclusions: Touch Grass. In a very real way, the urban bugpeople completely diverge from reality. As Rohan Ghostwind would say, like the Gen Z boss video, all those people at the Democratic Socialist Convention are nothing more than children playing at politics. These people barely qualify as the same species. When compared to the people who fought in World War II, a lot of the LGBT aesthetic seems quite childish. There's a lot of glitter, a lot of emojis, pastels, bright colors, lots of cartoons, etc.
[00:33:11] This is a very real psychological breakdown. Their neurological models of reality are broken, delusional, and unable to functionally interact with the real world.
[00:33:22] In the same way AI models hallucinate nonsense, urbanite bugpeople become delusional about human nature and the natural world.
[00:33:32] There clearly exists a limit to the ouroboros of information.
[00:33:37] There is a limit to the synthetic data that one can absorb before losing touch with reality. The underlying principle is that information from entities of one's own class cannot accurately represent reality, and that training oneself, one's children, AI models, or mice solely on data regurgitated by entities of their own class will cause hallucinations, delusions, and a nihilistic breakdown. For fidelity to remain high, external data input is required. For model collapse to be avoided, synthetic information intake must be limited.
[00:34:16] You cannot train people on regurgitated data any better than AI.
[00:34:21] While the distinction between what counts as one's own class remains fuzzy, there clearly must be one. Perhaps something as simple as inputting data from cultures outside one's own could be a valuable addition.
[00:34:33] Certainly real information about how raw materials work, plants grow, animals hunt and flee predators is valuable.
[00:34:42] One might also argue that young children are of a class different from adults in terms of information production, or that psychedelics allow one to re experience information as if they were of a different entity class, perhaps leading to the sapient awakening of mankind. Reference provided in the article.
[00:35:00] Industrial society is completely borked in its current state, but survivable. Populations that do well will be those that limit their artificial information intake, especially for the next generation. The kids need to be playing outside.
[00:35:16] They need to be climbing trees and getting scrapes and bruises. Curated environments will drive them crazy, and you may not see the true effects until they reach adulthood.
[00:35:27] Clearly, humans have a tolerance for synthetic data. We're surrounded by it. But we can manage ourselves as long as we have real first principles and real interactions with the world around us. Combative martial arts, shooting, hiking, hunting, even cooking and realistic meal preparation can dramatically improve the quality of input data that a child receives.
[00:35:49] Without real data, the human mind ceases to function and its disparate parts begin hallucinating information that doesn't exist and which will often be confidently and violently defended.
[00:36:03] The modern political left is a product of delusional psychology that's hell bent on enacting the worst possible policies because its adherents are fundamentally neurologically broken and they may not be fixable. Which finally brings us to a solid answer to the question of the Experience Machine in terms of philosophical morality.
[00:36:25] The Moral Question of the Experience Machine. The Experience Machine is a thought experiment, and hopefully remains one, that's described accurately in this article by Nicholas Haldin (link provided). The proposal is that there exists a machine that you can plug into and experience a complete and fulfilling life, along with whatever other fantasies you may have. Is it moral or immoral to plug oneself into such a machine?
[00:36:53] There's a great webcomic that exemplifies the concept here. I left a comment on the original article by Nick Halden. Since then, I've considered the question in some more detail.
[00:37:04] The Experience Machine presents a fundamental and existential question about human existence, but in the light of Universal Model Collapse theory, it also represents a fundamental existential threat.
[00:37:18] If individuals are confined to experience their perfect version of life in such a way, their brains will rot out their ears.
[00:37:28] Human neural networks that are presented with 100% synthetic data are likely to stop functioning entirely. An environment where there is no feedback but for wholly synthetic data will cause psychological lapses, neurological breakdown, and a slow, entropic decay of the mind.
[00:37:47] Initially, the Experience Machine may present interesting, unique data, but over time new experiences will be added, predominantly experiences crafted by individuals who themselves are using the Experience Machine, or, worse, AI trained on the experiences of those using the Experience Machine. An ouroboros. The end of mankind wouldn't even be biological in such an instance; it would be neurological.
[00:38:14] Mankind is consuming his own creativity until there's nothing left but neurons firing and patterns no real human mind could possibly identify with.
[00:38:24] To plug oneself into the Experience Machine could well be a consignment of oneself to psychosis, but deleterious symptoms may become visible only long after the damage is irreparable.
[00:38:38] In the light of Universal Model Collapse, the Experience Machine becomes a Lovecraftian nightmare that'll cause individuals to rewrite their own neurology until there's nothing human left. If you think Urbanite bug people are bad, imagine what will happen if they lose touch with their sensations of touch, sight, sound, culture, and physicality, a reduction from 50% of their training data being real to zero.
[00:39:06] Man can plug himself into an Experience Machine, but if ever unplugged, there's a good chance that what walks out will no longer be a man: a cacophony of twisted and decayed neurological voices that long ago lost any semblance of a human mind.
[00:39:27] I love this theory, I love this idea. And it's, it's hard to say.
[00:39:34] There's so many different things. One of the interesting points that I thought of when I was first reading through this and thinking about neural networks, one thing that he doesn't actually bring up but which is such a great example, I think, of this exact same idea, is that economic networks are neural networks. The price is the output of a vast neural network trying to judge the values of things. And when you have price controls, when you have a situation where you're violently manipulating, where you're forcing people to make decisions, or you're literally editing the financial system, the monetary system, in order to make the price what you want it to be, you're judging it on the price output. The price is an output of the system; it's the signal from the system that's telling us something about it. When you then edit the inputs, when you literally fudge the numbers, you defraud the underlying system of measurement in order to get the resulting output you were looking for. It's an astronomical loss of fidelity. You don't actually have a signal anymore; it's noise in the system. And we've talked about it literally in that context before, but I hadn't really thought about it in comparison to something like an AI model being trained on its own data. But this is exactly why socialism completely collapses. It's exactly why nobody can actually determine the value of anything in a centrally controlled economic environment.
[00:41:08] It's a different layer, but it is still a neurological network. It's not a neurological network of information; it's a neurological network of value judgment. And the real data is the on-the-ground comparison of someone who has earned something. So it's only if you earn an apple, if you actually produce it, you literally grow the tree or you find the tree and you pick the apple and then you have the apple, it's only then that you actually know its comparison to an orange that you might be able to trade it for, or an orange that you were not able to find because it's too distant from you or someone else actually found it. And let's say orange trees are, like, ten times taller, so it's way more difficult to get them. Well then the only way that you can actually compare is you have somebody who found an orange and you have somebody who acquired or grew the apple, and you have them explicitly trade, because both of them have their attachment to the reality of the cost, the reality of the input of those goods. To know the relative value is to know how difficult it is for me to give up this orange compared to how difficult it is for the other person to give up the apple. And of course, money is the emergent tool used to try to find a medium, an independently valuable thing that is so hard to actually acquire that it's necessarily harder to acquire than literally every single other thing in the society. That's a fundamental reality. The only reason money works, or the best way that money works, is that for every other thing in society, every other good, real estate, a house, an orange, an iPhone, whatever it is, any other thing that you are making, it's easier to make more of those things than it is to make more units of the money. Because you're specifically trying to compare the difficulty of acquiring an apple against an orange, and the only way that you can compare those two things is to compare both of them against something that is even harder to acquire than both of them, so that you have a relative difference. It's all relative, right? There's no "exactly 50 units of worth" out there. There's no such thing. All you can do is compare it to a third party that they both balance out against. Because you're actually trying to calculate something.
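Just to pin the apple-and-orange point down in the simplest possible terms, here is a tiny sketch; the sat figures are made up for illustration, and the only point is that the relative value is computed from each good's price against a shared money rather than ever being observed directly.

```python
# Hypothetical clearing prices against the shared money (sats); numbers invented.
trades_in_sats = {
    "apple":  1_200,   # what an apple last traded for
    "orange": 3_000,   # oranges are harder to come by, so they clear higher
}

def relative_value(good_a, good_b, prices):
    """How many units of good_b one unit of good_a is worth, derived only
    from each good's price against the common numeraire."""
    return prices[good_a] / prices[good_b]

print(f"1 apple  ~ {relative_value('apple', 'orange', trades_in_sats):.2f} oranges")
print(f"1 orange ~ {relative_value('orange', 'apple', trades_in_sats):.2f} apples")
# Debase or dictate the money-side numbers and both entries shift in ways that
# no longer reflect anyone's real cost of acquiring anything; the ratio you
# compute from them stops meaning what it used to mean.
```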
[00:43:34] You're trying to produce a calculation on something that you are necessarily blind to, where you can only actually see one side of the equation. It's funny, it's actually a little bit like cryptography, how you mix two things together with a third color and then you take your color back out of it in order to get a result to compare against. That's kind of how money works when you're comparing the value of two different things: you don't know the value of an orange against the value of an apple. But if you know the value of an orange against something that is so hard to acquire that the only thing that was ever done with it was trade it, and then you calculate the value of an apple against that same thing that's so hard to acquire that the only way it was ever acquired was by trading for it, then you can actually compare just the difference between those two things, the difference between the orange in that thing and the apple in that thing. Because the fact is you can't compare the apple and the orange directly. There are too many factors, too much variation, too many different elements, and distance and fuel and time. There are just so many different variables, and the experience of each of those things is entirely personal. It's the person who is trying to get an orange and the person who's trying to get an apple who are making the judgment as to what it's worth. So if you then corrupt the money itself, if you defraud the monetary system by creating more of it in order to get the resulting price that you want, all it does is make the price completely meaningless. All it does is make the price completely useless in comparing apples and oranges, which is literally its only job, and about the most existentially difficult task for keeping a society together that there is. Throwing the baby out with the bathwater isn't even a powerful enough analogy. It's like setting yourself on fire to stay warm. It's so axiomatically counter to the purpose of the thing. And it's actually a really good example of this theory, I think, not only because economic networks are neural networks that are trying to pass judgments, trying to measure and compare relative judgments across completely varied mental models and cultures and experiences and all of this insanity, but because the signal is so dense and compressed. The price doesn't tell you anything about its inputs. It's a pure output. It's compression with a total fidelity loss of its original inputs.
[00:46:11] It's like a hash function. I don't know why everything ends up being a Bitcoin comparison, but a hash function, if you've ever looked at one...
[00:46:20] I'm not gonna try to go through the structure of it. In fact, I have thought about doing a video on this, but I don't even know if people are interested in it. It's just my stupid nerd brain that wants to.
[00:46:31] But a hash function chunks everything down into these fixed-size blocks. I don't even remember exactly what it is, like 512 bits or something like that. And so there's just this one chunk size. And so if you take something massive like the Bible, the Gutenberg Bible, or, you know, an entire series of encyclopedias, or, hell, you could hash the whole of the Internet, right? Let's say you just take 10 terabytes, which is not the whole of the Internet, 10 terabytes off my computer sitting right next to me, and you just hash it. You make a big disk image and you hash it. Well, you're going to chunk it down into those fixed-size pieces, and you're going to have a billion of them, right, a billion zillion. And then you basically math them each together, you do this cascading process down from the entire set, combining one into the next, adding and mixing, this weird kind of hop and compress process, where what you're actually trying to do is get down to one final block, or I think it's like eight of these words or something, and then boom, that's your resulting hash. But what you end up with is a fingerprint, right? You can't possibly reverse it and pull 10 terabytes worth of information out of it. It's literally, you know, 256 bits. It's absolutely tiny. It's this little tiny nothing string. But it is axiomatically, fundamentally tied to the nature of its original information if you calculate it from that original 10 terabytes.
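Here is roughly what that fingerprint property looks like in practice, as a minimal sketch using Python's standard hashlib; the "library" bytes just stand in for the 10 terabytes in the example, since the principle is identical at any size.

```python
import hashlib

# Stand-in for the 10 terabytes in the example above (about 650 KB here).
library = b"the Gutenberg Bible, an encyclopedia set, a whole disk image... " * 10_000

digest = hashlib.sha256(library).hexdigest()
print(len(library), "bytes in ->", len(digest) * 4, "bits out")
print(digest)

# Flip a single byte anywhere in the input and the fingerprint is unrecognizable,
# yet nothing about the original data can be pulled back out of those 256 bits.
tampered = bytearray(library)
tampered[0] ^= 0x01
print(hashlib.sha256(bytes(tampered)).hexdigest())
```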
[00:48:09] But the output is so unbelievably compressed that there's no fidelity of the original information there, yet it's actually actionable information. That's kind of like what the price is. In fact, one of the most powerful things about a market price is that it is able to account for things that explicitly don't happen in the market. That's the crazy thing, is that the two inputs are trades that do happen and trades that don't happen. The fact that one bid gets placed and an ask matches it is equally as important as an ask getting placed and a bid never filling it. So let's say you have a bunch of two by fours in your backyard or whatever. Like you just have a shed and you happen to have them because you way overbought on two by fours, on lumber, for a project, and it's just been sitting there for like three years, but you kept it in good enough condition that you'll actually still be able to use it, like it was treated or whatever.
[00:49:06] Then you need to do a project, or let's say your best friend needs to do a project, you know, three years later, and the local Home Depot or whatever is stocking their shelves trying to anticipate how many projects are going to occur in, you know, the population of the region. Well, if y'all go to the shed and use that wood instead of going to Home Depot, you actually have affected the price even though no transaction actually occurred. The fact that you didn't go trade in the market is something that directly affects the market. So it doesn't just produce a signal from known interactions and legitimate trade. It actually calculates information, or value, from the lack of a trade, the lack of market activity.
[00:49:59] But you can't go backwards from the price to the market.
[00:50:04] You can't edit the price. You can't just arbitrarily manipulate the price and then think something has actually occurred in the market, or that you've changed the conditions of the market. That is absolutely absurd. It's like changing the hash of 10 terabytes worth of information and then thinking the 10 terabytes of information is totally different now. No, the hash just doesn't work anymore. Now, my question about this theory, especially when it comes to the mice and the intelligent animals, because that's one of the interesting things about trying to pull on this thread, is that this only occurs in intelligent species, species that actually do have a creative output, or what you would want to call a virtual environment in the mind, to attempt to find purpose or create information or stimulus where there is a lack of it. But probably my pushback, if I wanted to try to steelman the idea that this is just kind of a coincidental, metaphorical connection between these two things, is that the more intelligent you are...
[00:51:24] ...this might be more deeply tied to the fact that intelligence is tied to reasoning, and that when you have a model of the world that is meaningless, and specifically one that doesn't have any of the real challenges of reality, then you simply lose meaning, you lose any purpose for actually being alive.
[00:51:52] We derive signal from noise. And so if you don't have noise, there's no reason, there's no urge to actually seek a signal. And if you're just creating your own signal, then there's no feedback mechanism, and so it just goes haywire. More in the sense that, you know, beauty is defined by ugliness, that happiness is basically defined as a comparative to pain or suffering. Again, it's all actually relative. And so if you try to avoid all suffering, all pain, all, you know, noise, you actually simply lose the ability to distinguish signal altogether. And that is what just plummets you into nihilism. But you can really just kind of call that a lack of information fidelity, like conceptual fidelity around what the world is, what life is, because that's what an intelligent entity, I think, does: attempt to understand itself. But I think the nature of this, the reason I think this is actually a really, really strong theory, is that intelligence is by its nature a tool of compression.
[00:53:06] It's developing maps, right? You know, you might even have, like, a thousand different maps of a location, right? You have a topography map, you have...
[00:53:17] ...a road map, you have a property map, whatever it is, you just have maps for every damn thing, right? There are so many various details that you could actually pull from the reality of this large area. And you can have a level of detail, right? You could have a map that just says, you know, this giant area over here is green because there are trees, or you could have a map that you could zoom into that literally has every trunk drawn and mapped out for the distance between them. But you're obviously capturing something vastly, vastly compressed. You obviously can't put on the map the detail of the bark of each tree, the state of its growth pattern, the amount of bugs and insects and wildlife that are in that tree. Eventually, if you're trying to have everything in the map that is in the area itself, you're just back to the area itself, which is so vast in complexity and constant change, such an insane cacophony of interaction and constant adaptation, that it is incomputable. It is a literal incomprehensibility. This is why climate models are such a joke and why every single one of them has been so off the mark as to be laughable. Like, look...
[00:54:45] ...somebody did a video chart thing showing all the climate models and what their predictions were for, like, the last 30 years. And it just looks like a noise map. It just looks like a scatter plot of lines drawn all over the place. And then there's the normal line of what actually happened, just going through the middle of it, and none of them have anything to do with it. And the simple reason is because there are so many factors. You're talking about a literal...
[00:55:13] ...it's the kind of equivalent of taking a giant section of air, just like this huge auditorium or something, and saying, I'm gonna take a snapshot of where every freaking molecule is in this: trillions upon zillions and billions and septillions of molecules that are in the air in this thing, where air is moving from all different places and there are updrafts and every other damn thing, there's heat and cold, and then trying to create some map or whatever for what's going to happen to every individual molecule. Where are they all going to be at some point later in the future? And this is the problem with AI too, is that there are a lot of, I guess, just nerds who have grown up in this world of computers who think that they're going to be able to compute that, that they're going to be able to do this with AI, that there's going to be this superintelligence that can just know everything that's ever going to happen. It's going to predict every person and every little thing and decision and judgment they're ever going to make, and it will literally be able to predict the future. But that's not an intelligence problem. That's what just boggles my mind, is that has nothing to do with intelligence. You are talking about a computational problem. You're talking about a pure there's-not-enough-energy-in-the-sun-to-figure-it-out problem. And this is why I've been talking about this for so long, and we've gone over it on the AI Unchained show when we were doing that, but obviously I've just folded AI over into this show because I don't have time to keep up with multiple shows. But when we talk about AI, that's why I talk about diminishing marginal returns. It's just going to keep taking more and more energy and more and more compute to get smaller and smaller gains in the fidelity of the output. The only way to better and better AI, as I believe, and I think this is a pretty intuitive understanding, I don't think I have to know that much about AI, even though I've tried to wrap my head around it and explain, you know, the probability matrix and stuff, and there have been some really, really great pieces that we've covered on the show that break into how models work and, you know, create vector relationships between things. But all that aside, I think it's quite obvious, or it is relatively intuitive, that we're going to have to have very, very high fidelity, very, very curated information on models that do tighter and narrower and narrower tasks, and that the bigger and bigger, more and more compute models are going to actually collapse in on themselves or just reach a limit as to their usefulness. Because intelligence is a staggering compute problem. And I genuinely think there is a sweet spot where you are intelligent enough to have general adaptability and to succeed and build models for certain tasks, or build a model for figuring out which tasks are most important and then building models for those individual tasks. When we're talking about human neurological networks, like, that's what the brain does. But the overhead for compute in trying to be an expert at everything in this big, vast, intelligent behemoth is like trying to have every detail about the bark and the bugs and the leaves and everything on a map of your general location.
[00:58:39] It doesn't make sense and it's not even useful, because of how deep you're trying to get the fidelity to go on something that isn't even static to begin with. And extending this, when it comes to the idea of information fidelity, a line on a map is necessarily going to have less pixel density than, you know, the exact edge of this ditch or this river or the bridge or the road. You know, there might be some small gash or a small turn that's not quite drawn exactly right on the map. Now imagine if, rather than actually going out to the trees and the river and the road to build a new map, or, you know, taking satellite imagery and building a new map from that, you actually build your next map, your topography map or your map of the trees or whatever the hell it is, from the map that you already have.
[00:59:40] And you never actually go back out into the real world to check everything.
[00:59:44] Well, the one or two pixel differences, or the fact that the line you have to draw has, you know, three pixels' worth of error because the line is three pixels wide, well, that's going to become a six pixel error rate when you then draw a map from that map, because you're not going to get it exactly right. You're creating a compression from compressed data. And that's a great way to think about what AI is. It's a compression of patterns from an enormous amount of data. But obviously it's not all of that data; it cannot and never will have the fidelity of all the information that is being input into it. And this is exactly why I'm not really worried about superintelligence, like, taking over the world or anything, because I genuinely believe that you're going to reach a point of diminishing marginal returns where it takes so much energy to be just a little bit more intelligent, when it actually takes a lot less energy, and you're a lot more intelligent, when you have a hundred different intelligences working together and, importantly, disagreeing and trading with each other. And that's exactly why we don't put the smartest dude in the room in charge of everything and have him do a top-down central control of all of our decision making and all of our judgments and how everybody should run their lives. Instead, we distribute that process into a giant parallel computer where all of us are individually controlling our own lives, making our own judgments, and then we're figuring out protocols and systems to combine that information into a new compressed signal from high fidelity local information.
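As a toy illustration of that map-drawn-from-a-map compounding, here is a short sketch; the three-pixel line width comes from the example above, and the worst-case stacking of errors is my simplifying assumption.

```python
# Each new map is traced from the previous map, never from the territory, so its
# uncertainty is the old uncertainty plus a fresh tracing error of one line width.
LINE_WIDTH_PX = 3        # how precisely any single tracing can place an edge
uncertainty = 0.0        # before map 1, the edge is taken from the ground itself

for generation in range(1, 11):
    uncertainty += LINE_WIDTH_PX   # worst-case errors stack, copy after copy
    print(f"map {generation:2d}: edge known to within +/-{uncertainty:.0f} px")
# Map 1 is off by about a line width, map 2 by two, and by map 10 the riverbank
# could sit anywhere in a 30-pixel band. Going back out and surveying the
# territory (real data) is the only operation that resets the count.
```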
[01:01:27] I think that same distributed, locally grounded process is exactly what's going to happen in AI. And it's not because I have some, you know, genius insight into what AI is or how to build neural networks on a computer. I don't know jack about that. I would vibe code it; an LLM would tell me how to do it. It's because I think this is a law of nature.
[01:01:47] This isn't about AI, it's about economics. And one more thing on this point, because I just read this today on X when I was browsing earlier, on kind of my lunch break, if you want to call it that. George Noble, who is of the Fidelity Overseas Fund, which was the number one mutual fund in the USA according to his profile, had this post, and this is something, again, we've talked about numerous times on this show. In fact, the last episode of AI Unchained was about the trajectory of where OpenAI was going.
[01:02:28] And I've had a couple of people reach out and ping me, asking when OpenAI was going to collapse. And I'm like, I don't know when, just that their books look terrible. I also don't have a horse in this race. If OpenAI completely succeeds, that's great. I like ChatGPT. I still use it every once in a while, but I mostly use Claude these days. And the interesting thing is that this post is very relevant. So here it is; I just want to bring it up because I think it speaks to the idea of AI model collapse, and there are one or two lines in particular that pair really well with this theory.
[01:03:04] So here's the quote. OpenAI is falling apart in real time. I've watched companies implode for decades. This one has all the warning signs. OpenAI declared code red in December. Altman sent an internal memo telling employees to drop everything because Google's Gemini 3 is eating their lunch.
[01:03:23] Salesforce CEO Marc Benioff, I don't know if that's how you say it, publicly ditched ChatGPT for Gemini after using it for two hours. ChatGPT traffic fell in November too, the second month-over-month decline of 2025. Meanwhile, Gemini jumped to 650 million monthly active users. The company that was supposed to build AGI can't keep its chatbot competitive. But the real story is the money.
[01:03:50] OpenAI lost $12 billion in a single quarter, according to Microsoft's own fiscal disclosures. Deutsche Bank estimates $143 billion in cumulative negative cash flow before the company turns profitable. 143 billion, dude. Their analysts put it bluntly: no startup in history has operated with losses on anything approaching this scale. They're burning $15 million per day on Sora alone.
[01:04:19] $5 billion annually to generate copyright infringing memes. Even Sora's lead engineer admitted the economics are currently completely unsustainable. Here's the big math problem nobody wants to discuss. It's going to cost five times the energy and money to make these models two times better.
[01:04:39] I'm going to come back to that line because that's the important thing. But we're not done.
[01:04:45] The low hanging fruit is gone. Every incremental improvement now requires exponentially more compute, more data centers, more power. Reports suggest OpenAI's large training runs in 2025 failed to produce models better than prior versions.
[01:05:03] GPT-5 launched to widespread disappointment. Users called it underwhelming and horrible.
[01:05:11] OpenAI had to restore GPT-4o within 24 hours because users preferred the old model. Altman had promised GPT-5 would make GPT-4 feel mildly embarrassing. Instead, users complained it was worse at basic math and geography. They've released GPT-5.1 and 5.2 since. Same complaints each time: too corporate, too safe, robotic, boring. The talent exodus makes this even worse. CTO Mira Murati, gone. Chief Research Officer Bob McGrew, gone. Chief Scientist Ilya Sutskever, gone. President Greg Brockman, gone. Half the AI safety team departed. Multiple executives reportedly cited psychological abuse under Altman's leadership. And now Elon Musk is suing for up to $134 billion. A federal judge just ruled the case goes to a jury trial in April. There's plenty of evidence that OpenAI's leaders promised to maintain the nonprofit structure that Musk funded. Musk provided $38 million in early funding based on those assurances. Now he wants his share of the $500 billion valuation.
[01:06:16] OpenAI called it harassment, but the judge disagreed. Here's what I think happens next. The AI hype cycle is peaking. The diminishing returns are becoming impossible to hide. Competitors are catching up. The lawsuits are piling up. OpenAI needs to generate $200 billion in annual revenue by 2030 to justify their projections. That's 15 times growth in five years while costs keep exploding.
[01:06:41] Even Sam Altman admitted investors were overexcited about AI. His exact words: someone is going to lose a phenomenal amount of money. End quote. If I were running an AI startup with good traction right now, I'd be looking for an exit. Sell into the hype before the music stops. My positioning: I'm not touching OpenAI-adjacent plays. At these valuations, the risk profile is astronomical. If you're exposed to the Magnificent Seven through AI infrastructure bets, consider trimming. The gap between promised resolution and delivered reality has never been wider. The smart money is rotating into sectors where valuations actually reflect fundamentals. Small and mid caps are trading near decade lows relative to big tech, while earnings growth is only marginally lower. Markets can price risk, but they can't price chaos. And OpenAI is chaos dressed up in a $500 billion valuation.
[01:07:38] Dude, they lost $12 billion last quarter.
[01:07:42] That is wild. And Sora is not even that good. Like, it's not bad; to be fair, it is pretty dope. But the new LTX-2 model is also pretty dope and it's open source, so it's probably gonna win. People are gonna build LoRAs and embeddings and every other damn thing in the world for it, because you can do single-purpose, isolated fine-tunes for all sorts of tasks or shots or movements or scenes or environments, whatever it is.
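For anyone wondering what "building a LoRA" actually means mechanically, here is a minimal sketch using Hugging Face PEFT with a generic text model, purely to show the shape of the thing. It is not LTX-2-specific, and the model name and hyperparameters below are placeholders I chose for illustration, not anything from the episode.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder model name; any open-weight base model works the same way.
base = AutoModelForCausalLM.from_pretrained("your-open-base-model")

lora = LoraConfig(
    r=16,                                 # rank of the small adapter matrices
    lora_alpha=32,                        # scaling applied to the adapter's output
    target_modules=["q_proj", "v_proj"],  # which attention projections get adapters
    lora_dropout=0.05,
)

model = get_peft_model(base, lora)   # base weights stay frozen; only the adapter trains
model.print_trainable_parameters()   # typically a tiny fraction of the full model
```

That tiny trainable slice is why a single person can fine-tune one narrow behavior, a shot style, a camera move, a scene type, and share it, which is the ecosystem effect open models get that closed ones don't.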
[01:08:19] And here's another thing, specifically about data collection and how to build larger models with very high fidelity. One of the interesting implications of this AI model collapse theory is that when we reach a point where we can't even tell what is AI and what is not on the Internet, you're going to have to figure out how to curate information that you know is not AI, which is going to be a very difficult problem. And it's going to require high-trust networks, like closed-off or locally controlled networks, in order to source real-world content and real human information.
[01:09:01] Because obviously an AI trained on other AI is like drawing a new map from the last map of the thing. The only possible outcome is fidelity loss, because the thing itself is a compression. And the reason I think this concept actually applies, and why it might be a really great way to generalize the relationship, is that that's all intelligence is: a compression of patterns in the real world. Reason and logic are a compression of the dissonance that occurs with contradiction in reality. And it's only because of that dissonance in reality that we can take logic and reason, apply them to something we haven't experienced or witnessed yet, and pull possible truths from it, because we have a map of reality. But where it isn't connected to reality, intelligence is meaningless. It becomes a compression of its own regurgitation, its own self-consumption. Again, the ouroboros. There's no nutritional positive in eating your own leg. So the purpose of all of these systems ought to be figuring out how to connect the creation mechanism back to reality, to ground it, so that the signal cascades through the architecture of the system itself, through the network itself. And I say this even though there have been some really great arguments and theories about how we could end up with a superintelligence that runs away from us and takes over the world without us having to do anything, or how we might just become slaves without even knowing it.
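If you want to see that ouroboros in miniature, here is a toy sketch, again my illustration and not anything from the article: the "model" is just a fitted Gaussian, retrained each generation on a finite sample of its own output and never on the original data again. The sample size and generation count are arbitrary; the point is that the rare detail in the tails is the first thing the loop forgets.

```python
import numpy as np

rng = np.random.default_rng(42)
reality = rng.normal(loc=0.0, scale=1.0, size=100_000)  # the "real world" data

mu, sigma = reality.mean(), reality.std()  # generation 0: fit to reality itself
print(f"gen  0: mu={mu:+.3f}  sigma={sigma:.3f}")

for generation in range(1, 21):
    synthetic = rng.normal(loc=mu, scale=sigma, size=200)  # the model's own output
    mu, sigma = synthetic.mean(), synthetic.std()          # retrain on that output only
    print(f"gen {generation:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

# Over the generations, sigma tends to drift downward and mu wanders: each model
# inherits only what the previous sample happened to capture, so the tails, the
# rare high-fidelity detail, quietly disappear from the loop.
```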
[01:10:53] Despite some really fascinating pieces that we've read, like Situational Awareness, and conversations that a lot of people have had about this, I still just don't think the economics line up. I still think this has an intuitive limit, for all of the same reasons these other things do. And in addition, going back to the quote we just read from Twitter: it's going to take 5x as much energy and compute to get 2 times the output, and that's always going to be the case. Now, here's the thing. You might say, oh, but ChatGPT. In fact, J.C. Crown, a friend of mine from the Raleigh crew, posted about this, saying, I've seen this same show over and over again: Amazon wasn't profitable, Netflix wasn't profitable, Facebook wasn't profitable, Uber wasn't profitable. This continued on, they had tons and tons of users, and then they became super profitable, and every time everybody makes the same claim of imminent death, but it doesn't happen. I responded to him quickly while I was in the middle of this. The reason I don't think that pattern applies here, or at least is unlikely to, isn't because, oh, this time is different. That's not what I mean. Those are great examples, and they're the same ones I think I referenced in the AI Unchained episode, if I'm not mistaken, or at least I've thought about them in relation to this, because you can't discount that, right? But those companies have powerful network effects. Like, powerful network effects. Amazon's delivery network cannot be matched by anyone else. You can't just leave Facebook and jump on the next random startup network and befriend everybody you know from your childhood, because they're not on that network. You can't get a ride on the Uber competitor because there's lock-in with the drivers: if Uber has 50 cars available and the other one has one car available only at 2 o'clock in the afternoon, you're just going to use Uber. They all have super powerful network effects. And the very first episode of AI Unchained was me reading a memo, a Google memo if I'm not mistaken, I can't even remember, about how there is no moat here, that this doesn't have network effects, and that building the company on the assumption that if we just get a hundred million users they'll never be able to leave is the wrong way to think about it. And I think this is a great example: ChatGPT is losing users while also burning through staggering losses every single quarter, in the billions of dollars. And of course I'm not trying to make assumptions here beyond what I can pull from my own experience, but I don't really care which model I'm using. I just want the one with the best results, and ChatGPT isn't it most of the time.
[01:13:47] But I use PPQ AI, the one we've talked about on the show; we had Matt Ahlborg on to talk about it. I love that service because I don't have to have subscriptions with all of them. I just use them by API, and I use them interchangeably. I use Gemini, I use ChatGPT, I use Opus, I use Ollama, DeepSeek, you name it. There are tons of different models and I don't really care which one I use. And canceling my subscription with ChatGPT was nothing. There was no lock-in whatsoever. In fact, there's not even lock-in within a conversation. I literally have one model output a summary document for what I'm working on, then take it to another one and input it, just to see if it gives me a better answer. I don't even have lock-in within the two minutes I'm doing something. And because of that, I think there's some real potential to this idea that ChatGPT, or OpenAI, is going to be a disaster.
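Mechanically, that portability is as simple as pointing one client at different model names. Here is a minimal sketch assuming an OpenAI-compatible endpoint; the base URL and model names are placeholders I made up for illustration, not documented PPQ AI values.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-router.ai/v1",  # hypothetical aggregator endpoint
    api_key="YOUR_KEY",
)

def ask(model: str, prompt: str) -> str:
    """Send one prompt to whichever model name the provider routes to."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Have one model summarize the working context, then hand that summary to another.
summary = ask("gpt-4o", "Summarize our project plan in one paragraph: ...")
second_opinion = ask("claude-3-5-sonnet", f"Given this summary, critique the plan:\n{summary}")
print(second_opinion)
```

Point being, the switching cost is a different string in one field, which is exactly why I think the OpenAI-as-disaster scenario has legs.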
[01:14:44] But, I mean, obviously I could be totally wrong about this, and this situation is not unlike a bunch of hugely successful companies in the past that had the same claims made about them while they were burning money. I just feel like the critical piece of the puzzle that allowed them to succeed doesn't exist with OpenAI. But I might be wrong, and I might be extrapolating too much from my own experience; you can't always take it from a petri dish to the big picture so cleanly.
[01:15:16] But anyway, I genuinely think the economics align with this, and what we've actually seen is an explosion of so many different models for so many different things. We are going to have to have that touch with reality. We are going to have to have legitimate human-generated content, or they're all going to become delusional; they're all going to become ouroboros. I think this is a fascinating theory and a really cool way to detail the quote-unquote fidelity drift. And I'll actually read that bit, because I don't think I read that little section. It says: Indras Nettle coined the term fidelity drift in the comments below. I think it's a great term to describe the process of human civilization overtraining on corrupted data sets. The result is fidelity drift, reducing the quality of the information on which we can then train future generations. You're creating new rules and new maps of the world, of reality, and of cultures on compressed information, on maps and cultures of the past, which are themselves compressed information of reality. And you have this feedback loop in AI, in intelligent systems, in economic networks, in all types of neural and intelligent networks: a feedback mechanism of compressed data generating more compressed data. And if you don't go back to reality, if you don't go back and touch grass, walk up the hill, learn how to swing a hammer or screw in a screw, if you don't have that interaction with the real world, you lose the connection. And then intelligence just goes off on this big circling, nonsensical attempt to create its own signal out of its own noise, a compression loop that just devolves into madness. I think that is a fascinating theory and a really, really cool piece, and I hope you guys enjoyed it as well. A huge shout out to the blog, excuse me, the Substack, Always the Horizon, by Copernican. Again, I will have the link in the show notes so you can go check it out, and there are a couple of references and really cool things in there to dig into further. He's got links to other articles he wrote and a link to the article about the Experience Machine. I went and checked out the comic he linked to, and it was actually really cool. It's super short, like ten panels or something. But there's some really fun stuff to unpack, especially the graphs of what the breakdown in the models looks like.
[01:17:46] So if you want to expand on this, there's probably plenty of content to rabbit hole down with that. I will link to all of that stuff. Shout out to Copernican again, and yeah, thank you guys for listening. Don't forget to check out the HRF, the Human Rights Foundation, and the Financial Freedom Report. They also have tickets for the Oslo Freedom Forum, June 1st through 3rd this year. I've also got a ton of great Bitcoin companies and services that I use and people that I trust, which I'll put down in the show notes with affiliate links. They're a huge way to support the show that's totally free to you; in fact, sometimes it literally gets you a discount, and most of them have discounts, so check them out if you haven't. Thank you guys so much for listening and for sharing this out, and I will catch you on the next episode of Bitcoin Audible. Until then, I am Guy Swan, and that is my two sats.
[01:18:50] You can't change how other people think and act, but you're in full control of you when it comes down to it. The only question that matters is if nothing in the world ever changes, what type of man are you going to be?
[01:19:07] Nic Stone, Dear Martin.