Ai_033 - Fear to Freedom: The Optimistic Case for AI [The Staying Free Podcast]

August 23, 2024 | 02:00:48
Bitcoin Audible

Hosted By

Guy Swann

Show Notes

"When you introduce 2 million, 10 million "developers" that were never developers into an ecosystem that's used to having closed platforms and silos and stuff, what do those 10 million motivated people build? They're more likely to build the solution that they always wanted and that they build something that gets around these silos.
One of the most important things that's going on right now is the shift from closed platforms to open protocols. The more that all of that stuff is open, the more the walls, the potential barriers that appear to lock people into these platforms will just kind of start to fall away.
On a long enough timeline, open just wins."

- Guy Swann


In a recent episode of The Staying Free Podcast, I had a valuable conversation with my friend and colleague Jonny, also known as jonnyhodl. We delved into the complex relationship between artificial intelligence and individual sovereignty, challenging the conventional narratives that often depict AI as a threat to freedom.

We explored the open versus closed system debate, questioning whether decentralization can truly safeguard our liberties in an age of intelligent machines. The discussion also touched on crucial issues like privacy, autonomy, and human agency, encouraging people to rethink their understanding of AI. Could AI be a tool for liberation, or are we on the brink of a new form of control?

This was a conversation worth sharing, as it offers deep insights into the potential of AI to either enhance or erode our freedoms.

Link to the original episode of The Staying Free Podcast on Fountain: https://tinyurl.com/5yzuc54w

The Staying Free Podcast Links

 

Bitcoin Audible & Guy Swann Links


Episode Transcript

[00:00:00] Speaker A: When you introduce 2 million, 10 million, quote, unquote, developers that were never developers into an ecosystem that's used to having closed platforms and silos and stuff, what do those 10 million motivated people build? They're more likely to build the solution that they always wanted and that they build something that gets around these silos. One of the most important things that's going on right now is the shift from. From closed platforms to open protocols. The more that all of that stuff is open, the more the walls, the potential barriers that appear to lock people into these platforms will just kind of start to fall away. On a long enough timeline, open just wins. What is up, guys? Welcome back to AI Unchained. We have a really fun episode today. I actually got to hang out with Johnny again, a good friend of mine, which we don't get to do shows together very often, but we work together with like, he's. He's my producer in the background, but we like to actually sit down on camera and catch up with a lot of stuff. And one of the things that we were able to do this time, we had a bitcoin and kind of libertarian focused show last time. If you saw that on bitcoin Audible. He brought me on in order to explore basically everything that we've been doing and discussing with AI Unchained. So if you have not been caught up with AI Unchained and you've missed a lot of what we've been talking about, this is probably a really, really good episode to tie a ton of it together because I kind of had to bring in elements of everything that we have discussed in order to paint the picture that Johnny and I were trying to explore on his show. So you could almost think about this as like a checkpoint for the show, as if. If you've missed anything, this could be a great refresher to kind of get the status of where we are in what we've explored in AI and where I think things are headed. And some back and forth with Johnny, who also isn't as super in tune or familiar with a lot of the things. So I got to share a lot of. He got to ask me a lot of questions for things that I forget to talk about that people just need to know or are curious about. So I think this will be a fantastic episode and was such a fun conversation. Always great to get together with Johnny. Johnny, Shout out, man. Thank you. You'll probably be editing this, so you'll see this. Thank you, guys. I hope you really enjoyed this episode. It's going to be a good one, trust me. And also, don't forget to check out get yourself a cold card if you want to hold your keys safely and securely. Check out Coinkite who supports this show. They're a sponsor for the show. And also I've been a long time user, I've been cold card user for I don't even know how long. Highly recommend it. And you can get a discount code with my. You can get a discount with my code which is in the show notes in the description right down below. And it really helps out my show and lets them know that I'm the one that sent you. So with that, let's go ahead and get into this episode of AI Unchained with Johnny on the Staying Free podcast. [00:03:20] Speaker B: Guy Swan welcome back to the podcast, man. How are you doing? [00:03:23] Speaker A: I'm doing good. How are you doing, man? It's good to be back. [00:03:27] Speaker B: Yeah, yeah, I'm doing good. I'm doing good. I'm excited to finally have this conversation about AI with you. I don't think we touched on it at all in the last episode, so yeah, loads of stuff to get into with it, I guess. 
First of all, do you want to give my audience who hasn't listened to the last episode just a bit of an intro as to who you are? [00:03:46] Speaker A: Yeah, yeah. So I have a technical background, but then also film and media, and I've slowly gotten into podcasts. I'm the host of one of the more popular bitcoin podcasts in the space, Bitcoin Audible, but in the last year or two I've really dug deep into the peer-to-peer protocols and stuff that I think are reemerging. Nostr I kind of put in the same category, the decentralized protocols, the Pear stack, and I started the Pear Report and then also AI Unchained, because AI has been absolutely fascinating to me since it blew up on the scene. I went down that rabbit hole really hard as well, and I'm genuinely thinking that it's going to be one of the more potent tools for individual sovereignty, for autonomy of the individual, in kind of this next era as we move into this decentralized revolution. And I think a lot of people think of AI as the opposite. So whether or not it's destined to be that, I kind of felt, as I was spending tons of time on it, that it made sense for me to record my journey in understanding AI, but then also to just have a place where I could pull together all the open source tools and the kind of open source, open AI perspective. Not OpenAI like the closed-AI ChatGPT people, but a genuinely open AI perspective on how we can actually use this thing for us, rather than ending up on giant corporate centralized entities that have AI guiding us and trying to make sure that we're going to click on the next thing and stay there wasting all of our time on their platform. So yeah, that's what got me starting AI Unchained. And then I do a little bit of development projects and stuff on the side which I hope to announce or be able to share soon. [00:05:48] Speaker B: All right, sweet. Do you need to see to the dogs? I can hear them barking in the background. Do you want to. [00:05:53] Speaker A: I think it's just the trash truck or something. I'm upstairs now, so I don't have a basement. So that might just be the world that we're living in right now. [00:06:03] Speaker B: Okay, fair enough. That's all good. Okay, cool. Anyway, you gave a great primer there. Actually, that's specifically the thing that I want to go into, because you touched on it there, that most people think that AI is taking us in this dystopian direction. And I would say that it's not even like a 50/50 split, it's something like a 90/10 split between people who think that AI is here to enslave us and people who think that AI is going to kind of grant freedom and sovereignty to the individual. And you're one of the people in that 10%. And by the way, when I'm talking about this breakdown here, I'm talking about freedom people. You know, the layman might not necessarily have a strong opinion either way, or maybe they need to just see more. But for people who really think about these ideas deeply, I think that's probably how the split goes. So you're one of these people who actually thinks, no, AI is here, it's going to be a tool for self sovereignty, and it's really going to advance the individual's ability to live a self sovereign life. And I know you mentioned just there about kind of open source and things like that.
Is that all it comes down to for you? Is it just a matter of open versus closed systems or kind of what's your pitch there for why AI? What's your kind of elevator pitch? I guess for why AI is going to actually make us more sovereign as opposed to enslave us. [00:07:21] Speaker A: Okay. So I think about it just in the context of scaling power, of scaling like control over your life. And I don't think it's necessarily just open source. AI is the way. Even though I think I see a lot of parallels with the early Internet in how things are developed and in Linux in general because the incredible amount of innovation and the ability to build out the infrastructure. And this is something that I talked about actually in the most recent episode of AI Unchained because Mark Zuckerberg shockingly has a fantastic article on this. And it kind of gave me the impression that it was a response to, or at least partially a response to the piece we had just read called Situational Awareness about the. The coming birth of artificial general intelligence and the intelligence explosion. But which I think there is something too. I. At the very beginning of this show, I have multiple conversations where I think that's nonsense. And I have since kind of shifted my perspective on what exactly that means and where we might be headed with this. But regardless is going back to the idea of scaling Power Dynamics is corporations and governments and like giant institutions already have the ability to hire tens of thousands of people to do their work, to do their dirty work. They can, they can execute enormous amounts of resources and enormous amounts of labor to do what they want to accomplish the goals that they want. Which means that the ability for them to do that at half the price doesn't Is not an order of magnitude shift in their power. They already have that capability. But if I can have a computer operate 247 as if I have two employees that are generalists that can learn and accomplish any task and I can have them complete things that build projects for me and then I can work with as someone who has just spent the last year and a half and now that like you work with me as well, like you've been like an absolute godsend. Like it is night and day what it has been like to have people work with me as opposed to having to do all of this crap myself. And that mind shift of recognizing that we now have a. We could be going into a place where literally everybody with a computer could have this. And I think that's a far greater power shift for the individual than it is for the corporation. You know, it's not unlike what the Internet was originally like. When the Internet comes up, sure, governments can use the Internet and governments can use it to coordinate and send communications and all of this stuff, but they could already do that before the Internet, but it's you and I that could never have an encrypted conversation before the Internet. Now you and I can, in fact you and I can extremely easily by default have an end to end encrypted communication. And that's never been possible before in, for the, for the average person. So I think on net, despite the trade offs and the shifts in the environment, I think both the Internet, like Bitcoin, AI, like all of these things are ultimately decentralizing technologies. And, and I, I can only ever reference it back to like how I'm using it, you know, like that, that ends up being my perspective on it. 
But I have built so many tools for myself like so many little micro apps or single use things that have made my process and my workflow so, so much more streamlined and that's with a very limited amount of time to actually dedicate to this stuff. And all I can think is like how much autonomy it has given me to accomplish what it is that I want to accomplish or to build out my workflows. And I still think I have like three or four big ideas that I'll probably cover in devs who can't code or something in building them out for myself that could actually give me like another like huge couple of steps up in what I can accomplish. And I'm not a developer, I don't, I don't know how to develop any of this crap. I, I'm using Claude and ChatGPT and Llama to do my coding for me. I'm a, I'm a developer, I'm a dev who can't code, you know, and I think that's just a good example of where we could be going. And then in addition is Mark Zuckerberg just released Llama 3.1, the 405 billion parameter model like which is a commercial grade model that is open source. And he likened it to Linux as well. And how like our profound innovation and our ability to build markets and infrastructure around this. He thinks it's going to feedback faster on open source models than it will on closed models. And I think he's right, I think he's right. I think it'll be, there's, there's a similar dynamic to AI where open AI, genuinely open AI, not the company, I hate that they took that, they took those words. But that genuinely open AI will benefit, will create the best environment and the best opportunities and the best wealth if it is accessible to everyone, if it can be constantly fine tuned and repurposed and tasked for this thing and, and quantized and that everybody basically has an open ecosystem to contribute to. And I think largely it's a coordination problem and a ability to implement and coordinate resources and people. But I think that's a, that's a coding problem, like that's a software problem that now AI can help us solve even faster than any previous era than we've had this before. So that's a little bit of a roundabout explanation, but why I think we will ultimately end up with open source AI being the strongest foundation to build on top of. And then also why I think it empowers the individual to a greater degree than it does the corporation. I think it scales better at the small, as far as the autonomy it grants than it does at a huge, at a huge level. [00:13:54] Speaker B: Okay, cool. So if you remember in our last conversation, I basically played devil's advocate for about two hours. Yeah, yeah. And I plan on doing the same here. So. [00:14:05] Speaker A: Fantastic. [00:14:06] Speaker B: There's definitely parts of what you said that I agree with and then there's other parts which I disagree with and then there's some parts which I just kind of want to like go into a bit more. So I guess to touch on something which I agree with is that skills gap, you know, in terms of, for example, right now you have people like developers, you know, you've got this series devs who can't code, which I think is absolutely awesome. It's just demonstrating how the individual can create stuff which previously only a developer could do. Previously you would have to go and study for a long, long time. And now it's saying no, you can connect your ideas much more easily to what you want to make happen. Right. And then taking that further to ideas of kind of, you know, what's going to be built in the world. 
Well, what does the individual have an incentive to build versus you know, for example a developer who's getting paid lots of money by a company like Facebook or whatever. They're going to build tools for themselves. They're going to build something that's capable for, for themselves to use. They're not necessarily going to build, you know, a huge international corporation. They might just say no, I want something that's going to, I want my own, let's say smart home stuff. But I want it all self sovereign. I don't want it connected to the Internet. So I'm going to use AI to build something in the home like that, you know, building little robotics or whatever it might be. So that skills gap, AI is actually taking that skills gap and it's connecting ideas much more quickly with the ability to build them. So in that case, I would say, yeah, it's great. Now, something else that you mentioned though, in terms of like open source and how AI will go open source, I guess the way that I fear this might go and if you disagree with me, then I want to hear why that is. But for example, we have, when it comes to kind of like operating systems and stuff, you've got Android which is kind of piggybacking on all of this. Well, Android really is open source, but then you have the Google's Android or Google Play Store and all these kind of things, which basically takes that entire ecosystem and almost just kind of like white labels all of this open source stuff and then just says, here you go, we're going to package it up and put it in your hardware and now you can connect to it fast. But we'll decide what you can download if you're using. Okay, fair enough. If you're using some kind of open source package manager, yeah, you can download the different apps and stuff. But when it comes to Google, if someone goes to Google and says, hey, we don't want this app to be available in the Play Store anymore, then probably 95, 98% of people are, are just not going to get that thing. Because most people, the Play Store is Android, for example. Now couldn't we see AI going that way as well in terms of, yeah, you've got all of these open source tools, but then all you have is that all just feeds into something like OpenAI. And again, yeah, I accept that OpenAI is a terrible term for the company because it suggests that it's open, which is not. But the company, OpenAI or Facebook or. I know that Facebook's trying to do it in a, in a genuinely open source way. So maybe Facebook's a bad example, but let's say Google or something like that. Is there not a risk that they're just going to take all this open source stuff? Everyone's building these really awesome models and then they're just going to say, well, we are really good at marketing. We've got a lot of computers, we've got a lot of processing capabilities, we've got loads of servers and we can do it faster and better than anyone else. And then you have to come through us. If you really want this to work, you're going to come through us. Otherwise it's going to be slow and clunky and nobody's realistically going to do it. Is there not a risk that the ecosystem evolves, something like that. [00:17:27] Speaker A: I think without a doubt there's a risk, without a doubt it's. I think it's completely possible for that to happen again, for it to kind of mirror what we've seen with quote, unquote, web 2.0. But I also see the rise of, of open protocols replacing what has locked us into a lot of these platforms. 
And I kind of think of this as a two steps forward, one step back sort of cycle. And I'm more encouraged because, and this might seem crazy in the current context of AI, but if you stack out the orders of magnitude, like what we've seen over the last 10 years, going back to Leopold Aschenbrenner's Situational Awareness, it's hard not to look out three or four years and just project the straight line, the trend that we've seen over the last 10 years in AI, and basically see, okay, we're going to get like three orders of magnitude from here. Which means better-than-human AI, better than a college graduate, potentially even PhD level of quote unquote intelligence, knowledge and skill and capability of assessing data, in software, in literally model weights, looking at 2027, 2028. And one of the reasons why I think we're trapped in platforms, potentially, is because of the inability and the lack of bandwidth in basically competing platforms, in competing operating systems, and the inability for people to utilize them. There's still a heavy illiteracy in a lot of "how could I do it my way," so to speak. Most people are just, let me just install the thing that works. Well, what happens when the AI can figure out how to install the thing that you need? And what happens when you can use something like Linux without needing to know what the prerequisites are or how to use sudo apt-get install blah blah blah? You don't need to know the command line in order to get the benefits of essentially any software. You have an interface where it's you, then the AI, then the computer, and the computer can manage all of this and just give you the environment that you are looking for, through a model that is designed to basically architect your computer for you. Like the whole point of the operating system and the GUI, the birth of the user interface, was to simplify the interaction between the person and the computer. Right? It's an abstraction. It's, oh, I'm going to click on this icon rather than punching in this command and this directory and this location, all of this seeming complexity where I need to know the architecture of the computer. Well, we kind of have that same thing with the GUI now, with all of the different options and complexity that you can get in the computer. Why wouldn't we use AI for that same thing? To abstract that away again, such that even operating the computer at that level no longer requires this deep computer literacy. I think there's another order of magnitude of, like, people who don't know how to use a computer can still use a computer because of AI. And when that happens, I think a lot of the lock-in for some of these platforms might start to fade away, because it's basically about the user not having control over their hardware, in a sense. I think that's a good chunk of the limitation. Now there will still be platforms. I don't think Apple will go away, but I think it will increasingly have to shift, because there's a lot of things I can do on my Apple computer that Apple doesn't want me to do. You know what I mean? Like Apple attempts to control, but they just can't do that really.
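A concrete way to picture the workflow described here: ask a locally running open-weights model to turn a plain-English request into a shell command, and only run it after you have read it. The sketch below assumes the Ollama HTTP API is serving on its default local port; the model name and prompt wording are illustrative, not anything specified in the episode.

    import json
    import urllib.request

    # "You talk to the AI, the AI talks to the computer": ask a local model
    # for a single shell command that accomplishes a plain-English task,
    # then show it to the user instead of running it blindly.

    OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
    MODEL = "llama3.1"  # illustrative; any model you have pulled locally

    def suggest_command(task: str) -> str:
        prompt = (
            "You are a helpful Linux assistant. Reply with a single shell "
            f"command, and nothing else, that accomplishes this task: {task}"
        )
        body = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
        req = urllib.request.Request(
            OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"].strip()

    if __name__ == "__main__":
        task = input("What do you want the computer to do? ")
        print("Suggested command (review before running):")
        print(suggest_command(task))

The design choice worth noting is that the model only proposes the command; the person, or a stricter policy on top, still decides whether it runs.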
And uh, as all skills become more ubiquitous, as all layers of the stack become something that fewer and fewer people can do, I think it naturally democratizes or expands access to everything and kind of what traps everybody in these huge platforms and the Google environment and the Apple environment is the amount of the stack that someone has to figure out in order to get around it. You know, they, they design their chips, they design the hardware, they, they, they build the machine, they put the operating system on it, they have an app ecosystem like this are all in one package, right? They're, they're controlling the stack from top to bottom. But as more and more people become more capable with you know, new software and algorithms to basically expand access. I mean chips are a great example actually is that like the, the walls, the, the number of people and the number of manufacturing capabilities like that can actually make like highly efficient and extremely small like nanometer chips is not, it's a handful of people, you know, like it's, it's a very like you could count hold in your hands the number of plants that there are. And that's why there's a big thing over whether or not China or the US is going to have sway over Taiwan. Because you're looking at chip manufacturing for the whole world. You're looking at a massive amount of skills and expertise and very centralized and scarce resources that everyone is dependent on in one location. So essentially what I think though is that AI is going to open up all of that in the same way that AI right now is opening up code to people who can't code. I think it will open up a lot more of the stack to people who don't know those parts of the stack, essentially. And then you've got, you know, a new era, like looking at where additive manufacturing and stuff could likely go when you start adding AI into the mix. Like, AI is literally a general tool, it is generalized. And it makes sense that it is going to do best with language, media and code right now because it's born in the Internet environment. But these model weights and the pattern weighting system is completely general. It could be done for robotics, it could be done for 3D printing like it will, and can be done for essentially everything that we do. And I think we will continue to see that shift in the next couple of years towards like, very recently we've been talking about large action models where it will literally go through a GUI and click on stuff and it knows what's happening as if it's a person looking at a screen clicking on stuff. That's a. I don't think people realize how fundamental of a shift that is for having a software interact with a computer is that the software never uses the gui. Like if you don't have an API, you can't do anything. Now we have a model that can just use the gui, therefore nobody knows whether it's a freaking computer or not. You know, you don't have to call an API. And so I think we're going to increasingly see that. And yes, right now it might not look so pretty, but I don't know. I just can't help but think that the more that all of that stuff is open, the more the walls, the potential barriers that appear to lock people into these platforms will just kind of start to fall away. Like this is going to be lots and lots of additional cracks in those silos. And I think they're increasingly going to have to do more and more to keep it up. 
And that's, that's number one and number two is financing is, I think a lot of the reason why these giant corporations stay so aggressively in their giant corporation locked in platform mode, so to speak, and keep that power is specifically because they have access to money that nobody else does, is because they have access to money at a price that doesn't make sense. You know, big stuff, stuff that's already saturated is not actually credit worthy. Like they're not, they're not a good investment yet they get all, all of the investment capital because they're going to automatically rise by 5,6% every single year because we are printing money. Therefore giant banks will give them loans essentially for next to nothing. Because there's going to be this self fulfilling prophecy and people trying to get away from inflation and people are just going to have to put all their retirements in Apple and Google and they are going to have an outsized return and ability to access resources than they would in a non inflationary environment. Like it's all going to flow towards giant corporations. [00:26:47] Speaker B: Because I mean, I guess I think. [00:26:50] Speaker A: It'S a huge part of it. I think we're going to see 50% of their access to resources. I think that is going to fall down the ladder. I think it's going to fall down the quote unquote pyramid of society, whatever you want to say. [00:27:03] Speaker B: Because yeah, I mean I'm right there with you. When it comes to, you know, like money printing having this kind of effect on, you know, stock prices and this kind of thing and the people who are, you know, in those industries seem to have access to the freshly printed money and all of that concepts like I'm there. However, I would also say that like when it comes to tech companies, and this is something that we haven't necessarily seen with you know, traditional tech companies, I'm not talking about historical ones like Kodak or something, it doesn't necessarily have kind of exponential returns, but I'm talking about data companies, Google, Facebook, you know, these kind of things like Twitter companies, that their resource is data. And the interesting thing about data is that like it's not, it's not linear, right? Because like the more data that you have, you can create these connections between them. Like it's kind of like nodes, you know, like you have one node, then you have another node and that's one connection. You bring in another and then that's two and you bring another and there's, there's more connections. That's an exponentially increasing number of connections between the nodes versus the number of the nodes. And with data being like the, the resource of the 21st century, essentially the commodity of the 21st century, I think that that is actually one of the primary reasons, but I'll probably say is the, is the primary reason why tech companies specifically have such a, kind of outpace everything else in terms of their valuations. Because it's really hard to chase that once it's on that exponential growth curve. Now AI has kind of disrupted that to a degree, but then the problem is, will AI see that as well? Will AI be this runaway train where it's almost like a chicken and egg situation, right? Because it's like, you know, people aren't starting off on these self sovereign AI platforms, running it on their computer, etc. People are kind of starting off with OpenAI and these kind of things. 
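To put rough numbers on that node-and-connection point: with n nodes there are n(n-1)/2 possible pairwise links, so the connection count grows with the square of the node count rather than linearly (quadratic rather than strictly exponential, but the "connections outpace nodes" intuition holds, in the spirit of Metcalfe's law). A small illustrative sketch:

    # Pairwise connections grow roughly with the square of the node count,
    # which is why each new node (or data point) adds more than one node's
    # worth of potential value to the network.
    for n in (10, 100, 1_000, 10_000):
        connections = n * (n - 1) // 2
        print(f"{n:>6} nodes -> {connections:>12,} possible connections")
    # 10 nodes give 45 connections; 10,000 nodes give almost 50 million.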
And once you're in that ecosystem, could it just be that that becomes a runaway train that in the end trying to do it in a self sovereign way just ends up looking like, you know, trying to use a horse and cart when there's, when there's cars speeding down the motorway. You know what I mean? Like, is there not an exponential growth problem within AI that exists just like it does with tech companies? [00:29:13] Speaker A: So I do 100%. Like I say, like, I don't think these things negate natural network effects. There will always be the benefit of some new network and like that network lock in and stuff. But I think we've already also seen the degree of kind of like competition and optionality in those networks that continues to increase. And this is also why I'm so bullish on stuff like the pair stack and noster is if you can get enough momentum in something that is open, that's an open alternative. I think the network effects of that are actually more potent because new businesses and new people coming in to build stuff benefit from a network effect that they didn't even have to bootstrap. And because of that, it's very much like the Internet. Nobody could get a closed network effect to build a curated Internet. A lot of people tried, A lot of people, a lot of companies tried and a lot of companies appeared to succeed for a short period. And that's why I think one of the most important things that's going on right now is the shift from closed platforms to open protocols. And if you think about just the normal progression of technology is that we figure out how to build a centralized version before we build the decentralized version. Like that seems to be kind of how things shift, is that the scaling path of least resistance is to centralize something, but that eventually it goes decentralized because of the trade offs and the consequences of having it centralized as it succeeds, like as it gets big enough. And I think increasingly we are seeing the pressure and feeling the trade off of centralized platforms of algorithms that trap us and keep us, you know, angry at each other and clicking on the next most horrible thing so that we're doom Scrolling for hours at a time. And like all of this lock in effect. But I think that is literally the consequence. It's the centralized path of scaling, of not having a protocol for liking, sharing and following who you like, like, like users on the Internet. And so when you have a open protocol that enables those things, I think momentum can actually get away with it or get away from it. And we could go through another cycle of building on top of a decentralized protocol, centralized businesses and centralized sub networks and sub clients, until we realize what is worth, you know, shifting to a open protocol again, because it was just that valuable and it scaled that much after we built it. And thus we will have another protocol, et cetera, and another protocol. And I think this is kind of the story of the Internet, is the story of how technology has gone for the last 50 to 60 years. And, you know, it's why we went to massive centralized finance and money, like with nation states and banks. And then now we have Bitcoin, now we have the lightning network, now we have Ark, now we have sidechains. Like, like we're going. We. We shift again. Like I, like I say, two steps forward, one step back. Two step forward, one step back. Like two steps forward is the new decentralized, like the new open protocol method of organization. 
And then the one step back is it scales faster and better with centralization in the short term. And so, boom, a massive centralization explosion happens in the new environment and the new playing field very quickly. And then everybody's like, oh, it's natural, this is how it's going to go, we're never going to get out of it. And then, boom, another decentralized revolution, another protocol shift, another fundamental disrupting of the architecture of the old world. And then another centralized explosion in growth, because it's simply faster to build centralized than it is decentralized. But decentralized ultimately wins, because decentralized has a stronger effect of a more open environment of innovation, of robustness, and kind of that network feedback of being able to share everyone's network effect rather than fighting everyone's network effect. You know, Facebook has to fight Twitter, Facebook has to fight LinkedIn or whatever. Whereas when we're all on Nostr, Primal is benefited by Damus, Damus is benefited by Amethyst. We're all in an open environment in which our network effects compound on each other rather than fight each other. So I think it's literally a problem of technological development, really. It's how do we build these things. And that's another big part of AI, is that AI is going to let millions of people build. When you introduce 2 million, 10 million quote unquote developers that were never developers into an ecosystem that's used to having closed platforms and silos and stuff, what do those 10 million motivated people build, now that they have the resources and the tools to build something? What do they build that they wished they could but couldn't in the closed platforms, before they had this quote unquote skill that they now have access to? And I think they're more likely to build the solution that they always wanted, something that gets around these silos, as opposed to work within the silos, simply because the silos permission people, and permissionless is just going to move at a faster pace when it gets enough momentum. So that's a long-winded answer, but I think on a long enough timeline, open just wins. The most important thing that you can do in order to secure your Bitcoin is to use a secure, trusted, long-running hardware wallet in the Bitcoin space, and to use one that is bitcoin-only, like the COLDCARD. Not only do they have the COLDCARD Mk4, which is the cypherpunk calculator, but they also now have the COLDCARD Q, which has the large screen, a full QWERTY keyboard, a flashlight and a QR code scanner. It is the cypherpunk BlackBerry of the COLDCARD series. And if you are looking to keep your bitcoin safe and you want all of those advanced features, you want to do some custom setup with multisig and you want security features for those edge case scenarios where you need a Brick Me PIN or a dummy PIN that opens up to a fake wallet, the COLDCARD has it all. And if you don't want to complicate your setup or you don't need any of those advanced features, you can use it simply as a secure, default bitcoin signing device. Keep your seed, keep your keys off of your desktop computer, off of your phone, and securely and easily use your Bitcoin. If you do not have a COLDCARD or you haven't even checked it out yet, go to coinkite.com. The link will be right in the show notes.
And don't forget that when you are at your checkout, the code Bitcoin audible all one word will get you a discount. That will also be right in the show notes, in case you forget about it, keep your keys safe and get yourself a cold card. [00:37:00] Speaker B: I'm going to switch gears a little bit to another, another kind of common criticism and kind of feeds into something you already mentioned, which is that like, you know, an individual can, for example, use an AI, which is the, which is much more beneficial to them kind of in terms of what it offers them comparative to where they are now versus like a corporation or something, right? Like a corporation that has, you know, 100,000 people working for it. Guess you know, an AI or whatever, it might not have the same outsized effects as an individual who uses an AI which now has a team at its hands, for example. And I appreciate that point and I think there's definitely something to that. But looking at the other side of that, if AI does continue on its trajectory, if corporations don't actually need people, when it comes to the point where it's just like, well, everything can be done by an AI, there's almost no part of the workforce that this isn't going to affect, that the AI isn't going to be able to take those jobs. I mean, I can't think of any. Perhaps there are some jobs, you know, I know that some people have said things like, you know, nursing or interpersonal stuff, but I don't see why that should even be outside of this new paradigm. Because people are using, you know, people are talking to AI already. Like it say like as a person in Japan, people got AI girlfriends and stuff. So I don't see why you shouldn't come into those human, those interpersonal relationships either. But anyway, that's a bit beside the point. But yeah, when it comes to these corporations and them just using AI now and just saying we'll just replace the workforce with AI, well, now not only can they completely out compete because they already have all the infrastructure of a corporation, which they can just say, well, now we don't have the costs, we're just using AI for everything. So they have a huge competitive advantage, at least for right now, at least in the current landscape. But in conjunction with that, people are losing their jobs everywhere and they're not going to have enough money. People are going to be like, well, I'm out of a job and AI took my job and I don't have anything now. How can I get back into the workforce? Or how can I even compete with that? Because you say, oh, well, I've lost my job. Don't worry, I'll use AI to create myself a new job. But an AI is already doing your job, which is owned by a big corporation. So isn't there a risk there? Again, there's, there's a risk that the individual can't ever kind of reach escape velocity in this environment. It seems to me like the game is kind of already rigged based upon how the world is now and those existing power structures having that competitive advantage already from being able to utilize AI. [00:39:37] Speaker A: So a lot of people want to think, and this has been the, the age old story of technology is that somehow it's different this time. But essentially this has never been, it's been true this way in the past since forever. You know, like at the end of the 1800s, 90% of the population was in farming. Then we developed farming machines, we developed tractors. And now it's like 5% of the population and 95% of the people do something else. 
And a lot of people think that, oh well, now, you know, there's no economy anymore. Like, everybody lost their jobs. We just, we just put 90% of the population out of a job. When the, what reality actually shows is that now it takes one twentieth the amount of resources in order to produce food. So the price of food plummets in conjunction with that. And now we have people, we have this entire, this huge subset of resources and labor force open to build and work on other things. Which is exactly why the Internet can exist. If 90% of us have to be farmers, the Internet can't exist because we don't have enough people to run it. It's only because we invented tractors that the Internet is even possible. AI is going to be the same way. It will be a shift. Yes, people will lose their jobs, they will be replaced with machines. But it's also important to remember that this means that the ability and the resources it takes to produce certain things will plummet. Like, not the ability, the capability will skyrocket and the costs of these things will plummet. So you're also looking at a massive deflationary force when it comes to the ability to afford things. So while there is an economic shift, it will also mean that everything is cheaper. Well, unless you're measuring it in fiat. And that's a huge part of the problem. That's why, that's why I, you know, my main show remains Bitcoin. Audible is. I think that's, that's actually the bigger problem. Yeah. When it comes to whether or not people can afford things, whether or not people can respond to a massive disruption in the economic structure. But I do not think it's going to Mean it like, remember a job is. The point of a business is not to provide jobs. The point of a person is not to get an income. That the point of all of it is to produce something useful for other people. The business needs to do something useful to us. If everybody is so poor that they can't buy anything, businesses don't survive either because they have to sell to those people. It is a, it's a push and pull relationship. It's a yin and yang. If yang dies, yin does too. So like the economy doesn't survive on just suffocating people. That's just the government. That's just the government and that's just counterfeiters. That's the only thing. That's just a cancer and that's a broken money. And I think people discount the scale to which things have gotten bad because of our money, rather than realizing that this is not actually the natural progression of technology or the market. That this is actually a perverse version of reality. That we're, we're witnessing cancer and then saying organisms suck. It's like, no, cancer sucks. Organisms are sustainable in the same way the economy is sustainable. Fiat money sucks. Fiat money produces giant corporations that suffocate the economy of resources and makes every, makes this huge wealth dynamic get progressively worse and worse over time. And I think we are watching a shift in that at the same time. And I don't think corporate, like giant corporations will have as easy a go of it as a lot of people think that as they start to replace quote, unquote employees with machines with AI, I don't think it will go. I think we're also going to witness a massive deleveraging event and a massive shift in how people do business and work with each other. I think we're already beginning to see it. That's why the gig economy has exploded so much. 
And as we shift towards open protocols, I think so much weight can actually go in that direction where we're, we're increasingly building networks around the closed networks, just from a social standpoint. And, you know, I think also there's something to say there on the idea of, you know, AI even going into interpersonal things and being like, oh, we'll replace nurses and we'll have AI girlfriends. And is that the difference between AI girlfriends and actual social interaction? Like the point of social interaction is to, is to produce humans. You know, it's, it's to expand what is literally humanity. So I think the idea of a. Even though there will be a market for this, even though there will be, you know, machine nurses or whatever and AI girlfriends. I think it's like the difference between constantly having sex with the first person you see once a week and actually having a relationship is one is literally meaningless and you will never get any happiness out of it. And the other is. The whole is kind of the point of existence in some way. You know, like it's the only thing that actually fosters humanity into the next generation and, and produces sustainable life. Like life doesn't work without reproduction, without social interaction. [00:45:32] Speaker B: Let me, let me, let me, let me see if I can even pick this one apart. Like, let's, okay, let's really, let's imagine a world in which the AI girlfriend really takes off and now you can have a full, like an AI girlfriend made of flesh real to life. Like, you know, very difficult to tell. I mean, I'm getting a bit sci fi here. But I think it's interesting to talk about these things because, you know, like you said, you, you look at the. [00:45:55] Speaker A: Way everything's sci fi until it, until it happens. [00:45:57] Speaker B: Exactly. Yeah. You look at that, you look at the trajectory and it's like, well, you know, at some point we have to discuss it. So like, yeah, let's say we take this to its full fruition. Could you not, you could theoretically have AI girlfriends with that, that have a working. It would have to be some kind of a donated, I guess like some kind of a reproductive system which is fashioned that has, let's say, real eggs or whatever. Could you not imagine, Is it, is it impossible or unfeasible to imagine that you could literally have it at some point where you could reproduce with the, with the AI girlfriend? Because I don't see why it should be scientifically impossible. You just have to have something. It's just a matter of saying, like, we're going to have this thing as true to life as possible. Could you even have it with the ability to reproduce? I mean, I don't think this is something that's going to happen in five or 10 years time. But could it happen in like 100 years time? I don't see why it couldn't. But there wouldn't be a person there. There would be, you know, in the same way as you donate sperm or you donate eggs for like, for whatever medical purpose is, could you not have something that incubates real eggs? That you could literally have an AI girlfriend and likewise you could have an AI boyfriend with real donated sperm. I mean, could this happen? [00:47:13] Speaker A: I would say that probably the biggest thing to temper an idea like that with is just a little bit of humility is to realize how little we actually know and how much. Like, I think the complexity of an organism and a natural environment is so vast that it's not an intelligence problem, it's a computation problem. 
[00:47:39] Speaker B: And what do you mean? What do you mean by that? [00:47:41] Speaker A: So the reason we can't predict the future, and we can't predict what's happening with every molecule of air in, let's say, my backyard right here, the reason I can't look at the state of all the molecules of air and then predict where they're all going to be in 10 minutes, is not because I'm not intelligent enough. It's because as soon as you start the calculation of this one bumping into this one, and then this one, it changes the trajectory of the next one and the next one, because they all interact with each other. The complexity in the computation gets so exponentially large so fast that you would literally need to burn a quarter of the sun just to do the calculation. It's a thermodynamic limit of what is even feasible to know. And so going back to the idea of, are we going to create artificial intelligence that can literally be totally artificial life, is to recognize that we don't even know how to make a tree. We know how to plant a seed. We put seeds in dirt, and we get really arrogant thinking that we control or we make nature, but that's not what we do. We take what nature has done and we just kind of move it around a little bit, and we work with it in a different way. We work within the environment that we have. And we don't even really understand everything that goes on in just a typical section of forest right behind where we have some industrial thing. Our level of knowledge is actually aggressively small. Shockingly tiny. [00:49:24] Speaker B: Yeah, but with AI that is going to increase massively. I mean, AI is already able to do things. [00:49:30] Speaker A: The Internet has already increased it massively. But I still think it grows in complexity faster than we learn it. It's very much like the Internet. People think that we've discovered everything and there's nothing left to discover, when quite the opposite is actually true. Since the birth of the Internet, there is actually more to discover than there ever has been, because we have the capability to discover more than we have ever been able to discover. AI will do the same thing. Yes, it will make us better at all of the things, and it will allow us to interpret and take in more information and more complexity. But all it will do is reveal the next level of just how complex it truly is. There's no horizon where things really end. I genuinely think complexity and the depth of everything that exists is infinite in every direction. It's infinitely big, it's infinitely small, it's infinitely complex. In every single direction, it just goes forever. And all we do is break into a new paradigm of what we have visibility into. So a great example is that when we create all this software AI, there's not going to be a point where they won't quote unquote need us anymore. We're now going to shift into, okay, well, how do we plug AI into the physical world? Because the reason we have interaction, and we have kind of the ability to develop physical patterns and recognize and build our own weights about the patterns of reality, is because we have kind of beat ourselves against the: oh, I fell down.
When I was like one year old, I fell down off the side of this couch and like I fell and like now my brain starts to put this process that like, okay, things fall. And so we build these like patterns of physical reality and limitations and that that ledge is dangerous and I shouldn't go over there. And, and so we build all of these weights because we have these five senses and we have this, this built in mortality that gives us an ability to value things and to judge things according to the risks of, of life. And machines don't have that. They will need information, they will need to filter information through something that does have a window into reality. And as soon as we start building all of this software stuff, then we're going to start plugging it into robotics. We then we're starting to plug in it into our machines. It's just going to shift where we are directing these things. And every single time we do another round of this, of that thing, we're just going to have a whole nother round in order to plug it into something else. And just like the Internet has automated a lot of things, but it's just pushed people to the edges and now we do all of the things to plug the Internet into the next thing. I just think this will, this is forever just going to go in this direction. Like there's not really a conclusion, there's not a finish line, you know, like, what happens is that we just get aggressively better. It's just accelerating. It's just accelerating. So look at what's happened over the last hundred years for the prosperity and the capabilities of the typical human being. And I think you see the exact same thing over the next hundred years. It's just, it's just an order of magnitude, like, it's the same order of magnitude shift from 1900 to 2000 as it is from 2000 to 2100. It's just on top of an already order of magnitude shift. So, you know, we go from 10 to 100, 100 to a thousand. And I, I just don't think, I don't see this, like, end of the movie sort of thing. Like, like life has never been that way. Technology has never been that way. And I don't think it's just because it's harder for us to imagine. It's always hard to imagine forward. It's very easy to see backward, you know, and if you would ask somebody in 1900 going forward and said that we're going to have computers that do this, we're going to have tractors that do this, they would probably go on like, well, we won't do anything. Humans will have no jobs, there'll be nothing to do. You know, they wouldn't be able to imagine the infinite complexity of the economy that we would build on top of that. And with all those tools, think about the idea of explaining to somebody that, like, there will be millions of people with podcasts and that somehow that will be even slightly this were they would be like, what are you talking about? That's so stupid. How could you possibly have millions of people who have individual audiences? And then what's his face from? Patreon talks about how, like, you can actually have sustainable media, quote unquote enterprises on a thousand true fans, and talking about how the dynamic has shifted and how you can reach more fans in more places. And his story about seeing that birth and that explosion in YouTube in the early days, like, I just think it just shifts. It just shifts. The reality doesn't actually change. And I just don't think, I don't think it's different this time. I just think it's crazy because it's on top of the stuff we've already built. 
So it's just wild to see that next order of magnitude, and that's what we're looking at. We're literally at the cusp of it, looking up at this freaking mountain and going, holy shit, this is going to be wild. [00:54:41] Speaker B: Yeah, and that definitely ties into what I was asking you about when it came to workforces and stuff like that, that whole "it took our jobs" argument. At the end of the day, I don't really even subscribe to that, but I know that's something a lot of people worry about and are concerned about, and therefore I think it's important to ask it. But I also think, well, don't we want to not work? Don't we want someone to take our jobs? As long as we can still survive and thrive, isn't that a great thing? So I just kind of think, as long as you still have self sovereignty and you're not enslaved, yes, it sucks if your job gets taken and then you're a slave and now you're living on UBI and you've got to just queue up for your gruel every morning. Yeah, that sucks. But it doesn't suck if you don't have a job but you're able to use AI to do all of your affairs, to make some kind of a self sustaining income, to be able to gather resources and to be able to sit around and philosophize all day or whatever you want to do. Right, that's cool. So yeah, I don't really have that personal fear myself. I actually think that the measure of a society should be that we're working less and less. And if that goes to zero, I don't see why that's suddenly bad. You know, people go, oh, it's so great, in some countries they work fewer hours, whatever, they have a good life. It's like, yeah, but what if that number's zero? Do you suddenly decide that it sucks because it's zero? Surely zero is the end game. Just work on what you want and enjoy it. So yeah, that's what I have to say about that. But just on another point, because I'm going to keep jumping around here, Guy, because I just want to get your take on so many different aspects of AI, and I want to make sure that I cover the big ones, the big ones that people are kind of worried about. So we've seen recently a massive increase in facial recognition, all of these CCTV cameras and stuff that are put all over the world, and now they're all just being plugged into AI and it's like, cool, let's just do facial recognition. And not just let's look for people who are actual criminals, but let's see if their face looks like they're going to commit a crime. Let's analyze what people's faces look like when they commit crimes, and maybe we can just arrest them before they do it. And I've heard that even, you know, the kind of faux libertarian Milei over in Argentina is already starting to look at this stuff, and he's already starting to say, yeah, we're going to do like thought crime stuff. Let's look into how I can stop people before they commit a crime. I mean, I'm not sure how true those headlines are, because you can never quite trust the media. But what do you think about all that kind of stuff, how AI ties into thought crime, facial recognition and all that stuff.
And that that could very potentially creep into the physical space. We could see another era of this where you can't even get away from it. It's in the real world, in meatspace. Because our digital and our physical are going to merge very, very quickly. And there's absolutely something to that. But I think we'll also see a push against it and it will continue to show the dire consequences probably all too real of the degree of control when systems like that are too large and too centralized is I think naturally centralization is a, is anti evolutionary. It is naturally self defeating on a long enough time scale it will kill itself. Because like massive centralization without adaptation it's, it's very hard to adapt. And I think we are moving into a technological phase in which adaptation is the thing that, that creates survival or that that denotes whether or not you survive. And huge things that are incumbents that are locked in do not adapt. They do not adapt well at all. It's like kind of one of their, that, that's the, that is the way to describe something that is huge and entrenched and it's trying to control everything. It is attempting to not adapt. It is trying to control instead it is trying to force everything into its frame. And unfortunately I think that means, at least historically that means a lot of people dying before it's recognized what the consequences or the trade offs of that are. Because it gets bigger and bigger. I mean that's kind of what we're seeing with Fiat, right, is fiat was always just never going to work. And we just watched it get this debt bubble get bigger and bigger, bigger. And we're watching the unfolding of this and I think we have literally a global financial catastrophe of a proportion that we have never witnessed before in the history of mankind sitting on our doorstep. We are, we are kind of opening the windows and starting to let the breeze in and you know, the whole world is on fire. And I think we will see the same thing in the context, potentially we could see the same thing in the context of digital surveillance, basically infiltrating the, the physical world to that degree. And I think we will see the best and the worst of what we can imagine from that. I think we will see open societies that don't and increasingly going towards meat space as the place to actually be doing the building and be doing the, the quote unquote protection against these things. And so we will see like what the open, kind of autonomous, decentralized version of this looks like. And we will increasingly have places in the world where we do have this freedom. And, and then we will see the worst kind of totalitarian surveillance 1984 nightmare that, that will make 1984 look like a walk in the park. Like, like literally the most horrible things. But I think those things die just for the same reason that organisms that don't adapt to their environment die. And it will just be spectacular and horrible at the same time, but it will be the very thing that just shows the only direction that's really open to us is open innovation. And decentralization is simply what allows natural life to naturally progress. It is the only thing that is really in line with sustainable life. I think it's usually because things die that evolution is pushed forward. And that's, that's what kind of sucks about it. [01:01:48] Speaker B: Let me just, let me just expand a little bit on my kind of hypothetical then because. Well, it's not really hypothetical. 
I mean this, this facial recognition stuff is actually happening. But let me just advance a bit on it because imagine that even if we consider this in this kind of like decentralized world and that everyone has access to the data and all the rest of it in the same way that for example, like you have a situation right now where if you're trying to use banks or whatever, all of the banks have these kind of protocols which presumably there is some kind of a communication between banks, there's some kind of other interoperable layer there which is informing them how to run these systems where like, if there's a certain transaction, they'll go, that's a dodgy transaction. [01:02:31] Speaker A: Right. [01:02:32] Speaker B: I'm not sure exactly what it's called. You'd probably know the name of IT guy, because you know more about these kind of things than I do. But like, is it ofac? Why does that come to mind? [01:02:39] Speaker A: Ofac? OFAC is just kind of like a. A financial overarching, like kind of like censorship, risk or. [01:02:49] Speaker B: That sounds like what I'm trying to get at here. So like, you know, for example, when with the. [01:02:53] Speaker A: Probably more specifically is just normal AML stuff. [01:02:55] Speaker B: Yeah. [01:02:57] Speaker A: These transactions got flagged because they're too big or too frequently or. [01:03:01] Speaker B: Yeah, exactly. And this. And this happens, you know, really as a cost saving measure because it's like, okay, well, we don't want to like have to look at every single transaction. So we'll have these kind of rules by which you get flagged if it's. If you meet certain criteria. And the way that I see it is like, even if AI is fully decentralized and you haven't got necessarily the central kind of coordinators doing everything, people would kind of want to have access to that data. You know, even if it's just like, let's say the shop, a corner shop that just wants to be able to have a CCTV camera. They might be nothing, you know, they might, excuse me, they might not be related to a government or any kind of a corporation or anything like that, but they might just say, you know what? I think I'll tap into this open protocol of face recognition so that I get alerted in case, you know, this person might want to steal something from my store or whatever it is. Like, is there not still a risk that this stuff can be used kind of in a way that affects people's freedom, privacy? Yeah, ethically, questionably. Yeah, that's a better way to put it. Despite whether or not it's decentralized. [01:04:12] Speaker A: Yes. I think decentralization. I don't think it's important to remember that. I don't think, or to a caveat, that I don't think decentralization means it's only used in a good way. Quite the opposite is decentralization means it simply doesn't scale in the worst way. That decentralization just means that the things which tend to survive tends to sustain, will sustain, and where it makes sense to use those things on an individual basis. Like I record everybody who comes to my door. I have a security system. I like to do that. And if I had an AI that was doing facial recognition and, you know, recognize that, you know, oh, that was the same person who came two years ago and, you know, let their dog crap in my yard or something like that. You know, like, if I had some sort of a thing that was like doing that, I would consider that like a valuable piece of information for me. 
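To make the front-door example concrete, here is a minimal sketch of that kind of local, self-hosted face matching, using the open-source face_recognition Python library. The library choice and the file names (known_visitor.jpg, doorbell_frame.jpg) are illustrative assumptions, not anything described in the episode:

    # Minimal local face-matching sketch with the open-source face_recognition
    # library. File names are placeholder assumptions; it also assumes each
    # image actually contains at least one detectable face.
    import face_recognition

    # Encode a face you have already seen (e.g. a saved frame from two years ago).
    known_image = face_recognition.load_image_file("known_visitor.jpg")
    known_encoding = face_recognition.face_encodings(known_image)[0]

    # Encode whatever the doorbell camera just captured.
    new_image = face_recognition.load_image_file("doorbell_frame.jpg")
    new_encodings = face_recognition.face_encodings(new_image)

    for encoding in new_encodings:
        # compare_faces returns one True/False per known encoding supplied.
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            print("This looks like the same visitor from before.")

The only point of the sketch is that this sort of matching can run entirely on your own machine, against your own footage, with no centralized service involved.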
Like, I wouldn't really do anything, but it would, it would be good for me to know and I would be utilizing my environment as for what information I could gain from it. So I don't think that that will go away. And I think it will be valuable information for the business or whatever to see, to analyze what I stop at in the store, what I stop and look at for a little bit. In fact, I bet there are systems and there are businesses who are literally already doing that. They're watching customers in their stores and running analysis on where they stop, how things should be designed to, to catch people, to make sure that they pick this up off the shelf more like 10% more often, you know, like sort of thing. So without a doubt, all of that's going to happen. All of that's going to happen. It's, it's valuable, so people will utilize it. But I think the risk is that when things go wrong, you know, people are wrong 90% of the time, 95% of the time. Like 90% of businesses fail. Like, they just fail. So the idea of creating an overarching institution, I think by default, just by the lowest common denominator of what we should expect, we should expect 90% chance that it is wrong, that it's going to fail, that it's doing something unsustainable or going in a path that is ultimately broken, that has no happy conclusion, except that once the bigger something gets and the more entrenched something gets, I think the more arrogant it gets and the more people kind of assume that whatever it does can just kind of ignore the laws of nature and ignore reality. I mean, people literally think you can counterfeit the government can counterfeit their way to prosperity. It's absolutely batshit. It's bonkers. So we, I think we scale up from 90% chance of being wrong to 99 and a half percent chance of being wrong. Like whatever they do is just going to break. It's just going to collapse in on itself in the end. Because they now have put themselves in a position where they get to push that risk out onto all of these people who are not even choosing to participate in that risk. And that only increases their arrogance and their chances of doing something stupid because there's no immediate feedback mechanism for doing it wrong. And so the problem is the scale of the wrong. The scale of the bad decision. So if we can keep bad decisions at 50 people and then those 50 people pay for the consequences of that bad decision, rather than at the scale of 8 billion people, and watching a billion people die because we made a mistake, and we let this one institution basically do this across the board for everybody. Like the vaccines in Covid and stuff is a perfect example, is that. That's an experiment, man. That is an experiment. When you go out and you start vaccinating like 5, 6 billion people. Like, that is such a gross and horrific centralization of risk that it is almost unfathomable that we have done. We have been so stupid as to push that the way that it was pushed. Like, that is not. There's no robust thinking. There is no like, like game theory or like thinking about what the consequences of this might be. If there's even something slightly wrong here. If we have made even slightly a bad decision, we have. We have made this bad decision as big and as consequential as we can possibly make it. And that's what I think giant centralized institutions do. Decentralization doesn't mean more good decisions. It means bad decisions don't scale as well. [01:08:58] Speaker B: Yeah. Can you actually explain that a bit more? 
Why don't they scale as well? Why is it that there is a kind of glass ceiling on the scalability of bad decisions under a decentralized. Decentralized system that doesn't exist in centralized. When it comes to AI specifically? [01:09:15] Speaker A: Okay, well, I think the laws are kind of very widely applicable. I think the explanation for AI is no different than the explanation for anything else. [01:09:26] Speaker B: So. Okay, yeah, if you want to make an analogy, then, then sure. [01:09:29] Speaker A: That's. That's why I was. What I was getting at is I'm just gonna make an analogy is like, so think about it in the context of government. Government is the perfect example of scaling bad decisions as big as we can. And it's because people cannot just say, no, I won't fund that. Now imagine if everybody who didn't like Trump could just not go to the Trump store and not give them their money. Imagine if everybody who didn't like Biden could just not pay taxes to Biden and what would be the consequence? And same thing with, like, the Iraq war is if you were actually being stolen from the thousand dollars a month or whatever that we were actually putting towards the Iraq war, whatever crazy amount of money that is. Imagine if you actually got a bill for a thousand dollars. Like, how many people would have actually been like, oh, yeah, let's keep doing this versus the number of people who just don't even know what we were doing over there. Didn't even remember that we were at war for 10 years. You know, like, it didn't even, like, because it was completely hidden behind this giant centralization of risk and funding and, you know, behind the counterfeit machine, basically. I mean, there were. There was some sort of a. A thing is like somebody went around and just like, asked a bunch of, like, normal people in the street is like, who are we at war with right now? And it was during the. It was like at the end of the Obama administration or something that they did this. And it was like seven countries that we, like, literally had boots on the ground, like, fighting in and. Or were directly funding military by some other institution or whatever. And like, nobody could name any of them. Like, people just had no idea. Like, that centralization is exactly why that problem got as bad as it did and has been getting as bad as it. As it is. And when you have decentralization and like, nobody would have actively been paying bills for things that they couldn't even name. You know, like, if you had to. If you had to say, send in your bill for the war in Ukraine, send in your bill for the war in Iraq, people would just be like, nah, I'm like, I can barely pay for groceries this month. Are you kidding me? Are you kidding me? Like, I just had to decide whether or not the AC stays on or whether or not my kid gets food this week. This is the Swan bitcoin app, and this is how easy it is to just buy bitcoin whenever you're feeling like it, and I feel like it right now. And as it shows me right here, I still have $9,388 worth of my $10,000 of Bitcoin buying that has no fees whatsoever. And Now I have $10 more worth of bitcoin. I do this quite often. One of the really great things about Swan is the incredible number of resources they have for learning and understanding anything that you want to know about bitcoin. And importantly, they're going to teach you why you should not trade. They're not going to encourage you to trade. They are bitcoin only. If you're looking for an easy and reliable way to get into bitcoin. 
If you're looking to put bitcoin in your ira. If you're looking to put bitcoin in your business. If you're looking to do anything around getting on a bitcoin standard or getting exposed to bitcoin in your portfolio, check out swan. Bitcoin.com guy is my special link and you will find it very conveniently right. [01:12:52] Speaker B: In the show notes with these examples. How does that relate to how the, how it can be expressed in AI? I mean the only example I can think of, for example, at least off the top of my head would be let's say that you're using some kind of like self sovereign home AI thing and you're doing something with it. Let's say you're using it to, let's say you're using it to grow plants out on your balcony and using an AI to decide when they get watered and all the rest of it and you know, like how to move them in order to maximize sunshine based on your location. Maybe AI is figuring all out and you're like, oh, AI got it wrong and it sucked and I've lost a few plants today versus like, you know, what's it called? [01:13:34] Speaker A: Like AI running all of the crops. [01:13:36] Speaker B: Yeah, what's it called? Like that company that like just that evil company that just like does all this stuff. Like they're using it, they may like fertilize Monsanto, let's say Monsanto. [01:13:44] Speaker A: Monsanto, yeah. [01:13:45] Speaker B: And then Monsanto is just like, ah, yeah, we fucked up, sorry. Like there's going to be no crops for like a third of the global population this year. Like is that the kind of thing you're talking about when we're talking about, you know, the decentralization, but decentralization dynamics and how it plays out? [01:14:02] Speaker A: Exactly, exactly. Is that the, you scale up the risk. Like so going back to the idea of using the war as an example is that the Iraq war would have been over in a year if you actually had to charge people for it and you couldn't counterfeit your way through it. Like all of the fervor would have died down every time somebody wrote that $900 check or whatever to the government and it would have been exhausted of resources extremely quickly. And another great example is fractional reserve and the horrible debt system and the, the constant over leveraging that we have in the fiat system is how long does that survive? Well, in the fiat system it's lasted for 70 years or maybe let's say 50 years. Let's go back to 1970 and just say it started then. Even though technically 1970 was the result of it going for 30 years and not being reined in, but let's call it the beginning of 1970. It's gone on for 50 years and it is constantly bailed out. Irresponsible and fraudulent fraudulent behavior is constantly rewarded and the irresponsible businesses do not go out of business. That exact same practice happens in Bitcoin all the time. It's called ftx, it's called Mount Gox, it's called Luna. We've seen this happen over and over again. Notice all of those are dead already. Mount Gox was like massively leveraged in fractional reserve for like eight months and they died. FTX was like a year and they died. It would Never last for 50 years because you cannot fundamentally counterfeit the underlying system, the underlying architecture. At the end of the day, everybody withdraws to something that can't be systemically defrauded. It is only because of that systemic fraud that the fiat, this would, this, it would happen in fiat exactly the same way if essentially everybody was still trading and withdrawing to gold. 
But gold doesn't scale in a digital environment. So we are all fiat and we are all paper tokens and digital tokens in the fiat world. And so it can always just be fraudulently covered up. And, and then the cost can be hidden in the homeless situation, the cost can be hidden in the ever rising price of housing and the ever rising price of groceries and the fact that everybody just feels like stuff is getting harder. Nobody has any savings and no matter what happens, I'm running at a hundred miles an hour and I don't seem to make any forward progress. I got a 10% raise but I feel poorer than I did last year. Like that's where those costs are hidden, is that we paid for Mount Gox, we paid for the FTX of the Fiat system and we were lied to about it. So that's the risk of centralization. Decentralization means crash, restructure, get over it, move forward. It means cycles, it means the Bitcoin environment. And FTX dies when they do something fraudulent and irresponsible. The Fiat system, the growing, the massive centralization of all of that risk means that the people who didn't participate pay for it and the people who did it get bailed out and they get to continue their behavior and then it's normalized and it goes on for 50 years and they stay in power, they stay in charge and everything just keeps going until it gets so big that literally the entire thing collapsed. Like it just continue. It's, I think cancer is the perfect example. It gets big and it Gets bigger and bigger and bigger and until the whole organism dies and then everything needs to get restructured and that's what we need to avoid because cancer kills the organism. But if you know you have the ability to route around, the more decentralization you have, the smaller cancers are recognized and cut out essentially because everybody can just freely exit from the system. Everybody can just choose to not go to that store because it's on fire. So we're going to go to the store that's not on fire and do business with each other in a completely different fashion. And the more like it's easy to create all of these resources and like to provide our own necessities for us and the more access we have to skills and knowledge with something like AI that we would not have access to. I think it's very very likely that we see an explosion of local economies that we see a re shifting away from just using a big platform to basically the everyday person being able to have the equivalent of the big platform experience in a smaller way. And I think that's actually more natural. I think what we have been living through for the last 50 years is literally just unnatural. I do not, I don't see any. Everybody's like oh well markets just tend to go this way because look what what has happened. I'm like this is the report card fiat. Like we could not have a more perfect unnatural environment as to what we have right now. You are looking at the consequences. This is exactly what happens when you have a system that can print trillions of dollar dollars out of thin air and bail out literally unsustainable cancerous institutions. Like yes, the results are crappy because that's a crappy way to run a world. That's a crappy way to have a system of rules and a system of like money's ultimate job is sustainability is how do we have some sort of a feedback mechanism as to whether or not we're destroying resources or creating them. And we do not have that feedback mechanism. 
Like yeah, it's going to look, it's going to produce horrific results including enormous wealth disparity giant corporations that never should have been that big that aren't even sustainable. We have huge multi billion like deca billion dollar companies that don't turn a profit and never have. That's bonkers. That's bonkers. That can only exist because of an ever find more financing on top of more financing economy. It's because that they can rather than actually having to be profitable all they have to be able to do is refinance their old financing at a new scale. These companies, Uber would literally, I just don't think it would exist. I don't think like it's a brilliant idea, but it doesn't work at the scale that it is, certainly not at the prices that it does because it's not profitable. They just get to this point where there's so much money flowing up towards the center that they get to live in a world where costs appear to be completely different than they are with the boots on the ground and the actual people. And when that shifts, I think we will have to rethink everything that we think we know about corporations and institutions, how big these things can get and what sustainability and quote unquote efficiency even means. Because we've had this like huge drive towards every everything efficient which has only created massive counterparty risk. Like we've just seen scaled up our risk orders of magnitude while getting efficiency. It's like, you know, if everybody's, if there's one giant farm and everybody's reliant on that farm, okay, yeah, it's quote unquote the most efficient because there's no wasted food. But one hurricane comes through and everybody starves for the year. Efficiency is, is basically a version or what we are witnessing is the enormous increasing of risk. And we have thought of it like efficiency, but that's not what it is. We're missing a huge and very important piece of the puzzle is just the consequences of that ever growing centralization and what the, like basically how bad the trade offs of that are. And I think it's also there's. Oh man, I just had a thought. It's like 20 seconds ago. What was it? Oh, oh, oh is you know, we like to project and talk about how like you know, Apple is taking over everything in Google. We're all trapped into Google and we're all trapped into Facebook and take this as a given as just like there's no alternative and this is just how things are. But I think it's really important to remember that like just 14 years ago these were all disruptors, these were all startups. Not only is none of this a given, I think it's extremely easy to imagine or assume that they're going to be disrupted too because 22 years ago you could have easily said nobody will ever overtake NBC, nobody will ever challenge the broadcasting agencies, nobody will ever get rid of cable and the DVD giants and you know, all this stuff. Like it looked like they, those incumbents were just as set and just as powerful as the ones today. But we had massive disruption and I think we're just going to see it again. I think it's, it's easy to get lost into in the things are the way they are right now and forget how important the big trends are. Like the really big trends and because we're always just seeing what's right in front of us and kind of projecting out the current state of things into the future. [01:23:42] Speaker B: Yeah. 
And like, you know, the, the trend when it comes to a lot of things, like even though we sit around and go, oh, Google's so evil and Facebook's so evil and this kind of thing, and it's like, yeah, these are, these are risks. But like we're able to have a conversation like this to go out to, you know, a bunch of people who are going to listen to this conversation and have an audience and you can grow the audience theoretically as. As large as you want to go. Right. I mean, you've got independent media producers who are doing better numbers than major TV channels and stuff in some cases. [01:24:12] Speaker A: So like, you know, I mean, Joe Rogan still has like, if not, I think it is still the largest media show in the world. Yeah. Something the largest. Like listens to his episodes every time audience. [01:24:25] Speaker B: Yeah. And I mean, that's a good example as well of like where I'm using AI. I always take my. Well, not always, but quite often because, you know, I travel a lot and often recording in less than ideal places. I take my episodes and I run them through AI to basically clean it all up and kind of like make it sound like a bit. Yeah. Just to make the environment sound better. And it does a way better job than, you know, editing software did when it was just a matter of kind of like going in there and individual changing all your parameters. Like you run it through AI and it comes out and sometimes it literally sounds like studio quality. Like not always, but it's always like worth doing. In almost every case it's worth doing. And in some cases it's like night and day. It goes from literally an unusable recording to a usable one. So, you know, these are the kind of things that I think that's bringing a lot of, a lot of tooling to the individual which just previously didn't exist. And you know, that's kind of what I'm, what I'm focusing on here because yeah, like you said Google and you know, Facebook and all these things. Yeah, they were disruptors and then, yeah, we're kind of in their, in their world systems. To a degree. Like, you know, if someone wants to boot you off the podcasting platforms or whatever, they can do it. You can still know, use the Internet and have an RSS feed or whatever. But if the platforms don't list you, like, you know, if you're not listed on Spotify, not listed on Apple Podcasts, you know, that's probably like, you know, 80 to 90% of where most people get the podcast. Right? So like, you are going to be kind of screwed to a degree, but at least you have the opportunity. And at least if that did happen, if they were censoring left, right and center when it comes to, you know, for example, podcasting, like the reason that Google Podcasts, I know it's winding down now, may have already terminated, but Google Podcasts never censored podcasts, but YouTube did. And that's because the RSS feed is decentralized and because it exists on the Internet and any of these platforms can point to it. Right. So when it comes to, you know, these, these things, yeah, there, there's wall gardens, but there's ways around it and also there's decentralization within it. Right. And I think that's kind of how it's working with it with AI as well, at least from my perspective, personal use of it. It's like, you know, I can use these tools to give myself, you know, to have a podcast which is listenable. Like if I had terrible recording, it sounds like crap. Like I can make it sound a lot better and then I can still kind of put my episode out, for example. 
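As a rough illustration of that kind of cleanup, here is a minimal sketch using the open-source librosa, noisereduce, and soundfile Python libraries. It is classical spectral-gating noise reduction rather than whichever specific AI tool is being described, and the file names are placeholder assumptions:

    # Minimal noise-cleanup sketch: load a rough recording, reduce background
    # noise with spectral gating, and write the cleaned file back out.
    # "raw_episode.wav" / "clean_episode.wav" are illustrative names only.
    import librosa
    import noisereduce as nr
    import soundfile as sf

    # Load the recording as mono at its native sample rate.
    audio, sample_rate = librosa.load("raw_episode.wav", sr=None, mono=True)

    # Estimate the noise profile from the signal itself and suppress it.
    cleaned = nr.reduce_noise(y=audio, sr=sample_rate)

    # Write the result out for the normal editing workflow.
    sf.write("clean_episode.wav", cleaned, sample_rate)

A model-based enhancer would go further than this, but the workflow is the same: feed in the rough recording, get back a cleaner file, and drop it into the normal edit.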
Now that was something I couldn't previously do without a studio and without all of this investment. So rather than me having to have a 100 grand studio with like perfect, you know, with really good wi fi and all the rest of it, it's like, no, I can, I can just do that for video content creators and stuff. You know, you can, I mean, you can use like fake backgrounds, you can even have a fake avatar and stuff if you really want. If you don't even takes it down to the, the fact that like, if you don't have a face for a YouTube channel, like just use a fake avatar. Like, you know, you could, you can. [01:27:14] Speaker A: Go just use a girl with big boobs. [01:27:16] Speaker B: Yeah. How's your trade going? Exactly right. So these tools, I do think that even though there are obviously these concerns and stuff, I'm definitely with you there, that I think that you have to take the tools that are available to help you compete with the big conglomerates. I would rather have be using something like. Even if you are using something kind of a bit centralized like OpenAI, if that means that you can kind of have tools available to you that then helps you compete with, you know, some other company which you would never otherwise be able to compete with, well, that's still a win, right? Like a company is giving you tools to help you beat out other monopolies potentially. Right. So there's, there's. There's a kind of a big. There's a lot of various incentive structures that are kind of like overlapping and sometimes in conflict with each other. But overall, I think it's like a good thing. And like you say, you know, it's like the trajectory of technology in retrospect generally has been good. And, you know, I do think that that will continue to be the case. I want to start winding this down because I know we've been kind of going for a little while and we said we'd go for another, like, 10 minutes. How are you feel like if we go a bit over that? Are you good? [01:28:30] Speaker A: I'm good. [01:28:31] Speaker B: All right, cool. Okay, I'm going to ask you one more kind of, yeah, I guess, like more of a esoteric question just to kind of like move off the. Just the pure technological aspects of it. I think there's quite a lot of people, especially in the freedom community, and, you know, maybe not so much the bitcoiners and stuff, because these probably be the people who actually also think that bitcoin is part of the beast system and blah, blah and all this kind of stuff. These are people who are like, they're against a lot of technology. They kind of want to, you know, maybe like, go back to how things were, you know, go back to living off the earth and these kind of things. And, you know, I have respect for a lot of these people. I disagree with them, but I have a lot of respect for the fact that they are just like, no, you know, like, technology is kind of like fundamentally taking us in an evil direction and stuff. What do you think to these ideas of people who say, you know, like, AI is all part of the beast system, it's. It's taking us in a direction where we're more disconnected from the real and the raw. And, you know, even though it can have some advantages in our life, you know, we should kind of like, try to stay away from it because it's kind of connecting us with something which is more ephemeral or, you know, it doesn't have any kind of real substance to it. It's like we're trying to play God or kind of like be. 
Yeah, be some kind of a God in creating this AI thing and give it this literal, almost consciousness-like thing. There's almost like a kind of a religious or maybe a spiritual aspect to it that people are very, very fearful of. Have you thought too much about that side of it, and what are your general views there? [01:30:02] Speaker A: So I think there will be a natural increase in the push towards the traditional because of the aggressive amount of change that's happening. And I also think a lot of the change that we've experienced has been technology for the sake of technology, rather than technology for a genuine understanding or a genuine judgment or honest take on what's actually valuable about it. And I think we have really lost our way in a lot of that. And it's funny, I know I bring this up all the time, but I think a shift back to sound money will actually do more for recognizing that. And I think you see that a lot in the bitcoin culture too. And for anybody who's not in it, I'm not talking about crypto trading culture. I'm talking about the bitcoin culture that is very much like, build a citadel, have a local community, have a family. You know, there's an extreme going-back-to-the-basics culture in bitcoin. And I think sound money actually shifts things back that way, because ultimately we respond so much to pricing, we respond so much to how things are priced. You know, maybe a mortgage doesn't make sense for a lot of people, but people do it because it's the middle class savings account. It appears to be the financially correct decision. That alters our social dynamic, that alters the course of our lives, that alters our decisions about what skill to learn, what to go to school for, what certification to get, what to pursue in our lives. Because this is going to pay this, and this is going to pay that, and this is going to give me what me and my family, or I myself, want to accomplish in life. Like, finance is so unbelievably bloated. Finance has doubled in size since 1980 as a portion of the economy, when finance for the sake of finance is meaningless. It makes no sense at all. Finance is supposed to organize and divide up ownership and coordinate real things that are of value. You know, if it takes 1% of the value of an automatic mower that mows your yard in order to organize it, and then suddenly, three years later, it takes 4%, you have a problem. You got crappier at doing the coordination than you did. It should fall to 0.5%, just like farming used to be 90% and now it's 5%. Finance should go that exact same way. So all of the people that have aggressively run into finance, because suddenly you can just become a millionaire so easily, that's a fault. There's a problem. There's a huge fundamental problem there, where more resources are spent doing the exact same job, and that job is not any better. Because the job got crappier, we're wasting more resources there. And so kind of going on that same thread, back to how we organize our lives and how we accomplish everything that we want to accomplish. The goal is the end result. And that bitcoin culture is going back to basics, because I think we're recognizing the real price of things.
When you have sound money to compare it to, you know how stupid it is to go into debt to get this, or how stupid it is to waste a thousand dollars on this gadget that you might barely use. But when everything's going up in price and you can just get 0% financing for no reason at all in fiat, yeah, let's just go buy this stupid thing. But when you spend a thousand dollars in bitcoin and then two years later it's worth $6,000 in bitcoin, you're like, damn it, I wish I'd held onto that bitcoin. You have a genuine assessment of the trade off of buying that stupid thing. There's a fantastic website called Bitcoin or Shit, bitcoinorshit.com, and it just shows you, if you had bought Google Glass, if you had bought a smart mop or, you know, whatever it is, in 2017 or 2019 or 2012, whatever it is, this is how much bitcoin you could have had if you had just saved it. And I think that will be a massive mental value shift in what we choose to do, the skills we choose to pursue, the jobs we choose to have. And then that ends up creating a social dynamic change where the richest person in the neighborhood is no longer a lawyer, a politician, or a finance broker. They're a plumber or a construction worker. Somebody who produces real value, that does something consequential in the economy again. Because most of the wealthy people right now are literally people who produce the least value. Like, a lawyer is just there to navigate the bullshit billion page bureaucracy that we invented. It's like saying that I could just walk out to my car in the driveway and get in the car and leave, or somebody could create a completely fake institution in which there is now a maze and, you know, an obstacle course and all of this stuff to get to my driveway from my door. And then I now pay an entire industry of people to show me how to cheat to get around this obstacle course so that I can get in my car. And now this is like a billion dollar industry just to get through the obstacle course from my door to the car. It's like, all of this is meaningless. The whole point is I'm just trying to get in my car. And when that person's the richest one, it confuses and completely mutilates our ability to communicate value in the economy. Lawyers are that for a bureaucracy. The bureaucracy is just an obstacle course that makes no sense. It's not useful for us. Finance, again, is exactly the same thing, for actual ownership and dividing up the capital that we have in society. And it's grossly centralizing, and it keeps growing in size, it keeps getting more expensive to do the same job. And so I believe there will be not only a natural push against AI, because AI will feel unnatural, you know, it will seem like we are going too far in this direction, but we have also, I think, embraced so much technology and gone technology-everything for no reason. Like, for the life of me, the last thing I want is a smart washer. I don't want a washing machine with a computer. We're literally buying a washer and dryer right now, and it is so hard not to get one with a computer in it. I don't need a computer in my refrigerator. It just needs to keep shit cold. You know, we've gone in this direction so stupidly. You know, George Hotz has a really great example. There's a fantastic video by George Hotz.
I'm not sure if you know him; for anybody in the audience, he's the guy who got famous for jailbreaking the first iPhone as a kid. And then he also jailbroke the PlayStation, and Sony tried to put him in prison when he was like 17 years old or something. And he talks about all of these AI self driving car companies. Like, what is it, Waymo and Zoox or something like that. I can't remember exactly the names of these ridiculous things, but he talks about these being huge scams, because it's like popular science become reality, is that everything's so big and so stupid and so grossly uneconomical. But because we have just this open, endless financing-everything environment, it appears like these are smart decisions, but they were never sustainable. And he has been able to create an AI, a quote unquote self driving car, with a smartphone and just plug it into anything that's later than like 2019 or something like that, just because everything has drive by wire now. And it's free and open source software, and it works as good or better than most of them, because it doesn't even try to map everything out and draw a line and assess all of this stuff. It's literally an AI of just watching other people drive and then learning what they do in the environment. So it's actually better than a lot of these crazy things with all these spinning cameras and radar and, you know, a $150,000 car to build this self driving car. And he's done it with a $20 app and a smartphone, and he's accomplished the exact same thing. And all of these other ones still have these limitations. He has a fantastic video. I'll actually send you the link for anybody who wants to watch it, with Reason TV, where he breaks down a bunch of these things, like it's just economic insanity, it's just nonsense. It makes no sense to do it this way and they're never going to work. And I think that's kind of this come-back-to-reality moment. And it's another great example also of how AI can actually work at a small scale way better than it does at a big scale, like Waymo. I can't remember the name of this stupid company, but it's a perfect example of, like, how did they manage to spend a billion dollars and they don't have a working product? And George Hotz is doing this for free with a community of volunteers, and there's like 20,000 self driving cars on the road using his system or something crazy like that. And it's just a perfect example of, if the economy or if the money is lying to you about what the price of something is, about what the cost of something is, and you don't actually know where your resources are coming from because you're just creating finance out of thin air, issuing a billion dollars and giving it to a giant corporation, we end up spending exorbitant amounts of money and spending decades on projects that never made sense. Like, never made any economic sense from the start. So both of those things, I think, will heavily shift us back toward, okay, why do we actually have this technology? What's actually valuable about it? And where should we apply this? And where should we just stop and quit wasting resources and quit putting computers into everything? Because I don't need a touchscreen on my washing machine. I just need it to wash my damn clothes, you know? Yeah, I feel ya.
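For anyone curious what "an AI of just watching other people drive" can look like in code, here is a toy behavioral-cloning sketch in PyTorch: a small network trained to map camera frames to the steering a human driver actually applied. It is purely illustrative, with made-up shapes and random placeholder data, and is not George Hotz's actual system or anyone's production software:

    # Toy behavioral-cloning sketch: predict steering from camera frames,
    # supervised by what a human driver actually did. Illustrative only.
    import torch
    import torch.nn as nn

    class SteeringNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 1)  # predicted steering angle

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = SteeringNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Placeholder batch: in reality these would be logged dashcam frames
    # and the steering angles the human driver applied at those moments.
    frames = torch.randn(32, 3, 66, 200)
    angles = torch.randn(32, 1)

    for _ in range(10):  # toy training loop
        pred = model(frames)
        loss = loss_fn(pred, angles)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

With real data, the frames and steering angles would come from logged drives; the point of the argument is that the training signal is just recorded human behavior rather than a hand-built map of the world.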
And then there's, there's, there's one more point that I wanted to hit on this too, because it's interesting. I've been thinking about it in the context, it's funny, this is the context that brought it up was like ancient pyramids and all this stuff of these ancient societies that potentially have basically been able to build things that we really can't build today or that are so difficult and require so many resources. It's just like kind of wild to think about it. And we think of these as like, oh, backwards barbarians who had chisel hand chisels, tools. And it's like, you know, I think a measure of a society's wealth and prosperity is just how much capital they can allocate towards something is what, how well can you move resources? And there's a degree of like, how much do we really understand about those ancient structures and stuff? Like when we have quote unquote bricks, they have, they have literally literal solid rock that is so perfectly stuck together. There's no mortar joints. Like I don't build, we don't build any houses without mortar joints. You know, there's no mortar joints or whatever. Like they just sit together and every single one of them is a different shape and they are perfectly cut to fit with each other. Like that's crazy. That's absolutely bonkers. I don't understand. As someone who builds a lot of stuff, I can't fathom doing that out of wood. Like that seems insane to me. So I increasingly began to think about how could a complex, wildly successful, intelligent society advance to society die? How could it collapse? And of course there's the obvious stuff, you know, nuclear bomb, you know, all the whatever kind of disaster scenarios. But I think there's another element too, is that as to that's relevant to the AI discussion is that the more and more we are dependent on our systems, go back to the idea of the 90% used to be farmers, 5% now. Well that means that we could lose 5% of the population and starve because we could lose all of the skills to produce our food. We could lose that knowledge. And the more complex we become as a society, the more we are actually dependent on a smaller portion of the population for any individual piece of the puzzle. So another example is like let's say there are five people who know how to make a pencil. And there's five stages of making the pencil. There's the graphite, there's the wood, there's the paint, there's the eraser, there's the little metal thing, whatever, or the glue. And all five people know how to make all five parts of the pencil. Well, if three of those people die, well the last two can still just make pencils. We just can make fewer pencils if we have five people and they are all hyper specialized and this guy only knows how to make the glue, this guy only knows how to make the paint, this guy only knows how to make the pencil. And then we lose three people, there's no more pencils. We now have to start from the ground up and we have to rediscover how to work with the wood, the graphite and the paint, you know, or whatever it is. And so the more we get specialized, the more we are actually interdependent on each other. It's kind of like an organism. If you lose one organ, you die. But if it's a bunch of single celled organisms with their entire structures and they can all survive by themselves, well then you know, like if a human loses the liver, trillions of cells all die because those specialized cells are the only ones that do that job and the rest of it can't survive. 
We're increasingly becoming like that with a society, with our society. And I'm hoping that AI doesn't make us more dependent in that way. But it actually redecentralizes back toward generalization so that more people know how to make pencils rather than fewer. More people have the access to farming and more people can do their own computation. I think that's the goal here, is to spread out that intelligence as far and as wide as possible and make it as variant and as advanced and as fine tuned as we can possibly make it in all those different scenarios. And I think the only way to do that is to have open like open source, like Open AI for everybody that is accessible and that as many people can use specifically. Because if we don't, we run into that scenario of, like, if we're all on centralized AI, well, how quickly could the entire economy become dependent on that and then scale to a point where everything works so well because it's all working with the AI and then that system goes down, or a bomb goes off in just the right place? How many people die if we don't have an alternative to that? And that's what I think. Like, everything that can go wrong will go wrong. Murphy's Law is that the more we centralize and the more we put risk in one place and we are all on one platform or all under one government institution or one bureaucracy or whatever it is, the more likely we have those disaster scenarios and the more interdependent and specialized we become at the exact same time, the worst the consequence, the worst the consequences will be. And I genuinely think it's not a matter of if, it's just a matter of when. Like, do we realize it beforehand and do we build around it to solve those problems, or do we just keep going in this direction until something breaks? Because eventually something's going to break, like it just is. And that will not survive. I do not think the centralized version survives of the human story. If we just all go centralized, all go centralized, risk, centralized institutions, centralized systems, I think that is the. That is the course towards whatever mass death it takes to put us back on the right path. And I hope we can just escape it without needing to tell that story or live that story. [01:47:39] Speaker B: Yeah, actually, I guess just to kind of sum up my feelings about that, about answering that question about, well, is AI really going to be taking us back to basics, or is it taking us into this kind of unknown world where it's disconnecting us from the day to day and stuff. Like, I would say it kind of feeds off your point, really, that if we're able to use these tools, you can actually do so much more. I mean, rather than you just having that specialization, because you were talking there about, like, specialization, how many people know how to make a pencil, et cetera. But it's like, imagine that you can just say, hey, you know what? Like, I'm gonna, like, one day you decide, hey, you know, I want to create this thing. I want to build a computer. And whereas you would normally have to sit down and just kind of, you know, read through loads of the books and go through articles and this and the other, you can just kind of like, work with an AI to say, I'm going to do it. And the ability for you to kind of get to where you want to get to is much quicker. So it is, it's also kind of like decentralizing skills in a way that is just like, well, now you can be skilled in anything because you're using AI to help you acquire those skills. 
So now you don't rely on it. And, you know, for people who are, you know, unhappy about big pharma and, you know, all these kind of things, which is a, you know, like a big, a big deal. You know, everyone's just saying, okay, well, the way that you get, the way to, to be healthy is just take a vaccine or do this, or you've got to go to this doctor who's done a degree and stuff. Well, yeah, I mean, like, you can make that choice if you still want, or you can just say, well, no, I mean, all of, like, medical knowledge has been put into these models. And I'm going to use a decentralized model that hasn't been done by the, you know, the WTF or whatever. I'm going to use a model that I trust the people who have done it. I trust that it's open source and it's, it's been fed good data and I trust that data. And now I'm going to interact with that directly. And now I don't need to go to the, to the doctor, Now I don't. You know, I mean, like, it's actually kind of like bringing these things which have been permissioned previously, which have had these kind of, I guess, what's the word? [01:49:47] Speaker A: Like, gatekeepers. [01:49:49] Speaker B: When something's kind of like, yes, gatekeepers. Like now, now the gatekeepers can finally break down because you can kind of like skip right ahead and just say, I'm going straight to the data. I'm going straight to the source. I don't need to now go through this filter. So, yeah, that's, that's what I'm excited about, you know, that that's how I'm. What I'm kind of looking for. So I got two more questions for you guy, and then, and then we can close this off. [01:50:10] Speaker A: Just a quick analogy is. I totally, I totally agree with that sentiment. And that's like where I really hope things are headed. And I think it's perfect to just like, what's happening with development and coding and like, my access to things that I can build that I couldn't before, I think is just a perfect microcosm of where I think it all will go, you know, is that I have access to something, to a skill that I did not have before and the ability to understand it better. You know, I can build these little one page things and I can look at it and be like, okay, yeah, that makes sense. Like I can kind of read the Python code and I've been, I've become way, way, way, way, way better at troubleshooting code than I ever was before because I keep having to do it with chat, GPT or CLAUDE over and over again. And now I can just spot where I'm like, that's probably gonna, that's gonna screw me up right there. I don't think, I don't think it wrote it right. And I've been able to see where problems are probably going to arise and I've actually solved a couple before I ran into, before I got the error now and I hope to see, I hope everything goes that way. I hope that's kind of the experience with AI as we continue. [01:51:21] Speaker B: Yeah, yeah, definitely, definitely. All right, so these are kind of like pretty quick fire questions. So I, mm, it's two, but maybe two slash three. Okay, I'll start with the first one. So what are you most excited about? What is the number one thing that you're most excited about? That AI is going to enable, like either in your life personally or in society at large. [01:51:51] Speaker A: To be able to, you know, I think one of the biggest limitations of open source and especially with the. 
After seeing the application that we have built, I've recently got to use PearDrive for the first time, and I am already in love with it, and I don't even really have a UI for it yet, but it was actually the easiest, fastest solution to moving files. I think one of the biggest problems with open source and building, with getting the solution to the problem that you want, is the inability to commit the resources to really finishing the project. And right in line with the whole Devs Who Can't Code thing, I am so excited about AI being able to code, being able to build and finish and expand on projects without needing some developer that's crazy skilled and somehow just has an infinite amount of time on their hands to devote to this one project indefinitely. And the ability to basically work with simple resources and novice developers who can use AI to make that accessible to everybody. To put it in the simplest terms possible: what the hell can we build when AI can build for us? And it's really just about what we want and how badly we can go after it, you know. Okay, yeah, so that's my thing. [01:53:23] Speaker B: All right. The next one is a two parter. So what are your predictions for the next kind of six months to a year when it comes to AI? Big things that we're going to see. [01:53:35] Speaker A: It's fascinating. I'm already seeing it, but we're going to see a lot of commercials and media where it's clear somebody's used AI for VFX. I think we'll see some really fascinating tools for VFX and image generation and video generation finally come to fruition, for editing and altering footage that you've already created. And then I also think we will see some really, really fantastic open source models, maybe like Llama 3.2. Llama 3.1 is already awesome, but models that really start to get built on top of and have a lot of commercial success with businesses, especially internally to a single business. And just the integration of all of this stuff. I feel like we've still kind of been at arm's length with how we are using AI, and I think within the year most people will be using AI 20 different ways, directly integrated into stuff that they're doing. And many might not even realize that that has happened. But I think it will happen. I think we're really, really close to the cusp of just the tools that shift this from "I'm building and working with a developer and I'm using a chat app interface" to "this is integrated and now I can play." Like, now this is not just music theory, this is a musical instrument. And I think we're entering that phase very, very quickly, like right now. [01:55:10] Speaker B: Okay, final question. What is your most crazy prediction when it comes to AI? It can be any length of time, it can be 50 years, it can be 100 years. What's the thing that you're just like, AI is going to enable this totally unimaginable thing? [01:55:29] Speaker A: Damn, that's a good question. I wish I had time to think about that one. So one thing I was thinking about just the other day, and that I talked about in the last episode just as kind of an amusing idea, is the idea of an AI that's practically at the hardware level, where the idea of an operating system might kind of fade into the background. And your operating system and your computer setup may actually be specific to whatever you have.
At some point, we won't think about, quote unquote, installing an operating system. What we do is we have an AI that starts up in our machine, tests every path and every circuit in the device to assess what's there and how to use it, and then basically builds an environment for it, you know, with a Unix foundation or whatever the hell it is, and then essentially creates the perfect operating system, as optimized as it can be for you, and then essentially runs and manages all the installing and utilizing and crosstalking between all of the software and stuff on your computer. And you literally interact with the computer as if it has no limits. It will just adapt to whatever it is you need it to do, and it will figure out how to use the resources that it has available to it, however it can. If you just plug in a new GPU, it will just figure out how to utilize that GPU in the best way possible. Because one of the fundamental pieces is just assessing what the hardware is and how to design or tweak software to get it to work for your machine. So I wonder if operating systems will be AI first, and code and GUI and everything second. The AI will basically create it for your environment. [01:57:32] Speaker B: All right, cool. Cool, man. I have to learn more about that. Have you talked about that concept on AI Unchained? [01:57:39] Speaker A: I just randomly brought it up in the previous episode, but I did not really expand on it much, I don't think. [01:57:46] Speaker B: All right, cool. [01:57:47] Speaker A: Yeah. [01:57:47] Speaker B: All right, cool. Yeah, I need to look more into that one to see how that will play out, because that sounds like a total reimagining of how computers even work. But yeah. All right, man. Anyway, this has been awesome. It's been another epic conversation, so appreciate you coming on. Do you want to just let people know where they can find you, for those who aren't already, and then we can sign this one off? [01:58:07] Speaker A: Yeah, yeah, for sure. Check out Bitcoin Audible and AI Unchained. That's my AI focused show. This will all soonish be the Guy Swann network, but the feeds are still separate and everything right now, so just keep an eye out for that. But you can find me on Twitter at TheGuySwann, and you can find me on Nostr as TheGuySwann as well. My npub, I'll send you. I mean, you have my npub, so yeah, that'll be posted there as well. But yeah, those are probably the best places to get in touch with me. And I've got YouTube and Rumble too. A lot of fun video stuff coming there. That's where you'll find Devs Who Can't Code, by the way. And I highly encourage checking that one out. That's all AI and Pear-based stuff that I think is a big deal. I think it's really fun stuff. [01:58:53] Speaker B: Yeah, I would definitely recommend people check it out. I think Devs Who Can't Code is super unique, and yeah, for people who never imagined themselves as a programmer, it really does show the stuff that you can do. So yeah, you're doing great work, man. It's awesome working with you and it's always great catching up and doing episodes every now and again. So I look forward to the next one, and hell yeah, until next time, dude, take it easy. [01:59:17] Speaker A: Later, dude. All right, I hope you enjoyed that conversation. Don't forget to check out his podcast. I will have a link in the show notes if you want to follow and stay in touch with all of the stuff that Johnny is doing.
And as well, I will of course have the link in the show notes for Coinkite. Hopefully you have your cold card by now, because you had a whole episode; you could have listened and watched this and gotten your cold card at the same time. But if you didn't do that, you still have a discount code. And not only will that get you the easiest and most robust way to secure your bitcoin, but it also supports my show. So a shout out to everybody who does that, but don't forget to check out the links. Lots of other things to explore, and he has a fantastic show to dive into, and it'll be right there, right down there in the description. Super easy to find. Could not be more convenient. With that, I will catch you on the next episode of AI Unchained. Until then, everybody, take it easy, guys.
