Episode Transcript
[00:00:00] Speaker A: We basically had a fork in the road. I think there's a good chance, you know, we go the wrong way and you end up with a situation like China. We need to build these tools that serve liberty and not tyranny, and we need to design these systems correctly. And then on top of that, as I pointed out earlier, we have an opportunity to build a better product than the tyrannical systems, you know, before you're in too deep. So we need to do that so that people aren't lured by that.
[00:00:44] Speaker B: What is up, guys? Welcome back to AI Unchained. This is the place where we explore the self-hosted, open source AI alternatives and how to make sense of and apply this technology to send us to a future where we actually have liberty, and we're not trapped in these crazy centralized platforms that use AI to control us. That is what we are doing here today. That is what we do here every day. I am Guy Swann, your host, and we have got a conversation today that I'm pretty jazzed about. It's a tool that I just kind of stumbled upon. I think it came out of a hackathon; Jim will explain it a little better, but this was built pretty quickly as kind of a proof of concept, and I think it's such an interesting direction. I think it's the vision of how I suspect a lot of these tools will go, and of how to target what is most valuable, and figure out what is most valuable, in the AI technology: in LLMs, in image generation, and these sorts of things. And I think this is a really fascinating way to target it that's very relevant to the kind of tiny apps things we've talked about, and what we do in the Devs Who Can't Code series with Hope, and many other discussions and things we've dug into on this show. And so I immediately was like, okay, we should set up a conversation. Let's go ahead and make a show about this and dig into it, plus tons of other topics around AI: what his perspective on it is, putting his finger on both the value and also where we are in the economic transition with a very, very new and unique technology. So without further ado, we have a fascinating conversation with the creator of Cascader.
Dude, thank you for coming on the show. I'm very excited for this conversation.
Joey, a good friend that I was working with, actually pointed me to your service. It was specifically because we were talking about, you know, AI and all this stuff. He'd listened to a couple episodes of the show, and we were kind of going down the rabbit hole of what would make sense to build, and how I was thinking about how to target things, and how to go from the general.
Just plug an LLM into everything. How do we go from that kind of vague, untargeted aspect of it to, all right, where can we directly apply things, and where can we narrow down exactly what this thing is good at for some specific task? And then he was like, you have got to check this thing out: Cascader. It's exactly what you've been talking about. It's very similar to the kind of model that you talk about on the show with the whole tiny apps thing. And then I saw it, and I was like, oh my God, this is perfect. Let's dig into this, because I think this is a really strong direction and your app is really fascinating. It's one of the pieces of, okay, let's talk about targeting that, and how do we build platforms for creating tools rather than trying to think of every tool that's going to be made. And that's where I think Cascader comes in. But I don't want to get too ahead of myself. For the audience, can you introduce yourself? And I just want to say, welcome to AI Unchained, man.
[00:04:22] Speaker A: Yeah, Guy, thanks for having me on. I'm a big fan. Honored to be here.
So, yeah, Cascader is a collection of AI tools, and the idea is to just build as much interoperability as we can. We have some example applications of workflows and other micro apps, as you mentioned, that can achieve specific goals. And we think that AI is so much more than just chat apps. I mean, they're great. It's great that we've got such a great handle on language with AI where we're at now, but I think everything's going to be built on top of these systems. They're going to be sort of a raw material that goes into some of these more sophisticated apps that actually serve specific market needs, and we want to get that process going. And a big part of what I think about, when I think about Cascader, is that open systems just tend to win. So we're using Nostr to broadcast the services, and we're using Lightning as kind of the glue that can stick services together. And I often think about the fact that there was a specific AIDS research problem, about folding a specific protein, that researchers had been working on for about 15 years.
[00:05:36] Speaker B: Out of curiosity, you told me this already and I love it, but do you know any specifics about what that problem was?
[00:05:47] Speaker A: It was like, in fact, I might have it up in front of me.
[00:05:52] Speaker B: If you could send me the link like afterwards so people can do this because this is so cool.
[00:05:56] Speaker A: It was University of Washington researchers. They spent like 15 years, these were the brightest PhDs in the entire world, trying to solve this problem for 15 years. And then they open-sourced the problem as a video game, and it was solved in about 10 days; like a quarter million gamers just went after it, and they solved it. So you're just like: no matter how smart I am, no matter how much of an expert I think I am, at the end of the day open systems are going to win. And you've got to just give people the tools to try to solve their own problems, or to solve your problem, you know, at scale.
[00:06:30] Speaker B: Yeah, there's a couple of really interesting takeaways from that. One is that, obviously, they gamified it so that they could essentially pull information from the crowd. Because that's the beauty of decentralization, right? You have a million different perspectives attacking the problem, with a million unique ways to look at it, or to turn it to a different angle and attempt to broaden it. Central planning doesn't do that. All it actually does is limit your access to the capability and the brain power, the mental computational power, of the group, of the entire society as a collective. And that's kind of the beauty of the free market: the idea is to decentralize, to make this giant parallel processor of economics to determine what is useful, what to go after, where to spend resources and where to stop wasting resources, where that direction goes. But one of the other things that I find interesting about that little anecdote you bring up is that turning something into a game requires being incredibly detailed about understanding exactly what the problem is. You have to very cleanly and very precisely define the problem in order to turn it into a game that someone else could solve in a way that becomes useful information to you, you know what I mean?
[00:08:05] Speaker A: Yeah, you'd have to engineer the heck out of it. My understanding is that it was a collaboration between their biological researchers and their video game design school.
And so they got together, and that once again highlights the importance of cross-functional people getting together. Yeah, exactly. And personally, I draw on that a lot. I'm an electrical engineer by training. I did embedded systems, which is kind of a hybrid between computer science and electrical systems, and now I'm doing mostly software. But I see the value in bringing in all of that experience I got in other contexts and bringing it to bear on a new problem, and using that experience to design Cascader. That's actually where the name comes from: in circuit design, when you feed one circuit into another, it's a cascade, a cascaded system. An interesting example would be an amplifier: you cascade multiple stages together to get different effects. So it's the same idea with Cascader.
[00:09:17] Speaker B: Oh, that's cool. No, that really shines a light on how valuable generalized and specific thinking are in different contexts. Because one thing that I've always really disliked, well, it's largely a problem through most of the world, but not all of it, is how bad our education system is, and how it tends toward authoritative thinking. Like, in academia you have the biology crew and the chemistry crew, and the chemistry people are like, the biology people aren't allowed to say chemistry things; we're the experts, leave chemistry to us. And then vice versa: how dare you, as a chemist, write a paper about biology? That's our domain. You don't know anything about what you're talking about. Do you have a PhD in biology? It becomes very authoritative and competitive, when the person who deeply understands chemistry might be able to put a biological change or consequence in a very different context, which is extremely useful. There's not actually this division between disciplines that we try to impose in academia. In the real world, they're just categories that we loosely define. It's like asking where the edge of a species is when there's a whole bunch of mixed species and, you know, donkeys exist. You know what I mean?
[00:10:49] Speaker A: Yeah, I mean, that siloed thinking I think does limit progress. I think there's value in going deep; people have explored that. Malcolm Gladwell, right, the 10,000 hours. But there's also kind of a counterpoint. It might depend on the type of person you are, and everyone might need a bit of a mixture. One book that's kind of a rebuttal to that is called Range by David Epstein. And he brings out a lot of examples of how and why having a cross-functional background helps. Like, a lot of NFL teams will evaluate players higher if they know they're a good basketball player, a good track athlete, good at other things, because they're able to bring those cross-training skills to bear in a different context. That's a very simple kinetic example. But there are tons of people who invented things we use every day who discovered them through serendipity, from a chance collision of different fields of knowledge. I can't remember all the details in that book, but one of the main points is that there are different kinds of games: there are kind games and there are wicked games. A kind game is a game sort of like chess, where you know what the next step is. You get immediate feedback on what works and what doesn't work.
[00:12:09] Speaker B: Okay.
[00:12:10] Speaker A: And the information is immediately available to you. But then there are wicked games.
[00:12:16] Speaker B: Like the choice of what to do is very limited and like kind of a direct feedback. Yeah, yeah, that makes.
[00:12:22] Speaker A: Yeah, the feedback loop is tight. And then there are wicked games, which are much more challenging because you don't have that information. And so, I think the thesis of the book, and it's been a while since I read it, is that when you have wicked games, it helps more to have different tools in your tool belt and be able to change gears, versus just being specialized in one thing.
[00:12:43] Speaker B: Interesting.
[00:12:43] Speaker A: And yeah, no, that's fascinating.
[00:12:47] Speaker B: You said Range by David Epstein? Let me make sure I got that right. Yeah, okay.
[00:12:51] Speaker A: Yeah, that's cool.
[00:12:52] Speaker B: I always love book suggestions. Any book suggestions you have, I'm down for it. Especially if I haven't heard of it. That makes me happy.
[00:12:59] Speaker A: I'm the book guy at Pleb Lab.
[00:13:01] Speaker B: Oh, love it, love it.
[00:13:03] Speaker A: Everybody is like, yeah.
[00:13:05] Speaker B: You and I are definitely gonna get along.
[00:13:06] Speaker A: Yeah.
[00:13:09] Speaker B: So I wanted to ask what got you into this. You came into Bitcoin first, if I understand it right? Yeah. Okay. What directed you, or made you see AI as a solution and something to go after, and why was it specifically relevant to Bitcoin in your mind?
[00:13:33] Speaker A: Yeah. So it's funny, I originally got into Bitcoin thanks to your boy, Davey Smith. He helped me get the underpinnings that I needed in order to really be receptive to Bitcoin in general. And I consider myself sort of a general-purpose technologist, as I've stated. I did a bunch of different things, then got into Bitcoin, started hanging around the Pleb Lab crowd, and saw how useful, and in some ways essential, AI tools are these days if you're a developer or doing any kind of serious research. And so you just think about, you know, what if that gets taken away? Am I really living sovereign if I'm relying on one company to provide me with these services? And I think the answer is no. And you extrapolate that out.
I think we could see a situation very similar to the evolution of Google. If you look at Google initially, in the 2000s, they were pretty much just a high-quality product. I didn't see a lot of scandal or censorship. But as things have evolved, we've gotten to the point of Gemini, and you can see how far they've fallen. We've got the Gemini scandal, plus the fact that they can't even maintain basic APIs. You're better off scraping stuff on your own. So you're seeing the atrophy happen in real time.
[00:14:59] Speaker B: I didn't expect that. And search has become like a nightmare. Yeah, yeah.
[00:15:05] Speaker A: We're kind of in that honeymoon phase with GPT. I see them probably following a similar track. And this is a story as old as time. You look at IBM; the same thing happened with them. So it's almost like the empire cycle, but in a company; you're going to see a similar thing. And I think we're early enough, and the open source tools are good enough, that we have an alternative. We've seen this movie; let's just not play that movie, and let's build alternatives right now.
[00:15:35] Speaker B: Yeah, no, I love that. And actually, I want to get your general thoughts on this before I get into the next piece. Getting into AI, at least for me, was a bit of a mess. I thought I was going to launch this show really quickly, and then, God, I spent two, three months just trying to build out a network of, all right, who the hell is even saying anything useful here? And it's still just a mess, but I've at least pulled together some interesting pieces of that network.
What are your general feelings on the signal-to-noise ratio for AI? And also, are we in a bear market, or are we in a bull market still? What's your general sense of the AI market and of trying to find good AI signal, so to speak? And maybe, how do you solve that problem?
[00:16:36] Speaker A: Yeah, I think we're probably starting to hit the law of diminishing returns with just the basic chatbot app, the LLMs. I think for sure it's going to take a little more effort to unlock the next level. So it's hard for me to say if we're in a bull or a bear market; I would say maybe neither. I'd say we're kind of hitting the end of the initial usefulness of just pure LLMs. I could be wrong; there could be other use cases. I think people are starting to understand how you can do fine-tuning, how you can do RAG, how you can do some of these other techniques that are very useful for getting a model to be an expert in a specific field. That's great stuff; I'm not knocking it. But fundamentally, it seems like almost everywhere in human endeavor, the last mile of actually delivering value to an end user is the limiting factor, more so than the technology itself. I mean, if you look at a lot of applications that have become popular in the last five years, say non-AI applications, I can't name any off the top of my head, but I bet a lot of them were possible well before, many years before, they actually became successful. It was just a matter of finding the right product-market fit, finding the right UX, and really knocking that out of the park, which is something that's very challenging to put together, because you've got to have the technical chops to make something real and reliable, but also to understand other humans, how they think, and what they happen to need right now.
[00:18:09] Speaker B: Yeah, there's actually a really great answer that Steve Jobs gave, I believe to somebody in the audience who was kind of pestering him, being like, why didn't you use this tech instead of what you used?
[00:18:29] Speaker A: I remember this.
[00:18:30] Speaker B: And the way he framed it, which I thought was so great, is that it's really easy to look at a technology and be like, okay, how can we apply this to all these different things? And it's extremely difficult to then be working with that technology and recognize when you have to let go of it or not use it, because you have to think from the perspective of the user, who doesn't know anything about the technology or what's limited here or what's there: from basically the complete polar opposite end. And think, how are they going to interact with it? And the kind of creativity and risk that's necessary in trying something there, and then figuring out whether that technology actually fits. I feel like LLMs, like AI in general, are suffering so badly from that problem, because the technology is so exciting and it's clear that it's useful for a lot of different reasons. I mean, it feels like you're talking to a human in a computer, so obviously that's going to be fascinatingly valuable at some point, for some reason. But how do we target it? How do we recognize its limitations, what it's good at, what it kind of sucks at, and find that middle ground and the creativity for implementing it for a user in a way that makes sense, rather than just talking to a computer, which is neat but not really useful if you don't know what to do with that conversation, or if the LLM doesn't have a very explicit mode of operating or performing the actions that you need, rather than just summarizing a thing, giving you a suggestion, or giving you kind of generic feedback, so to speak.
[00:20:25] Speaker A: Yeah, I agree with you. This is a story as old as, I guess, Steve Jobs, right? It's probably older than that. Fundamentally, it's about market pull versus technological push. And I agree. In my eyes, LLMs are kind of like a steel beam. They're great raw material, but you have to figure out how to integrate them with other applications, and you've got to get more shots on goal. That's why we've been going after the tiny app use case you mentioned; that's been a huge part of our philosophy, a fanatical obsession with finding the market pull and iterating quickly so we can find it. So today we have about 12 applications inside Cascader, and we're still figuring out how we intercombine them, how we take these to another level, really embracing that bottom-up approach versus being top-down. And the beauty of open protocols is that they allow you to iterate much more quickly, because when you think about it, you have two things going on: the people and the technology. On the technology side, I can set up a Lightning wallet in 10 seconds, and set up LNURL or L402, whichever, in 10 seconds. I can set that up very fast. I don't have to set up a bank account, I don't have to wait several days, I don't have to KYC, I don't have to do all this other red tape. The same thing goes for Nostr. One of the applications I'm excited about is that you can kind of build your own personalized newsletter with Cascader. That's called the Chad Bot, the Cascader Hub for Acquisition of Data, and it allows you to select YouTube channels, subscribe to them, and just get summaries straight to your DMs. So it's a great way to curate your information diet and get a survey of everything that's happening without investing tons of time.
[00:22:24] Speaker B: Actually, pause that thought real quick. One thing that's really interesting about that is that a lot of the reason you subscribe to somebody else's newsletter is because you're specifically trying to get that summary, that shortcut across a bunch of things that are going on. And it's crazy to think that if you can find the signals specifically, you can then create that newsletter yourself, because you can have AI essentially do that job for you: go out and search those things that you have marked down as signals coming from this direction, then aggregate it and send it to you. That's really cool. I hadn't put that one in my mind: okay, well, that's a really great use case. I've been thinking that a lot of the AI newsletters and stuff I get are probably just AI-driven anyway. It's like, well, why wouldn't I just get AI to do that for me?
[00:23:16] Speaker A: Yeah, and why not get it to do that for you, with an AI that doesn't have a dog in the fight of whether or not you consume the content in the first place?
[00:23:23] Speaker B: Yeah.
[00:23:24] Speaker A: The beauty of Chad Bot is, and it was kind of an accident, my focus was on helping my friends who record podcasts, and other people we want to target who record podcasts, just making it easier for them to get essentially notes of the entire conversation straight to their DMs, so they don't have to do any extra work or plug things in. Because we already had a module that did that, we added on the Nostr DMs.
And the interesting thing we kind of found out by accident was that it's actually a great way to curate your information diet. Because, A, it doesn't have a dog in the fight of whether you watch the content; it just tells you what's in it. The temperature's down to zero, and it just tells you what's in it. So that's point A. And then point B is, when you consume content by reading it, I think it has a tendency to go more through your rational brain, through your prefrontal cortex, and it takes out the emotional manipulation. Like, I have a bot that does Bloomberg. They have a certain Keynesian way of looking at things, but when you actually read on paper what they're saying, you get the brass tacks of what they're actually saying, what they're commenting on, and maybe you can draw your own conclusions without necessarily just taking in their way of looking at it. The same thing goes for even really biased news sources like MSNBC and Fox.
It was kind of an accidental discovery, which goes back to what we were saying earlier: you've got to experiment. Sometimes you've got to find some of these things out empirically. And that's the beauty of something like Cascader, that gives you that wide-open sandbox. It's a wide-open canvas we can paint on and figure out where the value lies.
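As a rough illustration of the "temperature down to zero" idea, here is a minimal sketch of what a neutral summarizer request might look like. The function name, model name, and system prompt are hypothetical assumptions, not Chad Bot's actual implementation; the point is that setting `temperature` to 0 in a chat-completions style payload asks the model to report what is in the transcript rather than editorialize:

```python
def build_summary_request(transcript: str, model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completions style payload for a neutral summary.

    temperature=0 makes the output as deterministic as possible: the
    model reports what is in the transcript instead of embellishing it.
    (Model name and prompt wording are illustrative, not Chad Bot's code.)
    """
    return {
        "model": model,
        "temperature": 0,
        "messages": [
            {
                "role": "system",
                "content": "Summarize the transcript factually. "
                           "No commentary, no recommendations.",
            },
            {"role": "user", "content": transcript},
        ],
    }

# Example: turn a raw episode transcript into a summary request.
req = build_summary_request("(00:00) Host: welcome back to the show...")
```

The payload would then be sent to whatever LLM endpoint the service proxies; only the deterministic-settings idea is the point here.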
[00:25:03] Speaker B: This show is brought to you by the COLDCARD hardware wallet. My favorite setup, which I know I talk about a lot, is the Nunchuk wallet on mobile, which just talks NFC to the COLDCARD whenever I need to sign. Nunchuk does not hold my keys; they stay securely on my COLDCARD, not connected to the Internet, not vulnerable to a phishing email or any malware or anything like that. If I ever need to send a transaction, I just create the transaction in my Nunchuk wallet, tap it to my COLDCARD, hit sign, tap it again, and off it goes. There is no easier interface or way of interacting that grants a higher level of security, in my opinion, than that right there. It's genuinely incredible to me that we even have this capability in the Bitcoin space. And Coinkite has made an entire suite of fascinating security products and just fun Bitcoin devices, like the BLOCKCLOCK: just connect it to your node and have it show the bitcoin price or the block height, right there on your desk in this really cool package. If you haven't checked out what they have to offer, you definitely need to. And when you go over there, do not forget that I have a 9% discount code: BITCOINAUDIBLE, all one word, gets you 9% off. You can go through the link in the show notes to go right there, or just go to the store, browse around, see what you want, and get yourself a solid hardware wallet. Experience the tap-to-send with a COLDCARD hardware wallet; it's just kind of magical. And get notified for the Q1; I'm really stoked about my cypherpunk BlackBerry, the new model that's going to be coming out. So check that out as well.
And yeah, don't forget, 9% off; the link is right in the show notes. Go check them out. You know what, since I guess we haven't really explained it much, we've kind of vaguely talked around it, why don't you just explain Cascader, what the tool, the toolkit, is? At least in my mind, it feels a little bit like a Swiss army knife of different individual things that you can do with AI, and even some things that aren't necessarily AI-specific, just individual tasks. So explain the overarching vision, where you're taking it, and why you've targeted the things you have. I think the idea of targeting media content creators is really, really smart, by the way. And also, they're the hard side of the network for Nostr. So if it can be made to work really well with Nostr, it's a perfect way to both benefit from Nostr and also be a huge benefit to Nostr in the ability to execute on and create content. But anyway, I'm a little ahead of myself. Why don't you just give me the pitch for Cascader, what it is and why you're taking it in the direction you are.
[00:28:10] Speaker A: Yeah. Cascader is a collection of AI services, and non-AI services alike, that are L402-compatible endpoints. So they're paid Lightning endpoints that are actually broadcast on Nostr. We created our own spec, shout out to Topher and Christian, who helped us make it, and it basically uses Nostr as a billboard: it announces what the services are, where you can find them over HTTP, and what they cost, and you just go to those endpoints and pay them. So that's kind of the back end side of it. Now, the front end side is Cascader, and it can take these services and consume them individually. We have a ChatGPT proxy, we have a Stable Diffusion proxy, and other services like that; we even have one that downloads YouTube videos. But those are kind of the raw materials, as I was saying before, and we combine them in different ways. For example, we have a YouTube agent which can take in a YouTube URL, pull in all of the content as context, and then give you a summary, or give you an article, or answer a specific question you may have about that YouTube video. So that's an example of taking these modules and building upon them. Cascader is really about three things. First, it's the modules, because without the modules you don't really have anything; you need bottom-up systems and you need good modules. The next step is interconnecting them, figuring out how they could be connected on a composability, manual-workflow basis. And then eventually it's about figuring out how they should be connected based on the context.
And that's how we get, in my opinion, from where we are today to this idea of the machine-payable web, where you can create a businessperson in cyberspace, give them a budget in Bitcoin, and then have them go out and accomplish a bunch of tasks for you. I think it's going to take us some time to get there. It took imbibing the entire Internet to get ChatGPT to talk like, you know, a drunk 20-year-old, right? Just to exaggerate a bit. So I think it's going to take us some time, and this is kind of the bridge where we can get some shots on goal. We can learn how these services interconnect, how you can open this up to the creativity and the genius of the whole world, take that, feed it into another level of AI, figure out how to interconnect them based on the context, and provide value in a totally automated way.
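The paid-endpoint flow described here can be sketched. Under the L402 (formerly LSAT) convention, a paid API responds with HTTP 402 and a `WWW-Authenticate` header carrying a macaroon plus a Lightning invoice; the client pays the invoice and retries with the payment preimage. Here is a rough Python sketch of parsing that challenge header (the macaroon and invoice values below are made up for illustration; this shows the general convention, not Cascader's actual code):

```python
import re

def parse_l402_challenge(header: str) -> dict:
    """Parse a `WWW-Authenticate: L402 macaroon="...", invoice="..."` challenge.

    Returns the macaroon and the BOLT11 invoice the client must pay before
    retrying with an `Authorization: L402 <macaroon>:<preimage>` header.
    """
    if not header.startswith("L402 ") and not header.startswith("LSAT "):
        raise ValueError("not an L402/LSAT challenge")
    # Pull out key="value" pairs from the challenge.
    fields = dict(re.findall(r'(\w+)="([^"]+)"', header))
    if "macaroon" not in fields or "invoice" not in fields:
        raise ValueError("challenge missing macaroon or invoice")
    return {"macaroon": fields["macaroon"], "invoice": fields["invoice"]}

# Example challenge a paid endpoint might return alongside HTTP 402
# (values are fabricated placeholders):
challenge = 'L402 macaroon="AGIAJEemVQUTEyNCR0exk7ek90Cg==", invoice="lnbc10n1p3..."'
parsed = parse_l402_challenge(challenge)
```

The Nostr side, per the conversation, is just a broadcast listing of where these endpoints live and what they cost; the payment handshake itself happens over HTTP as above.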
[00:30:50] Speaker B: Yeah, that's so cool. It's funny, y'all have basically created a much cleaner version of a couple of things that I've specifically built for myself in kind of a hacked-together way. But also, for anybody who's listening, I know a number of people who don't want to do a subscription to ChatGPT, and this is a really good option. And, you know, I don't know how long something like this survives if OpenAI gets mad about it or whatever, but the idea of doing a ChatGPT proxy, because you have an API, is such a great way to do it, because there's a lot of people who just don't want their identity attached to it. They don't want to give up personal information, and they don't want another subscription to manage, or they don't have the need for one. It's like Magnific. Have you seen Magnific, the service?
[00:31:45] Speaker A: I'm not familiar with it.
[00:31:46] Speaker B: Okay. It's a really cool image service. If you generate images, they're generally really low resolution, especially if you're doing a lot of iteration and you want to generate a bunch of them. It's like 512 by 512, right?
But you want to upscale them and then create a lot of very specific, fine-tuned detail. That is what Magnific does. You can upscale stuff to 8K, give it a creativity slider, and it can create incredible depth in the image and really upscale it. It looks really freaking cool. But I only need it on like two images a month.
You know, it's very rare that I would actually need it very much. Some people might use ChatGPT the same way, you know, if they don't have a daily podcast where they need a description every day, need to pull a summary, need to pull links from every single episode for things that were mentioned. That's another great use case, and it would probably be buildable in Cascader pretty easily, like the thing I have on my local machine. We've already mentioned a lot of links, two books, I think, and a Steve Jobs video, and it's perfect to give that transcript to the AI and ask: what are the things we mentioned outside of this show that we should link to for people in the show notes? And it will give a bullet-point list of links that's extremely useful. But in that same context, not everybody needs that every day.
Some people might need it twice a month. And who the hell wants to pay $20? $40? Magnific is $40. Even doing media and image work, I don't want to pay that. I even sent them an email: is there any way I can do a one-off? I'll pay four times the price of the tokens if I can just pay like two bucks. If somebody could do that with Lightning. Cascader is such a great example of trying to get the economics right for providing the service. And it's funny: without Lightning, without Bitcoin, you don't really have the capability to do that. It just becomes messy.
And that's such a cool way for anybody who's listening to be able to use ChatGPT and find out if it fits in your workflow, find out if it fits what you need it for. Use it for a couple of days, and you didn't give them your credit card number, you're not on a subscription. You just throw a couple of sats at it and test it out. If it doesn't work, it doesn't work. And if you only need it once a month, you pay, you know, 20 cents or whatever the heck it is rather than $20 to use it.
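The per-call economics being described can be made concrete with some quick arithmetic. The numbers below (500 sats per call, a $60,000 BTC price, a $40 subscription) are hypothetical, chosen only to show the break-even logic, not actual CASCDR or Magnific rates:

```python
# Illustrative break-even arithmetic for pay-per-call vs. a flat subscription.
# All prices here are hypothetical examples.

SATS_PER_BTC = 100_000_000

def monthly_cost_per_call(calls: int, sats_per_call: int, btc_usd: float) -> float:
    """USD cost of paying per call over Lightning for a month's usage."""
    total_sats = calls * sats_per_call
    return total_sats / SATS_PER_BTC * btc_usd

def pay_per_call_is_cheaper(calls, sats_per_call, btc_usd, subscription_usd) -> bool:
    return monthly_cost_per_call(calls, sats_per_call, btc_usd) < subscription_usd

# Two upscales a month at 500 sats each, BTC at $60,000:
# 1,000 sats is about $0.60, far below a $40 subscription.
occasional = monthly_cost_per_call(2, 500, 60_000)
```

The point being made in the conversation falls out directly: for occasional use, per-call pricing is orders of magnitude cheaper than a subscription, and only heavy daily users cross the break-even line.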
[00:34:29] Speaker A: Yeah, Lightning fixes this, on multiple levels. Right. You mentioned two big things: the cost, and just the fairness of being able to pay per call. And then the flip side is privacy. You have really, really good sender privacy. I don't have any information on you. I don't really want any information on you. Any information I do collect is only to make the experience better for you, and I'll delete it upon request if you ask me to. I'm a big privacy advocate, and I think that's tremendously important. On top of everything else, this thing can already make a lot of inferences and assumptions about you and collect a lot of data on you. Do you really want it to take your soul like that? I say no, and I think we need private solutions. And back to your point earlier about ChatGPT eventually getting mad about this.
I think, as I mentioned earlier, they're going to follow the same arc as Google. This is a temporary armistice for really anyone that's developing open protocols, developing with a freedom and privacy mindset. It's eventually going to come to a head, so we need to start planning now. I myself put this in here mostly for speed and for ease of use, but it's going to come to a point where we're going to have to have our own open-source protocols. And the good news is that I think in the long run those are going to win. I think this is a replay of what Microsoft tried in the past, when the Internet was early and they tried to build their own closed Internet.
It's funny, because it's the same company, but different people, trying to do the same thing with AI. I think ultimately they'll lose, but we need to stay vigilant and keep pushing for solutions that decentralize things.
[00:36:22] Speaker B: Yeah, for sure. Now tell me a little bit more. I opened it up again just so I could see, because this is not something I have explored as much as I should have. By the way, the link, which will be in the show notes, is cascdr.vercel.app, that's Cascader spelled C-A-S-C-D-R.
Or, without having to spell all that out, you can also.
[00:36:51] Speaker A: Go to cascdr.xyz.
[00:36:53] Speaker B: cascdr.xyz. Okay, gotcha. Yep. But what's this one called? The Logic App demo.
So explain to me a little bit about what this is, what the path for this sort of development is, and what you're trying to get out of it.
[00:37:16] Speaker A: Sure. So these are example applications. They're static; I want to eventually make them composable. But this is the general UI. The first one you'll see on the Logic App demo is very straightforward: it takes a ChatGPT request, which can be a very vague, crappy request, and transforms it into a pretty good prompt for a text-to-image model. So you could just say, draw me a skyline of Austin from Lady Bird Lake, and it'll go in and write you a poetic paragraph of a prompt that will tend to get you better results. That was a very basic example I was able to build with our first two modules, which are the ChatGPT proxy and the Stable Diffusion proxy. There are other examples. Another good one is the YouTube transcript: you plug in a YouTube video, and it combines my YouTube extraction service with the Whisper service.
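A minimal sketch of the two-module chain just described, a prompt enhancer feeding a text-to-image step. The service calls are stubbed with plain functions, since the actual CASCDR endpoints and request shapes aren't specified here; only the composition pattern is the point:

```python
# Sketch of the Logic App pattern: module 1 (a stand-in for the ChatGPT
# proxy) rewrites a vague request into a detailed prompt; module 2 (a
# stand-in for the Stable Diffusion proxy) "renders" it. Both are stubs.

def enhance_prompt(vague_request: str) -> str:
    # Stand-in for the ChatGPT proxy module.
    return (f"A richly detailed, photorealistic rendering of: {vague_request}. "
            "Golden-hour light, wide angle, high dynamic range.")

def text_to_image(prompt: str) -> dict:
    # Stand-in for the Stable Diffusion proxy module.
    return {"prompt": prompt, "image": "<png bytes>"}

def logic_app(vague_request: str) -> dict:
    # Compose the two modules, as the demo chains them.
    return text_to_image(enhance_prompt(vague_request))

result = logic_app("skyline of Austin from Lady Bird Lake")
```

The value is in the glue: the user types one vague sentence and the chain handles prompt craft and rendering without them seeing either service directly.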
So that's another example. And the third example we have is sort of similar to what you were talking about: helping you shop for books or other items. It takes an image in and can actually tell you what the item's value is in Bitcoin, so you can sell all your belongings and go all in on Bitcoin right now, right before the dip ends.
[00:38:44] Speaker B: That's funny.
[00:38:45] Speaker A: Memeing aside, there's a shopping assistant there that can actually tell you where to buy things. I can't tell you how many times I've had a piece of hardware and didn't know exactly what it was called. I snap a photo, and now I'm able to look it up in real time and find it on Amazon or Home Depot. So these are just examples showing how these services synergize. By themselves they're just raw materials; their potential isn't truly tapped, and there's a lot of opportunity to innovate and improve on this. An area of focus on our roadmap is building out this composability, building out sort of a lookup table for how these fit together, and publishing those on Nostr. Then people can take those and do whatever they please with them. Even if they don't use my front end, they can do it however they please and figure out new ways to create value or combine things.
[00:39:42] Speaker B: So when you say they don't have to use your front end or platform, what do you mean? I'll give the overview, the big picture, real quick: basically what you're doing is building a set of tools so that other people can take the primitives of this thing and build their own little tiny apps within Cascader. Right? As I understand it.
[00:40:09] Speaker A: Yeah. So Cascader will be sort of the front-end workbench. We happen to run a lot of the services for now, but we've published most of them, maybe not all of them, as open source. So somebody could take them today and put their own twist on them, maybe even run them on their own data sets: they could integrate them with a different application, a different set of data, or even a different model if they chose to. All it does is broadcast these endpoints on Nostr and where to find them, and my application goes and finds them and uses the information it has about how to combine them, how to parse the result, and how to display them to the user. That's my goal long term, and if anyone's listening who wants to develop modules, hit me up on Nostr, it's just Cascader on Nostr. I'm down to collaborate and bring in new ideas.
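Broadcasting a service endpoint on Nostr amounts to publishing a signed event that clients can query from relays. The sketch below builds such an event and computes its id per NIP-01; the kind number (31990) and the tag names are assumptions for illustration, not CASCDR's actual schema, and a real publisher would still Schnorr-sign the event before sending it to a relay:

```python
import hashlib
import json
import time

def nostr_event_id(pubkey, created_at, kind, tags, content):
    # NIP-01: the event id is the sha256 of this canonical JSON array,
    # serialized with no extra whitespace.
    payload = json.dumps([0, pubkey, created_at, kind, tags, content],
                         separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def service_announcement(pubkey, name, endpoint, sats_per_call):
    # Hypothetical announcement shape: kind and tag names are illustrative.
    created_at = int(time.time())
    kind = 31990  # assumed: an application/handler-style event kind
    tags = [["d", name],
            ["endpoint", endpoint],
            ["price", str(sats_per_call), "sats"]]
    content = json.dumps({"name": name, "about": "AI service module"})
    event = {"pubkey": pubkey, "created_at": created_at,
             "kind": kind, "tags": tags, "content": content}
    event["id"] = nostr_event_id(pubkey, created_at, kind, tags, content)
    return event  # still unsigned; needs a "sig" field before publishing

announcement = service_announcement("ab" * 32, "whisper",
                                    "https://example.com/whisper", 21)
```

A front end like the one described can then subscribe to these events on relays, read the endpoint and price tags, and wire discovered modules together.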
For now, this is what we've started with, but we're actively building out a bunch of new tools, and we're already working with a couple of open-source devs. But we're always open to new ideas, because as I said, I don't think it matters how smart any of us are; together we're going to build faster and better than these closed systems. And this is one of the ways I think we haven't really explored the power of AI and open systems.
[00:41:36] Speaker B: Yeah. It's funny, the tiny apps that I've built were all with ChatGPT, but you use the example of getting a YouTube link or getting a file and then doing a transcription, and I'm looking at this as exactly that. This is also something really neat that I'd highly recommend anybody go check out. It's right on the main page, in the middle; it just says the Logic App demo. This thing is really intuitive, and it's built like a node system, which I have fallen in love with since I started using Audio Hijack and also Fusion, the compositing suite in DaVinci Resolve from the Blackmagic crew. They've got a really fascinating editing program, and the Fusion system is super intuitive. It doesn't work with layers the way Photoshop and then After Effects do, which gets really odd when you add in that third dimension of the timeline, the sequence of time and the chronology of stuff. Suddenly the connections between certain layers, and trying to connect things back, aren't always just this one's on top of that one. Sometimes the relationships are different over time.
Photoshop's layer model transplanted on top of After Effects, the 2D to 3D or 4D editing, whatever you want to call it when you add in time, made it very, very convoluted. And I love the node system. This is so simple: you just have a chronology, right? You've got your start, you've got your get-the-YouTube node where you put in a YouTube URL, and it looks like you're adding in the price for everything. This is just four sats to pull the information from YouTube, and then the very next thing, so you're also calculating as you go how much this is all going to cost, the more pieces you need. And then get a Whisper transcription, so you're pulling in the open-source Whisper model, and then boom, what's the result out of that? Well, that's literally exactly what I did with, I don't know, 50 lines of code with ChatGPT, but it took me like 40 minutes to iterate. This would take me six seconds, if you just gave me the nodes: okay, get me the media, get me the Whisper, boom, what's the result?
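The node chain being walked through here (fetch the YouTube media at 4 sats, then run Whisper, then show the result) can be modeled as a tiny pipeline that threads data forward while totaling the per-node prices. The prices and node behaviors below are stand-ins, not CASCDR's real modules:

```python
# Toy model of the node-graph idea: each node has a name, a per-run price
# in sats, and a function; running the chain threads the output forward
# and totals the cost. Prices are illustrative.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Node:
    name: str
    price_sats: int
    run: Callable[[Any], Any]

def run_pipeline(nodes, payload):
    total_sats = 0
    for node in nodes:
        total_sats += node.price_sats  # pay-per-call accounting
        payload = node.run(payload)    # feed each node the previous output
    return payload, total_sats

pipeline = [
    Node("get_youtube_media", 4, lambda url: f"audio from {url}"),
    Node("whisper_transcribe", 10, lambda audio: f"transcript of ({audio})"),
]
result, cost = run_pipeline(pipeline, "https://youtu.be/example")
```

This is the whole appeal of the node UI: the user only arranges `Node` entries in order, and the quoted price is just the sum of the chain.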
So what you're basically doing, this is the turn-it-into-a-video-game idea, is that rather than trying to figure out every product or every targeted situation or explicit problem to solve, you're giving people the building blocks to build those solutions for themselves, without having to create some sort of an app or figure out whether it's going to run on their computer and what prerequisites they need to get the thing to operate. Give them building blocks for each of those things. Rather than designing every situation in the game that you think they're going to run into, let them design the game, because the game is the solution to their problem. There's no way for you or any of us to know everybody's potential problem, and as broadly useful as a lot of these things are, we need to think about the scope of everybody's problems. So how better to do it than give them the tools so they can define their problem, figure out which pieces of the puzzle solve it for them, and then suddenly that solution is available for other people; suddenly you can extend that out. I really think we're still in that phase where we're trying desperately to create end products everywhere, and what we need to do instead, which is why I think your thinking is really interesting, is create the building blocks so that people can build the end products themselves, which are the tasks, the individual problems to solve. I just thought that was a really fascinating way to think about it.
And I can already tell that, had I had something like this, a lot of the things I've built would have been just drag and drop, put in chronological order, as opposed to me having to come up with code for which app to use and how to put it together, and then get a bunch of generic errors. I mean, I told you just before this, right? My workflow thing broke because I changed the name of my notes, and now I'm going to have to go back to ChatGPT and fix it. But maybe just kind of expand on that thinking.
[00:46:23] Speaker A: Yeah, I think there's a Metcalfe's-law sort of effect that happens here: the more modules you have, the more opportunities there are and the more likely it is there are value adds, and the more people you have exploring these, the more likely it is you're going to find those opportunities. And I think fundamentally this is how we win. Just flat out, that's how we win. Bottom-up systems win. I just think free people tend to win over slaves, right?
I'd say Vikings row faster than slave ships. Collectively, as a group of free people, as Bitcoiners, I think this is one way we can kind of flex on them. And I know there are a lot of disadvantages with decentralized systems; it sometimes feels like we're fighting with our hands tied behind our backs. But here's an opportunity to take back control and really fight with our strengths against some of these centralizing forces.
[00:47:24] Speaker B: Yeah. Hell yeah. That's awesome, man. What's coming in the near future? What's next in the pipeline, so to speak, for what you're doing with Cascader directly?
[00:47:37] Speaker A: Sure. So probably by the time this comes out, we will have the Cascader Amber package. The feedback we got was: I like being able to go in my private browser and use this thing anonymously, paying Bitcoin; if I only want a one-off project done, I like paying in Bitcoin, but I also want to keep my sats. That's some of the feedback we got. The other side of it is, if we focus only on Bitcoiners, we miss out on a lot of opportunities to bring in people that don't necessarily know about Bitcoin. They get kind of a gateway to these open systems, and maybe they become Bitcoiners; maybe they see that the UX is better in some ways with Bitcoin.
So we have the Cascader Amber project, which is releasing now, and for $10 per month it allows you to use an unlimited number of the services that run on our server. Oh, sweet. So as you said, maybe you don't want to buy ChatGPT, but you would buy it if you had access to umpteen services that you can mix and match.
So that's the first step: opening it up and making it possible for more people, including people who just want to hodl and don't want to spend their bitcoin, to use our service. And then also just building out more services. As you mentioned, we have a bunch of tools we're building for creators; those should be coming in the coming weeks.
And there are a couple of other demographics we've discovered in our experimentation that I'm not ready to announce yet. One thing I want to bring up while I'm on that point: earlier we were talking about how open systems let you move faster. Lightning lets you move faster because you don't have to go through this whole KYC process and all this other stuff. Nostr helped us a lot because we were able to test out this idea: wouldn't it be cool to have a newsletter? But if I want to set up an email server, I have to go through all this BS just to not get flagged as spam.
[00:49:42] Speaker B: Yeah.
[00:49:42] Speaker A: Right, and that's eating into my time. While I was doing this, I was in the Top Builder competition, which is rapid-fire, only two- or three-week sprints, so I needed a fast solution. I said, okay, Nostr DMs. Great. And so there's that. Also, Bitcoiners and other people who embrace open systems tend to be early adopters, so they're a great target audience to try things out with, experiment, learn, and figure out what works: what demographics to focus on, what use cases make sense, and then spin those out into versions facing the non-Bitcoiner crowd. So that's kind of been our philosophy when it comes to development.
[00:50:26] Speaker B: Swan Bitcoin has the full suite of Bitcoin financial services. You can instantly buy, with your bank account or wire transfer, any amount of Bitcoin up to $10 million worth. And you can easily set up what I have been doing for ages, which is an automatic purchase of Bitcoin on a weekly or monthly basis; you just pick your time frame and then automatically withdraw it to your cold storage. They still have free withdrawals to self-custody, which I was sure would be gone by now. But you should always treat any custodian as a point of failure, and luckily you won't have to go anywhere else for all of the information and advice you need on why you should withdraw and how to do it safely, because Swan Bitcoin has all of the resources you need and will regularly remind you. About 80% or more of their customers automatically withdraw their coins, which is an amazing feat if you ask me. Then they also have the Swan IRA, if you have a traditional IRA and you want to get it allocated to Bitcoin. And there's so much more: with Swan Private they have inheritance planning, there are Swan business accounts, and you can even do Bitcoin as part of your employee benefit plans. They have an advisory, and they have the Swan Vault, a multisig service for those who want the benefits of holding the majority of their keys but still want to be able to rely on a trusted institution in the case of an emergency or a disaster. If you haven't started into Bitcoin yet, Swan is an amazing place to begin. Go to swan.com/guy; the link will be right there in the description. They will know that I sent you, and my beautiful face will be right there at the top of the page to greet you. I am a long-time user myself. A huge thank you to Swan for supporting this show, and I definitely recommend you check them out.
Yeah, no, that's awesome. Out of curiosity, you said there are more things geared toward content creators. What are those? And I'll add to that afterward: what's the thing you really feel is important to have, or that you really want, in the featured set of tiny apps or targeted use cases that you're still trying to build out, or that you just really want to build out? Curious on both of those.
[00:52:56] Speaker A: Yeah.
So I'm not at liberty to talk about every single thing we're doing, but in general, it's taking tasks that would take you even five or ten minutes and cutting them down to a few seconds. I think people underestimate how powerful that is. If you value your time, which is the only thing more scarce than Bitcoin, then you should value these tools and be open to using them. I'll give you an example with an existing tool: we have the YouTube summary bot, which saves people I know personally at least 20 minutes per run, if not more, depending on the length of the video.
So there's that, and then there are just other things you need to do to promote your podcast, like creating written content, or even non-written content, creating clips and things like that; making that a lot easier and more intuitive as well. Being able to look at, okay, we just did an hour-and-change podcast, whatever this ends up being, but I want to zoom in on the part where we talked about this particular book. We've got this huge innovation that essentially lowers the bar on search terms; that's really what LLMs are, a different kind of search engine in a lot of ways. So now you can just type that in, and it'll go find the top one or two candidate clips and show them to you, and then you click go. And heck, you could even throw in subtitles, right? Now you're ready to make a YouTube Short. It's things like that.
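The clip-finding flow described here, type a phrase and get the top candidate segments back, can be sketched with timestamped transcript segments and a scoring function. A real version would use an LLM or embeddings; simple word overlap stands in for that below, and the sample segments are invented:

```python
# Toy sketch of clip search over a transcript: score timestamped segments
# against a query and return the best candidates. Word overlap is a crude
# stand-in for LLM/embedding-based retrieval.

def score(segment_text: str, query: str) -> int:
    """Number of query words that appear in the segment."""
    return len(set(segment_text.lower().split()) & set(query.lower().split()))

def top_clips(segments, query, k=2):
    # segments: list of (start_seconds, end_seconds, text)
    ranked = sorted(segments, key=lambda s: score(s[2], query), reverse=True)
    return ranked[:k]

segments = [
    (0, 60, "intro and housekeeping"),
    (60, 180, "we talked about this particular book on sovereignty"),
    (180, 300, "lightning payments and sats"),
]
best = top_clips(segments, "the part about the book", k=1)
```

The returned start/end times are what a clipping tool would cut on before adding subtitles and exporting a Short.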
[00:54:33] Speaker B: No, that's awesome.
[00:54:34] Speaker A: That's huge for your time, but those things are critical for you to run a successful podcast, a successful business at the end of the day. So we're really leaning into automation and productivity gains, real market pulls, versus just trying to build better infra. I think that's important, but our focus and skills are more on the former than the latter.
[00:55:03] Speaker B: Gotcha. Out of curiosity, is there anything you've tried to target, or been thinking about targeting, where you can't find a good model, either open source or even a paid service, to really do the job? I ask this specifically because one of the areas I've been having trouble with is good translation. And part of it is that even transcription, like Whisper transcription, is good, but it's not quite good enough to fully replace a human; it's like 94% of the way there. And there's always something that gets lost in translation. So if the transcription has a bunch of errors, then the translation has even more errors, and a lot of the context or meaning goes with it, especially if you have Bitcoin terms in the show. Like, the transcription thinks that Nostr is "nostir," N-O-S-T-I-R, and then what does that get translated to? There's a bunch of little barriers that add up with the limitations of the AI. What kind of limitations or roadblocks have you run into for being able to provide certain things like that? Or have you just not hit those yet?
[00:56:28] Speaker A: Yeah, maybe I'm a bad person to ask. Sometimes I'll hear a request, do the cost-benefit analysis, and just think, yeah, that's not viable right now; throw it in the idea bank, and hopefully in a few months somebody smarter than me will have a better solution. That's kind of how my philosophy works.
But at the same time, there are people doing some great things. I think Stakwork is doing some really good things around making better transcriptions, from what I understand. That's right. I don't know how open all of those services are, but I know they tend to combine humans and AI, and they use Lightning to pay out the workforce and pay for services. So that's an option for you if you're interested in that.
I've been focused more on the lower-hanging fruit, where somebody wants a job done quick, and less on maybe the higher end, where you have a critical transcription of a business meeting that has got to be perfect.
And I think fundamentally the AI is going to keep getting better to the point where it will be able to do that. But one of the areas where AI struggles is proper nouns, right? Especially in Bitcoin, it's not going to know what a UTXO is; it might mess that up and turn it into something funny. But I think it'll keep getting better with time, and there are really smart people out there exploring ways to improve it. And fundamentally, I'm not always zoomed in to the point where I'm playing with every little nut and bolt; I'm more zoomed out, trying to figure out how to combine these and solve real-world problems. As Steve Jobs said, start with the market need and work backwards.
[00:58:22] Speaker B: Start with the experience, then find the technology, rather than getting excited about a technology and trying to make it fit everywhere, a bunch of square pegs in round holes.
[00:58:36] Speaker A: That's right.
[00:58:38] Speaker B: I'm curious, what are the best open-source tools you've been using? Has most of this been stuff you guys have coded up entirely yourselves, or are you leveraging other self-hosted or open-source tools to simplify the interactions? Obviously you're using Whisper, the open-source model, but what else have you found, and how do you use AI outside of Cascader, just in your daily life? If you're recommending to somebody who's listening, how do you use AI? What are the big things you've found most useful that you can self-host or use very easily?
[00:59:18] Speaker A: One that I find interesting, that I think still might need some time to cook but is a great idea, is the Free GPT project on those Start9 boxes.
That's an awesome idea, and something I've actually played around with. So to answer your question, a lot of what we're doing is homegrown, and a lot of it is proxied out to services like ChatGPT and other services that are not necessarily open source. The Stable Diffusion one happens to be open source, but the problem comes back to: are you going to run your own infra? Are you at scale yet? I think eventually we'll get to where some of these models can run on your own server, but for now the focus has been more on interoperability and providing real, tangible results right now, and getting into that later. So I think over time we'll start integrating things like Mistral, things like Spirit of Satoshi; we've spoken with Svetski about that. Our team's really small, so we have to prioritize and optimize a lot. But fundamentally, these tools are going to keep evolving. I personally probably use ChatGPT more than I'd like to admit, but I do use some of the open-source models, and I run them locally or through a service that I respect more than Sam Altman's big, big.
[01:00:52] Speaker B: Mega Corp.
Yeah. I know from the last time we talked that privacy is a huge concern for you, and I think it's becoming far more relevant. In the last episode of this show, from last week, I didn't even intend for the show to be about this, but I ended up going on an hour-long rant about personal AI, both the promise and the risks of it, how to think about it. And one of the issues I've come across with a number of different people now, and you brought it up when we spoke last week as well, is that privacy concerns are becoming far more clear. Not only in the general sense of watching platforms abuse our privacy, more and more in our face over time as the Internet has become more centralized and we've gotten locked into these platform silos. But there was also an element that I thought was interesting, and I want to get your feedback on this: when we have the opportunity to personify the computer, when the computer is actually talking back at us, there's this slight new social element, where we realize it's watching us. The promise of a personal AI is that it can see everything you do, it can interact with you, it knows that thing you mentioned earlier today, it knows what you ate yesterday, and in that context it can be so unbelievably valuable. But then you realize, when that's being handed off to somebody like OpenAI, that, holy crap, they can literally see everything. And suddenly privacy is far more clearly prevalent as something that needs to be solved. How are you thinking about that in building this out, and in the general model of how to use AI?
[01:03:00] Speaker A: Yeah. For me, it was funny, because I was on the phone with somebody, a very principled person, a colleague I used to work with, much older than me, a very sharp, very liberty-minded guy. And he was asking me, how do you think about sovereignty? And I said, it comes back to doing it yourself as much as you possibly can, and if you need help, going and getting the help; that's also you helping yourself, figuring those things out. And then he said, well, what about privacy? And I said, privacy is a special case of sovereignty: it's taking responsibility for holding your own data yourself. And I think you bring up a great point. We're basically at a fork in the road, and I think there's a good chance we go the wrong way and end up with a situation like China. So we really need to be mindful of this. This is what I said in my talk at Top Builder, and I truly believe it: we need to build these tools to serve liberty and not tyranny, and we need to design these systems correctly. On top of that, as I pointed out earlier, we have an opportunity to build a better product than the tyrannical systems now, before you're in too deep. We need to do that so people aren't lured by the alternative.
We need to show the strengths of these interoperable, bottom-up systems and build them out now, so that before we get to the point where ChatGPT starts censoring people more, starts being more controlling or giving away your data, you've built the groundwork. Then you have a viable alternative, versus waiting for when they inevitably start doing things you don't agree with.
[01:04:44] Speaker B: Yeah, and I think people discount how valuable an opportunity we have. When you're in a disruptive phase, you actually have the opportunity to change the model, and I don't mean the language model, I mean the model of how the product and the service itself is delivered, because you're in this in-between where there isn't enough momentum or network effect built around the new system yet, so you actually have the opportunity to redefine it. Whereas if you're in an era when Google is at its most powerful and still in its nothing-but-growth phase, where everything's being sucked up by Google, trying to build a competing model is just, good luck; you're not going to accomplish anything there. When the dollar is at its peak as reserve currency and everybody's buying Treasuries, introducing a new monetary competitor is just, okay, you're not going to fight that network effect. But in the turmoil, where everything's uncertain, where the dominance of the major player is suddenly in question and in decline, that's when you have the opportunity to say: we're going to introduce a money that can't be stopped and that stays on schedule. Tick tock, next block, every single day, 21 million, no questions asked. And suddenly the conversation is very different. The same thing is happening in AI: it's such a disruptive technology that we have the opportunity to redesign how these things work and how we interact with them. Something like what you're building, I feel, is such a great way to target and build as fast as we can, to always leave that foot out the door and not get trapped this time.
[01:06:41] Speaker A: Yeah. I think there are two things you said there that I wanted to comment on. The first was a good example in the AI space of a similar scenario: shout-out to Zuck for making all these Facebook models free. Right? I mean, he very well could have saved humanity by doing that, ironically, possibly for maybe not even the best motivation. But you've got to tip your hat to him. That's another example of just knowing what time it is, being insightful enough to just flip that switch and say: hey, we're going to release the Kraken, we're going to release these open source models; now you guys have to compete against the whole world. I think that was brilliant, and I've got to tip my hat to them for that. And the other point I want to bring up, and this has been an initiative we've kind of had to deprioritize a little bit that we want to pick up again soon, is that we built basically a Cascader vendor node that you can set up in your StartOS. You literally just put in your ChatGPT keys and your price, and you hit a button, and it'll broadcast over Tor, through your private keys, up through Nostr, and it'll accept requests.
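For the curious, the vendor-node flow Jim describes (the operator supplies an API key and a sats price, and the node accepts requests over Nostr) could be sketched roughly like this. This is a hypothetical illustration, not Cascader's actual code: the Nostr, Tor, and upstream API plumbing is stubbed out in comments, and only the operator-set pricing logic is concrete. All names here are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class VendorConfig:
    api_key: str             # the operator's own ChatGPT/OpenAI key
    sats_per_1k_tokens: int  # price the operator sets

def price_for_request(estimated_tokens: int, cfg: VendorConfig) -> int:
    """Quote a price in sats, rounding up to the next 1k-token block."""
    blocks = -(-estimated_tokens // 1000)  # ceiling division
    return blocks * cfg.sats_per_1k_tokens

def handle_request(event_text: str, cfg: VendorConfig) -> dict:
    # Rough token estimate: about 4 characters per token for English text.
    estimated_tokens = max(1, len(event_text) // 4)
    quote = price_for_request(estimated_tokens, cfg)
    # A real node would now issue a Lightning invoice, wait for payment,
    # call the upstream API with cfg.api_key, and publish the response
    # back to Nostr (over Tor). All of that is stubbed out here.
    return {"estimated_tokens": estimated_tokens, "price_sats": quote}

cfg = VendorConfig(api_key="sk-...", sats_per_1k_tokens=21)
print(handle_request("Summarize this transcript for me, please.", cfg))
```

The ceiling-division pricing is just one plausible choice; a real operator might price per request, per model, or per second of compute instead.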
[01:07:54] Speaker B: Oh my God. Make a million GPT proxies. That's awesome.
[01:07:59] Speaker A: Yeah, that's going to be hard for them to stop. I mean, as I said, this is a temporary truce; I think we're eventually going to have to go a completely different direction. But the beauty of something like that is, okay, now we have that entry point. Well, the Start9 guys have been doing cool stuff with setting up free GPT. So now I can literally just make the request inside the internal bus of the system and use free GPT. Maybe I add extra hardware to make it more beefy, and then I run StartOS on it. And now I can literally be my own GPT, or maybe Uncle Jim it to my family, my community, or anyone else in the world. Really, what got me to think about this is that we need free market incentives in order to ensure that this succeeds. We can't just sit back and let a few people have all the tools.
[01:08:51] Speaker B: Yeah, no, that's awesome. And I want to give you an idea, something that I'm going to build if nobody else does, or at least attempt to, but I just don't have the time to do that shit. And obviously I'm launching another show; we're just so completely wall to wall with everything that we're doing that I'm in a place where I just want to tell everybody about any vaguely interesting idea that I have, to see if it hits somebody else the way that I think it hits.
But one of the things that I saw in Cascader is something I would probably use religiously, because so many of the little tiny apps I've built are actually just a combination of this.
There will probably need to be some degree of fine-tuning for a smaller model to do this. But you could do this entirely self-hosted without much trouble at all, I think.
But you know, FFmpeg, right. I figure you probably have been using it for one of your things.
[01:09:52] Speaker A: We use it, we use it in services.
[01:09:53] Speaker B: Yeah, 100%, right? FFmpeg is, like, 98% of the world's media empires run on this one ridiculous piece of media-encoding terminal software, right? It's command line software. There's a lot of things where you go to a website or whatever, convert GIF to WebP, you know. Well, I've built a bunch of little tiny apps for conversions that I normally have to do, and then every once in a while I'll have to do kind of a more custom conversion and it'll need different parameters. I'll be like, oh God. What I do is I go to ChatGPT, or I go to GPT4All on my computer, and I'll just ask it: what's the command, what are the things I have to do for FFmpeg to get it to this format and this thing, blah blah blah. And I get it, and then I go open a terminal and load up FFmpeg and put in the commands and drop in the file. It's a bunch of steps; it takes like three or four minutes to go through the whole process. And I was thinking: you could just take a small LLM, fine-tune it on a crap ton of examples of getting FFmpeg to do things a certain way, give it the man pages of FFmpeg as kind of a pre-prompt, you know, give it the context of all of the different instructions, and then just have a universal convert-anything-to-anything, where you just write into a text box what kind of format you want. I want this to be a WAV file at 16 kHz, and this is a video. And then it just writes the FFmpeg command, runs it, and spits out the thing. Never again do I have to go to gif-to-webp.com or whatever. It's universal. The interface of FFmpeg is just no longer a terminal window. It's now just a text box.
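As a rough sketch of that "text box instead of a terminal" idea: hand the model FFmpeg's documentation as context, have it reply with a single command, and sanity-check that command before executing anything. Here `query_llm` is a hypothetical stand-in for whatever local model you run (GPT4All or similar), and `FFMPEG_CONTEXT` would hold the real man pages; neither is filled in.

```python
import shlex
import subprocess

FFMPEG_CONTEXT = "..."  # in practice, the FFmpeg man pages fed in as a pre-prompt

def build_prompt(request: str) -> str:
    """Wrap the user's plain-English request with FFmpeg's docs as context."""
    return (
        "You translate media-conversion requests into a single ffmpeg "
        "command. Reply with the command only, no explanation.\n\n"
        f"Reference documentation:\n{FFMPEG_CONTEXT}\n\n"
        f"Request: {request}"
    )

def is_safe_ffmpeg_command(cmd: str) -> bool:
    """Basic guard: accept only a plain ffmpeg invocation, no shell tricks."""
    try:
        parts = shlex.split(cmd)
    except ValueError:
        return False  # unbalanced quotes etc.
    return bool(parts) and parts[0] == "ffmpeg" and not any(
        bad in cmd for bad in (";", "|", "&", "`", "$(")
    )

def convert(request: str) -> None:
    # Hypothetical: query_llm would call your local model with the prompt.
    cmd = query_llm(build_prompt(request))  # noqa: F821 (stand-in, not defined)
    if not is_safe_ffmpeg_command(cmd):
        raise ValueError(f"refusing to run: {cmd!r}")
    subprocess.run(shlex.split(cmd), check=True)
```

The guard matters because the model's output is executed directly; rejecting anything that isn't a bare `ffmpeg ...` invocation is the minimum you'd want before wiring this to a text box.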
[01:11:44] Speaker A: Yeah, I think you brought up a bunch of good points. First is how this open source tool is pervasive in all these media empires and how powerful that is. But also just the fact that, as we discussed earlier, LLMs are just a really great search engine. It's going over that entire corpus of all the things you could do and helping figure out what ought to be done in the moment based on the context. And I think that's going to keep getting better, and yeah, we'll definitely consider tools like that. I think that's a great idea. Thanks for the tip.
[01:12:22] Speaker B: Yeah, I just know without a doubt that if that was just a couple of sats and I could do that, just jump over to Cascader, log in with my Alby wallet, and throw in a couple sats, I would probably use that every day, multiple times a day. I'm constantly converting media and doing things, and HandBrake is even a good example. Another great example is Audible or iTunes: when you're setting up stuff, they always have a list, like a bullet-point list of stipulations for the media.
And it's like, well, I could just highlight this and drop it into the tool, and it's going to take what's useful, like, okay, these are the parameters, stick that into an FFmpeg command. And it would take very minimal fine-tuning, I think, to get a model to do that.
[01:13:13] Speaker A: Yeah. And even just riffing off the top of my head, one thing I would probably do is make it a chat interface like you said, but basically get it to the point where, okay, you give it some information, and then it gives you a confidence score of how likely it is you'll get what you want, and asks you follow-up questions to increase that score. In other words, let's say you put in...
[01:13:33] Speaker B: You're too vague.
[01:13:33] Speaker A: Okay, you know: what is the purpose? Where will you be uploading this? What will you be doing with this? Oh, you want to put it on Audible or whatever platform? Well, these settings are recommended, so I'll take that into account. Is there anything else? No? Okay then: confidence score 95%. Boom, and it does it. That would be the way I'd approach that problem. And I think that's a great idea. I appreciate any and all ideas, no matter where they come from, especially from the great Guy.
[01:14:01] Speaker B: Build it for me.
Hell yeah, man.
Well, shit. Is there anything else specifically you wanted to bring up? I'm really stoked about the direction of this tool. You guys built this really quick too, right?
[01:14:18] Speaker A: Yeah, we started in like October, and it's been on and off. This has been part time; this has not been a full time project. And it was just me until about, I think, maybe mid-February. So we're starting to pick up momentum; we're getting more and more users every day. We have the new Cascader Amber out. So if you guys don't want to spend your sats, or you have somebody in your life that doesn't know how to use Bitcoin yet, we've got you.
You can sign up through our Square checkout, just put in your card. None of that gets touched by us; that's only touched by Square.
And then you can use all of our services and make it your oyster. And once again, we're always open to feedback. We're always open to open source devs that want a way to monetize APIs without having to make a front end or do any real marketing. We can kind of be that hub that provides you with inbound traffic without you having to do as much.
So we're always open to collaboration. And in the long run we want to build tools that serve liberty. We want to create free market incentives to make better AI tools. And I think it's imperative that we do so because if we don't, the consequences can be dire.
[01:15:31] Speaker B: Hell yeah. One other thing I would probably say, just until the logic app is built out: it might be useful to have a little thing like, do you have a module that you wish was here? Because while I'm doing stuff, I'll probably just message you on Keet and be like, dude, you should do this, if I have an FFmpeg idea or something like that. A lot of it might end up being noise, or things that are absurd and impossible to build, but it would probably be really useful to get that feedback from the people using it, on Nostr or on the actual Cascader web front end. And I know I'd probably use that a couple of times too.
[01:16:17] Speaker A: Yeah, the idea bank always grows. And as I said before, we want to combine the genius of all the people in the world and help create something that basically creates value and pursues our morals and our values as Bitcoiners.
[01:16:34] Speaker B: Hell yeah, man. Hell yeah. Well, why don't you tell everybody where they can find the project easily. I'll have links and stuff in the show notes. And then, where they can find you.
[01:16:46] Speaker A: Cool. You can go to cascdr.xyz, that's C A S C D R dot xyz. And you can also find us on Nostr. I'm a no-social-media guy except for Nostr.
You can find us at Cascader, same spelling, C A S C D R. And you can find me on Nostr as well: UncleJim21, and the same handle for GitHub.
However you want to get at me, send me a DM. Always open to new ideas, new collaborators, and other like-minded, liberty-minded people.
[01:17:23] Speaker B: Hell yeah, man. Any last thoughts for the audience? Suggestions, recommendations for them to check out? A book, a tool, whatever. What have you got?
[01:17:33] Speaker A: Yeah, I guess book recommendations. I think I mentioned Range by David Epstein. Another good one is Where Good Ideas Come From by Steven Johnson; that is an excellent book that explores the similarities between biological evolution and technological evolution. There was Actionable Gamification by Yu-kai Chou; I mentioned that earlier. And then, last but not least, I want to give a shout-out to Pleb Lab. Pleb Lab has been extremely helpful to us in our journey. I was pretty new to bitcoin, and they welcomed me with open arms and helped me find other like-minded people to collaborate with on all these little projects, and to learn from. If you come to Pleb Lab, you're going to be hanging with the best; you're going to be learning from the best. You've got Super Testnet. You've got Topher Scott.
[01:18:21] Speaker B: Whoa.
[01:18:22] Speaker A: CTO of BitEscrow. You've got Keyan, yeah, Keyan, CEO of Stacker News, a great and wise person that you can learn from. You've got Nifty in there. You've got Car. You know, all kinds of plebs.
Yeah, it's a crew. Francisco from Yopaki. It's heavy hitting, and it's a great environment to learn. I think a lot of us, especially if you're a nerd, like to sit at home and be in our own thoughts. But there's a lot to be said for meeting in real life and having these serendipitous conversations that spur new ideas. I don't think Cascader would have happened without Pleb Lab, because that's literally where the idea germinated in my head. We decided to go after it at TabConf. We did it.
You know, they still thought I was crazy. They didn't understand this interface I was trying to build. So I built it for them and I showed it to them, and I realized the world needed to see this, and we need to keep building on this vision. So my recommendation is: if you're in Austin, come to Pleb Lab. If you're not in Austin, try to find other people that are like-minded, other bitcoiners, other people that are just trying to build or create things, and collaborate with them. And if you can't get to any of them, hit me up in the DMs. I'm always happy to work with people, especially if you're interested in working on some of these open source modules or any other open source project. You'll find a path.
[01:19:47] Speaker B: Hell yeah. Hell yeah, man. Dude, thank you so much for coming on the show and for exploring this stuff.
As you know, I'm an absolute freaking nerd, and I love all of this stuff, and I love talking it out and exploring the different directions to take this. And I'm really interested in the project. Keep the Keet room open; just shoot me any updates you've got. I'm eager to just kind of test; that's kind of where I am. Everything outside of the show, I'm just beating my face against every tool that comes across the table for me. So let me know, keep the chat open, and thank you. All those links, well, I've already got everything down, so I'll have the links in the show notes. Send me anything else you've got so the audience can find it, and keep building, man. Appreciate it.
[01:20:38] Speaker A: Yeah, thanks for having me, Guy. I really appreciate it; it broke up for me on there a little bit. Really admire your work, and looking forward to seeing you keep going with Bitcoin Audible, AI Unchained, and all your other projects.
[01:20:54] Speaker B: Awesome, awesome. Thanks man. We will close it out.
I hope you guys enjoyed that conversation. Please check out Cascader, especially if you're interested in developing in this direction or building some of these sorts of tools. I really think this is a significant path forward. And in fact, something that I kind of wish I had brought up during this conversation is the idea of microservice provision: starting from the ground up and building out. One of the things that I've noticed, and a great example is a company and a bunch of guys that I really like, CryptoCloaks: they started with one 3D printer, started printing things and selling them to bitcoiners, and they have built out this whole 3D printing manufacturing facility. They make all sorts of stuff. In fact, I've probably got some stuff right around here. Yeah, so this is my ColdCard, right, which I talk about on the show all the time, and this is actually a case printed by the CryptoCloaks guys; they have an artist selection series. But anyway, this isn't even to sell ColdCard cases, like I'm getting an affiliate cut or something; I'm not. I just think it's such an interesting example of how they built out, and how I watched them build out this company over a number of years, and that they're able to kind of micro-manufacture and then expand. And Cascader is such an interesting perspective, because we also have all of these Fiverr and independent contractor and one-off creator and hobbyist roles in the gig economy that's just growing at the edge, and it's becoming more and more prevalent and more and more necessary for everyone to participate in.
And Cascader is such an interesting example of building that out with AI tools and service provision. If you have the network tools, and you have the self-hosted and redirect tools, and the payment tools with Lightning, and all of these different pieces to start pulling things together, and then you have a tool like AI itself that can help you code and build these things faster, suddenly microservice provision is something that we can provide for each other. They could just generate and create some specific targeted thing that I don't have to run a subscription for; I pay only if I choose to, if I end up using their service. And then they can slowly build out, piece by piece, use case by use case, task by task.
So it's just a really interesting thing to think about in the context of the gig economy and the new sort of decentralized economic earning structure that is occurring and growing alongside the career-and-salary system. And I never actually remembered to bring that up, because we'd already gone through so many other things, and it's such a fascinating topic. So maybe we'll even bring Jim back on at some point to dig into this, a couple years down the road or whatever: see how this unfolds, see what direction they take, what they learn from creating this sort of service and providing it in such a unique environment, and building out the node system. But there's just so much fun stuff to unpack.
If you can't tell, I'm a bit of a nerd when it comes to this stuff. I hope you guys really enjoyed that, though. And there's so much to share, or to check out, so I have links in the show notes for all of those things that he discussed: the book recommendations, the website, so you can check out Cascader. And what was really funny is he recommended me a YouTube video, and I actually thought this was great. We were in our Keet room, so we're chatting over Keet, and he said, you should check out this video. And then he says, here's the Cascader summary, and he gave me a bullet-point list that was produced by Cascader, which pulled the video from YouTube, read the transcript, and then gave a breakdown of what the video was. And actually I was like, oh, well, this is actually super useful. And I read through the summary, I was like, oh, okay, and then watched the video. So that's just a really cool way of him not having to go into his own explanation of what the video is about.
He just posted the summary right there in the conversation. So anyway, check it out, links in the show notes. Don't forget our amazing sponsors that make the show possible, Swan Bitcoin and Coinkite. They will be right there, so conveniently, in the show notes, in the description of this video and this episode. So with that, thank you guys so much for listening, and I will catch you on the next episode of AI Unchained. Until then, I am Guy Swan. Take it easy, guys.