AI_031 - How I use AI

August 09, 2024 01:20:42
Bitcoin Audible

Hosted By

Guy Swann

Show Notes

Is AI really all hype and oversold gimmicks, or is it going to change the world and replace everyone with software workers? Or are both of these views a distraction from what's right in front of us? How is AI useful today for very real tasks? In this episode, we'll dive into some major new open-source models and releases, along with exploring the very practical applications of LLMs with the help of Nicholas Carlini's fantastic article on where he finds value today. I also want to challenge the pessimistic views about AI's impact on jobs and share how we can leverage these tools to enhance our work and bring new projects to life. But how can you start using AI effectively in your daily tasks and creative endeavors? Let's find out.

Links to check out

Bitcoin Audible & Guy Swann Links

Check out our awesome sponsors!

Trying to BUY BITCOIN?

Bitcoin Games!

Bitcoin Custodial Multisig

Education & HomeSchooling


Episode Transcript

[00:00:00] So when people say things like LLMs are just hype and that all LLMs provide no tangible value to anyone, it's obvious to me that they are just wrong, because they provide value to me. [00:00:14] This is AI Unchained. [00:00:26] What is up guys? Welcome back to AI Unchained, where we explore the value and use of AI in a sovereign and open source way. This is Guy Swann, your host, and I gotta say, I've actually never had this happen before, which shocks me, but I just recorded a whole intro as if this show were Bitcoin Audible. But it's not. It's AI Unchained. And we are getting back into some very practical stuff. [00:00:58] We have spent five or so episodes going very abstract and esoteric: what does the future hold, how do we think about it, how do we treat it, and all that stuff. I want to come back to right now and talk about AI and what is going on in AI. But more specifically, I want to cut through the hype and talk about just using AI. There's a fantastic little article that we found in our research called How I Use AI. I'm not going to be reading the whole article, because it's long and a lot of it is just dry, practical stuff that he's broken down, but I will link to it if you want to get more details. There's a lot I want to go over from his article, as well as mirror from my own experience, to talk about: what are the practical uses of it? Where is AI not a scam, and what are the best ways it is actually being used for genuine productivity gains? Now real quick, before we jump in: for anybody who was lucky enough to buy the dip, now that Bitcoin has bounced pretty hard and is back up to around $60,000, you're going to do the right thing and withdraw it to your Coldcard hardware wallet. You're not going to leave it on an exchange. Being complacent with this sort of stuff is exactly how you lose it.
Or you run into a problem, or you wait until there's some sort of a serious situation or you really need it, and then it's really stressful and you run into headaches and limitations and KYC and God knows what, and you're going to have a problem. If you just do it now, if you just move it to your cold storage, and if you get a Coldcard hardware wallet, it will be safe, you won't have to think about it, and it will be there when you need it. Getting a solid hardware wallet and moving your Bitcoin to it is the most important thing and the easiest thing that you can do for the greatest benefit in security and autonomy in the Bitcoin space. Do it, and do it with my discount code, Bitcoin Audible, which you will find right in the show notes. Grab a Coldcard. Go to coinkite.com. All right, but I do actually want to read the introduction of the article when we get to it, just because he does a good job of summing up the whole idea, especially if you want to get into the nitty-gritty, the serious breakdown and details and the actual conversations with AI that he goes through in this rather lengthy piece. The introduction, an overview of what he is doing, is a great way to tease out whether or not you actually want to go read the article. [00:03:40] However, before we get into that, I want to go through a number of updates and things that have been going on, to catch us up to speed. We talked about this a little in the last episode, if you had not heard it, but Llama 3.1 has been released, and this is the first time that I know of that a full-sized 405 billion parameter model has been available in an entirely open source fashion from Meta. [00:04:13] And this is partly what Mark Zuckerberg's article that we read is referencing, giving an explanation for why they did this and what the value of it is.
But this is a rival for the top closed source AI models, like Claude and OpenAI's ChatGPT, in general knowledge, multilingual translation, all of the standard LLM functions that you think of. And this one's entirely open source. So any business or company can build on top of it, fine-tune it, offer it as part of their service, or offer it as their AI customer portal assistant or something. Which, honestly, all of those customer support chat things, even real humans sometimes, are just a pain in my butt. The hours I spent yesterday on the phone with Amazon because I can't get access to an old email account... I always have this feeling that the computer isn't going to care and I need a human. But then the humans don't care either. [00:05:28] They don't care. They just kind of throw their hands up as soon as there's any sort of friction, or when they'd have to go to somebody else or actually give any genuine investment into solving the problem, if there isn't an obvious solution in their textbook, bullet-point, this-is-what-you're-allowed-to-do kind of environment. [00:05:51] They are worse than robots, because they make mistakes, and they don't care, and they can't go any further than what the robot in front of them tells them they can do. And I don't mean "can" as far as their authority; I mean "can" as far as their capability. If the computer doesn't give them a course of action, they're lost. [00:06:11] I literally got hung up on, and it was not even... they just went dead silent. They gave me the silent treatment for like 40 seconds, and I could still hear them. I could hear them breathing. I could tell that they were still there; nothing changed about the quality or the background noise of the phone call.
And when it got really complicated and I was like, how can there not be some other way for me to verify my account? Then they just hung up. So more and more, I'm a little less worried about AI as the customer assistant everywhere, because I'm just generally not happy with customer support a lot of the time. [00:06:47] Maybe AI doesn't do a very good job, but honestly, humans don't do a very good job a lot of the time either, and it's extremely difficult to find someone who actually cares or who would actually come up with some sort of a creative or next-level solution. Because after I got hung up on, I ended up being sent back and forth from department to department, because nobody wanted to deal with my problem; it was outside of the normal problems. [00:07:13] But going back to Llama 3.1, this is one of those models where it would be good enough, especially if you were able to train it on all of that data. [00:07:29] Well, think about it. All of the customer support calls that they've had: you know they record every single one, right? [00:07:36] Do transcripts and a rundown of everything, the instructions that the customer support person had to follow on the computer, and when there were edge cases where a customer support person actually went out of their way to help solve a problem, what did they do, how did they solve it, and who did they contact? [00:07:58] You could use all of this data to fine-tune something like Llama 3.1 and use it as a vectorized knowledge base to call from. And now you actually have a really good customer support assistant that's very knowledgeable, and it can be backed up.
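As a rough sketch of what that data prep might look like, here is one way to turn resolved support calls into a fine-tuning dataset. The record fields are hypothetical, and the instruction/input/output JSONL shape is just one common fine-tuning format, not anything Meta prescribes:

```python
import json

def build_finetune_records(calls):
    """Turn resolved support calls into instruction-tuning records.

    Each call is a dict with hypothetical fields: 'transcript' (what was
    said), 'procedure' (the script the agent was following), and
    'resolution' (how an edge case actually got solved).
    """
    records = []
    for call in calls:
        if not call.get("resolution"):
            continue  # skip calls that never reached a fix
        records.append({
            "instruction": ("You are a support assistant. Follow procedure "
                            "where it applies; otherwise reason from past "
                            "resolutions."),
            "input": (f"Procedure: {call['procedure']}\n"
                      f"Customer transcript: {call['transcript']}"),
            "output": call["resolution"],
        })
    return records

def to_jsonl(records):
    """Serialize one JSON object per line, the format most tuners accept."""
    return "\n".join(json.dumps(r) for r in records)
```

From there you would point whatever fine-tuning stack you use at the JSONL file; the filtering step matters as much as the format, since unresolved calls teach the model nothing.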
You can get rid of the customer support agents who don't care and can't function any better than a computer, and keep the ones who do care, who are motivated and actually decent at coming up with a different way around the problem, or at locating who can help in the cases where they can't. Put them behind the AI so that they deal with nothing but edge cases, all of the odd problems, and 90% of the workload is offloaded. Because I end up solving a lot of my problems with LLMs like Claude anyway, when I can't find it in the FAQs or the documentation, or it's too hard to call up customer support, or customer support wasn't even able to help me. It's really difficult, especially when you have a really unique problem, or you're using some beta software, or you're trying to troubleshoot a motherboard with somebody while building a custom computer. Sometimes it's just incredibly difficult to find the person in customer service who even has the knowledge and the creative mindset to help you troubleshoot that, and then to begin the conversation at the level, at the place, where you are in your troubleshooting process. I can't tell you how many times I've explained: I did this, this, and this, and I tried the first step a completely different way to try something else, blah blah blah, and none of it is working, so I've narrowed it down to these three possible routes, these three possible things that could be going wrong. But I don't know how to eliminate any of them, because I don't know it well enough, or I don't know what these indicator lights mean, et cetera. And then I'll just get the most generic response: can you do step one? And I'm like, I just told you what the outcome of step one was, and that I did it multiple different ways. You just gave me the most naive version of it.
And I can get it if someone is trying to be thorough and just saying, listen, let's do it again and start over. I respect that, if somebody just wants to go through the process. But when I get that response specifically because I know they couldn't listen to my 30 seconds of explanation as to where I am in the process, and that this is literally just step one on their computer screen, that's a very different problem. [00:10:50] And you also kind of get a feeling, when you're talking to them, of whether they're doing one or the other. And speaking of using AI, that's a really big one where I do just go to the AI, and the AI always meets me where I have explained the situation from. Sometimes I will tell it, let's go back to square one and troubleshoot this as if I have no idea what I'm doing: what's the most basic thing I should be starting with? And at least once, maybe twice, that has actually helped, by going back to the basics and verifying each piece of the puzzle. I usually go there when I have tried everything advanced, everything at the level I was attacking the problem from, and could not find any sort of a solution, or just ran into roadblock after roadblock after roadblock and couldn't actually make any progress. That's when I go back to the drawing board and ask the AI: what's the first thing I should do to troubleshoot this? And then I move forward from there. So it's usually when I'm about to give up that I go back to that. And it has helped; it has gotten me through more than one problem. It's fantastic for troubleshooting.
But Llama 3.1, having that open source, having a 405 billion parameter model, a really big, enterprise-level model to run, is exactly the kind of thing that will really explode the number of uses and the ways these can be used by a bunch of different companies and small businesses and smaller enterprises that want access to this sort of thing and want to be able to utilize these tools, but do not want to be plugging all of their very private and/or proprietary data into Claude, into Anthropic or OpenAI. And honestly, I would be shocked to find proof in some way that OpenAI is not selling our information, the results or the weights of all of the things that we are doing and the data and documents that we are feeding it, and that we have any privacy at all. I just don't believe it. And I heard a lot of stuff going into this, about a year ago, that the finances for OpenAI do not work out, based on how they sell their subscription, how it's used, and how much it costs every single time somebody requests inference from them. Now granted, all of that stuff can change really, really fast, and they have an extraordinary number of users, so I don't know, Microsoft could be using this as a loss leader. But it just seems obvious to me that they would go to selling information, and I can't imagine they have any moral scruples or any serious philosophy about privacy or security or autonomy of the individual. Sam Altman is literally the guy who made Worldcoin, with that stupid orb they went around with, giving people money for scanning all of their biometric data into the machine. And they say it's safe and they're not doing anything with it, but it's completely closed source and it's a black box and you have no idea. Which is just the stupidest, most obnoxious thing.
[00:14:11] I can't imagine anything more idiotic than saying, yeah, it's secure, just trust us bro, scan your eyeballs. And if that isn't enough to get you to not trust Sam Altman and any BS that he spouts, they added the former head of the NSA to their board. So yeah, I don't believe that shit for a second. [00:14:33] In a rather crazy turn of events that I did not expect, Mark Zuckerberg may give them a run for their money with the open source Llama 3.1 and any additional models that they have in the works. And I also just loved the framing of these AI weights: the incredible amount of capital going into them, and the fact that one of the best things about them is the variants, the ability to fine-tune and work with all of these models, fork them, and use them to create smaller models, etc. It's an ecosystem and a technology that even seems to beg to be open. [00:15:22] He compares it to Linux infrastructure; this is why Linux won, and he thinks open source is ultimately going to win again because of the same kind of dynamic. He sees a parallel with how Linux developed during the early days of the Internet. [00:15:39] Speaking of open source: Gemma 2. There is a 2 billion parameter model, a very efficient, very compact, very small model that Google released, Gemma 2 2B. I'll have the links to all of this in the show notes, by the way. This would be a great sort of thing to have as a companion. Of course, you could also use the 8 billion parameter Llama model, but 8 billion is pretty steep for a lot of generic purposes. You don't want to go too heavy on computation for no reason and hog up a lot of RAM when you're just doing search or just translating something in a text field. You really want to go with a smaller, more efficient, more concise model.
And that particularly goes for when you are applying a lot of different things at once. One of the things I've noticed, even with my Linux machine that is built for AI (and I probably am going to have to get another GPU partly because of this), is that I can fill up my RAM very quickly, because I want to run and utilize a bunch of models, usually by API or through an app, connecting to them from my mobile or my desktop or my MacBook Pro. [00:17:00] I'm trying to use a bunch of different things. I'm trying to use Florence to give good captions and tagging and all of this stuff. I'm trying to use an image generator. I'm trying to use the face swap thing for a meme. I have Llama 3, the 30 billion parameter model, running on that machine so I have a chatbot. And after I boot up three or four of these, I start getting errors and then stuff crashes, because I'm hogging tons and tons of resources just because I might need these things; they're all running and they're all loaded. And this, I think, especially from the developer side, is where knowing when to be efficient, when to load and/or shut down a model, or what size of model to load depending on your use case, [00:17:57] could be a really important piece of building out good applications with AI, especially with a lot of these open source tools. So it's really cool to see both Apple release a bunch of very small open source models and also Google with Gemma 2's 2 billion parameter model; that's probably a great place to leverage something like that. [00:18:22] Short version: I think you should have a lot of small models in your back pocket, because there are a lot of use cases where it's good to have an LLM prior to, or in the middle of, some other process, where the other process is the main thing that's happening.
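The load/unload bookkeeping described above can be sketched as a small manager that keeps loaded models under a memory budget and evicts the least recently used one when a new load would overflow. The `loader` here is a stand-in for whatever actually loads weights (llama.cpp, Ollama, transformers, etc.), and the gigabyte sizes are made up:

```python
from collections import OrderedDict

class ModelManager:
    """Keep loaded models under a memory budget, evicting the
    least-recently-used one when a new load would overflow it.

    `loader` is a stand-in for whatever actually loads weights;
    here it just returns a placeholder handle.
    """
    def __init__(self, budget_gb, loader=lambda name: f"<{name} weights>"):
        self.budget_gb = budget_gb
        self.loader = loader
        self.loaded = OrderedDict()  # name -> (size_gb, handle)

    def get(self, name, size_gb):
        if name in self.loaded:
            self.loaded.move_to_end(name)  # mark as recently used
            return self.loaded[name][1]
        # evict least-recently-used models until the new one fits
        while self.used_gb() + size_gb > self.budget_gb and self.loaded:
            self.loaded.popitem(last=False)
        handle = self.loader(name)
        self.loaded[name] = (size_gb, handle)
        return handle

    def used_gb(self):
        return sum(size for size, _ in self.loaded.values())
```

Repeated `get` calls for the same model return the cached handle; a bigger request pushes out whichever model has sat idle longest, which is one reasonable policy among several.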
But you want to be able to use it to interpret what's going on or change the format of something. A good example would be proposing a possible file name: read the OCR from the document, note the type of file it is, the date, whatever. Pull all the metadata and all of the contents from the document, and then you could get the LLM to propose a possible title for it to make it easily searchable, with some other context. Maybe you write two pages on, this is how I think about my file system, and this is how I organize and search for stuff. [00:19:27] Can you use this? Well, put that into the LLM and use that human explanation to organize your files. Another one: if you use something like DeepDanbooru, or, what's the one, there's a Stable Diffusion one, and then Florence-2, which is a Microsoft model. There's a bunch of these captioning and tagging models, and some of them will literally just write out a really elaborate caption trying to describe what is happening in the scene. Sometimes it gets it wrong, sometimes it gets it right, and sometimes it'll have some oddities in it, but it'll generally understand what's going on in the scene. Then you have a series of tags from a different model that does that, and then Florence-2 writes it in a completely different way. So there's a bunch of different metadata about what an image is or what is happening in some sort of a video shot. [00:20:24] Run each piece through three different models, and the three models give three completely different outputs, in different formats that don't really have a lot to do with each other; each builds its own system. Well, then you could have some sort of standard formatting with an LLM. Again, give it a prompt in which you detail out exactly how.
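That kind of LLM glue, whether naming a file or standardizing tags, mostly comes down to building one prompt from your own conventions plus the raw inputs. A minimal sketch (the labels, field contents, and conventions text here are all made up for illustration; the resulting string would go to whatever small local model you run):

```python
def glue_prompt(task, conventions, inputs):
    """Build a prompt asking a small LLM to reformat messy inputs
    according to the user's own written conventions.

    `task` says what to produce (a file name, a cleaned-up tag list);
    `conventions` is the human explanation of how you organize things;
    `inputs` maps a source label (OCR text, a model's caption) to its
    raw output.
    """
    sources = "\n".join(f"- {label}: {text}" for label, text in inputs.items())
    return (
        f"Task: {task}\n"
        f"My conventions:\n{conventions}\n"
        f"Raw inputs:\n{sources}\n"
        "Reply with only the requested output, nothing else."
    )
```

For the tag clean-up case, you would pass each captioning model's output as a separate entry in `inputs` and describe your preferred tagging format in `conventions`; the builder itself doesn't change.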
And this can be automated. Detail out exactly what you actually want the tagging system to be like, and maybe it's different from all three of the models you just used, but the LLM can take all three of the results and basically clean them up, reformat them, and put them in the way that makes sense for your context or the way you're trying to use it. And then everything can have standard formatting, be in Markdown or whatever it is that becomes useful to your application or your use case. And on that note, since I brought up image diffusion and image recognition stuff: Flux 1 has just been released, and there's also a Flux 1 ControlNet now. So for anybody who does any image generation, Flux is now... well, I haven't gotten to play with it much yet. [00:21:49] But everything I've read about it so far makes it seem like this is the successor to Stable Diffusion, one that is already surpassing Stable Diffusion. So you may remember that we went over this a little bit on the show before, that Stable Diffusion 2 was far and away the most popular open source image diffusion model out there. [00:22:10] And everybody was forking it and fine-tuning it, and there's LoRAs and there's ControlNet, there's everything that you can think of with Stable Diffusion 2. Then Stable Diffusion 3 came out. And by the way, with Stable Diffusion 2, I thought this was a good model; maybe people were just violating the license and the money just wasn't there, I don't know what it is. It kind of seems like the company was just run very, very poorly. There was a lot of internal strife, a lot of people being fired, a lot of power changing hands, all sorts of internal strife in the company, and I think that's really what killed it. Something like Stable Diffusion was incredibly popular; I think it could have worked if they had the right people at the helm.
But Stable Diffusion 2 was under a license where it was free and open source, and you could do whatever you wanted with it except use it commercially. And to get a commercial license, to use it on your website or whatever, you had to register with Stability AI, the company (I can't remember exactly), and it cost like $200 a year or something, which isn't even bad, to the point that I was using Stable Diffusion so much, especially for little logos and little things that I create for backgrounds, just a ton of different things. Stable Diffusion was coming in handy for little projects, and it was also just fun to explore. But because of that, I was using it a lot, and I hate going to other things and constantly paying to generate images, especially if I'm exploring and I just end up generating like a thousand images. One great example: I was trying to fix a hand in an AI-generated image from a different tool. [00:24:14] So I was doing infill, and in every single one of them, the hands just sucked. [00:24:23] So I just gave it the max in the Stable Diffusion web UI, which I think is like 100 images. [00:24:32] And I just generated 100 images. I just was like, okay, just go, and then I'll go back and do other stuff. I'm recording, because I'm using my Linux machine, so I go back to my Mac Pro and I'm recording an episode or doing some other project, and I let it run for 30 minutes. I come back to it and check all the images; it doesn't take that long for 100. I scroll through them, and if there's one that's good, boom, I'll use it. And if there's not, and in this case there wasn't, I keep going. I literally generated like 500 images. I just kept doing the max over and over again and waiting until I got a good result.
So it's easy to use it a lot, especially if you have a weaker model like that. And that's why a commercial, pay-per-image setup doesn't make sense when things don't come out fantastic every single time, because you're paying for every single one of them. The question isn't how much it costs to generate an image. The question is how much it costs to generate the image that you want, or to generate the pieces that allow you to composite the image you're trying to get. [00:25:35] And all of them, for a long time, just weren't super good at it. The reason Stable Diffusion 2 got incredibly good is because people were forking it and making LoRAs for specific things. So if you wanted it to do a specific thing, a certain style of image or a certain style of drawing or character design or even locations, you could get someone else's trained version of that and use it for that use case. And suddenly your capability of getting the results you were looking for skyrockets, like 10x, easy. [00:26:16] Then Stable Diffusion 3 came out, and they basically killed themselves. They dropped their license shortly after the model actually came out, [00:26:28] and the license was insane. The license suggested that if you ever use Stable Diffusion 3 outputs to train a different model or a LoRA or a fine-tune, anything like that, and you stop paying Stability AI, then you have to delete them all or you're in violation; it's considered a derivative of Stable Diffusion if you use the output. I read through a lot of it, and I used AI to sort through and summarize the main points, but it was bonkers. The license made no sense, [00:27:10] almost like they were trying to commit suicide. But it's probably just really arrogant people.
But it was incomprehensible, especially in the context of an open source model. So at that point, for me, and I think for a lot of people, Stability AI just died. There was no Stable Diffusion. [00:27:27] And that's basically that. [00:27:29] Flux 1, however (there are a number of different versions of this already, it seems), [00:27:37] is a model people are saying is actually better than Midjourney. And this is an open source model. [00:27:45] So this is really interesting. Like I said, there's not an explosion of stuff for it yet, because it is very new, but this may be the Stable Diffusion replacement, judging from a cursory glance at what a lot of people are generating and what you can do with this model. It's also a very big model. I think one of the main ones is like 16 billion parameters, which is like 20 gigs or something; I can't remember exactly. [00:28:15] It's a big model though, so it's really useful to have a hefty GPU. [00:28:21] It's really exciting to see kind of a new era of that come out. But it also kind of sucks, because the LoRAs and fine-tunes and embeddings and all of that stuff won't work anymore. You basically have to rebuild or recreate all of those things. They're based on the model they were built from, so they make no sense on a completely different set of model weights. So for any of you who are out there training a bunch of models and have collected a bunch of LoRAs or whatever around a model, none of that is going to carry over. You have to start your whole collection over. I haven't tried much of anything with it yet; I just used it in the web browser. But one thing I haven't tried that I'm excited to start trying out, because this is something I did use Midjourney for when Stable Diffusion didn't give me the results I wanted, is icons and logos.
[00:29:16] And this is because I make a bunch of little single-use applications with AI a lot. So I'm really excited to try this one out, just because I've heard such fantastic things about the quality of the output. [00:29:30] And on the video generation front, Runway's Gen 3 model for video generation is really, really good. And it is now available for everyone; it's no longer in a private test mode, it is now generally available for anyone who uses that service. So that's another thing to check out for anybody who's trying to tell stories or do animations or memes or anything like that. And another one that I've been playing around with a lot recently is Live Portrait. [00:30:07] The easiest way to get this (and I did install it separately) is still with Pinokio. [00:30:16] That's P-I-N-O-K-I-O, pinokio.computer. I know I've talked about this a lot, but still, if you're looking to play around with a bunch of AI tools, this is the thing to get. And I would say, if you end up really solidly using one, where you want to use it in a more industrial capacity, then install it as a standalone, and/or try to find a better, more efficiently coded version of it, something written in C or Rust or the like. But pinokio.computer is the place to explore these. Now, Live Portrait recently had an update. First it was: take a screenshot or a picture of someone, then take a video of someone saying something or making a facial expression, and it was fantastic at putting that onto the person's face to make it look like they were saying that thing. For an open source tool, that's notable; most of the open source ones were really limited, iffy at that job. This one is seriously solid. [00:31:27] But they also just released their newest version, which does face tracking in a video.
[00:31:32] And you can even change a video of someone's face to make it look like they're saying anything you want them to say. And then of course you have a tool like ElevenLabs that can basically replicate their voice. You have a fantastic tool for meme crafting right there. [00:31:52] So those are all the most recent tools I've been using or exploring for image generation, video generation, and media and content creation stuff. So that was Runway ML, which now has Gen 3 out, a potential Sora competitor. [00:32:11] It still wasn't as good as the examples Sora showed; it's a fantastic model, but they also still don't have Sora released, so Gen 3 is orders of magnitude better than something you can't use. [00:32:28] Then Live Portrait, which can change or put whatever expression you want on somebody's face, video and image. [00:32:38] Then there is Flux, the new model that may be the Stable Diffusion replacement and a rival to closed source models like Midjourney for image generation. [00:32:50] And then another couple of fantastic LLM releases: Llama 3.1 now has a full 405 billion parameter model, and Gemma 2 has a small 2 billion parameter model, as well as the Apple open source ones; they have a bunch of small models as well. I haven't really gotten to explore those in practice, so I'm really curious how well they will play out or where they will fit into my workflows and stuff. But when I have something to report back, I'll give you an update and let you know how I think they could be useful. That should cover the new models and new tools I've been exploring recently.
[00:33:32] So let me talk about using AI. A lot of these things that I've just referenced, I've loosely talked about how I use them, but specifically LLMs. Because Alex Fetzky, we were actually in a chat, and he said something that kind of cracked me up. And I think there's a good reason to feel this way, especially with the amount of promise and hype around LLMs: basically that they're going to be general purpose, you can get them to do whatever you want, they're going to be agentic, and all of this stuff. And way down the line, after an unbelievable amount of infrastructure is laid down for this, a lot of it is probably going to be true. But I think we're spending too much time thinking about what could or may or will be down the road, this perfect future, and failing so hard at realizing what it's useful for right now, at specifically targeting and discussing AI in the context of its current trade-offs and limitations: what is it useful for right this moment? [00:34:46] Basically every service and AI company and device, almost every single thing that I have seen, has sold a gimmick rather than a product. [00:35:01] So this article, written by Nicholas Carlini and titled How I Use AI, does a good job of just cutting through the noise and saying: this is just what I do right now, what I use it for, and how it has helped me be productive. And it mirrors so many of my own experiences that I want to lean on it, because he did such a good job of breaking down each piece. I've had experiences with almost every single one of these, save for the fact that I'm not a developer. So he uses it far more extensively, and for more specific development things that just wouldn't even occur to me or that I don't go to. I never go deep in a development project.
I usually build something very small, though we are starting into something much bigger with devs who can't code. But we are trying to do it very modularly, so that we only have to build small pieces and then know how they fit together. So let me just start by reading the introduction and giving his kind of summary of everything that he breaks down in this very lengthy piece. So any details or specifics you want to get on some of these, I promise you, you can get them. Just go to the link and you will find it. He's even got a really handy little clickable index for all the different things that he has used it for, and examples of their exact conversations. So trust me, this is really easy to navigate, and it's really easy to just kind of skip the stuff that's not important if you want to actually explore. But with that, let's go ahead and just read the beginning of the article. [00:36:37] I don't think that AI models, by which I mean large language models, are overhyped. Yes, it's true that any new technology will attract the grifters. And it is definitely true that many companies like to say they are using AI in the same way they previously said they were powered by the blockchain, as we've seen again and again and again and again. And it's also the case that we may be in a bubble. The Internet was a bubble that burst in 2000, but the Internet applications we now have are what was previously the stuff of literal science fiction. But the reason I think that the recent advances we've made aren't just hype is that over the past year I have spent at least a few hours every week interacting with various large language models, and have been consistently impressed by their ability to solve increasingly difficult tasks that I give them. And as a result of this, I would say I'm at least 50% faster at writing code for both my research projects and my side projects as a result of these models. Most of the people online
I find who talk about LLM utility are either wildly optimistic and claim all jobs will be automated within three years, or wildly pessimistic and say they have contributed nothing and never will. [00:37:59] So in this post I just want to try and ground the conversation. I'm not going to make any arguments about what the future holds. I just want to provide a list of 50 conversations that I, a programmer and research scientist studying machine learning, have had with different large language models to meaningfully improve my ability to perform research and help me work on random coding side projects. [00:38:25] Among these: one, building entire web apps with technology I've never used before; two, teaching me how to use various frameworks, having never previously used them; three, converting dozens of programs to C or Rust to improve performance 10 to 100x; four, trimming down large code bases to significantly simplify the project; five, writing the initial experiment code for nearly every research paper I've written in the last year; six, automating nearly every monotonous task or one-off script; eight, almost entirely replaced web searches for helping me set up and configure new packages or projects; and nine, about 50% replaced web searches for helping me debug error messages. [00:39:20] If I were to categorize these examples into two broad categories, they would be helping me learn and automating boring tasks. [00:39:32] Helping me learn is obviously important because it means that I can now do things I previously would have found challenging. But automating boring tasks is, to me, actually equally important, because it lets me focus on what I do best and solve the hard problems. [00:39:49] Most importantly, these examples are real ways I've used LLMs to help me. They're not designed to showcase some impressive capability. They come from my need to get actual work done.
This means the examples aren't glamorous, but a large fraction of the work I do every day isn't, and the LLMs that are available to me today let me automate away almost all of that work. [00:40:15] My hope in this post is to literally exhaust you with example after example of how I've concretely used LLMs to improve my productivity over the past year. Just know that after you've had enough of the examples I provided, I've only showed you less than 2% of the cases I've used LLMs to help me. So when you get exhausted, and you will, please feel free to just skip along in the navigation menu at the left, which, as I read, an LLM wrote for this post because it had gotten so long. So that's just a really fun little thing: he had an LLM write him the links to each of the sections on the side of the page so that you could skip through all of his many, many examples and not have to read the whole thing. But I will say he's 100% right. You do very much get exhausted. And that is why I did not make this a read. [00:41:09] Now, he goes into an entire section about nuance, just caveating the fact that he's not defending that, you know, LLMs are going to replace coders, or saying anything about the future or anything like that. But then he also feels the need to give concrete examples, because it's not about what LLMs are going to do. And there are a lot of people who actually say LLMs are just a total scam, that there's nothing to them at all and they don't do anything useful. And the simple reality is that he's using them right now at this very moment, and this is exactly what they are useful for. So who cares? [00:41:49] Doesn't matter. I'm not going to speculate about what's in the future. But he also feels the need to explain how, in fact, they are useful, because he's using them. Then he has a section about blockchain and about how Bitcoin has no purpose, which is also quite funny.
I encourage you to go look, if you're a bitcoiner, and you'll understand that argument. I'm not going to go into that argument here at all, because this is AI Unchained. But I could probably do a whole episode on his one little comment about bitcoin. So the guy certainly doesn't have a background in economics or understand how large monetary systems and economies work. This guy should really listen to my other show, Bitcoin Audible. [00:42:30] Now, one of the things that he starts out with, which I thought was pretty funny, is that he actually built a complete application, which was actually an HTML quiz designed to test how well people could predict the ability of GPT-4 to solve a handful of tasks. So basically it was a quiz that apparently got really popular. [00:42:53] I had not seen anything about this until I was reading the article, but it was to basically help people understand whether or not GPT-4, or the LLM specifically, is going to work with their task, like, will it be useful for them? And what's funny is he built the entire thing with GPT-4, or rather, GPT-4 built it for him. And he's actually got this entire conversation shown, which is incredibly long. And it involves a bunch of different types of interactions. Some where he asks for an entire block of code, or he asks for the entire HTML application. [00:43:35] Some where he is just making one tiny little change: can you take this one and change the color? Instead of saying this, can you say this? [00:43:45] And then cases where he just, you know, copy and pasted errors and got it to make corrections based on what that error was. And then even asking for just simple one-off answers for how he does something, as he's still putting this thing together, and to change something manually throughout the process. [00:44:06] And then he's got a line in here that I thought was really important to understand.
[00:44:12] He says: in general, the reason this works is because large language models are great at solving things that people have solved before. [00:44:22] And 99% of this quiz was just some basic HTML with a Python web server backend that anyone else in the world could have written. [00:44:33] The reason why this quiz was interesting, and people liked it, is not because of the technology behind it, but because of the content of the quiz. And so automating away all of the boring parts made it really easy for me to make this. Now, another important line right here that I want to highlight before I read it. [00:44:53] In fact, I can confidently say that I probably just wouldn't have made this quiz if I didn't have access to a language model to help me, because I was uninterested in spending time writing the entire web application from scratch. And I am someone who knows how to program. [00:45:16] This is something that I have tried to get through to people. [00:45:20] And I think it's one of the best arguments as well for art and storytelling and media creation and stuff. I still think there will be jobs for the hard stuff. It just changes the realm, or the plateau, in which creativity and innovation and the human actually play their role. And I think extending it out to "humans aren't needed anymore," when we're still a long way from that actual future, is basically giving up the fight before it's even started. Humans are not replaced. Humans have not been replaced. Basically, menial tasks and problems that have already been solved, and images that have already been generated, can be regenerated. But the most critical piece of the puzzle is that he said he would confidently say that the quiz wouldn't have existed, because it wasn't worth the time and effort to build a web application from scratch. And he can build a web application from scratch.
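For the non-developers, the kind of "basic HTML with a Python web server backend" he's quoting there can be sketched in a few lines of the standard library. To be clear, this is an assumed illustration, not Carlini's actual GPT-4-written quiz; the questions are made-up placeholders:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import html

# Placeholder questions -- the real quiz asked people to predict
# whether GPT-4 could solve various tasks; these are invented.
QUESTIONS = [
    "Can the model write a working web server from a one-line prompt?",
    "Can the model find the bug in a 200-line C program?",
]

def render_quiz_page(questions: list[str]) -> str:
    """Build the whole quiz page as one HTML string."""
    items = "\n".join(
        f"<li>{html.escape(q)} <button>Yes</button> <button>No</button></li>"
        for q in questions
    )
    return (
        "<!DOCTYPE html><html><body>"
        "<h1>Will the model solve it?</h1>"
        f"<ol>{items}</ol>"
        "</body></html>"
    )

class QuizHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the same rendered page for every request.
        body = render_quiz_page(QUESTIONS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Running it locally would just be:
#   HTTPServer(("", 8000), QuizHandler).serve_forever()
```

Which is exactly his point: this boilerplate is the solved-a-thousand-times part an LLM can hand you; the quiz content is the part only the human brings.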
[00:46:30] This is the thing about media and storytelling and content generation, all of this stuff: it will enable projects that never would have happened, because they become feasible, because it becomes economically viable for someone to scratch that itch and say, maybe I can do that. [00:46:50] And one of the perfect examples in my own life, in my own situation, is that I've always wanted to do animation. I have had a number of stories that I thought would be great as an animated story, but the idea of an animation seemed so horrifically daunting to me. And in addition, it's just incredibly all-encompassing of time and energy and thought for an incredibly extended amount of time if you're working with a small team. Or it's just extraordinarily expensive, and it's expensive even if you have a small team. But it was just completely infeasible to tell a lot of these stories, because I don't want to dedicate my life to animation. I want to dedicate my life to a whole bunch of things: exploring ideas and exploring tools and exploring freedom technology and doing podcasting and reading books and all of this stuff. And part of that is I want to tell stories. [00:47:52] But if a feature film or an animated project is always $2 million and five years' worth of, like, no-excuses, no-side-projects, total and complete focus, then I might never complete a large feature film project. It might just never seem feasible in my situation. [00:48:18] And unfortunately, for a good large project, as much as I value my own skills, and I think I know how to tell a story well, I'm also rusty, from a very practical standpoint, at pointing the camera and knowing what to do with it and knowing exactly what shot to get. I'm kind of on-the-fly in a lot of my projects that I still do from time to time. I try to keep a little bit familiar with it.
And I do stuff as side projects, and I film specifically with my phone and stuff with that mindset. And I do good film and good pictures with the phone. Most people think you can't do that. You can. It doesn't take a lot of knowledge to kind of bump it up, to make it feel next-level for a picture, or for taking good video, even with a phone. But I am extremely rusty. There are projects that I would be, like, next-level passionate about, but I would not be confident to take them on unless I could get a team of people that I knew were going to just be incredible at it. But for me to do a two-hour feature film without a ton of experimentation, and, you know, having to go in and shoot everything that I need, that's just not going to work. At the end of it, I might produce something that's half decent, but I'm going to make a lot of simple errors. I'm going to make a lot of probably novice mistakes in the actual production, and not foresee something that I need somewhere, without someone who actually has skill. And not even necessarily a ton of skill, yes, obviously skill, but a ton of experience. [00:50:03] Just the "this is what you do in this situation, because I've had it happen to me five times, ten times before" sort of knowledge. There's simply a level of instinct and base knowledge that is not something you can really write down, per se. Or even if you do write it down, it doesn't mean to the reader what it meant to the person writing it down, to the person with the actual skill. There is this instinctual knowledge that you get by doing something for years or decades that you just can't really pick up, you know, without actually doing it, without actually picking up a camera, putting the lighting in the scene, testing stuff, and knowing what it's going to be. Because everything is different. You can't just know what it's all going to be, because your situation is different. The clouds are different.
When you're setting up, you're pointing in different directions depending on where you are in your scene. You can't possibly have all of that knowledge at just kind of a database level: if you're pointing this way, everything's this way; if the fence behind you is orange instead of blue, you're gonna want to change the tint on the shirt, blah, blah, blah. You know, with the number of things that you end up having to take account of, you just need the experience to do it. Which means that if I'm ever going to get to the point of making a feature film, I need to be able to make short films, small films. It's going to have to scale up, and that's how you learn and get experience with anything. But if those small projects and medium-sized projects are going to take up everything I have and be 10 times the financial risk, then that's not something that I can really do. It only becomes economical when I realize that I don't have to hire 10 animators. [00:51:49] And that's exactly true now that AI is around. It's going to make hundreds, thousands, tens of thousands, maybe even millions of projects suddenly come into existence. Suddenly. Even as someone who can pick up a camera and film a project, and even who generally wants to, this is still something that I'm very passionate about. I still know there are tons of projects that would just never happen without certain tools that I know I can actually utilize today, and that may actually get even better and better, you know, within a year or two, and that completely change the feasibility of the project. [00:52:27] And I think people also discount the insane value of the knowledge gap, of the speed at which you know something. Like, the technology does move at a million miles an hour, but people don't learn how to use the technology at the same speed. The technology has moved far faster than our ability to learn, account for, and integrate all of it.
[00:52:57] And I swear, I just cannot get over it: if all of the artists and stuff, instead of just being sad and believing that they don't have any jobs anymore, would find these programs and these applications and these tools that they think are going to replace them, and then use them to 10x their output. Use them to ideate, use them for, you know, inpainting, and to lower their costs, because it's taking them a whole lot less to get the results that the customers want, or that their clients, the people that they're working with, want, or the projects that they want to do. And to be able to tell a story by themselves that otherwise would have needed two or three other people, because the scope of the story is too big. Or it's not just animation or not just pictures that they can do; they need, you know, 3D rendering skills and a bunch of other different pieces of this puzzle that they can't do alone. And now this actually begins to challenge that. It begins to say, maybe I can do 3D rendering with an AI model; there are a lot of really great AI models for that. Maybe I can draw the actual image and composite all of this stuff and then animate it. It makes them so much more capable. And I feel like there are so many people who just gave up. They're just like, oh, well, it's done now. Now I'm dead and I have no job. And it's like, actually, if you know this, there are tons of people who don't know about these tools. Most people everywhere still do not have access and still don't even know how to utilize these tools.
There are tons of people who went up there, chatted with an LLM for a little while, couldn't figure out how to integrate it into their lives, because it's a new tool and they have to completely rethink how they even assess what software is able to do to help them, and then left, and don't use it, and think it's completely useless. [00:54:54] It will literally take years and years and years just for the stuff that we've already made to find its way through all of the environments and all of the people in which it will help. Because the speed of information, the pace at which people learn to integrate and change their systems and integrate new tools and technologies, hasn't really changed. [00:55:17] People always still just kind of learn new things at the same pace. That means we're just never going to run out. There will always be gaps to fill. It's about filling in that gap, finding where you are in the stack. That's what I'm trying to do with this show. I'm good at podcasting. I enjoy it a lot, and I like talking about the stuff that I learn about. [00:55:40] I think I do a pretty decent job of explaining it to people. So what am I doing? I'm trying to find all of the tools that I can. I spend a whole bunch of time tinkering with them, because that's what I've always done with software and crap on my computer: building some custom machine and making something overly complicated and custom, and using the beta version for no good freaking reason, just because I like for stuff to break. Then I explore all of this stuff, I coalesce it, and I try to come out of it with something that's actually workable and actually useful in some sort of a context. And then I share and explain all the stuff that I did. And, you know, you always have this, like, bias that if you know it, or you explored it, obviously everybody else did too.
But constantly I'll talk about something for weeks and weeks and weeks, and then somebody will just randomly hit me up, ask me about something, and I'll show it to them. And they'll be like, I've never heard of this. This is completely new to me. And I'll be like, I'd just always assumed that the whole world knows about this. We all do that. And it's important to remember: just distilling and spreading the knowledge that you have gained, utilizing the tools that you have found to make your job easier, is actually a massive part of providing economic benefit to other people. [00:56:55] People won't stop needing good design aesthetics and good artists. In fact, quite the opposite. I think this will be incredibly important, because people who are really naive will think that they can just go generate an image, and then it won't fit into anything else they've done. They won't know how to give it the information it needs to build some sort of a cohesive brand. So the people who think in and know kind of the biggest big picture of the whole artistic side, like, they can just see the artistry and the aesthetic inside of things, and they can just know: you shouldn't use this color; no, this is actually a much better color scheme for what you've got going on; this is what you want if you want to feel bold, if you want to punch people with this brand or this idea. That hasn't been replaced. And in fact, I don't think it can naively be replaced, because when you're talking to a bunch of these LLMs, you have to give it the context. You still have to think in the context of everything that you're doing and the stuff that you have access to and the stuff that's in your head. And just translating that is difficult. [00:58:01] So maybe my call to anybody who's super pessimistic, or thinks that they're being replaced, is: don't despair at a future that you think is going to be here, because it's literally not here today.
[00:58:14] AI can't replace anybody's job unless you have the most monotonous, awful job ever. And in that case, that's probably why you're despairing: because you have a very depressing job, and you need to figure out how to upgrade it. And if you can completely automate the thing that you were doing, you can figure out how to be useful at a layer on top of it. But use the tools today. They're not perfect, and there are huge knowledge gaps that you can take advantage of that just make you better at whatever it is that you do. [00:58:43] So, anyway, back to the piece. I want to get on to the next thing. I just thought that was a critically important thing to hit on, because so many people are so pessimistic about it. And I just think the big thing that people aren't calculating is the number of things that will get built that simply would never have been built otherwise. [00:59:05] The big folder of little tiny apps on my own computer is a prime example of it. In fact, here's a good example, because I just spent like 20 minutes on this and went through a couple of iterations, but it was kind of a neat little thing. When I was reading through this article, and when I read through articles a lot, what I actually do is save them, because when I'm reading them in the browser or whatever, I can't highlight on the page. [00:59:37] Sometimes I save them as PDFs, but sometimes I save them as a website, an HTML file, because I want to save the links and I want to save a lot of the formatting. [00:59:47] And I like the idea of just being able to double-click, bring it up in a browser, and then have everything I was used to before. [00:59:53] But one of the things is that a browser doesn't have, like, a native highlighting function. So I can't highlight text.
[01:00:04] But that's something I do a lot, especially in PDFs with Markup in Preview on Mac: I will highlight stuff, and then I can just kind of scroll through. I highlight with a simple color scheme, and I scroll through all of my highlights for what I think were really good points to hit on, or stuff to go back to later that I could tweet out, or that is just important to focus on and remember at a later date because I want to do some sort of content around it. But I can't highlight an HTML file. So I had Claude write me a script to make it so I can select something, and then it brings up a little highlight button next to my mouse, and I can click it and highlight stuff in my HTML files on my computer. So I can now save articles as HTML files, and I can put this code in at the bottom. [01:01:03] Or, I have made an HTML highlighter app, built in AppleScript, again with Claude, that makes it so that I can drag an article file onto that little app, and it will simply add that highlighting functionality at the bottom of the HTML webpage code, which is just, like, a style and button script. And now I can highlight anything when I'm reading the articles out of my Reads folder. [01:01:29] That would never have existed. I would have just stopped saving stuff as HTML. But it's actually really useful for me to have some stuff in HTML and have the entire page. And honestly, sometimes it looks like crap as a PDF. And now that exists. It would never have been something that was functionally possible, or economically possible, because I would have been bored with it in about 15 minutes; that's about as much as I was able to commit to it before I was like, okay, I'm done. But it works, so I've got what I wanted. Now, his next section: as a tutor for new technologies.
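Before the next section, to make that highlighter idea concrete: a minimal Python version of the kind of injector app just described might look like the sketch below. The embedded style-and-button script is an assumed stand-in for the actual Claude-written code from the episode, which will certainly differ:

```python
# Minimal "style and button script" appended to the bottom of a saved
# article. Assumed sketch of the snippet described, not the real one.
SNIPPET = """
<style>
  mark.saved-highlight { background: #ffe97a; }
  #hl-btn { position: absolute; display: none; z-index: 9999; }
</style>
<script>
  const btn = document.createElement('button');
  btn.id = 'hl-btn';
  btn.textContent = 'Highlight';
  document.body.appendChild(btn);

  // Show the button next to the mouse whenever text is selected.
  document.addEventListener('mouseup', (e) => {
    const sel = window.getSelection();
    if (sel && !sel.isCollapsed) {
      btn.style.left = e.pageX + 'px';
      btn.style.top = (e.pageY + 12) + 'px';
      btn.style.display = 'block';
    } else {
      btn.style.display = 'none';
    }
  });

  // Wrap the current selection in a <mark> when the button is clicked.
  btn.addEventListener('click', () => {
    const sel = window.getSelection();
    if (sel && !sel.isCollapsed) {
      const mark = document.createElement('mark');
      mark.className = 'saved-highlight';
      sel.getRangeAt(0).surroundContents(mark);
      sel.removeAllRanges();
      btn.style.display = 'none';
    }
  });
</script>
"""

def inject_highlighter(page: str) -> str:
    """Add the highlighter snippet to the bottom of a saved HTML page."""
    if "hl-btn" in page:
        return page  # already injected once; don't double up
    if "</body>" in page:
        return page.replace("</body>", SNIPPET + "</body>", 1)
    return page + SNIPPET  # no closing body tag; just append
```

Wrapping a function like this in an AppleScript droplet, as described, is what turns it into a drag-a-file-onto-it app; a shell script would work just as well.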
[01:02:04] Once upon a time, I used to keep on top of new frameworks, but a single person only has so much time in the day, and by virtue of my job, I spend most of my time keeping up on the latest research advances and not the latest advances in JavaScript frameworks. [01:02:18] What this means is that when it comes time to start a new project outside of my particular research domains, I generally have two possible choices. First, I can use what I know. This is often a decade or two out of date, but often is enough if the project is small. [01:02:34] Or I can try to learn the new, and usually better, way to do things. [01:02:39] And that's where language models come in. Because most new-to-me frameworks or tools, like Docker or Flexbox or React, aren't new to other people. There are probably tens to hundreds of thousands of people in the world who understand each of these things thoroughly, and so current language models do, too. [01:02:58] Now, he uses Docker as an example, just because he didn't really know Docker. Docker was very useful just in containerizing, in making it so that whatever the project was, it's not running around, you know, in his computer deleting random files or anything; it had a confined environment. But he didn't know Docker. But obviously there's an unbelievable amount of details and tutorials and God knows what about Docker out there. So he was able to use that. And this is important, too: rather than having some sort of static tutorial, with some set of assumptions about what the reader actually knows, and with some explicit task built out from the tutorial, like, oh, let's.
Let's use a tutorial to figure out how to use Docker to make a Hello World app, one that likely, you know, after a couple of years might even be out of date on some, you know, simple piece, or on how to use the tool or the command to run, et cetera, et cetera. Rather than having some static thing that he had to go find, and find the best one, which, again, reads differently to every reader, depending on what the reader knows and the assumptions that the person writing makes in detailing out what you need to do. I mean, how many times, for anybody who's not familiar with some sort of a framework or some sort of a code base or tool or programming language, God knows what, how many times do you go to a tutorial and it says, well, first you need to compile, and then do this? And there are, like, one or two or three steps that just say, you should do this, and you realize you have not the slightest clue how to do that entire step. Like, that entire thing is foreign to you, or might as well be a completely different language. I can't tell you how many times it would be something where clearly there's some sort of a command, or there's a button, or there's just some basic way to do something in some super complicated, advanced application. And it's never in any tutorials, because it's one of those things that, of course, if you use this application, you know how to do this. This is one of those standard things. But it's not searchable. You don't know exactly what the name of that thing is. And there are no tutorials about it, because it's a built-in thing. It's one of those assumed things that everybody knows. And you spend an hour looking for this stupid thing.
This is the perfect type of thing for LLMs to solve, because you can work with them interactively. They can help you, they can help guide you and teach you pieces of tutorials, and the different things you would need, and the different tools to use, and what those commands are, specifically for your task. It's like a live, directed tutorial, specifically from where you are, specifically leading you to where you are asking it to go. [01:06:04] And anytime that you don't know, anytime that it assumes, you just ask it to expand. And you can even get it to pull out the name of that little tool or that function that everybody else just knows, and you had no idea what it was. You didn't know how to search it, because you didn't even know what it was called. So you give your weird abstracts: what happens when you have the thingy, and it goes under the thingy, and then you cut, like, after, at the end of the thingy you do this — that ridiculous, awful two-paragraph explanation of what you're trying to find. And the LLM is just like, oh, you mean the blade tool? Or, you're talking about the slip tool, where you want to keep everything right where it is and scroll the video footage itself back and forth underneath it, but keep it in the same place in the timeline. Whatever it is, it's great for that sort of thing. I have used it quite a bit for that kind of thing myself, especially when exploring tools that I'm not familiar with, or when I'm getting to a level of a tool that's stuff I've even done before, but I just forget. You know, like, I totally did this, and I was super into this project or this tool or this application for, like, four months when I was working on this other thing, and I got really, really good at it. But it's been a year. It's been a year since I broke it out, and I had done a whole bunch of layers and done a bunch of cutting and all of this stuff. And I just forget where stuff is.
I forget what the tools are, and I'm like, I definitely did this. What the hell did I do? LLMs are perfect at saving you from having to fight back through those 80 tutorials and relearn the things that you had already learned in popular frameworks, new OSs, new applications. [01:07:53] This is a perfect thing for large language models, especially the much, much larger ones. But this is also why I've thought for a really long time that we need language models specific to applications, that literally just train you and teach you on every single little piece of the application. Which is basically what I do with a fine-tuning of ChatGPT or Claude — or not even fine-tuning, I just drop hundreds of pages of docs into these things to get it to explain to me how to do something. It's like, damn it, I don't understand why everybody doesn't have a fine-tune of one of the open source models, especially with Llama 3.1 now, so that I can just ask it a question and get it to explain to me how to do something with DaVinci Resolve or Microsoft Office or Google Docs, whatever the hell it is. That's a really big one. And in fact, another one is to get started with new projects. This is the next one on his list, but I thought this was a really good one, because when you don't know where to begin, it's so easy to get lost in just sitting and thinking, oh, maybe I do this, maybe I do this. [01:09:08] You look up some search results, and you don't really find what you're looking for. Again, you don't know what it's even called, so you don't know how to search it. [01:09:14] LLMs are a great way to get a broad overview: okay, where would I even begin to attack this project? [01:09:22] Just have a conversation with an LLM if you ever feel stuck in that way. Like, let's say you have no idea: you're thinking about doing an animation too. You want to make a movie, but you have not the slightest clue where to start. You have no prior experience or anything.
You just heard me talk about it and it's like, huh, maybe I could do that. Go to an LLM, ask it how you might start that project, and then when it gives you 20 steps or five steps and you don't know how to start the first step, ask it to expand on that first step. And keep going deeper and deeper, just inception your way all the way down to very concrete, specific things that you can do, and you can start putting a project together. [01:10:03] You would be shocked how much decent stuff could probably be accomplished basically by letting an LLM guide you through most of the process, and guide you into what you need to learn and the kind of people or places or programs that you need to ask or connect to. [01:10:26] It's probably pretty easy to think about it as an extremely general assistant in just kind of accomplishing things, or thinking about projects and systems and tools and frameworks and applications, whatever it is. [01:10:41] Now, he's got a lot of examples of simplifying code and a lot of things to dig into for anybody who is actually a developer. [01:10:49] But this is not something I do by hand. Every bit of my code comes through an LLM, so I just ask it to simplify if I ever feel like it's gotten too bloated, or because my mindset or my direction in asking it to build something was wrong or just suboptimal. But I'm not going to go into those, because I don't really have much to say on that other than, sure, it makes sense that you could use an LLM for this, but I don't write any of my own code to begin with. [01:11:23] Then there's automating monotonous, boring tasks. There are actually a bunch of different examples he gives for these sorts of things.
And this is really the heart of what I've used it for: to automate certain things that I do a lot, like around my workflow with the business and podcasting and projects, and then to complete one-off tasks that I just would not be able to do otherwise, or that I don't know how long it would take. There's a decent example of, you know, if you have screenshots of a bunch of stuff. Let's say you have screenshots of a bunch of tweets, or screenshots of receipts or something, and you want to aggregate them. Well, there are a lot of vision models now, vision and language models together. [01:12:13] They can literally look at these objects and then pull the important information out of them, if you just tell them to. They have OCR, optical character recognition, that they can use to pull a bunch of stuff together. And you might be able to write a small program with AI very, very quickly, and even use a local model, that will then pull all of that information from a bunch of different pictures and take a monotonous task and turn it into something where you make a 10-minute application, just run it, and maybe you never even need it ever again. [01:12:48] But you have then saved yourself from having to deal with the monotony of going through every single one and writing everything down. [01:13:00] Granted, obviously, if you're talking about, like, receipts or something, you want to actually go check your work. But still, there are a lot of times in which that could be insanely useful. [01:13:08] And especially, like, myself, I've definitely run into this a number of different times, and it's something I want to use in a much more advanced way down the road.
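To make that concrete, here's a minimal sketch in Python of the kind of "10-minute application" an LLM might hand you for the receipts example. It assumes the screenshots have already been run through an OCR step (something like pytesseract with a local Tesseract install); the regexes, sample text, and field names are made-up illustrations, not anything from the article.

```python
import csv
import re

# Assumption: each screenshot was already OCR'd to raw text, e.g. with
# pytesseract.image_to_string(Image.open(path)). We just parse the text.
def parse_receipt(text):
    """Pull a date and a total out of OCR'd receipt text, if present."""
    date = re.search(r"\d{2}/\d{2}/\d{4}", text)
    total = re.search(r"(?i)total[:\s]*\$?(\d+\.\d{2})", text)
    return {
        "date": date.group(0) if date else "",
        "total": total.group(1) if total else "",
    }

def aggregate(receipt_texts, out_path):
    """Write one CSV row per receipt - the whole throwaway app."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["date", "total"])
        writer.writeheader()
        for text in receipt_texts:
            writer.writerow(parse_receipt(text))

sample = "COFFEE SHOP\n08/09/2024\nLatte 4.50\nTOTAL: $4.50"
print(parse_receipt(sample))
```

You'd still spot-check the output against the images, as mentioned above, but the monotonous transcription step is gone.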
That's why I've been saving a lot of stuff without categorizing it or giving it proper file names, just trying to prioritize having as much as I can, and then using it as examples to test a bunch of these different tools, and how I can actually use them to organize my stuff in kind of a more formal way, and use a lot of LLMs and AI tools to take over a lot of that job. And then the rest of his list says: as an API reference, as a search engine, to solve one-offs, to teach me, solving solved problems, and to fix errors. [01:13:55] I would just say that as a reference for something very specific, or for an API reference, or for commands for a specific tool, like ffmpeg, I use it all the time. I still find myself just going to Claude and asking what commands to use, rather than going to the help doc and being like, ugh, what was the thing to get the right algorithm and the right bit rate and all this stuff. I just go to Claude and I say, what should be my ffmpeg command to get this video in this format? I detail it all out, it gives me the command, and I just drag and drop my file into that command. It is really useful for docs referencing. And sometimes I will literally just give it the entire help output. So if you type in wholesale --help or ffmpeg --help and it prints out all the different commands, sometimes I'll just copy and paste that whole thing, drop it in, and then ask it, because I'm doing the same thing. It's just saving me having to read it. I do that a lot. In fact, a really funny thing is there were a lot of little pieces in this that, it's funny, I didn't even really use it that much, just because I already had my own notes, and it takes me way too long. I always write 10 times as many notes as I can actually get through in a show, because I ramble.
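For a sense of what that back-and-forth with Claude actually produces, here's the shape of such a command, wrapped in a tiny Python helper so the pieces are labeled. The flags are real ffmpeg options (H.264 video, AAC audio, explicit bitrates), but the specific codec and bitrate values are just illustrative assumptions, not anything from the episode.

```python
import shlex

# Hypothetical helper: builds the kind of one-liner an LLM hands back when
# you describe the format you want. Swap the defaults for your own targets.
def build_ffmpeg_cmd(src, dst, video_bitrate="2500k", audio_bitrate="128k"):
    """Build an ffmpeg invocation that re-encodes a video to H.264/AAC."""
    return [
        "ffmpeg", "-i", src,          # input file
        "-c:v", "libx264",            # H.264 video codec
        "-b:v", video_bitrate,        # target video bitrate
        "-c:a", "aac",                # AAC audio codec
        "-b:a", audio_bitrate,        # target audio bitrate
        dst,                          # output file
    ]

cmd = build_ffmpeg_cmd("input.mov", "output.mp4")
print(shlex.join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

In practice you'd just paste the one-liner into a terminal and drag the file in, exactly as described above; the helper only makes explicit what each flag is doing.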
But I actually used Fabric's Extract Wisdom tool on this article to try to put together a lot of notes and insights about it, and there were a couple of useful things that I pulled from it. It was actually my brother's idea. I was on the phone with him because I was having a hard time parsing everything that I wanted to say without actually reading the whole freaking article. And it was funny, I had used AI for multiple other things today, like the LLMs. I just built my highlighter tool so that I could highlight stuff in the HTML file. And then he was like, well, dude, why don't you just get it to summarize the stuff? You told me about the Fabric thing, right? It's got something where you just put it in and it pulls a bunch of great insights out of it. Why don't you just use that to build your notes from? I was like, oh, well, that's actually a good point. That'll help me parse through what I want. [01:16:02] And Extract Wisdom is a really good one. This is something, too, for anybody who hadn't listened to one of the recent episodes. Fabric is... that's my kid. I'm not sure if you can hear him, but he's going bonkers. [01:16:14] My wife is playing with him. And in fact, that's why I've got to close this up, because he gets delirious just before bed and he's a ton of fun. I gotta go hang out with him. [01:16:24] But there's a tool called Fabric, which I actually haven't used in earnest the way Fabric wants me to use it. It's a command line tool where you pull from a bunch of these prompts, and you execute them like commands. [01:16:39] Whereas, it's really funny, I've actually just been going to the GitHub page, finding the prompt that I want to use, and then copying and pasting that into Claude or ChatGPT or whatever. Mostly just Claude. But copying and pasting that into my prompt space in the thing and then giving it the article or whatever. And so I'm doing exactly the same thing.
I'm just not using the Fabric app specifically, I'm just using the patterns that Fabric has put together, because it's a big open source thing, so you can just get it all on GitHub. But it's a really great tool and a really great collection of things that are explicitly designed to be useful, explicitly designed for certain tasks and for certain ways to take advantage of content. And even though I only used it for one or two quotes or little things from this article, if nothing else, it actually gave me a great place to get started from, because I was a little bit overwhelmed with all of the stuff that I had to tackle in this episode. [01:17:42] So there are just so many different ways that it can help get you across the hump. It can help guide you into... the rat is just giggling back here and it's killing me... guide you into the first steps of a project and walk you through in detail. [01:18:00] Learn entirely new frameworks, be used as a fantastic reference. It is a great kind of general knowledge search engine as well. [01:18:11] And that was one thing my brother said: he hardly ever uses any of this stuff, but he has been using the crap out of Perplexity. Perplexity AI is a search engine that is fully LLM-integrated, and honest to God, it's way better than Google. I find myself not really gravitating to Google for anything anymore, which is actually really cool, that I find myself using Google in so few respects these days. But Perplexity AI is a fantastic search engine. [01:18:41] So without going any deeper and listing anything else out that I'm going to have to go back and grab to put in the show notes, [01:18:50] this should give you plenty of things to explore in the next week or so before we have another episode. So check them out. All the links and details will be in the show notes. Don't forget to grab yourself a Coldcard to keep your Bitcoin safe, and use my discount code Bitcoin Audible.
Those details and links will be right in the show notes. Thank you all so much for listening. [01:19:12] Thank you for supporting the show. Thank you to everyone who does value for value and does a boost on Fountain or streams sats to the show. That makes a huge difference. [01:19:23] It literally keeps me going with all of this stuff, just to know that there are people out there who are constantly listening and constantly sending me sats to keep this thing running. I love this stuff and I love digging into all of this tech and these tools, and I wouldn't be able to do it if I didn't feel like I was finding a way to make it useful or valuable to other people. It would seem like a waste of time otherwise. So I'm super excited and really appreciate that you guys support on Fountain. Check them out if you haven't. By the way, it's a great way to start earning sats and then use bitcoin and lightning for the first time, so it's something to check out. You know what, I'll throw the link to that one in the show notes as well. [01:20:06] Alright guys, thank you so much for listening. I am Guy Swan and I will catch you on the next episode of AI Unchained. Until then, everybody take it easy. [01:20:27] "LLMs wildly reduce this pain and just make it so much easier to start something, knowing I'll only have to solve the interesting problems." [01:20:38] Nicholas Carlini.
