Episode Transcript
[00:00:00] Speaker A: Apple has finally announced the details of Apple Intelligence at their Worldwide Developers Conference in 2024. And in my opinion it is one of the most valuable and also frightening implementations of AI I have seen yet. It is genuinely the promise of so much of what we have discussed with AI: an OS-level integration that sees and has context and valuable references to everything that you are doing, sees everything that is happening in your apps, answers your questions and fulfills tasks for you. So much so that the Rabbit R1, which I purchased months ago and still is not in my hands, may very well already be obsolete. And they claim that it's private. They claim that you're not sharing your data with anyone. Which means the real question is: how much do you trust Apple? Let's dive into the features, the design, how they may be pulling this off, and what it might mean for the rest of us. This is AI Unchained.
What is up guys? Welcome back to AI Unchained. I am Guy Swan, your host, and I made this show in order to explore the world of artificial intelligence and figure out how to make my way to a world of self hosted, sovereign and open source AI tools. So that is what we explore here, that is what I hope to be driving toward, and if that's what you're looking for, well then you made it to the right place. This show is also brought to you by Swan Bitcoin: the best place, the easiest place to start buying Bitcoin, to learn about Bitcoin, to set up a long term plan, to even go from zero, where you've never bought any Bitcoin, all the way to a bitcoin standard with a full financial suite of tools. You gotta check em out. And there are literally no fees on your first $10,000. And then you're gonna wanna keep it safe, so the show is also brought to you by Coinkite and the Coldcard hardware wallet. This is literally my favorite little device. I use it constantly. I'm a huge fan of the Nunchuk wallet in particular. I'll actually be doing a little video on the Nunchuk wallet and Coldcard setup, which I've been meaning to do for a while, so you should actually see that somewhere, maybe on the screen if I have that done, and you can go check that out too; the link will be in the show notes. All right guys. So there's a really awesome announcement here, but it's also a little bit depressing, because Apple has kind of blown away all of the prior implementations of AI, in my opinion, when it comes to the interface. A lot of things that I have really wanted to do with AI, I have sought for ways to do myself.
In fact, all of the major ones that I have been trying to pull off, Apple seems to have pulled off in this recent WWDC 24. With all of the newest changes coming to iOS and macOS, they have essentially implemented in the operating system the key features that I have wanted out of AI for a long time. Now that might not be the case for everyone. And there's something really interesting about why it is that the integration has to be so deep. Like, I think it really has to be in the operating system.
But then why is this depressing? It's the difficulty and the complexity of setting this up in an open source and self hosting environment. I think those tools will improve quickly over time, largely because of this. Usually a company pulling something off is basically a signal, a push for other people, other institutions, to figure out how to do it too. It just kind of shows that this is possible, that you can do it at this level, and so basically everyone else has to step up their game. So maybe it's a situation where open source is just lagging behind. But there's also an element where I think it's really that they have access to the operating system itself, and being able to put this in the operating system is what allows the integration to be so seamless and smooth. More specifically, and we will get into this in greater detail, we're just going to go through the WWDC keynote all about Apple Intelligence. I'm just going to leave out all the other stuff I don't really care about, Notes and Mail and whatnot. This is not a Mac show, this is an artificial intelligence show, and I want to cover Apple Intelligence and what they have done.
Especially in context of what we talked about recently with Apple releasing something like 8 small LLMs that are kind of fine tuned in a certain direction, which makes me feel that there's very much a mixture-of-experts sort of situation happening with the AI on Apple devices, in the Apple OS. But then why do I also say that this is a little bit depressing?
It's because I don't trust Apple. I mean, I have an iPhone, I also have an Android, and I have my Mac. I love my Mac, I love my M1 Pro or Max, whichever the big one is, whatever processor I got, and this thing still screams. I have been able to use it constantly for tons of different things. Like, I love editing on it. This is a powerful computer. I love it.
But I just don't trust Apple. And they talk a lot. They say so many times in this keynote how concerned they are with privacy.
I don't believe them. I think they want to protect people's privacy to the degree that it helps them, and where it gets in their way, I do not think your privacy or my privacy will be their top concern.
And I certainly also know how easy it is. I mean, maybe this is just all smartphones and most devices these days, but with something like... is it Pandora? No. What's the name of it? Oh crap. The software that basically can pwn at least any cell phone, any smartphone device. I imagine it's pretty broad: just complete control over the device. I know iOS does not protect from that. And I also know, or at least have heard, a lot of leaks, a lot of news reports, a lot of discussions about the enormous number of recordings that Apple has on its hands through Siri that are basically accidental. I don't know if it's the default that you're sharing data with Siri for them to get better at making Siri. I mean, of course there's a million legitimate excuses to have that recorded audio of you asking Siri for something, and then knowing what the outcome was, so that they can make it better. But that's also something that can be used for very nefarious purposes. And I read an article recently, I wonder if I can find it again. I imagine a search would probably pull it up. So I'll try to have it in the show notes if I can remember to do that. I will have a link in the show notes, which I will make sure that I say specifically so that my AI picks it up.
But the article was talking about how many hours of accidentally recorded sex and very private things had been picked up by Siri through these talk-to-your-device accidental activations, I guess. And I've had Siri randomly activate on me so many times, and I've even tried to turn off the voice commands. If I want to use it, I want to be able to manually control it. I really don't like to think that my device, or the network service and the server or the corporation or whatever it is, has more control over my device than I do. And at the same time, there are these announcements, which again we will get into, for Apple Intelligence.
I want to use it. I know I will like some of these things; there are features that I have spent a long time trying to figure out, what little pieces I'll need to put together, like being able to search inside of a video for a particular scene or for a particular action or location.
It seems like they've pulled that off, and that is something that I have been wanting to do for a while, trying to figure out how to do that myself. But it's a big trade off.
And the other big negative in my opinion, which we'll come back to as well, is that it won't be long before you won't be able to run without it. AI will become so integrated into all of the things that we do, and these features I think will become so useful, that it will feel like running through molasses if we do not have these tools in our operating systems. And if these tools are constantly phoning home and Apple has more control over them than we do, I think that's just an enormous risk.
It is an enormous risk and I think they want to pander. I think they want to appear as if they are very privacy concerned and letting you control your device and what happens with your information.
Like I said, I just don't trust it there. And they're too big. They don't have the best history.
And honestly, even if they wanted to, I don't think governments would just generally let them. I don't think the United States government or the Chinese government would let a, what is it, $3 trillion, $2 trillion organization sell their devices in those countries if they did not have a backdoor, if they did not have some government "as long as we say so" remote control. I don't think Apple would be allowed to sell in those countries. I think the governments would be perfectly willing to sacrifice their population's access to the devices and to the technology in order to enforce the fact that they have control.
So there's a lot of good and there's a lot of bad, but nonetheless it's cool. It is really cool. I think they have a lot of fantastic tools and I want to go through them with you right now. So let's get into it. All right, I'm gonna spare you guys the intro to this, because this is about the cringiest WWDC intro and video that I've seen yet. I really wish they would go back to just live, because I think just having somebody come out on stage and talk is a million times better than this. So I'm gonna spare you guys; if you want to go to the Apple website and watch the awful cutscenes or whatever they call these things, you can. We'll skip that. We're going straight to Apple Intelligence.
[00:12:17] Speaker B: For a long time, we've been tremendously excited about the power of generative models, and there are already some really impressive chat tools out there that perform a vast array of tasks using world knowledge. But these tools know very little about you or your needs. With iOS 18, iPadOS 18 and macOS Sequoia, we are embarking on a new journey to bring you intelligence that understands you. Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad and Mac. It draws on your personal context to give you intelligence that's most helpful and relevant for you. It protects your privacy at every step, and it is deeply integrated into our platforms and throughout the apps you rely on to communicate, work and express yourself. Let's take a closer look at Apple Intelligence, starting with its incredible capabilities. Then we'll tell you about its unique architecture.
[00:13:17] Speaker A: So on Apple providing privacy: one thing that I think they are doing is limiting what third party apps can do. But this is where, and we'll see it in a couple of the things that they have, I think when they're trying to give us privacy, or the appearance of privacy, in certain regards, it's with what are essentially services and apps that would be a competitor to Apple's version of those things.
I think they're getting a lot of benefit out of it and also being able to sell that they're a very privacy focused company in an age where a lot more people are caring about their privacy, which I think is a very good thing. But again, I just don't trust why we are getting the privacy and whom we are getting the privacy from specifically. I'll actually refer back to this in a second with one of the things that they talk about in this unique architecture.
[00:14:19] Speaker B: And after that we'll show you how it elevates so many of your everyday experiences. Let's begin with capabilities. Apple Intelligence will enable your iPhone, iPad and Mac to understand and create language as well as images and take action for you to simplify interactions across your apps. And what's truly unique is its understanding of your personal context. Language and text are fundamental to how we communicate and work and the large language models built into Apple intelligence deliver deep natural language understanding, making so many of your day to day tasks faster and easier. For example, your iPhone can prioritize your notifications to minimize unnecessary distractions while ensuring you don't miss something important.
[00:15:07] Speaker A: Now this is a big one and I've talked about on this show about how we're entering the stage or the era of infinite everything, infinite media, infinite content, infinite video, infinite image and holy crap, infinite notifications.
And the real key to this will be making, building and controlling filters to essentially gate it all. And this may really already be the case if you think about it. Like, YouTube's incredible power is them being able to put in front of you what they think you want to watch. Twitter's incredible power is in trying to trap me into some political rage thread where I'm mad about something, or where somebody else is doing something terrible and I have to comment on it, or prioritizing my seeing the comments that are insulting me so I have to just tell them how it actually works, trying to trap me on that platform, because that's what's best for them. We have a principal-agent problem where their incentives are to keep me trapped on Twitter, where this is not a good use of my time. And so we're actually fighting their incentives. And I don't see the algorithm, I don't see what it is that is controlling me. There's a great example: there's a dude who posts these outrage sort of videos that are always politically based, or it's some awful thing, like some kid was killed and the person got out on bail after like three days and then went out and killed somebody else. Just awful stuff that depresses me, that makes me mad, and I want to go in and make a comment, and then inevitably there's somebody defending them. And I have, on multiple occasions... I do not follow this person.
I do not. I have at least three times said this post does not interest me. It still shows up; it defaults to my For You feed and he's there every single day, because it traps me. I engage a ton when it gets put in front of me, because I get angry. I'm an extrovert, I just want to engage and I want to share my opinion, so to speak. This is why I do a podcast. And I hate it. I hate it after I'm there for like 30 minutes arguing with somebody in some stupid thread, or just mad about this thing, and I'm agitated, I feel literally physically crappy, because I have a son and all I can do is imagine it happening to him, and I don't want to be there and do that. And still, no matter what I do, it still shows up. Now, I haven't blocked him, but I have muted him, and hopefully it doesn't show up, but I swear to God, I've already done this. All of it is just an example to show that when somebody else has control over these things, there's real nuance in how you can be controlled and what they can do to their benefit and not your benefit. Whereas this feature, in the context of the infinite everything and the infinite notifications, this is an awesome feature: just prioritize your notifications so that I don't have to sort through them. I can't even keep up with clearing notifications that I don't look at. If I'm gone for six seconds, I have notifications. And I actually try really hard to control what is allowed to send me notifications. So anyway, prioritizing notifications is a fantastic tool. Here's the problem, here's the thing that gets me: however Apple decides it is best to learn from the user, or whatever it decides is best for the user to prioritize, it's going to decide how and why it should learn something.
And that might be to our benefit most of the time, but think of the level of control. This is the problem with Apple's privacy and competition, so to speak, within their app interface: I don't think it's possible for another app to have full visibility into all of your notifications from every single other app in order to sort them and prioritize them. I'm not sure they allow that level of OS access, so only they can make this tool; you probably can't even make this tool for yourself on their ecosystem, on their operating system. Maybe you can. Maybe it's just a degree of permissions, and you have to really turn a bunch of stuff on and make it look like that app is just looking at every damn thing you do in order for it to be able to do something like that.
But this is an incredibly awesome feature. This is an incredibly obvious feature, like, yeah, that's going to save me a mountain of time, to just prioritize my notifications and put the things that don't matter that much in the background.
But it's also one of the things that when I get used to that, Apple might be the only one that I can get it from on all my devices.
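Apple hasn't published how its prioritization actually works, but the core of the feature I'm describing, scoring each notification and sorting by that score, can be sketched in a few lines. Everything here is invented for illustration: the app weights, the urgency keywords, and the example notifications. A real system would learn these signals from user behavior rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class Notification:
    app: str
    text: str

# Invented weights for the demo; a real system would learn these.
APP_WEIGHTS = {"Messages": 3.0, "Calendar": 2.5, "Mail": 1.5, "SocialApp": 0.2}
URGENT_WORDS = {"now", "today", "urgent", "boarding", "reminder"}

def priority(n: Notification) -> float:
    """Score one notification: base weight for the app, plus a bump
    for anything that looks time-sensitive."""
    score = APP_WEIGHTS.get(n.app, 1.0)
    words = set(n.text.lower().split())
    score += 2.0 * len(words & URGENT_WORDS)
    return score

def prioritize(notifications):
    # Highest score first; ties keep arrival order (sorted is stable).
    return sorted(notifications, key=priority, reverse=True)

inbox = [
    Notification("SocialApp", "Someone liked your post"),
    Notification("Messages", "Flight is boarding, where are you?"),
    Notification("Mail", "Weekly newsletter"),
]
for n in prioritize(inbox):
    print(n.app, "-", n.text)
```

The interesting part is exactly what the episode points out: this only works if the scorer can see every app's notifications, which is an OS-level privilege.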
So trade offs. Anyway, let's get back into it.
[00:20:41] Speaker B: Apple Intelligence also powers brand new writing tools that you can access system wide to feel more confident in your writing. Writing tools can rewrite, proofread and summarize text for you whether you're working on an article or blog.
[00:20:55] Speaker A: This is really cool.
[00:20:56] Speaker B: Condensing ideas to share with your classmates or looking over a review before you post it online.
[00:21:02] Speaker A: So this is also another really big one. The chat interface, I think, was a really great introduction to how AI is supposed to be, but this is what I have been trying to do, and I've got one running that plugs into Obsidian specifically, which is my note taking app, which I'll probably be doing a video on soon; I only recently figured this out. But having this at the OS level, because he actually says in just a second that this is through Mail, Notes, anything like Pages, Numbers, all these things. This is OS wide that you have this writing tool. This is what a lot of people do with ChatGPT, right? It's an extension of spell check and grammar check: the ability to make this a little bit more succinct. This is something I do a lot with my 2 Sats videos, if you've ever seen them. I'm trying to get as succinct as possible, get down to the core of an idea to make it enjoyable and make it hit hard in four minutes, five minutes tops. Inevitably, when I do an episode on a topic like this, or I start writing my 2 Sats videos, the thing's 20 minutes long. I end up writing way too much, I end up repeating myself in three different ways, and I have to just go through and cut and cut and cut. But this is a perfect example of where having the AI and just being like: cut the fat out of this. Look at the different times that I repeat this idea for a different reason, or in a different context, or using a different analogy, and basically frame it all around one analogy and cut out all the extra ones. This seems like a natural extension of the tools that we are already used to with spell check. I mean, if we didn't have spell check, most typing these days would just be psychotic, especially on smartphones. But the thing is, it needs to be OS level. Like, I think it makes sense.
This really shouldn't be done in a chat interface. The question is, again: can you even do this? Is Apple the only one in their entire ecosystem that really has this level of access, where you could install a tool like this? Do they grant this level of access to something else? Could I build a tool that does this? I don't know. But I think this is a really great way to implement it, and I think this will be hugely useful. I know I'll probably use it a lot for various reasons. I won't get it to write for me, but at least to be succinct and clean something up, break up a run-on sentence, that sort of thing.
This is a really, really useful tool.
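Apple's writing tools are a black box, but one narrow slice of the "cut the fat" job described above, finding where a draft repeats the same idea, can be approximated mechanically. This is a toy sketch, not what Apple or any LLM actually does: it uses Python's difflib to drop sentences that are near-duplicates of ones already kept, and the 0.75 similarity threshold is an arbitrary choice for the demo.

```python
import re
from difflib import SequenceMatcher

def cut_repeats(text: str, threshold: float = 0.75) -> str:
    """Keep each sentence only if it is not too similar to one already kept.
    A crude stand-in for an LLM-based 'remove the repeated ideas' pass."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    kept = []
    for s in sentences:
        if all(SequenceMatcher(None, s.lower(), k.lower()).ratio() < threshold
               for k in kept):
            kept.append(s)
    return " ".join(kept)

draft = (
    "Self hosting your AI keeps your data private. "
    "Running models locally is great for privacy. "
    "Self hosting your AI keeps your own data private. "
    "It also works offline."
)
print(cut_repeats(draft))
```

A real rewrite tool would catch paraphrases and repeated analogies, not just near-identical wording, which is exactly why this is an LLM job rather than a string-matching one.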
[00:23:59] Speaker B: And they're available automatically across Mail, Notes, Safari, Pages, Keynote and even your third party apps. In addition to language, Apple Intelligence offers a host of capabilities for images. From photos to emojis and GIFs, it's so much fun to express ourselves visually.
[00:24:19] Speaker A: This one's really cool.
[00:24:20] Speaker B: And now you can create totally original images to make everyday conversations even more enjoyable.
[00:24:25] Speaker A: In just a second.
[00:24:26] Speaker B: And because Apple Intelligence understands the people in your photo library (this is where it's training it), you can personalize these images for your conversations. So when you wish a friend a happy birthday, you can create an image of them surrounded by cake, balloons and flowers to make it extra festive.
[00:24:41] Speaker A: This is.
[00:24:42] Speaker B: And the next time you tell mom that she's your hero, you can send an image of her in a superhero cape to really land your point. You can create images in three unique styles.
[00:24:53] Speaker A: I can't with this whole thing. I mean, maybe this is a thing with kids; probably Gen Z or some younger generation may just get a kick out of this and love exploring with it. But I'll never... why would I be like, thanks mom, and then send her a picture of herself as a cartoon with a cape on?
I don't want that. I don't need that. This is a deeply gimmicky thing. Except... sketch, illustration, animation, okay, so you have three different styles. So they've greatly limited it.
[00:25:33] Speaker B: This experience is built into apps.
[00:25:36] Speaker A: They've limited exactly how you can create pictures, which again is a little bit like Apple. You know, it's not like a generic stable diffusion thing which it might be, you know, some mix of that but in order probably again to get it the best outputs, the best quality and consistency and how it's created. They've limited what you can do with it exactly by having very three distinct styles.
Now, the big thing... this episode is brought to you by Swan Bitcoin, the place to buy bitcoin, the place to get started, the place to take yourself from zero all the way to a bitcoin standard. They have an entire suite of financial tools. Whether you are buying your first bitcoin and you want to do it without any fees at all for the first $10,000, you can do that right now at my link, swanbitcoin.com/guy, G U Y. It's very easy to remember, because that's me. And if you want to get a retirement account, you want to set up an IRA, you can do it in minutes and start allocating tax free. You want to get your business on it, and your business treasury, you want to do employee benefits packages, you want to do a vault where Swan actually holds one of the keys so that they can help you recover in the case of a disaster. Or maybe you just want to learn about Bitcoin and you want a place where you are only going to find signal, and you will find all high quality resources and high quality people to follow and get advice and learn from. That is the Swan Canon. They have great market analysis and a fantastic set of resources to learn anything you want to know about bitcoin. And with Swan Private, for high net worth individuals, they are there to assist with any of your setup, to answer any questions, and to walk you through all of the unfamiliar and uncomfortable processes of first getting into Bitcoin, or more importantly, holding an enormous amount of bitcoin safely and securely. Check it all out at swanbitcoin.com/guy.
[00:27:46] Speaker B: Throughout the system, like Notes and Freeform, and available everywhere, in Keynote and Pages.
Another way Apple intelligence is deep.
[00:27:54] Speaker A: This is useful just because I am constantly generating things to try to picture something in my videos. I actually use a lot of AI generated images now in my 2 Sats videos. Some of it's stock footage, some of it's found footage for political things or whatever, but I almost always have at least three or four AI created images in my videos now, just to give a clear picture, to recatch somebody's attention. It's really useful for keeping someone engaged. And then also, it's just good to have a visual, so that somebody can put an image to the words that you're trying to explain.
And so the fact that this is integrated basically everywhere, again, is potentially really useful in certain contexts. I just think the whole thing about sending somebody a picture of themselves with a birthday cake in a text... ah, really, not so much.
All right, let's keep rolling.
[00:29:04] Speaker B: Is its ability to take action across your apps. The greatest source of tools for taking actions is already in your pocket with the apps you use every day. And we have designed Apple Intelligence so it can tap into these tools and carry out tasks on your behalf. So you can say things like, pull up the files that Jaws shared with me last week or show me all the photos of mom Olivia and me, or play the podcast that my wife sent the other day. We're designing Apple Intelligence to be able to orchestrate these and hundreds of other actions for you so you can accomplish more while saving time. There's one more critical building block for personal intelligence, and that's an understanding of your personal context.
[00:29:51] Speaker A: Okay, so I want to talk on the action point really quick before we move on. This was always really the threat to me buying a Rabbit R1, which I've mentioned before. There's the review from MKBHD, Marques Brownlee, who does the really great review channel on YouTube, but I've watched a bunch of different ones, and I still don't even have mine. But it's really funny to see what they targeted and what they could not achieve. The one thing that the Rabbit apparently cannot yet do with their quote unquote large action model, as they say, is the most simple and obvious things. What I was thinking is: I have paid a hundred dollars, two hundred dollars for a simple recording device with a decent mic that I could record thoughts into, that I could record quick interviews with. I still have one over there, I just don't use it that much anymore because my setup has changed. But I'm totally willing to pay that to get a very specific use case out of something that I use often, that I use a lot. And what the Rabbit R1 looked like to me was the ability to do this and combine it with AI, so that the device can then manage, organize, and actually take a greater level of control and specificity in what I am doing, so that when I come back to it later, it's not just one clip of audio in this huge random string of unorganized, unfindable stuff. That's been the great thing about being able to do transcripts with my little Whisper tool: I can search through my episodes for things that I talked about and go straight to that section. This seemed like another step removed from that: let's go even further and see what we can do.
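The transcript-search workflow mentioned above, run Whisper over an episode and then jump straight to where a topic came up, reduces to a keyword scan over timestamped segments. The segments below are invented stand-ins; real ones would come from a transcription tool such as Whisper, whose Python API returns segment dicts with this same start/end/text shape.

```python
def hhmmss(seconds: float) -> str:
    """Format a second offset as HH:MM:SS for jumping around an episode."""
    s = int(seconds)
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}"

def search_transcript(segments, query: str):
    """Return (timestamp, text) for every segment mentioning the query."""
    q = query.lower()
    return [(hhmmss(seg["start"]), seg["text"].strip())
            for seg in segments if q in seg["text"].lower()]

# Invented example segments standing in for real transcription output.
segments = [
    {"start": 0.0, "end": 6.2, "text": " Welcome back to the show."},
    {"start": 754.0, "end": 760.5, "text": " Let's talk about app intents."},
    {"start": 1502.3, "end": 1509.0, "text": " App intents let Siri call into your app."},
]
for ts, text in search_transcript(segments, "app intents"):
    print(ts, text)
```

Substring matching is the crudest possible version; the obvious upgrade is embedding each segment and searching by meaning, which is closer to what "find the scene where I talked about X" actually needs.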
And yet that seems to be one of the things that you literally cannot do with the device: just take a note, put something on my calendar, schedule an appointment. The very simple functions. For all of the things that they bragged about it being able to do, and all of the attempts at creating some sort of feature or capacity, I feel like they didn't ask: what do people need to use this for? The way they've explained it here with Apple, I feel like the new iOS has already obsoleted the Rabbit R1, because if the Rabbit can't do that thing and Siri is going to be able to, and more importantly, it's going to be able to integrate more easily and to a far greater degree with all of the other apps that I use, because it's at an OS level, what do I want a Rabbit for? And it sucks, because I don't even have it. I don't even have it to be like, oh, I only used it for a week. I've abandoned it before it has even arrived at my home. And I'll still give it a try, and I might still do a video on it or something, just to see if it has any place. But it's just another thing that I have to carry around when we have a phone that does the job; I think it's just going to be a level of inconvenience that's not necessary anymore. But notice what he used as an example, when he showed in the background, for those of you who are watching the video, how there were specific tasks. They probably just interviewed people, watched people, asked their employees: how do you use your iPhone? What are you trying to do with it? All you have to do is watch that and you can actually build.
You can train your LLM, or you can train your action model, to be able to take on those tasks. But you have to know how your users are using it. Even ChatGPT and GPT-4o, I think, are a great example of: let's try to get as much capacity and as much general ability as possible, without targeting what people might actually use it for. And this is the same thing that I've talked about with Cascader, with sats for AI, and with so many different people: give me a specific use case. Think of the task, and then think about how the AI can actually perform that task better. Target what the user actually needs to do. I do not need a robot to just talk to me like a human; that is not really that useful. You're putting the onus on me to figure out how to use it in my setup or in my environment. What Apple has always done, and I have to respect them for this, despite all of the complaints and frustrations and the level of trust that I don't really lend to Apple, is that they always target, they think about, how the user actually uses stuff, and then they go from there to: okay, how do we apply the technology to do this? And in doing so, they can solve the problems far more efficiently every single time, because you're going to be able to use a smaller LLM that might actually run locally if you've designed it to work for particular tasks in particular situations. Now, it may be more limited when someone wants to branch out and do something that it hasn't been specifically trained on. It might not work super efficiently, or it might not work at all, it might make a mistake. But here's the thing about LLMs versus creating software to do some sort of a task: the crazy thing about LLMs is that they are going to learn, especially if you have the APIs and something they refer to as app intents, which I think is a really cool idea. At least it seems like it; I don't really know the structure of this, but I'll give how I view it at least.
Maybe I'll dig a little bit more into it for a future episode so I can have more specificity on how it works. But it seems like kind of an API for how people are going to interact with it. So it's not an explicit API; it's a mode in which people will probably use the app, put into an LLM almost like a pre-prompt for how it is likely that someone would interact with this app. A great example would be Obsidian: I have a little extension in my browser that lets me save stuff from the web to certain pages with a certain format and pre-assigned things and tags and that sort of stuff. So I select from a list and I'm like, okay, this is something that I'm saving for political history, this is something I'm saving for a 2 Sats video, this is something I'm saving for an AI episode, and I will literally just go down and hit that list, and it will save the link with the right format, or it will save it with a note, I can highlight something and save it as a quote, et cetera, et cetera. It's a really useful thing for organizing. Now, the app intent for that would be, I'm assuming at least: okay, someone is going to be interacting with this to save stuff to a certain location. So what the LLM would be looking for is locations in Obsidian where notes are saved, folders or tags or these sorts of things, all of the things related to where and how notes are organized and how they are written out, and then it would interact with me on the side of: this is a great video, can you save this? In fact, can you save this section of video?
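To be clear, Apple's actual App Intents framework is a Swift API, and the episode is speculating about its mechanics, so this is speculation too. But the shape of the idea described above, an app declares the actions it exposes plus enough description that a model can match a spoken request to one of them, can be sketched conceptually. Everything here is invented: the intent names, the Obsidian-style handler, and the keyword matcher standing in for the on-device LLM.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AppIntent:
    """A declared action: what it does, plus hints for matching requests."""
    name: str
    description: str
    keywords: set
    handler: Callable[[str], str]

REGISTRY: list = []

def register(intent: AppIntent):
    REGISTRY.append(intent)

def route(request: str) -> str:
    """Stand-in for the on-device model: pick the intent whose keyword
    hints best overlap the request, then run its handler."""
    words = set(request.lower().split())
    best = max(REGISTRY, key=lambda i: len(words & i.keywords), default=None)
    if best is None or not (words & best.keywords):
        return "No app offered an intent for that."
    return best.handler(request)

# A hypothetical Obsidian-style intent: save something into a note folder.
register(AppIntent(
    name="SaveClip",
    description="Save a link or quote into a chosen Obsidian folder",
    keywords={"save", "clip", "note", "quote"},
    handler=lambda req: f"Saved to Obsidian: {req!r}",
))
register(AppIntent(
    name="Schedule",
    description="Put an event on the calendar",
    keywords={"schedule", "calendar", "appointment", "remind"},
    handler=lambda req: f"Added to calendar: {req!r}",
))

print(route("save this quote for my AI episode"))
print(route("schedule a dentist appointment for Tuesday"))
```

The real framework would replace the keyword overlap with an actual language model, but the registry idea is the point: every developer has an incentive to declare their actions, or the assistant simply can't drive their app.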
Something that would help me with this project is that, rather than having to manually go and play and pause the recording for each one of these things, I could just say, while I was watching it: ooh, can you save this 30 seconds of video for a scene, so that I can bring it up and maybe use it in a different editor, rather than having to use a live recorder to put all of this together? That's a little bit clunky, going back to clean it up later when I'm used to cleaning as I go. It adds work that's not necessary. This could be built into the interface, especially with the way all of this software is today. But there's also another thing about training LLMs for action, to be able to interact: their whole ecosystem is going to go out of its way — all of these developers, all these billions of apps are going to go out of their way — to make sure they have some sort of an app intent, some way for the onboard OS LLMs to interact with their applications, because otherwise the user is going to use a different app that has one. The crazy thing about using large language models for this, just as a general progression of technology, reminds me of a story.
When The Lord of the Rings came out — in particular the second movie, The Two Towers — they actually designed software for the battle scenes, with all of the different armies and all of the individual soldiers and orcs and all of this stuff.
And in doing so, they actually trained them to behave individually based on their situation. So it was actually a very rudimentary machine learning tool for how the army was going to behave in the battle. And what ended up happening — and this is something that happens in games a lot these days too — is that they design an environment and think they know what's possible, but when the player is playing the game, something entirely different occurs. Somebody creates or figures out a way to do something that the developers had no idea was even possible, and it blows them away: wow, that's so crazy, I had no idea you'd be able to do that. And in this same example of The Lord of the Rings — I'm a dork and I watch all of the behind-the-scenes stuff for movies that I love, and I did film, so learning how they do this was just fascinating to me — one of the things they show is sitting down with this software and running these simulations of these huge battles at Helm's Deep and all of this stuff. And when they did, the army behaved differently based on the situation, and the soldiers would literally interact with each other. So where the archers were on the wall would change based on the intensity of where the fight was, and you'd get completely different scenes and things going on in the battle. Sometimes they would literally just watch it, because it was interesting to see this play out, this dynamic between these tiny little AIs, essentially. But then something happened that was fascinating, and also a little bit creepy, with the orc army. One part of the army got separated off and stuck in a corner, and they were getting rained on by arrows. And they literally made the strategic decision — they turned and ran. They retreated from the battle. And this wasn't an action that was set up.
By being in a bad spot, not being able to move forward, and basically losing a lot of soldiers, this group independently decided to break off and retreat from the battle. And he said it was kind of crazy to watch, because we didn't tell them to do that. They made the decision, quote unquote. I use that story as an example because when you train these LLMs on very specific tasks and very specific interactions — like a perfect one: I can't tell you how many times my wife has sent me a podcast in Messages or in an app or in Keet or something like that.
And then I want to listen to it later and I have to go look for it.
And one of the examples they used was "play that podcast that my wife sent me the other day." They know how the user is interacting with things, and that's got to be something extremely common, or they wouldn't have used it as a main example. And because it has that context as to what I am doing — it knows that my wife has recently sent me a podcast episode — it can just jump to that: open it in my favorite podcast app, in Fountain, and start playing it. When you start to train an LLM on all of these little tasks, and you build out 5, 10, 50, a hundred standard tasks and actions that people usually take, and start building in interactions and intents —
The interesting thing here is that while it might seem limited at first, and you may ask it things in edge cases that it can't do, the broader the set of actions it is specifically trained on, the more likely it is going to be able to take a completely novel action, with a completely novel set of apps that it hasn't explicitly been trained on, and actually work. They're building, I think, from the right perspective. This is exactly what I brought up in the Cascader episode: I wish I had a chat interface and a simple LLM to work with FFmpeg, so that I could just drag a file into it and have it format the file based on whatever instructions or details I need. Forever after, any file — media, video, audio, or image — it would just change the format to whatever I need in the particular situation. And if you simply fine-tuned it, if you simply had the context and the pre-prompt of the manual pages for FFmpeg kind of built into that LLM, you could have an extremely lightweight LLM — 400 million parameters, a billion tops — that will run on a phone, that will literally run on an iPhone on the A17 without a doubt, and could convert any media without my ever having to think about finding the right piece of software or the right interfaces. Just tell it what you need and it will give it to you. And that's a use case that, to me as a media producer and a content creator, would be immensely valuable. I mean, just incredibly valuable. I would use it all the time. I use HandBrake constantly, which is essentially that same thing, except I kind of have to know the settings and the things that I'm going for. In the case of the LLM version, if you trained it on that action and that software specifically, it would just work way, way better. It would be far more intuitive.
I can easily see how it would be far more intuitive.
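The "chat with FFmpeg" idea reduces to: plain-English request in, ffmpeg command line out. Here's a rough sketch of that shape. In the real version a small fine-tuned LLM would do the parsing against the FFmpeg docs; the tiny format lookup below is only a stand-in for the model, and the codec defaults are just common choices, not the only correct ones.

```python
# Sketch of the request -> command pipeline. The parsing step (a dict
# lookup here) is a stand-in for an LLM fine-tuned on FFmpeg's manual.
FORMAT_FLAGS = {
    # target format -> ffmpeg flags (illustrative defaults)
    "mp3":  ["-vn", "-codec:a", "libmp3lame"],   # -vn drops the video stream
    "mp4":  ["-codec:v", "libx264", "-codec:a", "aac"],
    "webm": ["-codec:v", "libvpx-vp9", "-codec:a", "libopus"],
}

def build_ffmpeg_command(request: str, input_file: str) -> list:
    """Map a natural-language request to an ffmpeg argument list."""
    target = next(fmt for fmt in FORMAT_FLAGS if fmt in request.lower())
    stem = input_file.rsplit(".", 1)[0]
    return (["ffmpeg", "-i", input_file]
            + FORMAT_FLAGS[target]
            + [f"{stem}.{target}"])

cmd = build_ffmpeg_command("convert this to mp3 for the podcast", "episode.mov")
# cmd is now a ready-to-run argument list, e.g. for subprocess.run(cmd)
```

That's the whole appeal: the user never touches the flags; the model's only job is translating intent into a command the existing tool already understands.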
So anyway, let's get back into it. I'm going to play two sections that aren't actually right next to each other, but I think they're important for explaining where they've made a really remarkable achievement: how this is going to interact with all of the things on your devices, and specifically something that I have been trying to do, and have talked about a lot on this show, but have yet to find any comprehensive solution for. So let's go back to the video.

[00:45:56] Speaker B: …the calendar event you're looking at. This can be incredibly useful in so many moments throughout the day. Suppose one of my meetings is being rescheduled for late in the afternoon and I'm wondering if it's going to prevent me from getting to my daughter's play performance on time. Apple Intelligence can process the relevant personal data to assist me. It can understand who my daughter is, the play details she sent several days ago, the time and location for my meeting, and predicted traffic between my office and the theater. Let me tell you more about its architecture and how it's built with privacy at the core. The cornerstone of the personal intelligence system is on-device processing. We have integrated it deep into your iPhone, iPad, and Mac and throughout your apps, so it's aware of your personal data without collecting your personal data. This is only possible through our unique integration of hardware and software. It also includes an on-device semantic index that can organize and surface information from across your apps when you make a request. Apple Intelligence uses its semantic index to identify the relevant personal data and feeds it to the generative models so they have the personal context to best assist you.
[00:47:13] Speaker A: All right, so I'm going to pause right there for a second to hit a couple of things that he said. First is that this is going to be able to pull information from multiple different areas in order to see how it relates, and to know what is related to the task, or to what you're trying to figure out. Rather than you having to go and piece together all of that information, the LLM is literally going to shortcut that — it's going to do it for you. Or the stack of LLMs working in tandem, I assume, to make this work, because I think this necessitates a mixture-of-experts situation, where a lot of different LLMs that are geared toward very specific tasks or very specific types of information are pulled together to actually supply that answer. And one of the things he talks about in the architecture of this is exactly what I've talked about, and why I've talked about embedding and vectorizing data on the show. I think I just recently talked about it in the previous episode; I went a little bit more in depth into the nuance of trying to do that. But this basically accomplishes that, and then takes it one step further in the operating system. So there are two huge pieces here that I think are so valuable. One of the things I've talked about a lot on this show is semantic search: being able to search through my information, my apps, my notes, my files, based on what their content is, their context, whether they are memes, what they are used for, all of this stuff. And AI can do that, but it requires an enormous integration — difficulty in knowing how and what to embed, and in what way, so that it's semantically relevant to what you are doing. He says there is an on-device semantic database of everything: all of your media, all of your data. That is exactly what I have been trying to do on the Linux machine.
But here's the kicker: rather than me manually searching through and trying to hunt down information, rather than using it like a Google search on my computer, or a database that I have created and control, the LLM — when I am creating a task, or trying to accomplish a task, or I'm dealing with an email or a podcast episode, something that I'm doing — is automatically going out and doing a semantic search for any details, any notes, anything that's related to this: calendar appointments, emails, all of this stuff. It's pulling that into the context of the action that's happening and loading it into memory so that this operates seamlessly, rather than me having to do a search, create a database, all of these things. It's doing multiple steps at the same time: going straight from what I need, integrating everything I have that's related to it, and then producing the result or action, or giving me the information related to that task, to help me solve it.
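The mechanics of a semantic index can be sketched in a few lines: embed every item (message, note, calendar event) as a vector, embed the query the same way, and hand the nearest items to the model as context. Real systems use a learned embedding model; the bag-of-words vectors below are only a stand-in so the index-then-retrieve flow is runnable.

```python
# Minimal sketch of an on-device semantic index. The embed() function
# is a toy stand-in for a learned embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class SemanticIndex:
    def __init__(self):
        self.items = []  # (source app, text, vector)

    def add(self, source: str, text: str):
        self.items.append((source, text, embed(text)))

    def search(self, query: str, k: int = 2):
        """Return the k items most similar to the query."""
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[2]),
                        reverse=True)
        return [(src, txt) for src, txt, _ in ranked[:k]]

index = SemanticIndex()
index.add("Messages", "wife sent a podcast episode about bitcoin mining")
index.add("Calendar", "meeting moved to 4pm downtown office")
index.add("Mail", "receipt for your hardware wallet order")

hits = index.search("play the podcast my wife sent me", k=1)
```

The "one step further in the operating system" part is exactly the last line: the OS runs that search for you automatically and feeds the hits into the model's context, instead of you running the search yourself.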
I think that's the integration. That is the integration of AI into the operating system. All right, so let's watch this next little section, and then let's hear what they have to say about privacy.
[00:51:05] Speaker B: Many of these models run entirely on device. There are times, though, when you need models that are larger than what fits in your pocket. Today, servers can help with this, but traditionally servers can also store your data without you realizing it and use it in ways you did not intend.
And since server software is only accessible to its owners, even if a company says it's not misusing your data, you're unable to verify their claim, or whether it changes over time. In contrast, when you use an Apple device like your iPhone, you are in control of your data, where it is stored, and who can access it. And because the software image for your iPhone is accessible to independent experts, they can continuously verify that it provides privacy. We want to extend the privacy and security of your iPhone into the cloud to unlock even more intelligence for you. So we have created Private Cloud Compute. Private Cloud Compute allows Apple Intelligence to flex…

[00:52:05] Speaker A: Now, because of their hardware architecture, and because of the lightweight LLMs it appears they are using to make this work, they're able to run a lot of this locally — or at least that's what they claim.
And I think that's a really powerful and really awesome thing that they are targeting.
However — well, first, this episode is brought to you by the Coldcard hardware wallet. Don't trust; hold your own keys. They have verifiable source code and an awesome device built for security. They have been around longer than practically anyone else in building secure hardware for safely and conveniently storing your Bitcoin, so that it is accessible and secure from the security-hole-ridden disasters of our devices and the new levels of surveillance and invasiveness that so much of our software is taking on. Do not let your Bitcoin be a victim of the surveillance-everything internet that we have found ourselves in. Keep your keys off of your devices. Keep them in a Coldcard, locked down like a digital Fort Knox. I use my Bitcoin all of the time with my Coldcard, and the keys never touch my phone. I cannot stress how important this is, and you get a discount with my code, Bitcoin Audible. The link and details are right in the show notes. Anyway — the Private Cloud Compute thing. You can't verify the software and the images being run on the cloud machines. "But we have independent verifiers, so you can always know." Who selects the independent verifiers? They're trying to pretend they have the benefit of open source without actually having anything open source. If I'm trusting them to decide who the independent verifiers are who tell me that this is private, well, then I might as well just trust them to say it's private.
What does "independent" even mean if it's not my choice, or if I can't verify? This is something I just don't like about the corporate environment: it feels like pandering. This is my problem with OpenAI. They pander about open source all the time, and then their core product, their entire core platform, is as closed as anything you would find in any environment.
Now, I am not saying that anybody has to release this stuff for free, or that I'm entitled to free software. Just from trying to use other people's software — stuff I can't come close to building myself — I know the degree of difficulty this is. As someone who has hired two developers in order to build my own project, I know how much it costs and how hard it is. Conceptually, I don't even know how I'm going to make my money back. I really don't. I'm just trying to make something that I think is insanely valuable and useful, at least to me, and that I think a lot of people will get a lot of value out of, and I'll figure it out later. So I'm not saying they should provide it to me for free. But if they're going to pander and drop a lot of platitudes about privacy, and then none of it's verifiable and they just have "independent verifiers" — open source doesn't mean it has to be free for everybody else to use. They don't even have to open source the things that are their value propositions. They can still have their operating system, they can still have their ecosystem. But if I can't know how the images are operating — the ones running this thing that reaches out to giant cloud servers in order to do compute — if I can't know that it is actually, quote unquote, blind compute, that it doesn't know what actions are being taken on my device, then I'm just trusting them. I'm just trusting them, and whatever verifiers they decide to pick. How much do you trust Apple? Do you trust Apple? That's really the question.
Because the level of convenience and value of these tools is huge.
But the level of insight and the level of surveillance that these tools can enable is equal or even greater in size. It's crazy how easy it is to see both the utopia that these tools could provide and the dystopia that we could end up in. I guess, in one way or another, I have to at least say they are attempting to create privacy. They are attempting to have these LLMs run mostly or entirely on device. Because, you know, if I don't have internet, I want this thing to still work; I want most of my actions and tasks to still operate. But on the issue of trust, like I said earlier: I just don't think Apple would otherwise be able to sell their products in Russia, in China, in the US, in the EU. All of these governments are so aggressive in their overreach into surveillance, in violating the basic rights, the basic privacy, of everyone. If you actually look at Vault 7, what WikiLeaks has made public and available to people, and you don't just listen to the stupid parrots on the news who push the narrative of the political environment — if you actually look at the details — you don't have to have a warrant; there's no privacy; we don't have constitutional protections for that anymore. And I just don't think Apple would be allowed to sell otherwise. They would figure out how to cause them unbelievable amounts of cost, legal restrictions, or just an absolute ban from the country or from the banking system. It would be Operation Choke Point 2.0 for a software company. They just wouldn't be allowed a bank account. They could do the entire thing extra-legally. As if they don't have the big banks in their pockets — the big banks are literally gone if the Federal Reserve says, by the way, we're not going to lend to you today. The whole thing would just collapse. That's how much political power is in play.
And I'm supposed to believe that they don't have a key to all of this stuff, that Private Cloud Compute is private, and that I can know this because they have independent verifiers who say it's true.
How much do you trust Apple? So I want to close this out by hitting a couple of little short bits from the rest of the video, because these are the broad strokes of how valuable and useful I think these tools are going to be. But then I also want to be a little less depressing and talk about why this might be a really, really fantastic thing for open source and give us a direction. It will be great to have that focus on what exactly the goal is, where we are headed, and how we can achieve this with our own compute, with actually local compute — where, when my phone can't accomplish everything I want it to, it leans on my Linux machine rather than Apple's private-cloud-computing, "independently verified" computers. So let's jump back into it, hit a couple clips, and then we'll close this out.

Speaker B: …one of which is its extensive product knowledge. Siri now holds a great deal of information about features and settings, and can answer thousands of questions. When you want to know how to do something on your iPhone, iPad, or Mac, even if you don't know exactly what a feature is called, you can just describe it, and Siri will find the info you're looking for — like this: how can I write a message now and have it be delivered tomorrow?
Siri understood what feature I was referring to, and now I have step-by-step guidance on how to use the new Send Later feature in Messages.

Speaker A: This one — I just can't believe it took this long, or at least that I haven't seen anybody else do it yet: implementing an LLM into their software or their operating system that is literally just trained on how everything works and what all of the features are. A great example: I'm using Descript right now, which is a fantastic app for editing and recording and doing a lot of live stuff, to essentially clean up video. It's not perfect, and I still prefer DaVinci, a broader nonlinear editor, but this is great for simpler stuff. I also tried to use something called mmhmm Studio — I can't remember exactly how to spell it — and it looks like it has all of these great features, but it attempts to be intuitive and somehow, in doing so, becomes less intuitive. It's so frustrating. I spent probably 10 minutes trying to figure it out, then went to watch a tutorial, and I was just so annoyed. I was like, this is stupid; this should not be this difficult. And so I deleted it. But the extremely simple and seemingly obvious use case, to me at least, and something I've talked about from the very beginning of this show, is just having all the documentation, all of the support tickets, all of the FAQs of a piece of software behind a very thin, lightweight LLM: you just go up to Help, ask a general question about how to do something, and it figures out what that feature is and gives you the answer. The fact that I am still going to Google to ask how to do something on a Mac, or in this piece of software — this should be the dominant mode of support for every piece of software.
Now, I get that there's an enormous amount to build. But people are training these things on way bigger datasets and trying to integrate and make all of this crazy stuff, and a lot of it just feels like gimmicks. It's like, cool, okay, you can do this — but am I even going to use that? And I'm just like: support, support. Just help me know how to use your software. Because we're going to be in a world where software is going to 10x, where it's going to explode in the number of different things we have, and it's going to be incredibly useful to have a very small, lightweight LLM that just knows everything about the software. And that's exactly how we can get to the place where, rather than having a billion separate LLMs that aren't connected, you can get the operating-system LLM to talk to the local piece-of-software LLM, and maybe even do actions. That support LLM can maybe even have its own little closed API for the software, so if you want something done, it can lead you to exactly what those things are. And then a different LLM can talk to it and integrate with it, and from outside of the software you can maybe set up and start into a project, or start completing a task, by opening and building the framework for doing that — because you have different LLMs for exactly those different goals. But anyway, I'm shocked that this seems to be the first time I have heard someone say: by the way, our LLM just knows everything about the Mac. You can ask Siri how to do something, and it's going to have full context, semantic knowledge. It's going to know what feature you're asking about because it knows what that feature does. And if you just want to accomplish something, you can just ask, and it's going to tell you how to do it on a Mac.
Why did it take this long? I don't know. Maybe I'm just impatient. All right, so the last things to mention that I think are notable — and are kind of an extension of what we've already talked about, but just so you can get an idea of how capable the AI is, how much visibility it has into everything you're doing on these devices, and why this is such an enormous risk and benefit — is that Siri has on-screen awareness. It sees what I am looking at. The example she gives: I'm talking in the Messages app with someone, and they say, yeah, I really do want to go see Oppenheimer in the theater, we should do that. Then if I say to Siri, can you schedule on the calendar for us to go see that movie — without me saying what the movie is — it's going to be able to read what's happening on the screen. It's already taking in everything on the screen, so it can schedule that and know that the movie is Oppenheimer. Again, that's really cool and it's really useful. But if you have a Bitcoin wallet, and that app is showing you the seed, Siri knows your seed phrase. That device is able to read what is on your screen. And that's kind of already true; this is why a lot of well-designed wallets don't let you take screenshots of your seed words if you're generating them on device. And, not to push our sponsors again, but dear God, if you are holding a meaningful amount of Bitcoin on your phone, please get a hardware wallet. Get a hardware wallet and separate those keys, because nothing is safe. The operating system is going to be able to see everything you are doing and read all of the text on the screen. This is essentially a keylogger and a remote screen-watcher built into the operating system. And sure, it might be all local; sure, it might be all on my device. But I'm trusting that. I am trusting that, and I just don't.
So it's very much something to consider — maybe even the semantic database itself. Imagine somebody got your device and you had a Bitcoin wallet on it. Because Siri is making a semantic database and semantic embeddings of everything that's going on, and it can see into your Bitcoin wallet — it can see what you have on your screen — imagine somebody just asks Siri: what's the seed phrase for my Green wallet? And because it's seen it, it just recalls it. That's not great for security.
So, something to keep in mind. Another thing which will be massive, though, and so useful, is the photos and videos context.
Like me being able to ask: can you get me all of the pictures of Rad riding his scooter? I cannot tell you — I take so many videos, I take so many images, and trying to sort through them, trying to find the images I'm specifically looking for, is not easy, especially with how many things I save into my photos library at the same time. Contextual search of media is something I have so desperately wanted out of AI since the very notion of the idea popped into existence, and I want to get my hands on it and see how useful it is. This seems like a really, really good implementation of it. I already use the machine-learning tools they have now, but this seems like a massive step up from where we have been so far, at least on Apple devices. Another one that's really cool is the writing tools that are available operating-system-wide. An example they show is getting summaries of an email, and also summaries of notifications — so that rather than a tagline or alert making something look really important when it's actually spam that I can't identify because I don't have enough context or detail, I can just get a summary that says "20% off on this tool, blah blah blah," and I just know that it's spam. Or it's already going to prioritize for me. Again: built-in filters, and AI as a filter for what is important and what is not. Being able to do that to email — something that will be built into Mail now, automatically sorting through it — I cannot tell you how much time I have spent building custom filters that (a) will still miss a ton of stuff and (b) will have lots of false positives. And because of filters, I have actually missed emails from people, because some services that send me confirmation codes and receipts that are really important also send me information on new deals. So if I have a filter, I have to figure out:
How do I distinguish between those two? They're obvious when you look at them — okay, this is important, this is not — but I can't just use one keyword. I can't just say, oh, if it's from this company, then stick it into my "browse deals" folder or something.
Being able to do that with machine learning, with LLMs, I think will be a huge benefit, because I can't tell you how long I have spent trying to build custom filters and apply them and all this stuff. At the end of the day, even after building some that are successful, and constantly tweaking them to work properly with the flood of emails that I get, I still look at my inbox and feel like I have too many emails. I struggle — 50 emails a day or something ridiculous. This is why I have had so many email addresses: I get to the point where the email is so overwhelming that I just have to change over to a new one and say, okay, you're dead. You're dead to me. And if I ever need something, I'll go look for that very explicit thing. But too many people have this email address, and I can't operate with it anymore. I have to start my filters over, start over with a new email address, and try to manage it better this time. And I get incrementally better, but I still feel like my ProtonMail and my Bitcoin Audible address are getting to that point. So hopefully this helps solve that problem. Oh, another one that's really, really big: being able to ask Siri, what was the movie that my wife recommended to me? And it will go through all of the different messages, all of the different apps where I have conversations with my wife, and pull out and remind me of that movie. I cannot tell you how many times I have lost a conversation because it was a Twitter DM, it was on Keet, it was in Telegram — not a Telegram DM, actually a Telegram group; I thought it was a DM — it was in an email. I'm just inundated with so many different chat apps and so many different conversations from so many different avenues.
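The mail-triage pipeline being described has a simple shape: for each incoming email, produce a short summary and a priority label, then sort. Here's a rough sketch of that shape. The summarize and classify functions below are crude stand-ins — a real system would run a small on-device model over the full content and the user's history; the point here is the pipeline, not the classifier.

```python
# Sketch of LLM-assisted mail triage: summarize, classify, label.
# Both functions are stand-ins for on-device model calls.

def summarize(body: str, limit: int = 60) -> str:
    """Stand-in for an LLM summary: first sentence, truncated."""
    first = body.split(".")[0]
    return first[:limit].strip()

def classify(subject: str, body: str) -> str:
    """Stand-in classifier. A real system would use a model that
    understands context, not brittle keyword matches like these."""
    text = f"{subject} {body}".lower()
    if any(w in text for w in ("confirmation code", "receipt", "invoice")):
        return "important"
    if any(w in text for w in ("% off", "deal", "sale")):
        return "promotions"
    return "inbox"

def triage(emails):
    """Attach a summary and a priority label to each email."""
    return [
        {"summary": summarize(e["body"]),
         "label": classify(e["subject"], e["body"])}
        for e in emails
    ]

result = triage([
    {"subject": "Your order",
     "body": "Here is the receipt for your purchase. Thanks!"},
    {"subject": "This week only",
     "body": "Get 20% off on this tool. Act now."},
])
```

Notice that even this toy version hits exactly the failure mode described above: a receipt and a deal can come from the same sender, so sender-based rules fail, and only content-level classification separates them.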
The ability to have something that can see into all of them — I've lost shows that I had intended to do with people because I could not figure out where the message was. Sometimes I just give up trying to find it and think, how is this not in any of the default places? And then I just hope they refresh it — that in a week they're like, oh, I haven't heard back from Guy, let me ping him again — just so it pops up in my notifications and I can be like, oh, it was on Twitter. Oh, that's right, I went to Facebook for the one time in the year, and that's where they had messaged me. So that's a really useful thing. But again, there are dangers in having the AI see into every one of my apps and all of my conversations and all of that stuff. I want this. I want this so badly. This is the promise of personal AI. This is the episode that we just recently did — I think I brought up pretty much all of this stuff in that episode. But I don't believe that this is my AI. I don't believe that this is actually private.
Maybe some portion of it is, but I don't know. The risk is just very clear — clear and present — and it's hard for me to take them at their word. Now, on the gen-AI side, they talk a lot about the Image Playground and being able to create your own images, to draw something in Notes and then make an image out of it by describing it. Okay, that's a little gimmicky, like I said. But there is something I think I'm going to have to try out, at least, and that is Genmoji, because I really like emoji reactions and I can't always find exactly the emoji that I want. So I'm going to have to get my hands on this and see what kind of emojis I can make. I'm not a very censored person; I would like to have more inappropriate emojis, and I doubt I'll get them — Apple's going to protect everybody's sensitive emotions, and I'm probably not going to be able to make the fun emojis that I want to make. We will see. I'll test it out and see what kind of ridiculous stuff I can get out of it.
So the idea that I can make my own custom emoji, an infinite number of them, is actually kind of neat. That could be fun to play with. Then there are the AI image editing tools: being able to clean up photos, take people out of the background. That's another big thing I use a bunch of external tools for, and it will be really cool to have it inside of Photos. And then there's semantic search, going back to, you know, "all the pictures of Rad on a scooter," and the extension of that: being able to search inside of videos.
God, this is my dream of being able to manage my media: "find me all the pictures of Guy in a suit" from a whole collection of media, or all the pictures of Rad, or all the images that are predominantly yellow. Just being able to search for sections inside a video, because the thumbnail doesn't always get me there. The thumbnail isn't always what I'm looking for: the video might start with the camera pointed at the ground before I pick it up and start shooting, so from the thumbnail, which is just a picture of the ground, I don't even realize that this is the one I was looking for, the one of the entrance to Disney World or something. So having semantic search of images, and of different sections of video, is huge. That is a really, really powerful feature, something I have spent a lot of time thinking about and trying to figure out how to do on my own machine with my own stuff. For Apple to have that by default, just as an operating system feature?
That's hard to beat.
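For what it's worth, the core of semantic media search is not magic. Systems like this typically embed every photo (or video segment) and every text query into a shared vector space with a CLIP-style vision-language model, then rank media by cosine similarity to the query. Here's a minimal sketch of just the ranking step, with made-up three-dimensional vectors standing in for real model embeddings (the library names and the query are purely hypothetical):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, library):
    """Rank media items by similarity to the query embedding, best first."""
    return sorted(library, key=lambda name: cosine(query_vec, library[name]),
                  reverse=True)

# Hypothetical embeddings; a real system would get these from a
# vision-language model, one vector per photo or per video segment.
library = {
    "beach_clip_00s": [0.9, 0.1, 0.0],
    "disney_entrance_12s": [0.1, 0.9, 0.2],  # thumbnail is just the ground
    "dog_on_scooter": [0.0, 0.2, 0.9],
}
query = [0.1, 0.8, 0.1]  # stand-in embedding of a query like "park entrance"

print(search(query, library)[0])  # → disney_entrance_12s
```

The point is that the segment whose thumbnail is "just a picture of the ground" still ranks first, because the match happens in embedding space, not on the thumbnail.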
Now, lastly: it has integrated with ChatGPT.
Oh goody.
So there is now a single-click option: if it seems like the local LLMs are not up to the task, or there's too much going on for them to handle, you can send the request off to ChatGPT, which might be able to do it better. And it's free, you don't have to have a ChatGPT account, and there's "no logging," as they say.
I do not believe that. Whatever they're calling "logging," it's a thousand percent like "there's no shadow banning on Twitter," when they just called it attention filtering or something and were doing the exact same thing. I just do not believe it. If it is free, you are the product.
That is a rule that always holds. OpenAI is spending an extraordinary amount of capital keeping ChatGPT running. And if this is in the OS for free, and you are sending them your data or your images or whatever it is, they're making money. They have to be making money. It is not sustainable for that to be free in an ecosystem that has billions of devices. Not possible. I call bullshit.
This is where all of my previous concerns get bigger and more frustrating. I really wish there were just an API, so it's not "integrate with ChatGPT," it's "let me choose to integrate with ChatGPT." Let me integrate with my chosen AI, where I just pick which model I want, like when I use GPT4All or Ollama. I could still add ChatGPT specifically into it, or use my API key for a service.
Why can I not just pick an AI integration? Why are we recentralizing on ChatGPT and OpenAI? I cannot believe how fast OpenAI has developed a veritable monopoly, a vast global center of all AI communication and tools, especially with the speed at which open source and alternative tools have gotten really good. Llama 3 is fantastic, Mixtral is fantastic. And yet OpenAI and ChatGPT went from virtually nothing to being directly integrated into one of, if not the, biggest computing device ecosystems in the world. For "free," quote unquote.
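To show how small that integration surface could be: both Ollama's local REST API and OpenAI's chat completions API accept essentially the same chat-style JSON payload, so a user-selectable backend is mostly a lookup table. The sketch below builds (but never sends) the request for either backend; the endpoints are the real public ones, but the `BACKENDS` registry, model choices, and `build_request` helper are hypothetical, not anything the OS actually exposes:

```python
import json
import urllib.request

# Hypothetical registry: the user picks the backend, not the vendor.
BACKENDS = {
    # Locally hosted model served by Ollama; no data leaves the machine.
    "ollama": {
        "url": "http://localhost:11434/api/chat",
        "model": "llama3",
        "headers": {},
    },
    # Hosted service, used only if the user explicitly opts in.
    "openai": {
        "url": "https://api.openai.com/v1/chat/completions",
        "model": "gpt-4o",
        "headers": {"Authorization": "Bearer YOUR_API_KEY"},
    },
}

def build_request(backend_name, prompt):
    """Build (but do not send) an HTTP request for the chosen backend.

    Both APIs accept the same {model, messages} payload shape, which is
    what makes a swappable integration point so plausible.
    """
    backend = BACKENDS[backend_name]
    payload = {
        "model": backend["model"],
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        backend["url"],
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", **backend["headers"]},
        method="POST",
    )
```

Swapping ChatGPT for a self-hosted model would then be a settings change, not a platform decision.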
I don't know what they're doing to make money off of that, but they are using the ever-loving crap out of your data in some way. Either they are just retraining ChatGPT on it and saying, "it's okay, we're not connecting it to you," or who knows how many partners they have behind the scenes who can get value out of that kind of data and those kinds of conversations. Think of the amount of population surveillance and sentiment analysis that can be done with this.
They have to be selling that in some way, and a simple "without your data being logged" is not going to convince me it's not happening. I would have to be painfully naive to believe that was even slightly a sane way to describe what is happening with that data on the back end. It's a thousand percent being used, being logged, being organized, and maybe they'll just call it "file management" instead of "file logging." Okay, thanks for the gaslighting. That's what that is. There is zero chance that this is free and they aren't using my data.
So back to the good news.
I think this just shows what the integrations can be like. We've imagined it a lot, we've talked about it a lot, and this is what it actually is, this is it in practice. And I think it gives a focus, a goal, for the alternative and open source tools. The other good news is the degree to which they have tried to implement privacy, and have had to pander to privacy, in this. I think that is a very, very good sign: they know people are concerned with their privacy to a far greater degree than they have been in the past, and I believe that concern will continue to grow as the future shows us the consequences of losing our privacy. I will be staying on top of as much as I can about how to get these same tools and features in a self-sovereign way. I will be watching and looking for open source tools that can provide this on Linux or anything else, and for how to plug in alternatives to ChatGPT and all of this stuff. That's what this show is for. I still think this is just showing the incredible capability of these tools and where we could really be headed, and I refuse to believe that it is not possible without Apple, without OpenAI.
I think we can accomplish this in an open source, self hosted, and sovereign way. I will keep looking for those setups and that software, and we'll cover it here on AI Unchained.
All right guys, I hope you enjoyed that episode. I will have the link to WWDC if you want to look at the other stuff and you care about the upgrades to iPad and all of that. But that was the breakdown, the very lengthy breakdown, of Apple Intelligence. And I'm very, very curious about your thoughts. How much do you trust Apple? Do you believe their claims, and are you going to use this? Does this make you want to get on Apple devices more because of how valuable and useful it is? Or does it make you want to leave the Apple ecosystem because of just how much privacy and security we may have to give up? I'm curious. Let me know in the comments below, or shoot me a message if you're listening to the audio version of this, and I will catch you on the next episode of AI Unchained. I am Guy Swan, and until next time, everybody: take it easy, guys.
Don't trust, verify.