There are so many myths and misconceptions around AI models: what they can do, how they are trained, even the fundamentals of how the data behind them is collected and formatted to make sense. The idea that we can "train our own model" by just feeding it a ton of conversations or notes is unrealistic, but why exactly is that?
Today we get into a 2-part read that will be one of the most value-dense we have covered on the show yet. It breaks down the entire process of building an AI model from beginning to end: identifying the core value, then sourcing, qualifying, and prepping the data, and finally training, fine-tuning, adjusting, and testing the model built from it.
The Spirit of Satoshi project is an incredible open-source endeavor, and the team reveals tons of great details about the complexities and challenges of building an LLM, the incredible work they are doing to build novel tools for crowdsourcing the hardest part of the process, and of course, how Bitcoin and Lightning enable better tools to make it all possible.
Check out the original articles at Spirit of Satoshi (Link: http://tinyurl.com/4jsvmz3z) & Satoshi GPT (Link: http://tinyurl.com/msyr4m5t)
Host Links
Check out our awesome sponsors!
Ready for best-in-class self custody?
Trying to BUY BITCOIN?
Bitcoin Games!
Bitcoin Custodial Multisig
Education & HomeSchooling
"Ethereum’s #1 problem is not a problem of product-market-fit but one of engineering soundness. Ethereum architecture is based on a flawed and unscalable idea:...
"With his choice of words, Taaki had outed an elephant in the room. It was true, Nakamoto had enacted soft forks, but by late...
One thing to make abundantly clear: Nostr is a protocol. It’s a set of rules that servers and clients use to communicate (just like...