
Loading your own data into your personal AI

The Perfecting Equilibrium Vlog for April 24, 2024

Now that we have the h2oGPT Large Language Model running locally on a desktop computer, let’s load it with our own data. I’ve gone on and on about how Large Language Models — so-called AIs — are crippled by training on bad data. Garbage In; Garbage Out! So let’s build a local LLM and feed it good data for a specific purpose. Welcome to Virtual Grad Student! We’re going to set up a Large Language Model to run locally, feed it a clean set of data, then make it available to authors as a virtual writer’s assistant. For example, to pull together a few paragraphs of background on Roman aqueduct architecture. Oh, and sorry again about the background!
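If you want a feel for what "feeding it good data" amounts to under the hood, here is a minimal sketch of the retrieval step in Python. To be clear, this is not h2oGPT's actual code: the folder name, the embedding model, and the helper function are illustrative assumptions, and h2oGPT's own document-upload interface handles all of this for you.

# A minimal retrieval sketch, not h2oGPT internals. Assumes:
#   pip install sentence-transformers numpy
# and a hypothetical folder "my_corpus" of plain-text source files.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

# Index the "good data": one embedding vector per document.
paths = sorted(Path("my_corpus").glob("*.txt"))
texts = [p.read_text(encoding="utf-8") for p in paths]
vectors = model.encode(texts, normalize_embeddings=True)

def top_sources(question: str, k: int = 3) -> list[str]:
    """Return the names of the k documents most relevant to the question."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = vectors @ q  # cosine similarity (vectors are unit length)
    return [paths[i].name for i in np.argsort(scores)[::-1][:k]]

# The matching documents would then go into the local LLM's prompt, e.g.
# "Using only these sources, write a few paragraphs on Roman aqueducts."
print(top_sources("background on Roman aqueduct architecture"))

The sketch also shows why clean sources matter so much: whatever lands in that folder is exactly what the model will quote back at you.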

Links:

The h2oGPT open source Large Language Model

Common Corpus, the largest public domain dataset for training LLMs

h2oGPT macOS and Windows installers

Next on Perfecting Equilibrium

Friday April 26th - Foto.Feola.Friday

Sunday April 28th - Highly Granular, Loosely Coupled. Entropy is the strongest force in the universe. Without the Industrial Age benefits of economies of scale, the vast edifices that powered manufacturing economies like the United States will simply break down; the bonds holding them together will fade. States will act more and more independently, and the Federal Government will have less ability to respond.

Perfecting Equilibrium Podcast
For a brief, shining moment, Web1 democratized data. Then Web2 came along and made George Orwell look like an optimist. Now we are building Web3, Perfecting John Nash’s Information Equilibrium.