Recent large language models such as Llama 3 and GPT-4 are trained on gigantic amounts of text, and next-generation models will need roughly 10x more. Will that be possible? To try to answer that, here’s an estimate of all the text that exists in the world. Firstly, here’s the size of some recent LLM training sets, with […] | Educating Silicon
For decades I had a bet that worked in good times and bad: time you invest in word skills easily pay[…] | breckyunits.com
I thought we could build AI experts by hand. I bet everything I had to make that happen. I placed my […] | breckyunits.com
There are tools of thought you can see: pen and paper, mathematical notation, computer-aided design ap[…] | breckyunits.com
If you want to understand the mind, start with Marvin Minsky. There are many people who claim to be […] | breckyunits.com
Vector quantization algorithm minimizing the sum of squared deviations | en.wikipedia.org
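The linked entry describes vector quantization by minimizing the sum of squared deviations, i.e. the k-means objective. A minimal sketch of Lloyd's algorithm (the standard iteration for that objective), using a naive first-k initialization and toy 2-D data chosen here for illustration:

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: alternately assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster.
    Each step lowers (or keeps) the sum of squared deviations."""
    # naive init: first k points (real code would use k-means++ or restarts)
    centroids = points[:k].copy()
    for _ in range(iters):
        # squared distance from every point to every centroid, shape (n, k)
        d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# two well-separated 2-D blobs; one centroid settles on each blob's mean
pts = np.array([[0.0, 0.0], [0.1, 0.2], [-0.1, 0.1],
                [5.0, 5.0], [5.1, 4.8], [4.9, 5.2]])
cents, labels = kmeans(pts, k=2)
```

The assignment and update steps each decrease the objective, so the iteration always converges, though only to a local minimum that depends on initialization.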