
“Keeping the AI current without retraining” — this is key. It is a waste of energy and resources to retrain the whole model just to capture the dynamics of human development and our evolving understanding of the world. RAG makes sense; this is the first time I have learned about a systematic way to make AI models dynamic without retraining.
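The idea can be sketched in a few lines: instead of retraining, retrieve relevant documents at query time and prepend them to the prompt for a frozen model. This is a minimal illustration only — the toy documents and the word-overlap scoring (a stand-in for real embedding search) are my assumptions, not anything from the article.

```python
# Minimal RAG sketch: retrieve, then augment the prompt.
# The docs list and overlap scoring are toy assumptions for illustration.

def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query (stand-in for embedding search)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Augment the query with retrieved context so a frozen LLM sees fresh facts."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "RAG retrieves documents at query time to ground the model's answer.",
    "Retraining a large model consumes enormous compute and energy.",
    "Tokenizers split words into subword units.",
]
print(build_prompt("Why does RAG avoid retraining?", docs))
```

Updating the knowledge base then just means editing `docs` — no gradient update touches the model itself.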

I was wondering which human languages would perform better, and why. This could give us a glimpse into the origins of how each language was shaped by early humans, and why every word has the letters and pronunciations it does.

I think human languages are not arbitrary or random in letter shape, vocal sound, or the sequence of letters within a word. There may be a structure, yet to be discovered, that could make LLMs simpler and more lightweight.
