Valeriy’s Substack

The Embedding Model That Thought Like a Human, Not a GPU

Valeriy Manokhin's avatar
Valeriy Manokhin
Dec 16, 2025 · Paid

Long before GPUs, transformers, or billion-parameter models, researchers were already building word embeddings.

In 1996, researchers called one of them HAL (Hyperspace Analogue to Language). Its goal wasn’t scale or performance. It was a much stranger question:

Can a machine learn meaning the way humans do—just by being exposed to language?
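HAL’s answer was to learn meaning purely from exposure: slide a fixed window over a text stream and record weighted co-occurrence counts, with nearer neighbours weighted more heavily. A minimal sketch of that idea, assuming a window size and the standard distance-based weighting (weight = window − distance + 1); function names here are illustrative, not from the original paper’s code:

```python
from collections import defaultdict

def hal_matrix(tokens, window=5):
    """Build a HAL-style co-occurrence matrix from a token stream.

    For each word, count the words that precede it within `window`
    positions, weighting nearer neighbours more heavily
    (weight = window - distance + 1)."""
    counts = defaultdict(lambda: defaultdict(float))
    for i, word in enumerate(tokens):
        for d in range(1, window + 1):
            j = i - d
            if j < 0:
                break
            counts[word][tokens[j]] += window - d + 1
    return counts

def hal_vector(counts, word, vocab):
    """Represent a word by concatenating its matrix row (left
    contexts) and column (right contexts) over a fixed vocabulary."""
    row = [counts[word][w] for w in vocab]
    col = [counts[w][word] for w in vocab]
    return row + col

# Tiny usage example on a toy corpus:
tokens = "the cat sat on the mat".split()
counts = hal_matrix(tokens, window=2)
vocab = sorted(set(tokens))
vec = hal_vector(counts, "sat", vocab)
```

No gradient descent, no neural network: the "embedding" is just weighted counts of what a word appeared near, which is what made HAL a model of human-like learning rather than an optimization exercise.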

User's avatar
