nels.ai | Research Lab

Anthropic explains how language models reason

Dr. Nels Lindahl
Apr 23, 2025

BLOT: The Anthropic team just gave us the clearest glimpse yet into how Claude organizes its thinking, and it starts with tracing latent directions in embedding space.

How Language Models Reason

I’ve been watching Anthropic’s interpretability research evolve for a while, but this new post stands out. It’s worth reading. They’ve developed a method for trac…
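For readers newer to this area, "latent directions in embedding space" roughly means vectors inside a model's hidden states that line up with concepts. Below is a minimal, hypothetical NumPy sketch of one common way such a direction is estimated, as a difference of mean activations between contrastive prompt sets; it is purely illustrative and is not Anthropic's tracing method, and the data here is synthetic.

```python
# Illustrative sketch only: a toy "latent direction" found by contrasting
# mean activations from two prompt sets, then used to score new activations.
# This is not Anthropic's circuit-tracing method; it just illustrates the idea.
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical hidden-state dimensionality

# Pretend these are hidden states captured from a model on two contrastive
# prompt sets, e.g. prompts that do vs. don't involve a concept of interest.
concept_acts = rng.normal(loc=0.5, scale=1.0, size=(100, d))
baseline_acts = rng.normal(loc=0.0, scale=1.0, size=(100, d))

# A simple "latent direction": the difference of the two class means, normalized.
direction = concept_acts.mean(axis=0) - baseline_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

# Scoring a new activation is just its projection onto that direction.
new_act = rng.normal(loc=0.5, scale=1.0, size=d)
score = float(new_act @ direction)
print(f"projection onto concept direction: {score:.3f}")
```

A larger projection suggests the activation carries more of the contrasted concept; interpretability work like Anthropic's goes much further, tracing how such features interact across layers.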

