
What I don’t like about chains of thoughts (2023)

Decoder-only LLMs are Markov chains with sophisticated models of the state space. Anyone familiar with Hamiltonian Monte Carlo will know that for good results you need a warm-up period so that you're sampling from the typical set, which is the region where most of the probability mass concentrates (not necessarily the highest-density/maximum-likelihood region).
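To make the warm-up point concrete, here's a rough sketch (mine, just numpy, not anything from the post): a random-walk Metropolis chain on a high-dimensional Gaussian, started from a deliberately bad point. The early samples sit nowhere near the typical set and would normally be discarded as burn-in.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100                     # dimensionality of the target N(0, I)
x = np.full(d, 10.0)        # deliberately bad initialisation, far from the typical set

def log_prob(x):
    return -0.5 * x @ x     # log density of N(0, I), up to a constant

radii = []
for _ in range(5000):
    proposal = x + 0.2 * rng.standard_normal(d)
    if np.log(rng.uniform()) < log_prob(proposal) - log_prob(x):
        x = proposal
    radii.append(np.linalg.norm(x))  # distance from the mode at the origin

# Early samples reflect the bad starting point; later ones hover near sqrt(d) = 10,
# which is where the typical set of a 100-dimensional standard Gaussian lives.
print("first 5 radii:", np.round(radii[:5], 1))
print("last 5 radii :", np.round(radii[-5:], 1))
```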

I have spent a lot of time experimenting with Chain of Thought professionally and I have yet to see any evidence to suggest that what's happening with CoT is any more (or less) than this. If you let the model run a bit longer it enters a region close to the typical set and when it's ready to answer you have a high probability of getting a good answer.

There's absolutely no "reasoning" going on here, except that sometimes sampling from the typical set near the region of your answer is going to look very similar to how humans reason before coming up with an answer.

4 hours ago | crystal_revenge

I don't understand the analogy.

If I'm using an MCMC algorithm to sample a probability distribution, I need to wait for my Markov chain to converge to a stationary distribution before sampling, sure.

But in no way is 'a good answer' a stationary state in the LLM Markov chain. If I continue running next-token prediction, I'm not going to start looping.

3 hours ago | dhampi

I think you're confusing the sampling process and the convergence of those samples with the warm-up process (also called 'burn-in') in HMC. When doing HMC we typically don't start sampling right away (or, more precisely, we throw out those samples) because we may be initializing the sampler in a part of the distribution with pretty low probability density. After the chain has run a while it tends to end up sampling from the typical set which, especially in high-dimensional distributions, tends to more correctly represent the distribution we actually want to integrate over.
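If it helps, here's a tiny numerical sketch (mine, assuming nothing beyond numpy) of why the typical set, rather than the mode, is what matters in high dimensions: almost no mass sits near the highest-density point.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (1, 10, 100, 1000):
    # 10,000 draws from a d-dimensional standard Gaussian
    radii = np.linalg.norm(rng.standard_normal((10_000, d)), axis=1)
    print(f"d={d:>4}  mean radius={radii.mean():6.2f}  sqrt(d)={np.sqrt(d):6.2f}  "
          f"fraction within radius 1 of the mode={np.mean(radii < 1.0):.4f}")
```

The draws concentrate in a thin shell at radius roughly sqrt(d), so integrating over the typical set is what actually represents the distribution.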

So for language, when I say "Bob has three apples, Jane gives him four, and Judy takes two; how many apples does Bob have?", we're actually pretty far from the part of the linguistic manifold where the correct answer is likely to be. As the chain wanders this space it gets closer, until it finally statistically follows the path "this answer is...", and when it's sampling from this path it's in a much more likely neighborhood of the correct answer. That is, after wandering a bit, more and more of the possible paths are closer to where the actual answer lies than they would be if we had just forced the model to choose early.
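A toy way to see what the intermediate tokens buy you (my illustration, not the commenter's): each step of the trace updates a running state, and the final answer is emitted from that much better state rather than straight from the question.

```python
# Toy illustration: the chain-of-thought trace as a sequence of state updates.
apples = 3        # "Bob has three apples"
apples += 4       # "Jane gives him four"   -> running state: 7
apples -= 2       # "Judy takes two"        -> running state: 5
print("Bob has", apples, "apples")  # the answer is produced from the updated state
```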

edit: Michael Betancourt has a great introduction to HMC which covers warm-up and the typical set: https://arxiv.org/pdf/1701.02434 (he has a ton more content that dives much more deeply into the specifics)

3 hours ago | crystal_revenge

No, I still don’t understand the analogy.

All of this burn-in stuff is designed to get your Markov chain to forget where it started.

But I don’t want to get from “how many apples does Bob have?” to a state where Bob and the apples are forgotten. I want to remember that state, and I probably want to stay close to it — not far away in the “typical set” of all language.

Are you implicitly conditioning the probability distribution or otherwise somehow cutting the manifold down? Then the analogy would be plausible to me, but I don’t understand what conditioning we’re doing and how the LLM respects that.

Or are you claiming that we want to travel to the “closest” high probability region somehow? So we’re not really doing burn-in but something a little more delicate?

2 hours ago | dhampi

You need to think about 1) the latent state and 2) the fact that part of the model is post-trained to bias the Markov chain towards abiding by the query, in the sense of the reward.

A way to look at it is that you effectively have 2 model "heads" inside the LLM, one which generates, one which biases/steers.

The MCMC is initialised based on your prompt; the generator part samples from the language distribution it has learned, while the sharpening/filtering part biases towards stuff that would be likely to make this MCMC give high rewards in the end. So the model regurgitates all the context that is deemed possibly relevant based on traces from the training data (including "tool use", which then injects additional context), and all those tokens shift the latent state into something that is more and more typical of your query.

Importantly, attention acts as a selector and has multiple heads, and these specialize, so (simplified) one head can maintain focus on your query and "judge" the latent state, while the rest can follow that Markov chain until some subset of the generated and tool-injected tokens gives enough signal to the "answer now" gate that the model flips into "summarizing" mode, which then uses the latent state of all of those tokens to actually generate the answer.

So you very much can think of it as sampling repeatedly from an MCMC with a bias, a learned stopping rule, and then a model creating the best possible combination of the traces, except that all this machinery is encoded in the same model weights, which get to reuse features between one another, for all the benefits and drawbacks that yields.
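Purely as a mental model (my sketch, not how any real model is wired up; every function here is a hypothetical stand-in), that control flow looks roughly like:

```python
import random

# Hypothetical stand-ins for the pieces described above; nothing here is a real model.
TOKENS = ["apples", "3", "+", "4", "-", "2", "=", "5", "so", "then"]

def propose_token(state):
    # "generator head": sample the next token given the current state
    return random.choice(TOKENS)

def relevance(state, query):
    # "steering/judging head": crude score of how typical the trace is of the query
    return sum(tok in query for tok in state) / (len(state) + 1)

def answer_gate(state, query):
    # learned stopping rule: flip into answer mode once the trace looks ready
    return len(state) > 8 and relevance(state, query) > 0.3

def summarize(state):
    # "summarizing" mode: produce the answer from the accumulated trace
    return " ".join(state[-3:])

query = "Bob has 3 apples, Jane gives him 4, Judy takes 2"
state = []
while not answer_gate(state, query):
    state.append(propose_token(state))  # keep extending the trace until the gate fires
print("trace :", " ".join(state))
print("answer:", summarize(state))
```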

There was a paper from around when o1 became a thing that showed that instead of doing CoT, you could just spend that token budget on K parallel shorter queries (by injecting something like "ok, to summarize" and "actually" to force completion) and pick the best one / majority vote. Since then RLHF has made longer traces more in-distribution (although there's another paper that showed that, as of early 2025, you were trading away peak performance and edge-case coverage for reduced variance and higher performance on common cases, although this might be ameliorated by now), but that's about the way it broke down in 2024-2025.
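The parallel-samples baseline is simple enough to sketch (my sketch; `sample_answer` is a hypothetical stand-in for one short, forced-to-completion query to a model):

```python
from collections import Counter
import random

def sample_answer(prompt: str) -> str:
    # hypothetical stand-in: one short completion from the model, noisy but usually right
    return random.choice(["5", "5", "5", "7", "9"])

def majority_vote(prompt: str, k: int = 9) -> str:
    # spend the token budget on K short samples and take the most common answer
    votes = Counter(sample_answer(prompt) for _ in range(k))
    return votes.most_common(1)[0][0]

print(majority_vote("Bob has 3 apples, Jane gives him 4, Judy takes 2. Answer:"))
```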

27 minutes ago | igorkraw

In RNNs and Transformers we obtain the probability distribution of the target variable directly and sample using methods like top-k or temperature sampling.

I don't see the equivalence to MCMC. It's not like we have a complex probability function that we are trying to sample from using a chain.

It's just logistic regression at each step.

2 hours ago | pcwelder

Right, you're describing sampling a single token, which is equivalent to sampling one step of the Markov chain. When generating output you're repeating this process and updating your state sequentially, which is the definition of a Markov chain, since at each step the next state depends only on the current state (the embedding) and is conditionally independent of the past.

Every response from an LLM is essentially the sampling of a Markov chain.
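Spelled out as code (my sketch; `logits_fn` is a hypothetical stand-in for the transformer forward pass), the chain structure is just:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 50

def logits_fn(state: list[int]) -> np.ndarray:
    # hypothetical stand-in for the model: logits depend only on the current state
    rs = np.random.default_rng(hash(tuple(state)) % (2**32))
    return rs.standard_normal(vocab_size)

def sample_next(state, temperature=0.8, top_k=10):
    logits = logits_fn(state) / temperature
    top = np.argsort(logits)[-top_k:]            # keep the k highest-logit tokens
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

state = [1, 2, 3]                                # the prompt, as token ids
for _ in range(20):
    state = state + [sample_next(state)]         # Markov transition: new state from old
print(state)
```

Each step is "just" a categorical draw, but composing the draws with the growing state is exactly the chain being described.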

an hour ago | crystal_revenge

I wish it got called "scaffolding" instead

3 hours ago | 4b11b4

That sounds a lot like bad marketing. "Chain of thoughts" is better; it makes you think the thing is thinking!

3 hours ago | xwolfi

"Obviously no, this IMO proves that we humans can reason efficiently without an inner speech."

Well, no, it proves that Messi can reason efficiently without an inner speech.

4 hours ago | tintor

This is Dual Process Theory [0] otherwise known as Fast vs Slow thinking, or System 1 and System 2 thinking.

Humans are the only known organism to do System 2 (which doesn't mean we're the only ones that do it, just that we don't know if whales do it), but System 2 is what the author is talking about when they refer to Chains of Thought.

System 1 is what they're referring to when they talk about Messi reacting to an unusual situation on the field.

Related anecdote: I tested myself for ADHD by taking amphetamines. I normally think by intuitive leaps from point to point, without doing the intermediate steps consciously. I found that during this experience my System 2 thinking was fast enough to follow and I actually experienced proper chains of thought. Or I was whizzing my tits off and hallucinated the whole thing. Not sure yet. I should repeat the experiment.

[0] https://en.wikipedia.org/wiki/Dual_process_theory

5 hours ago | marcus_holmes

Don't know why you are getting the downvotes; this is the best comment here.

42 minutes ago | protocolture

> chains of thoughts

Pedantic maybe -- but does this need two plurals?

5 hours ago | albert_e

There are lots of typos like this in the article, which I find quite nice for some reason.

3 hours ago | 4b11b4

Interesting that he came to this conclusion (CoT should be done in latent space) well before the release of OpenAI's o1, which made explicit CoT reliable in the first place. At the time the blog post was written, CoT was only achieved via a "reason step by step" instruction, which was highly error prone compared to modern o1-like reasoning. (And before InstructGPT/ChatGPT, it was achieved by prompting the model with "let me reason step by step".)