Ask HN: Does Claude use 'prior' in a Bayesian sense more than English?
Just an observation: when asked to summarize articles or extract insights, Claude uses the word 'prior' far more often than typical English writing (e.g. journalistic prose). And it's clearly using it in a Bayesian sense, since it keeps saying things like 'updating priors' or 'the prior doesn't hold'.
Probably something I noticed after reading the 'goblin' and 'gremlin' article.
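One rough way to check an observation like this (a sketch only; the two snippets below are placeholders, not real Claude output or a real news corpus):

```python
import re

def rate_per_1000(text, word="prior"):
    """Occurrences of `word` (singular or plural) per 1000 words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    hits = sum(1 for t in tokens if t in (word, word + "s"))
    return 1000 * hits / len(tokens) if tokens else 0.0

# Placeholder snippets standing in for a Claude summary and a baseline paragraph.
claude_like = "Updating priors here: the prior doesn't hold once the data shifts."
journalistic = "The report was released on Tuesday and drew mixed reactions."

print(rate_per_1000(claude_like))   # nonzero for the Claude-style snippet
print(rate_per_1000(journalistic))  # zero for the baseline snippet
```

Run it over a decent sample of summaries versus, say, newswire text, and the gap (if real) should show up.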
Probably? Reinforcement learning creates bots with specific styles. For example, ChatGPT is very fond of "typically", "unpack this", and "if you want".
AI talk is turning into Silicon Valley pseudo-math slang: priors, exponentials, latent space. You get lines like "no priors" or "embracing exponentials" that sound smart but mostly signal status. Same move as Nassim Taleb and "convexity": a real idea turned into a generic intellectual flex.
Once again a post with literally 3 points, posted 2 hours ago, is at the top of /ask. Why is the HN ranking algorithm so bad? Can we talk about that?
Well it did have Claude both in the title and the description...
[flagged]
What does this have to do with the question OP is asking?
Nothing, but he got to plug the vibe-coded startup that he advertises in his about section.