question with the article's main thesis. If agents were trained on human-generated data and "subtracted" human conventions (markdown, file systems, git), is there really such a thing as "native" agent UX that's different from human UX? Or have we just been discovering which human conventions happen to align well with transformer attention patterns?
It's a fair question - I think the fact that they have abilities we don't (reading 200k tokens instantly, cloning themselves, ...) suggests they'll have their own quirks and differences.
What downstream implications that has in an AX sense is certainly arguable, but I'd put forward that we're already seeing it with effective harnesses such as Claude Code. The experience the agent has there is quite different from how you'd build an IDE for a human.