It's hilarious to me to see the same kind of engineers who, throughout my career, have constantly bitched and moaned about team meetings, agile ceremonies, issue trackers, backlogs, Slack, emails, design reviews, and anything else that disrupted the hours of coding "flow state" they claimed as their most essential and sacred activity, to be protected at all costs, suddenly, and with no hint of shame, start preaching about the vital importance of collaborative activities and the apparent inconsequence of code and coding, the moment a machine was able to do the latter faster than them. I mean, they're not even wrong, but the nakedly hypocritical attitude of people who, until a year ago, were the most antisocial and least collaborative members of any team they were on is still extraordinary.
Are you referring to the author specifically? Or a specific hypocritical person you know? If you're making a general statement about groups of online people you might be falling for the group attribution error[1], where the characteristics of an individual are assumed to be reflective of the whole group.
In any case, two things can be simultaneously true:
1. Writing code is not the bottleneck, as in we can develop features faster than they can be deployed.
2. It's annoying and disruptive to be interrupted when doing work that requires deep focus.
I think it's obvious that they're not referring to the author or a specific person at all. They're talking about how the zeitgeist has changed. Look at Hacker News archives 3 or more years ago and it would be really hard to find anyone arguing that coding speed is not a bottleneck or that engineers need to spend more time in collaboration. You would find a lot of arguments that leaving engineers alone to code is the best thing a business can do and constant lambasting of meetings, documents, approvals, and other collaborative activities.
I think there are small pieces of truth on both sides of the argument, but I find the sudden change to claiming that coding speed doesn't matter to feel half-baked. Coding speed is part of building a product. Speeding it up does provide benefit. There's a lot of denial about this, but I think the denial is rooted in emotion more than logic right now.
Needing focus to think is not the same as needing focus to write code.
It can take a whole day to find 10 good lines to write.
The collective smarts on the Internet can't come up with a better name that might help our brains tag the concept?
Although at least it has some cross-cultural references. Why is naming so hard?
Memes help things to become popular. I guess the question would be how popular would it have been without the meme name? We'll never know.
I learned about it here like a week ago, so I'm here to evangelize.
This was a perfect example.
[deleted]
That’s kind of just a strawman with an origin story, isn’t it?
No, because the goomba is the average of two real opinions, and the strawman is a distortion/reduction of any opinion such that it's easy to argue against.
More like a superposition of two opinions.
Ah, ok, so two real opinions made into a distortion. Right, I see how very dissimilar to an origin story of a strawman that is…
If I only had a brain...
On some level, yes, but having words to describe sub-sets can be useful too.
A "human" is just a "featherless biped", after all?
I think the Goomba Fallacy captures something helpful to me - it's helpful to know the origin of a straw man if you want to un-stuff one.
I know this is off topic but I cannot resist. It's very funny to read a comment about un-stuffing a straw man written by "scarecrowbob"
Gave me a good laugh, thank you. :)
I think the Goomba is distinct. Strawman is disingenuously representing an argument, Goomba is assuming contradictions are coming from the same person, presumably b/c it's coming to the Goomba through the same app.
> Goomba is assuming contradictions are coming from the same person, presumably b/c it's coming to the Goomba through the same app.
It's because it comes from the same political faction. In general, people are open about A when A seems palatable, and open about B when B seems palatable, but they almost never admit to doing that when it's obviously wrong to do so.
That is the rational part of the fallacy: even if these are different sets of people, you can still tell they are biased, since they never appear in the threads where it's obvious they are in the wrong.
For example, let's say in a thread where a white cop shoots a black guy you find a lot of Republicans saying "this is just statistics, nothing to see here". Then in another thread where a black cop shoots a white guy, Republicans pour in and argue this must be racism and we should investigate! Maybe it isn't the same set of people, but it's still a strong sign of problematic bias that they only choose to speak up in those particular threads and not the others.
Every political side everywhere does this, and that is why people started calling that out.
This is just the fallacy. Political groups are coalitions not single monoliths.
In general, hypocrisy is a pretty weak argument. It's an annoying personality trait, but consistency is a thing humans often fail at, and humans failing at holding consistent opinions is a failure of those humans, not the claims they're making. It's not quite as weak as the more non-sequitur kind of ad-hominem attack, because it does at least pertain to the argument being made, and kind of resembles a logical contradiction if you squint, but it seldom does a good job addressing the merits of the argument, rather than the arguer. It's a successful political tactic for the same reason ad hominem arguments in general are, of course, especially in the context of representative forms of government, where the person's character or competence is relevant when they're running for an office. Much less so in contexts where the merits of a position are being debated in abstract.
I think it's very silly to make the argument that "groupwise hypocrisy" is not a fallacy in such a conversation. In politics, the reality is that people have to form coalitions with people with whom they don't agree on everything, and non-political groupings are even more non-sensical, often holding people responsible for the opinions of other people who happen to share things like inborn characteristics. It's especially ridiculous to explain this with this idea that people are engaging in some kind of elaborate coordination to argue with you on the internet. Yes, some people, and indeed political parties, engage in that kind of behavior, and if you think you're arguing with something like a botnet, there are larger considerations to make about what you gain as an individual by trying to engage with such a machine at all. If I believe I'm arguing about the merits of an idea with an actual person, and I find myself reaching for something like "your group is collectively hypocritical on this issue" to make my argument, this is cause to reflect on whether I actually have any real arguments for my position, as that one is... well, essentially meaningless
I think you're trying to invoke what's commonly called a "motte-and-bailey" argument, where people argue for a maximally-defensible position when faced with serious criticism, but act as though they're proving a much less defensible version of their argument, often including a nebula of related ideas, in other contexts. This is something individuals and coordinated factions absolutely do, but again doesn't really support treating any grouping you want to draw of some kind of collective hypocrisy. Even assuming we care about hypocrisy, it seems like this kind of reasoning about nebulous groups that don't explicitly coordinate would allow making that argument about any position in any context, depending on how you draw the boundaries of the group that day. It's well-understood that you can go on the internet and find someone who believes just about any crazy thing you can think of, or find someone who makes the argument for any position poorly.
Are you a very smart writer or are you using smart tools, I'm not sure?
Wouldn't know if I'm a smart writer, but I see little value in writing with a model if that's what you're asking. Language models are good for searching, getting alright at structured outputs like code, and trash at meaningfully expressing my thoughts in prose. Frankly, it concerns me that people think vomiting their thoughts onto the internet could possibly benefit from computational assistance
Is it true that they don't appear in the threads where you feel it's obvious they're in the wrong? Or do they just get upvoted less in those threads so you don't see it when they do appear?
[deleted]
It's almost as if factions are made up of different people with different opinions in a loose alliance.
But nah, clearly they're all goombas.
This is exactly it. You see it on HN all the time. You will debate someone. Then, deep in the thread, a second person appears with a gotcha. When you point out that the gotcha doesn't fit in with the prior argument, they point out that it was a separate person. They knew damn well what they were doing with their little conniving deflection fuck-fuck game. They're acting for the same surrogate argument. The Goomba is real, and the people playing the game are just too cowardly to be two-faced themselves, so they act two-faced through a surrogate and deflect to the surrogate when it's pointed out.
You're making a strawman
If so, I've seen that strawman living and breathing, and saying the first thing before, and the other thing now, several times.
Sometimes there are two groups of people who have different opinions and don't interact, but given the extent to which they take up the same platform and don't seem to see each other, I'm not sure it is really a fallacy even then.
First, it becomes possible for people who have a double standard to hide behind this. One can try to track an individual's stance, but a lot of internet etiquette seems to be based on the idea of not looking up a person's history to see if they are being contradictory. (And while being hypocritical doesn't necessarily invalidate an argument, it can help to indicate when someone is arguing in bad faith and it is a waste of time, as someone will simply use different axioms to reach otherwise contradictory conclusions when they favor each.)
Second, I think there is the ability to call out a group as being hypocritical, even when there are two sub-groups. That one group supports A generally and another group supports B generally (assuming that A + B is hypocritical), but they stop supporting it when it would bring them into conflict, indicates a level of acceptance shown by the change in behavior. This is too hard to measure for each individual (maybe they are tired today, or distracted, or didn't even see it), but as a group, we can still measure the overall direction.
So if a website ends up being very vocally in support of two contradictory positions, I think there is still a valid argument to be made about contradicting opinions, and the goomba fallacy is itself a fallacy.
Edit: Removed example, might be too distracting to bring up an otherwise off topic issue as an example.
I believe in A, I don't take a strong position on B, I am in coalition with people who believe in B and don't take a strong position on A, we both believe in C, D, E, and F, which some other people believe in with differing weights. Browbeating me about position B (or, the most useless kind of Internet banter, complaining about me and my hypocritical position on A+B to your friends who oppose both in a likewise contradictory way, in some venue I've never heard of) is not about making people reevaluate positions, it's about negative factionalism. The only reason it might not fit the familiar categorization of "fallacy" is that you would never use it in rational debate, either in arguing with another person or in reasoning out your own position.
>I believe in A, I don't take a strong position on B
But if A and B are opposed, then there is a question of why a strong position on A can be allowed with a weak position on B, if the reason for the strong position on A would also indicate a strong position against B.
The underlying argument being implied (but rarely ever directly stated) is to question if your reason for the strong position on A is really the reason you state, or if that is just the reason that sounds good but not the real reason for your belief.
In effect, that you don't apply the stated reason to B despite it fitting is the counter argument to why it doesn't actually support A.
If there is an inconsistency in arguments being applied, any formal discussion falls apart and people effectively take up positions simply because they like them, contradictions irrelevant. This generally isn't a good outcome for public discourse.
[flagged]
[dead]
>Writing code is not the bottleneck, as in we can develop features faster than they can be deployed.
That's an organizational issue due to over-regulation, bureaucracy, too many stakeholders each with their own irrelevant opinion, etc.
Startups or FOSS projects without the above absolutely can't "develop features faster than they can be deployed", and usually have a huge backlog of bugs and features they'd like to have, but never got around to.
> 2. It's annoying and disruptive to be interrupted when doing work that requires deep focus.
Steering a LLM also requires deep focus. Unless you want to end up on accidentally quadratic or have a CVE named after your project.
How can it? You prompt it, then wait minutes+ for it to come back. It's the opposite of flow state.
I get much better results the more thought I put into crafting my prompt, including using LLMs to help create that prompt. There's definitely a declining rate of return on that time, but thinking about the problem and carefully describing the context can take fairly deep thought. I do think it's in shorter bursts than when doing all of the work, but I get that same feeling of 'bah, where was I?' if I get interrupted while creating the prompt for a more complex feature. On the other hand, I spend a lot less time in flow state while debugging - it's way easier to describe a bug to an LLM (often I can just paste in the exception or link to the error log).
You don't wait. You run multiple independent incremental features in parallel, while also running a code review, which will create the next set of tasks, while you or the LLM think up the feature to add after.
It isn't exactly flow - but when the prompt comes back it forces me to think. Flow is about getting into a state where I'm thinking, so this is surprisingly similar. The prompt is helpful because it gives me a place to focus: do the proposed changes make sense (this is much smaller than the entire code base), and, given this is done, do I know anything else that was missed?
So you move from a maker’s schedule to a manager’s schedule. Interrupting you does not have any meaningful consequence on your ability to work because at any given moment you are not really working and when your interruption is over the prompt is still just waiting there for you.
Flow comes after when you verify what it did...
What I do is, I'm always responding to output N while the AI is working on prompt N+1. So we are both always responding to each other's question/answer before last.
I'm using my ADHD hyper focus skills while flagging issues in what the LLM is doing.
Reading 10x more code than before puts me straight into the zone. (In a language that I find interesting: Elixir)
My own process is improving so much that I had only one bug last week, and it was fixed immediately after the error tracker caught it.
But yeah, I feel more tired sooner. So it's one to three hyper focus zones per day, just like before.
The difference is that I enter faster, and now I'm not afraid of leaving the task and resuming later, since I can just ask for a summary of what we did so far.
I'm using two different models from two different providers to cross check the work tho.
I'm very good with bad smells, I guess, after years of supervising less experienced developers since the early days of my long career.
Many people today just trust whatever shit comes out. Some even brag about it, even famous devs like Yegge.
And requiring review of the result is not a "flow state". Flow state means continuous and uninterrupted focus while actively performing; LLMs block and return with new code or questions for minutes on end. That's the opposite of flow - it's a "let's take a break now, see you in a few minutes" for every interaction.
This is a false dichotomy. Software development has always been about keeping people in agreement, from the customer to the coder, and all the people in between (the fewer the better).
Meetings that increase sync between customer and coder are few and precious.
In large organisations ceremonial meetings proliferate for the wrong reasons. People like to insert themselves in the process between customer and coder to appear relevant.
I personally am fond of meetings with customers, end-users, UX designers, and actual stakeholders.
I loathe meetings with corporate busybodies who consume bandwidth for corporate clout.
No, I don’t need another middle manager to interface themselves between me and my users.
Yes! So much of professional software development is about assisting the nominal job of management—planning and budgeting—rather than users or even business fundamentals.
Why am I awake at 1:00am, ruining my brain and body, trying to get this feature finished before the end of the week instead of three days later? Ah yes, so that we meet our quarterly OKR, and the next quarter's plan that the EM and PM negotiated without me or our customers isn't disrupted and doesn't need adjustment. That would invite reprimand from the director, and the extra work would be terrible for them, I understand.
I'm reminded of this recent thread in which Heroku left the devs in charge and suddenly features that the author had requested for years got implemented: https://news.ycombinator.com/item?id=47669749
Just on a personal note as someone who worked in the game dev industry for far too long and is still suffering for it... if
> Why am I awake at 1:00am, ruining my brain and body, trying to get this feature finished before the end of the week instead of three days later?
is actually you (or some other reader) please quit/find a new job ASAP even if it means a pay cut. You don't want to deal with back issues, heart issues, weight issues, digestion issues, blood sugar issues - none of that. Please respect your body and your limited time with us. A coworker of mine had a stroke at thirty - that is a life changing event with, honestly, no real paths to full recovery.
For that matter, here's a thread from a few days ago recommending the practice of scheduling status meetings for the purpose of pressuring the attendees to work on your project in addition to their other work: https://news.ycombinator.com/item?id=47906942
What hermit wouldn't love meetings that simultaneously insist that you do more while taking away time to do it, all to avoid adjusting a pollyanna quarterly plan and budget!
Well said.
This perfectly matches my experience working at many companies: in most of them meetings were useless, but in a few places meetings were very useful, depending on how the companies were organized and how attendance at meetings was selected.
I have seen projects that had to be abandoned without bringing in any money, despite being executed perfectly according to the specifications. The reason was that the specifications were wrong: the customers had not thought to describe some requirements, and the developers could not ask about those because of the lack of direct communication, while the middlemen had no idea about either thing - what the customers might require and what the developers might need to know.
Not a false dichotomy. I agree with OP, and I can say for certain that if you are one of the few developers who is "fond of meetings with customers", you are not the type of person OP is talking about, and you are more rare than you think.
I am a former Dev turned PO/PM and now CEO, and I can tell you many developers are not fond of those meetings you are fond of. People like myself don't insert ourselves where we don't belong; we simply join the meeting and have the vital conversation with the customers/stakeholders whose payments make payroll possible, while the developers refused to.
My team have always commented on and liked that I "shielded" them from the non-technical meetings and distilled customer needs into our kanban, without them having to go to the meetings.
While I agree this isn't the "best way" to do things, I simply have never seen a Dev Team work the way HN tries to make the role sound: "Dev/Eng and the customer is the only thing needed". Would love for this to be the case!
Also, for those who think I'm down-talking the abilities of my team: we made a company together when we left a huge company we worked for, as co-owners, and even now the same setup is used :)
> you are more rare than you think.
Truth. I'm that person and didn't appreciate how rare I was until I became an EM and learned that most of my team would actively avoid conversations with the customer. Even though I have no way to quantify it, I'm sure it's benefitted my career.
Are those people in contact with the customer able to make decisions regarding the roadmap or feature design? It’s a miserable position to be in front of unhappy customers while having no power to solve anything (which is why I tend to be polite with customer support).
> I simply have never seen a Dev Team work as the way HN tries to make the role sound "Dev/Eng and the customer is the only thing needed". Would love for this to be the case!
I think a lot of HN truly believes that Software Developer is the only important role at their company. Software goes straight from the developer's brain, through his fingertips into the computer, and then on to the online store (run by nobody) for customers to buy. Engineering managers, program managers, product managers, marketers, MBAs, tech writers, QA, lawyers, process people, various admins and liaisons... they all exist to play pointless political games, have distracting meetings, and obstruct the One True Role. Design docs, planning, schedules, e-mails, JIRA, reviews, syncs, exec updates... all are useless parts of a scheme to torture the developer. It should just be "developers developing, and then money comes in from somewhere." This is an exaggeration, but you see these themes all over the comment section.
> I think a lot of HN truly believes that Software Developer is the only important role at their company.
I doubt that. A lot of HN might have believed that some 10 years ago, perhaps, but most of those people have either matured or been driven away by the shift in the discourse.
I was one of the people who used to believe that, but the years of experience have taught me several important lessons that changed my mind. That change in attitude came both from my own failures and from having the rare privilege to work with people who were actually good at those other roles you listed.
> This is an exaggeration, but you see these themes all over the comment section.
And you'll keep seeing those comments, just like you'll keep seeing the comments about how developers are hypocritical divas. Those comments come from people's bad experiences.
Workplace political games are a thing. Unnecessary meetings and documents are a thing. Problematic, unprofessional developers are a thing.
100%. The majority of the posts here are based in a fantasy of how the world should work. They're also highlighting why most devs can't deal with customers effectively. Customers aren't showing up with a clear spec and handing it off while middle managers butt in and ruin the whole thing.
Though I agree most managers are BSing way too much, the reality is that most devs cannot navigate conversations like they think they can, and, like you said, nor do they want to. And that is exactly what the managers do.
I live by these words: “if the client could enunciate a perfect spec, they would code it themselves”.
Software development is about helping people get what they want out of a computer. Not what they can specify, that’s asking way too much.
There's an in-between point that I think is better than either, but it can be more difficult to find the right balance: Direct contact with internal stakeholders (with the manager still somewhat involved to still have a good overall view and help prioritize / push back / act as a general buffer), while shielded from customers. That's the place I've always preferred.
I don't know how rare it is. I have always found it harder to write software when I don't know the people who will use it or get to see what they feel about it. It's part of the feedback loop.
When I get good feedback it's like winning a prize and when it's bad it lets me see where we should be spending our time rather than were we perhaps thought we should.
[deleted]
The solution is to work at small companies/startups.
That comes with real tradeoffs, but I've never regretted that path.
> nakedly hypocritical
How is it hypocritical?
If in the old world, the very important process that used up a lot of time and benefited greatly from no distractions was the actual writing of code then interruptions for various ceremonies with limited value other than generating progress reports for some higher ups would feel like a waste of time.
That same person in the 'new' world where writing code is very fast but understanding the business and technical requirements that need to be accomplished is the difficult part would then prioritize those ceremonies more and be ok with distractions while their AI agents are writing the code for them.
It's not hypocritical to change your opinion when the facts of the situation have changed.
Well, it is hypocritical. Hypocrisy is an action or statement that is contrary to a stated value or principle. Just because your values or principles changed doesn’t suddenly make you no longer a hypocrite; it just admits that your former opinions are no longer tenable.
I’ve noticed this push to try to clothe hypocrisy in made up virtues like intellectual curiosity and mental plasticity a lot lately. All I can think is that it’s some kind of ego satisfaction play people make when their place in the world is threatened.
Old value: Producing high value software.
How to do it? Focus on writing code.
New value: Producing high value software.
How to do it? Focus on writing specs for code / identifying needs.
I expect there are a lot of hypocrites in the mix, scared for their job. But this isn't a fundamentally hypocritical position - agents are changing the game for how software gets produced and the things that were important as recently as a year ago might reasonably be said to be irrelevant now. Ironically, we might yet see a great software engineer who has never written a program in their entire life. The odds are slim but it is possible now.
Sorry, did people not identify needs when developing "high value software" before? That doesn't seem true to me at all. I took a "Needs Assessment" course in my class of '09 undergrad...
This is shifting the principle/value discussion up to a level where it's meaningless. Let's use a different example.
Old value: Returning value to shareholders.
How to do it? Treat your employees like family and don't be evil.
New value: Returning value to shareholders.
How to do it? Treat your employees like human resources and get away with what you can get away with.
Is this hypocritical? Most people would say yes, but in your framing it's not because we've backed up to the least specific articulation of an underlying principle. It's a species of the motte and bailey fallacy.
Agents may be changing the game for how software gets produced, but all it's really done is switch software developers from being managed to being managers. And software developers trying to square their historic value/principle that management tasks are useless, easy, and ceremonial (to borrow GP's word) tasks that should take a back seat to ~flow state coding~ with their new view that management is an integral, difficult, and requisite part of writing code reeks of hypocrisy.
I've noticed on hackernews in the past year, a certain type of comment. A deep suspicion to first call out a surface behavior, then psychoanalyze strangers with whatever the flavor of the month "deep observation" is.
You can't be a dick on this platform without fancy prose I guess.
Abduhl, the nature of the job has changed; before it was coding, now it is managing the AI coding. What was and remains valuable is delivering value. This principle has not changed.
If your job was only coding then you are the most replaceable of the bunch. Traditional software engineering is a broader domain that, as rightly pointed out, will require you to actually *sit and talk* with the worst communicators you'll meet in your life.
Looking at a slice of most folks' workday and calling it their whole job is in my opinion, incorrect.
> Just because your values or principles changed doesn’t suddenly make you no longer a hypocrite
Bottom line is the people described as hypocritical in the comment have no principles, but rather feign passion in anything they think other people consider valuable. When devs thought coding skill was valuable, that's what they claimed to be passionate about, when the game changed and communication became key, they suddenly changed their passion. Either the timing is a coincidence, or they are hypocrites.
I don't think switching one's passion on a dime is a valid escape hatch from hypocrisy.
You're trying to turn flexibility and the ability to adapt to new circumstances into a vice.
You're wrong. It's a virtue.
Ceremonies and tickets aren't especially effective for actual collaboration. They're primarily tools for making work legible and controllable to management.
There is a reason (well, many reasons) that, if I'm working on a creative project with somebody outside a company, we would never think of reaching for Scrum ceremonies or Jira.
It is more than perfectly consistent to complain about that while valuing collaboration.
I require Jira for all my work to protect myself from three things, definitely not productivity:
1. µManagement asking "What have you even been doing?" Now they have a dashboard, and I have a nice record.
2. Protect me from people who wouldn't tell me problems existed, but would tell their managers they were blocked by those problems. Now, the understanding is that if the Jira doesn't exist, then the problem doesn't exist.
3. I use the "On Hold" state of an issue for a clear signal, for them and their managers that I add as watchers, that there will be no progress until whatever requirement is met (question answered, etc). It dramatically decreases response times, and means I don't have to nag them. Goes back to #2, where I can point out that they are blocking themselves.
All these things come into existence because people are so bad at collaborating, but really good at pointing fingers.
This is a good way of using a ticket tracker: bottom-up, where engineers, testers, and other stakeholders are empowered to create and manage tickets as a means of communicating about their work and what they are blocked on. It's part of the writing culture mentioned in TFA.
In some places, the ticket tracker works top-down: only the manager creates tickets, and the manager makes measurements about tickets closed, velocity, and so on to assess the productivity of their team.
I think the great divide on JIRA and JIRA-likes often comes down to which culture people have been exposed to.
It's 100% denial/ego. I've been a contractor longer than I'd like and it's the exact same response I see when I join a new team. The team complains they have too much work and can't get anything done, so their manager pulls me in. Suddenly, they don't want to give anything up. I'm actually in the middle of this right now. The team "is swamped" yet somehow, they are able to argue that almost everything I can handle is best handled by them and they don't need help. Fine by me, I'll sit around and get paid. But it smells exactly the same. They don't want to admit that A - they are replaceable and their work isn't that unique and B - they are the bottleneck, not the process or workload.
> A - they are replaceable and their work isn't that unique and B - they are the bottleneck, not the process or workload.
The problem, rather, is this: good programmers often have quite good ideas about how these problems could be solved, but for "organizational politics" reasons they are not allowed to apply these solutions.
Thus:
Concerning (B): Because they are not allowed to apply their improvement ideas, they are the bottleneck. But being the bottleneck is not the root problem, but rather a consequence of not being allowed to improve things.
Concerning (A): It is indeed often the case that if you simply let someone else do the work, the code quality decreases a lot and in subtle ways. Good programmers are very sensitive (and sometimes vocal) about that - unlike managers.
Additionally on A: the people who will be stuck maintaining this contribution for years and years have a different view of the pain.
Pushing a 90% solution through is a ‘win’ for the coder who is leaving, and hurts everyone on a continuing basis. It’s bad accounting, and lets the consultant look good for making the team perform worse (and look bad later).
And, IME, if that 90% solution needs a 100% rewrite after 40-80% burn in bugs and error chasing? What once was a bit behind is now way behind with staffing issues. Sunk costs don’t create extra budget.
Do It Right The First Time doesn’t always apply, only mostly always. Some people are insecure and territorial, yeah, but some know what their job is.
No matter what you do, entrenched engineers will make SURE they will be the only ones maintaining everything until their retirement, because they will make life impossible for everybody else until they leave.
Entrenched engineers don’t want you to alleviate or, god forbid, share the pain. Pain is good for employment security. And if the ship goes down, they’ll make sure they’re the last one to get fired, because there’s nothing the entrenched engineer fears more than having to job hunt.
And I totally respect that, I get it, I really do. But it's really obvious when people are being territorial and any contractor will tell you this happens every time. I suspect that a lot of the times, I'm hired to "teach them a lesson" in that "Hey, velocity sucks and I'm hearing a lot of whining, so if you don't like doing it, this guy will" and people snap into shape.
Unless the team are seriously bad developers, much of the time it’s the manager’s fault. As a hired consultant, you often enjoy freedoms the team is lacking. Having been hired as a consultant myself, one of those is fewer meetings and not having to worry about office politics.
Well who are you? Why should they trust you to actually complete a task and not dump unfinished work on them when your contract is up?
The manager didn't do the work to figure out what a contractor should do before hiring one. Why would they expect that org to plan the exit if they didn't plan the entrance?
Behavior shouldn't be surprising, no?
I mean, they hire me for a reason, whatever that may be. I want to do a good job and carry out the task because I want to get hired again by them or whatever agency is pimping me out. I've seen a lot of shit and that's my value. Whether or not the team wants to help me succeed is their political thing. And that's not invisible to management either.
You're not looking at it very empathetically. You're disregarding the concerns I floated; you expect the team that feels underwater to now stop everything and reshuffle the work scheduling to fit in a wild card, all while you're calling them bad and replaceable.
I mean it really sounds like you're not on their side at all. It's their job to help you succeed, apparently. From what you've said already, you don't care about the project either. You're happy to waste time and money. It sounds like they're right not to trust you.
If he was hired to do a job, it's not on the team to "trust" him.
It's on them to incorporate him as a resource. I'm sorry, but speaking strictly from a productivity standpoint, we're not here to be empathetic; we're here to deliver value to the organization.
If I'm a manager of a team that's struggling and now also sabotaging additional resources because they haven't got the right warm and fuzzies, I'm going to be looking to have some difficult conversations. I'm also going to be very critical of anyone who floats a lack of "trust" as the blocker without some concrete evidence to justify it.
Whatever concerns they might have is not for the contractor to address. They are between them and their own management who deemed them unable to deliver sufficiently.
Well this is all highly hypothetical but my point is that there are valid reasons to not entrust a contractor, who is only around temporarily, with long standing features. Not because they are nefarious but because they, by definition, will not own the feature ultimately.
Resistance is also not the same as sabotage. My assumption is that everyone is acting in good faith from their own perspective. An immediate issue I see is that the contractor was brought in because folks were looking at the calendar and not the tasks. Now the team is being pushed to carve out tasks. If shovel-ready tasks are identified first, things almost certainly go smoother. You're not context-switching everyone. It's far less chaotic.
What you seem to not understand is that empathy is going to move the team forward and deliver. Jumping to bad faith immediately is likely not the fastest way to a solution. If someone is struggling, it's useful to understand why and address those problems. It's often not because they're bad.
That's not at all what I'm saying, and I don't know where you're getting that from. I'm not trying to stop anyone. I'm trying to be an extra paddle. I'm happy to do nothing if I try to help but get boxed out. I have no problem just riding in the boat and saying "hey, there are rocks up there" or "this seam looks leaky, maybe patch it". I'm not here to fight people or egos.
This sounds like my ideal job. How do you land such gigs?
This seems to confuse cause and effect.
True, most engineers hate meetings because, as you rightly point out, there are often too many "types" of meetings - team meetings, issue tracking, backlogs, design reviews, triage, etc. Out of 7-8 working hours, a senior engineer might be in meetings for 4-5. Then they bitch and moan that they are spending too much time in meetings and not enough time coding. A reason for that is that projects often have unclear or even changing requirements, along with tight deadlines.
Sure, today with AI, code can be produced faster than ever. But requirements being unclear or always evolving hasn't really changed. Today many non-engineers assume that what they have in mind is straightforward and can be created by AI. That is not true. Unclear requirements lead to unclear results. Garbage in, garbage out. Getting the right input is still the most important part of software. That has not changed. That is the collaboration piece of software.
And sure, within the software community there are folks who don't like to collaborate even on requirements; they are more than happy to follow someone's lead. They like their manager/architect to "shield" them and do these tasks for them. These silent-warrior-type engineers are going to be the most impacted by AI coding, because they have no visibility, and even if they are 5-rated coders, there is always going to be "But AI can produce code. What else can you do if you won't even collaborate?"
So it's not very cut and dried. Engineers come in all shapes and sizes.
The bitching was about meetings and ceremonies that ate up the little time left, spent asking for more features to be implemented or revisited before the current ones could even get completed.
No developer was ever unhappy to communicate. But when pointless communication occupies too many long hours, interrupting the useful progress of understanding what could and should be done (by coding, yes - experimenting, getting a grasp of the beast), then yes, they became unsympathetic.
> No developer was ever unhappy to communicate
I've worked with engineers all over the spectrum in terms of their styles, beliefs, and preferences... and some of them are frankly not very interested in getting out of their comfort zone (like heads down, writing code and being alone), and optimizing for the group rather than themselves.
So yes, they are in fact unhappy to communicate (in a general sense), because of how tedious and uncomfortable communication often is.
I'm not saying it's irrational or immoral, or not driven by the types of past poor experiences you mention, but in my experience it's often pretty obviously suboptimal and highly frustrating to work with.
> the same kind of engineer
Who?
There are millions of software engineers around the world. It's quite likely that they have a few different opinions and point of views!
But it is written there, and GP was quite specific:
>the same kind of engineer, who throughout my career have constantly bitched and moaned about team meetings, agile ceremonies, issue trackers, backlogs, slack, emails, design reviews, and anything else that disrupted the hours of coding "flow state" they claimed as their most essential and sacred activity
Seems pretty clear to me.
I doubt the GP has gone back through their career and checked that each person who thought there were too many meetings has now made the switch they're being accused of, though.
Why does that matter?
Because a claim was made about a group of people.
Exactly. Does it matter if one member of that class does not follow the trend?
What type of engineer, who until a year ago - because of AI, apparently - is suddenly no longer concerned about code? Personally I'm just as concerned about code, because AI has not changed the fact that it still takes a really long time to develop stable, secure software (i.e., if you're making software to do ambitious things). Nothing about modern AI tools eliminates the need to get in the zone; using AI to amplify one's engineering skills lets us solve the next problem faster - but in software there are unlimited problems.
Just because I hate those "team meetings, agile ceremonies, issue trackers, backlogs, slack, emails, design reviews, and anything else that disrupted the hours of coding 'flow state'" doesn't mean I don't understand how important they were and are. I moaned about them before, and will continue to - but they were always important. I have learned the hard way, more than once, what happens when you just sit at a keyboard and write code. One time I lost my job because the code I was writing was so far out from what the company needed; the next time I realized what was happening in time to leave first - only after I was gone did they realize that what I was doing really was important, and they made me a good offer to come back.
I posit that getting into a coding flow state isn't just about producing code. It morphs and develops the problem space in the engineer's head. It helps them realize where the spec is lacking. It familiarizes them with the capabilities of the codebase so they can speak confidently about what future changes will entail.
I understand your sentiment, and there’s absolutely some truth to it, but I don’t think the path forward is throwing more management resources (or layers) at the problem. And I don’t think the management technology industry is the answer either.
I think the solution will be small (1-5 person) teams where product and engineering sit next to each other and have clear authorization to launch directly to prod at their discretion. The gripes about performative work-tracking mechanisms and the realization that non-tech considerations are now the bottleneck are not mutually exclusive.
I don’t think that’s the issue. The problem is that with software you don’t know what a user might like until something is in production.
This is probably true of other fields too. But rolling back changes there is expensive (example construction).
But with software you get to put things out and iterate. This is not to say identifying what’s needed isn’t important, but you had roles where the product owner is getting feedback on the previous iteration while the devs are working on the current one.
With code assistants this loop collapses a LOT. Suddenly it can be a lot easier to define better what you need and in near real time also gauge how it would operate.
Both are true “leave me alone” and “you don’t know what to build”. Because the people identifying what to build aren’t the people doing the building.
I'm not in the group you described so I don't want to speak for them, but I can empathize because there are some things that are meaningfully different with AI:
1. Increased velocity makes rituals like daily standup and other comms relatively infrequent compared to how they used to be, so there are fewer touch points now. For example, a daily standup might have occurred several times while someone worked on one feature ticket, but now they can bang out multiple features a day plus some bug fixes, yet still only have the daily touch point.
2. AI-written code needs to be thought through and planned a lot more than human-written code, because the machine doesn't go through the same discovery/writing process that a human goes through. It looks superficially similar, but is subtly and importantly different.
3. Without solid planning and requirements definitions, it's a lot easier for AI to go off the rails and do something you don't ultimately want. That wasn't true for humans writing code because they have a lot of project context knowledge that helps a great deal. AI obviously doesn't have this.
4. With the intense speed of devs now thanks to AI, it's far easier to step on each other and end up with at best merge conflicts, at worst significant deviation in solutions, and often major refactors/overhauls that can make the codebase feel foreign and confusing to devs. Most people have had the experience of stepping away from a project and coming back after a refactor had been done, and realizing that they don't know where basic things even are anymore. It can be unsettling and add a lot of friction.
5. AI can be pretty good (and very fast) at producing documentation and plans, so the "cost" of planning before coding is a lot lower now. That changes the equation of "what is the most important thing to spend my time on to iterate quickly".
Just because concurrent design, QA, research etc push out the Gantt chart doesn't mean your meeting isn't useless.
In fact, deep pipelines don't even need bottlenecks to take time. Even so, any given meeting could still be a waste of time, depending on the meeting.
You’re describing a multitude of different people with a variety of viewpoints. It’s also smart to change your mind when the environment changes; code being easy to write is a decisive shift.
Code may be easy to write once you know what the code needs to do.
The people I know like this are people I consider to be "advanced juniors". They are held back by their inability to work with other parts of the business and understand customer needs. In order to be successful they need to be spoon fed requirements. What I've seen from the limited sample in my orbit is that they've actually doubled down on AI and are creating little private worlds of agents and further isolating themselves from the business, not talking about how great collaboration and such are.
Are my standards too high to expect juniors to also be able to work with other parts of the business and ask for additional requirements if needed?
No. "Plays well with others" is part of being a functional member of any engineering team that I'm on.
The activity that needed and still needs to be protected is problem solving:
- Understanding the problem at hand
- Putting all the pieces together so that they solve the right problem the right way
- Making sure that the solution facilitates future extension and doesn't lead to a ball of mud two months from now... Unless stakeholders want it quick and dirty, in which case making sure they understand the costs/risks
- Planning execution in a way that is incremental and testable so that we can build confidence that the system is doing what we expect of it
- If you are in a team, figuring out common dependencies so that those can be done first and unblock parallelism in execution.
Once all that is done and documented, writing the code was easy and fast.
What would sometimes happen is that some unexpected detail or dependency would be discovered as part of the writing of the code and then you are back at the beginning, figuring out how to make everything fit together.
I find that the main confusion comes from people not realizing that those are two different activities and instead calling it all "writing code".
That's just scarcity based economics doing its thing.
I don't think it's hilarious, I think it's rather sad to see people so easily trampled by the whims of an irrational market. Generally speaking, we benefit when people stick by their values, and yet we play this awful game where winning means abandoning our values in pursuit of "value" whatever that is.
/s Oh noooo, it's like they're turning into managers... Now that a machine can do their job better than them, they've become as unimportant as you always thought they were, always pretending to bang a keyboard when it used fewer brain cycles than highly important work like posturing in a meeting did. Anyone can bang a keyboard - even a machine can do that now - but it can surely never replace the important work of you having to attend 10 meetings. Let's replace all of them with machines, and we meeting lovers can run the company with the machine-produced work that we have no hope of understanding.
Meanwhile people who bitch and moan about “other engineers” all the time haven’t changed at all. How refreshing.
I've had seniors tell me my entire career that writing code was the easiest part of their jobs.
True
I've been selling and managing my own projects for over six years, so I don't count myself in the antisocial camp. But LLMs haven't changed the fact that I like to have at least five hours of deep work every day.
Just because you use LLMs doesn't mean you don't need the "flow". Reading code SUCKS, getting into the flow is harder than ever.
Unless you sign off on a Looks Good to Me PR and go loiter by the kombucha machine. Then you have other problems.
There's nothing hypocritical about preferring the part of the job that isn't the bottleneck and wishing one could spend more time on enjoyable things. Nor is code being called "inconsequential" simply because it can be done (more) easily.
We've had systems that induce boilerplate before, and we've had systems that try to cope with that boilerplate before.
Considering the process to be tedious is really not the same thing as being antisocial.
I do need to point out that not all meetings are equal, and the "hypocrisy" you are seeing may come from different groups of people.
I feel attacked. I still dislike most team meetings, agile ceremonies, etc. Slack and emails give me anxiety. A 30 min meeting will disrupt me for 90 minutes. But, yea, the code was never the bottleneck. Except maybe when I worked at a startup. All of the above are true.
Personally I find it hilarious that the same people at my company who can't be bothered to write down detailed requirements, and are constantly fighting any effort to do research or technical documentation or pay down tech debt, are now trying vibe coding and struggling to produce anything useful. Oh, you don't understand why you aren't getting the results you expected? Maybe you should try thinking deeper about what you expect before you rush your engineers or, now, your agents.
Um… how do you get those requirements if Slack/email give you anxiety? And meetings are disruptive?
I am genuinely curious. I understand where you are coming from, you want to maintain flow state.
How does one effectively load the funnel to support flow state?
Jira tickets? Requirements documents in some kind of ALM tool?
Isn't solving problems, instead of blindly implementing a high-level description of the solution, your job as a developer?
There are problems you can solve, and problems that you cannot. Depending on the exact details GP may have been slacking for not solving problems, or correct in saying he can't do good work because he shouldn't be solving the problems alone.
Two things can be true at the same time. Yes, those meetings are horrible, and plenty of times they're useless and can be summarised as "why wasn't this an email/slack message", but also plenty of those same meetings can equally be extremely important.
In fairness, given the context those meetings give, it stands to reason that giving that same context to an AI, it can, in theory, still do the same thing as an engineer. But those meetings still need to be had.
Yea, when you have multiple people doing anything, communication has to happen. It's not optional. As soon as your company hires developer #2, you need to communicate. As the team sizes get larger, 1:1 in-person conversations become less important and you need E-mail. As the team sizes get even larger and non-developer stakeholders become more numerous, meetings creep in. These things are not developer-torture devices. They are happening because your company decided that the product needs to be built by more than one person.
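The pressure described here has a simple shape: with n people there are n(n-1)/2 possible 1:1 conversations, so informal chat stops scaling well before the team feels "big". A quick illustration (the sample team sizes are arbitrary):

```python
def channels(n: int) -> int:
    """Number of distinct 1:1 communication channels among n people."""
    return n * (n - 1) // 2

# Growth is quadratic: doubling the team roughly quadruples the
# coordination surface, which is why email, then meetings, creep in.
print([channels(n) for n in (2, 5, 10, 20)])  # [1, 10, 45, 190]
```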
If y'all can find that company where the product is entirely developed soup-to-nuts by a single lone-wolf developer, without any other stakeholders or involved parties, by all means join that company! And tell HN about it--many of us would join it, too. But in the real world, development is a messy people-soup and you have to communicate.
Or the person just likes to lock into flow states at the point of maximum leverage. Previously that was coding. Now it’s commanding agents.
Is it hypocrisy or learning? A more charitable take - it wasn't too many years ago that I also decried the need for all the collaboration. But as I advanced in my career, that worldview just didn't hold up. In this case, maybe the introduction of agentic coding has accelerated that learning because now 'regular' engineers are forced to take on coordination roles.
[With that said, the specific implementations of such collaboration are often still very painful and counterproductive...]
They sound like very important people no matter what the circumstances are, haha.
Having "house rules" on a team that new members must agree to follow tends to flush such people out and they usually exit on their own when their shenanigans get repeatedly called out as violative. Gotta introduce the rules in the interview process and get agreement after they join. Catching them out early is the key.
We had an intervention on one hard case and he rage quit the next day. I don't know why people do that, it's a small world and people talk.
There are two camps: you let people earn a living, or you let investors/executives become richer every year to the detriment of workers. I don’t care about the medium; I’m not with the big fishes.
I've got nothing to add to the discussion but want to take a moment to appreciate your ability to construct long sentences. It flows beautifully.
The amount of cognitive dissonance I'm seeing on HN right now is concerning.
I'm seeing both these beliefs right now:
• Belief A: "I am a skilled professional whose value lies in my unique ability to solve complex problems."
• Belief B: "An LLM can now solve many of these problems in seconds for pennies."
This thread is great at showing how people are rationalizing by moving the goal posts, so to say
"both these beliefs" and you label one a Belief and the other a Fact.
That's an opinion for sure, and a very shallow, general opinion. Some people like solving problems, sometimes via code, while others tend to hide behind the 'Collaboration' banner to help their own career progression. Both are legitimate tracks. To dismiss one is to make the other appear 'non-good'. But perhaps data can be furnished as part of this post to support either as 'better'.
Did you ever try no meeting days and other methods to avoid interrupting thought workers?
Because even if someone is writing design documents you shouldn't be interrupting that process regularly either.
It's certainly the case that the collaborative ceremony can be mismanaged, and that is frustrating when you need time to implement. I don't expect that complaint to go away, those who are using AI heavily will replace it with not having enough time for prompting.
But I have also worked with some who refused to participate in collaboration, they felt their time and ideas superior to others, and there's no excuse for that.
Before, meetings (aka. coordination) bottlenecked the coding.
Even if coding was solved, meetings could still be the bottleneck.
You think spending more time on meetings is going to solve anything?
I have seen this play out too IRL and I am really enjoying the schadenfreude.
My sense is that it's the opposite. The people who complain about meetings, managers, and methodologies also complain about agentic coding. The people who are excited about frameworks, methodologies, and project management tooling are excited about agentic coding.
Sometimes code is the bottleneck, other times it's not. Large company, not a bottleneck, fixing bugs or individual app developer, more so.
They are still anti-social. But they see the “social” as a way to feed the AI better, to make better code.
The focus is still the code.
I'm not going to comment on the likely "Goomba fallacy" at work in your comment, but I just want to note:
I've always been someone who disliked distractions from my "coding 'flow state' they claimed as their most essential and sacred activity to be protected at all costs" (because, you know, I was getting paid to write code and that's the only way I could actually get it done), but I also loved genuine collaborative activities (as in a small number of people, interacting with each other in a high bandwidth way, to figure something out or get on the same page).
A lot of the activities you explicitly mention are usually literal garbage for actual collaboration.
> (because, you know, I was getting paid to write code and that's the only way I could actually get it done)
I'm going to assume you were getting paid to build software that solved problems and created value for your customers and stakeholders. Writing code has always been just one activity that's part of the job, and developers forget that and make statements like this! That's the parent poster's point. I'm not saying it's not an extremely important part of the job, or that people don't often collaborate poorly in ways that take away from the sacred deep work time, but framing it as "I get paid to do X and not Y" is just a highly limiting way to look at or talk about the role.
>> (because, you know, I was getting paid to write code and that's the only way I could actually get it done)
> I'm going to assume you were getting paid to build software that solved problems and created value for your customers and stakeholders.
That's a distinction without a difference. At least historically, I was "paid to build software that solved problems" and I was to do that by writing code. If I didn't write code, and enough of it, I'd be fired. Getting my flow state disrupted for no good reason was something I'd resist.
Also agile ceremonies are a drag, literally becoming the thing agile was originally supposed to be fighting against (not that agile is gospel, I've always disagreed with some of its practices). They're not a good reason. And I also mentioned an actual good reason. I should also note those meetings I was referring to positively were almost always with users, not tech people.
> Writing code has always been just one activity that's part of the job, and developers forget that and make statements like this! That's the parent poster's point.
I wasn't addressing the parent poster's point per se (and I noted that and why), just noting that a lot of the "collaborative" activities he cited were often not that collaborative, and the shade he was throwing at people who were unenthusiastic about participating in them was probably unwarranted and misguided.
tl;dr: OP needs to have more empathy. There are better ways to thread the needle of his observations than what was on display in his comment.
It's an astute observation but overstated. There are just as many programmers who view their activity as too sacred to consider using an LLM, even for relatively easy, predictable, or disposable work.
Generally, groups of people aren't homogenous.
The contradictions you see could mostly be variations across individuals rather than hypocrisy within individuals.
(Doubly so for vaguely defined groups, like "kind of engineer".)
I think there’s some kernel of validity in this comment, but the unnecessarily aggressive tone loses it. This just comes off as bitter.
That's a straw man at the root of which sits a conflation of at least two types of meetings:
(A)
Meetings where we discuss naming two users in an integration test `u1` and `u2` vs. `user1` and `user2`, and whether whoever did the former is so hopeless that they should drop all computer work and go work on a farm - an hour spent on meaningless and meandering style preferences.
(B)
Higher-level meetings where I can communicate with PMs and customers and CEOs almost on their level, e.g. "Does it make sense for us to have primary/co-borrower roles in our credit products, or are all sides equally liable?"
---
With the advent of really good LLMs, meetings of Type A are nearly gone and meetings of Type B have increased meaningfully. I am very happy with that new state of affairs. Are you not?
Textbook example of goomba fallacy.
Comments like these are why I still come to HN. Absolute kino.
They were right back then because these tools didn't exist yet, and they're right today because they do now.
What even is your point? Are you... mad because the truthiness of a statement can change over time?
> I mean, they're not even wrong, but the nakedly hypocritical attitude of people who, until a year ago, were the most antisocial and least collaborative members of any team they were on is still extraordinary.
I don't think there is any hypocrisy. The error in the analysis is assuming both conflicting opinions are held by the same person. They aren't.
Collaborative activities and process being important is NOT mutually exclusive with many meetings being useless, agile ceremonies being time-wasting, and design reviews being used as a place to pontificate about crap.
NONE of the activities you mentioned are activities that lead to what the article talks about - a well-designed spec.
Just look at what they write. There is a correlation between the Agentic Multitasker and the type of person who wanted results and didn’t care about the coding in itself. That’s what they themselves keep writing.
They are not the same people.
> It's hilarious ... their most essential and sacred activity ... suddenly, and with no hint of shame ... the nakedly hypocritical attitude ... still extraordinary
Calm down the hyperventilating for two seconds, look around, and you’ll immediately see examples of the same group of people who now biTch aNd mOaN about how agentic coding is killing what they love about programming.
It’s interesting to see people either gloat or get incensed at the nerds who like computers in the context of these developments.
But the flow state wasn't just about typing code. The flow state was about understanding the problem, about loading it into your head so that you could "walk around in it" mentally, so that you could figure out that what really needed to happen was that module X needed to add a getter to value foo, that module Y needed to get foo and make a change based on the value, and that the key to making this all work was to add a way for Y to access X that fit within the existing architecture. That took focus, far more than implementing the pieces did.
Yes this exactly, it's getting ridiculous at this point.
It's precisely because I get swamped with all the non-coding work that agentic coding works so well. And in multiple ways.
- it lets you get back into flow faster (unless you were already in the habit of writing out your inner thinking monologue and reasoning to get yourself back up to speed after a meeting).
- it lets you move faster and take on more on your own, meaning fewer people needed on the team, and less communication/syncing/non-coding overhead.
If you're objective about it, AI coding is going to be amazing for individual productivity. It's probably going to fuck us (developers) over with the reduced demand, lower bargaining power, etc. But just on technical merits it's a great productivity tool.
The models are still not better than me at coding and handholding is required, but the speedups are undeniable, and we're long past the threshold of usefulness. So far all the contrarian takes are either shallow/reflexive pushback because people don't like the consequences, or people working in niche stuff where LLMs are not that great yet. But that has been shrinking with almost every release - in my experience.
I know everyone here writes cutting edge algorithms that were never encountered in the training data, their code is hyper optimized realtime bare metal logic that's used in life or death scenarios and LLMs are useless to them - but most of the stuff I do day to day is solve problems that have been solved before, in a slightly different context. LLMs are pretty good at that.
I hate meetings when they're mismanaged, which is often. I like a good meeting. Probably what most SWEs would say.
I think probably both things can be true. That all of those things can be actively harmful when they distract the most productive coders from coding, and become more useful when time at a keyboard isn't really the constraint for producing code any more and coordination becomes a more serious problem.
The archetype of the "jerk engineer" is over, because it turns out coding isn't all that valuable anymore. We now need "engineers" who understand much more than coding.
THIS COMMENT IS GOLD.
Another example I can point to is software security. For context, I've built and sold two edtech companies that taught enterprise developers about software security. It didn't matter how good the training content was. Our product replaced boring appsec video training with interactive labs, vulnerable code snippets to hack and fix, gamification, leaderboards, whatever it took so they couldn't complain about having to watch boring videos. The completion rates still sucked, because they just didn't care regardless of how hard we tried.
Now, post AI, my LinkedIn is full of blogs and think pieces about how important "software threat modelling" and "cybersecurity" are, and how "coding was never the hard part." Suddenly, threat modelling, something only a tiny fraction of companies actually practice, is being framed as the real challenge, and having a deep understanding of OWASP, secure design, vulnerable dependencies, and secure architecture is the real bottleneck. lol
What? Coding was escape from what that hypothetical engineer of yours disliked the most. Now there is less of it and ai hypers keep yapping about the job being no longer needed. Meanwhile it's just the fun part that was optimised out. Working hours stay the same, so it's more of the unfun activities. The job is worse, but we're told it's "solved". Bitching more makes sense, no?
I would say that, in general, the number of people who pivot like that is low.
Similarly, the open source people who previously maintained a hardline programming-meritocracy stance and have now pivoted to marketing AI are almost exclusively those whose companies are working on AI products. The good ones in that space are decidedly fewer than 1% of all the good ones.
Welcome to humanity, my friend.
Also, expect harsh and rude reactions when pointing to big issues that are crystal clear in the middle of the village. Not all truths are warmly welcomed, especially when looking elsewhere feels more comfortable in the immediate experience.
Take care and don’t worry too much: the journey’s short, so remember to also enjoy the good parts.
no, these meetings are still hot garbage.
half the time you’re going to discover the right decision / path while you’re coding.
focus time went from hammering code to figuring out how to solve the problem. PRs are now how we exchange ideas. meetings are still productivity theater.
Looks like this comment is touching a nerve. This community is progressing from "AI can't write code", to "Well, AI can write code but it's not really about the code". I wonder where the goalposts will be moved next?
The portion of the community that unironically thinks AI is good enough now is mostly managers and non/semi-technical people, plus engineers who don't engage with critical or complex problems. HN has always had too much of the velocity-alignment-synergy class of professional talkers; it's just so much more obvious now that they feel emboldened in false confidence.
No hackers on Hacker News? Oh my oh my
I've been paying attention to that line of division and came to the same conclusion. We're seeing the "tiers" diverge as AI becomes more capable. As the saying goes, you don't know what you don't know.
Ah yes, the good old No True Scotsman fallacy, where we smugly proclaim that anyone who thinks AI is good now must be working on unimportant and easy things.
Ah yes, the (please copy & paste this message into an LLM to get it to generate a novel response in these parentheses)
There's some of that, but more often it's developers whose arguments are a year behind the frontier models or, just as common, they're dramatically overstating their abilities.
It's an inherent tension that every discipline has to wrestle with. The most experienced developers are in the best position to evaluate where LLMs are, but those who are the loudest about their own abilities generally aren't in this camp. Humility tends to come with experience, and arrogance tends to come with inexperience.
[deleted]
Conversely there's a massive amount of money being thrown around biased in favor of inflating what LLMs can do compared to humans.
Both your error and the OP's error is in imagining that the same people are saying both things. The "community" fallacy, which has been around for about 10 years now, that pretends that people with something in common (e.g. "uses HN") are somehow a community that thinks identically is completely wrong.
Actually, it's some of the same people. I won't name names, but there are a lot of AI skeptics on this site who loudly and prominently comment on every AI story. And if you look at their posting histories you'll see the exact type of goalpost-shifting the parent commenter is talking about.
You see it elsewhere as well. There's now a cottage industry (with visible members like Ed Zitron) who have made a career out of creating and selling anti-AI content. At first they were complaining that AI lies constantly. As AI got better, they shifted to other talking points.
This community hasn't agreed on either of those things, just like it never agreed on good coding practices.
My opinion since college (8y ago) was that the best engineers are the ones who treat everything as halfway a people problem, even in low level code.
LLMs have been getting a lot better at coding.
If the "goalposts" represent what people generally think LLMs are capable of, they should be moving, right?
And complex, multi-part, long term efforts like building software and software companies always have numerous obstacles. When one is cleared, you wouldn't expect there to be no more, would you?
Your tone is complaining, but I just see people working in reality.
Is it even a problem that so called goal posts are moved?
That's life.
Life changes and us along with it.
"Who Moved My Cheese?"
Exactly goalposts need to keep moving
I think veteran engineers have always known that the real problems with velocity have always been more organizational than technical. The inability for the business to define a focused, productive roadmap has always been the problem in software engineering. Constantly jumping to the next shiny thing that yields almost no ROI, while never allowing systemic tech debt to be addressed, has crippled many companies I have worked at in the long term.
> The inability for the business to define a focused, productive roadmap has always been the problem in software engineering.
Agreed, and I also agree that most developers come to this realization with time and experience. When you have a clear understanding of business rationale, scope, inputs, and desired outputs, the data models, system design and the code fall out almost naturally. Or at least are much more obvious.
For veteran engineers that might be true. But for a junior engineer pre-AI, velocity has always been technical. I know junior engineers who, after a whole year of writing C++, still don't grok std::unique_ptr; and such a person consistently has the least velocity on their whole team. When I used to write performance reviews for junior engineers, their performance really was dominated by their velocity, roughly measured in lines of bug-free code written within a time period. A good junior engineer would be given a clearly defined feature and write good code quickly, whereas a weaker one would be given the same thing and either write code slowly or write buggy code quickly that required lots of debugging and rewriting.
> [O]rganizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.
— Melvin E. Conway, 1967
- systemic tech debt is now addressable at scale with LLMs. Future models will be good enough to sustain this; if people don't believe that, I would challenge them to explain why. First consider whether you understand what scaling laws (like Chinchilla) are and how RL with verification works fundamentally
- I completely agree with you about fundamentally the limitation being the business able to coherently articulate itself and its strategy
- BUT the benefit now is you can basically prototype for free. Before we had to be extremely careful with engineer headcount investment. Now we can try many more things under the same time constraints.
The problem with tech debt is not that it is some poorly designed code in a few repositories that can just be changed. True tech debt is the kind that requires significant architectural changes across many systems and is almost always coupled with major data migrations. You need the rest of the business to agree that you want to invest all that time and energy to fix a problem someone else created 10 years ago. You likely will also need other teams to set aside time on their own road map to address it. You also might need customers to change what they are doing because if software lets you do something, you can guarantee that someone has learned to do it - even if that 'something' was actually a bug.
LLMs don't solve any of those problems by themselves.
> BUT the benefit now is you can basically prototype for free.
But.. so can your competitors. And that changes the value proposition.
How do you mean?
> systemic tech debt is now addressable at scale with LLMs.
Is there any reason to believe this? I've only seen the evidence of the contrary so far.
My experience with AI coding aides is that they, generally:
1. Don't have an opinion.
2. Are trained on code written using practices that increase technical debt.
3. Lack greater perspective, being more focused on the concrete, superficial, and immediate.
I think I need to elaborate on the first point and explain how it's relevant to the question. I'll start with an example. We have an AI reviewer, and we recently migrated a bunch of the company's repositories from Bitbucket to GitLab. This also prompted a bunch of CI changes. Some Python projects I'm involved with, but don't have much authority over, switched to complicated builds that involve pyproject.toml (often including dynamic generation of this cursed file) as well as integration with a bunch of novel (but poor quality) Python infrastructure tools used for building Python distributable artifacts.
In the projects where I do have authority, I removed most of the third-party integration. None of them use pyproject.toml or setup.cfg or any similar configuration for a third-party build tool. The project code contains bespoke code to build the artifacts.
These two approaches are clearly at odds. A living, breathing person would believe either one approach or the other to be right. The AI reviewer had no problem with this situation. It made some pedantic comments about style and some fantasy-impossible-error-cases, but completely ignored the fact that, moving forward, these two approaches are bound to collide. While it appears to have an opinion about the style of quotation marks, it completely doesn't care about strategic decisions.
My guess as to why this is the case is that such situations are genuinely rarely addressed in code review. Most productive PRs, from which an AI could learn, are designed around small well-defined features in the pre-agreed upon context. The context is never discussed in PRs because it's impractical (it would usually require too much of a change, so the developers don't even bring up the issue).
And this is where real large glacier-style deposits of tech debt live. It's the issues developers are afraid of mentioning because of the understanding that they will never be given authority and resources to deal with.
You are not wrong about anything you’re saying but like I said this misses the forest for the trees. I’m talking about like the next ~2 years. There is a common idea that we don’t understand this technology or what will happen performance wise. We know a lot more about what’s going to happen than people think. It’s because none of this is new. We’ve known about neural nets since the 40s, we know how RL works on a fundamental level and it has been an active and beautiful field of research for at least 30-40 years, we know what happens when you combine RL with verifiable rewards and throw a lot of compute at it.
One big misconception is that these models are trained to mimic humans and are limited by the quality of the human training data, and this is not true and also basically almost entirely the reason why you have so much bullishness and premature adoption of agentic coding tools.
Coding agents use human traces as a starting point. You technically don't have to do this at all, but that's an academic point; you can't avoid it practically (today). The early training stages with human traces (and also verified synthetic traces from your last model) get you to a point where RL is stable and efficient, and RL pushes you the rest of the way. It's synthetic data that really powers this, via rejection sampling: you generate a bunch of traces, figure out which ones pass verification, and keep those as training examples.
So, because:
- we know how this works on a fundamental level, and have for some time
- human training data is a bootstrap, not a fundamental limitation
- you are absolutely right about your observations, yet look at where we are today versus, say, Claude Sonnet 3.x; it's an entire world away in about a year
- we have imperfect benchmarks, all with various weaknesses, yet all of them telling the same compelling story; plus you have adoption numbers and walled-garden data as the proof in the pudding
the onus is on people who say "this is plateauing" or "this has some fundamental limitation that we will not get past fairly quickly".
> look at say Claude sonnet 3.x. It’s an entire world away in like a year
In the area I work in, I find them to be of very little value, both then and now... I see no real difference. They help with marginal tasks, e.g. catching typos, or helping new programmers explore an existing codebase faster.
So far, I haven't used a single line of code generated by AI, even though I've seen thousands. Some of them worked to draw attention to a problem, but none solved it successfully. It was all pretty lame.
I see no reason to believe it's going to get better. Waving hands more forcefully isn't helping, there's no argument behind the promise of "it will get better". No reason to believe it will...
But, more importantly, the AI is applied on a level where really important things don't happen. It's automating boilerplate work. It doesn't make decisions about the important parts. Like, in the example above, the AI is not capable of choosing a better strategy: use pyproject.toml or write code to build Python packages? It's not the kind of decision it's called to make and nobody sensible would trust it to make such a decision because there isn't a clear right or wrong answer, only the future will prove one or the other to be the right call.
Any competent engineer should understand that engineering is just the assembly line side of product development. Deciding when to release which feature, bug fixes, etc. and the development/management of the product in general has always been the real challenge, and a lot of the strategy involved in doing this relies on feedback loops that AI cannot speed up. Though at the same time I do feel like leaders on the business side often scapegoat engineer's speed as an excuse instead of taking responsibility for poor decisions on their end.
I get what you're trying to say, but this is actually a bad picture to defend. Product and engineering should go hand in hand, with each side informing the other. Engineers who actually give a shit about a product will tell product folks about possibilities they haven't even considered; product people who care about engineering will not propose utterly stupid things. And I for one can spot when a product is well designed but poorly made, as well as when a product is perfectly crafted yet useless. The sweet spot is both. And even with the speed multiplier of AI, taking pride in the craft and being actually good at it as an engineer makes a night and day difference to the final result.
[deleted]
yes, most places I have worked were hobbled by the organizations being completely idiotic.
which is why engineers want to be left alone to code, historically. Better to be left alone than dealing with insane bureaucracy. But even better than that is working with good bureaucracy. Just, once you know it's insane, there's not really anything that you can personally do about it, so you check out and try to hold onto a semblance of sanity in the realm you have control over, which is the code.
> there's not really anything that you can personally do about it
Small companies/startups don't have insane bureaucracy, and they're hiring.
> I think veteran engineers have always known that the real problems with velocity have always been more organizational than technical.
I don't think this comment is fair or grounded. There are plenty of process bottlenecks that are created by developers. Unfortunately I have a hefty share of war stories where a tech lead's inability to draft a coherent and clear design resulted in project delays and systems riddled with accidental complexity required to patch the solution enough to work.
Developers are a part of the process and they are participants of both the good parts and the bad parts. If business requirements are not clear, it's the developer's job to work with product owners to arrive at said clarity.
> Unfortunately I have a hefty share of war stories where a tech lead's inability to draft a coherent and clear design resulted in project delays and systems riddled with accidental complexity required to patch the solution enough to work
This is also an organizational problem (bad hiring/personal management). If you put an incompetent individual at the helm of a project, then resources (especially time) will be spent horrendously and you will have more problems down the line. That’s true for all type of organizations and projects.
It’s part of the problem but AI also can crush this on pure lines of code and functionality alone. It can put out 100,000 lines of somewhat decent code in a day. That usually takes months or years of manual coding for a team.
More lines of code doesn’t help adding more constraints to a system without violating the existing ones.
In fact, it makes it harder.
It’s not just verbose code. I’m talking about 100,000 lines of relatively decent feature code that isn’t bloated.
There is a reason that kLOC / FP were rightly shunned as metrics years ago.
The same clown show seems to be resurging with "tokens".
There is, in my opinion, no real formula or metric that you can define for "good" code or "bad" code.
Tickets and ceremonial activities, however, abstract that into an N-ary status value that seems easier to judge by.
And now they're almost forcing us to produce machine-made tech-debt at an industrial scale. The AI craze isn't going to produce the boon some people think it will. And the solution? More AI, unfortunately.
> And the solution? More AI, unfortunately.
I think the solution to using AI in coding is more testing, which unlocks even more AI.
The solution truly is more AI, yes.
> AI craze isn't going to produce the boon some people think it will.
What’s the boon you don’t think it will produce?
No. It's not more AI. The solution is designing and sticking to development process that is more resilient to errors than the one that's currently happening. This isn't a novel idea. Code reviews weren't always part of the process, neither was VCS, nor bug tracker etc.
The way AI is set up today, it's trying to replicate the (hopefully) good existing practices. Possibly faster. The real change comes from inventing better practices (something AI isn't capable of, at least not the kind of AI that's being sold to the programmers today).
What better practices do you mean? Are you saying we just need different more agentic-friendly practices that ensure scaled reliability beyond what we can manually check? If so I totally agree.
AI is 100% capable, fundamentally, of making new processes. Look, I mean, it's not like I think opus 4.7 is all you need, but how can you argue with the fact that adoption since 4.5 has been an inflection point? That's kind of proof that reliability has reached a level where serious usage is possible, over a period of months. When you zoom out further, you see this was extremely predictable even a few years ago, despite the absolute hissy fits thrown on HN when CEOs began saying it.
Agentic coding is verifiable, and this implies there are very few practical limits to what it can do. Combine that with insanely active research on tackling the remaining issues (hallucinations, which are not a fundamentally unsolvable problem at a practical level; context rot; continual learning; etc.)
> What better practices do you mean?
I literally listed examples above... Code reviews weren't the norm until some time around 2010-ish. Then programmers realized that reviews help improve the code quality, and, eventually, this became so popular that today virtually everyone does it.
Anyways, I'll give an example from something that I've personally experienced / contributed to, which isn't as massive of a thing as code reviews, but is in the same general category.
Long ago, Git didn't have the --force-with-lease option. Few people used the `git rebase` command because of that: the only way to publish a rebased branch was `git push --force`, which could destroy someone else's work. At the company I worked at at the time, we extended Git with what was later implemented as --force-with-lease. Our motivation was the need for linear history and some other, stricter requirements on the repository history (such as: every commit must compile, retroactive modifications in response to tests added later, etc.)
This is an example of how a process that until then was either prone to accidental loss of a programmer's work or resulted in poorly organized history was improved by inventing a new capability. It is also an example of something AI doesn't do, because at its core it's a program that tries to replicate the best existing tools and practices. It won't imagine a new Git feature, because it has no idea what that feature could possibly be; its authors don't know either.
> opus 4.7 is all you need, but how can you argue with the fact that adoption since 4.5 has been an inflection point?
What did it invent?
Right no I understand what you mean, I asked to be sure and you’ve confirmed my understanding.
I think we’re talking past each other because your comment is like 99% interesting and insightful and also I agree with it completely but there is only one part of your claim that I have an issue with which is
> It won't imagine a new Git feature because it has no idea what it could possibly be because its authors don't know that either.
I left comments in other threads with a lot of detail, but this is a fairly common misconception. It is true in a practical sense today, and I have had many experiences like yours, but the gist is: this is a world of RL with verifiable rewards. You are not bounded by human ability at all, and that is why we have the adoption, funding, and frothy excitement. It is not simply mimicking human coding. In the early stages it will, because human programming traces are used as a kind of bootstrap to get to an RL phase, without any limitation on performance. This is a very well studied field, and it just isn't much of a question of if anymore; it's not even really a question of when.
> What did it invent?
This is a perpetual question with constantly moving goalposts, so I've given up convincing anyone, but by now it's solving unsolved Erdős problems; not sure how convincing you find that (not opus though, but that hardly matters now)
The point I’m trying to make is: we aren’t there yet but it’s a crazy idea to think that isn’t imminent given all of the measurement and observations we have.
Additionally, my point on 4.5 being a turning point is about adoption. You wouldn't see those adoption numbers if we were not accelerating rapidly from, say, 3.x performance along the scaling trend we've known about for years now.
Code is a liability.
I think it can be easy to look at code as an asset, but fundamentally it is a liability. Some of the "bottlenecks" to new code are in place to make sure that the yield outweighs the increased liability. Agents that produce more code faster are producing more liability faster. Much of the excitement and much of the skepticism about coding agents is about whether the immediate productivity boost (new features) and even immediate yield (new products or new revenue) outweighs the increased long-term liabilities. I'd say we won't find out for another 1-3 years, and of course the answer will differ in different domains.
From this perspective, attempting to build these bottlenecks into the agentic workflow directly makes some sense. Supplying coding agents with additional context that values a coherent project vision and that pushes back against new features or unconstrained processes would be valuable.
Is this what the article is trying to get at? Is this attempting to make some agents essentially take on product management responsibilities, synthesizing as much as possible into a cohesive product vision and reminding the coding agents of that vision as strictly as possible? Should these agents review new proposals and new pull requests for "adherence to the full picture", whether you want to call this "context" or "vision" or something else?
I think these agents might do an exceptionally good job at synthesizing context and presenting a cohesive roadmap that appears, linguistically, to adhere to the team values and vision. But I'm doubtful that they can have the discernment that a quality manager or team can have. Rapidly and convincingly greenlighting a particular roadmap could do more harm than good.
> Code is a liability
You're over-simplifying. Code in and of itself is neither an asset nor a liability. The minimal amount of code needed to solve business needs with no additional complexity is an asset with some maintenance liabilities attached (same as how a farmer's tractor is an asset that needs to be maintained), with depreciation if unmaintained (bitrot). Any code used to build unnecessary complexity is pure liability.
I would term it a Depreciating Asset, like a car or a building. Bitrot is real.
Yes, if code is only a liability then just delete the code and poof liability is gone.
Would it be fair to say that complexity is a liability, and LOC is an (approximate) measure of complexity?
Yes, but: writing code always teaches you something.
I've worked at founder-sized startups and $xxB public companies. I've never read a product spec, a pitch deck, or a PRD that described a solution that, if implemented in the way described, would solve the problem. Building the thing teaches you how it should behave.
Software is a complex, interactive medium. Iterating in the code, with people who understand the problem and care to see it solved, is the only way I've seen valuable products get created. Meetings and diagrams help, but it's not until you write some working software that you know whether you have something.
From the article:
> Jevons Paradox: when something gets cheaper, you tend to use more of it, not less.
That's a butchering of Jevons paradox. What's stated is not a paradox, but a very natural effect. Obviously usage of something goes up when it gets cheaper.
What Jevons paradox actually describes is the situation where usage of a resource becomes more efficient (which means less of it is needed for a given task), but still the total usage of that resource increases.
> What Jevons paradox actually describes is the situation where usage of a resource becomes more efficient (which means less of it is needed for a given task), but still the total usage of that resource increases.
Why is this stated as a paradox? One simple cause is the given task being performed more than it was before because it is now cheaper (since it uses fewer resources).
In most real-world use of the term, a paradox isn't something that's impossible, it's just something that intuitively seems wrong when you first think about it.
"I got a Prius so now I am spending more money on gas" sounds ridiculous, but it would be an instance of this paradox.
One of the classic examples is highway traffic. You want to prevent traffic jams, so you increase the number of lanes. However, now that there are more lanes, people see less “cost” in driving, leading to even more people driving (e.g. to go on more day trips or as alternative to public transport). This can cause the traffic jams to become even worse.
So, increased efficiency sometimes does not lead to reduced latency, which goes against our natural intuition.
> Obviously usage of something goes up when it gets cheaper.
Sure.
But is it not also obvious that when usage of a resource becomes more efficient, the price of that "usage" becomes cheaper?
So usage goes up obviously because efficiency increases.
It is called a paradox because some people naively think that increasing efficiency is a good way to decrease consumption.
Almost everything that is called a "paradox" is this obvious.
It’s surprising (not using the term paradox) in situations where demand was seen as inelastic.
An example of probably inelastic demand is diamonds, whose cost has fallen as synthetic diamonds entered the market; people typically don't buy more engagement rings than before.
With code it could be different. People might think that the amount of code that needs to be written is fixed, so the ability of one person to write more code implies reduced demand for people who write it.
In reality, bringing the cost down may unlock new use cases, so the number of actual coders might increase.
Should the paradox not be that we PAY more for it?
Or, if some process is made more effective, i.e. takes less time, we spend more time in that process.
Jevons paradox starts with some resource being used more efficiently.
A classic example could be coal. The first steam engines used a ton of coal, but over time more efficient steam engines were created that used far less coal.
One might think that this caused the global coal usage to go down. But the opposite happened, as the overall cost of doing something with a steam engine went down.
Note, that the price of coal itself can remain fixed in this example. So Jevons principle is not (directly) about a resource changing in value.
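The coal story above can be sketched as a toy constant-elasticity demand model (my own illustration, not from the thread; all names and numbers are made up). When demand for steam-engine "tasks" is elastic enough, halving the coal each task needs makes total coal use go up, with the price of coal itself held fixed:

```python
# Toy model of the Jevons paradox, under the assumption that demand for
# tasks follows a constant-elasticity curve T(p) = K * p**(-elasticity).
COAL_PRICE = 10.0   # price per ton of coal, held fixed throughout
K = 1000.0          # demand scale (arbitrary)

def total_coal_use(coal_per_task: float, elasticity: float) -> float:
    """Total tons of coal burned, given engine efficiency and demand elasticity."""
    price_per_task = coal_per_task * COAL_PRICE   # cost of running one task
    tasks = K * price_per_task ** (-elasticity)   # how many tasks get done
    return tasks * coal_per_task                  # coal consumed in total

# Engines become twice as efficient: coal per task drops from 2.0 to 1.0 tons.
elastic_before = total_coal_use(2.0, elasticity=1.5)
elastic_after = total_coal_use(1.0, elasticity=1.5)
assert elastic_after > elastic_before   # elastic demand: total coal use rises

inelastic_before = total_coal_use(2.0, elasticity=0.5)
inelastic_after = total_coal_use(1.0, elasticity=0.5)
assert inelastic_after < inelastic_before   # inelastic demand: it falls
```

The crossover is exactly the elasticity threshold of 1: above it, efficiency gains increase total consumption; below it, they reduce it.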
If LLMs make code cheaper to produce, then obviously more code will be produced. That's not an instance of Jevons paradox, even though the article claims so.
You could say that LLMs mean we can create software with less of the resource that is human software engineers. So one might think we'll need fewer software engineers in the future. If, on the other hand, we end up needing more software engineers, then that'll be an instance of Jevons paradox. But the article is not making that claim.
The paradox is usually presented as:
The People: Hey local government! The roads are so packed with cars they are useless. Fix it!
The Government: We hear you and just finished a huge road expansion project. The roads now have 2x the capacity! Enjoy the new fast roads!
The People: The roads are just as slow as before because they are packed with 2X as many cars now!
So, the paradox is that greatly increasing the capacity of the roads led to the roads being just as slow as before. Maybe even slower. This is because there previously were lots of potential uses of the roads that people did not pursue because it would not have been worth the hassle. But, now with 2X the capacity, those uses become viable. So, more people find more uses of the roads up until it gets right back to the limit of everyone's patience.
Apply this to coding and you can predict: Coding is much faster and easier now. So, why are all my coders still so busy?
Yes. Jevons paradox is that if we need less X to do a single Y, we end up using more X doing Y.
Anyway, it's a specific observation about a single X, Y pair. It sometimes happens with other things, but anybody claiming it's a universal rule doesn't know what they're talking about.
Unit cost is down but aggregate cost is up.
Yes, I believe you are correct (but imprecise which is why the other commenter disagreed with you).
We pay less per unit, but we pay more in total.
No, we pay less for it. But there's much higher demand so overall use goes up.
Right. That is not a paradox as stated.
The paradox would be:
* a TV used to be really expensive. So a home just had one
* over time TVs become half the price.
* now a home has 3 TVs, i.e. paying 150% of what they initially paid.
It is a paradox. The paradox is that increasing the efficiency of resource usage can lead to more resource usage.
If you think that isn't a paradox because you can fairly easily explain why that is the case, you need to go and check what "paradox" means.
Bottleneck for what? More features?
I don't think amount of software is what determines whether a company does well.
I don't think capturing quantity of context is that important either.
Now, quality of context. How well do the humans reason?
Then, attitude. How well do the humans respond to bad situations?
Then, resource management. How well does the company treat people and money?
Finally, luck. How much of the uncontrollables are in our favor?
Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those. At least any time soon.
For business, software applications are tools that facilitate "the thing" that generates money. (We in the software world think that _thing_ is software and software _features_, but outside that world, there's usually a different _thing_.)
The bottleneck for making software applications better at being used by (non-software) businesses is making sure the software does all the software things that actually benefit the business. Save time. Make humans more productive. Reduce human error. Make the business more efficient. Increase profit margins.
All of those things are a bit difficult to predict and quantify. You start with ideas of what might help the business, you maybe design, prototype, trial. Ultimately you build or enhance software applications, and try to measure how well they're making the business better.
In all of this, making sure software is addressing the right problem in the right way, and ultimately making the business better - that's a hard problem! Regardless of how fast and easy it is to make software.
But yes, the speed can really help. You can prototype and trial and improve the feedback loop.
> But yes, the speed can really help. You can prototype and trial and improve the feedback loop.
Based on what I’ve seen, prototyping has always been easy. You don’t even have to build software for the first iteration. For UI stuff you can use a wire-framing tool.
What has happened is that we abandoned the faster iteration methods (design think tanks, quick demos, UX research, …) and have gone all in on building the first idea that comes along and foisting it on users. That process is much slower and more often goes wrong.
Hmm not agile or waterfall....
Tsunami?
> Bottleneck for what? More features?
Code changes. Not necessarily features, but also bug fixes, plain old maintenance, and even refactoring to improve testability.
With AI coding assistants, what in the past were considered junior dev tasks are now implemented with a quick prompt and an agent working in the background.
These junior dev tasks are now effortlessly delivered by coding assistants, with barely any human intervention. Backlogs are cleared faster than new items are added. And new items are added more and more because capacity to clear them is no longer an issue. The challenge is now keeping up with the volume of changes. I see this first-hand at my org.
> Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those. At least any time soon.
Just because you can think of other bottlenecks that doesn't mean that generating code was not a bottleneck, and is not the bottleneck today. The mere notion of a backlog demonstrates that it is a bottleneck.
> Backlogs are cleared faster than new items are added
Totally depends on what kind of product and codebase.
Last time I checked, the number of open issues in Claude Code repo has increased.
And I have seen tons of tickets that are open for years. Not because it's technically hard or anything. An intern can do that. Those tickets are not closed because nobody wants to deal with what comes after it.
I was not merely stating other bottlenecks. I'm saying they're more important bottlenecks.
They can't all be equally important bottlenecks; a bottleneck is by definition a singular component or sub-system most-limiting to the system's output.
What are we trying to output from our businesses? Code?
What is this magical context floating around every business that will unlock AI agents to produce ... what?
[Edit] I apologize for my tone. You're right, dealing with the speed of code generation is an unprecedented problem. I was making the argument that it's not the most important to the business and that rate of code change is very rarely the top concern. But that does not mean it's not the most important problem for someone. For the developers dealing with the system, it is.
> Software is what’s left over after a group of humans finishes negotiating with each other about what the system should do.
Love that.
I agree, in particular, about the context. That’s where long-retention, experienced, teams pay off.
I managed one of those for decades. When they finally rolled up our department, the engineer with the least seniority had ten years.
When a team is together for that long, the communication overhead drops to an almost negligible level.
That’s what I find most upsetting about the current culture of mayfly-lifespan employment tenures.
Nowadays, I work mostly alone. I’m highly productive, but my scope is really limited.
I miss being on a good team.
What kind of projects are people working on, where understanding what features the management wants is the only difficult part and the rest can just be "typed out" (or, today, offloaded to an LLM)? If that's what you do, then I'm not surprised so many people on HN think LLMs can replace them.
Any discussion related to this topic always seems to assume everyone uses code the same way and for the same function, and then forces the rest of the world through that lens.
So here we walk around the circle one more time again, voicing our anxieties, talking past each other, waiting for the next opportunity for commentary to come in half an hour.
I've found the more senior you get, the code seems more fungible, and the process seems more important and difficult.
Isn't that just ascribing difficulty to the parts of the process you're closer to?
This is like 80% of CRUD apps. Sometimes they have a few interesting problems but not like the upper 20%. Most of them are hot garbage in terms of code quality because of the offshoring and layoff cycles.
Also even the more complex things will have a lot of simpler code in them.
Uno reverse. What kind of limited project experience would lead anyone to think that there isn't an enormous continuum between code difficulty and organizational problems in the space of software development?
My buddy made a music software, with synths, effects, visualizers, etc. with Claude and Codex.
For him, the bottleneck very much was the code. He still doesn't know any programming.
I want to say that his ability here has been accelerated by orders of magnitude, but without AI he couldn't have done it at all, so it's actually a divide by zero situation.
(Yeah, he could have just learned programming... and audio engineering... and the specifics of JavaScript ... and the web audio API, and the DOM, and WebGL, and his demo would be ready in like, 2030.)
> Software is what’s left over after a group of humans finishes negotiating with each other about what the system should do.
The issue is that sometimes you don't know what the system should do until you build it.
A design is a hypothesis. Most of them are wrong, in subtle or not so subtle ways.
(Also, as a separate issue, having a group in the first place increasingly adds negative value. If it was ever a good idea to design by committee... it's increasingly expensive to do so, in opportunity cost.)
>The goal was to test our structured-generation algorithms and their open-source counterparts, replacing the naive “does it accept this string?” with something closer to the real problem: “does it produce the right token distribution?” The experiment kept coming up in conversation, then returning to the roadmap. Last month, I spent half an hour explaining the method to Codex. A few hours later, it had produced a working first version. That’s all it took.
Proving that the bottleneck, was, in fact, the code. It's just that the AI wrote it now.
The person who thought "the bottleneck wasn't the code" already had the goal discussed and coherent in their mind.
Code as bottleneck doesn't have to mean "I wanted this feature but it took me many months to finally code it". It is also "I wanted this feature for 2 years, but the friction in sitting down to put it in code and spending 5-10 days on it, etc, put me off".
If the code wasn't the bottleneck, they could have just sat down and written it themselves. But they didn't want to go through the effort and time of coding it by hand, knowing it would take far longer than with the LLM.
(And even when you don't have a clear final spec in mind, the exploratory code+check+discard+retry-new-design, is also faster with an LLM, precisely because the "code" part is).
In other words, the code was the bottleneck.
The post appears AI-generated itself, just with instructions to avoid obvious constructions, which still makes for tedious reading.
I think the argument here misses critical nuance; there is a difference between code used to implement a product and when code _is_ the product.
It goes without saying that agents have little to no product sense in any discipline. If you're building a game or an app or a business, your creative input still matters heavily! And the same is true for code; if the software is your product, then absolutely the context missed by skipping the writing process will degrade your output.
That doesn't mean that writing code wasn't a bottleneck even for creating well structured software projects. Being able to try multiple approaches (which would have previously been prohibitively expensive) can in many instances provide something a room of bickering humans never would have reached.
> difference between code used to implement a product and when code _is_ the product
Care to elaborate? I don't understand the difference unless you mean code that _is_ the product, being OSS code or code for license.
I think what I'm trying to get at is that there's a lot of code out there that really just needs to work. It doesn't need to scale to millions of users, it doesn't need to be abstract-able and useful to use cases we don't even know about yet, just needs to get an idea off the ground. That code is not the product. In such a case writing the code very much is a bottleneck.
If you're writing OSS code or software projects expected to be used by others that may have constraints like that, then by all means the code that gets output matters itself. But even still I'd argue that the cost of writing code manually to get there is still a bottleneck.
In my mind this is what prototyping is for. Just get it working quickly and see if the concept has legs. But be prepared to completely re-write it because the "just make it work" mindset will make it more difficult to change and improve upon in the future.
But when you factor in today's favorite business model of "make it shitty", perhaps this matters very little.
Code you ship vs tooling you use to build the code.
So, the product vs everything that is needed on the way, but isn’t the core.
CI/CD tooling, template population…. Things you write a use once/use few script for.
I typically end up with a library of tools to deal with repetitive finicky tasks.
systems vs application code
An awful lot of problems can in fact be solved by 'more code'. People seem to straw-man this in terms of product feature surface.
A lot of places skip creation and maintenance of decent observability - that's code.
We can now easily use advanced, code heavy testing techniques like property testing - code.
We can create environmental simulations to speed up and improve integration testing - code.
We can lift up internal abstraction levels, replace boilerplate with frameworks, DSLs - code.
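As one sketch of what "code heavy testing techniques" can mean in practice, here is a hand-rolled property test in plain Python: a toy stand-in for libraries like Hypothesis, with invariants I chose myself for illustration:

```python
import random

def check_sort_properties(trials: int = 200) -> bool:
    """Generate random inputs and check invariants that sorted() must satisfy."""
    rng = random.Random(42)  # fixed seed so failures are reproducible
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        ys = sorted(xs)
        assert sorted(ys) == ys                       # sorting is idempotent
        assert len(ys) == len(xs)                     # no elements gained or lost
        assert sorted(xs, reverse=True) == ys[::-1]   # descending is the reverse
    return True

check_sort_properties()
```

The point is that instead of enumerating examples by hand, you state properties that must hold for *any* input and let generated data hunt for violations.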
All listed are in service of the product feature surface.
It typically is the code that’s the bottleneck, but not writing the code. My career is littered with numerous delays from slow applications.
I am stuck with an editor based on Eclipse. It’s slow and periodically pauses or crashes. I am stuck with build jobs that take 15-20 minutes. I am often stuck with web apps that take forever to do a task that should take 50ms max.
The list can go on and on. Every delay is a distraction that shatters my concentration. I still write code at work, but I am in management now with dozens of other people and administrative distractions. When the software is slow it becomes my lowest priority. I don’t care who that impacts, because if it really mattered we wouldn’t be held hostage by all this slow syrup of software pulling each of us under.
Which editor and why eclipse?
Because our software platform requires use of a proprietary editor built over eclipse. I use other editors when I can deviate from the official one.
Doesn't add up. I used to spend more than half my time coding, as did others. Besides the obvious cost, that coding took wall-time which meant talks had to wait. Sure a poor collaborator will jam things up a ton, but a team of at least ok collaborators used to be bottlenecked on code.
(not related to the article)
The flashing red dot on the web page is very annoying. Is there some design reason for that?
edit: I meant the <svg> inside `trail-map-container`
FWIW I see the red dot only at the top of the page, flashing slowly. It does not annoy me, in fact I only discovered it because of your comment.
On desktop, it's fixed to the left of the story, pulsing along the entire time you're trying to read. If you are like me, it will annoy you. I had to switch to reader mode.
Turning off Javascript made the dot go away.
From what I can tell, it marks the article you're on. There are other light grey dots with other article names in it.
I think he's going for a metaphor about groups versus individuals. There are other gray dots around the red dot. Software is a group effort, but made of individuals. Something, Something.
One of the bottlenecks has always been the code. That code has been stolen and is being laundered while companies rely on mediocre engineers who have never written anything of value to promote the burglary tools and call the process "writing software".
It is the same as putting an Einstein paper on a photocopier and calling the process "writing a paper".
I agree with the point of the article though: code generation does not really work, the results are bloated and often wrong and people already had more features that they could absorb in 2020.
The solution to this mess is to have 18 year olds boycott studying computer science altogether, since the industry (and mediocre fellow "engineers") will treat them like human garbage.
Personal anecdote. When I started as a wide-eyed university co-op, I was surrounded by a team of aging mainframe developers who had been coding since the 1960s. Their backgrounds ranged from working their way up from the mailroom to astrophysics to masters degrees in computer science. What struck me was how the entire team treated coding as a form of art. Of course, it had to be functionally correct and maintainable, but you could glance at a snippet of hyper-efficient PL/1 and instantly tell that 'Larry' or 'Trudy' wrote that routine - each programmer had a unique style, along with just the right amount of comments to make you laugh and guide you through difficult bits.
Most of the team has since passed away, and their code has been long replaced by modern systems, but what stuck with me is that great code is a form of art - where your individual style, insights and personality can be reflected in code for the better. The systems were efficient, responsive, extensible, and a joy to work on, since the team took a great deal of pride in their work. It really is akin to being affected by a clever and insightful work of art. A decade later, and programming became something to "make money" at, which flooded the market with many people who never really had a deep love of programming, and I guess that's ok, but something has definitely been lost along the way.
To your point, it may not be such a bad thing if people started boycotting computer science and it again became more of a calling than purely an avenue to employment.
Agentic tools are "burglary tools" -> Younger folks should not study CS?
I think the solution is to fix society so that we value quality over quantity, self betterment over getting rich, and making society better for every one rather than worshipping billionaires.
"What slows down a team where agents do the implementation is the production of specifications precise enough for an agent to pick up and run. Roadmap, written down. Acceptance criteria, written down. The “what we actually want” forced into precision, be it via a test suite, a ticket, or a written design."
This is merely speed of development, not the velocity of a company toward higher value. There are many PMs confidently writing these up elaborately (using the same AI tools) without a clear, deep understanding of the user problems, of why the requirements will be adopted by their target users, or even of who the target users really are.
So yes this will lead to faster end-end execution. But if the product is used or if it sits unused will depend on things beyond the above.
> Producing easily consumable context is precisely the thing humans don’t like to do.
I don't think this sentence speaks for me. This is the sort of thing I love to do.
Can someone explain the title? I think the author illustrates that the code was the bottleneck and it has shifted to context. What am I missing?
I think he is saying, or at least I hope he is saying, that software development has never really been about writing code; it has been about communicating with people over what the software should be and needs to be, and the entire point all along has been better collaboration toward a collective goal. He spends a good amount of time on how slow writing software has been in the past, and how that allowed the industry to over-focus on the writing itself, even though the milestone books our industry embraced have pointed out repeatedly that the communication aspect of why and what we write is the most important part. Now that is finally being forced upon us: writing code is becoming automated, and all that is left is the specification and the communication with humans over what and why.
The author argues that writing code cannot be a bottleneck because work always fills up the allotted time. Developer teams should instead focus on doing less and writing better specifications.
The error in the reasoning is that while you can increase your resourcing tenfold and gain nothing in return, the inverse is not necessarily true.
I think the point they're trying to make is that context known by humans and the requirements they agree on, is 'the' bottleneck, rather than implementation
It shares some ideas with Peter Naur "Programming as Theory Building".
Quote from the article: "To quote Michael Polanyi: we know more than we can tell. Some load-bearing context exists precisely because it was never put into words, and writing it down would change what it is."
Imagine how much knowledge exists only in the heads of software engineers, with code being just a functioning footprint of that "Theory". I know SRE in FAANG who told me that multi-billion system is supported by tribal knowledge within their group, and for years, even pre-AI it was a protection against automation.
I can type faster than I can think of the correct things to type. My experience may be non-standard but I think for most serious software folks the code has never been the bottleneck.
Sometimes code is definitely the bottleneck. For example some organizations have a very bureaucratic process guarding which projects get access to a development team and when. That's not needed if implementation is now faster/cheaper.
I'm also skeptical that development velocity is so separate from all those other things (context, stakeholder alignment,etc). It's much easier to get actionable feedback when you have a prototype.
I think this is the wrong conclusion.
Whether code is the bottleneck likely depends on the organization. In mine, code was the bottleneck; AI has pushed things so that validation now is. If the devs are "middlemen" who can't spec things themselves, then whoever can spec things is likely the bottleneck.
Ask yourself what monks did when scribes were replaced by the printing press.
If I was a scribe at the time I’d be thrilled because of all that extra time available to work on beer productivity metrics.
The .txt website fails to load if you don't enable WebGL in your browser. Incredible
Absolutely matching the gut feel I've had lately. We've always been pretty good at producing bad code very fast. All of the other stuff - dependency management, learning what's valuable, ownership & boundaries, context switching costs, etc... have always been the bottlenecks and it's just more obvious now.
I'm finding counterexamples of this constantly now that I can have an agent rewrite large sections of my codebase that have been sorely needing it.
- Moving to a newer and more modern test library
- Refactoring my data layer so it's easier to read, based on years of organic changes that need to be baked in and simplified
- Porting some functionality to another language to vastly improve performance
I agree with the overall sentiment, but having an agent at my fingertips who can really crank out large-scale, involved code changes is unclogging quite a few backburnered todos lately for me.
The bottleneck for my personal projects was the code. So many have become unblocked because of LLMs
> Agents that consume context need agents that produce it. Once that loop is running, the organization has a written substrate it would never have produced on its own.
I'm not sure a business is helped by documentation distilled by agents from (hopefully present) PR descriptions and comments in JIRA, or wherever this context is supposed to be reverse-engineered from.
If I read “load-bearing” or “blast radius” one more time…
I really think as code becomes cheap, misalignment between people, teams, and organizations is going to hurt a lot more, especially when everyone is trying to move at breakneck speed.
I also think a big piece of this is human attention and inertia. Aka, why bother doing the hard work to coordinate with others when you can just ship whatever you’re thinking. I think whichever organizations can figure out the human and cultural aspects to this will do phenomenally
“Something went wrong with the editor” when loading page
same
> They are waiting on the next well-formed spec
Is this actually true? Maybe in a widget factory. I think it’s an anti-pattern for the new world.
When you look at places that are shipping at an insane pace (like Anthropic), the secret is not accelerating the writing down of a roadmap and a well-groomed backlog; it's empowering smart individuals to run their own end-to-end product improvement loops.
You can slightly reframe the OP by saying “the bottleneck is product ideas”, but “well formed backlog items” IMO frames it as more structured and hierarchical than it should be.
The problem with this is that everybody thinks they have better ideas than they do. And engineers are probably the worst offenders in that they're smart enough to make a case that deludes themselves.
The insane billion dollar companies ship straight to production because they have PMF so anything and everything gets signal.
The same happened with Facebook and Google. And it was always cautionary advice to mimic these giants. It's a bad idea for all the rest of us.
The bottleneck has always been the human element. I too used to be one of those up-my-own-ass engineers who thought the most important part of my work was the machine, and it wasn’t until I began actually listening to others and their problems that I realized my function was far more than mere technology scaffolding.
That said, I’m also increasingly aware that puts me in a minority group. I got to see this first hand in a recent org where their codebase and product design hadn’t meaningfully evolved in nearly thirty years. NAT was a “game changer” to them - and one they refused to implement without tons of extraneous testing they would deliberately undermine, stall, and sabotage so they didn’t have to modernize their code accordingly. It was easier for the developers and stakeholders to preserve their own status quo rather than entertain alternatives, to the point of open hostility (name calling, insults, screaming, and a few threats) to anyone suggesting otherwise.
The human element has always been, and always will be the bottleneck. Stakeholders who don’t contribute updated or accurate datasets to automation systems, or who hold back development to preserve personal status and power, or who otherwise gum up the works on purpose to game their own careers.
That’s not to make the argument of “replace all humans with machines”, mind you. Just stating that an organization that incentivizes bad behavior will be slowed down versus ones that incentivize collaborative outcomes, and AI is just going to turbocharge that by removing the friction associated with code creation and shifting that elsewhere.
> name calling, insults, screaming
Never experienced this at a job in 30+ years, and that includes my first jobs in fast food. If you experience this at work, find another job. This isn't normal. It's extremely dysfunctional in fact.
I was already looking, but they ultimately made the decision for me in January with a RIF.
Thing is, this job market is hell. There are folks who have to choose between the abuse or making rent, which is why we need stronger incentives for organizations to discipline said abuse rather than let it permeate because existing penalties lack teeth.
The company website linked in the article is broken https://www.dottxt.ai/ on (mobile and desktop) Safari. Looks like your cert doesn’t cover the www subdomain.
As software engineers, we should collectively realize that this is all cope. Every article or comment about how AI will never be smart enough, etc., will only be true until it's not. One of our main valuable skill sets is now partially automated. Some of us are completely obsolete, and it's coming for the specialists and more experienced ones within a decade, tops. You're not going to convince anyone that "um, actually we're better because we bikeshed more".
Stuff like this is ridiculous and comes off as frantically trying to save your ass. It's pretty obvious at this point that we will just throw more matmuls at it until it can do this or something equivalent.
> Agents cannot do osmosis. They do not get context by being in the room, by half-hearing the planning conversation, or by carrying the memory of the last incident.
I have been thinking about this a lot lately. How do you capture key factors succinctly, and even harder, keep it succinct as it evolves?
The shrinking that property based testing does when it finds an issue is kind of what we need for specs/context.
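That shrinking idea can be sketched in a few lines of plain Python (my own toy, not a real property-testing library): given an input that triggers a failure, greedily drop pieces while the failure persists, ending at a minimal counterexample:

```python
def shrink(failing_input: list, still_fails) -> list:
    """Greedily remove elements while the failure predicate still holds."""
    current = list(failing_input)
    shrunk = True
    while shrunk:
        shrunk = False
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            if still_fails(candidate):   # a smaller input with the same failure
                current = candidate
                shrunk = True
                break                    # restart from the smaller input
    return current

# Hypothetical buggy property: "no list contains both 3 and 7".
bug = lambda xs: 3 in xs and 7 in xs
assert shrink([1, 3, 5, 7, 9], bug) == [3, 7]
```

The analogue for specs/context would be boiling a pile of documents down to the smallest version that still carries the same decision.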
So managers are overwhelmed because the code is now happening a lot faster? It sounds like the immediate bottleneck really was the code, at least frequently. Now it seems the bottleneck is managerial.
> Real programmers don’t document their programs.
Probably true, but I, for one, have always liked documenting how the code I've written should be used, whether programmers calling APIs I've created, or end-users actually making use of a program's executable. I find writing the docs just as interesting and creative as writing code.
> Real programmers don’t document their programs.
This is kind of a straw man. I suspect people say that tongue in cheek.
Good programmers try to make their code clear and easy to understand. They add comments to clarify, specially their whys.
The problem I have with documentation is that you end up with mountains of documents over time about a lot of things that are no longer true and many times contradictory. The only solution I have seen is making sure that documents have owners that update them periodically.
Velocity, velocity, velocity! Ah yes, velocity always seems to matter except to those that don’t need to worry about it.
I can see the division here already, and the cogs are afraid. As a dev of 25+ years, currently working for a small company after coming from a global company, I see both sides. I'm very excited about AI and love to see my projects come to life so much faster. I still love the craft of code, but it's always been about the product for me.
the bottleneck was never the software, that is the ship we ride,
people are part of a team focused on a goal, they work together because they believe the ship is worth riding and will reach its destination,
the ship should carry food people want,
team decides what food will be consumed,
captain tries first the food,
if food is good and people want it, people buy more
The paper hits the nail right on the head, but it misses the mark on the next constraint: how to decide what to build.
In the old days, when writing code took up a lot of resources, the constraint was self-correcting: spending three months implementing the wrong feature made the error obvious enough to catch. Today, you could spend five wrong efforts in the same amount of time it used to take to implement one wrong effort.
Everything in life revolves around people, and even more so today
the tediousness of keeping documentation up to date and the natural tendency towards short attention spans have always acted as a tax on organizational efficiency: complicated org structures, legibility exercises, communication tollgates, etc. there is real value in reducing the friction of the former so that the latter becomes less of a burden.
at the same time, context poisoning is a real cognitive problem for humans too and I can't tell you the number of times I've seen irrelevant details become a drag on execution. my fear is that having too much context will only cause bikeshedding and a revisiting of prior decisions.
frankly, our organizational structures were already pretty good at creating mechanisms for eliciting the right implicit context at large scales. it is possible that we're just going to come up with the same mechanisms from first principles...
I swear AI has made this entire industry crazy.
Here's Robert Martin saying non-determinism in AI is ok:
For me it was. Solo entrepreneurs are the ones who profit the most from AI assisted development.
Or startups where coding was always a bottleneck because it was very expensive to hire swes, unlike big corps which would often throw swes at a problem.
If that's true, I am sure some C-suite manager knows this already. Assuming management knows what they're doing; after all, they're getting paid for this. The time when engineers try to educate the people above them should be over. Management gets paid for the big decisions. If they tank the company, so be it. I no longer care.
The bottleneck was ALWAYS the code, which is why everything was built around it.
This is the key line right here:
> Negotiating, agreeing, communicating the shared picture of what we are building has become the work. And it’s just as hard as it was.
But if software (via code) is what we ultimately produce and sell, how did we get here? The main reason is the following lemma:
Lemma A: "The loss of fidelity of what can fit in any one person's head scales superlinearly (exponentially?) as the scope of work scales up." Or more colloquially: "It is impossible to fit a large scope of work in any one person's head." This is largely because any non-trivial task is a fractal of smaller dependencies.
The chain of logic to today's situation is then obvious:
1. Writing code requires humans who are slow and expensive.
2. To do large things we need large groups of humans.
3. As the number of humans grows (like beyond 5? 10?) it becomes impossible to keep them aligned, largely because of Lemma A.
4. We need to coordinate these humans, so: enter managers!
5. But even a manager can't manage too many people and coordinate with all other managers because of, again, Lemma A. Enter hierarchy!
6. As the size of the organization grows, so does the coordination overhead (exponentially, if Google's AI overview is to be believed) until, as that quote surmises, the majority of the work is just that.
7. Coordination costs (or "Conway Overhead" as I call them) are very well understood in the literature, but this also brings in undesirable dynamics like bureaucracy, politics, organizational metrics (also due to Lemma A, but now triggering Goodhart's law!) and eventually territorial disputes and empire-building. Lots of friction and subtle mis-alignments.
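The superlinear growth in step 6 is often illustrated with the classic pairwise-channel count from The Mythical Man-Month: n people have n*(n-1)/2 potential communication paths. A minimal sketch:

```python
# Pairwise communication channels among n people: each pair is a
# potential coordination path, so the count grows quadratically.

def channels(n):
    return n * (n - 1) // 2

for n in [2, 5, 10, 50]:
    print(n, "people ->", channels(n), "channels")
# 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225
```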
As you can see the overhead scales superlinearly with the number of leaf workers added. And for the same reason, once the leaf workers are decimated because one worker can now do the work of a whole team, the entire organizational overhead above that is gone, which is also a superlinear change! Assume a conservative 2:1 reduction in ICs and a 1:5 manager:reportee ratio, a simplistic hierarchy that was:
Not only did that eliminate 300 ICs (mostly junior I suspect) it took out 60 managers and removed an entire layer of Directors from the hierarchy! Worse, the leaf-layer will probably get decimated 5:1 not 2:1, and this will also eliminate coordination-specific roles like Program Managers. The rest of the hierarchy is much fewer but mostly more experienced (or politically savvy) people. They will be paid more, but not superlinearly more, of course, what do you think this is, socialism?
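The pyramid arithmetic can be sketched under the stated 1:5 manager:reportee ratio. The starting headcount of 600 ICs below is an assumption (not from the comment, whose hierarchy diagram is omitted), chosen so that a 2:1 IC reduction eliminates 300 ICs and 60 managers, matching the numbers above.

```python
# Illustrative sketch: layer sizes of a simplistic hierarchy with a
# 1:5 manager:reportee ratio, before and after a 2:1 IC reduction.
# The 600-IC starting point is an assumed figure.
import math

def layers(leaf_count, ratio=5):
    """Build layer sizes bottom-up until a single top node remains."""
    sizes = [leaf_count]
    while sizes[-1] > 1:
        sizes.append(math.ceil(sizes[-1] / ratio))
    return sizes

before = layers(600)  # [600, 120, 24, 5, 1]
after = layers(300)   # [300, 60, 12, 3, 1]
print("before:", before)
print("after: ", after)
print("ICs cut:", before[0] - after[0], "| managers cut:", before[1] - after[1])
```

Note how every layer above the leaves shrinks in proportion, which is the superlinear collapse the comment describes.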
It's very much a pyramid scheme of cards built on that one bottleneck. And this bottleneck applies to pretty much all knowledge work. Once that bottleneck opens up, everything collapses. This is why I fear that the coming job changes are going to be much more disruptive than people realize, something I'm extra concerned about as a parent of high-schoolers.
It seems like so many developers know this, yet here we are. SV pushing this AI slop economy. More code! Faster! Less testing! Less understanding! It's what we NEED!
> What may save us it that agents are unreasonably good at reading exhaustively. An agent will read every PR comment, every closed issue, every commit message, every stale design doc ...
> Not just “this module exists,” but “this module is weird because the migration had to preserve old behavior,” or “this benchmark matters because a previous optimization silently changed the distribution.”
The thesis here is that an LLM will document code better than a human (although based on human artifacts), since churning through huge quantities of text is what they are good at.
A few thoughts:
1) Yes, an LLM may be able to pull comments out of commits and PR comments and put them back in the code where they belong, but I question how often a developer too lazy to put a vital comment in the code would put it in a commit message instead!
2) "The truth is in the code" has always been true, and will always remain true. If the comments differ from the code, the code defines the truth. Pulling comments from stale external documentation and putting them in the code does more harm than good.
3) Comments that can be auto-generated from the code don't add much value (lda #1 ; load the accumulator with one).
4) Comments about the purpose or motivation of the code, distinct from 3), such as the "we had to preserve backwards compatibility" example, or "this code does this non-obvious tricky thing because ...", are where the value is, but the LLM is highly unlikely to be able to discern any unwritten motivation by itself. If the human developer left a comment somewhere then great (assuming it is still relevant)
Most of the discussion we see about LLM coding is how fast it can churn out thousands of LOC on a greenfield project, or how good they can be at finding bugs, but neither of these are very relevant to the main job of developers which is maintaining and extending existing codebases. It would be lovely if most projects were greenfield, but they are not.
In any large project that has been maintained over a few years or more, there will inevitably be an ever growing accumulation of bug fixes and patches for specific issues that have been discovered in production, likely poorly documented and out of sync with any original documentation that may have existed (which anyway tends to be more idealistic and architectural in nature, not capturing these types of post-deployment detail and special cases).
The natural tendency of an LLM is to want to rewrite code to match the statistics of what it was trained on, and they need to be reined in via prompting to resist this and not touch more code than is minimally needed for what is being asked. Of course asking an LLM to do something is a bit like asking a dog to do something - sometimes it will, and sometimes it won't. I expect over the next few years we'll be experiencing, and reading about, more and more cases where LLMs have introduced bugs and regressions into mature code bases because of this - rewriting code that should have been left alone. The general rule is that if you are tempted to rewrite something, you had better first understand why it was there, coded the way it is, in the first place.
I can't help but compare the current state of "AI" (LLMs) to the early days of things like computer speech recognition or language translation when they were considered amazing, and everyone was gushing about them, but at the end of the day the accuracy still wasn't good enough to make them very useful - that would take another 10-20 years.
Another historical lesson/perspective would be expert systems which at the time were considered as AI and the future of machine intelligence (the Japanese "5th generation systems" were going to take over the world, CYC promised to offer human level intelligence), but in retrospect were far less important. It won't be until we move on from LLMs to something more brain-like, deserving to be called AGI, that LLMs will be put in their historical perspective.
At the moment DeepMind seems to be the only one of the big labs admitting/recognizing that scaling LLMs isn't going to achieve AGI and that "a few more transformer-level breakthroughs" are needed. Hassabis has however talked about LLMs (GPTs) still being a part of what they are envisaging, which one could either regard as a pragmatic stepping stone to real AGI, or perhaps that they are not being ambitious enough - building something that still needs to be spoon-fed language rather than being capable of learning it from scratch.
Even if writing new code is not the problem or the bottleneck anymore, a major blocker is one CoreService.java file someone wrote 10 years ago that is keeping the whole system glued together. Only they know how it works, and how to keep it working.
Parroting my own words from a while back, inferred intent is not the same as initial intent. This is just reverse engineering except somewhat automated
It also bakes in the LLM quality at the time the documentation was generated, into the documentation. It potentially worsens the performance of future LLMs if they ingest the documentation produced by older LLMs. It’s not clear why documentation wouldn’t instead be generated on demand, using the newest SOTA LLM.
Before entering the workforce, we're fed algorithms and data structures, told they are the bottlenecks behind wasteful resource use, and shown how to utilize them; only to learn from hard experience that the actual bottleneck is always the people, the H-factor, except this time H stands for human.
An insane amount of bureaucracy and paperwork, and missed deadlines that push us to write shit code, quick and dirty solutions that were never replaced.
Algorithms and data structures, then, are more about helping you utilize the machine economy better; they don't have any meaningful impact on the social aspect of it. That's a hard lesson I had to learn from my two previous jobs, though now I'm considering starting my own small business just to make a little bit of a living, enough to survive.
But now my ADHD has kicked in and I'm still lazy, and I have so many concerns: whether the market validation is there, how to deal with it if I break customers' stuff, how to gain (and hopefully never have to regain) trust if anything bad happens, what if I want to go on vacation and suddenly the server breaks and hits code zero (the highest level of alert I coined internally, when alertmanager is flashing everything red, network storage is down, and corruption has happened) during a trip to the Bahamas.
I'm still on the fence about whether to really do this or not, but the job market is filled with ghost jobs that aren't worth my time either, so I'm basically "deadlocked" right now and have to make a decision quickly.
Either choice is fucked for me. As I started to notice after going to work, even though I have some really interesting ideas in tech, I'm not a charismatic person, so I can't really bring those ideas to fruition; no one wants to listen to me and implement them together. So I'm pretty sure it's impossible for me to be a great leader (tech lead, probably, but CEO-level leadership, coordinating and steering the grand scheme of things, nah, I pretty much can't).
Now the problem is, even if you're pretty sure to get fucked either way, you should choose the option that inflicts the minimum pain. So far having my own business seems like the less painful way to go broke, and I'm preparing to sell off some of my stuff to take a last dip into my fortunes and have fun. We'll see how it looks. Bankruptcy is a nothingburger in this modern society, perhaps.
Now you see how the bottleneck can't even be the code anymore and goes beyond code, despite having the same core template: I don't even have to code to repeat the same "quick and dirty" kind of mindset in another domain, in another instance. That's something LLMs, heck, not even AGI can solve: decision-making in situations with limited time and resources, whether personal, organizational, or even structural.
This is very much not going to be solvable by a bunch of lines and statements and expressions; it really needs some time to dig in and compromise. Pick your kool-aid and drink it.
I cringe every time I read the word "load-bearing" in an article.
It instantly made me think this was either written by AI or heavily edited, and I hadn't gotten that sense until the moment I saw load-bearing.
I hate that em dashes have been killed by AI as well. Hundreds of years ago Charles Dickens was using them in his novels and today someone will read his books and in ignorance actually think AI wrote it.
I don’t mind em dashes, I use them myself. But load-bearing context, that’s just really bad writing.
It's hilarious to me to see the same kind of engineers who, throughout my career, have constantly bitched and moaned about team meetings, agile ceremonies, issue trackers, backlogs, Slack, emails, design reviews, and anything else that disrupted the hours of coding "flow state" they claimed as their most essential and sacred activity, to be protected at all costs, suddenly, and with no hint of shame, start preaching about the vital importance of collaborative activities and the apparent inconsequence of code and coding, the moment a machine was able to do the latter faster than them. I mean, they're not even wrong, but the nakedly hypocritical attitude of people who, until a year ago, were the most antisocial and least collaborative members of any team they were on is still extraordinary.
Are you referring to the author specifically? Or a specific hypocritical person you know? If you're making a general statement about groups of online people you might be falling for the group attribution error[1], where the characteristics of an individual are assumed to be reflective of the whole group.
In any case, two things can be simultaneously true:
1. Writing code is not the bottleneck, as in we can develop features faster than they can be deployed.
2. It's annoying and disruptive to be interrupted when doing work that requires deep focus.
[1] https://en.wikipedia.org/wiki/Group_attribution_error
I think it's obvious that they're not referring to the author or a specific person at all. They're talking about how the zeitgeist has changed. Look at Hacker News archives 3 or more years ago and it would be really hard to find anyone arguing that coding speed is not a bottleneck or that engineers need to spend more time in collaboration. You would find a lot of arguments that leaving engineers alone to code is the best thing a business can do and constant lambasting of meetings, documents, approvals, and other collaborative activities.
I think there are small pieces of truth on both sides of the argument, but I find the sudden change to claiming that coding speed doesn't matter to feel half-baked. Coding speed is part of building a product. Speeding it up does provide benefit. There's a lot of denial about this, but I think the denial is rooted in emotion more than logic right now.
Needing focus to think is not the same as needing focus to write code.
It can take a whole day to find 10 good lines to write.
Or just goomba fallacy
til https://en.wiktionary.org/wiki/Goomba_fallacy
The collective smarts on the Internet can't come up with a better name that might help our brains tag the concept?
Although at least it has some cross-cultural references. Why is naming so hard.
Memes help things to become popular. I guess the question would be how popular would it have been without the meme name? We'll never know.
I learned here like a week ago so I'm here to evangelize.
This was a perfect example.
That’s kind of just a strawman with an origin story, isn’t it?
No, because the goomba is the average of two real opinions, while a strawman is a distortion/reduction of an opinion such that it's easy to argue against.
More like a superposition of two opinions.
Ah, ok, so two real opinions made into a distortion. Right, I see how very dissimilar to an origin story of a strawman that is…
If I only had a brain...
On some level, yes, but having words to describe sub-sets can be useful too.
A "human" is just a "featherless biped", after all?
I think the Goomba Fallacy captures something useful to me: it's helpful to know the origin of a straw man if you want to un-stuff one.
I know this is off topic but I cannot resist. It's very funny to read a comment about un-stuffing a straw man written by "scarecrowbob"
Gave me a good laugh, thank you. :)
I think the Goomba is distinct. Strawman is disingenuously representing an argument, Goomba is assuming contradictions are coming from the same person, presumably b/c it's coming to the Goomba through the same app.
> Goomba is assuming contradictions are coming from the same person, presumably b/c it's coming to the Goomba through the same app.
It's because it comes from the same political faction. In general people are open about A when A seems palatable, and open about B when B seems palatable, but they almost never admit to doing that when it's obviously wrong to do so.
That is the rational part of the fallacy: even if these are different sets of people, you can still tell they are biased, since they never appear in the threads where it's obvious they are in the wrong.
For example, let's say in a thread where a white cop shoots a black guy you find a lot of Republicans saying "this is just statistics, nothing to see here". Then in another thread where a black cop shoots a white guy, Republicans pour in and argue this must be racism and we should investigate! Maybe it isn't the same set of people, but it's still a strong sign of problematic bias that they only choose to speak up in those particular threads and not the others.
Every political side everywhere does this, and that is why people started calling that out.
This is just the fallacy. Political groups are coalitions not single monoliths.
In general, hypocrisy is a pretty weak argument. It's an annoying personality trait, but consistency is a thing humans often fail at, and humans failing at holding consistent opinions is a failure of those humans, not the claims they're making. It's not quite as weak as the more non-sequitur kind of ad-hominem attack, because it does at least pertain to the argument being made, and kind of resembles a logical contradiction if you squint, but it seldom does a good job addressing the merits of the argument, rather than the arguer. It's a successful political tactic for the same reason ad hominem arguments in general are, of course, especially in the context of representative forms of government, where the person's character or competence is relevant when they're running for an office. Much less so in contexts where the merits of a position are being debated in abstract.
I think it's very silly to make the argument that "groupwise hypocrisy" is not a fallacy in such a conversation. In politics, the reality is that people have to form coalitions with people with whom they don't agree on everything, and non-political groupings are even more nonsensical, often holding people responsible for the opinions of other people who happen to share things like inborn characteristics. It's especially ridiculous to explain this with the idea that people are engaging in some kind of elaborate coordination to argue with you on the internet. Yes, some people, and indeed political parties, engage in that kind of behavior, and if you think you're arguing with something like a botnet, there are larger considerations to make about what you gain as an individual by trying to engage with such a machine at all. If I believe I'm arguing about the merits of an idea with an actual person, and I find myself reaching for something like "your group is collectively hypocritical on this issue" to make my argument, this is cause to reflect on whether I actually have any real arguments for my position, as that one is... well, essentially meaningless.
I think you're trying to invoke what's commonly called a "motte-and-bailey" argument, where people argue for a maximally-defensible position when faced with serious criticism, but act as though they're proving a much less defensible version of their argument, often including a nebula of related ideas, in other contexts. This is something individuals and coordinated factions absolutely do, but again doesn't really support treating any grouping you want to draw of some kind of collective hypocrisy. Even assuming we care about hypocrisy, it seems like this kind of reasoning about nebulous groups that don't explicitly coordinate would allow making that argument about any position in any context, depending on how you draw the boundaries of the group that day. It's well-understood that you can go on the internet and find someone who believes just about any crazy thing you can think of, or find someone who makes the argument for any position poorly.
Are you a very smart writer or are you using smart tools, I'm not sure?
Wouldn't know if I'm a smart writer, but I see little value in writing with a model if that's what you're asking. Language models are good for searching, getting alright at structured outputs like code, and trash at meaningfully expressing my thoughts in prose. Frankly, it concerns me that people think vomiting their thoughts onto the internet could possibly benefit from computational assistance.
Is it true that they don't appear in the threads where you feel it's obvious they're in the wrong? Or do they just get upvoted less in those threads so you don't see it when they do appear?
It's almost as if factions are made up of different people with different opinions in a loose alliance.
But nah, clearly they're all goombas.
This is exactly it. You see it on HN all the time. You will debate someone. Then deep in thread, a second person appears with a gotcha. when you point out that the gotcha doesn't fit in with the prior argument, they point out that was a separate person. They knew damn well what they're doing with their little conniving deflection fuck-fuck game. They're acting for the same surrogate argument. The Goomba is real and the people playing the game are just too cowardly to be two-faced themselves so they act two-faced through a surrogate and deflect to the surrogate when it's pointed out.
You're making a strawman
If so, I've seen that strawman living and breathing, and saying the first thing before, and the other thing now, several times.
Sometimes there are two groups of people who have different opinions that don't interact, but given the extent they take up the same platform and don't seem to see each other, I'm not sure it is really a fallacy even then.
First, it becomes possible for people who have a double standard to hide behind this. One can try to track an individual's stance, but a lot of internet etiquette seems to be based on the idea of not looking up a person's history to see if they are being contradictory. (And while being hypocritical doesn't necessarily invalidate an argument, it can help to indicate when someone is arguing in bad faith and it is a waste of time, as someone will simply use different axioms to reach otherwise contradictory conclusions when they favor each.)
Second, I think there is the ability to call out a group as being hypocritical, even when there are two sub groups. That one group supports A generally and another group supports B generally (and assuming that A + B is hypocritical), but they stop supporting it when it would bring them into conflict indicates a level of acceptance by the change in behavior. Each individual is too hard to measure this (maybe they are tired today, or distracted, or didn't even see it), but as a group, we can still measure the overall direction.
So if a website ends up being very vocally in support of two contradictory positions, I think there is still a valid argument to be made about contradicting opinions, and the goomba fallacy is itself a fallacy.
Edit: Removed example, might be too distracting to bring up an otherwise off topic issue as an example.
I believe in A, I don't take a strong position on B, I am in coalition with people who believe in B and don't take a strong position on A, we both believe in C, D, E, and F, which some other people believe in with differing weights. Browbeating me about position B (or, the most useless kind of Internet banter, complaining about me and my hypocritical position on A+B to your friends who oppose both in a likewise contradictory way, in some venue I've never heard of) is not about making people reevaluate positions, it's about negative factionalism. The only reason it might not fit the familiar categorization of "fallacy" is that you would never use it in rational debate, either in arguing with another person or in reasoning out your own position.
>I believe in A, I don't take a strong position on B
But if A and B are opposed, then there is a question of why a strong position on A can be allowed with a weak position on B, if the reason for the strong position on A would also indicate a strong position against B.
The underlying argument being implied (but rarely ever directly stated) is to question if your reason for the strong position on A is really the reason you state, or if that is just the reason that sounds good but not the real reason for your belief.
In effect, that you don't apply the stated reason to B despite it fitting is the counter argument to why it doesn't actually support A.
If there is an inconsistency in arguments being applied, any formal discussion falls apart and people effectively take up positions simply because they like them, contradictions irrelevant. This generally isn't a good outcome for public discourse.
>Writing code is not the bottleneck, as in we can develop features faster than they can be deployed.
That's an organizational issue due to over-regulation, bureaucracy, too many stakeholders each with their own irrelevant opinion, etc.
Startups or FOSS projects without the above absolutely can't "develop features faster than they can be deployed", and usually have a huge backlog of bugs and features they'd like to have, but never got around to.
> 2. It's annoying and disruptive to be interrupted when doing work that requires deep focus.
Steering an LLM also requires deep focus. Unless you want to end up on Accidentally Quadratic or have a CVE named after your project.
How can it? You prompt it, then wait minutes+ for it to come back. It's the opposite of flow state.
I get much better results the more thought I put into crafting my prompt, including using LLMs to help create that prompt. There's definitely a declining rate of return on that time, but thinking about the problem and carefully describing the context can take fairly deep thought. I do think it's in shorter bursts than when doing all of the work, but I get that same feeling of 'bah, where was I?' if I get interrupted while creating the prompt for a more complex feature. On the other hand, I spend a lot less time in flow state while debugging - it's way easier to describe a bug to an LLM (often you can just paste in the exception or link to the error log).
You don't wait. You run multiple independent incremental features in parallel, while also running a code review, which will create the next set of tasks, while you or the LLM think up the feature to add after that.
It isn't exactly flow, but when the prompt comes back it forces me to think. Flow is about getting into a state where I'm thinking, so this is surprisingly similar. The prompt is helpful because it gives me a place to focus: do the proposed changes make sense (this is much smaller than the entire code base), and, given this is done, do I know anything else that was missed?
So you move from a maker’s schedule to a manager’s schedule. Interrupting you does not have any meaningful consequence on your ability to work because at any given moment you are not really working and when your interruption is over the prompt is still just waiting there for you.
Flow comes afterwards, when you verify what it did...
What I do is, I'm always responding to output N while the AI is working on prompt N+1. So we are both always responding to each other's question/answer before last.
https://www.youtube.com/watch?v=CCMiyXM-1_U
https://shreevatsa.wordpress.com/2010/02/07/mastermind/
I'm using my ADHD hyper-focus skills while flagging issues in what the LLM is doing.
Reading 10x more code than before puts me straight into the zone. (In a language that I find interesting: Elixir)
My own process is improving so much that I had only one bug last week, and it was fixed immediately after the error tracker caught it.
But yeah, I feel more tired sooner. So it's one to three hyper-focus zones per day, just like before.
The difference is that I enter faster, and now I'm not afraid of leaving the task and resuming later, since I can just ask for a summary of what we did so far.
I'm using two different models from two different providers to cross check the work tho.
I'm very good with bad smells I guess, after years supervising less experienced developers since early days in my long career.
Many people today just trust whatever shit comes out. Some even brag about it, even famous devs like Yegge.
And requiring review of the result is not a "flow state". Flow state means continuous and uninterrupted focus while actively performing; LLMs block and return with new code or questions after minutes on end. That's the opposite of flow; it's a "let's take a break now, see you in a few minutes" for every interaction.
This is a false dichotomy. Software development has always been about keeping people in agreement, from the customer to the coder, and all the people in between (the fewer the better).
Meetings that increase sync between customer and coder are few and precious.
In large organisations ceremonial meetings proliferate for the wrong reasons. People like to insert themselves in the process between customer and coder to appear relevant.
I personally am fond of meetings with customers, end-users, UX designers, and actual stakeholders.
I loathe meetings with corporate busybodies who consume bandwidth for corporate clout.
No, I don’t need another middle manager to interface themselves between me and my users.
Yes! So much of professional software development is about assisting the nominal job of management—planning and budgeting—rather than users or even business fundamentals.
Why am I awake at 1:00am, ruining my brain and body, trying to get this feature finished before the end of the week instead of three days later? Ah yes, so that we meet our quarterly OKR, and the next quarter's plan that the EM and PM negotiated without me or our customers isn't disrupted and doesn't need adjustment. That would invite reprimand from the director, and the extra work would be terrible for them, I understand.
I'm reminded of this recent thread in which Heroku left the devs in charge and suddenly features that the author had requested for years got implemented: https://news.ycombinator.com/item?id=47669749
Just on a personal note as someone who worked in the game dev industry for far too long and is still suffering for it... if
> Why am I awake at 1:00am, ruining my brain and body, trying to get this feature finished before the end of the week instead of three days later?
is actually you (or some other reader), please quit/find a new job ASAP, even if it means a pay cut. You don't want to deal with back issues, heart issues, weight issues, digestion issues, blood sugar issues - none of that. Please respect your body and your limited time with us. A coworker of mine had a stroke at thirty - that is a life-changing event with, honestly, no real path to full recovery.
For that matter, here's a thread from a few days ago recommending the practice of scheduling status meetings for the purpose of pressuring the attendees to work on your project in addition to their other work: https://news.ycombinator.com/item?id=47906942
What hermit wouldn't love meetings that simultaneously insist that you do more while taking away time to do it, all to avoid adjusting a pollyanna quarterly plan and budget!
Well said.
This matches my experience working at many companies: in most of them meetings were useless, but in a few places meetings were very useful, depending on how the companies were organized and how attendance at meetings was selected.
I have seen projects that had to be abandoned without bringing in any money, despite being executed perfectly according to the specifications. The reason was that the specifications were wrong: the customers had not thought to describe some requirements, and the developers could not ask about them for lack of direct communication, while the middlemen had no idea about either thing - what the customers might require or what the developers might need to know.
Not a false dichotomy. I agree with OP, and I can say for certain that if you are one of the few developers who is "fond of meetings with customers", you are not the type of person OP is talking about, and you are more rare than you think.
I am a former Dev turned PO/PM and now CEO. I can tell you many developers are not fond of those meetings you are fond of, and people like myself don't insert ourselves where we don't belong; we simply join the meeting and have the vital conversation with the customers/stakeholders whose payments make payroll possible, while the developers refuse to.
My team has always commented on and liked that I "shielded" them from the non-technical meetings and distilled customer needs into our kanban, without them having to go to the meeting. While I agree this isn't the "best way" to do things, I simply have never seen a dev team work the way HN makes the role sound ("Dev/Eng and the customer is the only thing needed"). Would love for this to be the case!
Also, for those who think I'm talking down the abilities of my team: we made a company together when we left the huge company we worked for, as co-owners, and even now the same setup is used :)
> you are more rare than you think.
Truth. I'm that person and didn't appreciate how rare I was until I became an EM and learned that most of my team would actively avoid conversations with the customer. Even though I have no way to quantify it, I'm sure it's benefitted my career.
Are those people in contact with the customer able to make decisions regarding the roadmap or feature design? It’s a miserable position to be in front of unhappy customers while having no power to solve anything (which is why I tend to be polite with customer support).
> I simply have never seen a Dev Team work as the way HN tries to make the role sound "Dev/Eng and the customer is the only thing needed". Would love for this to be the case!
I think a lot of HN truly believes that Software Developer is the only important role at their company. Software goes straight from the developer's brain, through his fingertips into the computer, and then on to the online store (run by nobody) for customers to buy. Engineering managers, program managers, product managers, marketers, MBAs, tech writers, QA, lawyers, process people, various admins and liaisons... they all exist to play pointless political games, have distracting meetings, and obstruct the One True Role. Design docs, planning, schedules, e-mails, JIRA, reviews, syncs, exec updates... all are useless parts of a scheme to torture the developer. It should just be "developers developing, and then money comes in from somewhere." This is an exaggeration, but you see these themes all over the comment section.
> I think a lot of HN truly believes that Software Developer is the only important role at their company.
I doubt that. A lot of HN might have believed that some 10 years ago, perhaps, but most of those people have either matured or been driven away by the shift in the discourse.
I was one of the people who used to believe that, but the years of experience have taught me several important lessons that changed my mind. That change in attitude came both from my own failures and from having the rare privilege to work with people who were actually good at those other roles you listed.
> This is an exaggeration, but you see these themes all over the comment section.
And you'll keep seeing those comments, just like you'll keep seeing the comments about how developers are hypocritical divas. Those comments come from people's bad experiences.
Workplace political games are a thing. Unnecessary meetings and documents are a thing. Problematic, unprofessional developers are a thing.
100%. The majority of the posts here are based in a fantasy of how the world should work. They're also highlighting why most devs can't deal with customers effectively. Customers aren't showing up with a clear spec and handing it off while middle managers butt in and ruin the whole thing.
Though I agree most managers are BSing way too much, the reality is that most devs cannot navigate conversations like they think they can, and, like you said, nor do they want to. And that is exactly what the managers do.
I live by these words: “if the client could enunciate a perfect spec, they would code it themselves”. Software development is about helping people get what they want out of a computer. Not what they can specify, that’s asking way too much.
There's an in-between point that I think is better than either, but it can be more difficult to find the right balance: direct contact with internal stakeholders (with the manager still somewhat involved to keep a good overall view and help prioritize / push back / act as a general buffer), while shielded from customers. That's the place I've always preferred.
I don't know how rare it is. I have always found it harder to write software when I don't know the people who will use it or get to see what they feel about it. It's part of the feedback loop.
When I get good feedback it's like winning a prize, and when it's bad it lets me see where we should be spending our time rather than where we perhaps thought we should.
The solution is to work at small companies/startups.
That comes with real tradeoffs, but I've never regretted that path.
> nakedly hypocritical
How is it hypocritical?
If in the old world, the very important process that used up a lot of time and benefited greatly from no distractions was the actual writing of code then interruptions for various ceremonies with limited value other than generating progress reports for some higher ups would feel like a waste of time.
That same person in the 'new' world where writing code is very fast but understanding the business and technical requirements that need to be accomplished is the difficult part would then prioritize those ceremonies more and be ok with distractions while their AI agents are writing the code for them.
It's not hypocritical to change your opinion when the facts of the situation have changed.
Well, it is hypocritical. Hypocrisy is an action or statement that is contrary to a stated value or principle. Just because your values or principles changed doesn't suddenly make you no longer a hypocrite; it just admits that your former opinions are no longer tenable.
I’ve noticed this push to try to clothe hypocrisy in made up virtues like intellectual curiosity and mental plasticity a lot lately. All I can think is that it’s some kind of ego satisfaction play people make when their place in the world is threatened.
Old value: Producing high value software.
How to do it? Focus on writing code.
New value: Producing high value software.
How to do it? Focus on writing specs for code / identifying needs.
I expect there are a lot of hypocrites in the mix, scared for their job. But this isn't a fundamentally hypocritical position - agents are changing the game for how software gets produced and the things that were important as recently as a year ago might reasonably be said to be irrelevant now. Ironically, we might yet see a great software engineer who has never written a program in their entire life. The odds are slim but it is possible now.
Sorry, did people not identify needs when developing "high value software" before? That doesn't seem true to me at all. I took a "Needs Assessment" course in my class of '09 undergrad...
This is shifting the principle/value discussion up to a level where it's meaningless. Let's use a different example.
Old value: Returning value to shareholders.
How to do it? Treat your employees like family and don't be evil.
New value: Returning value to shareholders.
How to do it? Treat your employees like human resources and get away with what you can get away with.
Is this hypocritical? Most people would say yes, but in your framing it's not because we've backed up to the least specific articulation of an underlying principle. It's a species of the motte and bailey fallacy.
Agents may be changing the game for how software gets produced, but all it's really done is switch software developers from being managed to being managers. And software developers trying to square their historic value/principle that management tasks are useless, easy, and ceremonial (to borrow GP's word) tasks that should take a back seat to ~flow state coding~ with their new view that management is an integral, difficult, and requisite part of writing code reeks of hypocrisy.
I've noticed on hackernews in the past year, a certain type of comment. A deep suspicion to first call out a surface behavior, then psychoanalyze strangers with whatever the flavor of the month "deep observation" is.
You can't be a dick on this platform without fancy prose I guess.
Abduhl, the nature of the job has changed; before it was coding, now it is managing the AI coding. What was and remains valuable is delivering value. This principle has not changed.
If your job was only coding then you are the most replaceable of the bunch. Traditional software engineering is a broader domain that, as rightly pointed out, will require you to actually *sit and talk* with the worst communicators you'll meet in your life.
Looking at a slice of most folks' workday and calling it their whole job is in my opinion, incorrect.
> Just because your values or principles changed doesn’t make you a suddenly no longer a hypocrite
Uh yes it does?? What are you talking about.
https://www.google.com/search?q=hypocrisy
Bottom line is the people described as hypocritical in the comment have no principles, but rather feign passion in anything they think other people consider valuable. When devs thought coding skill was valuable, that's what they claimed to be passionate about, when the game changed and communication became key, they suddenly changed their passion. Either the timing is a coincidence, or they are hypocrites.
I don't think switching one's passion on a dime is a valid escape hatch from hypocrisy.
You're trying to turn flexibility and the ability to adapt to new circumstances into a vice.
You're wrong. It's a virtue.
Ceremonies and tickets aren't especially effective for actual collaboration. They're primarily tools for making work legible and controllable to management.
There is a reason (well, many reasons) that, if I'm working on a creative project with somebody outside a company, we would never think of reaching for Scrum ceremonies or Jira.
It is more than perfectly consistent to complain about that while valuing collaboration.
I require Jira for all my work to protect myself from three things (definitely not for productivity):
1. µManagement asking "What have you even been doing?" Now they have a dashboard, and I have a nice record.
2. People who wouldn't tell me problems existed, but would tell their managers they were blocked by those problems. Now the understanding is that if the Jira doesn't exist, then the problem doesn't exist.
3. I use the "On Hold" state of an issue as a clear signal, for them and the managers I add as watchers, that there will be no progress until whatever requirement is met (question answered, etc.). It dramatically decreases response times and means I don't have to nag them. Goes back to #2, where I can point out that they are blocking themselves.
All these things come into existence because people are so bad at collaborating, but really good at pointing fingers.
This is a good way of using a ticket tracker: bottom-up, where engineers, testers, and other stakeholders are empowered to create and manage tickets as a means of communicating about their work and what they are blocked on. It's part of the writing culture mentioned in TFA.
In some places, the ticket tracker works top-down: only the manager creates tickets, and the manager makes measurements about tickets closed, velocity, and so on to assess the productivity of their team.
I think the great divide on JIRA and JIRA-likes often comes down to which culture people have been exposed to.
It's 100% denial/ego. I've been a contractor longer than I'd like and it's the exact same response I see when I join a new team. The team complains they have too much work and can't get anything done, so their manager pulls me in. Suddenly, they don't want to give anything up. I'm actually in the middle of this right now. The team "is swamped" yet somehow, they are able to argue that almost everything I can handle is best handled by them and they don't need help. Fine by me, I'll sit around and get paid. But it smells exactly the same. They don't want to admit that A - they are replaceable and their work isn't that unique and B - they are the bottleneck, not the process or workload.
> A - they are replaceable and their work isn't that unique and B - they are the bottleneck, not the process or workload.
The problem rather is: often good programmers have quite good ideas how these problems could be solved, but for "organizational politics" reasons they are not allowed to apply these solutions.
Thus:
Concerning (B): Because they are not allowed to apply their improvement ideas, they are the bottleneck. But being the bottleneck is not the root problem, but rather a consequence of not being allowed to improve things.
Concerning (A): It is indeed often the case that if you simply let someone else do the work, the code quality decreases a lot and in subtle ways. Good programmers are very sensitive (and sometimes vocal) about that - unlike managers.
Additionally on A: the people who will be stuck maintaining this contribution for years and years have a different view of the pain.
Pushing a 90% solution through is a ‘win’ for the coder who is leaving, and hurts everyone on a continuing basis. It’s bad accounting, and lets the consultant look good for making the team perform worse (and look bad later).
And, IME, if that 90% solution needs a 100% rewrite after 40-80% of the budget has been burned on bugs and error chasing? What once was a bit behind is now way behind, with staffing issues. Sunk costs don't create extra budget.
Do It Right The First Time doesn’t always apply, only mostly always. Some people are insecure and territorial, yeah, but some know what their job is.
No matter what you do, entrenched engineers will make SURE they will be the only ones maintaining everything until their retirement, because they will make life impossible for everybody else until they leave.
Entrenched engineers don’t want to you to alleviate or god forbid share the pain. Pain is good for employment security. And if the ship goes down, they’ll make sure they’re the last one to get fired, because there’s nothing the entrenched engineer fears more than having to job hunt.
And I totally respect that, I get it, I really do. But it's really obvious when people are being territorial and any contractor will tell you this happens every time. I suspect that a lot of the times, I'm hired to "teach them a lesson" in that "Hey, velocity sucks and I'm hearing a lot of whining, so if you don't like doing it, this guy will" and people snap into shape.
Unless the team are seriously bad developers, many times it's the manager's fault. As a hired consultant, you often enjoy freedoms that the team is lacking. Having been hired as a consultant myself, one of those is being spared the meetings and not having to worry about office politics.
Well who are you? Why should they trust you to actually complete a task and not dump unfinished work on them when your contract is up?
The manager didn't do the work to figure out what a contractor should do before hiring one. Why would they expect that org to plan the exit if they didn't plan the entrance?
Behavior shouldn't be surprising, no?
I mean, they hire me for a reason, whatever that may be. I want to do a good job and carry out the task because I want to get hired again by them or whatever agency is pimping me out. I've seen a lot of shit and that's my value. Whether or not the team wants to help me succeed is their political thing. And that's not invisible to management either.
You're not looking at it very empathetically. You're disregarding the concerns I floated; you expect the team that feels underwater to now stop everything and reshuffle the work scheduling to fit in a wild card, all while you're calling them bad and replaceable.
I mean it really sounds like you're not on their side at all. It's their job to help you succeed, apparently. From what you've said already, you don't care about the project either. You're happy to waste time and money. It sounds like they're right not to trust you.
If he was hired to do a job, it's not on the team to "trust" him. It's on them to incorporate him as a resource. I'm sorry, but speaking strictly from a productivity standpoint, we're not here to be empathetic; we're here to deliver value to the organization.
If I'm the manager of a team that's struggling and now also sabotaging additional resources because they haven't got the right warm and fuzzies, I'm going to be looking to have some difficult conversations. I'm also going to be very critical of anyone who floats a lack of "trust" as the blocker without some concrete evidence to justify it.
Whatever concerns they might have is not for the contractor to address. They are between them and their own management who deemed them unable to deliver sufficiently.
Well this is all highly hypothetical but my point is that there are valid reasons to not entrust a contractor, who is only around temporarily, with long standing features. Not because they are nefarious but because they, by definition, will not own the feature ultimately.
Resistance is also not the same as sabotage. My assumption is that everyone is acting in good faith from their own perspective. An immediate issue I see is that the contractor was brought in because folks were looking at the calendar and not the tasks. Now the team is being pushed to carve out tasks. If shovel-ready tasks are identified first, almost certainly things go smoother. You're not context switching everyone. It's far less chaotic.
What you seem to not understand is that empathy is going to move the team forward and deliver. Jumping to bad faith immediately is likely not the fastest way to a solution. If someone is struggling, it's useful to understand why and address those problems. It's often not because they're bad.
That's not at all what I'm saying and I don't know where you're getting that from. I'm not trying to stop anyone. I'm trying to be an extra paddle. I'm happy to do nothing iff I try to help but get boxed out. I have no problem just riding in the boat and saying "hey, there's rocks up there" or "This seam looks leaky, maybe patch it". I'm not here to fight people or egos.
This sounds like my ideal job. How do you land such gigs?
This seems to confuse cause and effect.
True, most engineers hate meetings because, as you rightly point out, there can be too many "types" of meetings: team meeting, issue tracking, backlogs, design reviews, triage, etc. Out of the 7-8 working hours, a senior engineer might be in meetings for 4-5 hrs. Then they bitch and moan that they are spending too much time in meetings and not enough time coding. A reason for that is that projects often have unclear or even changing requirements along with tight deadlines.
Sure, today with AI, code can be produced faster than ever. But the requirements being unclear or always evolving hasn't really changed. Today many non-engineers assume that what they have in mind is straightforward and can be created by AI. That is not true. Unclear requirements lead to unclear results. Garbage in, garbage out. Getting the right input is still the most important part of software. That has not changed. That is the collaboration piece of software.
And sure, within the software community there are folks who don't like to collaborate even on requirements; they are more than happy to follow someone's lead. They like their manager/architect to "shield" them and do these tasks for them. These silent-warrior-type engineers are going to be the most impacted by AI coding. Because they have no visibility, even if they are 5-rated coders, there is always going to be: "But AI can produce code. What else can you do if you won't even collaborate?"
So, it's not very cut and dried. Engineers come in all shapes and sizes.
The bitching was about meetings and ceremonies that took away the little time left, while asking for more features to be implemented, or for work to be revisited before it could even get completed.
No developer was ever unhappy to communicate. But when pointless communication occupies too many long hours, interrupting the useful progress of understanding what could and should be done (by coding, yes, experimenting, getting a grasp of the beast), then yes, they became unsympathetic.
> No developer was ever unhappy to communicate
I've worked with engineers all over the spectrum in terms of their styles, beliefs, and preferences... and some of them are frankly not very interested in getting out of their comfort zone (like heads down, writing code and being alone), and optimizing for the group rather than themselves.
So yes, they are in fact unhappy to communicate (in a general sense), because of how tedious and uncomfortable communication often is.
I'm not saying it's irrational or immoral, or not driven by the types of past poor experiences you mention, but in my experience it's often pretty obviously suboptimal and highly frustrating to work with.
> the same kind of engineer
Who?
There are millions of software engineers around the world. It's quite likely that they have a few different opinions and points of view!
But it is written there, and GP was quite specific:
>the same kind of engineer, who throughout my career have constantly bitched and moaned about team meetings, agile ceremonies, issue trackers, backlogs, slack, emails, design reviews, and anything else that disrupted the hours of coding "flow state" they claimed as their most essential and sacred activity
Seems pretty clear to me.
I doubt the GP has gone back through their career and checked that each person who thought there were too many meetings has now made the switch they're being accused of, though.
Why does that matter?
Because a claim was made about a group of people.
Exactly. Does it matter if one member of that class does not follow the trend?
https://hn.algolia.com/ - search for "actual work" This has long been a topic on HN.
What type of engineer, who until a year ago - because of AI, apparently - is suddenly no longer concerned about code? Personally, I'm just as concerned about code, because AI has not changed the fact that it still takes a really long time to develop stable, secure software (i.e., if you're making software to do ambitious things). Nothing about modern AI tools eliminates the need to get in the zone; using AI to amplify one's engineering skills lets us solve the next problem faster, but in software there are unlimited problems.
Just because I hate those 'team meetings, agile ceremonies, issue trackers, backlogs, slack, emails, design reviews, and anything else that disrupted the hours of coding "flow state"' doesn't mean I don't understand how important they were and are. I moaned about them before, and will continue to, but they were always important. I have learned the hard way more than once what happens when you just sit at a keyboard and write code. One time I lost my job because the code I was writing was so far out from what the company needed; the next time I realized what was happening in time to leave first, and only after I was gone did they realize that what I was doing really was important and they made me a good offer to come back.
I posit that getting into a coding flow state isn't just about producing code. It morphs and develops the problem space in the engineer's head. It helps them realize where the spec is lacking. It familiarizes them with the capabilities of the codebase so they can speak confidently about what future changes will entail.
I understand your sentiment, and there’s absolutely some truth to it, but I don’t think the path forward is throwing more management resources (or layers) at the problem. And I don’t think the management technology industry is the answer either.
I think the solution will be small (1-5 person) teams where product and engineering sit next to each other and have clear authorization to launch directly to prod at their discretion. The gripes about performative work-tracking mechanisms and the realization that non-tech considerations are now the bottleneck are not mutually incompatible.
I don’t think that’s the issue. The problem is that with software you don’t know what a user might like until something is in production.
This is probably true of other fields too, but rolling back changes there is expensive (construction, for example).
With software, though, you can put things out and iterate. This is not to say identifying what's needed isn't important, but you had roles where the product owner is getting feedback on the previous iteration while the devs are working on the current one.
With code assistants this loop collapses a LOT. Suddenly it can be a lot easier to define better what you need and in near real time also gauge how it would operate.
Both are true “leave me alone” and “you don’t know what to build”. Because the people identifying what to build aren’t the people doing the building.
I'm not in the group you described so I don't want to speak for them, but I can empathize because there are some things that are meaningfully different with AI:
1. Increased velocity makes rituals like daily standup and other comms relatively infrequent compared to how they used to be, so there are fewer touch points now. For example, a daily standup might have occurred several times while someone worked on one feature ticket, but now they can bang out multiple features a day plus some bug fixes, and still only have the daily touch point.
2. AI-written code needs to be thought through and planned a lot more than human-written code, because the machine doesn't go through the same discovery/writing process that a human goes through. It looks superficially similar, but is subtly and importantly different.
3. Without solid planning and requirements definitions, it's a lot easier for AI to go off the rails and do something you don't ultimately want. That wasn't true for humans writing code because they have a lot of project context knowledge that helps a great deal. AI obviously doesn't have this.
4. With the intense speed of devs now thanks to AI, it's far easier to step on each other and end up with at best merge conflicts, at worst significant deviation in solutions, and often major refactors/overhauls that can make the codebase feel foreign and confusing to devs. Most people have had the experience of stepping away from a project and coming back after a refactor had been done, and realizing that they don't know where basic things even are anymore. It can be unsettling and add a lot of friction.
5. AI can be pretty good (and very fast) at producing documentation and plans, so the "cost" of planning before coding is a lot lower now. That changes the equation of "what is the most important thing to spend my time on to iterate quickly".
Just because concurrent design, QA, research, etc. push out the Gantt chart doesn't mean your meeting isn't useless.
In fact, deep pipelines don't even need bottlenecks to take time. Even so, any given meeting could still be a waste of time, depending on the meeting.
You’re describing a multitude of different people with a variety of viewpoints. It’s also smart to change your mind when the environment changes; code being easy to write is a decisive shift.
Code may be easy to write once you know what the code needs to do.
The people I know like this are people I consider to be "advanced juniors". They are held back by their inability to work with other parts of the business and understand customer needs. In order to be successful they need to be spoon fed requirements. What I've seen from the limited sample in my orbit is that they've actually doubled down on AI and are creating little private worlds of agents and further isolating themselves from the business, not talking about how great collaboration and such are.
Are my standards too high to expect juniors to also be able to work with other parts of the business and ask for additional requirements if needed?
No. "Plays well with others" is part of being a functional member of any engineering team that I'm on.
The activity that needed and still needs to be protected is problem solving:
- Understanding the problem at hand
- Putting all the pieces together so that they solve the right problem the right way
- Making sure that the solution facilitates future extension and doesn't lead to a ball of mud two months from now... Unless stakeholders want it to be quick and dirty, then making sure they understand the costs/risks
- Planning execution in a way that is incremental and testable, so that we can build confidence that the system is doing what we expect of it
- if you are in a team, figuring out common dependencies so that those can be done first and unblock parallelism on execution.
Once all that is done and documented, writing the code was easy and fast.
What would sometimes happen is that some unexpected detail or dependency would be discovered as part of the writing of the code and then you are back at the beginning, figuring out how to make everything fit together.
I find that the main confusion comes from people not realizing that those are two different activities and instead calling it all "writing code".
That's just scarcity based economics doing its thing.
I don't think it's hilarious, I think it's rather sad to see people so easily trampled by the whims of an irrational market. Generally speaking, we benefit when people stick by their values, and yet we play this awful game where winning means abandoning our values in pursuit of "value" whatever that is.
/s Oh noooo, it's like they're turning into managers... Now that a machine can do their job better than them, they've become as unimportant as you always thought they were, always pretending to be banging a keyboard when it used fewer brain cycles than highly important work like posturing in a meeting did. Anyone can bang a keyboard; even a machine can do that now, so it can surely never replace the important work of you having to attend 10 meetings. Let's replace all of them with machines, and us meeting lovers can run the company with the machine-produced work that we have no hope of understanding.
I agree with this sentiment https://news.ycombinator.com/item?id=48033534
Meanwhile people who bitch and moan about “other engineers” all the time haven’t changed at all. How refreshing.
I've had seniors tell me my entire career that writing code was the easiest part of their jobs.
True
I've been selling and managing my own projects for over six years, so I don't count myself in the antisocial camp. But LLMs haven't changed the fact that I like to have at least five hours of deep work every day.
Just because you use LLMs doesn't mean you don't need the "flow". Reading code SUCKS, getting into the flow is harder than ever.
Unless you sign off on a Looks Good to Me PR and go loiter by the kombucha machine. Then you have other problems.
There's nothing hypocritical about preferring the part of the job that isn't the bottleneck and wishing one could spend more time on enjoyable things. Nor is code being called "inconsequential" simply because it can be done (more) easily.
We've had systems that induce boilerplate before, and we've had systems that try to cope with that boilerplate before.
Considering the process to be tedious is really not the same thing as being antisocial.
I do need to point out that not all meetings are equal, and the "hypocrisy" you are seeing may come from different groups of people.
I feel attacked. I still dislike most team meetings, agile ceremonies, etc. Slack and emails give me anxiety. A 30 min meeting will disrupt me for 90 minutes. But, yea, the code was never the bottleneck. Except maybe when I worked at a startup. All of the above are true.
Personally, I find it hilarious that the same people at my company who can't be bothered to write down detailed requirements, and who are constantly fighting any effort to do research, write technical documentation, or pay down tech debt, are now trying vibe coding and struggling to produce anything useful. Oh, you don't understand why you aren't getting the results you expected? Maybe you should try thinking more deeply about what you expect before you rush your engineers or, now, your agents.
Um… how do you get those requirements if Slack / email give you anxiety? And meetings are disruptive?
I am genuinely curious. I understand where you are coming from, you want to maintain flow state.
How does one effectively load the funnel to support flow state?
Jira tickets? Requirements documents in some kind of ALM tool?
Isn't solving problems, instead of blindly implementing a high-level description of the solution, your job as a developer?
There are problems you can solve, and problems that you cannot. Depending on the exact details GP may have been slacking for not solving problems, or correct in saying he can't do good work because he shouldn't be solving the problems alone.
Two things can be true at the same time. Yes, those meetings are horrible, and plenty of times they're useless and can be summarised as "why wasn't this an email/slack message", but also plenty of those same meetings can equally be extremely important.
In fairness, given the context those meetings give, it stands to reason that giving that same context to an AI, it can, in theory, still do the same thing as an engineer. But those meetings still need to be had.
Yea, when you have multiple people doing anything, communication has to happen. It's not optional. As soon as your company hires developer #2, you need to communicate. As the team sizes get larger, 1:1 in-person conversations become less important and you need E-mail. As the team sizes get even larger and non-developer stakeholders become more numerous, meetings creep in. These things are not developer-torture devices. They are happening because your company decided that the product needs to be built by more than one person.
If y'all can find that company where the product is entirely developed soup-to-nuts by a single lone-wolf developer, without any other stakeholders or involved parties, by all means join that company! And tell HN about it--many of us would join it, too. But in the real world, development is a messy people-soup and you have to communicate.
Or the person just likes to lock into flow states at the point of maximum leverage. Previously that was coding. Now it’s commanding agents.
Is it hypocrisy or learning? A more charitable take - it wasn't too many years ago that I also decried the need for all the collaboration. But as I advanced in my career, that worldview just didn't hold up. In this case, maybe the introduction of agentic coding has accelerated that learning because now 'regular' engineers are forced to take on coordination roles.
[With that said, the specific implementations of such collaboration are often still very painful and counterproductive...]
They sound like very important people no matter what the circumstances are, haha.
Having "house rules" on a team that new members must agree to follow tends to flush such people out, and they usually exit on their own when their shenanigans get repeatedly called out as violations. Gotta introduce the rules in the interview process and get agreement after they join. Catching them out early is the key.
We had an intervention on one hard case and he rage quit the next day. I don't know why people do that, it's a small world and people talk.
There are 2 camps: you let people earn a living, or you let investors/executives become richer every year to the detriment of workers. I don't care about the medium; I'm not with the big fish.
I've got nothing to add to the discussion but want to take a moment to appreciate your ability to construct long sentences. It flows beautifully.
The amount of cognitive dissonance I'm seeing on HN right now is concerning.
I'm seeing both these beliefs right now:
• Belief A: "I am a skilled professional whose value lies in my unique ability to solve complex problems."
• Belief B: "An LLM can now solve many of these problems in seconds for pennies."
This thread is great at showing how people are rationalizing by moving the goalposts, so to speak.
"both these beliefs" and you label one a Belief and the other a Fact.
That's an opinion for sure, and a very shallow, general opinion. Some people like solving problems, sometimes via code, while others tend to hide behind the 'Collaboration' banner to help their own career progression. Both are legitimate tracks. To dismiss one is to make the other appear 'non-good'. But perhaps data can be furnished as part of this post to support either as 'better'.
Did you ever try no meeting days and other methods to avoid interrupting thought workers?
Because even if someone is writing design documents you shouldn't be interrupting that process regularly either.
It's certainly the case that the collaborative ceremony can be mismanaged, and that is frustrating when you need time to implement. I don't expect that complaint to go away; those who are using AI heavily will replace it with not having enough time for prompting.
But I have also worked with some who refused to participate in collaboration, they felt their time and ideas superior to others, and there's no excuse for that.
Before, meetings (aka. coordination) bottlenecked the coding.
Even if coding was solved, meetings could still be the bottleneck.
You think spending more time on meetings is going to solve anything?
I have seen this play out too IRL and I am really enjoying the schadenfreude.
My sense is that it's the opposite. The people who complain about meetings, managers, and methodologies also complain about agentic coding. The people who are excited about frameworks, methodologies, and project management tooling are excited about agentic coding.
Sometimes code is the bottleneck, other times it's not. At a large company, it's usually not a bottleneck; for fixing bugs, or for an individual app developer, it's more so.
They are still anti-social. But they see the “social” as a way to feed the AI better, to make better code.
The focus is still the code.
I'm not going to comment on the likely "Goomba fallacy" at work in your comment, but I just want to note:
> team meetings, agile ceremonies, issue trackers, backlogs, slack, emails, design reviews
Are frequently not:
> [important] collaborative activities
I've always been someone who disliked distractions from my "coding 'flow state' they claimed as their most essential and sacred activity to be protected at all costs" (because, you know, I was getting paid to write code and that's the only way I could actually get it done), but I also loved genuine collaborative activities (as in a small number of people, interacting with each other in a high bandwidth way, to figure something out or get on the same page).
A lot of the activities you explicitly mention are usually literal garbage for actual collaboration.
> (because, you know, I was getting paid to write code and that's the only way I could actually get it done)
I'm going to assume you were getting paid to build software that solved problems and created value for your customers and stakeholders. Writing code has always been just one activity that's part of the job, and developers forget that and make statements like this! That's the parent poster's point. I'm not saying it's not an extremely important part of the job, or that people don't often collaborate poorly in ways that take away from the sacred deep work time, but framing it as "I get paid to do X and not Y" is just a highly limiting way to look at or talk about the role.
>> (because, you know, I was getting paid to write code and that's the only way I could actually get it done)
> I'm going to assume you were getting paid to build software that solved problems and created value for your customers and stakeholders.
That's a distinction without a difference. At least historically, I was "paid to build software that solved problems" and I was to do that by writing code. If I didn't write code, and enough of it, I'd be fired. Getting my flow state disrupted for no good reason was something I'd resist.
Also agile ceremonies are a drag, literally becoming the thing agile was originally supposed to be fighting against (not that agile is gospel, I've always disagreed with some of its practices). They're not a good reason. And I also mentioned an actual good reason. I should also note those meetings I was referring to positively were almost always with users, not tech people.
> Writing code has always been just one activity that's part of the job, and developers forget that and make statements like this! That's the parent poster's point.
I wasn't addressing the parent poster's point per se (and I noted that and why), just noting that a lot of the "collaborative" activities he cited were often not that collaborative, and the shade he was throwing at people who were unenthusiastic about participating in them was probably unwarranted and misguided.
tl;dr: OP needs to have more empathy. There are better ways to thread the needle of his observations than what was on display in his comment.
It's an astute observation but overstated. There are just as many programmers who view their activity as too sacred to consider using an LLM, even for relatively easy, predictable, or disposable work.
Generally, groups of people aren't homogenous.
The contradictions you see could mostly be variations across individuals rather than hypocrisy within individuals.
(Doubly so for vaguely defined groups, like "kind of engineer".)
I think there’s some kernel of validity in this comment, but the unnecessarily aggressive tone loses it. This just comes off as bitter.
That's a straw man at the root of which sits a conflation of at least two types of meetings:
(A)
Meetings where we spend an hour on meaningless and meandering style preferences: whether naming two users in an integration test `u1` and `u2` vs. `user1` and `user2` is acceptable, and whether whoever did the former is so hopeless that they should drop all computer work and go work on a farm.
(B)
Higher-level meetings where I can communicate with PMs and customers and CEOs almost on their level, e.g. "Does it make sense for us to have primary/co-borrower roles in our credit products, or are all sides equally liable?"
---
With the advent of really good LLMs, meetings of Type A are nearly gone and meetings of Type B have increased meaningfully. I am very happy with that new state of affairs. Are you not?
Textbook example of goomba fallacy.
Comments like these are why I still come to HN. Absolute kino.
They were right back then because these tools didn't exist yet, and they're right today because they do now.
What even is your point? Are you... mad because the truthiness of a statement can change over time?
> I mean, they're not even wrong, but the nakedly hypocritical attitude of people who, until a year ago, were the most antisocial and least collaborative members of any team they were on is still extraordinary.
I don't think there is any hypocrisy. The error in the analysis is assuming both conflicting opinions are held by the same person. They aren't.
Collaborative activities and process being important is NOT mutually exclusive with many meetings being useless, agile ceremonies being time-wasting, and design reviews being used as a place to pontificate about crap.
NONE of the activities you mentioned are activities that lead to what the article talks about: a well-designed spec.
Just look at what they write. There is a correlation between the Agentic Multitasker and the type of person who wanted results and didn’t care about the coding in itself. That’s what they themselves keep writing.
They are not the same people.
> It's hilarious ... their most essential and sacred activity ... suddenly, and with no hint of shame ... the nakedly hypocritical attitude ... still extraordinary
Calm down the hyperventilating for two seconds, look around, and you’ll immediately see examples of the same group of people who now biTch aNd mOaN about how agentic coding is killing what they love about programming.
It’s interesting to see people either gloat or get incensed at the nerds who like computers in the context of these developments.
But the flow state wasn't just about typing code. The flow state was about understanding the problem, about loading it into your head so that you could "walk around in it" mentally, so that you could figure out that what really needed to happen was that module X needed to add a getter to value foo, that module Y needed to get foo and make a change based on the value, and that the key to making this all work was to add a way for Y to access X that fit within the existing architecture. That took focus, far more than implementing the pieces did.
[dead]
Yes this exactly, it's getting ridiculous at this point.
It's precisely because I get swamped with all the non-coding work that agentic coding works so well. And in multiple ways.
- it lets you get back in the flow faster (unless you were used to writing out your inner thinking monologues and reasoning to get yourself back to speed when you come back from a meeting).
- it lets you move faster and take on more on your own, meaning fewer people needed on the team and less communication/syncing/non-coding overhead.
If you're objective about it, AI coding is going to be amazing for individual productivity. It's probably going to fuck us (developers) over with the reduced demand, lower bargaining power, etc. But just on technical merits it's a great productivity tool.
The models are still not better than me at coding and handholding is required, but the speedups are undeniable, and we're long past the threshold of usefulness. So far all the contrarian takes are either shallow/reflexive pushback because people don't like the consequences, or people working in niche stuff where LLMs are not that great yet. But that has been shrinking with almost every release - in my experience.
I know everyone here writes cutting edge algorithms that were never encountered in the training data, their code is hyper optimized realtime bare metal logic that's used in life or death scenarios and LLMs are useless to them - but most of the stuff I do day to day is solve problems that have been solved before, in a slightly different context. LLMs are pretty good at that.
I hate meetings when they're mismanaged, which is often. I like a good meeting. Probably what most swes would say.
I think probably both things can be true. That all of those things can be actively harmful when they distract the most productive coders from coding, and become more useful when time at a keyboard isn't really the constraint for producing code any more and coordination becomes a more serious problem.
The archetype of the "jerk engineer" is over, because it turns out coding isn't all that valuable anymore. We now need "engineers" who understand much more than coding.
THIS COMMENT IS GOLD.
Another example I can point to is software security. For context, I've built and sold two edtech companies that taught enterprise developers about software security. It didn't matter how good the training content was. Our product replaced boring appsec video training with interactive labs, vulnerable code snippets to hack and fix, gamification, leaderboards, whatever it took so they couldn't complain about having to watch boring videos. The completion rates still sucked, because they just didn't care, regardless of how hard we tried.
Now, post-AI, my LinkedIn is full of blogs and think pieces about how important "software threat modelling" and "cybersecurity" are, and how "coding was never the hard part." Suddenly, threat modelling, something only a tiny fraction of companies actually practice, is being framed as the real challenge, and a deep understanding of OWASP, secure design, vulnerable dependencies, and secure architecture is the real bottleneck... lol
What? Coding was the escape from what that hypothetical engineer of yours disliked the most. Now there is less of it, and AI hypers keep yapping about the job no longer being needed. Meanwhile, it's just the fun part that was optimised out. Working hours stay the same, so it's more of the unfun activities. The job is worse, but we're told it's "solved". Bitching more makes sense, no?
I would say that, in general, the number of people who pivot like that is low.
Similarly, the set of open source people who previously maintained a hardline programming-meritocracy stance and have now pivoted to AI and market AI is almost exclusively limited to those whose companies are working on AI products. The good ones in that space are decidedly less than 1% of all good ones.
Welcome to humanity, my friend.
Also, expect harsh and rude reactions when pointing to big issues that are crystal clear in the middle of the village. Not all truths are warmly welcomed, especially when looking elsewhere feels more comfortable in the immediate experience.
Take care and don’t worry too much: the journey’s short, so remember to also enjoy the good parts.
no, these meetings are still hot garbage.
half the time you’re going to discover the right decision / path while you’re coding.
focus time went from hammering code to figuring out how to solve the problem. PRs are now how we exchange ideas. meetings are still productivity theater.
Looks like this comment is touching a nerve. This community is progressing from "AI can't write code", to "Well, AI can write code but it's not really about the code". I wonder where the goalposts will be moved next?
The portion of the community that unironically thinks AI is good enough now is mostly managers and non/semi-technical people, plus engineers who do not engage in critical or complex problems. HN has always had too much of the velocity-alignment-synergy class of professional talkers; it's just so much more obvious now that they feel emboldened in false confidence.
No hackers on Hacker News? Oh my oh my
I've been paying attention to that line of division and came to the same conclusion. We're seeing the "tiers" diverge as AI becomes more capable. As the saying goes, you don't know what you don't know.
Ah yes, the good old No True Scotsman fallacy, where we smugly proclaim that anyone who thinks AI is good now must be working on unimportant and easy things.
Ah yes, the (please copy & paste this message into an LLM to get it to generate a novel response in these parentheses)
My comment from 3 years ago is still something I agree with: https://news.ycombinator.com/item?id=34925145
There's some of that, but more often it's developers whose arguments are a year behind the frontier models or, just as common, they're dramatically overstating their abilities.
It's an inherent tension that every discipline has to wrestle with. The most experienced developers are in the best position to evaluate where LLMs are, but those who are the loudest about their own abilities generally aren't in this camp. Humility tends to come with experience, and arrogance tends to come with inexperience.
Conversely there's a massive amount of money being thrown around biased in favor of inflating what LLMs can do compared to humans.
Both your error and the OP's error lie in imagining that the same people are saying both things. The "community" fallacy, which has been around for about 10 years now, pretends that people with something in common (e.g. "uses HN") are somehow a community that thinks identically, and it is completely wrong.
Actually, it's some of the same people. I won't name names, but there are a lot of AI skeptics on this site who loudly and prominently comment on every AI story. And if you look at their posting histories you'll see the exact type of goalpost-shifting the parent commenter is talking about.
You see it elsewhere as well. There's now a cottage industry (with visible members like Ed Zitron) who have made a career out of creating and selling anti-AI content. At first they were complaining that AI lies constantly. As AI got better, they shifted to other talking points.
This community hasn't agreed on either of those things, just like it never agreed on good coding practices.
My opinion since college (8y ago) was that the best engineers are the ones who treat everything as halfway a people problem, even in low level code.
LLMs have been getting a lot better at coding.
If the "goalposts" represent what people generally think LLMs are capable of, they should be moving, right?
And complex, multi-part, long term efforts like building software and software companies always have numerous obstacles. When one is cleared, you wouldn't expect there to be no more, would you?
Your tone is complaining, but I just see people working in reality.
Is it even a problem that so called goal posts are moved?
That's life.
Life changes and us along with it.
"Who Moved My Cheese?"
Exactly goalposts need to keep moving
I think veteran engineers have always known that the real problems with velocity have always been more organizational than technical. The inability for the business to define a focused, productive roadmap has always been the problem in software engineering. Constantly jumping to the next shiny thing that yields almost no ROI, while never allowing systemic tech debt to be addressed, has crippled many companies I have worked at in the long term.
> The inability for the business to define a focused, productive roadmap has always been the problem in software engineering.
Agreed, and I also agree that most developers come to this realization with time and experience. When you have a clear understanding of business rationale, scope, inputs, and desired outputs, the data models, system design and the code fall out almost naturally. Or at least are much more obvious.
For veteran engineers that might be true. But for a junior engineer pre-AI, velocity has always been technical. I know junior engineers who, after a whole year of writing C++, still don't grok std::unique_ptr; they consistently have the least velocity on their teams. When I used to write performance reviews for junior engineers, their performance really was dominated by their velocity, roughly measured in lines of bug-free code written within a time period. A good junior engineer would be given a clearly defined feature and write good code quickly, whereas a weaker junior engineer would be given the same thing and write code slowly, or write buggy code quickly that required much more work debugging and rewriting.
> [O]rganizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.
— Melvin E. Conway 1967
- Systemic tech debt is now addressable at scale with LLMs. Future models will be good enough to sustain this; if people don't believe that, I would challenge them to explain why. First consider whether you understand scaling laws (e.g. Chinchilla) and how RL with verification works fundamentally.
- I completely agree with you that the fundamental limitation is the business being able to coherently articulate itself and its strategy.
- BUT the benefit now is you can basically prototype for free. Before, we had to be extremely careful with engineer headcount investment. Now we can try many more things under the same time constraints.
The problem with tech debt is not that it is some poorly designed code in a few repositories that can just be changed. True tech debt is the kind that requires significant architectural changes across many systems and is almost always coupled with major data migrations. You need the rest of the business to agree that you want to invest all that time and energy to fix a problem someone else created 10 years ago. You likely will also need other teams to set aside time on their own road map to address it. You also might need customers to change what they are doing because if software lets you do something, you can guarantee that someone has learned to do it - even if that 'something' was actually a bug.
LLMs don't solve any of those problems by themselves.
> BUT the benefit now is you can basically prototype for free.
But.. so can your competitors. And that changes the value proposition.
How do you mean?
> systemic tech debt is now addressable at scale with LLMs.
Is there any reason to believe this? I've only seen the evidence of the contrary so far.
My experience with AI coding aides is that they, generally:
1. Don't have an opinion.
2. Are trained on code written using practices that increase technical debt.
3. Lack in the greater perspective department, more focused on concrete, superficial and immediate.
I think I need to elaborate on the first and explain how it's relevant to the question. I'll start with an example. We have an AI reviewer and recently migrated a bunch of the company's repositories from Bitbucket to GitLab. This also prompted a bunch of CI changes. Some Python projects that I'm involved with, but where I don't have much authority, switched to complicated builds that involve pyproject.toml (often including dynamic generation of this cursed file) as well as integration with a bunch of novelty (but poor quality) Python infrastructure tools used for building Python distributable artifacts.
In the projects where I have authority, I removed most of the third-party integration. None of them use pyproject.toml, setup.cfg, or any similar configuration for a third-party build tool. The project code contains bespoke code to build the artifacts.
These two approaches are clearly at odds. A living, breathing person would believe one or the other to be the right approach. The AI reviewer had no problem with this situation. It made some pedantic comments about style and some fantasy-impossible error cases, but completely ignored the fact that, moving forward, these two approaches are bound to collide. While it appears to have an opinion about the style of quotation marks, it doesn't care at all about strategic decisions.
My guess as to why this is the case is that such situations are genuinely rarely addressed in code review. Most productive PRs, from which an AI could learn, are designed around small well-defined features in the pre-agreed upon context. The context is never discussed in PRs because it's impractical (it would usually require too much of a change, so the developers don't even bring up the issue).
And this is where the real, glacier-style deposits of tech debt live. It's the issues developers are afraid of mentioning because they understand they will never be given the authority and resources to deal with them.
You are not wrong about anything you’re saying but like I said this misses the forest for the trees. I’m talking about like the next ~2 years. There is a common idea that we don’t understand this technology or what will happen performance wise. We know a lot more about what’s going to happen than people think. It’s because none of this is new. We’ve known about neural nets since the 40s, we know how RL works on a fundamental level and it has been an active and beautiful field of research for at least 30-40 years, we know what happens when you combine RL with verifiable rewards and throw a lot of compute at it.
One big misconception is that these models are trained to mimic humans and are limited by the quality of the human training data, and this is not true and also basically almost entirely the reason why you have so much bullishness and premature adoption of agentic coding tools.
Coding agents use human traces as a starting point. You technically don’t have to do this at all but that’s an academic point, you can’t do it practically (today). The early training stages with human traces (and also verified synthetic traces from your last model) get you to a point where RL is stable and efficient and push you the rest of the way. It’s synthetic data that really powers this and it’s rejection sampling; you generate a bunch of traces, figure out which ones pass the verification, and keep those as training examples.
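The rejection-sampling loop described above can be sketched in a few lines of Python. This is a toy illustration, not any lab's actual pipeline: `generate_trace` and `verify` are hypothetical stand-ins for sampling from the model and for the reward verifier.

```python
import random

def generate_trace(problem):
    # Hypothetical stand-in for sampling a solution trace from the
    # current model; here it just guesses one of two candidate answers.
    return {"problem": problem, "answer": random.choice([problem * 2, problem * 2 + 1])}

def verify(trace):
    # Stand-in verifier with a checkable ground truth: only traces
    # whose answer doubles the problem pass verification.
    return trace["answer"] == trace["problem"] * 2

def rejection_sample(problems, samples_per_problem=8):
    """Generate many traces per problem, keep only those that pass
    verification; the survivors become training examples for the
    next round of fine-tuning."""
    kept = []
    for p in problems:
        for _ in range(samples_per_problem):
            trace = generate_trace(p)
            if verify(trace):
                kept.append(trace)
                break  # one verified trace per problem suffices here
    return kept

training_set = rejection_sample(range(10))
assert all(verify(t) for t in training_set)
```

The key property is that every example that survives into `training_set` has passed the verifier, which is why the quality of the verifier, rather than the quality of the original human data, bounds what this process can teach.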
So because
- we know how this works on a fundamental level and have for some time
- human training data is a bootstrap it’s not a limitation fundamentally
- you are absolutely right about your observations yet look at where you are today and look at say Claude sonnet 3.x. It’s an entire world away in like a year
- we have imperfect benchmarks all with various weaknesses yet all of them telling the same compelling story. Plus you have adoption numbers and walled garden data that is the proof in the pudding
The onus is on people who say “this is plateauing” or “this has some fundamental limitation that we will not get past fairly quickly”.
> look at say Claude sonnet 3.x. It’s an entire world away in like a year
In the area I work I find them to be of very little value both then and now... I see no real difference. They help in marginal tasks. Eg. they catch typos, or they help new programmers to faster explore the existing codebase.
So far, I haven't used a single line of code generated by AI, even though I've seen thousands. Some of them worked to draw attention to a problem, but none solved it successfully. It was all pretty lame.
I see no reason to believe it's going to get better. Waving hands more forcefully isn't helping, there's no argument behind the promise of "it will get better". No reason to believe it will...
But, more importantly, the AI is applied on a level where really important things don't happen. It's automating boilerplate work. It doesn't make decisions about the important parts. Like, in the example above, the AI is not capable of choosing a better strategy: use pyproject.toml or write code to build Python packages? It's not the kind of decision it's called to make and nobody sensible would trust it to make such a decision because there isn't a clear right or wrong answer, only the future will prove one or the other to be the right call.
Any competent engineer should understand that engineering is just the assembly-line side of product development. Deciding when to release which features and bug fixes, and the development/management of the product in general, has always been the real challenge, and a lot of the strategy involved relies on feedback loops that AI cannot speed up. Though at the same time, I do feel like leaders on the business side often scapegoat engineers' speed as an excuse instead of taking responsibility for poor decisions on their end.
I get what you're trying to say, but this is actually a bad picture to defend. Product and engineering should go hand in hand, with one side informing the other. Engineers actually giving a shit about a product will tell product people about possibilities they haven't even considered; product people caring about engineering will not propose utterly stupid things. And I for one can spot when a product is well designed but poorly made, as well as when a product is perfectly crafted yet useless. The sweet spot is both. And even with the speed multiplier of AI, taking pride in the craft and actually being good at it as an engineer makes a night-and-day difference for the final result.
yes, most places I have worked were hobbled by the organizations being completely idiotic.
which is why engineers want to be left alone to code, historically. Better to be left alone than dealing with insane bureaucracy. But even better than that is working with good bureaucracy. Just, once you know it's insane, there's not really anything that you can personally do about it, so you check out and try to hold onto a semblance of sanity in the realm you have control over, which is the code.
> there's not really anything that you can personally do about it
Small companies/startups don't have insane bureaucracy, and they're hiring.
> I think veteran engineers have always known that the real problems with velocity have always been more organizational than technical.
I don't think this comment is fair or grounded. There are plenty of process bottlenecks that are created by developers. Unfortunately I have a hefty share of war stories where a tech lead's inability to draft a coherent and clear design resulted in project delays and systems riddled with accidental complexity required to patch the solution enough to work.
Developers are a part of the process and they are participants of both the good parts and the bad parts. If business requirements are not clear, it's the developer's job to work with product owners to arrive at said clarity.
> Unfortunately I have a hefty share of war stories where a tech lead's inability to draft a coherent and clear design resulted in project delays and systems riddled with accidental complexity required to patch the solution enough to work
This is also an organizational problem (bad hiring/personnel management). If you put an incompetent individual at the helm of a project, then resources (especially time) will be spent horrendously and you will have more problems down the line. That's true for all types of organizations and projects.
It’s part of the problem but AI also can crush this on pure lines of code and functionality alone. It can put out 100,000 lines of somewhat decent code in a day. That usually takes months or years of manual coding for a team.
More lines of code doesn’t help adding more constraints to a system without violating the existing ones.
In fact, it makes it harder.
It’s not just verbose code. I’m talking about 100,000 lines of relatively decent feature code that isn’t bloated.
There is a reason that kLOC and function points were rightly shunned as metrics years ago. The same clown show seems to be resurging with "tokens". There is, in my opinion, no real formula or metric you can define for "good" code or "bad" code. Tickets and ceremonial activities, however, abstract that into an N-ary status value that seems easier to judge by.
And now they're almost forcing us to produce machine-made tech-debt at an industrial scale. The AI craze isn't going to produce the boon some people think it will. And the solution? More AI, unfortunately.
> And the solution? More AI, unfortunately.
I think the solution to using AI in coding is more testing, which unlocks even more AI.
The solution truly is more AI, yes.
> AI craze isn't going to produce the boon some people think it will.
What’s the boon you don’t think it will produce?
No. It's not more AI. The solution is designing and sticking to development process that is more resilient to errors than the one that's currently happening. This isn't a novel idea. Code reviews weren't always part of the process, neither was VCS, nor bug tracker etc.
The way AI is set up today, it's trying to replicate the (hopefully) good existing practices. Possibly faster. The real change comes from inventing better practices (something AI isn't capable of, at least not the kind of AI that's being sold to the programmers today).
What better practices do you mean? Are you saying we just need different more agentic-friendly practices that ensure scaled reliability beyond what we can manually check? If so I totally agree.
AI is 100% capable, fundamentally, of making new processes. Look, it's not like I think Opus 4.7 is all you need, but how can you argue with the fact that adoption since 4.5 has been an inflection point? That's kind of proof that reliability has reached a level where serious usage is possible. That's over a period of months. When you zoom out further, you see this was extremely predictable even a few years ago, despite the absolute hissy fits thrown on HN when CEOs began saying it.
Agentic coding is verifiable, and this implies there are very few practical limits to what it can do. Combine that with insanely active research on tackling the remaining issues (hallucinations, which are not fundamentally unsolvable at a practical level; context rot; continual learning; etc.).
> What better practices do you mean?
I literally listed examples above... Code reviews weren't the norm until some time around 2010-ish. Then programmers realized that reviews help improve the code quality, and, eventually, this became so popular that today virtually everyone does it.
Anyways, I'll give an example from something that I've personally experienced / contributed to, which isn't as massive of a thing as code reviews, but is in the same general category.
Long ago, Git didn't have the --force-with-lease option. Few people used `git rebase` because of that (the only way to push rebased work was with --force, which could destroy someone else's work). At the company I worked at at the time, we extended Git with what was later implemented as --force-with-lease. Our motivation was the need for linear history and some other stricter requirements on the repository history (such as: every commit must compile, retroactive modifications in response to tests added later, etc.).
This is an example of how a process that until then was either prone to accidental loss of a programmer's work or resulted in poorly organized history was improved by inventing a new capability. It's also an example of something AI doesn't do, because, at its core, it's a program that tries to replicate the best existing tools and practices. It won't imagine a new Git feature, because it has no idea what that feature could possibly be; its authors don't know that either.
> opus 4.7 is all you need, but how can you argue with the fact that adoption since 4.5 has been an inflection point?
What did it invent?
Right no I understand what you mean, I asked to be sure and you’ve confirmed my understanding.
I think we’re talking past each other because your comment is like 99% interesting and insightful and also I agree with it completely but there is only one part of your claim that I have an issue with which is
> It won't imagine a new Git feature because it has no idea what it could possibly be because its authors don't know that either.
I left comments in other threads with a lot of detail, but this is a fairly common misconception. It is true in a sort of practical sense today, and I have many of the same experiences you do, but the gist is: this is a world of RL with verifiable rewards. You are not bounded by human ability at all, and that is why we have the adoption, funding, and frothy excitement. It is not simply mimicking human coding. In the early stages it does, because human programming traces are used as a kind of bootstrap to get to an RL phase, which has no such limitation on performance. This is a very well studied field, and it just isn't much of a question of if anymore; now it's not even really a question of when.
> What did it invent?
This is a perpetual question with constantly moving goalposts, so I've given up convincing anyone, but by now it's solving unsolved Erdős problems. Not sure how convincing you find that (that wasn't Opus, though that hardly matters now).
The point I’m trying to make is: we aren’t there yet but it’s a crazy idea to think that isn’t imminent given all of the measurement and observations we have.
Additionally, my point about 4.5 being a turning point is about adoption. You wouldn't see these adoption numbers if we were not accelerating rapidly from, say, 3.x performance along the scaling trend that we've known about for years now.
Code is a liability.
I think it can be easy to look at code as an asset, but fundamentally it is a liability. Some of the "bottlenecks" to new code are in place to make sure that the yield outweighs the increased liability. Agents that produce more code faster are producing more liability faster. Much of the excitement and much of the skepticism about coding agents is about whether the immediate increased productivity (new features) and even immediate yield (new products or new revenue) outweighs the increased long term liabilities. I'd say we won't find out for another 1-3 years, and of course that the answer will differ in different domains.
From this perspective, attempting to build these bottlenecks into the agentic workflow directly makes some sense. Supplying coding agents with additional context that values a coherent project vision and that pushes back against new features or unconstrained processes would be valuable.
Is this what the article is trying to get at? Is this attempting to make some agents essentially take on product management responsibilities, synthesizing as much as possible into a cohesive product vision and reminding the coding agents of that vision as strictly as possible? Should these agents review new proposals and new pull requests for "adherence to the full picture", whether you want to call this "context" or "vision" or something else?
I think these agents might do an exceptionally good job at synthesizing context and presenting a cohesive roadmap that appears, linguistically, to adhere to the team values and vision. But I'm doubtful that they can have the discernment that a quality manager or team can have. Rapidly and convincingly greenlighting a particular roadmap could do more harm than good.
> Code is a liability
You're over-simplifying. Code in and of itself is neither an asset nor a liability. The minimal amount of code needed to solve business needs with no additional complexity is an asset with some maintenance liabilities attached (same as how a farmer's tractor is an asset that needs to be maintained), with depreciation if unmaintained (bitrot). Any code used to build unnecessary complexity is pure liability.
I would term it a Depreciating Asset, like a car or a building. Bitrot is real.
Yes, if code is only a liability then just delete the code and poof liability is gone.
Would it be fair to say that complexity is a liability, and LOC is an (approximate) measure of complexity?
Yes, but: writing code always teaches you something.
I've worked at founder-sized startups and $xxb dollar public companies. I've never read a product spec, a pitch deck, or a PRD that describes a solution that, if implemented in the way described, would solve the problem. Building the thing teaches you how it should behave.
Software is a complex, interactive medium. Iterating in the code, with people who understand the problem and care to see it solved, is the only way I've seen valuable products get created. Meetings and diagrams help, but it's not until you write some working software that you know whether you have something.
From the article:
> Jevons Paradox: when something gets cheaper, you tend to use more of it, not less.
That's a butchering of Jevons paradox. What's stated is not a paradox, but a very natural effect. Obviously usage of something goes up when it gets cheaper.
What Jevons paradox actually describes is the situation where usage of a resource becomes more efficient (which means less of it is needed for a given task), but still the total usage of that resource increases.
> What Jevons paradox actually describes is the situation where usage of a resource becomes more efficient (which means less of it is needed for a given task), but still the total usage of that resource increases.
Why is this stated as a paradox? One simple cause is the given task being performed more than it was before because it is now cheaper (since it uses fewer resources).
In most real-world use of the term, a paradox isn't something that's impossible, it's just something that intuitively seems wrong when you first think about it.
"I got a Prius so now I am spending more money on gas" sounds ridiculous, but it would be an instance of this paradox.
One of the classic examples is highway traffic. You want to prevent traffic jams, so you increase the number of lanes. However, now that there are more lanes, people see less “cost” in driving, leading to even more people driving (e.g. to go on more day trips or as alternative to public transport). This can cause the traffic jams to become even worse.
So, increased capacity can sometimes fail to reduce travel time, which goes against our natural thinking.
> Obviously usage of something goes up when it gets cheaper.
Sure.
But is it not also obvious that when usage of a resource becomes more efficient, the price of that ”usage” becomes cheaper?
So usage goes up obviously because efficiency increases.
It is called a paradox because some people naively think that increasing efficiency is a good way to decrease consumption.
Almost everything that is called a ”paradox” is this obvious.
It’s surprising (not using the term paradox) in situations where demand was seen as inelastic.
An example of probably inelastic demand is the cost of diamonds which has fallen as synthetic diamonds enter the market. But people typically don’t buy more engagement rings than before.
With code it could be different. People might think that the amount of code that needs to be written is fixed, so the ability for a person to write code implies a reduced demand for people who write it.
In reality, bringing the cost down may unlock new use cases, so the number of actual coders might increase.
Should the paradox not be that we PAY more for it? Or, if some process is made more effective, i.e. takes shorter time, we spend more time in that process.
Jevons paradox starts with some resource being used more efficiently.
A classic example is coal. The first steam engines used a ton of coal, but over time more efficient steam engines were created that used far less.
One might think that this caused the global coal usage to go down. But the opposite happened, as the overall cost of doing something with a steam engine went down.
Note that the price of coal itself can remain fixed in this example, so Jevons paradox is not (directly) about a resource changing in price.
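The coal example can be put into toy numbers (all figures invented; the outcome assumes demand is elastic enough):

```python
# Toy Jevons-paradox arithmetic (all numbers invented for illustration).
# Efficiency doubles: each steam-engine task now needs half the coal.
coal_per_task_before = 10.0
coal_per_task_after = 5.0

# Cost per task halves, so more uses of the engine become worthwhile.
# Assume demand is elastic enough that the number of tasks triples.
tasks_before = 100
tasks_after = 300

total_before = coal_per_task_before * tasks_before  # 1000.0
total_after = coal_per_task_after * tasks_after     # 1500.0

# Less coal per task, yet more coal burned overall: the paradox.
assert total_after > total_before
```

If the number of tasks had merely doubled, total coal use would have stayed flat; the paradox only bites when demand grows faster than efficiency.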
If LLMs make code cheaper to produce, then obviously more code will be produced. That's not an instance of Jevons paradox, even though the article claims it is.
You could say that LLMs let us create software with less of the resource that is human software engineers. So one might think we'll need fewer software engineers in the future. If, on the other hand, we end up needing more software engineers, that'll be an instance of Jevons paradox. But the article is not making that claim.
The paradox is usually presented as:
The People: Hey local government! The roads are so packed with cars they are useless. Fix it!
The Government: We hear you and just finished a huge road expansion project. The roads now have 2x the capacity! Enjoy the new fast roads!
The People: The roads are just as slow as before because they are packed with 2X as many cars now!
So, the paradox is that greatly increasing the capacity of the roads led to the roads being just as slow as before. Maybe even slower. This is because there previously were lots of potential uses of the roads that people did not enact because it would not have been worth the hassle. But, now with 2X the capacity, those uses become viable. So, more people find more uses of the roads up until it gets right back to the limit of everyone patience.
Apply this to coding and you can predict: Coding is much faster and easier now. So, why are all my coders still so busy?
Yes. Jevon's paradox is that if we need less X to do a single Y, we end up using more X doing Y.
Anyway, it's a specific observation about a single X, Y pair. It sometimes happens with other things, but anybody claiming it's a universal rule doesn't know what they are talking about.
Unit cost is down but aggregate cost is up.
Yes, I believe you are correct (but imprecise which is why the other commenter disagreed with you).
We pay less per unit, but we pay more in total.
No, we pay less for it. But there's much higher demand so overall use goes up.
Right. That is not a paradox as stated.
The paradox would be:
It is a paradox. The paradox is that increasing the efficiency of resource usage can lead to more resource usage.
If you think that isn't a paradox because you can fairly easily explain why that is the case, you need to go and check what "paradox" means.
Bottleneck for what? More features?
I don't think amount of software is what determines whether a company does well.
I don't think capturing quantity of context is that important either.
Now, quality of context. How well do the humans reason?
Then, attitude. How well do the humans respond to bad situations?
Then, resource management. How well does the company treat people and money?
Finally, luck. How much of the uncontrollables are in our favor?
Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those. At least any time soon.
For business, software applications are tools that facilitate "the thing" that generates money. (We in the software world think that _thing_ is software and software _features_, but outside that world, there's usually a different _thing_.)
The bottleneck for making software applications better at being used by (non-software) businesses is making sure the software does all the software things that actually benefit the business. Save time. Make humans more productive. Reduce human error. Make the business more efficient. Increase profit margins.
All of those things are a bit difficult to predict and quantify. You start with ideas of what might help the business, you maybe design, prototype, trial. Ultimately you build or enhance software applications, and try to measure how well they're making the business better.
In all of this, making sure software is addressing the right problem in the right way, and ultimately making the business better - that's a hard problem! Regardless of how fast and easy it is to make software.
But yes, the speed can really help. You can prototype and trial and improve the feedback loop.
> But yes, the speed can really help. You can prototype and trial and improve the feedback loop.
Based on what I've seen, prototyping has always been easy. You don't even have to build software for the first iteration. For UI stuff you can use a wire-framing tool.
What has happened is that we abandoned the faster iteration methods (design think tanks, quick demos, UX research, ...) and have gone all in on building the first idea that comes along and foisting it on the users. That process is very slow and more often goes wrong.
Hmm not agile or waterfall....
Tsunami?
> Bottleneck for what? More features?
Code changes. Not necessarily features, but also bug fixes, plain old maintenance, and even refactoring to improve testability.
With AI coding assistants, what in the past were considered junior dev tasks are now implemented with a quick prompt and an agent working in the background.
These junior dev tasks are now effortlessly delivered by coding assistants, with barely any human intervention. Backlogs are cleared faster than new items are added. And new items are added more and more because capacity to clear them is no longer an issue. The challenge is now keeping up with the volume of changes. I see this first-hand at my org.
> Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those. At least any time soon.
Just because you can think of other bottlenecks that doesn't mean that generating code was not a bottleneck, and is not the bottleneck today. The mere notion of a backlog demonstrates that it is a bottleneck.
> Backlogs are cleared faster than new items are added
Totally depends on what kind of product and codebase.
Last time I checked, the number of open issues in Claude Code repo has increased.
And I have seen tons of tickets that are open for years. Not because it's technically hard or anything. An intern can do that. Those tickets are not closed because nobody wants to deal with what comes after it.
I was not merely stating other bottlenecks. I'm saying they're more important bottlenecks.
They can't all be equally important bottlenecks; a bottleneck is by definition a singular component or sub-system most-limiting to the system's output.
What are we trying to output from our businesses? Code?
What is this magical context floating around every business that will unlock AI agents to produce ... what?
[Edit] I apologize for my tone. You're right, dealing with the speed of code generation is an unprecedented problem. I was making the argument that it's not the most important to the business and that rate of code change is very rarely the top concern. But that does not mean it's not the most important problem for someone. For the developers dealing with the system, it is.
> Software is what’s left over after a group of humans finishes negotiating with each other about what the system should do.
Love that.
I agree, in particular, about the context. That’s where long-retention, experienced, teams pay off.
I managed one of those for decades. When they finally rolled up our department, the engineer with the least seniority had ten years.
When a team is together for that long, the communication overhead drops to an almost negligible level.
That’s what I find most upsetting about the current culture of mayfly-lifespan employment tenures.
Nowadays, I work mostly alone. I’m highly productive, but my scope is really limited.
I miss being on a good team.
What kind of projects are people working on, where understanding what features the management wants is the only difficult part and the rest can just be "typed out" (or, today, offloaded to an LLM)? If that's what you do, then I'm not surprised so many people on HN think LLMs can replace them.
Any discussion related to this topic always seems to assume everyone uses code the same way and for the same function, and then forces the rest of the world through that lens.
So here we walk around the circle one more time again, voicing our anxieties, talking past each other, waiting for the next opportunity for commentary to come in half an hour.
I've found that the more senior you get, the more fungible the code seems, and the more important and difficult the process seems.
Isn't that just ascribing difficulty to the parts of the process you're closer to?
This is like 80% of CRUD apps. Sometimes they have a few interesting problems but not like the upper 20%. Most of them are hot garbage in terms of code quality because of the offshoring and layoff cycles.
Also even the more complex things will have a lot of simpler code in them.
Uno reverse. What kind of limited project experience would lead anyone to think that there isn't an enormous continuum between code difficulty and organizational problems in the space of software development?
My buddy made a music software, with synths, effects, visualizers, etc. with Claude and Codex.
For him, the bottleneck very much was the code. He still doesn't know any programming.
I want to say that his ability here has been accelerated by orders of magnitude, but without AI he couldn't have done it at all, so it's actually a divide by zero situation.
(Yeah, he could have just learned programming... and audio engineering... and the specifics of JavaScript ... and the web audio API, and the DOM, and WebGL, and his demo would be ready in like, 2030.)
> Software is what’s left over after a group of humans finishes negotiating with each other about what the system should do.
The issue is that sometimes you don't know what the system should do until you build it.
A design is a hypothesis. Most of them are wrong, in subtle or not so subtle ways.
(Also, as a separate issue, having a group in the first place increasingly adds negative value. If it was ever a good idea to design by committee... it's increasingly expensive to do so, in opportunity cost.)
>The goal was to test our structured-generation algorithms and their open-source counterparts, replacing the naive “does it accept this string?” with something closer to the real problem: “does it produce the right token distribution?” The experiment kept coming up in conversation, then returning to the roadmap. Last month, I spent half an hour explaining the method to Codex. A few hours later, it had produced a working first version. That’s all it took.
Proving that the bottleneck, was, in fact, the code. It's just that the AI wrote it now.
The person who thought "the bottleneck wasn't the code" already had the goal discussed and coherent in their mind.
Code as bottleneck doesn't have to mean "I wanted this feature but it took me many months to finally code it". It is also "I wanted this feature for 2 years, but the friction in sitting down to put it in code and spending 5-10 days on it, etc, put me off".
If the code wasn't the bottleneck, they could have just sat down and written it themselves. But they didn't want to go through the effort and time of coding it themselves, knowing it would take far longer than with the LLM.
(And even when you don't have a clear final spec in mind, the exploratory code+check+discard+retry-new-design, is also faster with an LLM, precisely because the "code" part is).
In other words, the code was the bottleneck.
The post appears AI-generated itself, just with instructions to avoid obvious constructions, which still makes for tedious reading.
I think the argument here misses critical nuance; there is a difference between code used to implement a product and when code _is_ the product.
It goes without saying that agents have little to no product sense in any discipline. If you're building a game or an app or a business, your creative input still matters heavily! And the same is true for code; if the software is your product, then absolutely the context missed by skipping the writing process will degrade your output.
That doesn't mean that writing code wasn't a bottleneck even for creating well structured software projects. Being able to try multiple approaches (which would have previously been prohibitively expensive) can in many instances provide something a room of bickering humans never would have reached.
> difference between code used to implement a product and when code _is_ the product
Care to elaborate? I don't understand the difference unless you mean code that _is_ the product, being OSS code or code for license.
I think what I'm trying to get at is that there's a lot of code out there that really just needs to work. It doesn't need to scale to millions of users, it doesn't need to be abstract-able and useful to use cases we don't even know about yet, just needs to get an idea off the ground. That code is not the product. In such a case writing the code very much is a bottleneck.
If you're writing OSS code or software projects expected to be used by others that may have constraints like that, then by all means the code that gets output matters itself. But even still I'd argue that the cost of writing code manually to get there is still a bottleneck.
In my mind this is what prototyping is for. Just get it working quickly and see if the concept has legs. But be prepared to completely re-write it because the "just make it work" mindset will make it more difficult to change and improve upon in the future.
But when you factor in today's favorite business model of "make it shitty", perhaps this matters very little.
Code you ship vs tooling you use to build the code.
So, the product vs everything that is needed on the way, but isn’t the core.
CI/CD tooling, template population…. Things you write a use once/use few script for.
I typically end up with a library of tools to deal with repetitive finicky tasks.
systems vs application code
An awful lot of problems can in fact be solved by 'more code' in fact. People seem to straw man this in terms of product feature surface.
A lot of places skip creation and maintenance of decent observability - that's code.
We can now easily use advanced, code heavy testing techniques like property testing - code.
We can create environmental simulations to speed up and improve integration testing - code.
We can lift up internal abstraction levels, replace boiler plate with frameworks, DSLs - code.
All listed are in service of the product feature surface.
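As a sketch of what one of those code-heavy testing techniques looks like, here is a minimal hand-rolled property test in Python (an invented example; real projects would use a library like Hypothesis, which also shrinks failing inputs to minimal counterexamples):

```python
import random

def reverse_twice_is_identity(xs):
    """Property: reversing a list twice yields the original list."""
    return list(reversed(list(reversed(xs)))) == xs

def property_test(prop, trials=200, seed=42):
    """Check a property against many randomly generated integer lists,
    raising AssertionError with the offending input on the first failure."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        assert prop(xs), f"property failed for input {xs}"

property_test(reverse_twice_is_identity)  # passes silently
```

The design point: instead of hand-picking a few examples, you state an invariant and let generated inputs hunt for violations, which is exactly the kind of boilerplate-heavy safety net that is cheap to add when code is cheap to write.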
[dead]
It typically is the code that’s the bottleneck, but not writing the code. My career is littered with numerous delays from slow applications.
I am stuck with an editor based on Eclipse. It's slow and periodically pauses or crashes. I am stuck with build jobs that take 15-20 minutes. I am often stuck with web apps that take forever to do a task that should take 50ms max.
The list can go on and on. Every delay is a distraction that shatters my concentration. I still write code at work, but I am in management now, with dozens of other people and administrative distractions. When the software is slow it becomes my lowest priority. I don't care who that impacts, because if it really mattered we wouldn't be held hostage by all this slow syrup of software pulling each of us under.
Which editor and why eclipse?
Because our software platform requires use of a proprietary editor built over eclipse. I use other editors when I can deviate from the official one.
Doesn't add up. I used to spend more than half my time coding, as did others. Besides the obvious cost, that coding took wall-time which meant talks had to wait. Sure a poor collaborator will jam things up a ton, but a team of at least ok collaborators used to be bottlenecked on code.
(not related to the article)
The flashing red dot on the web page is very annoying. Is there some design reason for that?
edit: I meant the <svg> inside `trail-map-container`
FWIW I see the red dot only at the top of the page, flashing slowly. It does not annoy me, in fact I only discovered it because of your comment.
On desktop, it's fixed to the left of the story, pulsing along the entire time you're trying to read. If you are like me, it will annoy you. I had to switch to reader mode.
Turning off Javascript made the dot go away.
From what I can tell, it marks the article you're on. There are other light grey dots with other article names in it.
Yeah. This one shows a graph:
https://www.thetypicalset.com/blog/grammar-parser-maintenanc...
Solid red dots are articles you've visited.
I think he's going for a metaphor about groups versus individuals. There are other gray dots around the red dot. Software is a group effort, but made of individuals. Something, Something.
One of the bottlenecks has always been the code. That code has been stolen and is being laundered while companies rely on mediocre engineers who have never written anything of value to promote the burglary tools and call the process "writing software".
It is the same as putting an Einstein paper on a photocopier and calling the process "writing a paper".
I agree with the point of the article though: code generation does not really work, the results are bloated and often wrong and people already had more features that they could absorb in 2020.
The solution to this mess is to have 18 year olds boycott studying computer science altogether, since the industry (and mediocre fellow "engineers") will treat them like human garbage.
Personal anecdote. When I started as a wide-eyed university co-op, I was surrounded by a team of aging mainframe developers who had been coding since the 1960s. Their backgrounds ranged from working their way up from the mailroom to astrophysics to masters degrees in computer science. What struck me was how the entire team treated coding as a form of art. Of course, it had to be functionally correct and maintainable, but you could glance at a snippet of hyper-efficient PL/1 and instantly tell that 'Larry' or 'Trudy' wrote that routine - each programmer had a unique style, along with just the right amount of comments to make you laugh and guide you through difficult bits.
Most of the team has since passed away, and their code has been long replaced by modern systems, but what stuck with me is that great code is a form of art - where your individual style, insights and personality can be reflected in code for the better. The systems were efficient, responsive, extensible, and a joy to work on, since the team took a great deal of pride in their work. It really is akin to being affected by a clever and insightful work of art. A decade later, and programming became something to "make money" at, which flooded the market with many people who never really had a deep love of programming, and I guess that's ok, but something has definitely been lost along the way.
To your point, it may not be such a bad thing if people started boycotting computer science and it again became more of a calling than purely an avenue to employment.
My status after reading your comment: https://imgflip.com/i/7yki7x
Agentic tools are "burglary tools" -> Younger folks should not study CS?
I think the solution is to fix society so that we value quality over quantity, self betterment over getting rich, and making society better for every one rather than worshipping billionaires.
"What slows down a team where agents do the implementation is the production of specifications precise enough for an agent to pick up and run. Roadmap, written down. Acceptance criteria, written down. The “what we actually want” forced into precision, be it via a test suite, a ticket, or a written design."
This is merely speed of development, not a company's velocity toward higher value. Many PMs are confidently writing these up in elaborate detail (using the same AI tools) without a clear, deep understanding of the user problems, of why the requirements will be adopted by their target users, or even of who the target users really are.
So yes, this will lead to faster end-to-end execution. But whether the product gets used or sits unused will depend on things beyond the above.
> Producing easily consumable context is precisely the thing humans don’t like to do.
I don't think this sentence speaks for me. This is the sort of thing I love to do.
Can someone explain the title? I think the author illustrates that the code was the bottleneck and it has shifted to context. What am I missing?
I think he is saying, and I hope he is saying, that software development has never been about writing software; it has been about communicating with people over what the software should be and needs to be, and the entire point all along has been to achieve better collaboration with people and, implicitly, their collective goal. He spends a good amount of time on how slow writing software has been in the past, which allowed the industry to over-focus on the writing itself. It has been pointed out a number of times, by milestone books our industry embraced, that the communication aspect of why and what we write is the most important part. Now that is finally being forced upon us, because writing code is automated, and all that is left is the specification and the communication with humans over what and why.
The author argues that writing code cannot be a bottleneck because work always fills up the allotted time. Developer teams should instead focus on doing less and writing better specifications.
The error in the reasoning is that while you can increase your resourcing tenfold and gain nothing in return, the inverse is not necessarily true.
I think the point they're trying to make is that context known by humans and the requirements they agree on, is 'the' bottleneck, rather than implementation
It shares some ideas with Peter Naur "Programming as Theory Building".
Quote from the article: "To quote Michael Polanyi: we know more than we can tell. Some load-bearing context exists precisely because it was never put into words, and writing it down would change what it is."
Imagine how much knowledge exists only in the heads of software engineers, with code being just a functioning footprint of that "Theory". I know an SRE at a FAANG who told me that a multi-billion-dollar system is supported by tribal knowledge within their group, and for years, even pre-AI, it was a protection against automation.
I can type faster than I can think of the correct things to type. My experience may be non-standard but I think for most serious software folks the code has never been the bottleneck.
Sometimes code is definitely the bottleneck. For example some organizations have a very bureaucratic process guarding which projects get access to a development team and when. That's not needed if implementation is now faster/cheaper.
I'm also skeptical that development velocity is so separate from all those other things (context, stakeholder alignment, etc.). It's much easier to get actionable feedback when you have a prototype.
I think this is the wrong conclusion.
Whether code is the bottleneck likely depends on the organization. In mine, code was the bottleneck; AI has pushed things so that validation is now the bottleneck. If the devs are "middlemen" who can't spec things, then I think whoever can spec things is likely the bottleneck.
Ask yourself what monks did when scribes were replaced by the printing press.
If I was a scribe at the time I’d be thrilled because of all that extra time available to work on beer productivity metrics.
The .txt website fails to load if you won't enable WebGL on your browser. Incredible
Absolutely matching the gut feel I've had lately. We've always been pretty good at producing bad code very fast. All of the other stuff - dependency management, learning what's valuable, ownership & boundaries, context switching costs, etc... have always been the bottlenecks and it's just more obvious now.
I'm finding counterexamples of this constantly now that I can have an agent rewrite large sections of my codebase that have been sorely needing it.
- Moving to a newer and more modern test library
- Refactoring my data layer so it's easier to read, based on years of organic changes that need to be baked in and simplified
- Porting some functionality to another language to vastly improve performance
I agree with the overall sentiment, but having an agent at my fingertips who can really crank out large-scale, involved code changes is unclogging quite a few backburnered todos lately for me.
The bottleneck for my personal projects was the code. So many have become unblocked because of LLMs
> Agents that consume context need agents that produce it. Once that loop is running, the organization has a written substrate it would never have produced on its own.
I'm not sure a business is helped by documentation that is distilled from (hopefully present) PR descriptions and comments in JIRA, by agents. Or wherever this context is supposed to be reverse-engineered from.
[flagged]
If I read “load-bearing” or “blast radius” one more time…
Don’t forget about the “smoking gun”
Totally agree, we wrote our own piece similar to this: https://productnow.ai/blogs/teams-that-coordinate
I really think as code becomes cheap, misalignment between people, teams, and organizations is going to hurt a lot more, especially when everyone is trying to move at breakneck speed.
I also think a big piece of this is human attention and inertia. Aka, why bother doing the hard work to coordinate with others when you can just ship whatever you’re thinking. I think whichever organizations can figure out the human and cultural aspects to this will do phenomenally
“Something went wrong with the editor” when loading page
same
> They are waiting on the next well-formed spec
Is this actually true? Maybe in a widget factory. I think it’s an anti-pattern for the new world.
When you look at places that are shipping at an insane pace (like Anthropic), the secret is not accelerating the writing down of a roadmap and a well-groomed backlog; it's empowering smart individuals to run their own end-to-end product improvement loops.
You can slightly reframe the OP by saying “the bottleneck is product ideas”, but “well formed backlog items” IMO frames it as more structured and hierarchical than it should be.
The problem with this is that everybody thinks they have better ideas than they do. And engineers are probably the worst offenders in that they're smart enough to make a case that deludes themselves.
The insane billion dollar companies ship straight to production because they have PMF so anything and everything gets signal.
The same happened with Facebook and Google. And it was always cautionary advice to mimic these giants. It's a bad idea for all the rest of us.
The bottleneck has always been the human element. I too used to be one of those up-my-own-ass engineers who thought the most important part of my work was the machine, and it wasn’t until I began actually listening to others and their problems that I realized my function was far more than mere technology scaffolding.
That said, I’m also increasingly aware that puts me in a minority group. I got to see this first hand in a recent org where their codebase and product design hadn’t meaningfully evolved in nearly thirty years. NAT was a “game changer” to them - and one they refused to implement without tons of extraneous testing they would deliberately undermine, stall, and sabotage so they didn’t have to modernize their code accordingly. It was easier for the developers and stakeholders to preserve their own status quo rather than entertain alternatives, to the point of open hostility (name calling, insults, screaming, and a few threats) to anyone suggesting otherwise.
The human element has always been, and always will be the bottleneck. Stakeholders who don’t contribute updated or accurate datasets to automation systems, or who hold back development to preserve personal status and power, or who otherwise gum up the works on purpose to game their own careers.
That’s not to make the argument of “replace all humans with machines”, mind you. Just stating that an organization that incentivizes bad behavior will be slowed down versus ones that incentivize collaborative outcomes, and AI is just going to turbocharge that by removing the friction associated with code creation and shifting that elsewhere.
> name calling, insults, screaming
Never experienced this at a job in 30+ years, and that includes my first jobs in fast food. If you experience this at work, find another job. This isn't normal. It's extremely dysfunctional in fact.
I was already looking, but they ultimately made the decision for me in January with a RIF.
Thing is, this job market is hell. There are folks who have to choose between the abuse or making rent, which is why we need stronger incentives for organizations to discipline said abuse rather than let it permeate because existing penalties lack teeth.
The company website linked in the article is broken https://www.dottxt.ai/ on (mobile and desktop) Safari. Looks like your cert doesn’t cover the www subdomain.
As software engineers, we should collectively realize that this is all cope. Every article or comment about how AI will never be smart enough, etc., will only be true until it's not. One of our main valuable skill sets is now partially automated. Some of us are completely obsolete, and it's coming for the specialists and more experienced ones within a decade tops. You're not going to convince anyone that "um actually we're better because we bikeshed more".
Stuff like this is ridiculous and comes off as frantically trying to save your ass. It's pretty obvious at this point that we will just throw more matmuls at it until it can do this or something equivalent.
> Agents cannot do osmosis. They do not get context by being in the room, by half-hearing the planning conversation, or by carrying the memory of the last incident.
I have been thinking about this a lot lately. How do you capture key factors succinctly, and even harder, keep it succinct as it evolves?
The shrinking that property based testing does when it finds an issue is kind of what we need for specs/context.
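That shrinking step can be sketched in a few lines. This greedy delete-one-element loop (names hypothetical) is a toy version of what property-testing libraries do automatically to minimize a failing input:

```python
def shrink(failing, prop):
    """Greedily shorten a failing input list while the property still fails:
    a naive version of the shrinking that property-testing libraries perform."""
    current = failing
    changed = True
    while changed:
        changed = False
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            if not prop(candidate):  # still a counterexample? keep the smaller one
                current = candidate
                changed = True
                break
    return current

# Property (false in general): "no list contains a negative number".
prop = lambda xs: all(x >= 0 for x in xs)
print(shrink([3, 1, -4, 1, 5, -9], prop))  # -> [-9]
```

The appeal for specs is the same: instead of a sprawling failure report, you end up with the minimal input, or requirement, that still exhibits the problem.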
So managers are overwhelmed because the code is now happening a lot faster? It sounds like the immediate bottleneck really was the code, at least frequently. Now it seems the bottleneck is managerial.
> Real programmers don’t document their programs.
Probably true, but I, for one, have always liked documenting how the code I've written should be used, whether programmers calling APIs I've created, or end-users actually making use of a program's executable. I find writing the docs just as interesting and creative as writing code.
> Real programmers don’t document their programs.
This is kind of a straw man. I suspect people say that tongue in cheek.
Good programmers try to make their code clear and easy to understand. They add comments to clarify, especially their whys.
The problem I have with documentation is that you end up with mountains of documents over time about a lot of things that are no longer true and many times contradictory. The only solution I have seen is making sure that documents have owners that update them periodically.
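One low-tech way to operationalize that ownership is a freshness check in CI. A hypothetical sketch (the threshold, file extensions, and `stale_docs` name are all assumptions, not an established tool) that flags docs nobody has touched in a while:

```python
import os
import time

STALE_DAYS = 180  # hypothetical freshness threshold

def stale_docs(root, stale_days=STALE_DAYS):
    """Yield documentation files whose last modification time is older
    than stale_days, so an owner can review, update, or delete them."""
    cutoff = time.time() - stale_days * 86400
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith((".md", ".rst")):
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) < cutoff:
                    yield path
```

It doesn't verify the docs are *true*, of course; it only forces the periodic owner review the parent comment describes.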
Velocity, velocity, velocity! Ah yes, velocity always seems to matter except to those that don’t need to worry about it.
I can see the division here already, and the cogs are afraid. As a dev of 25+ years, currently working for a small company who came from a global company, I see both sides. I'm very excited about AI and love to see my projects come to life so much faster. I still love the craft of code, but its always been about the product for me.
the bottleneck was never the software, that is the ship we ride,
people are part of a team focused on a goal; they work together because they believe the ship is worth riding and will reach its destination,
the ship should carry food people want,
team decides what food will be consumed,
captain tries first the food,
if food is good and people want it, people buy more
The paper hits the nail right on the head, but it misses the mark on the next constraint: how to decide what to build.
In the old days when writing code took up a lot of resources, the constraint was self-correcting since being off in your implementation was obvious enough that the error could be easily seen after three months of work on the wrong feature. Today, you could spend five wrong efforts in the same amount of time that it used to take you to implement one wrong effort.
[flagged]
Everything in life revolves around people, and even more so today
the tediousness of keeping documentation up to date and the natural tendency towards small attention spans has always come up as a tax on organizational efficiency: complicated org structures, legibility exercises, communication tollgates etc. there is real value in reducing the friction in the former so that the latter becomes less of a burden.
at the same time, context poisoning is a real cognitive problem for humans too and I can't tell you the number of times I've seen irrelevant details become a drag on execution. my fear is that having too much context will only cause bikeshedding and a revisiting of prior decisions.
frankly, our organizational structures were already pretty good at creating mechanisms for eliciting the right implicit context at large scales. it is possible that we're just going to come up with the same mechanisms from first principles...
I swear AI has made this entire industry crazy.
Here's Robert Martin saying non-determinism in AI is ok:
https://x.com/i/status/2044440457422549407
Who wants a non-deterministic banking app?
See also https://wesmckinney.com/blog/mythical-agent-month/
For me it was. Solo entrepreneurs are the ones who profit the most from AI assisted development.
Or startups, where coding was always a bottleneck because it was very expensive to hire SWEs, unlike big corps, which would often throw SWEs at a problem.
If that's true, I am sure some C-suite manager knows this already. Assuming management knows what they're doing; after all, they're getting paid for this. The time when engineers try to educate the people above them should be over. Management gets paid for the big decisions. If they tank the company, so be it. I no longer care.
The bottleneck was ALWAYS the code, which is why everything was built around it.
This is the key line right here:
> Negotiating, agreeing, communicating the shared picture of what we are building has become the work. And it’s just as hard as it was.
But if software (via code) is what we ultimately produce and sell, how did we get here? The main reason is the following lemma:
Lemma A: "The loss of fidelity of what can fit in any one person's head scales superlinearly (exponentially?) as the scope of work scales up." Or more colloquially: "It is impossible to fit a large scope of work in any one person's head." This is largely because any non-trivial task is a fractal of smaller dependencies.
The chain of logic to today's situation is then obvious:
1. Writing code requires humans who are slow and expensive.
2. To do large things we need large groups of humans.
3. As the number of humans grows (like beyond 5? 10?) it becomes impossible to keep them aligned, largely because Lemma A.
4. We need to coordinate these humans, so: enter managers!
5. But even a manager can't manage too many people and coordinate with all other managers because, again, Lemma A. Enter hierarchy!
6. As the size of the organization grows, so does the coordination overhead (exponentially, if Google AI overview is to be believed) until, as that quote surmises, the majority of the work is just that.
7. Coordination costs (or "Conway Overhead" as I call them) are very well understood in the literature, but this also brings in undesirable dynamics like bureaucracy, politics, organizational metrics (also due to Lemma A, but now triggering Goodhart's law!) and eventually territorial disputes and empire-building. Lots of friction and subtle mis-alignments.
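For what it's worth, the textbook quantification of that coordination overhead is Brooks' pairwise communication channel count, which grows quadratically (not literally exponentially). A quick check of how fast it blows up for some illustrative team sizes:

```python
def channels(n):
    """Pairwise communication paths in a group of n people: n choose 2."""
    return n * (n - 1) // 2

for n in (5, 10, 50, 625):
    print(n, channels(n))
# 5 -> 10, 10 -> 45, 50 -> 1225, 625 -> 195000
```

A team of 5 has 10 paths to keep aligned; an org of 625 has 195,000, which is why hierarchy exists at all: it prunes most of those paths at the cost of the dynamics listed in point 7.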
As you can see the overhead scales superlinearly with the number of leaf workers added. And for the same reason, once the leaf workers are decimated because one worker can now do the work of a whole team, the entire organizational overhead above that is gone, which is also a superlinear change! Assuming a conservative 2:1 reduction in ICs and a 1:5 manager-to-report ratio, a simplistic hierarchy that was:
1 CEO -> 5 VPs -> 25 Dirs -> 125 Managers -> 625 ICs
now becomes something like:
1 CEO -> 12 SVPs -> 60 Sr. Managers -> 310 Sr. ICs.
Not only did that eliminate 300 ICs (mostly junior I suspect) it took out 60 managers and removed an entire layer of Directors from the hierarchy! Worse, the leaf-layer will probably get decimated 5:1 not 2:1, and this will also eliminate coordination-specific roles like Program Managers. The rest of the hierarchy is much fewer but mostly more experienced (or politically savvy) people. They will be paid more, but not superlinearly more, of course, what do you think this is, socialism?
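The pyramid arithmetic above can be sketched directly. A hypothetical model (the fan-out of 5 is the comment's assumption, and `org_headcount` is an illustrative name, not a standard formula):

```python
def org_headcount(ics, span=5):
    """Total headcount of a management pyramid over `ics` leaf workers,
    where each manager has `span` direct reports."""
    total, layer = ics, ics
    while layer > 1:
        layer = -(-layer // span)  # ceil division: managers needed for this layer
        total += layer
    return total

print(org_headcount(625))  # 625 + 125 + 25 + 5 + 1 = 781
print(org_headcount(310))  # the thinned org: 310 + 62 + 13 + 3 + 1 = 389
```

In this simple model, roughly halving the leaf layer roughly halves every management layer above it too: cutting ICs cuts the whole pyramid, managers included, not just the leaves.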
It's very much a pyramid scheme of cards built on that one bottleneck. And this bottleneck applies for pretty much all knowledge work. Once that bottleneck opens up, everything collapses. This is why I fear that the coming job changes are going to be much more disruptive than people realize, something I'm extra concerned about as a parent of high-schoolers.
It seems like so many developers know this, yet here we are. SV pushing this AI slop economy. More code! Faster! Less testing! Less understanding! It's what we NEED!
> What may save us it that agents are unreasonably good at reading exhaustively. An agent will read every PR comment, every closed issue, every commit message, every stale design doc ...
> Not just “this module exists,” but “this module is weird because the migration had to preserve old behavior,” or “this benchmark matters because a previous optimization silently changed the distribution.”
The thesis here is that an LLM will document code better than a human (although based on human artifacts), since churning through huge quantities of text is what they are good at.
A few thoughts:
1) Yes, an LLM may be able to pull comments out of commits and PR comments and put them back in the code where they belong, but I question how often a developer too lazy to put a vital comment in the code would put it in a commit message instead!
2) "The truth is in the code" has always been true, and will always remain true. If the comments differ from the code, the code defines the truth. Pulling comments from stale external documentation and putting them in the code does more harm than good.
3) Comments that can be auto-generated from the code don't add much value (lda #1; add one to the accumulator).
4) Comments about the purpose or motivation of the code, distinct from 3), such as the "we had to preserve backwards compatibility" example, or "this code does this non-obvious tricky thing because ...", are where the value is, but the LLM is highly unlikely to be able to discern any unwritten motivation by itself. If the human developer left a comment somewhere then great (assuming it is still relevant)
Most of the discussion we see about LLM coding is how fast it can churn out thousands of LOC on a greenfield project, or how good they can be at finding bugs, but neither of these are very relevant to the main job of developers which is maintaining and extending existing codebases. It would be lovely if most projects were greenfield, but they are not.
In any large project that has been maintained over a few years or more, there will inevitably be an ever growing accumulation of bug fixes and patches for specific issues that have been discovered in production, likely poorly documented and out of sync with any original documentation that may have existed (which anyway tends to be more idealistic and architectural in nature, not capturing these types of post-deployment detail and special cases).
The natural tendency of an LLM is to want to rewrite code to match the statistics of what it was trained on, and they need to be reined in via prompting to resist this and not touch more code than is minimally needed for what is being asked. Of course asking an LLM to do something is a bit like asking a dog to do something - sometimes it will, and sometimes it won't. I expect over the next few years we'll be experiencing, and reading about, more and more cases where LLMs have introduced bugs and regressions into mature code bases because of this - rewriting code that should have been left alone. The general rule is that if you are tempted to rewrite something you better first understand why it was there, coded the way it is, in the first place.
I can't help but compare the current state of "AI" (LLMs) to the early days of things like computer speech recognition or language translation when they were considered amazing, and everyone was gushing about them, but at the end of the day the accuracy still wasn't good enough to make them very useful - that would take another 10-20 years.
Another historical lesson/perspective would be expert systems which at the time were considered as AI and the future of machine intelligence (the Japanese "5th generation systems" were going to take over the world, CYC promised to offer human level intelligence), but in retrospect were far less important. It won't be until we move on from LLMs to something more brain-like, deserving to be called AGI, that LLMs will be put in their historical perspective.
At the moment DeepMind seems to be the only one of the big labs admitting/recognizing that scaling LLMs isn't going to achieve AGI and that "a few more transformer-level breakthroughs" are needed. Hassabis has however talked about LLMs (GPTs) still being a part of what they are envisaging, which one could either regard as a pragmatic stepping stone to real AGI, or perhaps that they are not being ambitious enough - building something that still needs to be spoon-fed language rather than being capable of learning it from scratch.
Even if writing new code is not the problem or the bottleneck anymore, a major blocker is one CoreService.java file someone wrote 10 years ago that is keeping the whole system glued together. Only they know how it works, and how to keep it working. Parroting my own words from a while back, inferred intent is not the same as initial intent. This is just reverse engineering except somewhat automated
It also bakes in the LLM quality at the time the documentation was generated, into the documentation. It potentially worsens the performance of future LLMs if they ingest the documentation produced by older LLMs. It’s not clear why documentation wouldn’t instead be generated on demand, using the newest SOTA LLM.
Before entering the workforce, we're taught algorithms and data structures, told they are the bottleneck against wasteful resource use, and shown how to apply them; only to learn from hard stories that the actual bottleneck is always the people, the H-factor, except this time H stands for human.
Insane amounts of bureaucracy and paperwork, and missed deadlines pushing us to write shit code, so the quick-and-dirty solutions were never replaced.
Algorithms and data structures therefore mostly help you utilize the machine more efficiently, but they don't have any meaningful impact on the social aspect of it. That's a hard lesson I had to learn from my two previous jobs, and now I'm considering starting my own small business just to make enough of a living to survive.
But now my ADHD has kicked in and I'm still lazy, and I have so many concerns: whether the market validation is good enough, how to deal with it if I break customers' stuff, how to gain (and hopefully never have to regain) trust if anything bad happens, what if I want to go on vacation and the server suddenly breaks with a code zero (the highest level of alert I termed internally: alertmanager flashing everything red, network storage down, corruption everywhere) during a trip to the Bahamas.
I'm still on the fence about whether to really do this or not, but the job market is filled with ghost jobs that aren't worth my time either; I'm basically "dead locked" right now and have to make a decision quickly.
Either choice is fucked for me. As I started to notice after going to work, despite having some really interesting ideas in tech, I'm not a charismatic person, so I can't really bring those ideas to fruition; no one wants to listen to me and implement them together. So I'm pretty sure it's impossible for me to be a great leader (tech lead, probably, but CEO-level leadership, coordinating and steering the grand scheme of things, nah, I pretty much can't).
Now the problem is, even if you're pretty sure to get fucked either way, you should choose the option that inflicts the minimum pain. So far having my own business seems like the less painful way to go broke, and I'm preparing to sell off some of my stuff to take a last dip into my fortunes and have fun. We'll see how it looks. Bankruptcy is a nothingburger in this modern society, perhaps.
Now you see how the bottleneck can't even be the code anymore; it goes beyond code while keeping the same core template: I don't even have to code to repeat the same "quick and dirty" kind of mindset in another domain, in another instance. That's something no LLM, heck not even AGI, can solve: decision-making in situations with limited time and resources, whether personal, organizational, or even structural.
This is very much not going to be solvable by a bunch of lines and statements and expressions; it really needs some time to dig in and compromise. Pick your kool-aid and drink it.
[dead]
[flagged]
[flagged]
[flagged]
[flagged]
[dead]
I cringe every time I read the word "load-bearing" in an article.
It instantly made me think this was either written by AI or heavily edited, and I hadn't gotten that sense until the moment I saw load-bearing.
I hate that em dashes have been killed by AI as well. Over a century and a half ago Charles Dickens was using them in his novels, and today someone will read his books and in ignorance actually think AI wrote them.
I don’t mind em dashes, I use them myself. But load-bearing context, that’s just really bad writing.