The article seems to equate QA with testing, which is a short-sighted view. QA also includes things like standards, and importantly, design and code reviews, which are actually the best way to improve quality.
A good QA team will be the definitive experts on a piece of software. They have more complete and in-depth experience than sales, engineering, or anybody else. A single software engineer might know their specific piece, but your QA tester will know everything.
100% and I’m a software developer and have been for ~30 years. Good QA people know how to find regressions and bugs _that you didn’t think about_, which is the whole reason why it shouldn’t be under “engineering” and why it should exist. One of the QA people I work with currently is one of my favorite people. They don’t always make me happy (in the moment) with their bugs or with how they decide to break the software, but in the end it makes for a better, more resilient product.
Agreed. QA specialists are there to think about what the engineer didn't think about. Unless the engineer is incompetent or the organization is broken, the engineer has already written tests for everything they could think of, but they can't think of everything.
More importantly, it is almost impossible for engineers to be as well incentivized to spend extra time exploring edge cases in something they already believe to work as they are to ship a feature on time.
Like everything else though, its contextual. Complexity of domain, surface area and age of product, depth of experience on team and consequences of failure are all so variable that there cannot be only one answer.
I have done it both ways for many years. I have worked on teams where QA is a frustrating nuisance, and teams where they were critical to success. I have worked on teams that did pretty well without them, and those were probably the highest-throughput, most productive teams, because the engineers were forced to own all the consequences: every bug they shipped was a production issue they were immediately forced to track down and resolve.
But those were very small teams, and eventually I was the only founding engineer left on the team and far too many mistakes by other people made it to my desk because I was the only person who could find them in review or track them down quickly in production. That was when I started hiring QA people.
I've almost never worked on a project where there was the right number of QAs doing the right thing.
Usually there either aren't any, in which case bugs get missed, or there are 5 very cheap ones running mindless scripts, standing in for the devs' inability or unwillingness to write decent automated tests, who don't catch the really deep, thorny stuff.
Interesting username
i know right hahah
> More importantly, it is almost impossible for engineers to be as well incentivized to spend extra time exploring edge cases in something they already believe to work as they are to ship a feature on time.
Personal liability and professional insurance works for all the actual “professions” in the US, to some extent, right?
It might be time to start the considerations for professional licensing for platform scale or commercially published software.
More like certified products. A new ISO standard may require professional liability for software products, which big consumers will adopt as a requirement, pulling the industry into a certification loop because insurers will ask for it. This will obviously put a high entry barrier on many product categories, slowing down innovation.
Yes, but slowing down to avoid hazards is sometimes important.
Medical devices and such are the only places I’d expect to see the need for certified products. By extension, in the new era, we really ought only to expect certified software where we expect a duty of care from the software system (or any other assigned duty).
In medical device development, existing quality controls are already working well, right?
My point exactly, embedded devices are the closest software gets to actually being built by licensed engineers. The expectation can often be that you are an electrical engineer by training, where licensure is a viable path, unlike in software engineering.
Yes, QA is important. My code will always "work" in the sense that everything I tested is bug free. But having someone else test, especially someone who knows the service, is gold.
But there is also bad QA: the most worthless QA I was forced to work with was an external company where I, as the developer, had to write the test sheet and they just tested that. Obviously they could not find bugs, as I had already tested everything on the sheet.
My most impressive QA experience was when I helped out a famous Japanese gaming company. They tested things like pressing multiple buttons in the same frame and watched my code crash.
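That class of check can even be mechanized once you've seen it. A rough sketch of a same-frame input sweep, with an invented `handle_input` standing in for real game code:

```python
# A same-frame input sweep: "press" every combination of buttons in one
# frame and assert the handler never crashes. `handle_input` is invented
# stand-in logic; real game code would mutate actual game state.
from itertools import combinations

BUTTONS = ["A", "B", "X", "Y", "START", "SELECT"]

def handle_input(pressed: frozenset) -> str:
    if {"START", "SELECT"} <= pressed:
        return "soft-reset"
    if {"A", "B"} <= pressed:
        return "cancel-confirm"  # the kind of ambiguous combo that breaks real code
    return "ok"

# Sweep all 2^6 same-frame combinations; any exception fails the sweep.
for n in range(len(BUTTONS) + 1):
    for combo in combinations(BUTTONS, n):
        handle_input(frozenset(combo))
```

The sweep is cheap to run in CI; the hard part, which the QA team supplied, is knowing that "multiple buttons in the same frame" is a case worth sweeping at all.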
> But there is also bad QA: the most worthless QA I was forced to work with was an external company where I, as the developer, had to write the test sheet and they just tested that. Obviously they could not find bugs, as I had already tested everything on the sheet.
This was my sole experience at the one place I worked with an internal QA team. They absolutely could never find bugs that devs missed, often mis-marked ones that didn't exist, and failed to find obvious edge cases that did exist.
Multiple devs fired because the CEO believed the QA over the engineering team; if they marked a bug as present, it was the engineer's fault for writing it. If they didn't catch a bug that made it to prod, it was the engineer's fault for not including it in the test plan. They represented nothing but red tape and provided no value.
Good QA sounds great! I'd love to know what that's like someday! It'd be great to have someone testing my code and finding breakages I missed! I'm only slightly (incredibly) bitter about my bad experience with its implementation.
I do think the type of testing where QA just follows a pre-generated script has a place. But it is about long-term regression. The first round absolutely should not find anything, but with a complex system it also should find nothing in a year, or three, or five years... Offloading this to a dedicated resource could be useful in certain industries.
I did not think of that. For some industries it might make sense. But if I want a regression test, I would probably set it up as an automated test. In the case I mentioned above, it was the only test besides my own for a new service.
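As a sketch of what that automation could look like (the function and the regression case are invented stand-ins for the real service):

```python
# Minimal automated regression suite: once a bug is found (by QA or in
# prod), pin the fixed behavior down so nobody has to re-run the test
# sheet by hand. `normalize_username` is an invented stand-in.

def normalize_username(raw: str) -> str:
    """Trim, lowercase, and collapse internal whitespace."""
    return " ".join(raw.strip().lower().split())

def test_plain_name():
    assert normalize_username("Alice") == "alice"

def test_whitespace_regression():
    # This exact input once broke the service (hypothetical ticket);
    # keeping the case here means the manual check never repeats.
    assert normalize_username("  Bob   Smith ") == "bob smith"

def test_empty_input():
    assert normalize_username("   ") == ""

# Run directly, or let pytest/unittest discovery pick the functions up.
for t in (test_plain_name, test_whitespace_regression, test_empty_input):
    t()
```

Every scripted check on the sheet that survives a first round unchanged is a candidate to move into a suite like this.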
Not really that impressive, that's Testing Quick Attacks 101
The tension is that QA is important. But most QA practitioners are not good. The world is filled with QA people who couldn't make it as an SWE and now are button pushers. But high quality QA people are amazing. These are the ones who understand how to break apart a system, push it to its limits, and engineer the quality plan.
This is an area where I expect AI to create a bimodal future. The smaller group of high quality QA people will now be able to offload the activity to agents instead of the QA drones. They'll still be worth their weight in gold, whereas the drones will be redundant.
I've worked with a lot of QA folks who just repackage the unit tests the dev already writes. And I've met a few who strike out on their own and come up with real tests.
The latter is much more high touch, but they're often worth their weight in gold. The former is kinda pointless.
Exactly. I think AI tooling will make that good group even more effective. And it will make the bad group even more pointless.
Exactly. I spent 20 years split between MS and Apple. Some of the best people I ever worked with were in QA. One guy in particular was an extremely talented engineer who simply didn't enjoy the canonical "coding" role; what he did enjoy was finding bugs and breaking things. ;-)
Really? The best people I worked with were never QA.
Moreover, the best QAs would almost always try to get out of QA - to shift into a better-respected and better-paid field.
I wish it weren't so (hence my username), but there is a definite class divide between devs and QA, and it shows up not just in the pay packets but also in who gets the boot in down times and who gets listened to. This definitely affects the quality of people.
I think it's overdue an overhaul much like the sysadmin->devops transition.
I mean, the people who come up through QA may be the best, while putting in enough time at the company to move to a position that pays.
But yeah, so many companies cheap out on their QA and then wonder why their QA sucks.
We have differing experiences, which shouldn't be surprising. My example explicitly referred to someone who was a good engineer who enjoyed the QA role.
This might have been an Apple/MS thing, but we always had very technical QA people on the dev tools team. For example, the QA lead for the C++ compiler had written their own compiler from scratch and was an amazing contributor.
In the Windows team (back before the test org was decimated) I saw the described "class divide". Anybody who was good enough would switch from SDET to SDE [disclaimer: obviously there were some isolated exceptions]. The test team produced reams of crappy test frameworks, each of which seemed like a "proving project" for its creators to show they could be competent SDEs. After the Great Decimation my dev team took ownership of many such frameworks and it was a total boondoggle; we wasted years trying (and mostly failing) to sort through the crappy test code.
This was all unfortunate, and I agree in principle with having a separate test org, but in Windows the culture unfortunately seemed to be built around testers as second-class software developers.
I spent most of my time working on Visual Studio (in the Boston time frame) so we got to interact with pretty much every team. I absolutely hated interacting with the Windows team. Everything was a fight for no reason.
As I said above, everyone has their own experiences but the QA folks I worked with at MS were fantastic.
Not sure if you're aware, but Dave Plummer now has a really good YT channel [0] where he talks about MS back in those days. It's a fun walk down memory lane.
> Really? The best people I worked with were never QA.
> Moreover, the best QAs would almost always try to get out of QA - to shift into a better-respected and better-paid field.
That sort of seems circular. If they're not respected or paid well, of course most of the talented people would not want to remain in QA, and eventually you'd just have mediocre QA. That doesn't really give you any insight into whether high quality QA would be useful though.
(edit: I see now that's basically the point you're trying to make, so I guess we're in agreement)
> Good QA people know how to find regressions and bugs _that you didn’t think about_, which is the whole reason why it shouldn’t be under “engineering”
I don’t understand the reasoning here why QA shouldn’t be engineering.
> I don’t understand the reasoning here why QA shouldn’t be engineering.
Who watches the watcher, right?
That aside, the core idea is the same as the principles of independent audit, peer review, or even simply just specialization.
Red team / Blue team?
Yes but both the red team and blue team would still be engineering.
Yes, but police and military are both law enforcement on one level, yet each is very different from the other.
Even the military have police, right?
edit: ultimately, it comes down to the importance of independent audit, the builders and the breaker/fixers are very different groups in engineering.
The red team and blue team should not share supervisors.
Nor, in the case of QA, should the audit team be engineers trained to act and think like the ones who wrote the software. A fresh perspective is useful.
But in the long run, supervisory independence is the real deal. I know of a QA manager who shut down an entire factory's output until a major safety issue (that had been kicked down the road several times) was addressed. It took chutzpah, and serious power, to do that. The Dir. of Engrg. would NEVER have allowed it.
Frankly, calling software development engineering is quite debatable. We should be calling fewer things engineering that aren't actually engineering qualifications.
"Automated Verification Engineer is experimental"
Except I worked at a company with a QA department made up of entirely "Automated Verification Engineers" ... over a decade ago. And the head of the department had taught at a local QA school (so presumably other QA engineers learned that style of work from her also).
Good QA departments switched to this mode long before AI was even a thing! Maybe 90+% of QA departments didn't work that way pre-AI, but there certainly were ones that did!
Absolutely QA "should" exist. Our QAs are the most knowledgeable people on our product, often informing devs and product alike of requirements, missing requirements, weird configuration outliers, how to actually use the damned app, etc. Without QA we would be developing and testing for brittle requirements to get code into an MVP state, not a functional, user-friendly state.
"Should" is one of those words. QA absolutely should not exist. Developers should not write buggy code to begin with! But they do, so QA (in some form) must exist for software to be any good.
There are two very important ideas in this article, which I fully agree with: QA are not the only people responsible for quality - the entire team is. QA act as experts and drivers of the quality management process, but they should not and do not act alone. They should take an adversarial approach, which is helpful at every stage of the SDLC. Thus, a few more items from my list of why QA is useful in every engineering organization, and why every team I hire has at least one QA starting from 4-5 people:
1. Quality management is a continuous process that starts with product discovery and business requirements. Developers often assume that requirements are clear and move on to building the happy path. QA often explore requirements in depth and ask a lot of good questions.
2. QA usually have the best knowledge of the product and help product managers understand its current behavior when new requirements suggest changing it.
3. The same applies to product design. A good designer never leaves the team with just a few annotated screens, and supports developers until the product is shipped. Design QA - verifying that the implementation conforms to design specs - can be done with the QA team, which can assist with automating design-specific tests.
4. Customer support - QA people are natural partners of customer support organization, with their knowledge of the product, existing bugs and workarounds.
And just a story: at one of my previous jobs, a recently hired QA engineer spotted a numerical error in an All Hands presentation. That was immediate buy-in from the founders. :)
"Engineers sometimes exhibit an arrogance that they can do everyone else’s job,"
This rings so many bells that it feels like some Buddhist festival.
Apply the same approach to QA, Operations, and anything outside the actual product development: as long as this arrogance was shared between bosses and developers, all was good on their side.
Now with AI, the arrogance stays only on the bosses' side, and we have developers freaking out.
It's also a fucking annoying sentiment for us as engineers. I don't want to do everyone else's job.
A good QA actually does the work that product managers should but don't, in terms of understanding actual users and checking the feature delivered matches actual user ask.
But I've worked at places with a whole spectrum of coverage in roles spanning Product Mgrs, Project Mgrs, BAs, QAs, production support level 1, production support level 2, etc. The one constant is whatever is missing or understaffed just ends up getting done by engineers.
Testing, on-call, Jira managing, requirements gathering with users, analysis, etc... all falls on to engineering. Then management gets even more wound up about dev productivity/velocity, etc.
But, as someone who’s agile and adaptable, I can do any job. That doesn’t mean I can do them all simultaneously. It doesn’t mean I can be the full-time loan officer and the full-time app developer.
Can I do your job? Yep. Can I also, at the same time, be the engineer that optimizes the IT systems? No - one of these jobs will suffer.
Give me the chance to understand your job, and I’ll replace as much of it as possible with code to do the same thing. But what it won’t do is have good judgement. It will make decisions on actual data - accurate data, erroneous data, it doesn’t care.
I think this is an interesting place to put “AI” - can it take input in the form of data and historical decisions, and come to a new decision from recent data? The same decision a human would?
Thank you for embodying and exemplifying the point, albeit ironically.
Aside: it has become an interesting personal experiment to stop being obviously ironic and see how people read what I’ve written. The voting is telling.
You must not have heard of Poe's law yet. This is not the flex you think it is.
So many of these articles talk about why a particular role or type of role within an org should be there or not, but they fail to touch on the 'theory' of why or why not. This article has that same lack of foundation, and so meanders around a bit, IMHO.
Any process in an organization of size will have indicators that measure output. Those indicators should typically be paired with indicators that measure the quality of the output, to ensure product or service levels. That's the theory, and the genesis of 'quality management': whether you're measuring output code or breakfasts [1] or chemicals or widgets or medicine, you need to measure the quality of the output if there are any client specifications or expectations around the output. And there are very few cases where your customers will not have any specs or expectations around your product or service.
How you manage quality follows from those basics; it matters where you measure quality but it is so process dependent - earlier in the process lowers costs, but may not suffice to guarantee final quality - that quality management has to be designed around the specific process; balancing cost with benefit and requirements. How deep or specialized quality management becomes depends on the needs of the org, the size of the org, and the needs of the particular process.
This is why I'm skeptical about whether broad articles like this are beneficial overall. Why and how matter, and where's the foundational discussion behind why and how? Do folks not think at the organizational/business level? Maybe not everyone is a Sheryl Sandberg :-)
The best QA isn’t just about finding bugs. It’s about bringing quality to the codebase: typing, better static analysis, linters, and useful libraries. In the other direction, it’s also about integrating into the release process: folding the what-goes-on / what-stays-in-beta decisions into quality’s approach to giving signal, as with any other part of the codebase.
Anything that involves gating bits of code, basically - and deciding whether or not to gate them.
The best QA is linting, CI, and language features? Sure, and automated testing at different levels too, but this sounds like basic things the engineering org already does.
I was in testing for 17 years before moving back into Engineering. I have spent my time in Engineering leading teams to push quality left. But I think it's better to say "quality is a system" than to say "engineers own quality." What are you building into your SDLC that makes sure quality happens? Testing is just a part of that, and not even the biggest one.
Is this the real life? Is this just fantasy?
If engineering owns quality, then engineering owns it all, up the chain. No need for anything or anybody else.
Which is the AI pipe dream, really.
You are, in fact, by using AI for QA or coding or otherwise, externalizing services in the hope that the services will improve and costs will drop.
Let me know how that goes without HITLs.
If engineering owns quality, it ought to also own the liability, too.
That would put the damper on the pipe dream pretty quick. Probably more healthily than any data center ban could ever do.
If engineers were licensed, bonded, and liable, things would go very differently.
* speaking as having been a practicing software “engineer” for a decade
Applies to making bread or brain surgery. Sometimes things go wrong and people are held accountable.
But since CEOs, or any other bosses, need to make a living, they will eat the liability in exchange for wealth, and leave engineering in the dust.
Somehow hospital CEOs and doctors/nurses have managed to both eat the liability and make a living.
I think the same patterns can be applied here.
> Is this the real life? Is this just fantasy?
Caught in a landslide, no escape from reality ...
:-)
QA? Or testers?
I worked with an excellent QA once, and that changed my perspective completely as a dev.
A great QA can understand the features of a product quickly, turn those concepts into some sort of grid or matrix in their mind, then pull a bunch of paths and scenarios with estimated priorities and probabilities at a fast and efficient pace, all with great coverage. They can also identify features contradicting each other more quickly than product people.
I think a good QA is capable of being a great vibe coder nowadays, too. If you can write great test suites (even just the test names), agents nowadays are able to turn those specs into decent codebases. By comparison, I know a lot of decent devs with not very good taste in testing, who often write overlapping tests or miss important paths.
I ship a very visible product which, when it breaks, generates a lot of social media angst (it's in the gaming adjacent space). So we try not to break things to the best of our ability. We have very few QA people and have whittled down that team over the past few years (DevOps was eliminated during the first round of layoffs ~2023).
This was painful at first but I do think it's the way to go. We found that too much manual QA incentivizes devs writing features to throw it over the fence - why should they test more if someone else is paid to do it? Devs need to feel the pain of writing tests if their code is hard to test, and they need to be held accountable when their code blows up in production. This feedback loop is valuable in the long run.
Same thing for test automation. Previously we shipped this over to our in-team DevOps people and they built complicated CI/CD setups. Losing them meant we needed to simplify our stack. Took a while and it slowed down feature development, but it was worth it. Of course you need leadership who understands this and dedicates time to building this out.
In defense of DevOps, I think the landscape for automation was poor a few years back. Jenkins and Teamcity are way too complex. Github Actions (for all its warts, and there are many) is much simpler. Our pipelines are also in their own CI/CD (CDK, CodeBuild) - infrastructure as code is the key to scaling.
We still have manual QA people to test things we can't automate. Usually this is for weird surfaces such as smart TVs, or for perceptual tests. I don't see this going away any time soon, but high levels of automation elsewhere drive down the need for "catch-all" manual testing.
I work with someone who does great QA work. They know how to rip something apart, they understand the user's non-technical perspective and approach, they understand what edge cases to look out for, and they have the actual equipment to test on different physical devices (and so on).
Most importantly, they have the diligence and patience to methodically test subtly different cases, which I frankly don't have.
On the question of whether QA slows things down, I have to ask: slows down what? Slows down releasing something broken? Why is that something to optimize for? We should always be asking how long it takes to release the right thing (indeed I'm most productive when I can close a ticket after concluding nothing is needed).
If all/most QA people were like this then no one would be complaining.
Sure, but this issue is not specific to QA. Any roles which you depend on with incompetent people occupying them will lead to issues and frustration.
If you have to ask, the answer is "yes."
Yes. Without a doubt.
I worked with a QA team for the last fifteen years until last year when they laid them all off.
QA is a discrete skill in and of itself. I have never met a dev truly qualified to do QA. If you don't think this, you have never worked with a good QA person. A good QA person's superpower is finding weird broken interactions between features and at the layers where they meet. Things you would never think of in a million years. Any dingbat can test input validation, but it takes a truly talented person to ask "what if I did X in one tab, Y in another, and then Z, all with this exact timing so events overlap". I have been truly stunned at some of the issues QA has found in the past.
As for time, they saved us so much time! Unless your goal is to not test at all and push slop, they take so much work off your plate!
Beyond feature testing, when a customer defect would come in they would use their expertise to validate it, reproduce it, document the parameters and boundaries of the issue before it ever got passed on to dev. Now all that work is on us.
> "what if I did X in one tab, Y in another, and then Z, all with this exact timing so events overlap"
As a QA: this bug will get downprioritised by PM to oblivion.
If anyone should not exist, it's PMs.
I kid a little, I worked with some very good PMs when we did client work who made my life much easier. Working on a SaaS though, I find them generally less than useful.
Not anyplace that cares about quality.
Where I work, it is normally easier to fix things than to deprioritize them to oblivion. I can fix an issue, but prioritization puts a dozen people in a meeting.
Depends on what happens in that case, no?
If it messes up the UI until you refresh, yeah, I understand deprioritizing that.
If it causes catastrophic data corruption or leaks admin credentials, any sane PM would want that fixed ASAP.
Should Accounting exist?
Should Legal exist?
Should Facilities exist?
Surely your average employee could own each of these functions.
> Have engineering own quality
The moment that happens it will either be re-outsourced to QA anyways or quality will become a question of licensing and bonding of professional engineers
There's a basic loop that goes on regardless:
1. define a requirement
2. implement the requirement
3. verify that the requirement was implemented
TDD was built around the idea that 1 and 3 could be unified in automated testing, and that's certainly true for a large part of it. But QA as a discrete role needs to exist because, beyond verifying that 2 was done correctly, they expose higher level bugs in 1, the requirements themselves.
It's virtually impossible to define requirements completely and without second order interactions that cause problems. QA is as effective at exposing assumptions and handwaving by the people who created the wireframes or the visual design as by the developers failing to test their own work.
And ideally, this leads to the cycle being virtuous: higher quality starts at the requirements phase, not the implementation phase. It's not just that QA should work closely with the engineers--the engineers need to work closely with UX and VD to ensure they fully understand the requirements. The incentives are aligned among all parties.
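A toy sketch of steps 1 and 3 unified as an automated test, and of where the QA-shaped questions still live afterward (the `shipping_cost` rule is invented for illustration):

```python
# Steps 1 and 3 unified: the requirement written as executable checks
# before the implementation exists. `shipping_cost` and its rule are
# invented for illustration.

def shipping_cost(subtotal: float) -> float:
    """Step 2, implementation. Requirement: free shipping over $50,
    otherwise a flat $5 fee."""
    return 0.0 if subtotal >= 50 else 5.0

# Step 3: verify what the requirement actually said...
assert shipping_cost(60) == 0.0
assert shipping_cost(20) == 5.0

# ...but the QA-shaped questions live in what it never said. At exactly
# $50 this implementation ships free, yet the spec said "over $50" --
# a requirement bug, not an implementation bug, and exactly the kind of
# second-order gap a tester surfaces.
assert shipping_cost(50) == 0.0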
I think social media companies don't need that
Enterprise software companies definitely need it. Customers ask: was this tested? Where is the test report?
It’s not even optional as soon as you’re getting close to any type of standards or compliance framework like SOC2 and the likes.
As someone who holds a degree in Medicine besides his degree in Computer Science and works in medical devices, can I just please emphasise that QA should exist in this industry and is well deserved?
Sorry. No 'blue screens' or stack traces in my pacemaker or insulin pump, please.
> No 'blue screens' ... in my pacemaker or insulin pump
they have screens?!? /s
"Before I weigh in further, I’d like to make sure you’re familiar with the testing pyramid."
The testing pyramid is SWE kool-aid par excellence. Someone wrote a plausible-sounding blogpost about it many years ago, and then people started regurgitating it without any empirical evidence behind it.
Many of us have realised that you need a "testing hourglass", not a "testing pyramid". Unit tests are universally considered useful, there's not much debate about it (also they're cheap). Integration tests are expensive and, in most cases, have very limited use. UI and API tests are extremely useful because they are testing whether the system behaves as we expect it to behave.
E.g. for a specific system of ours we have ~30k unit tests and ~10k UI/API tests. UI and API tests are effectively the living, valid documentation of how the system behaves. Those tests are what prevent the system becoming 'legacy'. UI and API tests are what enable large-scale refactors without breaking stuff.
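To make the "living documentation" point concrete, an API-level test in this style reads as a behavioral contract rather than an implementation detail. A minimal sketch (the route and data are invented; a real suite would drive the same checks through an HTTP test client):

```python
# An API-level test describes behavior, not internals: as long as the
# endpoint keeps this contract, any amount of refactoring underneath
# still passes. The route and data below are invented stand-ins.
import json

_DB = {"42": {"id": "42", "name": "widget", "stock": 3}}

def get_item(item_id: str) -> tuple:
    """Stand-in for GET /items/<id>; returns (status, json_body)."""
    item = _DB.get(item_id)
    if item is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(item)

# These checks double as living documentation of system behavior:
status, body = get_item("42")
assert status == 200 and json.loads(body)["name"] == "widget"

status, body = get_item("999")
assert status == 404
```

Swap the internals for a different database or framework and the suite is untouched, which is exactly what makes large-scale refactors safe.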
Isolated QA should not exist because anything a QA engineer can do manually can be automated.
The testing pyramid comes from a time of desktop apps with no API, when UI tests were extremely expensive. It made 100% sense in that context; it never did in other contexts. Despite its omnipresence, it has not made any sense for the vast majority of us in the past 25 years.
Everyone gets the pyramid wrong, in my opinion.
The vertical axis is not test type. It is how willing you are to run the test. At the bottom are deterministic, fast tests, even for something completely unrelated to what you are working on, but they are so easy/fast you run them anyway 'just in case'. As you move up, you get tests that you more and more want to avoid running: tests that take a long time, tests that randomly fail when nothing is wrong, tests that need some setup, tests that need some expensive license (I can't think of more right now, but I'm sure there are).
You want to drive everything down as far as possible, but there is value in tests that sit higher, so you won't get rid of them. Just remember: as soon as you cross the 'I would run this test but I'm skipping it for now because it is annoying' line, you need a separate process to ensure the test is eventually run. You are trading off speed now against the risk that the test will find something later, when it is 10x harder to fix - when a test runs all the time you know what caused the failure and can go right there, while later means you did several things and have forgotten the details. 10x is an estimate; depending on where in your process you put it, it could be 100 or even 1000 times harder.
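One way to mechanize that line is to gate the annoying tier behind an explicit opt-in, so the cheap tests always run and a separate scheduled job (the "separate process" above) runs the rest. A sketch with Python's unittest; `parse_price` and the `RUN_SLOW` flag are invented for illustration:

```python
# Encode the vertical axis explicitly: cheap deterministic tests always
# run; the "annoying" ones are gated behind an opt-in flag so they still
# exist, and a scheduled CI job runs them with RUN_SLOW=1.
import os
import time
import unittest

def parse_price(s: str) -> int:
    """Invented function under test: '$1.50' -> 150 cents."""
    return int(round(float(s.lstrip("$")) * 100))

class FastTests(unittest.TestCase):
    def test_parse_price(self):
        # Bottom of the axis: instant and deterministic, run "just in case".
        self.assertEqual(parse_price("$1.50"), 150)

class SlowTests(unittest.TestCase):
    @unittest.skipUnless(os.environ.get("RUN_SLOW") == "1",
                         "set RUN_SLOW=1 to run the slow/annoying tier")
    def test_expensive_end_to_end(self):
        # Higher up the axis: still valuable, but you'd avoid it on every save.
        time.sleep(0.1)  # stand-in for spinning up a real environment
        self.assertEqual(parse_price("$0.99"), 99)

if __name__ == "__main__":
    unittest.main(argv=["tests"], exit=False)
```

The skip message keeps the gated tests visible in every run, which is what stops the "eventually" from quietly becoming "never".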
I’ve never encountered an initiative to “shift left” that wasn’t directly motivated by clunky, slow, unreliable and unmaintainable E2E tests. Failing earlier, especially pre-deployment, with targeted integration and contract testing is fabulous but it can’t replace rubber hitting road.
I’ve had quite a bit of success in helping my dev teams to own quality, devising and writing their own test cases, maintaining test pipelines, running bug hunts, etc. 90% of this can be attributed to treating developers as my customer, for whom I build software products which allow them to be more productive.
> because anything a QA engineer can do manually can be automated.
Looks like you never worked with a decent QA team and do not understand the full scope of quality management. They have plenty of creative tasks not aligned with other roles.
> Isolated QA should not exist because anything a QA engineer can do manually can be automated.
Well, sort of maybe, but it's not always economical. For a normal web app - yeah I guess. Depends on the complexity of the software and the environment / inputs it deals with.
And then there's explorative testing, where I always found a good QA invaluable. Sure, you can also automate that to some degree. But someone who knows the software well and tries to find ways to make it behave unexpectedly is also valuable.
I would agree that solid development practices can handle 80% of the overall QA though, mainly regression testing. But those last 20%, well I think about those differently.
> And then there's explorative testing, where I always found a good QA invaluable.
Yes, I agree. We do this too. Findings are followed by a post-mortem-like process:
- fix the problem
- produce an automated test
- evaluate why the feature wasn't autotested properly
> it's not always economical. For a normal web app - yeah I guess
What do you define as "normal"? I can't think of anything harder to test than a web app.
Even a seemingly trivial static HTML site with some CSS on it will already have inconsistencies across every browser and device. Even if you fix all of that (unlikely), you still haven't done your WCAG compliance, SEO, etc.
The web is probably the best example case for needing a QA team.
Of course, anyone would agree that if wishes were fishes, QAs should not exist. We would all use agile with cross-functional teams. Every single team member can do any work that may be needed. All team members can take time off any time they need to because we have full coverage and the world is a beautiful place.
Of course, none of this is true in the real world.
For example, just last week one of our QA people essentially brought down our web application on the staging environment, reproducibly, with a sequence of four clicks. Follow the sequence with about the right timing and boom - exception.
Should this have been caught before a single line of code was written? Yes, it should have been. However, the reality is that it was not. Should it have been caught by some unit test? Integration test? End-to-end test? Code review? I'd argue that as we barrel into a world of AI slop, we need to slow down more. We need QA more than ever.
Hard to believe people are asking this question in 2026.
Quality is something that takes dedicated focus and lots of work. Therefore it’s a job, not an afterthought or the last priority of someone whose primary focus is something else.
Like the majority of slop articles, the author is advertising his services.
So why would you outsource it to another team that isn't doing the actual work?
Team doing the work should do QA so they only produce quality.
But on the other hand, those people often cannot be trusted, so you need a team that checks again. Alternatively, they might have misunderstood something and thus produced an incorrect system. Or there is some other fault in their thought process, or the system operates differently in a more realistic scenario.
QA is actual work. Building the thing is actual work. Each is not "the" work, which is the task of the whole company.
QA perspective and focus is just different from the one of the team building the thing. It's precisely because of their detached perspective that they can do their work properly.
In the age of AI of course not, AI is your QA
I think the unstated (but highly prevalent) view among executives in large swathes of this industry is that they don't really care to spend any time or money on user testing or quality assurance, and if this role exists at companies it is usually under-compensated and straddles both these functions to have some party be accountable. It is sometimes a check on product teams and vision-driven executive teams who don't prototype/test their ideas (or empower their teams to do so), and sometimes a check on engineers and engineering managers who don't want to be accountable to gaps in quality.
If you care about the consistency of your output and what you're selling to your customers, you don't have much of a choice but to control what you are shipping.
To say nothing of the potential regulatory, market, and legal consequences if you don't.
Developers are liars. Why would I trust them to test their software?
How many times has it happened that the code doesn't even compile on their machine? Which should be the absolute bare minimum...
If your product is used by humans, then it needs to be tested by humans - this cannot be automated. Those humans can be your QA people, or your customers. Perhaps your customers are happy to be testers, perhaps not.
Unit tests are very expensive and return little value. Conversely, a (manual?) 'smoke test' is very cheap and returns great value - the first thing you do when updating a server, for example, is to check that it still responds (and that nothing has gone wrong in the deployment process). It takes two seconds and prevents highly embarrassing downtime due to a misconfigured docker pull or whatever.
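That kind of manual smoke test can also be scripted in a few lines. A minimal sketch in Python, assuming a plain HTTP endpoint where "responds with 200" is taken to mean healthy (the URL and that rule are assumptions, not from the comment above):

```python
# Minimal post-deploy smoke test: just check that the server answers at all.
# Treating any HTTP 200 as "healthy" is an illustrative assumption.
import urllib.request

def smoke_test(url: str, timeout: float = 5.0) -> bool:
    """Return True if the service responds with HTTP 200 within `timeout`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, timeout, non-2xx, etc. -> unhealthy
        return False
```

Run right after a deploy, this catches the "misconfigured docker pull" class of failure in seconds, long before any unit test would notice anything.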
> Unit tests are very expensive and return little value
Why are unit tests very expensive? This goes against everything I know.
Unit tests very roughly double the amount of effort required to make any meaningful change to your codebase. They also require maintenance, same as ordinary code - but the customer does not care in the slightest whether or not they pass. On top of that, they can only really tell you about low-level bugs you already expected; they cannot surface system-level bugs - the actual hard bugs that cause problems for you and your customers.
Then there is the danger of thinking that green=all good, an example of 'automation bias' where we learn to trust the automation even as things go wrong.
As makers, it is also tempting to believe that [all] problems can be solved by making something (i.e. code), but actually many problems are not of that nature, and cannot be solved in that way.
Thank you, that makes sense. What I meant was that today all unit tests are basically written by an AI so the "cost" is almost zero. Am I wrong?
Automated tests are no longer expensive to write and nowadays less expensive to execute.
Not really. But you can distill his knowledge into an OpenClaw.
But those were very small teams, and eventually I was the only founding engineer left on the team and far too many mistakes by other people made it to my desk because I was the only person who could find them in review or track them down quickly in production. That was when I started hiring QA people.
I've almost never worked on a project where there was the right number of QAs doing the right thing.
Usually there either aren't any, in which case bugs get missed, or there are five very cheap ones running mindless scripts - standing in for the devs' inability or unwillingness to write decent automated tests - who don't catch the really deep, thorny stuff.
Interesting username
i know right hahah
> More importantly, it is almost impossible for engineers to be as well incentivized to spend extra time exploring edge cases in something they already believe to work than to ship a feature on time.
Personal liability and professional insurance works for all the actual “professions” in the US, to some extent, right?
It might be time to start considering professional licensing for platform-scale or commercially published software.
More like certified products. A new ISO standard may require professional liability for software products, which big consumers will adopt as a requirement, pulling the industry into a certification loop because insurers will ask for it. This will obviously put a high entry barrier on many product categories, slowing down innovation.
Yes, but slowing down to avoid hazards is sometimes important.
Medical devices and such are the only places I’d expect to see the need for certified products. By extension, in the new era, we really ought only to expect certified software where we expect a duty of care from the software system (or any other assigned duty).
In development of medical devices existing quality controls are already working well, right?
My point exactly, embedded devices are the closest software gets to actually being built by licensed engineers. The expectation can often be that you are an electrical engineer by training, where licensure is a viable path, unlike in software engineering.
Yes, QA is important. My code will always "work" in that everything I tested is bug free. But having someone else test it, especially someone who knows the service, is gold.
But there is also bad QA: the most worthless QA I was forced to work with was an external company where I, as the developer, had to write the test sheet and they just tested that. Obviously they could not find bugs, as I had already tested everything on the sheet.
My most impressive QA experience was when I helped out a famous Japanese gaming company. They tested things like pressing multiple buttons in the same frame and watching my code crash.
> But there is also bad QA: the most worthless QA I was forced to work with was an external company where I, as the developer, had to write the test sheet and they just tested that. Obviously they could not find bugs, as I had already tested everything on the sheet.
This was my sole experience at the one place I worked with an internal QA team. They absolutely could never find bugs that devs missed, often mis-marked ones that didn't exist, and failed to find obvious edge cases that did exist.
Multiple devs were fired because the CEO believed QA over the engineering team: if they marked a bug as present, it was the engineer's fault for writing it; if they didn't catch a bug that made it to prod, it was the engineer's fault for not including it in the test plan. They represented nothing but red tape and provided no value.
Good QA sounds great! I'd love to know what that's like someday! It'd be great to have someone testing my code and finding breakages I missed! I'm only slightly (incredibly) bitter about my bad experience with its implementation.
I do think the type of testing where QA just follows a pre-generated script has a place, but it is about long-term regression. The first round absolutely should not find anything - and with a complex system it should also find nothing in a year, or three, or five... Offloading this to a dedicated resource could be useful in certain industries.
I did not think of that. Maybe for some industries it might make sense. But if I wanted a regression test, I would probably set it up as an automated test. In the case I mentioned above, it was the only testing besides my own for a new service.
Not really that impressive, that's Testing Quick Attacks 101
The tension is that QA is important. But most QA practitioners are not good. The world is filled with QA people who couldn't make it as an SWE and now are button pushers. But high quality QA people are amazing. These are the ones who understand how to break apart a system, push it to its limits, and engineer the quality plan.
This is an area where I expect AI to create a bimodal future. The smaller group of high quality QA people will now be able to offload the activity to agents instead of the QA drones. They'll still be worth their weight in gold, whereas the drones will be redundant.
I've worked with a lot of QA folks who just repackage the unit tests the devs already write. And I've met a few who strike out on their own and come up with real tests.
The latter is much more high touch, but they're often worth their weight in gold. The former is kinda pointless.
Exactly. I think AI tooling will make that good group even more effective. And it will make the bad group even more pointless.
Exactly. I spent 20 years split between MS and Apple. Some of the best people I ever worked with were in QA. One guy in particular was an extremely talented engineer who simply didn't enjoy the canonical "coding" role; what he did enjoy was finding bugs and breaking things. ;-)
Really? The best people I worked with were never QA.
Moreover, the best QAs would almost always try to leave QA - to shift into a better-respected and better-paid field.
I wish it weren't so (hence my username), but there is a definite class divide between devs and QA, and it shows up not just in the pay packets but also in who gets the boot in down times and who gets listened to. This definitely affects the quality of people.
I think it's overdue an overhaul much like the sysadmin->devops transition.
I mean, the people who come up through QA may be the best - until they've put in enough time at the company to move to a position that pays.
But yeah, so many companies cheap out on their QA and then wonder why their QA sucks.
We have differing experiences, which shouldn't be surprising. My example explicitly referred to someone who was a good engineer who enjoyed the QA role.
This might have been an Apple/MS thing, but we always had very technical QA people on the dev tools team. For example, the QA lead for the C++ compiler had written their own compiler from scratch and was an amazing contributor.
In the Windows team (back before the test org was decimated) I saw the described "class divide". Anybody who was good enough would switch from SDET to SDE [disclaimer: obviously there were some isolated exceptions]. The test team produced reams of crappy test frameworks, each of which seemed like a "proving project" for its creators to show they could be competent SDEs. After the Great Decimation my dev team took ownership of many such frameworks and it was a total boondoggle; we wasted years trying (and mostly failing) to sort through the crappy test code.
This was all unfortunate, and I agree in principle with having a separate test org, but in Windows the culture unfortunately seemed to be built around testers as second-class software developers.
I spent most of my time working on Visual Studio (in the Boston time frame) so we got to interact with pretty much every team. I absolutely hated interacting with the Windows team. Everything was a fight for no reason.
As I said above, everyone has their own experiences but the QA folks I worked with at MS were fantastic.
Not sure if you're aware, but Dave Plummer now has a really good YT channel [0] where he talks about MS back in those days. It's a fun walk down memory lane.
[0]: https://www.youtube.com/@DavesGarage
> Really? The best people I worked with were never QA.
> Moreover, the best QAs would almost always try to be not QA - to shift into a better respected and better paid field.
That sort of seems circular. If they're not respected or paid well, of course most of the talented people would not want to remain in QA, and eventually you'd just have mediocre QA. That doesn't really give you any insight into whether high quality QA would be useful though.
(edit: I see now that's basically the point you're trying to make, so I guess we're in agreement)
> Good QA people know how to find regression and bugs _that you didn’t think about_ which is the whole reason why it shouldn’t be under “engineering”
I don’t understand the reasoning here why QA shouldn’t be engineering.
> I don’t understand the reasoning here why QA shouldn’t be engineering.
Who watches the watcher, right?
That aside, the core idea is the same as the principles of independent audit, peer review, or even simply just specialization.
Red team / Blue team?
Yes but both the red team and blue team would still be engineering.
Yes, but police and military are both law enforcement, on one level, but each are very different from the other.
Even the military have police, right?
edit: ultimately, it comes down to the importance of independent audit, the builders and the breaker/fixers are very different groups in engineering.
The red team and blue team should not share supervisors.
Nor, in the case of QA, should the audit team be engineers trained to act and think like the ones who wrote the software. A fresh perspective is useful.
But in the long run, supervisory independence is the real deal. I know of a QA manager who shut down an entire factory's output until a major safety issue (that had been kicked down the road several times) was addressed. It took chutzpah, and serious power, to do that. The Dir. of Engrg. would NEVER have allowed it.
Frankly, calling software development engineering is quite debatable. We should be calling less things engineering that aren't actually engineering qualifications.
"Automated Verification Engineer is experimental"
Except I worked at a company with a QA department made up of entirely "Automated Verification Engineers" ... over a decade ago. And the head of the department had taught at a local QA school (so presumably other QA engineers learned that style of work from her also).
Good QA departments switched to this mode long before AI was even a thing! Maybe 90+% of QA departments didn't work that way pre-AI, but there certainly were ones that did!
Absolutely QA "should" exist. Our QAs are the most knowledgeable people on our product, often informing devs and product alike of requirements, missing requirements, weird configuration outliers, how to actually use the damned app, etc. Without QA we would be developing and testing for brittle requirements to get code into an MVP state, not a functional, user-friendly state.
"Should" is one of those words. QA absolutely should not exist. Developers should not write buggy code to begin with! But they do, so QA (in some form) must exist for software to be any good.
There are two very important ideas in this article, which I fully agree with: QA are not the only people responsible for quality - the entire team is. QA act as experts and drivers of the quality management process, but they should not and do not act alone. They should take an adversarial approach, which is helpful at every stage of the SDLC. Thus, a few more items from my list of why QA is useful in every engineering organization, and why every team I hire has at least one QA starting from 4-5 people:
1. Quality management is a continuous process that starts with product discovery and business requirements. Developers often assume that requirements are clear and move on to building the happy path. QA often explore requirements in depth and ask a lot of good questions.
2. QA usually have the best knowledge of the product and help product managers to understand its current behavior, when new requirements suggest to change it.
3. The same applies to product design. A good designer never leaves the team with just a few annotated screens; they support developers until the product is shipped. Design QA - verifying that the implementation conforms to the design specs - can be done with the QA team, which can assist with automating design-specific tests.
4. Customer support - QA people are natural partners of customer support organization, with their knowledge of the product, existing bugs and workarounds.
And just a story: at one of my previous jobs, a recently hired QA engineer spotted a numerical error in an All Hands presentation. That earned immediate buy-in from the founders. :)
"Engineers sometimes exhibit an arrogance that they can do everyone else’s job,"
This rings so many bells that it feels like some Buddhist festival. Apply the same approach to QA, Operations, and anything outside the actual product development: while this arrogance was shared between bosses and developers, all was good on their side. Now, with AI, the arrogance stays only on the bosses' side, and we have developers freaking out.
It's also a fucking annoying sentiment for us as engineers. I don't want to do everyone else's job.
A good QA actually does the work that product managers should but don't, in terms of understanding actual users and checking the feature delivered matches actual user ask.
But I've worked at places with a whole spectrum of coverage in roles spanning Product Mgrs, Project Mgrs, BAs, QAs, production support level 1, production support level 2, etc. The one constant is that whatever is missing or understaffed just ends up getting done by engineers.
Testing, on-call, Jira managing, requirements gathering with users, analysis, etc... all falls on to engineering. Then management gets even more wound up about dev productivity/velocity, etc.
But, as someone who’s agile and adaptable, I can do any job. That doesn’t mean I can do them all simultaneously. It doesn’t mean I can be the full-time loan officer and the full-time app developer.
Can I do your job? Yep. Can I also, at the same time, be the engineer that optimizes the IT systems? No - one of these jobs will suffer.
Give me the chance to understand your job, and I’ll replace as much of it as possible with code to do the same thing. But what it won’t do is have good judgement. It will make decisions on actual data - accurate data, erroneous data, it doesn’t care.
I think this is an interesting place to put “AI” - can it take input in the form of data and historical decisions, and come to a new decision from recent data? The same decision a human would?
Thank you for embodying and exemplifying the point, albeit ironically.
Aside: it has become an interesting personal experiment to stop being obviously ironic and see how people read what I’ve written. The voting is telling.
You must not have heard of Poe's law yet. This is not the flex you think it is.
So many of these articles talk about why a particular role or type of role within an org should be there or not, but they fail to touch on the 'theory' of why or why not. This article has that same lack of foundation, and so meanders around a bit, IMHO.
Any process in an organization of size will have indicators that measure output. Those indicators should typically be paired with indicators that measure the quality of the output, to ensure product or service levels. That's the theory, and the genesis of 'quality management': whether you're measuring output code or breakfasts [1] or chemicals or widgets or medicine, you need to measure the quality of the output if there are any client specifications or expectations around the output. And there are very few cases where your customers will not have any specs or expectations around your product or service.
How you manage quality follows from those basics; it matters where you measure quality but it is so process dependent - earlier in the process lowers costs, but may not suffice to guarantee final quality - that quality management has to be designed around the specific process; balancing cost with benefit and requirements. How deep or specialized quality management becomes depends on the needs of the org, the size of the org, and the needs of the particular process.
This is why I'm skeptical about whether broad articles like this are beneficial overall. Why and how matter, and where's the foundational discussion behind why and how? Do folks not think at the organizational/business level? Maybe not everyone is a Sheryl Sandberg :-)
[1] https://en.wikipedia.org/wiki/High_Output_Management
The best QA isn’t just about finding bugs. It’s about bringing quality to the codebase: typing, better static analysis, linters, and useful libraries. In the other direction, it’s also about integrating into the release process: folding the what-goes-out / what-stays-in-beta decisions into quality’s approach to giving signal, the same as for any other part of the codebase.
Anything that involves gating bits of code, basically, and deciding whether to gate bits of code or not.
The best QA is linting, CI, and language features? Sure, and automated testing at different levels too, but this sounds like basic things the engineering org already does.
I was in testing for 17 years before moving back into Engineering. I have spent my time in Engineering leading teams to push quality left. But I think it's better to say "quality is a system" than to say "engineers own quality." What are you building into your SDLC that makes sure quality happens? Testing is just a part of that, and not even the biggest one.
Is this the real life? Is this just fantasy?
If engineering owns quality, then engineering owns everything, up the chain. No need for anything or anybody else.
Which is the AI pipe dream, really.
By using AI - for QA, coding, or otherwise - you are in fact externalizing services in the hope that the services will improve and costs will drop.
Let me know how that goes without HITLs.
If engineering owns quality, it ought to also own the liability, too.
That would put the damper on the pipe dream pretty quick. Probably more healthily than any data center ban could ever do.
If engineers were licensed, bonded, and liable, things would go very differently.
* speaking as having been a practicing software “engineer” for a decade
This applies to making bread or to brain surgery. Sometimes things go wrong, and people are held accountable.
But since CEOs, or any other bosses, need to make a living, they will eat the liability in exchange for wealth, and leave engineering in the dust.
Somehow hospital CEOs and doctors/nurses have managed to both eat the liability and make a living.
I think the same patterns can be applied here.
> Is this the real life? Is this just fantasy?
Caught in a landslide, no escape from reality ...
:-)
QA? Or testers?
I worked with an excellent QA once, and that changed my perspective completely as a dev.
A great QA can understand the features of a product quickly, turn those concepts into some sort of grid or matrix in their mind, then pull a bunch of paths and scenarios with estimated priorities and probabilities at a fast and efficient pace, all with great coverage. They can also identify features contradicting each other more quickly than product people.
I think a good QA is capable of being a great vibe coder nowadays, too. If you can write great test suites (even just the test names), agents nowadays are able to turn those specs into decent codebases. Comparatively, I know a lot of decent devs with not-very-good taste in testing, who often write overlapping tests or miss important paths.
I ship a very visible product which, when it breaks, generates a lot of social media angst (it's in the gaming adjacent space). So we try not to break things to the best of our ability. We have very few QA people and have whittled down that team over the past few years (DevOps was eliminated during the first round of layoffs ~2023).
This was painful at first but I do think it's the way to go. We found that too much manual QA incentivizes devs writing features to throw it over the fence - why should they test more if someone else is paid to do it? Devs need to feel the pain of writing tests if their code is hard to test, and they need to be held accountable when their code blows up in production. This feedback loop is valuable in the long run.
Same thing for test automation. Previously we shipped this over to our in-team DevOps people and they built complicated CI/CD setups. Losing them meant we needed to simplify our stack. Took a while and it slowed down feature development, but it was worth it. Of course you need leadership who understands this and dedicates time to building this out.
In defense of DevOps, I think the landscape for automation was poor a few years back. Jenkins and Teamcity are way too complex. Github Actions (for all its warts, and there are many) is much simpler. Our pipelines are also in their own CI/CD (CDK, CodeBuild) - infrastructure as code is the key to scaling.
We still have manual QA people to test things we can't automate. Usually this is for weird surfaces such as smart TVs, or for perceptual tests. I don't see this going away any time soon, but high levels of automation elsewhere drive down the need for "catch-all" manual testing.
I work with someone who does great QA work. They know how to rip something apart, they understand the user's non-technical perspective and approach, they understand what edge cases to look out for, and they have the actual equipment to test on different physical devices (and so on).
Most importantly, they have the diligence and patience to methodically test subtly different cases, which I frankly don't have.
On the question of whether QA slows things down, I have to ask: slows down what? Slows down releasing something broken? Why is that something to optimize for? We should always be asking how long it takes to release the right thing (indeed I'm most productive when I can close a ticket after concluding nothing is needed).
If all/most QA people were like this then no one would be complaining.
Sure, but this issue is not specific to QA. Any roles which you depend on with incompetent people occupying them will lead to issues and frustration.
If you have to ask, the answer is "yes."
Yes. Without a doubt.
I worked with a QA team for the last fifteen years until last year when they laid them all off.
QA is a discrete skill in and of itself. I have never met a dev truly qualified to do QA; if you don't think this, you have never worked with a good QA person. A good QA person's superpower is finding weird broken interactions between features, and at the layers where they meet - things you would never think of in a million years. Any dingbat can test input validation, but it takes a truly talented person to ask "what if I did X in one tab, Y in another, and then Z, all with this exact timing so events overlap?". I have been truly stunned by some of the issues QA has found in the past.
As for time, they saved us so much time! Unless your goal is to not test at all and push slop, they are taking so much work off your plate!
Beyond feature testing, when a customer defect would come in they would use their expertise to validate it, reproduce it, document the parameters and boundaries of the issue before it ever got passed on to dev. Now all that work is on us.
> "what if I did X in one tab, Y in another, and then Z, all with this exact timing so events overlap"
As a QA: this bug will get downprioritised by PM to oblivion.
If anyone should not exist, it's PMs.
I kid a little, I worked with some very good PMs when we did client work who made my life much easier. Working on a SaaS though, I find them generally less than useful.
Not anyplace that cares about quality.
Where I work it is normally easier to fix things than to deprioritize them to oblivion. I can fix an issue myself, but setting priority puts a dozen people in a meeting.
Depends on what happens in that case, no?
If it messes up the UI until you refresh, yeah, I understand deprioritizing that.
If it causes catastrophic data corruption or leaks admin credentials, any sane PM would want that fixed ASAP.
Should Accounting exist?
Should Legal exist?
Should Facilities exist?
Surely your average employee could own each of these functions.
> Have engineering own quality
The moment that happens, it will either be re-outsourced to QA anyway, or quality will become a question of licensing and bonding of professional engineers.
There's a basic loop that goes on regardless:
1. define a requirement
2. implement the requirement
3. verify that the requirement was implemented
TDD was built around the idea that 1 and 3 could be unified in automated testing, and that's certainly true for a large part of it. But QA as a discrete role needs to exist because, beyond verifying that 2 was done correctly, they expose higher level bugs in 1, the requirements themselves.
It's virtually impossible to define requirements completely and without second order interactions that cause problems. QA is as effective at exposing assumptions and handwaving by the people who created the wireframes or the visual design as by the developers failing to test their own work.
And ideally, this leads to the cycle being virtuous: higher quality starts at the requirements phase, not the implementation phase. It's not just that QA should work closely with the engineers--the engineers need to work closely with UX and VD to ensure they fully understand the requirements. The incentives are aligned among all parties.
I think social media companies don't need that
Enterprise software companies definitely need it. Customers ask: was this tested? Where is the test report?
It’s not even optional as soon as you’re getting close to any type of standards or compliance framework like SOC2 and the likes.
As someone who holds a degree in Medicine besides his degree in Computer Science and works in medical devices, can I just please emphasise that QA should exist in this industry and is well deserved?
Sorry. No 'blue screens' or stack traces in my pacemaker or insulin pump, please.
> No 'blue screens' ... in my pacemaker or insulin pump
they have screens?!? /s
"Before I weigh in further, I’d like to make sure you’re familiar with the testing pyramid."
The testing pyramid is SWE kool-aid par excellence. Someone wrote a plausible-sounding blog post about it many years ago, and then people started regurgitating it without any empirical evidence behind it.
Many of us have realised that you need a "testing hourglass", not a "testing pyramid". Unit tests are universally considered useful, there's not much debate about it (also they're cheap). Integration tests are expensive and, in most cases, have very limited use. UI and API tests are extremely useful because they are testing whether the system behaves as we expect it to behave.
E.g. for a specific system of ours we have ~30k unit tests and ~10k UI/API tests. UI and API tests are effectively the living, valid documentation of how the system behaves. Those tests are what prevent the system becoming 'legacy'. UI and API tests are what enable large-scale refactors without breaking stuff.
Isolated QA should not exist because anything a QA engineer can do manually can be automated.
The testing pyramid comes from a time of desktop apps with no API, when UI tests were extremely expensive. It made 100% sense in that context; it never did in other contexts. Despite its omnipresence, it has not made sense for the vast majority of us in the past 25 years.
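The "API tests as living documentation" idea can be shown with a tiny in-process sketch (the todo service and `handle` function here are invented for illustration): the test exercises only the request/response boundary, so it survives any refactor of the internals.

```python
import json

# A deliberately tiny request handler standing in for a real HTTP API.
_todos = []

def handle(method, path, body=None):
    if method == "POST" and path == "/todos":
        item = {"id": len(_todos) + 1, "title": json.loads(body)["title"]}
        _todos.append(item)
        return 201, item
    if method == "GET" and path == "/todos":
        return 200, _todos
    return 404, None

# API-level test: reads like documentation of observable behaviour and
# keeps passing no matter how the storage or helpers are rewritten.
status, item = handle("POST", "/todos", json.dumps({"title": "ship it"}))
assert status == 201 and item["id"] == 1

status, items = handle("GET", "/todos")
assert status == 200 and items[0]["title"] == "ship it"
```

A unit test against `_todos` or an internal helper would break on the next refactor; the boundary test above is what lets large-scale refactors happen without breaking stuff.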
everyone gets the pyramid wrong in my opinion.
the vertical axis is not test type. It is whether you would run the test. At the bottom are deterministic, fast tests for something completely unrelated to what you are working on, but they are so easy and fast you run them anyway 'just in case'. As you move up you get tests that you more and more want to avoid running: tests that take a long time, tests that randomly fail when nothing is wrong, tests that need some setup, tests that need some expensive license (I can't think of more right now, but I'm sure there are).
You want to drive everything down as far as possible, but there is value in tests that sit higher, so you won't get rid of them. Just remember: as soon as you cross the 'I would run this test but I'm skipping it for now because it is annoying' line, you need a separate process to ensure the test is eventually run. You are trading off speed now against the risk that the test will find something, and it is 10x harder to fix when you get there: when a test runs all the time, you know what caused the failure and can go right there, while later you have done several things and forgotten the details. 10x is an estimate; depending where in your process you put it, it could be 100 or even 1000 times harder.
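That "would you run it" axis can be made concrete by tagging tests with a cost and running only the cheap ones by default. This is a toy harness, not any real framework's API:

```python
# Toy test harness: tag each test with a cost; by default only the
# fast, deterministic ones run, and anything tagged "slow" is opt-in --
# which is exactly the point where a separate process must guarantee
# the slow tests still get run eventually.

TESTS = []

def test(cost="fast"):
    def register(fn):
        TESTS.append((cost, fn))
        return fn
    return register

@test()                        # fast + deterministic: run 'just in case'
def addition_works():
    assert 1 + 1 == 2

@test(cost="slow")             # expensive/flaky: skipped by default
def big_end_to_end():
    pass                       # imagine minutes of setup here

def run(include_slow=False):
    ran = []
    for cost, fn in TESTS:
        if include_slow or cost == "fast":
            fn()
            ran.append(fn.__name__)
    return ran

print(run())                   # ['addition_works']
print(run(include_slow=True))  # ['addition_works', 'big_end_to_end']
```

Real test runners offer the same idea via markers or tags (e.g. pytest's `-m` selection); the danger the comment names is that nothing in the default path forces the opt-in tier to ever run.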
I’ve never encountered an initiative to “shift left” that wasn’t directly motivated by clunky, slow, unreliable and unmaintainable E2E tests. Failing earlier, especially pre-deployment, with targeted integration and contract testing is fabulous but it can’t replace rubber hitting road.
I’ve had quite a bit of success in helping my dev teams to own quality, devising and writing their own test cases, maintaining test pipelines, running bug hunts, etc. 90% of this can be attributed to treating developers as my customer, for whom I build software products which allow them to be more productive.
> because anything a QA engineer can do manually can be automated.
Looks like you never worked with a decent QA team and do not understand the full scope of quality management. They have plenty of creative tasks not aligned with other roles.
> Isolated QA should not exist because anything a QA engineer can do manually can be automated.
Well, sort of maybe, but it's not always economical. For a normal web app - yeah I guess. Depends on the complexity of the software and the environment / inputs it deals with.
And then there's explorative testing, where I always found a good QA invaluable. Sure, you can also automate that to some degree. But someone who knows the software well and tries to find ways to get it to behave in unexpected ways, also valuable.
I would agree that solid development practices can handle 80% of the overall QA though, mainly regression testing. But those last 20%, well I think about those differently.
> And then there's explorative testing, where I always found a good QA invaluable.
Yes, I agree. We do this too. Findings are followed by a post-mortem-like process:
- fix the problem
- produce an automated test
- evaluate why the feature wasn't autotested properly
> it's not always economical. For a normal web app - yeah I guess
What do you define as "normal"? I can't think of anything harder to test than a web app.
Even a seemingly trivial static HTML site with some CSS on it will already have inconsistencies across every browser and device. Even if you fix all of that (unlikely), you still haven't done your WCAG compliance, SEO, etc.
The web is probably the best example case for needing a QA team.
Of course, anyone would agree that if wishes were fishes, QAs should not exist. We would all use agile with cross-functional teams. Every single team member can do any work that may be needed. All team members can take time off any time they need to because we have full coverage and the world is a beautiful place.
Of course, none of this is true in the real world.
For example, just last week a QA essentially brought down our web application on the staging environment, reliably reproducible with a sequence of four clicks. Follow the sequence with roughly the right timing and boom, exception.
Should this have been caught before a single line of code was written? Yes. The reality is that it was not. Should it have been caught by some unit test? Integration test? End-to-end test? Code review? I'd argue that as we barrel into a world of AI slop, we need to slow down more. We need QA more than ever.
Hard to believe people are asking this question in 2026.
Quality is something that takes dedicated focus and lots of work. Therefore it’s a job, not an afterthought or latest priority for someone whose primary focus is not quality.
Like the majority of slop articles, the author is advertising his services.
So why would you outsource it to another team that isn't doing the actual work?
Team doing the work should do QA so they only produce quality.
But on the other hand, those people often can't be fully trusted, so you need a separate team that checks again. They might have misunderstood something and thus produced an incorrect system, or there is some other fault in their thought process, and the system behaves differently in a more realistic scenario.
QA is actual work. Building the thing is actual work. Each is not "the" work, which is the task of the whole company.
QA perspective and focus is just different from the one of the team building the thing. It's precisely because of their detached perspective that they can do their work properly.
In the age of AI of course not, AI is your QA
I think the unstated (but highly prevalent) view among executives in large swathes of this industry is that they don't really care to spend any time or money on user testing or quality assurance, and if this role exists at companies it is usually under-compensated and straddles both these functions to have some party be accountable. It is sometimes a check on product teams and vision-driven executive teams who don't prototype/test their ideas (or empower their teams to do so), and sometimes a check on engineers and engineering managers who don't want to be accountable to gaps in quality.
If you care about the consistency of your output and about what you're selling to your customers, you don't have much of a choice but to control what you are shipping.
Not even mentioning the potential regulatory/market and legal consequences if you don't.
Developers are liars. Why would I trust them to test their software?
How many times has it not even compiled on their machine? And compiling might be the absolute bare minimum...
If your product is used by humans, then it needs to be tested by humans - this cannot be automated. Those humans can be your QA people, or your customers. Perhaps your customers are happy to be testers, perhaps not.
Unit tests are very expensive and return little value. Conversely, a (manual?) 'smoke test' is very cheap and returns great value - the first thing you do when updating a server for example is to check it still responds (and nothing has gone wrong in the deployment process), takes 2 seconds to do, prevents highly embarrassing downtime due to a misconfigured docker pull or whatever.
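The smoke test described above is only a few lines. A minimal sketch (the `/health` endpoint and URL are assumptions for illustration):

```python
import urllib.request

def smoke_test(url, timeout=5):
    """Cheapest possible post-deploy check: does the server answer at all?

    Catches misconfigured docker pulls, dead containers, bad DNS, etc.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:          # URLError/HTTPError are OSError subclasses
        return False

# Typical use right after updating a server (hypothetical URL):
#   if not smoke_test("https://myapp.example/health"):
#       roll_back_the_deploy()
```

Two seconds of work per deploy, and it guards against exactly the embarrassing class of outage the comment mentions.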
> Unit tests are very expensive and return little value
Why are unit tests very expensive? This goes against everything I know.
Unit tests very roughly double the amount of effort required to make any meaningful change to your codebase. They also require maintenance, same as ordinary code, but the customer does not care in the slightest whether or not they pass. On the other hand, they can only really tell you about low-level bugs that you already expected; they cannot surface system-level bugs, the actual hard bugs that cause problems for you and your customers.
Then there is the danger of thinking that green=all good, an example of 'automation bias' where we learn to trust the automation even as things go wrong.
As makers, it is also tempting to believe that [all] problems can be solved by making something (i.e. code), but actually many problems are not of that nature, and cannot be solved in that way.
Thank you, that makes sense. What I meant was that today all unit tests are basically written by an AI so the "cost" is almost zero. Am I wrong?
Automated tests are no longer expensive to write and nowadays less expensive to execute.
Not really, but distill their knowledge into an OpenClaw.
Dumbest thing I ever read