
Trials avoid high risk patients and underestimate drug harms

It's understandable that unusual patients are seen as confounding variables in any study, especially studies with small numbers of patients. Though I haven't read beyond the abstract, it also makes sense that larger studies (phase 3 or 4) should not exclude such patients, but could perhaps report results in more than one way -- once for only those with the primary malady, and again including those with common confounding conditions.

Introducing too many secondary conditions into any trial is an invitation for the drug to fail on safety and/or efficacy, since it raises the demands on both. And as we all know, a huge fraction of drugs already fail in phase 3. Raising the bar further, without great care, will serve neither patients nor business.

10 hours ago | randcraw

Having been an "investigator" in a few phase 3 and 4 trials, I can confirm that all actions involving subjects must strictly follow the protocols governing the conduct of the trial. It is extremely intricate and labor-intensive work, and even the smallest violations of the rules can invalidate part of the trial, or even the entire thing.

Most trials have long lists of excluded conditions. As you say, one reason is reducing variability among subjects so effects of the treatment can be determined.

This is especially true when the effects of a new treatment are subtle but still quite important. If subjects with serious comorbidities are included, treatment effects can be obscured by these conditions. For example, if a subject is hospitalized, was that because of the treatment, another condition, or some interaction of the condition and treatment?

Initial phase 3 studies necessarily have to strive for as "pure" a study population as possible. Later phase 3/4 studies could in principle cautiously add more severe cases and those with specific comorbidities. However, there's a sharp limit to how many variations can be systematically studied, due to the intrinsic cost and complexity.

The reality is that the burden of sorting out the use of treatments in real-world patients falls to clinicians. It's worth noting that the level of support for clinicians reporting their observations has, if anything, declined over the decades. IOW, valuable information is lost in the increasingly bureaucratic and compartmentalized healthcare systems that now dominate delivery of services.

9 hours ago | jrapdx3

This could at least be done after release, but I don't think the incentives are there, and collecting the data is incredibly difficult.

6 hours ago | harha

It seems like the current situation is doing a disservice to "unusual" patients (who may actually make up the majority of patients).

2 hours ago | fwip

I can't find the exact MDMA study for PTSD, but after reading its participant rejection criteria, it seemed like few could qualify.

I saw a new procedure available in Mexico for $8k: psychedelic treatment with ibogaine. It's still Schedule I in the USA, like MDMA.

It looks like there have been a few MDMA trials for PTSD, even though the FDA denied more widespread testing.

https://www.science.org/content/article/fda-rejected-mdma-as...

33 minutes ago | instagib

Abstract: "The FDA does not formally regulate representativeness, but if trials under-enroll vulnerable patients, the resulting evidence may understate harm from drugs. We study the relationship between trial participation and the risk of drug-induced adverse events for cancer medications using data from the Surveillance, Epidemiology, and End Results Program linked to Medicare claims. Initiating treatment with a cancer drug increases the risk of hospitalization due to serious adverse events (SAE) by 2 percentage points per month (a 250% increase). Heterogeneity in SAE treatment effects can be predicted by patient's comorbidities, frailty, and demographic characteristics. Patients at the 90th percentile of the risk distribution experience a 2.5 times greater increase in SAEs after treatment initiation compared to patients at the 10th percentile of the risk distribution yet are 4 times less likely to enroll in trials. The predicted SAE treatment effects for the drug's target population are 15% larger than the predicted SAE treatment effects for trial enrollees, corresponding to 1 additional induced SAE hospitalization for every 25 patients per year of treatment. We formalize conditions under which regulating representativeness of SAE risk will lead to more externally valid trials, and we discuss how our results could inform regulatory requirements."
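
For intuition, here's a minimal reweighting sketch in Python (made-up shares and effect sizes, not the paper's data or method) showing how under-enrolling high-risk patients makes a trial-average SAE effect understate the effect in the drug's target population:

    # Made-up numbers: three risk groups, each with a share of the
    # target population, a share of trial enrollees, and an SAE
    # treatment effect (percentage points of hospitalization per month).
    risk_groups = [
        ("low risk",  0.5, 0.7, 1.0),
        ("mid risk",  0.3, 0.2, 2.0),
        ("high risk", 0.2, 0.1, 4.0),
    ]

    population_effect = sum(pop * eff for _, pop, _, eff in risk_groups)
    trial_effect      = sum(tri * eff for _, _, tri, eff in risk_groups)

    print(f"trial-weighted SAE effect:      {trial_effect:.2f} pp/month")
    print(f"population-weighted SAE effect: {population_effect:.2f} pp/month")
    print(f"trial understates by:           {population_effect / trial_effect - 1:.0%}")

The paper's 15% figure is estimated from actual SEER-Medicare data; the toy numbers above only illustrate the reweighting mechanism behind it.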

12 hours ago | bikenaga

This seems like an odd criticism.

First off, it ignores the fact that if you include frail patients you'll confound the results of the trial. So there is a good reason for it.

Second, saying "the rate of SAEs is higher than the rate of treatment effect" is a bit silly considering these are cancer trials - without treatment there is a risk of death, so most people are willing to accept SAEs in order to get the treatment effect.

Third, saying “the sickest patients saw the highest increase in SAE” seems obvious? It’s exactly what you’d expect.

9 hours ago | refurb

First, ignoring frail patients means your trial isn't representative of the wider population, so it shouldn't be accepted for general use - only on people who were well-represented in the trial.

Second, you're ignoring the possibility of other treatment options. It isn't always the binary life-or-death you're making it, so SAEs do matter.

Third, a big part of trials is to discover and develop prevention methods for SAEs. Explicitly ignoring the people most likely to provide data valuable for the general population sounds like a pretty silly approach.

6 hours ago | crote

> Second, you're ignoring the possibility of other treatment options. It isn't always the binary life-or-death you're making it, so SAEs do matter.

A common reason for a drug (especially a cancer drug) going to trial is that other options have already failed. For example, CAR-T therapies are commonly trialed on R/R (relapsed/refractory) cohorts.

https://www.fda.gov/regulatory-information/search-fda-guidan...

> "In subjects who have early-stage disease and available therapies, the unknown benefits of first-in-human (FIH) CAR T cells may not justify the risks associated with the therapy."

6 hours ago | aydyn

> First, ignoring frail patients means your trial isn't representative of the wider population, so it shouldn't be accepted for general use - only on people who were well-represented in the trial.

Sure, but including frail outliers does not automatically mean you can generalize to the whole population. People can be frail for a wide variety of reasons. Only some of those reasons will matter for a given trial. That means the predictive power varies widely depending on which subpopulation you're looking at, and you'll never be able to enroll enough of some of the subgroups without specifically targeting them.

The results in the posted paper seem valid to me, but the conclusion seems incorrect. This seems like a paper that is restating some pretty universal statistical facts and then trying to use that to impose onerous regulations that can't and won't solve the problem. It will improve generalizability for a small fraction of the population, at a high cost.

> Second, you're ignoring the possibility of other treatment options. It isn't always the binary life-or-death you're making it, so SAEs do matter.

Of course they do. It's a good thing we have informed consent.

> Third, a big part of trials is to discover and develop prevention methods for SAEs. Explicitly ignoring the people most likely to provide data valuable for the general population sounds like a pretty silly approach.

If your primary claim is that data from non-frail people is not generalizable to frail people, then how can you claim that data from frail people is generalizable to non-frail people? If the trials for aspirin found that hemophiliacs should get blood clot promoting medications along with it, then should non-hemophiliacs also be taking those medications?

I'm thankful we can extract some amount of useful data from these trials without undue risk. It's always going to be a balancing act, and this article proposes putting a thumb on the scale that reduces the data without even solving the problem it's aiming at addressing.

3 hours ago | sfink

But you’re stating the obvious? It’s not like physicians don’t know trials are designed this way, and for good reasons.

Frail patients confound results. A drug may work great, but you'd never know, because your frail patients die for reasons unrelated to the drug (the toy sketch at the end of this comment makes that concrete).

Second is obvious as well. Doctors know there are treatment alternatives (with the same drawback to trial design).

And I already touched on your third point. The alternative to excluding frail patients is not being able to tell if the drug does anything. In many cases that means the drug isn’t approved.

Excluding frail patients has its drawbacks, but it has benefits as well. This paper acts like the benefits don’t exist.
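
To make the noise argument concrete, here's a toy sketch in Python (made-up rates, not from any real trial) of how deaths unrelated to the drug dilute the statistical signal for a fixed treatment effect:

    import math

    n_per_arm       = 200
    p_disease_death = 0.20   # control-arm deaths from the disease itself
    drug_effect     = 0.05   # absolute reduction in disease deaths from the drug

    def z_for_background(p_background):
        # crude z-statistic for the observed difference in death rates
        p_control = p_background + p_disease_death
        p_treated = p_background + p_disease_death - drug_effect
        se = math.sqrt(p_control * (1 - p_control) / n_per_arm
                       + p_treated * (1 - p_treated) / n_per_arm)
        return drug_effect / se

    for p_background in (0.0, 0.1, 0.3):
        print(f"unrelated background mortality {p_background:.0%}: z ~ {z_for_background(p_background):.2f}")

Same drug, same effect size, same sample size -- the signal just gets buried under events the drug has nothing to do with, so the trial has to be larger (or the effect bigger) to show anything.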

6 hours ago | refurb

I've personally been excluded from several depression clinical trials for having suicidal ideations, which makes me wonder just what kind of "depression" they are testing drugs on.

6 hours ago | RobotToaster

There are a few broad reasons this can happen. One possibility is that they want to know if the treatment causes suicidal ideation, and the effect is often small enough that people more likely to report those symptoms independent of the treatment confound the result. Another is that they don't want to have to deal with the safety protocols that come with screening in participants who have reported any history of suicidality. Another still is that higher likelihood of an active mental health crisis means that it's harder for study coordinators to determine if participants have provided informed consent.

Sometimes studies are specifically for treatment-resistant depression, and I expect those studies are more likely to screen in participants with a history of suicidality, so I would recommend keeping an eye out for those if you would like to participate in clinical trials.

an hour ago | flurie

Be strong, brother, there is hope. Antidepressants can be really hard to administer; they exclude particularly vulnerable people from trials because those people need to be protected the most.

6 hours ago | Lucasoato

The type of depression that makes the sufferer lie about not having suicidal ideations

5 hours ago | khannn

Tangentially related, but I was surprised to learn about the lax attitude towards placebos in trials. Classes of drugs have expected side effects, so it's common to use medications with similar effects as placebos. Last I heard, there is no requirement or expectation to document placebos used, and they are often not mentioned in publications.

9 hours ago | ortusdux

Those are documented, but not necessarily in the paper. You can find the info at clinicaltrials.gov. Check out this current trial of a breast cancer treatment by Merck Sharp & Dohme LLC, for example. For the control arm, they are allowing doctor's choice from a set of alternatives. Assuming the doctors are selecting control treatments to improve the chance of survival, this test is comparing the new treatment to "the best known treatment for this specific cancer".

https://clinicaltrials.gov/study/NCT07060807#study-plan

5 minutes ago | youainti

> Classes of drugs have expected side effects, so it's common to use medications with similar effects as placebos.

This would be called an "active placebo" and would certainly be documented.

It's common to find controlled trials against an existing drug to demonstrate that the new drug performs better in some way, or at least is equivalent with some benefit like lower toxicity or side effects. In this case, using an active comparison against another drug makes sense.

You wouldn't see a placebo-controlled trial that used an active drug but called it placebo, though. Not only would that never get past the study review, it wouldn't even benefit the study operator because it would make their medication look worse.

In some cases, if the active drug produces a very noticeable effect (e.g. psychedelics) then study operators might try to introduce another compound that produces some effect so patients in both arms feel like they've taken something. Niacin was used in the past because it produces a flushing sensation, although it's not perfect. This is all clearly documented, though.

8 hours ago | Aurornis

You were surprised to learn this because it’s not true.

8 hours ago | padjo

This covers the trials not being fully representative, but largely neglects why that is the case.

The paper defines a population "at high risk of drug-induced serious adverse events", which presumably means they're also the most likely people to be harmed or killed by the drug trial itself.

9 hours ago | nitwit005

A lot of companies essentially cherry-pick healthy patients and write insane inclusion/exclusion criteria to rule out anyone except the ideal participant, which is why more and more research sites are negotiating up-front payment for pre-screening and higher screen-fail reimbursement percentages into their study budgets.

Study design is sometimes optimized so only the "best", most enticing participants will actually be eligible; I've seen randomization rates as low as 2% - 12%, though 50% is more typical. Some studies also have 100 to 150 day screening periods, both a limited AND a full screening period, etc.

Overly restrictive inclusion/exclusion criteria targeting super narrowly defined ideal populations hinder enrollment, place a large pre-screening burden on sites, and end with trial results that fail to reflect real-world demographics.
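
To put those randomization rates in perspective, here's a quick back-of-the-envelope calculation in Python (hypothetical numbers) of the pre-screening burden a single site carries:

    # Hypothetical: patients a site must pre-screen to hit a fixed
    # enrollment target at different screen-to-randomization rates.
    enrollment_target = 30   # patients one site needs to randomize

    for rate in (0.50, 0.12, 0.02):
        screened = enrollment_target / rate
        print(f"randomization rate {rate:.0%}: ~{screened:.0f} screened "
              f"for {enrollment_target} enrolled")

Which is exactly why sites that aren't reimbursed for screen failures end up eating that cost.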

3 hours ago | randycupertino

Also, if they're known to be at such a high risk of adverse events, would they even be given the treatments, trial or not?

7 hours ago | Natsu

I would imagine it's to make life easier for the statisticians downstream.

3 hours ago | noipv4

This was a plot in an early season of ER.

9 hours ago | unethical_ban

This problem is actually even worse than the article identifies, because broad definitions of what counts as a "risk" result in broad exclusions.

The most pernicious of these problems is that women--yes, more than half the earth's population--are considered a high-risk group because researchers fear menstrual cycles will affect test results. Until 1993 policy changes, excluding women from trials was the norm. Many trials have not been re-done to include women, and the policies don't cover animal trials, so many rat studies, for example, still do not include female rats--a practice which makes later human trials more dangerous for (human) female participants.

[1] Sort of one citation: https://www.aamc.org/news/why-we-know-so-little-about-women-... There's more than this--I wrote a paper about this in college, but I don't have access to jstor now, so I'm not sure I could find the citations any more.

6 hours ago | kerkeslager

More generally, whenever you read the percentage of patients noted as having a particular side effect from a medicine, the real percentage is much higher.

9 hours ago | OutOfHere

> whenever you read the percentage of patients that are noted as having a particular side effect from a medicine, the real percentage is much higher.

The patients self-report their own side effects, then the numbers go into the paper.

Are you suggesting the study operators are tampering with numbers before publishing?

8 hours ago | Aurornis

> Are you suggesting the study operators are tampering with numbers before publishing?

No, but did you not read the posted article? Firstly, trials don't select participants without bias. Secondly, many trials are not long enough for the side effects to manifest. Thirdly, I have enough real-world experience.

7 hours ago | OutOfHere

Real world experience doesn't count on HN health articles. If it wasn't documented by a researcher paid via funding from his industry leaders, or a government official trying to fast track his hiring in the public sector for $800k a year, it basically didn't happen.

7 hours ago | throwawaylaptop

And this just goes to reinforce the beliefs of those who are skeptical of medical research. "Trust the science" is all well and good in theory, except when the scientists are telling you a selective, cherry-picked story.

9 hours ago | SoftTalker

Strange how that line of thinking always winds up in places like "vaccines are bad" or "ivermectin cures COVID".

9 hours ago | venturecruelty

It correctly observes that experts are not always right, and often incorrectly responds by turning to loud, persuasive quackery.

7 hours ago | vkou

No relation (except in your winding mind).

7 hours ago | OutOfHere

See also: women.