Tomorrow I walk into a room that's been having the wrong conversation for years.

Not about whether patient engagement matters. That debate is over.

The conversation that keeps failing is about what engagement actually requires. Structurally. Before the design is locked, before the sites are contracted, before the budget is set.

Tomorrow I'm in Boston for Patients as Partners. Three days. The room will be full of people who care deeply about this problem. And the argument I'm walking in with is one I want to say clearly before I get there, so I can be held to it.

Patient advisory boards that cannot influence protocol design are not patient engagement. They are consultation with a better name. Most clinical development programmes have built a lot of the latter while calling it the former. And the regulatory environment is about to start asking harder questions about which is which.

If you're at Patients as Partners this week and you want to have that conversation in person, find me. I'll be there from tomorrow through to Thursday. Reply to this email or message me on LinkedIn.

The conversation the industry keeps having. And the one it keeps avoiding.

Here is the version of patient engagement that gets presented at conferences.

Sponsor identifies underserved population. Sponsor convenes patient advisory board. Patients are consulted. Insights are captured. Protocol is designed.

It sounds right. It has the right components. And in most cases, the sequence is almost exactly backwards.

By the time patients are sitting around a boardroom table reviewing materials, the protocol architecture is already established. The eligibility criteria have been drafted. The visit schedule is set. The site locations are decided. What's being reviewed is not a design. It is a finished document that needs a sign-off.

That is not engagement. That is endorsement. And communities, particularly the communities that have the most at stake in clinical research, have decades of practice telling the difference.

The question nobody wants to answer directly is why it keeps happening this way. And the honest answer is not cynicism. It is sequencing. Community engagement is treated as a downstream activity, something that happens after the scientific and operational decisions are made, rather than an upstream input that shapes them.

That sequencing is the problem. And it cannot be fixed by running better advisory boards. It requires different infrastructure.

Why the most important shift is happening at portfolio level.

Here is something I've been watching carefully over the last twelve months.

The clinical development leaders who are actually moving the dial on trial diversity are not making inclusion decisions trial by trial. They are building capability at portfolio level. Upstream infrastructure decisions that make representation a development-wide standard rather than a per-protocol problem to be solved reactively.

That shift matters because trial-by-trial decisions are always made under pressure. Timeline pressure. Budget pressure. The path of least resistance when a single protocol is being designed is to use the populations your sites already know how to reach, your eligibility criteria already include, and your visit schedule is already designed to accommodate.

The teams breaking that pattern are not doing it by making better individual decisions under the same pressure. They are changing the conditions under which decisions get made. Building the community relationships, the data infrastructure, and the internal capability before any specific protocol is on the table.

That is a different kind of investment. It requires defending spend that does not map neatly to a single trial budget. But the organisations that are making it are not starting from scratch every time a new protocol needs a diversity plan. They already have the architecture.

That is the conversation I am most interested in having in Boston this week.

If you're building the case internally, the white paper will help

Most of the people reading this already believe the argument. The barrier is not conviction. It is making the case in a room where the person across the table wants to know what it costs, what the timeline looks like, and what happens to the next trial if you build this now.

The white paper was written for exactly that conversation. It covers where equity risk enters the development lifecycle, what it costs when it's missed, and what decision-grade lived experience data looks like in practice. Practical, not theoretical. Built to be shared internally.

Something new on the website worth reading

First, a confession: the Unwritten Health website was written, built and designed by me, so I know it has glitches. Bear with me as I fix them… and if you spot anything, please do let me know.

This week we published the first in a new series of blogs on the Unwritten Health website, titled Inclusion as Clinical Infrastructure: How to Build a Data Layer That De-Risks Trial Design.

The argument it makes is one I've been building toward in this newsletter for weeks. But the blog goes somewhere the newsletter hasn't yet: it gets specific about where in the development lifecycle inclusion infrastructure has to show up to actually change outcomes. Not as a principle. As a workflow.

Four inflection points in particular. Pre-protocol concept stage, when the trial's basic architecture is still flexible. Protocol development itself, where eligibility criteria get written with or without a health equity lens. Feasibility and site selection, where the right postcode on a spreadsheet is not the same as a site that communities actually trust. And recruitment and retention planning, where differential dropout can quietly undo the diversity you spent months trying to achieve.

There's also a worked example that I think is the most practically useful thing I've written on this. A Phase II heart failure trial. An eGFR exclusion criterion that looks clinically neutral. And what happens when you layer in the epidemiology of who actually carries the burden of heart failure in the UK and US.

The punchline: the risk was visible before the protocol was locked. It just required a data layer that most teams aren't collecting.
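For readers who want to see what that kind of data-layer check looks like mechanically, here is a minimal sketch. Every number in it is a made-up placeholder, not real epidemiology: the cutoff, the group names, and the exclusion shares are all illustrative assumptions, and the real analysis in the blog's worked example is richer than this. The point is the shape of the check, not the figures.

```python
# Illustrative sketch only: all figures below are hypothetical placeholders,
# not real clinical or epidemiological data.

EGFR_CUTOFF = 30  # hypothetical exclusion criterion: eGFR below this is excluded

# Hypothetical share of each group's heart-failure population that would
# fall below the cutoff, i.e. be screened out by the criterion.
excluded_share_below_cutoff = {
    "group_a": 0.08,
    "group_b": 0.19,  # illustrative higher comorbid kidney-disease burden
}

def exclusion_risk_flags(shares, ratio_threshold=1.5):
    """Flag groups whose exclusion rate exceeds the least-affected group's
    by more than ratio_threshold. Returns a dict of group -> ratio."""
    reference = min(shares.values())
    return {
        group: round(share / reference, 2)
        for group, share in shares.items()
        if share / reference > ratio_threshold
    }

flags = exclusion_risk_flags(excluded_share_below_cutoff)
print(flags)  # group_b is excluded roughly 2.4x as often as group_a
```

A check like this runs at protocol development, before the criterion is locked: if any group's ratio clears the threshold, the criterion gets a second look rather than a rubber stamp.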

More blogs are coming. Each one will go deeper on a specific point in the lifecycle. If you know someone who is currently in protocol design and would find this useful, send it directly.

This week in data

A Mumsnet report published this month, drawing on almost 100,000 posts from women over a decade, found that 50% of women believe they have been dismissed or not believed by an NHS professional because of their sex. 64% say they have been explicitly told their pain or symptoms were normal or in their head. 68% think the NHS does not take women's health concerns seriously.

The UK Health Secretary Wes Streeting acknowledged the findings, describing the pattern as "structural and deeply embedded" sexism in healthcare.

Mumsnet. Medical Misogyny: A Lifetime of Failure. March 2026. mumsnet.com/news/medical-misogyny

This is what a lived experience data gap looks like before it becomes a clinical research problem. Women are being excluded from accurate diagnosis not at the trial design stage but earlier, at first contact with the system. By the time a clinical trial is designed around a condition that disproportionately affects women, the evidence base it draws on has already been shaped by years of dismissal.

You cannot fix that with better recruitment. You fix it by treating lived experience as decision-grade data from the start.

Thanks for reading. This newsletter exists because I believe the right framing, in the right hands, changes decisions. If it did that for you this week, even a little, that's enough.

Ashish.
