I've spent the past six weeks visiting community spaces here in the UK.

I didn't go to present; I went only to listen.

When you're presenting, you're managing a room. When you're listening, the room manages you. It tells you things you didn't know to look for. It corrects assumptions you didn't know you'd made. It hands you frameworks that no research paper would have given you.

Six weeks in. Here is what I've actually found.

Community organisations are not disengaged from clinical research. They are disengaged from us. From the way we show up, what we ask for, and most consistently, what we do with what we're given.

One organisation I sat with had received twelve separate research requests in a single month. Twelve. Their staff and the people they support weren't opposed to research. They were exhausted by it. Bombarded by people who needed something from them, extracted it, and disappeared.

We don't have an engagement problem. We have a follow-through problem.

The question communities ask, every time, without exception, is not "Is this safe?" It is not "What do I get?" It is: what happens to what we tell you?

That question is decades of being ignored, compressed into seven words.

The most useful thing anyone said to me across six weeks of conversations came from a community leader who didn't dress it up. He said: "Don't be the uncle that only comes for Christmas."

It landed because it was exact. Engage before the agenda is set, not after it's locked. Stay connected between projects. When you find something, share it back in a format the community can actually use. Not a journal article behind a paywall. A summary. Plain language. Something that closes the loop.

None of this is complicated. The evidence on what communities actually want is consistent and has been for years. They want to be involved before the questions are written. They want to be paid properly from the start, not with one-off tokenistic payments that signal exactly what the organisation thinks their time is worth. They want the findings shared back. And they want to be treated as knowledge holders, not recruitment channels.

Community organisations hold decades of contextual knowledge that no research team can replicate by analysing EHR data. They know how their communities experience illness, navigate systems, make decisions under pressure. That knowledge should be designed into studies. Instead, it usually gets tapped only when recruitment is already failing.

There is no pipeline from what communities share to the systems where decisions actually get made. The knowledge gets collected and filed, the decision gets made exactly as it would have been without it, and the community learns, correctly, that their input didn't matter.

Unwritten Health is built to close that gap. To create the infrastructure that turns what communities know into something that can sit alongside clinical and operational data and actually influence a protocol, a site selection decision, a market access submission.

That is what it would mean to treat community knowledge as decision-grade. And it is the only basis on which the follow-through problem gets solved.

I'll be in Boston later this month… shall we meet?

From March 19th to 26th I'll be in Boston for Patients as Partners, one of the most important gatherings in the patient engagement and clinical research space.

If you're attending and want to connect, I'd genuinely like to meet. Reply to this newsletter or drop me a message on LinkedIn. I'll be there all week and I'm keeping the diary open for conversations that matter.

An AI called me Ashish Patel this week. I've never used that name.

I didn't put "Patel" anywhere. Not in my profile. Not in my writing. Not in any document I've published.

The platform pattern-matched. South Asian first name. Most statistically common South Asian surname. Close enough.

Except it's not my name.

This is what algorithmic assumption looks like in practice. Not malicious. Not even particularly unusual. A system filling a gap with a demographic proxy instead of actual data about the person in front of it.

I want to use it as a lens, because it is the same mechanism operating inside clinical protocol design. At a different scale. With much higher stakes than getting a surname wrong.

How does a protocol design team decide what visit schedules are feasible for working-class patients? How do eligibility criteria account for comorbidity profiles in minority ethnic communities? What lived experience data sat alongside the clinical data when those decisions were made?

In most cases, the honest answer is: none. The gap gets filled with assumptions. Demographic proxies. Historical precedent. Whatever the team believes is probably true about patients they've never directly studied.

The result is a protocol that looks complete. Passes internal review. Gets signed off. And quietly excludes entire populations before a single participant is recruited. Not through malice, but through the same pattern-matching logic that gave me someone else's surname.

The fix is not awareness. It is not more careful protocol reviewers. It is structured data, gathered directly from the communities in question before the design decisions are locked, that closes the gap assumptions currently fill.

You cannot intention your way out of a data problem.

The MHRA mandate is coming. Most sponsors aren't ready.

I've had this conversation enough times now to say it plainly.

The MHRA's mandatory Inclusion and Diversity Plans come into force in 2026. Most clinical teams know this. Most have not materially changed how they approach protocol design as a result.

The plan is still: recruit first. Fix diversity problems if recruitment misses. Retrofit the equity narrative for the submission.

That sequence is about to get costly.

Because the question the MHRA will ask is not "Did you try to recruit diverse patients?" It is "Did your protocol design give them a realistic chance of participating?"

Those are very different questions. The first can be answered with effort. The second requires evidence gathered before the protocol is locked, not assembled retrospectively when the submission is being written.

The organisations building this capability now will not be starting from scratch when the mandate lands.

If you want to understand exactly where equity risk enters across the development lifecycle and what it costs when it's missed, the white paper covers this in full.

What a room full of patients taught me about top-down vs bottom-up

Last week I promised I'd tell you what the room at Make 2nds Count taught me.

I was there to give a keynote at the Secondary Breast Cancer Patient Summit in Liverpool, alongside Pfizer UK's Medical Affairs Oncology Team. Secondary breast cancer is metastatic, stage four, incurable. The people in that room live with it every day.

Presenting to a patient audience is not the same as presenting to industry. When you're in front of a pharma team or a regulator, the job is to shift thinking. When you're in front of people who are living the thing you're talking about, the job is not to educate. It is to empower. That requires a different kind of preparation. And a different kind of presence.

I finished the keynote and a hand went up.

"Does the bottom-up approach actually work? Isn't top-down more effective?"

It's a fair question. And I want to give it a proper answer here, because the version I gave from the stage was shorter than the argument deserves.

Top-down is already happening. The MHRA's mandatory Inclusion and Diversity Plans. The FDA's Diversity Action Plans. HTA bodies asking harder questions about whether trial populations reflect the patients who will actually use the medicine. That pressure is real and it matters. Regulatory mandates have moved industries before and they will move this one.

But here is the thing nobody says clearly enough: a mandate tells you where to get to. It does not tell you how to get there. And without the infrastructure to answer the how, sponsors will do what organisations always do when faced with a compliance target they don't have a genuine mechanism to meet.

They will game the metric.

Select sites in postcodes with diverse demographics. Loosen eligibility criteria in the final weeks before recruitment closes. Point to a patient advisory board as evidence of community engagement. Produce a diversity plan that satisfies the submission requirement without changing a single design decision.

The result is trials that hit the diversity number on paper while reproducing exactly the same exclusions underneath it. Communities get counted. They don't get heard.

This is not a hypothetical. It is the predictable consequence of applying top-down pressure to a system that has no bottom-up infrastructure to absorb it. We have seen it play out before. The NIH Revitalization Act of 1993 mandated the inclusion of women and minorities in federally funded clinical research. Thirty years later, representation gaps in trial populations remain substantial and well-documented. The mandate moved. The infrastructure didn't follow. And so the gap persisted, in more compliant-looking packaging.

So when I say top-down sets the standard and lived experience data makes it achievable, I am not making a feel-good argument about the importance of listening to communities. I am making a structural argument about what has to exist before regulatory pressure produces genuine change rather than sophisticated workarounds.

The bottom-up infrastructure has to come first. Or at minimum, it has to be built in parallel, fast enough that when the mandate lands, sponsors have a legitimate mechanism to meet it rather than a compliance strategy to approximate it.

There is also a harder version of this argument that I think deserves to be said.

Top-down pressure, applied without bottom-up infrastructure, can actually damage community trust. When sponsors respond to diversity mandates by treating participation as a numbers target, the dynamic it creates is exactly the extractive one communities are already exhausted by. We need X percent of this ethnicity. Go and find them. The community becomes a recruitment problem to be solved rather than a knowledge source to be engaged. And every time that happens, the next organisation that walks through the door with genuine intentions starts from a lower baseline of trust than the one before.

Mandates, in other words, can make the engagement problem worse if the infrastructure isn't there to back them up.

This is what I was trying to say in that room. Top-down without bottom-up is pressure with nowhere to land. Bottom-up without top-down is evidence with no obligation behind it. Both halves have to be present, and they have to be sequenced correctly, or the gap this industry has been trying to close for thirty years will simply find new ways to persist.

Ashish Rishi - Be Seen, Heard and Counted.pdf


What stayed with me after that session wasn't the Q&A. It was something quieter. Presenting to people who are living the gap you're trying to close is a reminder that this is not abstract. It is not a policy argument or a commercial proposition. It is someone's life. That is worth holding onto on the days when the work is slow.

Thanks for reading. This newsletter exists because I believe the right framing, in the right hands, changes decisions. If it did that for you this week, even a little, that's enough.

Ashish.

This week in data

50% — Europe's share of global clinical trials has halved over the last decade. The UK Life Sciences Sector Plan targets doubling commercial trial participants by 2026, then doubling again by 2029.

- Office for Life Sciences. Life Sciences Sector Plan. UK Government, 2021.

As the US stalls on FDA diversity plan enforcement, pharma companies conducting global trials are looking to Europe for diversity leadership. The UK has the population, the NHS infrastructure, and the regulatory intent. What has not existed is a commercial platform that turns that into decision-grade lived experience data for sponsors.
