Across the life science industry, many leaders rightly emphasize patients, users, and real‑world journeys when planning new solutions, strategic roadmaps, and investments. Journey mapping, personas, and user experience are now standard talking points whenever organizations design new programs or technologies. The deeper issue, however, is how organizations define the people those initiatives are meant to serve in the first place. When decisions are built around an “average” patient or user, critical differences between high‑stakes subgroups are flattened, and that is where outcomes and value quietly erode.[1][2]
This is not a theoretical concern. Whether the focus is a chronic metabolic therapy support program, an oncology pathway, a digital adherence tool, or a rare disease support model, executives are making sizable investments based on simplified views of “our patients” or “our users.” When those views gloss over meaningful variation in needs, behaviors, and constraints, even well-resourced initiatives can underperform for the very groups that matter most.[3][1]
Where “average patient” thinking shows up
In most life science companies, journey mapping and user experience are already recognized as important ingredients in solution design and implementation. Teams invest effort in visualizing touchpoints, mapping pain points, and aligning stakeholders around a shared picture of the patient or user. The difference between meaningful impact and suboptimal outcomes, however, usually lies in how rigorously these concepts are planned, executed, and applied across initiatives.
Two recurring situations illustrate this pattern:
First, digital adherence tools are often designed around a middle‑aged, smartphone‑comfortable “average” user, then rolled out in therapy areas where a substantial portion of patients are over 70, juggling multiple comorbidities, and less comfortable with mobile technology.
Second, patient support programs are built for a generic profile that blends commercially insured, urban patients with stable housing and strong caregiver support, then applied wholesale across rural populations, lower income groups, or individuals with unstable employment.
In each case, the “average” design fits many groups partially but fully fits none. The friction and misalignment that result are often misattributed to “engagement challenges,” “market realities,” or “patient behavior,” rather than to the upstream assumption that a single, simplified profile would be sufficient.[1][2]
How far outcomes can really be affected
The impact of “average” thinking is easiest to see in clinical and behavioral outcomes, but its ripple effects extend across the value chain.
Real‑world use already tends to underperform trial conditions. Analyses comparing trial efficacy with real‑world effectiveness repeatedly show lower adherence and persistence once therapies move into everyday practice, especially in complex chronic diseases. When programs and journeys are built around a flattened user profile, that gap widens. Seemingly small differences in how well an initiative fits specific subgroups can translate into several percentage points difference in adherence, persistence, or correct use. Aggregated across a large population, that divergence can materially affect the real‑world effectiveness and safety profile of a therapy and the ability to demonstrate value in real‑world evidence.[6][1]
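The arithmetic behind this aggregation is simple to sketch. The following example uses entirely hypothetical numbers (the subgroup shares and adherence rates are illustrative assumptions, not data from any trial or registry) to show how a modest adherence shortfall in one subgroup widens the trial‑to‑real‑world gap at the population level:

```python
# Illustrative only: subgroup shares and adherence rates below are
# hypothetical assumptions, not drawn from any real study.
trial_adherence = 0.85  # adherence observed under trial conditions

# Real-world subgroups: (share of population, adherence rate)
subgroups = {
    "fits the 'average' design": (0.60, 0.80),
    "older, less digitally ready": (0.40, 0.68),
}

# Population-level real-world adherence is the share-weighted mean.
real_world_adherence = sum(share * rate for share, rate in subgroups.values())

gap_points = (trial_adherence - real_world_adherence) * 100
print(f"Real-world adherence: {real_world_adherence:.1%}")  # 75.2%
print(f"Gap vs. trial: {gap_points:.1f} percentage points")  # 9.8
```

Even though the better‑served subgroup sits only five points below trial conditions, the weighted result is a near ten‑point population‑level gap, driven largely by the subgroup the design fits least.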
Patient and caregiver experience is similarly affected. Patients who feel unseen or poorly matched by the support around a therapy are less likely to stay engaged, less likely to trust the information they receive, and less likely to recommend the experience to others. For older patients, a digital‑first model can feel overwhelming or exclusionary. For younger, time‑pressed patients, a process that requires repeated phone calls and paperwork can be equally misaligned. When design choices do not reflect the lived realities of distinct groups, even strong clinical products can struggle to achieve their full potential.[4][7]
Equity and access are also at stake. Average‑based designs tend to align best with the most represented, resourced, or vocal segments. Other groups, including those with different socioeconomic contexts, language needs, or caregiving structures, may receive a version of the experience that is technically available but practically difficult to navigate. Over time, this can widen gaps in access and outcomes, even when equity is an explicit organizational priority.[8][2]
From a commercial and strategic perspective, underperformance in one or two large or high value subgroups can materially change the shape of a launch curve, the credibility of real‑world performance narratives, or the sustainability of value-based contracts. Companies often attribute shortfalls to competitive pressure or payer dynamics without fully examining whether the underlying design assumptions about “typical” patients and users were fit for purpose.[3][8]
Leaders rarely regret discovering that certain groups were quietly struggling with the way an initiative was designed. They do regret discovering it after a major program has launched and underperformed.
Why this is a leadership issue and not only a design choice
It can be tempting to see these questions as the domain of user experience specialists or individual project teams. In reality, they sit squarely in the remit of senior leadership and strategy owners.
When executives approve investments in new programs, digital tools, or evidence strategies, they are making capital allocation decisions with significant clinical, operational, and financial implications. Those decisions are often underpinned by a few simple statements about “our patients,” “our users,” or “our prescribers” that have not been fully stress tested against the diversity of real‑world populations.[2][3]
This pattern mirrors a broader theme that has emerged in digital and AI initiatives, where strategic roadmaps built on unvalidated assumptions about data readiness or governance can look compelling yet struggle in execution. The same logic applies here. Initiatives built on unexamined assumptions about an “average” user can appear sound on paper while failing to deliver when confronted with the complexity of real care settings, heterogeneous patient populations, and varying levels of digital readiness.[3]
The core issue is not the presence or absence of journey maps. It is the quality of thought, evidence, and challenge behind the picture of the people those journeys represent.
Organizational patterns that keep the risk invisible
Several recurring patterns make “average patient” risks hard to see from the inside.
One pattern is siloed views of the user. Different functions interact with different slices of reality. Medical affairs hears one narrative, commercial teams hear another, patient services and hubs hear a third, and digital teams see yet another set of metrics. In the absence of structured synthesis, these perspectives are often collapsed into a single, digestible composite. That composite may be comfortable to work with, but it rarely reflects the full spread of needs across age, geography, socioeconomic context, and care settings.[8][2]
A second pattern is over‑reliance on aggregate metrics. Many dashboards emphasize averages and rolled‑up indicators, such as overall adherence, total engagement rates, or mean time on therapy. These metrics are useful, but they can be misleading when used without an understanding of the variance underneath. A seemingly healthy average can mask the fact that certain subgroups are thriving while others are consistently failing to start, stay, or benefit from the therapy or program.[6][1]
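To make the masking concrete, here is a minimal sketch with hypothetical program data (the segment names, group sizes, and rates are assumptions for illustration only): the single overall adherence figure a dashboard reports looks healthy while one subgroup is quietly failing.

```python
# Hypothetical dashboard data: patient counts and adherent counts per
# subgroup. Segment names and numbers are illustrative assumptions only.
segments = {
    "urban, commercially insured": {"patients": 800, "adherent": 720},  # 90%
    "rural, age 70+": {"patients": 200, "adherent": 100},               # 50%
}

total_patients = sum(s["patients"] for s in segments.values())
total_adherent = sum(s["adherent"] for s in segments.values())

# The single number most dashboards surface:
overall = total_adherent / total_patients

for name, s in segments.items():
    print(f"{name}: {s['adherent'] / s['patients']:.0%}")
print(f"Overall: {overall:.0%}")  # 82% — looks healthy, hides the 50% segment
```

The 82% headline would pass most program reviews; only the subgroup breakdown reveals that one in five patients belongs to a segment where half are non‑adherent.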
A third pattern is time and resourcing pressure. Teams are often under intense pressure to move quickly, align cross‑functional stakeholders, and show visible progress. Under these conditions, designing for an “average” user feels efficient. It enables consensus, simplifies vendor requirements, and fits within standard timeline and budget constraints. The trade‑offs only become evident later, when outcomes lag expectations and remediation requires more effort than a deeper look upfront would have required.[3]
Functions that should keep this risk in view
Multiple functions have a direct stake in whether “average patient” thinking is shaping critical initiatives.
Medical affairs and clinical development need to ensure that real‑world patterns of use and support do not systematically erode the clinical value established in trials, particularly for older, more complex, or under‑represented populations. Commercial and market access functions depend on consistent real‑world performance and engagement to sustain launch trajectories, strengthen payer relationships, and justify value propositions. When high value segments are poorly served by generic designs, commercial potential is left on the table.[5][1][8][3]
Patient services and hubs sit at the interface of real‑world complexity. They are often forced to manually “customize around” standardized processes that do not quite fit specific patient groups, creating operational burden and inconsistent experiences. Digital, innovation, and IT teams are tasked with deploying platforms and tools that must work across a heterogeneous user base. When the underlying assumptions about users are oversimplified, even technically sound solutions face adoption challenges and repeated retrofitting.[7][4]
Health economics, outcomes research, real‑world evidence, and strategy teams rely on real‑world data to demonstrate value, inform evidence strategies, and guide portfolio decisions. When the impact of heterogeneous journeys and support models on outcomes is not explicitly examined, important nuances can be missed. For all of these groups, the question is not whether patient focus matters. It is whether the organization’s mental model of its patients and users is robust enough for the stakes involved.[1][2]
The cost of ignoring versus examining “average”
For leadership teams planning their next wave of initiatives, the trade‑off is clear.
Continuing to design for an “average” user may feel efficient in the short term. It keeps plans simple, aligns stakeholders quickly, and avoids difficult questions about heterogeneity. The cost is absorbed over time in quieter ways. Underperforming segments, unexplained variance in outcomes, incremental operational workarounds, and erosion of trust in programs that never quite fit the people they were meant to help all become part of the background noise of the portfolio.[2][3]
Committing to look deeper does not require endlessly slicing populations or over‑engineering personalization. It requires deliberately identifying where the stakes are highest, where the risk of treating heterogeneous patients as if they were the same is most likely to undermine outcomes, and where a more nuanced view would change design and investment choices. The critical question for leaders is where they are relying on an “average” that is convenient for planning but dangerous for execution, and what it would be worth to see that risk clearly now rather than several years into an initiative.[8][3]

References
[1] Makady, Amr, et al. “What Is Real-World Data? A Review of Definitions Based on Literature and Stakeholder Interviews.” Value in Health, vol. 20, no. 7, 2017, pp. 858–865, https://doi.org/10.1016/j.jval.2017.03.008.
[2] Marmot, Michael. The Health Gap: The Challenge of an Unequal World. Bloomsbury, 2015.
[3] The Economist Intelligence Unit. The Future of Drug Development: Barriers, Enablers and Calls for Change. Economist Intelligence Unit, 2019, https://druginnovation.eiu.com/wp-content/uploads/2019/05/Parexel-Quantitative-report-part-2Final-1.pdf.
[4] Cajita, Mitzi I., et al. “Digital Health Technology Use Among Older Adults.” Journal of Cardiovascular Nursing, vol. 33, no. 4, 2018, pp. 345–352, https://doi.org/10.1097/JCN.0000000000000491.
[5] Nekhlyudov, Larissa, et al. “Addressing the Needs of Cancer Survivors.” Journal of Clinical Oncology, vol. 35, no. 1, 2017, pp. 18–21, https://doi.org/10.1200/JCO.2016.71.6452.
[6] Herrett, Emily, et al. “Data Resource Profile: Clinical Practice Research Datalink (CPRD).” International Journal of Epidemiology, vol. 44, no. 3, 2015, pp. 827–836, https://doi.org/10.1093/ije/dyv098.
[7] Wolf, Michael S., et al. “Health Literacy and Patient Outcomes.” Journal of General Internal Medicine, vol. 20, no. 8, 2005, pp. 760–766, https://doi.org/10.1111/j.1525-1497.2005.0178.x.
[8] Artiga, Samantha, and Kendal Orgera. “Key Facts on Health and Health Care by Race and Ethnicity.” Kaiser Family Foundation, 2020, https://www.kff.org/report-section/key-facts-on-health-and-health-care-by-race-and-ethnicity-coverage-access-to-and-use-of-care.

