DHSC Compass

Designing a value-based medical technology procurement platform for the NHS

Role

Senior Interaction Designer

Activities

  • User research synthesis
  • Information architecture
  • Prototype development
  • Workshop facilitation

Client

Department of Health and Social Care

Year

2025/2026

Problem statement

The NHS currently lacks a consistent, centralised way to compare medical technology products. While valuable evidence exists, it is often fragmented, inconsistent, or difficult to use – resulting in inefficiencies, repeated work, and missed opportunities to adopt high-value technologies.

Design

Using the NHS Prototype Kit, I designed and iterated on a search experience built around four information-seeking modes: known-item, exploratory, don't-know-what-you-need-to-know, and re-finding.

Trust adoption visibility became the primary signal in the interface — showing which NHS organisations had evaluated or adopted a product proved more persuasive to users than any scoring or rating system.

I designed a dual-pathway upload model to reduce submission burden: a lightweight five-minute document share for lower-maturity trusts, alongside a structured fifteen-minute assessment route for those with more established evaluation processes.

User research and testing

The user researchers ran usability testing across multiple cohorts. Sessions tested five critical assumptions:

  • whether users would contact peers when contact was facilitated
  • whether adoption visibility built confidence
  • whether users would accept our evaluation formats
  • whether showing evaluation process detail helped users judge relevance
  • whether surfacing discussion topics increased the likelihood of contact

Testing validated the confidence-over-efficiency framing consistently. Users engaged far more with peer adoption signals than with structured scoring, and responded positively to seeing which trusts had conducted evaluations — even before the content itself was accessible.

Workshops and collaboration

Early in the project I facilitated feedback workshops with the wider team to gauge which iterations we should test with users. Initial resistance to evaluation diversity was largely resolved once stakeholders understood that seeing how another trust had evaluated a product was itself a useful signal.

Outcomes

The alpha concluded with a validated set of interaction patterns, a tested dual-pathway upload flow, and a suite of design histories following the DfE format, giving the beta team a clear artefact trail and the five prioritised assumptions to carry forward for testing at scale. The passporting model and peer intelligence architecture were accepted as the foundation for the next phase.