Precision is Easy to Measure, Value is Harder to Prove | Eclevar MedTech
Orthopaedic Robotics

Precision is easy to measure, value is harder to prove: the endpoints that actually sell orthopaedic robotics

NICE spelled this out clearly in its early value assessment discussion: robot-assisted surgery often shows better alignment, but the evidence did not suggest that this reliably translates into better PROMs or clinical outcomes, and overall outcomes were broadly non-inferior. (1)

NICE also highlights why PROMs matter so much here. In NHS PROMs data for 2021 to 2022, only 64.5 percent of knee replacement patients and 77.6 percent of hip replacement patients rated satisfaction as excellent or very good. That is the gap robotics has to close, not just an alignment plot. (1)

So how do you pick endpoints that move decisions, not just publications?

Below is a simple way to think about it, based on what I see as an orthopaedic surgeon and as someone who helps manufacturers design clinical evidence and PMCF plans across orthopaedics, spine and robotics.

The three steps below are a simple way to make sure your robotics evidence is built to win real decisions, not just produce nice accuracy charts, by aligning endpoints to the people who must approve, adopt, and pay for the technology.


Dr Nikhil Khadabadi, MD, MS, MRCS

Chief Medical Officer

Clinician-led excellence in clinical evidence and MDR compliance

Ex TÜV SÜD Clinical Reviewer. MDR clinical evidence and PMCF expert. Board member and active advisor to several robotics-focused medical device companies.

"With over fifteen years of experience across orthopaedic surgery, notified body review and industry, I bring frontline medical and regulatory insight to every evidence program. At Eclevar MedTech we focus on clinical data that is clear, defendable and ready for review."

Step 1

Know who you are trying to convince

Different stakeholders read endpoints in different languages.

1

Regulators

NICE's message is clear: most primary outcomes are non-inferior, even when alignment improves. If your promise is patient benefit, your endpoint plan must measure it directly, not assume precision equals value.

Is it safe?

Does it perform as intended?

Do the clinical data support the promise, and can you keep that evidence current after launch?

So the endpoints they care about are

Safety events and device related harms

Performance evidence that matches intended purpose

Usability and training evidence

A clear post-market plan that keeps evidence current

2

Surgeons

Surgeons still care about accuracy, but adoption is increasingly tied to what happens after the theatre doors close: complications, early recovery, and whether the system fits real workflows.

Surgeon questions sound like this

Can I execute the plan consistently?

Does it help in difficult anatomy?

Does it avoid new complications?

Does it fit my workflow?

So the endpoints they care about are

Consistency and outlier reduction

Complications and reoperation

Early recovery such as pain and function

Workflow impact over the first 20 cases

3

Hospital leadership and procurement

This is where many robotics rollouts quietly fail. Not because the robot is unsafe. Because the business case is fragile.

Procurement teams ask

Does it slow lists down?

Does it reduce length of stay or readmissions?

What is total cost of ownership?

What is the training burden, and who absorbs it?

So endpoints here are often operational and economic, not radiographic.

So the endpoints they care about are

Operating room time broken into parts

Length of stay and discharge timing

Training time and staffing impact

Cost per case, servicing, and pathway assumptions

4

Payers and HTA bodies

Payers and HTA bodies read value through durability and economics. Revision is a key long-term outcome, but it is rare in trials and often needs registry scale and longer follow-up. Cost effectiveness hinges on credible utilities and real-world resource use. NICE highlights the need for more PROMs and utility data, and better resource-use evidence, to make the economics decision-ready. (1)

They want value in a different format

Utilities, QALYs, budget impact

Durability and revision risk

Outcomes at scale, not just at a single centre

So the endpoints they care about are

Utilities and cost effectiveness

Revision and durability through registry follow-up

Outcomes at scale across real sites

Resource use that supports economic modelling
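To make the payer framing concrete, here is a minimal sketch of the incremental cost-effectiveness arithmetic HTA bodies run. All numbers are hypothetical, purely for illustration, not trial data or guidance thresholds.

```python
# Illustrative only: hypothetical per-patient figures, not trial data.
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Hypothetical robotic vs conventional pathway: 1,500 extra cost, 0.1 extra QALY.
ratio = icer(cost_new=12_500, cost_std=11_000, qaly_new=8.05, qaly_std=7.95)
print(round(ratio))  # 15000 -> cost per QALY gained, compared against the payer threshold
```

The point of the sketch: the denominator is utilities, which is exactly the data NICE says is thin. Without credible PROMs and utility gains, the ratio is undefined or unfavourable no matter how accurate the robot is.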

Step 2

Decide the promise, then choose the endpoints that prove it

This step is about connecting precision to a decision. You choose one outcome that proves real value for patients and hospitals, then you add the supporting endpoints that explain why your robot delivers that value and stays safe and usable as it scales.

If you remember one thing, it is this: Pick one value endpoint as your headline. Everything else is supporting proof.

1

Start with one primary endpoint that reflects value

Pick the one outcome you want to be known for. In arthroplasty robotics, this is often a patient-reported outcome at 12 months, because it speaks directly to what patients feel and what hospitals care about.

Example: the Forgotten Joint Score at 12 months is a strong anchor because it is patient-centred and decision makers recognise it.

The point is not that every robot must use this exact PROM. The point is that your primary endpoint should match the promise you want to make.

Now that you have the headline, you need proof the robot actually caused it.

2

Add technical performance endpoints that explain the mechanism

Now keep your alignment and positioning metrics, but place them in the right role. They prove the robot did what it was supposed to do.

Examples of mechanism endpoints surgeons recognise

Arthroplasty

Component position accuracy versus plan, for example deviation in cup inclination and anteversion, femoral component rotation, tibial slope

Alignment outliers, for example percentage outside target mechanical alignment or HKA range

Leg length and offset restoration versus plan

Gap balance or intraoperative balance metrics if your platform measures it

Spine robotics

Pedicle screw placement accuracy on CT using a standard breach grading system

Rate of critical breaches or revisions for malposition

Radiation exposure time or dose reduction versus freehand or navigation

These endpoints are your proof of performance. They should sit under the value endpoint, not replace it.
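A mechanism endpoint like the alignment outlier rate reduces to simple arithmetic on per-case deviations from plan. A minimal sketch, with hypothetical HKA deviations and an assumed plus or minus 3 degree target:

```python
# Illustrative only: hypothetical post-op HKA deviations from plan, in degrees.
hka_deviation = [0.5, 1.2, -2.8, 3.5, 0.9, -1.1, 4.2, 2.0, -0.4, 1.7]

def outlier_rate(deviations, threshold=3.0):
    """Share of cases outside the +/- threshold alignment target."""
    outliers = [d for d in deviations if abs(d) > threshold]
    return len(outliers) / len(deviations)

print(outlier_rate(hka_deviation))  # 0.2 -> 2 of 10 cases outside +/- 3 degrees
```

Report this under the value endpoint: it explains why the PROM moved, it is not the headline itself.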

Then you need to show it did not introduce new harm.

3

Add safety endpoints that survive scrutiny

This is where you remove doubt. Define complications clearly, capture adverse events and serious adverse events consistently, and plan how events will be reviewed.

This matters because robotics can introduce new risk pathways, like fixation pin issues, workflow-related errors, longer operative time in early cases, or unintended changes in technique.

Safety endpoints answer: did we improve performance without introducing new harm?

Then you need to show it fits real theatre flow.

4

Add workflow and adoption endpoints that hospitals actually use

If your robot slows the list down or increases staffing burden, procurement will notice quickly.

So measure workflow in a way that reflects reality. Operating room time broken into meaningful components. Setup time. Planning time. Skin-to-skin time. Turnaround impact. Training time and how it changes over the learning curve.

Then analyse the learning curve properly, not with vague statements. Show how many cases it takes to stabilise time and outcomes, and what support model is required.
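"Analyse the learning curve properly" can be as simple as a rolling-mean plateau check on case times. A rough sketch, with hypothetical skin-to-skin times and an assumed 5-case window and 3-minute tolerance; real studies would use a pre-specified method such as CUSUM:

```python
# Illustrative only: hypothetical skin-to-skin times (minutes) for the first cases.
times = [105, 98, 96, 92, 90, 88, 82, 80, 79, 81, 78, 80, 79, 78, 80]

def stabilisation_case(series, window=5, tolerance=3.0):
    """First case after which the rolling mean stays within `tolerance`
    minutes of the final rolling mean (a crude plateau detector)."""
    means = [sum(series[i:i + window]) / window
             for i in range(len(series) - window + 1)]
    plateau = means[-1]
    for i, m in enumerate(means):
        if all(abs(x - plateau) <= tolerance for x in means[i:]):
            return i + window  # case number where the curve has settled
    return None

print(stabilisation_case(times))  # 10 -> times stabilise around case 10
```

Whatever method you choose, pre-specify it: "the curve flattened" is not decision-usable, "times stabilised by case 10 with X support model" is.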

Then you need a plan for what happens after year one.

5

Add durability endpoints with a registry-first plan

Revision is one of the most meaningful outcomes, but it is also one of the hardest to prove in a short clinical study because events are rare.

So you plan durability in two layers. Short-term reoperations and early failures in your comparative study. Long-term implant survival through registry linkage and real-world follow-up.

Durability endpoints answer: does any performance benefit translate into fewer failures over time?
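Because revisions are rare and follow-up is censored, durability is usually summarised as Kaplan-Meier implant survival. A self-contained sketch on hypothetical data (registries and real studies use dedicated tooling, e.g. the `lifelines` library):

```python
# Illustrative only: hypothetical (years_followed, revised) pairs per implant.
# revised=True means revised at that time; False means censored at that time.
data = [(1.0, False), (2.5, True), (3.0, False), (4.0, True),
        (5.0, False), (5.0, False), (6.0, True), (7.0, False)]

def kaplan_meier(observations):
    """Kaplan-Meier survival: at each revision, multiply cumulative survival
    by (1 - 1 / number still at risk)."""
    survival, curve = 1.0, []
    at_risk = len(observations)
    for time, revised in sorted(observations):
        if revised:
            survival *= 1 - 1 / at_risk
            curve.append((time, survival))
        at_risk -= 1
    return curve

for time, s in kaplan_meier(data):
    print(f"{time:.1f}y survival {s:.3f}")
```

Note how quickly the estimate degrades with small numbers: this is why the article argues for registry linkage rather than trying to prove durability inside a short comparative study.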

And because this is software heavy, you need to protect the evidence from version drift.

6

Add a software and upgrades plan so your evidence does not drift

Robotics is software-heavy. The product will evolve. Interfaces change. Planning algorithms change. Sometimes AI-enabled modules are added or updated.

So you need a clear approach to evidence continuity. What version was studied? What changes are minor? What changes require bridging validation? What changes might require new clinical evidence?

Software lifecycle endpoints answer: can we keep the promise true as the product changes?

Step 3

Three traps to avoid before you start your clinical study and PMCF plan

1

Mistaking precision for value

It is tempting to lead with alignment because it is clean and fast to measure. But NICE said it plainly: better alignment did not reliably translate into better PROMs or clinical outcomes in the evidence they reviewed. (1)

What to do instead: Keep precision metrics, but lead with your value endpoint, then use precision to support the story.

2

Ignoring the learning curve and theatre reality

A robotics study that only reflects expert users can look great, but it does not answer the procurement question: can this work across a full team, on a busy list, after training, without slowing everything down?

What to do instead: Make learning curve and workflow part of the evidence. Measure setup time, planning time, skin-to-skin time, and how they change over the first 6 to 30 cases.

3

Letting software updates break your evidence

Robotics is software-heavy. Versions move fast. If your study does not track what version was used, or what changed during the study, the evidence can become hard to defend.

What to do instead: State the version you are studying, define what counts as a meaningful change, and have a simple bridging plan so updates do not break your evidence.

If you avoid these three traps, your endpoint plan becomes easier to defend to regulators, easier to sell to hospitals, and easier for surgeons to trust.

Manufacturer Checklist

A simple checklist you can use if you are a manufacturer

If you are planning your next robotics study or PMCF programme, start here:

1

Write the promise in plain language

What will be better for the patient, the surgeon, and the hospital?

2

Choose one primary value endpoint

For arthroplasty robotics, PROMs at 12 months are a common and credible anchor, as used in major orthopaedic robotics trials.

3

Add a focused technical performance set

Enough to show the robot is doing something real and measurable

4

Pre specify safety endpoints and reporting rules

Do not leave this to site preference

5

Measure workflow and learning curve explicitly

Make it decision usable for hospitals

6

Build a registry and real world pipeline from day one

It is how you make durability and rare harms testable at scale

7

Lock a versioning and change control approach

So your evidence stays valid as the product evolves

If you are building an orthopaedic robotics evidence plan this year, this is exactly the work I do at the intersection of surgery and evidence generation. I help teams turn a promise into a decision-ready endpoint plan, then deliver it through a lean clinical study plus PMCF and registry follow-up. If you want, message me your platform type (knee, hip, spine, or navigation) and I will share a simple endpoint map that fits your promise.

Contact ECLEVAR MedTech

Reforming Clinical Evaluation of Medical Devices in Europe