What do we need to see? 

  • Evidence of clinical benefits.
  • Real-world evaluation.
  • Information about service users’ and caregivers’ experiences of the intervention.
  • The measurement plan for collecting usage data and evidence.

The sections below describe the domain elements.

Evidence of clinical benefits

Demonstrating benefits.

  • Are there high-quality, relevant studies available?
  • Were these studies done in a setting relevant to the Scottish health and social care system?
  • Did the studies show improvements in relevant outcomes?

For technologies that treat a specific condition.

  • Are there interventional studies (experimental or quasi-experimental design) that support the claimed benefits of the technology?
    • Did they show improvements in relevant outcomes?
    • Is the comparator a care option that reflects the current NHSScotland care pathway?
    • For a novel, innovative or transformative technology, the setting may not reflect the Scottish pathway, but the evidence should still demonstrate excellent performance and high value.

For technologies that diagnose a specific condition.

  • Are there studies that support the claimed benefits of the test?
  • These may include test accuracy studies using an appropriate reference standard, or concordance studies showing agreement with current practice.

When it is not possible, ethical or relevant to conduct an interventional study.

  • Are there observational studies?

Understanding service users’ and healthcare professionals’ views of the technology.

  • Are there qualitative studies or surveys available?

Is any published evidence describing real-world benefits transferable to the Scottish population?

Real-world evaluation

Is there evidence that the technology has been evaluated in the Scottish health and social care system? When it’s important to know how a technology works in the real world, real-world evidence may help to reduce uncertainty.

  • Was the technology acceptable to users (including clinicians, service users and caregivers)?
  • Did the technology perform its intended purpose to the expected level?
  • Did the technology successfully integrate into current service provision or current best practice?
  • Did the technology cause any unintended negative impacts on service users or services?
  • Did the technology show improvements in outcomes (costs saved, efficiencies achieved and health and care improvements)?
  • Did the improvements align with the published interventional studies?
  • Was the technology used in line with expectations (who, how and for how long)?

Information about service users’ and caregivers’ experiences of the intervention

  • Are there descriptions of individuals’ experiences of living with the condition?
  • What expectations do individuals have of the technology, including what they expect to gain?
  • What are individuals’ and caregivers’ experiences of using the technology?
  • Are service users and caregivers satisfied with the technology?
  • Does the use of the technology affect the service user’s ability and opportunity to exercise autonomy?
  • Are any specific interventions or supportive actions, such as the provision of information, needed to respect service user autonomy when the technology is used?
  • Is information available that the service user needs to make informed decisions about the technology?
  • Does use of the technology challenge or change healthcare professionals’ values, ethics or traditional roles in ways that could affect the relationship between the service user and the healthcare professional?
  • Does use of the technology affect human dignity?
  • Does use of the technology affect the service user’s moral, religious or cultural integrity?
  • Does the technology invade the privacy of the service user?
  • What could prevent a group or person from gaining access to the technology?
  • Is there any information that needs to be communicated to service users to improve adherence?

The measurement plan for collecting usage data and evidence

  • Is there a plan, agreed between the evaluator and developer, for ongoing data collection, particularly around ongoing use of the technology and service-user outcomes?
  • Is there a plan, agreed between the evaluator and developer, on post-deployment reporting of changes in performance and safety?