Enterprise AI Analysis
Recommendations for overcoming methodological challenges to health economic modelling that arise when comparing in vitro diagnostics with imaging tests
Undertaking health economic modelling to compare in vitro diagnostics (IVDs) with imaging tests in health technology assessment (HTA) is associated with several challenges. Ignoring these challenges can lead to inaccurate and misleading results. This research identified common challenges and developed practical recommendations for addressing them.
Executive Impact: Key Findings
This multi-method research presents 19 challenges and 30 practical recommendations for researchers to consider when developing models in this area. These recommendations are non-binding but serve as guiding principles. This research aimed to raise awareness of the challenges and support future evaluations and methodological advances.
Deep Analysis & Enterprise Applications
Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.
1. Variability in imaging pathways
- 1. Care pathway mapping is recommended to identify variability in current practice (Graziadio et al. 2024 Recommendations 1 to 3).
2. Variability in position of new test
- 2. When the role of the new IVD test is uncertain, investigating the most cost-effective sequence of tests is recommended.
3. Lack of data to parameterise the model
- 3. Expert opinion is recommended to identify whether alternative imaging tests are sufficiently clinically similar to the comparator to provide proxy values. Elicitation is recommended to identify how the tests are similar, as well as acceptable ranges of key parameters (such as sensitivity, specificity and decision making).
4. Uncertain test accuracy data
- 4a. When evaluating an IVD against a comparator imaging test, selecting accuracy data that evaluates the imaging test in its intended place in the pathway is recommended.
- 4b. If evidence is not available in the relevant patient population, seeking clinical advice on the most generalisable study populations is recommended to inform the most likely direction of results.
- 4c. When evidence is poor, conflicting or uncertain, varying accuracy parameters using sensitivity analysis is recommended to estimate the impact of changes to sensitivity/specificity and to identify the values at which the results will change direction (i.e. threshold analyses). Probabilistic approaches can be used when there is confidence in the ranges and distributions of accuracy parameters.
- 4d. Clinical validation of uncertain accuracy limits or thresholds for clinical use of the IVD is recommended through clinical opinion or elicitation to advise whether ranges are realistic. Validation against other data that are not used in the model may also be useful, for example existing clinical studies or real-world data.
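The threshold analysis described in Recommendation 4c can be sketched in a few lines. The sketch below is illustrative only: the prevalence, willingness-to-pay, cost and QALY figures are hypothetical assumptions (not values from the research), and the decision tree is a deliberately minimal toy structure.

```python
# Minimal threshold analysis sketch (Recommendation 4c). All parameter
# values below are illustrative assumptions, not figures from the research.

def incremental_net_benefit(sens, spec):
    """Incremental net monetary benefit (INB) of the new IVD versus the
    imaging comparator, per patient, from a toy decision tree."""
    prevalence = 0.20                   # assumed disease prevalence
    wtp = 20_000                        # assumed willingness-to-pay per QALY
    comp_sens, comp_spec = 0.85, 0.80   # assumed comparator accuracy
    ivd_cost_saving = 150               # assumed cost saved per patient by the IVD
    qaly_loss_fn = 0.30                 # assumed QALY loss per false negative
    cost_fp = 500                       # assumed cost of a false-positive work-up

    def net_benefit(se, sp):
        false_neg = prevalence * (1 - se)
        false_pos = (1 - prevalence) * (1 - sp)
        return -wtp * qaly_loss_fn * false_neg - cost_fp * false_pos

    return (net_benefit(sens, spec)
            - net_benefit(comp_sens, comp_spec)
            + ivd_cost_saving)

def sensitivity_threshold(spec=0.80):
    """One-way threshold search: the lowest IVD sensitivity (on a 0.001
    grid) at which the INB becomes non-negative, i.e. the value at which
    the results change direction."""
    for i in range(1001):
        sens = i / 1000
        if incremental_net_benefit(sens, spec) >= 0:
            return sens
    return None

print(f"Results change direction at IVD sensitivity ~ {sensitivity_threshold():.3f}")
```

The same grid search can be repeated over specificity, or replaced by probabilistic draws when there is confidence in the ranges and distributions of the accuracy parameters (as Recommendation 4c notes).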
5. Lack of evidence on clinical decision making
- 5a. The preferred model inputs are reliable end-to-end or clinical utility studies that measure clinical decision making, are applicable to the decision problem, and evaluate the imaging test in its intended place in the pathway. Adjusting the model structure/inputs to reflect this evidence is recommended. It is not appropriate to use stand-alone efficacy data from either test when a diagnostic decision is made using combined data.
- 5b. In the absence of direct evidence, follow the recommendations published by Shinkins et al. (2024) for identifying changes in recommended and actual management: seek clinical opinion to understand the impact on decision making, and avoid making blanket assumptions about recommended patient management unless they truly reflect the clinical scenario. Adjusting the model structure/inputs to reflect this evidence is recommended.
- 5c. Performing multivariate sensitivity analysis assuming varying patterns of clinical decision making is recommended (e.g. different patterns of referral behaviour with increasing levels of resource use). If relevant, varying assumptions on clinician compliance to management pathways is also recommended to indicate the direction of results and whether this changes based on different decision-making assumptions.
6. Additional resources are required to optimise performance of a new test
- 6. When optimisation costs could significantly impact accuracy or capacity, incorporating the key additional resources required to use the new test should be considered.
7. Incorporating capital investment costs
- 7a. Capital investment in changes to diagnostic infrastructure should be considered for both the proposed and existing tests.
- 7b. Capital investment costs should always be considered but should only be included in the model if appropriate. Several analytic methods can be used to incorporate capital investment costs. It is recommended to select the method which is most applicable. A qualitative discussion of capital investment for use by multiple departments or multiple indications could also be provided. For example, a digital reader could be used for all pathology services not just one test.
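One common way to incorporate capital investment costs (Recommendation 7b) is to annuitise them into an equivalent annual cost over the equipment's useful life. The sketch below uses this standard annuity formula; the capital outlay, lifespan, discount rate and three-way service split are hypothetical assumptions for illustration.

```python
# Equivalent annual cost (EAC): one common way to annuitise a capital
# investment over its useful life (Recommendation 7b). Figures are
# illustrative assumptions, not values from the research.

def equivalent_annual_cost(capital, years, discount_rate):
    """Spread an up-front capital cost into equal annual payments using
    the standard annuity factor."""
    if discount_rate == 0:
        return capital / years
    annuity_factor = (1 - (1 + discount_rate) ** -years) / discount_rate
    return capital / annuity_factor

# Example: a digital reader shared across pathology services. If three
# services share it, each evaluation might carry a third of the EAC,
# echoing the multi-department point in Recommendation 7b.
eac = equivalent_annual_cost(capital=120_000, years=8, discount_rate=0.035)
print(f"EAC: {eac:,.0f} per year; per service (3 services): {eac / 3:,.0f}")
```

Whether to apportion the EAC across departments or indications, or discuss shared use qualitatively instead, depends on which analytic method is most applicable to the decision problem.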
8. Prioritising the value proposition
- 8. It is not always possible to include every consequence of introducing the test in the model structure because this could make the model too complex. Identifying and considering the most important consequences of introducing the new test is recommended, to help prioritise those that should be included in the model structure.
9. IVD changes the tested population
- 9a. Modellers should identify whether introducing the new test will change the population eligible to receive the test. If identified, changes to the indicated population should be incorporated into the model structure.
- 9b. For screening settings, including differential uptake is recommended if differences between the tests are expected.
10. Incidental findings
- 10a. When considered clinically relevant and there is a policy to act on them, incidental findings and their consequences should be made explicit and considered for incorporation in the conceptual model. When incorporation is not considered appropriate, the rationale should be clearly stated.
- 10b. If incidental findings are likely to have a significant impact on net health effects (including cost effectiveness), these should be included in either the base case or scenario analyses, or the consequences of their omission should be discussed, including an indication of direction if it can be determined.
11. Direct test harms
- 11. When relevant, considering changes in complication rates, uptake or other important consequences of differences in procedural harm is recommended.
12. Different versions of imaging tests in use
- 12a. Modellers should identify whether differences in test versions exist. If differences are important, including them as individual comparators should be considered. Models should allow for variation in performance between versions and handle that variation analytically.
- 12b. When the evidence base is uncertain, Recommendation 4b applies.
13. Variation in test failure and interpretable outputs
- 13a. If missing diagnostic information is identified as an important issue, the model should reflect missing results and how they are managed.
- 13b. Some tests require more tests per patient than others. This could be due to setting or test reliability, e.g. home sampling return rates versus hospital testing. Varying the number of tests per patient through scenario or sensitivity analyses, using probabilistic sensitivity analysis (PSA) where there is good evidence of variability, is recommended.
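The PSA suggested in Recommendation 13b can be sketched for the home-sampling example: an uncertain kit return rate is drawn from a Beta distribution and translated into an expected number of kits per usable result. The Beta parameters, kit cost and re-sending assumption are all hypothetical placeholders.

```python
# Sketch of probabilistic variation in tests per patient (Recommendation
# 13b): home-sampling kits with an uncertain return rate. The Beta
# parameters and kit cost are illustrative assumptions.

import random

random.seed(1)

def expected_kits_per_result(return_rate):
    """Assuming non-returned kits are re-sent until one comes back, the
    expected number of kits per usable result is 1 / return_rate."""
    return 1.0 / return_rate

def psa_cost_per_result(kit_cost=12.0, alpha=60, beta=40, draws=5000):
    """Draw return rates from Beta(alpha, beta) and average the kit cost
    per usable result across draws."""
    total = 0.0
    for _ in range(draws):
        rate = random.betavariate(alpha, beta)  # uncertain return rate
        total += kit_cost * expected_kits_per_result(rate)
    return total / draws

mean_cost = psa_cost_per_result()
print(f"Mean kit cost per usable result: {mean_cost:.2f}")
```

In a full model, the sampled tests-per-patient value would feed the cost arm of each PSA iteration rather than being averaged in isolation as here.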
14. Additional information provided by imaging tests
- 14. The impact of additional diagnostic information should be evaluated in terms of its potential impact on decision making and patient outcomes.
15. Inter-operator variability of imaging tests
- 15a. When clinical opinion indicates that operator skill will be lower in routine care than in specialist settings, performing sensitivity analyses to vary the diagnostic accuracy of the imaging test within reasonable ranges is recommended. This is particularly important if the data populating the model have been collected in specialist settings where user operability and training are high.
- 15b. Time- or experience-based resource use and complication rates should also be considered and varied using scenario or sensitivity analysis for each of the relevant inputs.
16. Variation in lead time for test results
- 16. It is important to consider whether differences in the lead time for test results create system benefits (such as clinician time saved) or resource use benefits (such as cost per image read). These could be included as cost savings or efficiency gains. It is also important to consider any benefits from the patient's perspective (such as comfort during the test) and the clinician's perspective (such as time saved).
17. Variation in timing of care
- 17. Clinical opinion should be sought to understand whether a material difference in time to diagnosis could delay treatment that might impact adversely on disease progression or treatment outcomes. If so, these risks should be incorporated into the model and evidenced appropriately. An appropriate time horizon should be used to estimate the difference in long-term outcomes.
18. Capacity constraints
- 18. If capacity constraints are important to the decision problem, they should be incorporated into the conceptual model. When a new test has the potential to be capacity releasing, the incremental change in resource use should be considered in the results (e.g. number of radiology appointments avoided).
19. IVD triage delays treatment
- 19. If delaying treatment while awaiting IVD triage results increases the likelihood of disease progression, incorporating this higher risk in the model and using an appropriate time horizon to estimate the differences in long-term outcomes is recommended.
Enterprise Process Flow
Calculate Your Potential ROI
Estimate the economic benefits of adopting advanced health economic modelling strategies in your organization.
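A calculation of this kind can be sketched as a simple undiscounted return-on-investment ratio. Every input below is a hypothetical placeholder to be replaced with organisation-specific figures; a fuller version would also discount future benefits and costs.

```python
# Illustrative ROI sketch. All inputs are hypothetical placeholders,
# not benchmarks from the research.

def roi(annual_benefit, annual_cost, upfront_cost, years):
    """Simple (undiscounted) return on investment over the period:
    (total benefit - total cost) / total cost."""
    total_benefit = annual_benefit * years
    total_cost = upfront_cost + annual_cost * years
    return (total_benefit - total_cost) / total_cost

r = roi(annual_benefit=250_000, annual_cost=80_000,
        upfront_cost=150_000, years=3)
print(f"3-year ROI: {r:.0%}")
```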
Implementation Roadmap
A phased approach to integrating these advanced health economic modelling recommendations into your enterprise.
Phase 1: Diagnostic Pathway Mapping & Data Review
Conduct detailed mapping of current IVD and imaging pathways, identifying all variations and potential data gaps. Review existing evidence for accuracy and clinical decision-making. (Duration: 2-4 weeks)
Phase 2: Expert Elicitation & Model Structuring
Engage clinical and economic experts to validate pathways, elicit uncertain parameters (e.g., test accuracy, decision-making patterns), and determine the most appropriate model structure for the HTA. (Duration: 3-5 weeks)
Phase 3: Model Development & Sensitivity Analysis
Build the health economic model, incorporating identified challenges such as capital costs, resource optimization, and incidental findings. Perform extensive deterministic and probabilistic sensitivity analyses. (Duration: 6-10 weeks)
Phase 4: Impact Assessment & Reporting
Evaluate the net health effects, cost-effectiveness, and budget impact. Clearly report findings, assumptions, and the implications of identified challenges for decision-making. (Duration: 2-3 weeks)
Ready to Transform Your Diagnostic HTA?
Leverage our expertise to navigate complex health economic modelling for IVDs and imaging tests.