Year: 2015 | Volume: 5 | Issue: 3 | Page: 114-118
A conceptual framework (Cat-4) for estimating clinical relevance of evidence related to oral diagnosis
Rahul Nair1, Amanda-Mae Nguee Ai-Min2
1 ARCPOH, School of Dentistry, University of Adelaide, Adelaide, SA, Australia; Department of Oral Sciences, School of Dentistry, National University of Singapore, Singapore
2 Dental Officer, Ministry of Health, Singapore
Date of Web Publication: 28-Apr-2016
Source of Support: None, Conflict of Interest: None
Abstract
This article categorizes studies that assess diagnosis in oral health care into four categories (Cat-4) based on the extent of clinically relevant information in their reported analyses. Category 1 includes studies publishing results from in vitro research. Category 2 includes studies that compare the results of a diagnostic test of interest against those of a reference test. Category 3 includes studies that assess the overall effect of the diagnostic test on future health, function, or quality of life. Finally, Category 4 includes studies that report economic analyses. Each category also includes a hierarchy of evidence (based on study design) that can be used for further assessment of internal validity, along with other published criteria for testing internal validity and applicability. Clinical application of Cat-4 should result in greater awareness of the uncertainties in diagnosis that can lead to missed diagnosis and overdiagnosis.
CLINICAL RELEVANCE TO INTERDISCIPLINARY DENTISTRY
- Quantification of the ultimate effects of diagnosis on patients' lives is needed
- Cat-4 presents a framework that combines clinical relevance with the assessment of internal validity of evidence related to diagnosis
- An understanding of the aspects of the evidence that are missing may result in a more realistic planning of clinical procedures.
Keywords: Critical appraisal, diagnosis, diagnostic test studies, evidence-based dentistry
How to cite this article: Nair R, Ai-Min AMN. A conceptual framework (Cat-4) for estimating clinical relevance of evidence related to oral diagnosis. J Interdiscip Dentistry 2015;5:114-8.
Introduction
In oral health care, emphasis is often placed on the various therapies and how effective they are at meeting the needs of patients and populations. Selection of the most appropriate therapy depends on an accurate diagnostic process that detects the disease, its characteristics, and its prognosis. Thus, diagnosis plays a critical role in healthcare provision. Diagnostic procedures include diagnostic tests, screening tests, and prognostic factors. In oral health care, these include diagnostic tests such as radiographs, biopsies, or visual-tactile examination; screening tests such as oral cancer screening protocols or temporomandibular disorder screeners; and assessment of prognostic factors such as tobacco use, oral hygiene, or sugar consumption. As oral diseases such as dental caries and periodontal disease continue to be widely prevalent, it is important to quantify the beneficial effect of the diagnostic procedures that are used to diagnose and choose appropriate dental therapies.
Published research articles dealing with the accuracy of diagnostic tests are often approached from a researcher's perspective and need assessment of their usefulness for individual clinical practice. A literature search identified two publications in peer-reviewed journals that outline a pragmatic approach to the evidence-based assessment of diagnostic tests for dental professionals and students. While a pragmatic approach is essential for applying currently available evidence, these two publications do not explore the overall need for evidence in oral health care.
While quantifying the comparative effects of diagnostic tests, it is important to keep in mind the ever-increasing ability to detect minor defects that many diagnostic tests claim to achieve. These increases in the ability to detect minor defects can lead to an increasing frequency of overdiagnosis, which is the detection of conditions that do not result in any perceivable effect in the individuals who have them. Diagnosing these pseudodiseases can reduce patients' overall well-being simply by labeling them as having a disease and by subjecting them to procedures that are not required. This definition may need to be translated to the lesion level rather than the individual level, as oral diagnoses are often made at the lesion level and oral diseases such as dental caries and periodontal diseases are highly prevalent. The risk of overdiagnosis is also increased with the use of screening tests. This is relevant in cases such as the recommended biannual dental checks for everyone, where select diagnostic tests (such as dental caries examinations and periodontal assessments) are routinely carried out as screening tests as well.
Use of randomized controlled trials (RCTs) can reduce the biases that are commonly associated with diagnostic tests. For instance, an RCT could randomly assign the competing diagnostic procedures to two (or more) allocation arms, let each diagnosis inform the treatment choice, and then assess the difference in outcomes that follow from each diagnosis being compared. Here, the outcomes of interest go beyond comparisons such as sensitivity and specificity and report more relevant outcomes such as disease state, function and quality of life, quality-adjusted life years (QALYs), or overall benefits. We suggest the selection of an appropriate outcome that measures the overall well-being of patients. Such outcome measures could quantify the effect of the various diagnostic tests that are compared against each other. Measuring the effect of diagnosis on overall well-being can also reduce overdiagnosis bias by reducing the detection of pseudodiseases. Currently, patient-reported outcomes, and more specifically health-related quality of life, are considered the standard approach for this measurement. In the case of oral health outcomes, these are the oral health-related quality of life (OHRQoL) measures. As suggested by Sullivan, limiting the outcomes of interest to merely reducing diseases or increasing lifespans would be doing a disservice to patients. This is largely because conditions are labeled as diseases when they impact a person's health in the present and future. It is these concerns about impact on patients' lives in the present or future that are the reason for the existence of healthcare systems, including oral care systems, and accurately measuring the effects of diagnostic procedures on OHRQoL would add great value to evidence related to diagnosis.
Just as appropriate outcomes should be measured, so should the resources that go into diagnoses in oral care and into the therapies (including other consequences) that result from these diagnoses. These evaluations include cost-minimization, cost-benefit, cost-effectiveness, and cost-utility analyses. Among these, cost-utility analysis using QALYs is the approach currently favored by health technology experts in national agencies in countries including the UK and Australia. With QALYs, life years are adjusted for the perceived quality of those years. Using QALYs has the added benefit of measuring a person's preference for well-being and making it accessible to analyses of costs and their effects. Currently, there are no instruments for measuring QALYs in dentistry. In the future, such a measure would enable a more comprehensive outcome than OHRQoL allows.
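The quality adjustment underlying QALYs can be illustrated with a short sketch. The utility weights below are purely hypothetical; as noted above, no validated dental QALY instrument currently exists:

```python
def qalys(states):
    """Sum quality-adjusted life years over (years, utility) pairs,
    where utility is on a 0-1 scale (1 = full health, 0 = death)."""
    return sum(years * utility for years, utility in states)

# Hypothetical course: 2 years at a utility of 0.8, followed by
# 3 years at full health -> 2*0.8 + 3*1.0 = 4.6 QALYs.
example = qalys([(2, 0.8), (3, 1.0)])
```

In a cost-utility analysis, such a quantity would be compared against the resources expended for each competing diagnostic pathway.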
Hence, overall, there are several possible elements (including study design and outcomes) that can be used in studies that compare diagnostic procedures. As a result, studies range from simple comparisons between diagnostic procedures to more comprehensive assessments that include a larger extent of clinical information, such as the resources used (cost) and the effects on patients. Greater inclusion of relevant information related to the clinical context in the analysis results in more useful information for those who need to make decisions. A greater inclusion of contextual clinical information also reduces the number of assumptions needed to apply the information to the clinical situation. This, in turn, reduces the uncertainty in realizing the desired effects from the choice of diagnosis. Previous publications that dealt with the assessment of evidence related to diagnosis in oral health care took a pragmatic approach by mostly dealing with the types of articles that are currently available. These two publications were not meant to help directly assess the extent of evidence that is unavailable. To address this gap, this article aims to provide a conceptual framework (hereafter called Cat-4, for the four categories in oral diagnosis) that categorizes studies assessing diagnosis in oral health care based on the amount of clinically relevant information included in their analyses. Furthermore, Cat-4 aims to provide a framework that combines clinical relevance while complementing currently prevalent evidence-based assessment strategies.
Cat-4: The Four Categories of Studies Pertaining to Diagnostic Procedures in Oral Health Care
Studies pertaining to assessment of diagnostics in oral care are divided into four categories [Table 1]. [Table 1] shows the categories in the first column and the expected hierarchy of evidence (as applicable to clinical practice) in the second column.
Table 1: Cat-4: Categorization of studies pertaining to oral diagnostic procedures based on the comprehensiveness of contextual clinical information
Category 1 includes all in vitro studies that assess diagnostic tests under laboratory conditions. These studies do not provide directly useful evidence for clinical use, as laboratory simulations are not close enough to replicate the diagnostic process in routine clinical practice. This is reflected in other hierarchies of evidence, for therapies and diagnoses alike, which exclude such evidence from direct clinical application. Category 1 studies are nevertheless critical for the initial validation and elicitation of the mechanism of these diagnostic tests before they are translated into the clinical studies of the further categories listed here.
Category 2 includes the common types of clinical studies that compare a diagnostic test of interest to a reference test to assess its accuracy through measures of agreement between the results of the two tests. Usually, a reference test is a diagnostic test that is accepted to have high accuracy through research and practice. A reference test could also be a clinical standard that is commonly used due to its clinical utility and acceptable accuracy. The comparison of the diagnostic test of interest with the reference standard gives a clear perspective on the clinical usability of a new test and helps outline a diagnostic pathway for its application. In the studies included in this category, new diagnostic tests are compared with reference standards, thus limiting their validity to the following assumption: the diagnostic test used as the reference standard is accurate, and the treatment choices that followed it were good choices that, in turn, resulted in an improvement in patients' lives.
The types of studies included in Category 2 are commonly prior-planned cross-sectional studies or secondary analyses of cross-sectional data. Hence, the hierarchy of evidence for studies belonging to Category 2 would be systematic reviews, followed by prior-planned cross-sectional studies, then secondary analyses of cross-sectional data, and finally expert opinion. An earlier publication outlines the process for evaluating the risk of bias and the applicability of these studies, and the same can be applied to any systematic review to assess the included publications.
These studies report measures of accuracy that quantify how closely the results of the diagnostic test of interest approximate those of a reference test. A simple example of a Category 2 study design is given in [Figure 1]. In this example, all participants are assessed using both the diagnostic test of interest (A) and a reference test (R). The results of the two are compared to check how closely A approximates R. Such studies commonly report their results using sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic curves, and likelihood ratios.
Figure 1: Example of a study design for a Category 2 cross-sectional study. *Here, A refers to the diagnostic test of interest and R is the reference standard. †Both A and R are carried out without significant time delay. The analysis compares the results of A to those of R.
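All of these accuracy measures can be derived from a 2×2 cross-classification of A against R. A minimal sketch, using purely illustrative counts:

```python
# Hypothetical 2x2 table cross-classifying the test of interest (A)
# against the reference standard (R); counts are illustrative only.
tp, fp, fn, tn = 80, 10, 20, 90

sensitivity = tp / (tp + fn)               # P(A positive | R positive)
specificity = tn / (tn + fp)               # P(A negative | R negative)
ppv = tp / (tp + fp)                       # positive predictive value
npv = tn / (tn + fn)                       # negative predictive value
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
```

Note that, as discussed above, all of these quantities inherit the assumption that R itself is accurate.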
The validity of studies in Category 2 depends on the validity of the reference standard. Studies in Category 3 sidestep the assumptions related to the reference standard by measuring outcomes of interest to the patient, such as the occurrence of discomfort (e.g., pain or sensitivity), functioning (e.g., a child's ability to attend school or to speak), or measures of OHRQoL. These studies include prospective (RCT and cohort) and retrospective (cohort and case–control) study designs as well as cross-sectional studies. Here, the hierarchy of evidence for studies with a low risk of bias would be as follows: systematic reviews, followed by RCTs, prospective cohorts, retrospective cohorts, case–control studies, cross-sectional studies, and expert opinion. Retrospective cohorts are listed before case–control studies because some case–control studies have been reported to be prone to bias when they include only the more extreme disease states in their comparisons. An example of a study design for Category 3 is given in [Figure 2]. Besides this design, many other design choices are available for answering the specific questions that might arise for clinical applications. In this example, the ultimate effects of diagnosis and the resulting treatment choices are evaluated by measuring appropriate outcomes in the patients prospectively.
Figure 2: Example of a study design for a Category 3 randomized controlled trial. *The sample is allocated randomly into two groups. One undergoes the diagnostic test of interest (A) followed by the treatment choices advised by the results of A. The other group undergoes the reference test (R) followed by the treatment advised by the results of R. The proportions of favourable versus unfavourable outcomes in the two arms are then compared at a time appropriate for the disease in question and the treatment that was given.
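The comparison described for Figure 2 amounts to contrasting the proportion of favourable outcomes between the two arms. A minimal sketch with hypothetical counts, reporting the risk difference with a simple Wald (normal-approximation) confidence interval:

```python
import math

def risk_difference(fav_a, n_a, fav_r, n_r, z=1.96):
    """Difference in the proportion of favourable outcomes between the
    arm diagnosed with test A and the arm diagnosed with reference test R,
    with a Wald 95% confidence interval."""
    p_a, p_r = fav_a / n_a, fav_r / n_r
    rd = p_a - p_r
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_r * (1 - p_r) / n_r)
    return rd, (rd - z * se, rd + z * se)

# Hypothetical trial: 60/100 favourable outcomes in the A arm
# versus 50/100 in the R arm.
rd, ci = risk_difference(60, 100, 50, 100)
```

A confidence interval that excludes zero would suggest that the choice of diagnostic test made a difference to patient outcomes; in the hypothetical counts above, the interval crosses zero.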
Although the questions answered in Category 3 are pertinent to clinical practice, informing the actual effects on patients resulting from the choice of diagnosis, they are incomplete for understanding the resources used. Thus, Category 4 includes studies that carry out an economic analysis of the resources used and the effects of the choice of diagnostic test. The hierarchy of evidence and the study designs are similar to those of Category 3, except for the additional economic analysis.
Discussion
This article attempted to accomplish a broad aim: to categorize the evidence needed by oral health professionals, the patients under their care, and the oral-care system to make appropriate treatment decisions. The most clinically applicable evidence should clearly indicate the ability of the diagnostic test under consideration to correctly diagnose the disease of interest under clinical conditions. Beyond this, the diagnostic test, through its use and the treatment determinations that follow, must also do patients overall good. Finally, patients, clinicians, and other administrators should be able to determine whether it is worth their resources (including time, money, effort, and discomfort). From this perspective, Category 1 carries the largest number of required assumptions for clinical application of the resulting evidence, and Category 4 has the potential for the fewest. As noted earlier, the large assumptions made when applying in vitro research to clinical practice make it more prone to inaccuracies in clinical application. Cat-4 was built on ideas similar to the phases of diagnostic studies by Sackett and Haynes; it added in vitro studies and economic analyses to the overall categorization and simplified the phases. In vitro studies and economic analyses were added here to reflect the strong ties to nonclinical studies present in the literature and to express the need for the economic evaluation required for making oral health choices in a resource-aware environment. The first two phases of preliminary research were dropped from the categories as they seem less relevant to the oral health context. If they were to be retained, they could be combined into Category 1 as part of the evidence that is not directly applicable to clinical practice.
The term “categories” is used here to indicate the groupings of studies, instead of the previously used “phases.” As the present groupings do not require continuity, a clinical question would not have to pass through each category to reach Category 4. For instance, a Category 3 study could easily be changed to a Category 4 study by adding an appropriate economic analysis. It is also important to consider that not all economic analyses reduce assumptions, as some modeling could itself be based on a large number of assumptions. This is especially true when such analyses are not based on valid Category 3 evidence. Then again, Category 3 studies would suffice in cases where the alternative diagnoses do not differ in cost.
A lack of RCTs (Category 3) related to diagnostic tests has been noted previously, and this was the experience of the authors of this manuscript as well. This suggests a larger need for evidence related to diagnosis in oral health care, and careful consideration of the uncertainties present in diagnostic tests. With this in mind, a pragmatic approach for clinicians would be to discuss the uncertainties and assumptions being made with patients, to help them make appropriate decisions.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
Knottnerus JA, Begg C, Bossuyt P, Buntinx F, Deville W, Dinant G, et al. The Evidence Base of Clinical Diagnosis. London: BMJ Books; 2002.
Marcenes W, Kassebaum NJ, Bernabé E, Flaxman A, Naghavi M, Lopez A, et al. Global burden of oral conditions in 1990-2010: A systematic analysis. J Dent Res 2013;92:592-7.
Sutherland SE. Evidence-based dentistry: Part VI. Critical appraisal of the dental literature: Papers about diagnosis, etiology and prognosis. J Can Dent Assoc 2001;67:582-5.
Brignardello-Petersen R, Carrasco-Labra A, Glick M, Guyatt GH, Azarpazhooh A. A practical approach to evidence-based dentistry: V: How to appraise and use an article about diagnosis. J Am Dent Assoc 2015;146:184-191.e1.
Moynihan R, Doust J, Henry D. Preventing overdiagnosis: How to stop harming the healthy. BMJ 2012;344:e3502.
Black WC. Advances in radiology and the real versus apparent effects of early diagnosis. Eur J Radiol 1998;27:116-22.
Sullivan M. The new subjective medicine: Taking the patient's point of view on health care and health. Soc Sci Med 2003;56:1595-604.
Gill TM, Feinstein AR. A critical appraisal of the quality of quality-of-life measurements. JAMA 1994;272:619-26.
Brondani MA, MacEntee MI. The concept of validity in sociodental indicators and oral health-related quality-of-life measures. Community Dent Oral Epidemiol 2007;35:472-8.
Locker D, Allen F. What do measures of 'oral health-related quality of life' measure? Community Dent Oral Epidemiol 2007;35:401-11.
Drummond MF, Sculpher MJ, Claxton K, Stoddart GL, Torrance GW. Methods for the Economic Evaluation of Health Care Programmes. Oxford, UK: Oxford University Press; 2015.
Raftery JP. Paying for costly pharmaceuticals: Regulation of new drugs in Australia, England and New Zealand. Med J Aust 2008;188:26-8.
Grosse SD. Assessing cost-effectiveness in healthcare: History of the $50,000 per QALY threshold. Expert Rev Pharmacoecon Outcomes Res 2008;8:165-78.
Drummond MF, Richardson WS, O'Brien BJ, Levine M, Heyland D. Users' guides to the medical literature. XIII. How to use an article on economic analysis of clinical practice. A. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA 1997;277:1552-7.
Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: A revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011;155:529-36.
Whiting PF, Rutjes AW, Westwood ME, Mallett S; QUADAS Steering Group. A systematic review classifies sources of bias and variation in diagnostic test accuracy studies. J Clin Epidemiol 2013;66:1093-104.
Sackett DL, Haynes RB. The architecture of diagnostic research. BMJ 2002;324:539-41.