Evidence Based Medicine
Video Transcription
Okay, so thank you so much for this invitation. It's a pleasure for me to be here. Today, we will talk about essential aspects of evidence-based medicine in our practice. I'm a gynecologic oncologist.

So, evidence-based medicine was originally defined as the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. But the practice of evidence-based medicine as a systematic approach to clinical problem solving means integrating individual clinical expertise with the best available external clinical evidence from systematic research, and also with patient values and preferences. It could be defined as making a conscientious effort to base clinical decisions on research that is most likely to be free from bias, and using interventions most likely to improve how long or how well patients live. Patients' values, and this is important, include religious, social, and cultural views, health and life goals, and priorities. They also include views about quality of life, emotional responses, and views about the benefits versus possible harms of treatment. And don't forget that not all patients may want to be fully involved in shared decision-making.

As you know, there have been many misinterpretations regarding evidence-based medicine, and for some people this has not changed. You've got to be careful not to make the same mistakes. Avoid considering, for example, that evidence-based medicine ignores clinical experience and intuition; those are still fundamental parts of evidence-based practice. Another misinterpretation is that there's no room for basic research; basic research is still an essential part of evidence construction. A further misinterpretation is that evidence-based medicine ignores the standard aspects of clinical training, but an adequate medical record and, of course, the physical examination remain an essential part of patient care in evidence-based medicine.
However, is evidence-based medicine enough for clinical practice? Evidence-based medicine is a large step forward, and these skills are necessary, but insufficient for contemporary medicine. All clinicians should also learn how to find the best evidence for everyday practice, assess the relevance of that evidence, and define whether the evidence is patient-oriented. It also requires some understanding of basic statistics, having just-in-time information at the point of care, maybe using web-based or computer-based information, and some tools for clinical decision-making. In the end, evidence-based medicine is a process of lifelong, self-directed, problem-based learning in which caring for one's own patients creates the need for clinically important information about diagnosis, prognosis, therapy, and other clinical and healthcare issues.

Regarding why to use EBM, this is a classical example of how the adequate use of evidence-based medicine can positively change patient care. This is a cumulative methodology. Here, each line and circle on the plot no longer represents the result of one single study; each represents a meta-analysis. We have a series of meta-analyses, and they were done cumulatively, and this shows us how important it is to synthesize the information that we have in an ongoing fashion. The topic here is thrombolytic therapy for preventing death in people who have already had a heart attack. The first study on this question was done in the early 60s, and that first randomized clinical trial included only 23 participants. It had a point estimate of around 0.5, which favors treatment, but the confidence interval is very wide, crossing the null value of one. A second analysis was done, again in the early 60s, in which the second study was added to the first, combining the studies together.
There were 65 participants, and as you can see, the point estimate stays about the same, although the confidence interval gets tighter. By the early 70s, we had something like 10 randomized clinical trials, with more than 2,100 participants already randomized in those trials, and together they showed that thrombolytic therapy was effective: the upper bound of the confidence interval does not cross the null value anymore. So by the early 70s, if we had kept track of that evidence, we would have known that thrombolytic therapy was effective. Yet no one was keeping track of the evidence at the time, and investigators kept randomizing patients to thrombolytic therapy versus placebo or nothing. Look how many more patients were randomized: by the 90s, something like 70 randomized clinical trials had been done, with over 48,000 participants. Had we been keeping track of the information, we would have known the answer by the early 70s. If you look at the estimate, the effect size stays essentially the same; it just gets more and more precise. And it wasn't until a meta-analysis was done that thrombolytic therapy was mentioned as a beneficial intervention in a textbook. On the right side of the slide, you see a graph that shows whether textbook or review authors recommended thrombolytic therapy, and you can see that it was not until the meta-analysis was done that thrombolytic therapy was recommended.

This is a five-step model of evidence-based medicine, and maybe you can use it as a guide to your practice. For EBM, first, you should convert information needs into answerable questions. Second, you must track down the best evidence to answer your question, with maximum efficiency, of course. Third, you should critically appraise that evidence for its validity and usefulness. Fourth, you have to apply the results of this appraisal in your practice. And finally, you should evaluate your performance based on this model.
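As an aside, the cumulative meta-analysis described in the thrombolytic therapy example can be sketched in a few lines of code. This is a minimal illustration of fixed-effect, inverse-variance pooling on the log odds ratio scale; the trial figures below are hypothetical, chosen only to mimic the pattern described in the talk (small early trials with wide intervals crossing one, then a tightening interval as evidence accumulates), not the actual trial data.

```python
import math

# Hypothetical (log odds ratio, standard error) per trial, for illustration only.
trials = [
    (math.log(0.5), 0.60),   # early-60s trial: tiny sample, very wide CI
    (math.log(0.5), 0.45),   # second small trial
    (math.log(0.75), 0.10),  # later era: many participants, precise estimate
]

def cumulative_meta(trials):
    """Fixed-effect inverse-variance pooling, adding one trial at a time.

    Returns, after each trial, the pooled odds ratio and its 95% CI.
    """
    results = []
    weight_sum = 0.0
    weighted_log_or_sum = 0.0
    for log_or, se in trials:
        w = 1.0 / se**2                      # inverse-variance weight
        weight_sum += w
        weighted_log_or_sum += w * log_or
        pooled = weighted_log_or_sum / weight_sum
        pooled_se = math.sqrt(1.0 / weight_sum)
        lo = math.exp(pooled - 1.96 * pooled_se)
        hi = math.exp(pooled + 1.96 * pooled_se)
        results.append((math.exp(pooled), lo, hi))
    return results

for i, (odds_ratio, lo, hi) in enumerate(cumulative_meta(trials), 1):
    print(f"after trial {i}: OR={odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

With these illustrative numbers, the first intervals cross the null value of one while the pooled interval eventually excludes it, which is exactly the point of synthesizing evidence in an ongoing fashion rather than trial by trial.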
To convert the information need into a practical question, the PICO format is maybe the most useful strategy. P goes for patient, population, or problem: how would you describe a group of patients similar to your own patient? I goes for intervention, where you can also consider prognostic factors or exposures: what intervention are you considering for your question? C goes for comparison: you must ask, what is the best alternative, the fair comparison for that intervention? And finally, O goes for outcome: what could you assess, improve, or affect with the intervention that you are considering? When you are building this type of question, you must also consider what type of question you want to ask. Is it about diagnosis? Is it about etiology, treatment, prognosis, or prevention? And of course, what type of study do you want to find? You must ask yourself, what is the best study design or methodology for answering your question?

Classically, we find a concept that is a pyramid of evidence levels. At the bottom, there are expert opinions, editorials, case reports, surveys, and cross-sectional studies; for those studies, the baseline risk of bias is higher and the quality of evidence is lower. At the top, there are cohort studies, then randomized controlled trials, and finally good-quality systematic reviews and meta-analyses of randomized controlled trials; for those studies, the risk of bias should be lower and the quality of evidence higher. However, this pyramid should be seen with caution. Having a randomized controlled trial or a systematic review does not automatically mean you are looking at high-quality evidence, or evidence with a low risk of bias. And as you see here, in a recent reanalysis of the pyramid, you must note that there are differences among the studies.
You can have a good or bad quality randomized controlled trial, a good or bad quality cohort study, or case-control study. In other words, a meta-analysis of well-conducted randomized controlled trials at low risk of bias cannot be equated with a meta-analysis of observational studies at a high risk of bias, even when both are systematic reviews and meta-analyses. In this modification, the systematic reviews are removed from the top of the pyramid and are used instead as a lens through which the other types of studies should be seen and their quality assessed.

Well, that objective is where GRADE comes into play, as an essential part of doing a systematic review. GRADE stands for Grading of Recommendations Assessment, Development and Evaluation. As a transparent framework for developing and presenting evidence, it helps you summarize it and provides a systematic approach for making clinical practice recommendations. GRADE gives a framework for specifying healthcare questions, choosing outcomes of interest and rating their relevance, evaluating the available evidence, and bringing all that information together, with consideration, of course, of the values and preferences of patients and of society, to arrive at recommendations. It is the most widely adopted tool for rating the quality of evidence and for making recommendations, with over 100 organizations worldwide officially endorsing GRADE. Do not forget that in this approach, the quality of evidence can vary among the outcomes, even when they are based on the same primary studies. These are the GRADE certainty ratings, starting from very low, which means that the true effect is probably markedly different from the estimated effect, and going up to high certainty, where you have a lot of confidence that the true effect is similar to the estimated effect you have from your evidence.
There are defined reasons for rating the evidence up or down according to GRADE. It can be rated down for a high risk of bias, imprecision, inconsistency, indirectness, or publication bias, and it can be rated up for a large magnitude of effect, a dose-response gradient, or if all residual confounding would decrease the magnitude of the effect that you found in your evidence.

Now, moving from quality of evidence to recommendations: in GRADE, recommendations can be strong or weak, in favor of or against an intervention. Strong recommendations suggest that all or almost all informed persons would choose that intervention, while weak recommendations imply that there is likely to be important variation in the decisions that informed persons make. The strength of a recommendation is actionable: a weak recommendation indicates that engaging in a shared decision-making process is essential, while a strong recommendation suggests that it's not usually necessary to present both options. Recommendations are more likely to be weak rather than strong when the certainty in the evidence is low, when there is a close balance between desirable and undesirable consequences, when there is substantial variation or uncertainty in patient values and preferences, and of course when interventions require considerable resources.

Finally, evidence-based medicine requires not only reading new articles, but reading the right articles at the right time, and changing what you do and, what is harder, changing what others do, according to what you find and, so importantly, to patient values. As a conclusion, evidence-based medicine is a fundamental tool in our practice for making clinical decisions. Practicing it requires integrating clinical experience and patient values and preferences with the best available evidence. The pyramid of evidence levels by study type must be seen with caution.
We saw that there can be high or low quality systematic reviews and meta-analyses, as well as randomized trials, and there are tools for assessing the quality of evidence in a less subjective way, GRADE being the most recognized one. Finally, remember that quality of evidence is different from recommendation strength. And I think that's all. Thank you so much.
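The GRADE certainty logic described in the talk, starting from the study design and then rating down for concerns or up for special strengths, can be sketched as a small function. This is a simplification for illustration, not the full GRADE procedure; the level names follow GRADE, but the counting scheme is an assumed shorthand.

```python
# GRADE certainty levels, from lowest to highest confidence.
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(randomized: bool, down: int = 0, up: int = 0) -> str:
    """Simplified GRADE-style certainty rating for a body of evidence.

    Randomized trials start at 'high'; observational studies start at 'low'.
    `down` counts concerns (risk of bias, imprecision, inconsistency,
    indirectness, publication bias); `up` counts strengths (large magnitude
    of effect, dose-response gradient, residual confounding that would
    decrease the observed effect). The result is clamped to the scale.
    """
    start = 3 if randomized else 1
    idx = max(0, min(3, start - down + up))
    return LEVELS[idx]

# A body of randomized trials rated down for imprecision and inconsistency:
print(grade_certainty(randomized=True, down=2))   # -> "low"
# Observational evidence rated up for a large magnitude of effect:
print(grade_certainty(randomized=False, up=1))    # -> "moderate"
```

Note how the same primary studies can yield different certainty for different outcomes, simply because the down-rating concerns (say, imprecision) apply to one outcome and not another.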
Video Summary
The video discusses the principles of evidence-based medicine (EBM) and its importance in clinical practice. The speaker, a gynecology oncologist, explains that EBM involves using the best research evidence available, along with clinical expertise and patient preferences, to make decisions about patient care. They emphasize that EBM is not intended to replace clinical experience or intuition, but rather to integrate them with unbiased research findings. The speaker also discusses the cumulative nature of evidence and the importance of synthesizing information over time. Additionally, they introduce a five-step model for practicing EBM and highlight the need for critical appraisal of evidence. The video concludes by introducing the GRADE framework for developing and presenting evidence, and emphasizes the importance of considering both the quality of evidence and patient values when making clinical recommendations. No specific credits are mentioned in the video.
Asset Subtitle
David Viveros
Keywords
evidence-based medicine
clinical practice
research evidence
clinical expertise
patient preferences
Contact
education@igcs.org for assistance.