Editorial Metrics
Video Transcription
Hello, everybody. I'm Rene Pareja, Gynecological Oncologist from Colombia, and I'm going to address some issues on editorial metrics. The objective of this presentation is to review the main metrics used to evaluate scientific production, their characteristics, strengths, and limitations. Definitely, researchers (that is, us), scientific journals, databases such as the National Cancer Database, SEER, and probably others from other countries, publishers, research institutions, and grant agencies need information on the research they produce, publish, index, promote, finance, and support in order to understand the behavior of the publishing-related process. Research evaluation is a rigorous and systematic process that involves the collection and analysis of data and reports on organizations, processes, projects, services, and resources. Its goal is to improve decision-making and lead to practical applications in real life, which is our final goal when seeing and managing patients. We evaluate the results of scientific research to know what is relevant and what is not, to support decisions about project financing, and to translate this scientific production into public programs and cost-effective real-life interventions.

There are some metrics based on citations, and I will give you an example of their limits: there is no firm relationship between citations and quality, impact, or scientific merit, and little can be assumed about the real reason an article is cited. This is the article by Spiliotis, a paper published in 2015 that has received more than 400 citations. It is one of the worst papers I have ever read, because it is a fabricated paper. It received a lot of criticism, and the authors never replied to the criticisms made of this paper. So, the number of citations does not necessarily reflect the quality of a paper. Citation-based metrics mainly affect the attribution of relevance to articles based exclusively on received citations.

The number of citations is one of the most widely used metrics to evaluate the impact of scientific publications. A citation count refers to the number of times a particular publication has been cited by other researchers. Citation counts can be used to evaluate the influence of a publication within a particular field or to compare the impact of different publications. However, citation counts can also be influenced by factors such as the age of the publication, as we are going to see, the number of researchers working in the field, and the quality of the publications. Beyond raw citation counts, there is another metric called citations per year, or CPY. Citations per year measures the number of times a scholarly article has been cited by others in a given year. This metric is often used to evaluate the impact and influence of a particular article or of the work of a researcher over time. A high number of citations per year suggests that the article or researcher is having a significant impact on their field and that their work is widely recognized and valued by their peers. On the other hand, a low number of citations per year can mean exactly the opposite. It is important to note that citations per year should be interpreted in the context of the field and the age of the article. Let me show you an example. This is the LACC trial, published five years ago, on November 15, 2018, and the LACC trial has received 986 citations in those five years.
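A minimal sketch in Python of the citations-per-year arithmetic described above, using the LACC trial figures quoted in the lecture (the function itself is generic):

# Citations per year (CPY) = total citations accumulated / years since publication.
def citations_per_year(total_citations: int, years_since_publication: float) -> float:
    return total_citations / years_since_publication

# LACC trial figures quoted above: 986 citations over 5 years -> roughly 197 per year.
print(round(citations_per_year(986, 5)))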
Below is the article by Vergote, the EORTC 55971 trial, on the use of neoadjuvant chemotherapy in advanced ovarian cancer. This article was published in 2010 and has received 1,652 citations. If you count citations per year, you will realize that the LACC trial has almost 200 citations per year and Vergote's paper has more than 100 citations per year. The difference is that the LACC trial was published just five years ago and the other article was published 13 years ago. So, both have been quite influential in our specialty, but the number of citations and the citations per year vary with the age of the article.

Now, this is the most used editorial metric: the impact factor. It was created in 1962 by Eugene Garfield to evaluate journals, together with the publication of the Science Citation Index of the Institute for Scientific Information. It is calculated by taking the number of citations received over a given time frame, usually the past two years, and dividing it by the number of articles published in the same interval. So, the number of citations divided by the number of articles published by any given journal. It is used by the database called Web of Science, which since 2016 has belonged to Clarivate Analytics, and thus only citations from journals indexed in this database are counted, which to date includes approximately 1,300 journals. The impact factor is an average value per journal, not per article. There are texts published in a journal that are not counted as articles, such as editorials, case reports, medical board discussions, pictures, etc., but citations to those same texts can still be counted in the numerator (the number of citations over the number of articles). Therefore, there are tricks used by editors to increase a journal's impact factor. One of them is self-citation. You can calculate the impact factor taking self-citations into account or excluding them; usually, impact factors include self-citations. The database that gives access to journal impact factors is the Journal Citation Reports. It is an integral part of the Web of Science, and it is accessible by subscription.

The impact factor is a measure of the average number of citations per article in a particular journal. It is widely used as a measure of the quality and importance of a journal within a particular field. However, the impact factor can be influenced by factors such as the size of the journal, the frequency of publication, and the subject area covered by the journal. The organization responsible for calculating the impact factor is Clarivate Analytics, which publishes the Journal Citation Reports on an annual basis, reporting the impact factor for that year; this usually happens in the first week of June. Criticisms include its potential to incentivize editors and authors to prioritize publication in high-impact journals over other considerations, its susceptibility to manipulation, and its limitations in capturing the full range of scholarly impact of individual articles or researchers.

You can see on this slide the 10 highest impact factors in medicine. The first is The Lancet, with an impact factor of 156, which means that, on average, an article published in The Lancet in the past two years was cited about 156 times. This is really huge. Then comes the New England Journal of Medicine with an impact factor of 145, the Journal of Clinical Oncology 71, The Lancet Oncology 71, JAMA 69, the Journal of the American College of Cardiology 66, JAMA Oncology 64, et cetera.
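A minimal sketch in Python of the two-year impact factor calculation described above; the input numbers are hypothetical placeholders, not data for any real journal:

# Two-year journal impact factor, as described above:
# citations received this year to items published in the previous two years,
# divided by the number of citable articles published in those two years.
def impact_factor(citations_to_prev_two_years: int, citable_articles_prev_two_years: int) -> float:
    return citations_to_prev_two_years / citable_articles_prev_two_years

# Hypothetical example: 3,000 citations to 600 citable articles -> impact factor of 5.0.
print(impact_factor(3000, 600))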
So note the size of those impact factors, in the hundreds and in the dozens. In blue you can see the impact factors for oncology journals. The first is CA: A Cancer Journal for Clinicians, with an impact factor of 86; Nature Reviews Cancer is 20, Nature Reviews Clinical Oncology 14, Cancer Cell 12, The Lancet Oncology 12, and Annals of Oncology 11. So you can perceive the difference, because the first ones were general medical journals and these are journals devoted just to the area of oncology. Regarding gynecologic oncology, our field, the current impact factor of the International Journal of Gynecological Cancer is 4.6, Gynecologic Oncology is 5.3, and the Journal of Gynecologic Oncology, the Korean publication, is 4.7. All of them are located in the first quartile, the top 25% of publications. Journals are ranked into quartiles called Q1, Q2, Q3, and Q4, and this also reflects the quality of the journal; of course, it is better to publish in journals in a higher quartile, such as Q1 or Q2. Last year there were three top impact factor gainers, and the second one was the International Journal of Gynecological Cancer, which increased its impact factor by 35%. At the time of recording this lecture, we are four weeks away from knowing the impact factors for 2023.

The SCImago Journal Rank is another, alternative metric to the impact factor; it was created in 2007-2008 and is based on Elsevier's Scopus database. It is calculated analogously to the impact factor, but it also considers the prestige of the citing journals, which is estimated with specific algorithms. The SCImago Journal Rank is not as relevant or as important as the traditional impact factor.

The H-index is a very interesting editorial metric for evaluating the scientific impact of a given author. It was created in 2005 by Jorge Hirsch. The H-index is a measure of both the productivity and the impact of a researcher's work. It is calculated by determining the largest number h of a researcher's publications that have each been cited at least h times. For example, a researcher with an H-index of 10 has published 10 papers that have each been cited at least 10 times. It is pretty easy to understand. The H-index is a useful metric because it takes into account both the number of publications and the impact of those publications. However, it can also be influenced by factors such as the length of a researcher's career, which I think is the most important, and the size of the research community in which they work. It can be obtained through the Web of Science citation report or through Google Scholar. Here is an example: Professor Ignace Vergote, the guru of the ovarian cancer literature, from Belgium. His H-index is 134, which means that Professor Vergote has published 134 papers that have each received at least 134 citations. This is really huge. You can check the H-index of the main authors if you want, just by going to the site and querying the platform. This is a huge H-index.
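A minimal sketch in Python of the H-index calculation described above; the citation counts in the example are hypothetical:

# H-index: the largest h such that the author has h papers cited at least h times each.
def h_index(citation_counts: list[int]) -> int:
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for ten papers -> H-index of 4.
print(h_index([25, 8, 6, 4, 3, 2, 1, 1, 0, 0]))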
Altmetrics is a relatively new approach to evaluating the impact of scholarly publications that seeks to measure the attention and engagement a publication receives online. Altmetrics stands for alternative metrics, and it refers to the use of non-traditional metrics to measure the impact of a scholarly publication beyond traditional citation counts. The data used in altmetrics is gathered from a variety of online sources, including social media platforms, scholarly communication networks, news outlets, and institutional repositories. Some of the commonly used data sources for altmetrics include Twitter, Facebook, Reddit, Mendeley, and Altmetric.com. You have probably seen this before: the Altmetric donut, in which every single color corresponds to the number of interactions on a given platform. You can see policy documents, news, blogs, Twitter, Google, LinkedIn, Reddit, YouTube, even Pinterest, Facebook, Weibo, Wikipedia, etc. This is quite a modern approach to evaluating the impact of a given author in science using alternative metrics rather than the usual ones.

In conclusion, editorial metrics, although not perfect, are the best tools we have so far to evaluate the impact of science, particularly in our field and in medicine. Metrics for research evaluation can evolve and change, new methodologies emerge, and ways to refine existing mechanisms need to be discussed. Thank you very much for watching this presentation.
Video Summary
In this video, Dr. Rene Pareja, a Gynecological Oncologist from Colombia, discusses editorial metrics and their role in evaluating scientific production. He explains that researchers, scientific journals, databases, publishers, research institutions, and grant agencies need information about the research they produce, publish, and finance. The research evaluation process involves the collection and analysis of data on organizations, projects, resources, and services to improve decision-making and real-life applications. Dr. Pareja emphasizes that citation-based metrics, such as the number of citations and citations per year, are commonly used to evaluate the impact of scientific publications. He cautions that the number of citations may not reflect the quality of a paper and can be influenced by factors such as publication age and the number of researchers in the field. The impact factor, calculated from the number of citations and articles published in a journal, is another widely used metric; Dr. Pareja explains its limitations and its potential for manipulation. He also discusses other metrics such as the H-index and altmetrics, the latter of which measures online attention and engagement. Dr. Pareja concludes that while editorial metrics have their flaws, they are currently the best tools available to evaluate scientific impact in the medical field.
Asset Subtitle
Rene Pareja
Keywords
editorial metrics
scientific production
research evaluation
citation-based metrics
impact factor
Contact education@igcs.org for assistance.