In a paper published in the Research section of the BMJ today, Peter Gøtzsche once again lines up the guns against organized screening programs, targeting in this instance the Danish screening programme.
http://www.bmj.com/cgi/content/full/340/mar23_1/c1241
His conclusions are -
"We were unable to find an effect of the Danish screening programme on breast cancer mortality. The reductions in breast cancer mortality we observed in screening regions were similar or less than those in non-screened areas and in age groups too young to benefit from screening, and are more likely explained by changes in risk factors and improved treatment than by screening mammography.
We believe it is time to question whether screening has delivered the promised effect on breast cancer mortality."
As this may hit mainstream news media, I have gathered some responses so far -
Danish and Swedish experts have replied saying that -
They claim that mammography screening in Denmark had no impact on breast cancer mortality. This claim is unsubstantiated, firstly because the authors used very crude data, and secondly because the analysis was not geared to answer the question.
Firstly, breast cancer screening can only possibly have an effect on women not already diagnosed with breast cancer prior to screening. Therefore the so-called “refined mortality” should be used in evaluation of screening. Jørgensen et al did not use refined mortality. Furthermore, they merged data from three screening areas that started screening at different points in time, and used age groups instead of cohorts. Together this gave quite “polluted” data.
Secondly, they calculated the “annual change in the relative risk of breast cancer death” by time period and area, excluding 1992-1996. The relevant outcome measure is, however, the change in breast cancer mortality in the screening area controlled for the change in breast cancer mortality in the non-screening area.
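The distinction matters: the screening effect should be estimated as the mortality change within the screened area divided by the mortality change within the non-screened area over the same period, so that secular trends (better treatment, changing risk factors) cancel out. A minimal sketch of this ratio-of-ratios calculation, using made-up illustrative rates rather than the actual Danish figures:

```python
# Hypothetical breast cancer mortality rates (deaths per 100,000
# person-years) before and after screening was introduced.
# These numbers are illustrative only, NOT the Danish data.
screened_before, screened_after = 60.0, 45.0      # screening area
unscreened_before, unscreened_after = 58.0, 50.0  # non-screening area

# Mortality change within each area over the same calendar period
screened_change = screened_after / screened_before        # 0.75
unscreened_change = unscreened_after / unscreened_before  # ~0.86

# Ratio of ratios: change in the screening area controlled for the
# secular trend observed in the non-screening area
relative_effect = screened_change / unscreened_change
print(f"Relative effect of screening: {relative_effect:.3f}")
```

In this toy example both areas improved, but the screened area improved more, leaving a residual relative effect below 1 attributable (in principle) to screening; looking only at the screened area's own trend would overstate the benefit.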
Even using these “polluted” data, the relative breast cancer mortality decreased for women aged 55-74, who were covered by screening, while it did not decrease for women aged 35-54, who were largely uncovered by screening, and it was slightly but statistically non-significantly decreased for women aged 75-84, where the majority, but not all, of the person years were uncovered by screening. Although this pattern is actually visible in Figure 1 of the paper by Jørgensen et al, it was missed in their analysis, in part because they excluded data from the period 1992-1996.
As we have reported previously, the measured impact of mammography screening on breast cancer mortality is highly dependent on the data set used for the analysis. Use of “polluted” data leads to biased estimates (2). Using cohort-based refined mortality, we found a 25% decrease in breast cancer mortality in the municipality of Copenhagen during the first 10 years following the introduction of mammography screening in April 1991 (3). We deliberately did not include data from Funen and Frederiksberg in that analysis, as cause of death data were not available at that time for the first 10 years of these two screening programmes.
Other commentators also note -
The analysis of population trends in breast cancer mortality in the presence of screening is complicated by the inability to measure exposure to screening, and by the long period of follow-up required. Studies such as this one by Jørgensen et al obscure whatever benefit may be present with crude, insensitive methodology. We expect to see a range of benefits from mammography, some small and some large, depending on the design and quality of the screening program, its duration, and the participation rate of the target population. To argue that there is no benefit from modern mammography on the basis of such flawed methods means this paper contributes nothing of substance to the ongoing debate.
Friday, 26 March 2010
Tuesday, 16 March 2010
The US Preventive Services Task Force recommendations on screening mammography are flawed
Guidelines for mammography screening published by the U.S. Preventive Services Task Force (USPSTF) in November are not only based on flawed methodology; they also fail to address current breast imaging practice and data, making them obsolete, according to a critique published in this month's Journal of Diagnostic Medical Sonography.
Author Kevin Evans, Ph.D., evaluated the USPSTF's report methodology and found that it did not meet established standards for systematic reviews (JDMS, January/February 2010, Vol. 26:1, pp. 19-23). Evans is chair of the radiologic sciences division in the School of Allied Medical Professions at Ohio State University in Columbus.
Evans used two resources to evaluate the USPSTF's report: the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), a 27-point checklist, and the Assessment of Multiple Systematic Reviews (AMSTAR), an 11-point checklist.
The task force's report scored 7 out of 27 on the PRISMA checklist and 1 out of 11 on the AMSTAR list. These low methodological scores call into question the rigor used in developing the report, limiting it to a literature review instead of a formal systematic review and reducing its overall scientific impact to a much lower level in the hierarchy of evidence, according to Evans.
"I picked two of the most well-known methods to evaluate systematic reviews and applied them to the report," he told AuntMinnie.com. "It's possible that USPSTF met these standards but failed to provide their methodology in the report. This becomes problematic in reading their guidelines."
The USPSTF's intention was to update its 2002 report by using other systematic reviews, meta-analyses, recently published literature, and data from the Breast Cancer Surveillance Consortium from 2000 to 2005. In its guidelines, it proposed the following, according to Evans:
- Routine screening mammography in women ages 40 to 49 years should not be conducted; rather, biennial screening should be offered at ages 50 to 74 years.
- A lack of published evidence currently exists to provide a guideline for screening mammography for women older than 75 years of age.
- A lack of evidence exists for assessing the benefits and harms of using clinical breast examinations for women 40 years and older.
- Clinicians are not recommended to teach breast self-examination to women, as it is not a sensitive technique and raises a woman's level of anxiety.
- A lack of published evidence currently exists to provide a guideline about benefits and harms associated with digital mammography or MRI instead of film-screen mammography.
USPSTF used data from film-screen mammography in its report, rather than taking into consideration that digital mammography was developed to address film-screen's limitations and is in widespread use, according to Evans. In fact, one of the puzzling things about the USPSTF report is its lack of any reference to the American College of Radiology Imaging Network (ACRIN) Digital Mammographic Imaging Screening Trial (DMIST), conducted in 2005.
"USPSTF didn't make specific mention of DMIST," he said. "And yet they claim that more evidence is needed to provide a guideline about benefits and harms associated with digital mammography, instead of film-screen mammography."
"Other U.S. Preventive Services Task Force reports are routinely high quality," Evans said. "It's possible that the breast cancer screening task force did a good job but didn't spell out their methods. In any case, their report has created confusion for everyone, as well as our government officials."
If the U.S. Department of Health and Human Services had addressed the guidelines point by point, this confusion might have been put to rest earlier, Evans said.
"[After the guidelines were released], the Department of Health and Human Services responded by telling the public not to pay attention," he said. "They should have asked the USPSTF to provide an addendum with additional details on their review."
In the aftermath of the report's publication, USPSTF should take several steps to clear up the confusion, Evans wrote: It needs to provide an addendum that details its methodology, and if a systematic review as outlined by PRISMA or AMSTAR has not been conducted, it needs to be done.
"A revised set of guidelines is needed to assist patients in making the best decision about participating in screening breast examinations," he concluded.