Tag: publication bias

Bioethics Blogs

Can we trust research in science and medicine?

By Brian D. Earp  (@briandavidearp)

Readers of the Practical Ethics Blog might be interested in this series of short videos in which I discuss some of the major ongoing problems with research ethics and publication integrity in science and medicine. How much of the published literature is trustworthy? Why is peer review such a poor quality-control mechanism? How can we judge whether someone is really an expert in a scientific area? What happens when empirical research gets polarized? Most of these are short – just a few minutes. Links below:

Why most published research probably is false

The politicization of science and the problem of expertise

Science’s publication bias problem – why negative results are important

Getting beyond accusations of being either “pro-science” or “anti-science”

Are we all scientific experts now? When to be skeptical about scientific claims, and when to defer to experts

Predatory open access publishers and why peer review is broken

The future of scientific peer review

Sloppy science going on at the CDC and WHO

Dogmas in science – how do they form?

Please note: this post will be cross-published with the Journal of Medical Ethics Blog.

Bioethics Blogs

Debating the Replication Crisis – Why Neuroethics Needs to Pay Attention

By Ben Wills


Ben Wills studied Cognitive Science at Vassar College, where his thesis examined cognitive neuroscience research on the self. He is currently a legal assistant at a Portland, Oregon law firm, where he continues to hone his interests at the intersections of brain, law, and society.

In 2010 Dana Carney, Amy Cuddy, and Andy Yap published a study showing that assuming an expansive posture, or “power pose,” leads to increased testosterone levels, task performance, and self-confidence. The popular media and public swooned at the idea that something as simple as standing like Wonder Woman could boost performance and confidence. A 2012 TED talk that co-author Amy Cuddy gave on her research has become the site’s second-most watched video, with over 37 million views. Over the past year and change, however, the power pose effect has gradually fallen out of favor in experimental psychology. A 2015 large-sample replication by Ranehill et al. concluded that power posing affects only self-reported feelings of power, not hormone levels or performance. This past September, reflecting mounting evidence that power pose effects are overblown, co-author Dana Carney disavowed the construct, stating, “I do not believe that ‘power pose’ effects are real.”
What happened?

Increasingly, as the power pose saga illustrates, famous findings and the research practices that produce them are being called into question. Researchers are finding that many attempts to replicate published results produce much smaller effects than the original studies reported, or no effects at all. While scientists have been concerned about this issue for some time, the publicity surrounding the rise and fall of the power pose shows that discussion of this “replication crisis” has unquestionably spilled over from scientists’ listservs into popular culture.


Bioethics News

Bioethicist alleges “publication bias” at NEJM


For a fascinating behind-the-scenes view of how a major medical journal can trash heterodox views, it is hard to beat Ruth Macklin’s saga in the Indian Journal of Medical Ethics (IJME).

Dr Macklin, a prominent bioethicist at Albert Einstein College of Medicine, in New York, disagreed strongly with a battery of articles published in the New England Journal of Medicine about treatments for extremely premature newborns.

The medical issue is complex. Basically, the so-called SUPPORT study compared the oxygen levels given to premature newborns in order to determine the optimal level. A lot is at stake; wrong levels can result in blindness and death. One vocal critic of the SUPPORT study, Peter Aleff, has a brain-damaged and blind son whose disabilities he attributes to problematic oxygen levels. He has described the SUPPORT study as “even more unethical than the syphilis studies in Guatemala and Tuskegee”.

After the results of the SUPPORT study were published in the NEJM in 2010, the Office of Human Research Protections (OHRP) rebuked the University of Alabama for the inadequacy of its informed consent forms. The NEJM came out with all guns blazing in defense of its editorial judgement and launched four counter-attacks on OHRP: an article by two leading bioethicists; an editorial by the NEJM editor, Jeffrey M. Drazen; a long letter signed by 46 bioethicists and paediatricians; and an article by three officials at the National Institutes of Health, including its head, Francis S. Collins.

Dr Macklin and two colleagues studied the forms at the centre of this controversy.


Bioethics Blogs

Too much of a good thing

[Image: nude mouse – https://commons.wikimedia.org/wiki/File%3ANude_mouse.jpg]

A novel anti-cancer drug is found to shrink every tumour type tested in experimental animal models. Let’s rejoice and start clinical trials without delay!

Well, not so fast.

Preclinical experiments in animal models are aimed at showing that a new drug will be useful in human beings. However, individual animal experiments are often too small to support inferences about clinical utility. Techniques such as meta-analysis offer an attractive method for pooling individual studies and supporting more confident assertions regarding the potential clinical utility of a new drug.
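
As a rough, self-contained illustration of what that pooling involves, here is a minimal inverse-variance (fixed-effect) sketch in Python. The effect sizes and standard errors are invented for the example, not drawn from our sunitinib data.

```python
import numpy as np

# Made-up effect sizes (e.g., standardized differences in tumour growth)
# and standard errors from five hypothetical animal experiments.
effects = np.array([-0.80, -0.55, -1.10, -0.30, -0.70])
std_errors = np.array([0.40, 0.25, 0.50, 0.20, 0.35])

# Fixed-effect (inverse-variance) pooling: each experiment is weighted by
# the inverse of its variance, so precise studies count for more.
weights = 1.0 / std_errors ** 2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
# An experiment that reports no error bars or sample size yields no usable
# standard error, so it simply cannot enter this calculation.
```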

With this in mind, our team undertook a meta-analysis using the anti-cancer drug sunitinib (Sutent). Since its publication in eLife, the meta-analysis has been covered in the BMJ, Nature, The Guardian, and Retraction Watch. We were primarily interested in whether the properties of sunitinib observed in preclinical studies would correlate with those observed in patients.

What we found were many avoidable roadblocks to proper meta-analysis. Firstly, poor reporting quality made many experiments impossible to include in the analysis; examples include missing error bars and unreported sample sizes. Secondly, poor methodological practices, such as lack of blinding and randomization or reliance on a single model, were common, calling into question the quality of the data we were extracting. Furthermore, there was no discernible dose-response relationship across experiments in the same malignancy. Finally, trim-and-fill analysis (a tool for detecting possible publication bias) suggested that the overall efficacy of sunitinib across all malignancies could be inflated by as much as 45%.
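
Trim-and-fill works by imputing the studies that appear to be missing from one side of the funnel plot; a simpler way to convey the underlying intuition is an Egger-style regression check for funnel-plot asymmetry. The sketch below is illustrative only: the effect sizes and standard errors are invented, not data from our analysis, and it stands in for the trim-and-fill procedure rather than reproducing it.

```python
import numpy as np

# Invented effect sizes (e.g., log ratios favouring the drug) and standard
# errors for ten hypothetical published experiments. By design, the small,
# imprecise studies (large SE) report the largest effects, which is the
# pattern that selective publication of "positive" results tends to leave.
effects = np.array([-1.20, -0.95, -0.80, -0.75, -0.60,
                    -0.55, -0.50, -0.45, -0.42, -0.40])
std_errors = np.array([0.60, 0.55, 0.45, 0.40, 0.30,
                       0.28, 0.22, 0.18, 0.15, 0.12])

# Egger-style asymmetry check: regress the standardized effect on precision.
# In an unbiased literature the intercept should sit near zero; a clearly
# nonzero intercept is one signature of small-study effects and possible
# publication bias.
precision = 1.0 / std_errors
standardized = effects / std_errors
slope, intercept = np.polyfit(precision, standardized, 1)

print(f"Egger-style intercept: {intercept:.2f} (slope, i.e. pooled effect: {slope:.2f})")
# A formal Egger test would ask whether this intercept differs significantly
# from zero, using an OLS fit that reports standard errors for the coefficients.
```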


Bioethics Blogs

Psychology is not in crisis? Depends on what you mean by “crisis”

By Brian D. Earp
@briandavidearp

*Note that this article was originally published at the Huffington Post.

Introduction

In the New York Times yesterday, psychologist Lisa Feldman Barrett argues that “Psychology is Not in Crisis.” She is responding to the results of a large-scale initiative called the Reproducibility Project, published in Science magazine, which appeared to show that the findings from over 60 percent of a sample of 100 psychology studies did not hold up when independent labs attempted to replicate them.

She argues that “the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works.” To illustrate this point, she gives us the following scenario:

Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate.

Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test.

She’s making a pretty big assumption here, which is that the studies we’re interested in are “well-designed” and “carefully run.” But a major reason for the so-called “crisis” in psychology — and I’ll come back to the question of just what kind of crisis we’re really talking about (see my title) — is the fact that a very large number of not-well-designed, and not-carefully-run studies have been making it through peer review for decades.


Bioethics Blogs

A Decade’s Worth of Gene-Environment Interaction Studies, in Hindsight

In the early 2000s, Avshalom Caspi, Terrie Moffitt, and their colleagues published two papers (here and here), which suggested that we could finally begin to tell rather simple but evidence-based stories about how genetic and environmental variables interact to influence the emergence of complex phenotypes. It’s hard to exaggerate the level of interest those papers generated. According to Google Scholar, they’ve now been cited more than 7,000 times.  To put that in perspective, Watson and Crick’s paper on the structure of DNA has been cited only about 9,500 times.    

In 2011, however, Laramie Duncan and Matthew Keller published a paper in the American Journal of Psychiatry, which told psychiatrists about the results of their meta-analysis of the 103 gene-environment interaction (GxE) studies that had been done in the first decade of this millennium. The news wasn’t good. According to Duncan and Keller, the results of their meta-analysis “were consistent with the existence of publication bias, low statistical power, and a high false discovery rate.”
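
To see why low power plus selective publication yields a high false discovery rate, it helps to run the standard positive-predictive-value arithmetic. The sketch below uses purely illustrative assumptions about the prior probability of a true effect, the power of a typical study, and the significance threshold; none of these numbers come from Duncan and Keller's meta-analysis.

```python
# Back-of-the-envelope false discovery arithmetic (illustrative assumptions,
# not figures from Duncan and Keller's meta-analysis).
prior = 0.10   # assumed fraction of tested GxE hypotheses that are actually true
power = 0.20   # assumed statistical power of a typical candidate GxE study
alpha = 0.05   # conventional significance threshold

true_positives = prior * power            # true effects that reach significance
false_positives = (1 - prior) * alpha     # null effects that reach significance anyway

ppv = true_positives / (true_positives + false_positives)
print(f"Share of significant findings that are real: {ppv:.0%}")
print(f"False discovery rate: {1 - ppv:.0%}")
# Under these assumptions roughly 69% of "significant" results are false,
# and that is before publication bias further filters for positive findings.
```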

This month, Laramie Duncan published a piece with Alisha Pollastri and Jordan Smoller in the American Psychologist that aims to inform psychologists about the results that Duncan and Keller broke to psychiatrists in 2011. The new piece is called “Mind the Gap: Why Many Geneticists and Psychological Scientists Have Discrepant Views about Gene-Environment Interaction Research.” A franker, tad-longer title would have been, “Psychologists (and Everybody Else) Need to Get Up To Speed on What Psychiatric Geneticists Already Know: Gene-Environment Interaction Research Hasn’t Panned Out Yet.” Or, as the new piece puts it, “the first decade of cGxE research has produced few, if any, reliable results.”


Bioethics Blogs

Pharmacodynamic Studies in Drug Development: What you don’t know can hurt you

Research biopsies involve the collection of tissues for scientific endpoints. In early-phase cancer trials, research biopsies are often used to assess the biological activity of a drug on a molecular level. This is called pharmacodynamics – the study of what a drug does to the body at the molecular or cellular level. Because these research procedures can burden patients but have no value for their disease management, the ethical justification for research biopsies rides on the benefit to future patients through the development of safe and effective drugs.

Working under the premise that accrual of such “knowledge value” requires the publication of findings, we previously reported that only a third of pharmacodynamic studies are published in full. This initial report, published in Clinical Cancer Research, also found that over 60% of participating research oncologists regard the reporting quality of pharmacodynamic data as fair to poor. The August issue of the British Journal of Cancer reports on the findings of our recent follow-up to that study.

In it, we find that reporting quality varies widely between studies. Many studies do not report the methods or results that would enable readers to make reliable inferences about the pharmacodynamic findings. For instance, only 43% of studies reported use of blinding, 38% reported the dimensions of the tissues used for biopsy analysis, and 62% reported the flow of patients through analysis. We also found a preponderance of “positive” results, suggesting possible publication bias.

Together, our two investigations offer a complex picture of publication and reporting quality for pharmacodynamic studies involving nondiagnostic biopsies.

The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.