Tag: medical devices

Uncategorized

Cures Act Gains Bipartisan Support That Eluded Obama Health Law

December 9, 2016

(New York Times) – In recent years, few major bills have commanded as much support as the 21st Century Cures Act, which sailed to passage by votes of 392 to 26 in the House on Nov. 30, and 94 to 5 in the Senate a week later. Once it is signed by President Obama on Tuesday, as the White House has said it will be, the law will allow for money to be pumped into biomedical research and speed the approval of new drugs and medical devices. It also includes provisions to improve mental health care and combat opioid abuse.

The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent that of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.

Deadly Infections Linked to Heart Surgery Device Highlight Holes in FDA Monitoring

December 1, 2016

(Kaiser Health News) – Now hospitals, which consider the heater-cooler machines crucial in open-heart surgery, are scrambling for ways to protect patients. And authorities have urged hospitals from New Jersey to California to notify hundreds of people who underwent surgery in recent years that they might be harboring a dangerous infection. Patients have sued, claiming they were infected in Pennsylvania, Iowa, South Carolina and Quebec. Experts and patient advocates say these cases are only the latest to expose holes in the nation’s approach to spotting and responding to dangerous deficiencies in medical devices.

Medical Device Employees Are Often in the O.R., Raising Concerns About Influence

November 15, 2016

(Kaiser Health News) – They are a little-known presence in many operating rooms, offering technical expertise to surgeons installing new knees, implanting cardiac defibrillators or performing delicate spine surgery. Often called device reps — or by the more cumbersome and less transparent moniker “health-care industry representatives” — these salespeople are employed by the companies that make medical devices: Stryker, Johnson & Johnson and Medtronic, to name a few. Their presence in the OR, particularly common in orthopedics and neurosurgery, is part of the equipment packages that hospitals typically buy.

On the ethics of machine learning applications in clinical neuroscience

By Philipp Kellmeyer

Dr. med. Philipp Kellmeyer, M.D., M.Phil. (Cantab) is a board-certified neurologist working as a postdoctoral researcher in the Intracranial EEG and Brain Imaging group at the University of Freiburg Medical Center, Germany. His current projects include the preparation of a clinical trial using a wireless brain-computer interface to restore communication in severely paralyzed patients. In neuroethics, he works on the ethical issues of emerging neurotechnologies. He is a member of the Rapid Action Task Force of the International Neuroethics Society and the Advisory Committee of the Neuroethics Network.
What is machine learning, you ask?
As a brief working definition up front: machine learning refers to software that can learn from experience and is thus particularly good at extracting knowledge from data and at generating predictions [1]. One particularly powerful variant, called deep learning, has become the staple of much recent progress (and hype) in applied machine learning. Deep learning uses biologically inspired artificial neural networks with many processing stages (hence the word “deep”). These deep networks, together with ever-growing computing power and ever-larger datasets for learning, now deliver groundbreaking performance on many tasks. For example, Google’s AlphaGo program, which comprehensively beat a Go champion in January 2016, uses deep learning algorithms for reinforcement learning (analyzing 30 million Go moves and playing against itself). Despite these spectacular (and media-friendly) successes, however, the interaction between humans and algorithms can also go badly awry.
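To make the working definition above concrete, here is a minimal, purely illustrative sketch of "learning from experience": a model fits a simple rule from observed examples by gradient descent and then predicts an unseen input. All names and numbers are hypothetical; real deep learning stacks many nonlinear layers, but the core loop of adjusting parameters to reduce prediction error is the same.

```python
# Minimal sketch of learning from data: fit y = w*x + b by stochastic
# gradient descent on observed (x, y) pairs, then predict unseen inputs.
# Purely illustrative -- not any of the systems mentioned in the text.

def fit(data, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y      # prediction error on this example
            w -= lr * err * x          # nudge parameters to reduce the error
            b -= lr * err
    return w, b

# "Experience": samples of the hidden rule y = 2x + 1
data = [(0, 1.0), (1, 3.0), (2, 5.0), (3, 7.0)]
w, b = fit(data)
print(round(w, 2), round(b, 2))        # close to 2.0 and 1.0
print(round(w * 10 + b, 1))            # prediction for the unseen input x = 10
```

The "deep" networks discussed above replace the single linear rule with many stacked nonlinear layers, but they are trained by essentially this kind of error-driven parameter adjustment.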

The software engineers who designed ‘Tay,’ the chatbot based on machine learning, for instance, surely had high hopes that it would hold its own in Twitter’s unforgiving world of high-density human microblogging.

Why an $80 Artificial Knee Outruns a $1000 Version

“Most medical devices are designed for places like here, not low-income clinics,” Donaldson said in reference to clinics in wealthy nations that can more easily afford and maintain the prosthetics. D-Rev created more affordable prosthetic limbs to help amputees worldwide who don’t have access to the medical devices.

FDA Faults 12 Hospitals for Failing to Disclose Injuries, Deaths Linked to Medical Devices

November 1, 2016

(Kaiser Health News) – Federal regulators said 12 U.S. hospitals, including well-known medical centers in Los Angeles, Boston and New York, failed to promptly report patient deaths or injuries linked to medical devices. The Food and Drug Administration publicly disclosed the violations in inspection reports this week amid growing scrutiny of its ability to identify device-related dangers and protect patients from harm.

Mind the Accountability Gap: On the Ethics of Shared Autonomy Between Humans and Intelligent Medical Devices

Guest Post by Philipp Kellmeyer

* Note: this is a cross-post with the Practical Ethics blog.

Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.

There is however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.

The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment and to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so there is a good chance that you would benefit from taking part in the trial.
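The decision loop just described — continuously estimate seizure risk from the recorded signal and stimulate only when the risk crosses a threshold — can be sketched as follows. Everything here (the risk measure, the threshold, the function names) is hypothetical and far cruder than what any real implanted device uses; it only illustrates the closed-loop structure.

```python
# Hypothetical sketch of a closed-loop stimulation policy: estimate a
# seizure-risk score from a window of recorded iEEG samples and trigger
# a small electric pulse only when the score crosses a threshold.
# Illustrative only -- not the algorithm of any real device.

def risk_score(window):
    """Crude risk proxy: mean signal power over the window."""
    return sum(s * s for s in window) / len(window)

def control_step(window, threshold=4.0):
    """Return True if the device should stimulate for this window."""
    return risk_score(window) >= threshold

quiet = [0.1, -0.2, 0.3, -0.1]     # low-amplitude background activity
spiking = [3.0, -2.5, 2.8, -3.1]   # high-amplitude, seizure-like burst

print(control_step(quiet))         # False: no stimulation needed
print(control_step(spiking))       # True: apply an interrupting pulse
```

The ethical questions the post raises live precisely in this loop: the device, not the patient or physician, decides when `control_step` fires.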

Now, imagine you decided to participate in the trial and it turns out that the device comes with two options: in one setting, you get no feedback from the device on your current seizure risk, and the decision of when to administer an electric shock to prevent an impending seizure is taken solely by the device.

Neuroimaging in Predicting and Detecting Neurodegenerative Diseases and Mental Disorders

By Anayelly Medina

This post was written as part of a class assignment by students who took a neuroethics course with Dr. Rommelfanger in Paris in the summer of 2016.

Anayelly is a Senior at Emory University majoring in Neuroscience and Behavioral Biology. 

If your doctor told you they could determine, through a brain scan, whether or not you would develop a neurodegenerative disease or mental disorder in the future, would you undergo the process? Detecting the predisposition to or possible development of disorders or diseases not only in adults but also in fetuses through genetic testing (e.g., preimplantation genetic testing) has been a topic of continued discussion and debate [2]. Furthermore, questions regarding the ethical implications of predictive genetic testing have been addressed by many over the past years [4,8]. More recently, however, neuroimaging and its possible use in detecting predispositions to neurodegenerative diseases and mental disorders have come to light. The ethical questions raised by predictive neuroimaging technologies are similar to those posed by predictive genetic testing; nevertheless, given that the brain is the main structure analyzed and affected by these neurodegenerative and mental disorders, different questions have also surfaced.

Computerized Axial Tomography (CAT), Positron Emission Tomography (PET) with radioactive tracers, Magnetic Resonance Imaging (MRI), and Functional Magnetic Resonance Imaging (fMRI) are all current neuroimaging technologies used in the field of neuroscience. While each of these technologies functions differently, they all ultimately provide information on brain function or structure. Furthermore, these neuroscientific instruments have, in recent years, been used to explore the brain in order to determine predictive markers for neurodegenerative diseases and mental disorders, such as Parkinson’s disease, schizophrenia, Huntington’s disease, and Alzheimer’s disease [1,9,11,12].

An Ethical Argument for Regulated Cognitive Enhancement in Adults: The Case of Transcranial Direct Current Stimulation (tDCS)

by Selin Isguven

Introduction: Human Enhancement, Enhancement vs. Treatment

Human enhancement consists of methods to surpass natural and biological limitations, usually with the aid of technology. Treatment and enhancement are considered distinct in that treatment aims to cure an existing medical condition and restore the patient to a normal, healthy, or species-typical state, whereas enhancement aims to improve individuals beyond such a state. However, the line between treatment and enhancement remains debatable. There is no single agreed-upon definition of the normal human condition; this definition depends on factors such as time period and location, among many others. In fact, the debate stems from discussions about the scope of medicine and the definition of ‘healthy.’ For some, like Norman Daniels, a healthy state is the absence of disease, whereas for others, such as the World Health Organization (WHO), it is “a state of complete physical, mental and social well-being.”[1] These two definitions of a healthy state are clearly not identical, and similarly differing opinions exist on what is considered ‘beyond’ healthy.[1]

This article aims to make the case for the regulated use of cognitive enhancement by examining a technique called Transcranial Direct Current Stimulation (tDCS), while addressing common ethical arguments against cognitive enhancers as well as the ethical obligation for proper regulation. The case for regulated use is articulated primarily through a lens of safety, and a comparison is drawn between enhancement and treatment in terms of cost-benefit analysis. Although the aim is to extend regulated use to other, similar cognitive enhancers by presenting tDCS as a case example, it would be wise to evaluate each technique individually.
