Tag: artificial intelligence

Bioethics News

Transhumanism. Error of the gods and the power of man? A philosophical approach

Technological progress urgently calls for a corresponding moral progress, so that the spark of reason does not consume the existence of humankind

The technical skills of humans have been remarkable throughout history, so much so that, even now, the artifacts of classical antiquity leave us astounded at the ingenuity of man. Those who are familiar with the Antikythera mechanism can attest that the complexity of this computing device, devised to calculate the movements of the stars, is staggering. The wonder that our creations elicit in us might make us think that the creative and technical ability that characterizes us has a divine origin.

It was Plato, among others, who in his Protagoras put into the mouth of the famous Athenian sophist the myth of the formation of man. In it, the gods forge the mortal races out of other gods that have not yet finished taking form in the elements of earth and fire. The gods therefore command the brothers Prometheus and Epimetheus to capture these incomplete gods and divide their abilities, distributing them among the mortal races. According to the myth, then, the link between the mortal races and the gods is obvious, because the former are formed from the fragments of incomplete gods.

Epimetheus asks Prometheus to let him take charge of forming the new races, and he distributes the fragmented abilities with the intention of creating a balance among the new beings, so that they do not destroy one another: those that enjoy some advantage over the rest are surpassed by others in some other respect.

The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent that of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.

Bioethics Blogs

Ethics & Society Newsfeed: August 18, 2017

Politics

Neil Gorsuch Speech at Trump Hotel Raises Ethical Questions
“Justice Neil M. Gorsuch, President Trump’s Supreme Court appointee, is scheduled to address a conservative group at the Trump International Hotel in Washington next month, less than two weeks before the court is set to hear arguments on Mr. Trump’s travel ban.”

Trump’s Washington DC hotel turns $2m profit amid ethics concerns
“Donald Trump’s company is said to have taken home nearly $2m in profits this year at its extravagant hotel in Washington, DC – amid ethics concerns stemming from the President’s refusal to fully divest from his businesses while he is in office.”

3 representatives want to officially censure Trump after Charlottesville
“In response to Donald Trump’s controversial remarks about the violence in Charlottesville, Virginia, three Democrats want to censure the president.”

Does Trump’s Slippery Slope Argument About Confederate Statues Have Merit?
“NPR’s Robert Siegel talks with Ilya Somin, a professor at George Mason University, about President Trump’s warning that pulling down Confederate statues may lead to a slippery slope in which monuments to the Founding Fathers are torn down.”

Bioethics/Medical Ethics and Research Ethics

Vaccination: Costly clash between autonomy, public health
Bioethical principles in conflict with medical exemptions to vaccinations

CRISPR and the Ethics of Human Embryo Research
“Although scientists in China and the United Kingdom have already used gene editing on human embryos, the announcement that the research is now being done in the United States makes a U.S. policy response all the more urgent.”

Exclusive: Inside The Lab Where Scientists Are Editing DNA In Human Embryos
“[Critics] fear editing DNA in human embryos is unsafe, unnecessary and could open the door to “designer babies” and possibly someday to genetically enhanced people who are considered superior by society.”

Bioethics Blogs

Reproducing the Speculative: Reproductive Technology, Education, and Science Fiction by Kaitlyn Sherman

Walter, a Synthetic, quietly makes his rounds in the brightly lit, pristine interior of the Covenant, a Weyland Corporation Spaceship. Fingers pressed to the translucent, impermeable glass, he checks the status of each crew member as they rest in their cryochambers, suspended in chemically-induced comas until they reach their destined planet in seven years and four months’ time. The ship’s artificial intelligence system, Mother, chimes, “Seven bells and all is well.” Reassured of their security, Walter moves on to the next zone, where another 2,000 cryochambers contain sleeping colonists from Earth. This zone also features a panel of drawers, each housing dozens of embryos—over 1,100 second-generation colonists. They are packed individually into river-stone sized ovoids; clear, solid, egg-like. Amid the rows, an embryo has died, and its artificial uterine-sack is clouded and dark. Observing it briefly, Walter takes it from its socket with a set of tongs and places it into a biohazard bin. The Covenant is on a mission to colonize a habitable, distant planet. Their ship contains everything that could be useful in setting up a new colony: terraforming vehicles, construction materials, and human life itself. Even though these frozen embryos aren’t yet actively developing, they reflect a technology that allows for such a feat, while ensuring a population boom that is not dependent upon the limited space of mature female colonists’ wombs.

This scene is part of the opening sequence of the latest film in Ridley Scott’s Alien franchise. Alien: Covenant (2017) is the most recent science fiction film to illustrate advances in reproductive technologies, especially that of ectogenesis, or external gestation and birth.

Bioethics News

A Start-Up Suggests a Fix to the Health Care Morass

August 16, 2017

In Congress, a doomed plan to repeal the Affordable Care Act, President Obama’s health care law, has turned into a precarious effort to rescue it. Meanwhile, President Trump is still threatening to mortally wound the law — which he insists, falsely, is collapsing anyway — while his administration undermines its implementation.

So it is surprising that across the continent from Washington, investors and technology entrepreneurs in Silicon Valley see the American health care system as the next great market for reform.

Some of their interest stems from advances in technology like smartphones, wearable health devices (like smart watches), artificial intelligence, and genetic testing and sequencing. There is also a regulatory angle: the Affordable Care Act added tens of millions of people to the health care market, and the law created several incentives for start-ups to change how health care is provided. Among the most prominent of these start-ups is Oscar, co-founded by Joshua Kushner (the younger brother of Mr. Trump’s son-in-law, Jared Kushner), which has found ways to mine health care data to create a better health insurance service.

… Read More

NYTimes

Bioethics News

Elon Musk Urges U.S. Governors to Regulate AI Before “It’s Too Late”

July 17, 2017

He may want to send us all to space and make the world drive electric cars, but Elon Musk isn’t gung ho about all technologies. In particular, he’s famously uneasy about the development of machine learning, and has in the past gone so far as to say that he believes building an artificial general intelligence is tantamount to “summoning the demon.” Now, he’s reaffirmed that message to U.S. governors, urging them to regulate AI—and quickly.

Speaking at the National Governors Association meeting in Rhode Island on Saturday, Musk called AI “the biggest risk that we face as a civilization,” according to the Wall Street Journal. That’s a sentiment shared by a small but influential crowd of techno-thinkers. Whether it’s an accurate assessment of the situation is very much up for debate, however, as Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence and Professor of Computer Science at the University of Washington, has argued on these very pages.

… Read More

Image via Flickr (Attribution licence); some rights reserved by Heisenberg Media

MIT Technology Review

Bioethics News

What an Artificial Intelligence Researcher Fears about AI

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It’s perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, “Matrix”-like, as some sort of human battery.

Bioethics Blogs

The Neuroethics Blog Series on Black Mirror: Be Right Back

By Somnath Das
Somnath Das recently graduated from Emory University where he majored in Neuroscience and Chemistry. He will be attending medical school at Thomas Jefferson University starting in the Fall of 2017. The son of two Indian immigrants, he developed an interest in healthcare after observing how his extended family sought help from India’s healthcare system to seek relief from chronic illnesses. Somnath’s interest in medicine currently focuses on understanding the social construction of health and healthcare delivery. Studying Neuroethics has allowed him to combine his love for neuroscience, his interest in medicine, and his wish to help others into a multidisciplinary, rewarding practice of scholarship which to this day enriches how he views both developing neurotechnologies and the world around him. 
---
Humans in the 21st century have an intimate relationship with technology. Much of our lives are spent being informed and entertained by screens. Technological advancements in science and medicine have helped and healed in ways we previously couldn’t dream of. But what unanticipated consequences may be lurking behind our rapid expansion into new technological territory? This question is continually being explored in the British sci-fi TV series Black Mirror, which provides a glimpse into the not-so-distant future and warns us to be mindful of how we treat our technology and how it can affect us in return. This piece is part of a series of posts that will discuss ethical issues surrounding neuro-technologies featured in the show and will compare how similar technologies are impacting us in the real world. 

*SPOILER ALERT* – The following contains plot spoilers for the Netflix television series Black Mirror

Bioethics News

Is It Unethical to Design Robots to Resemble Humans?

June 22, 2017

So goes the scene in Mike Judge’s cult classic film Office Space, which is a cathartic release from the constant indignities of the modern worker. The printer is a source of chagrin for its regular paper-jam notifications and its inability to properly communicate with its human users. There is no trigger to feel compassion toward this inanimate object: It is only a machine, made of plastic, and filled with microchips and wires. When the printer met its demise, the audience felt only joy.

But what if this brutal assault had been on a human-looking machine that had cried out to its attackers for mercy? If instead of a benign-looking printer, it was given a name and human characteristics? Would we still mindlessly attack it? Would we feel differently?

As technology progresses from inanimate objects governed by numbers to human-looking machines controlled with conversations, it raises questions as to the compassion owed to artificial intelligence—and each other.

… Read More

Image via Flickr (Attribution-ShareAlike licence); some rights reserved by tellumo

Qz

Bioethics Blogs

Using AI to Predict Criminal Offending: What Makes it ‘Accurate’, and What Makes it ‘Ethical’.

Jonathan Pugh

Tom Douglas

The Durham Police Force plans to use an artificial intelligence system to inform decisions about whether or not to keep a suspect in custody.

Developed using data collected by the Force, the Harm Assessment Risk Tool (HART) has already undergone a two-year trial period to monitor its accuracy. Over the trial period, predictions of low risk were accurate 98% of the time, whilst predictions of high risk were accurate 88% of the time, according to media reports. Whilst HART has not so far been used to inform custody sergeants’ decisions, the police force now plans to take the system live.
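Figures like these are easiest to read as predictive values computed from a confusion matrix: of all the suspects the tool labelled low risk, what fraction truly did not reoffend, and analogously for high risk. The counts below are invented purely to illustrate the arithmetic; they are not HART's actual trial numbers.

```python
def predictive_values(tn, fp, fn, tp):
    """Return (low-risk accuracy, high-risk accuracy) from confusion-matrix counts.

    tn: predicted low risk, did not reoffend    fp: predicted high risk, did not reoffend
    fn: predicted low risk, reoffended          tp: predicted high risk, reoffended
    """
    low_risk_accuracy = tn / (tn + fn)    # fraction of low-risk calls that were right
    high_risk_accuracy = tp / (tp + fp)   # fraction of high-risk calls that were right
    return low_risk_accuracy, high_risk_accuracy

# Hypothetical cohort of 1,000 custody decisions chosen to reproduce the reported rates:
low, high = predictive_values(tn=490, fp=60, fn=10, tp=440)
print(f"low-risk predictions correct:  {low:.0%}")    # 490/500 = 98%
print(f"high-risk predictions correct: {high:.0%}")   # 440/500 = 88%
```

Note that both figures depend on the underlying reoffending base rate as well as on the tool itself, which is one reason headline accuracy numbers alone say little about fairness.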

Given the high stakes involved in the criminal justice system, and the way in which artificial intelligence is beginning to surpass human decision-making capabilities in a wide array of contexts, it is unsurprising that criminal justice authorities have sought to harness AI. However, the use of algorithmic decision-making in this context also raises ethical issues. In particular, some have been concerned about the potentially discriminatory nature of the algorithms employed by criminal justice authorities.

These issues are not new. In the past, offender risk assessment often relied heavily on psychiatrists’ judgements. However, partly due to concerns about inconsistency and poor accuracy, criminal justice authorities now already use algorithmic risk assessment tools. Based on studies of past offenders, these tools use forensic history, mental health diagnoses, demographic variables and other factors to produce a statistical assessment of re-offending risk.
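As a rough illustration of what such a statistical assessment involves, the sketch below combines weighted factors through a logistic function to yield a re-offending probability. The feature names and weights here are invented for illustration; real tools such as HART are trained on historical custody data and use far richer models.

```python
import math

# Hypothetical weights, standing in for coefficients fitted to past-offender data
WEIGHTS = {
    "prior_offences": 0.35,
    "age_at_first_offence": -0.04,
    "current_offence_severity": 0.6,
}
BIAS = -1.2

def reoffending_risk(features: dict) -> float:
    """Logistic combination of weighted factors -> probability between 0 and 1."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

p = reoffending_risk({"prior_offences": 4,
                      "age_at_first_offence": 17,
                      "current_offence_severity": 2})
label = "high" if p >= 0.5 else "low"
```

Even this toy version makes the ethical worry concrete: any variable correlated with a protected characteristic (age here, or postcode in some real tools) feeds directly into the score, which is how discrimination concerns arise.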

Beyond concerns about discrimination, algorithmic risk assessment tools raise a wide range of ethical questions, as we have discussed with colleagues in the linked paper.

Bioethics News

When AI Botches Your Diagnosis, Who’s To Blame?

Artificial intelligence is not just creeping into our personal lives and workplaces—it’s also beginning to appear in the doctor’s office. The prospect of being diagnosed by an AI might feel foreign and impersonal at first, but what if you were told that a robot physician was more likely to give you a correct diagnosis?
