Bioethics News

Microsoft and Google Want to Let Artificial Intelligence Loose on Our Most Private Data

April 20, 2016

(MIT Technology Review) – “A lot of people who hold sensitive data sets like medical images are just not going to share them for legal and regulatory concerns,” says Vitaly Shmatikov, a professor at Cornell Tech who studies privacy. “In some sense we’re depriving these people from the benefits of deep learning.” Shmatikov and researchers at Microsoft and Google are all working on ways to get around that privacy problem. By providing ways to use and train the artificial neural networks used in deep learning without requiring direct access to the underlying sensitive data, they hope to train smarter software and to convince the guardians of that data to make use of such systems.

The views, opinions, and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library, the Kennedy Institute of Ethics, or Georgetown University.