Last week, Finale Doshi-Velez, Assistant Professor of Computer Science at the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University, came to the Nest to talk about her data science lab, how AI actually works, and the importance of diversity in the computational sciences.

Here are some takeaways from Maria Litvin’s CSC 470 students:

Sebastian Frankel
I had always assumed that applying machine learning, deep learning, or AI to a problem would improve the outcome. However, Professor Doshi-Velez told us that a machine learning program designed to compare the performance of various drugs on a patient and select a regimen to combat the disease was ultimately less effective than simply searching for a “clone”: a person with similar physical features who already had a working drug regimen. For patients without a “clone,” the program was only moderately more helpful than testing drugs at random. This was one instance where deep learning couldn’t help as much, and it showed me that simpler existing methods can sometimes handle a job better than a highly advanced adaptive computer program, which really surprised me.
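To make the “clone” idea concrete, here is a minimal sketch of what such a similarity search might look like. This is not the lab’s actual method; the feature values, the `find_clone_regimen` function, and the distance threshold are all illustrative assumptions, meant only to show the idea of reusing the regimen of the most similar past patient.

```python
# A rough sketch (not the lab's actual method) of the "clone" idea:
# given a new patient's features, find the most similar past patient who
# already has a working drug regimen, and reuse that regimen.
import numpy as np

# Hypothetical data: each row is a past patient's features
# (e.g. age, weight, a lab value), plus the regimen that worked for them.
past_features = np.array([
    [34, 70.0, 1.2],
    [61, 82.5, 0.9],
    [45, 65.0, 1.5],
])
past_regimens = ["drug A", "drug B", "drug C"]

def find_clone_regimen(new_patient, features, regimens, max_distance=5.0):
    """Return the regimen of the nearest 'clone', or None if no one is close enough."""
    distances = np.linalg.norm(features - new_patient, axis=1)
    nearest = int(np.argmin(distances))
    if distances[nearest] > max_distance:
        return None  # no sufficiently similar patient; fall back to another approach
    return regimens[nearest]

# A new patient very close to the third past patient would get "drug C".
print(find_clone_regimen(np.array([44, 66.0, 1.4]), past_features, past_regimens))
```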

Miles Kaufman
Professor Doshi-Velez discussed the difficulty of analyzing the biased data provided by hospitals. Human bias finds its way into algorithms and data analysis as well. One example she gave of bias in training data can be seen in Google Images. Since Google uses human input to determine which images to show for certain searches, many social biases appear in the results. For example, a search for “Beautiful Women” returns a page almost entirely filled with white women. The computer is given training data by humans, and that data carries many racist and sexist biases. Training on data can be incredibly valuable, but these algorithms can become dangerous and need to be monitored carefully.

Clara Li
… medical records showed higher success rates for women in response to Prozac. The researchers realized this was not because women were biologically more responsive to the drug, but because most gynecologists knew primarily of Prozac as a treatment. Misinterpretations like this show that however powerful machine learning may become, the programs can only do so much unless humans provide the correct input. Thus, it ought to be the goal of health care to improve not only the technical aspect of its machine learning but its human factor as well.
