
Q&A: Microsoft’s AI for Good Lab on AI bias and regulation

Juan Lavista Ferres, head of the AI for Good Lab at Microsoft, is co-author of a book that provides real-world examples of how artificial intelligence can be used responsibly to positively impact humanity.

Ferres sat down with MobiHealthNews to discuss his new book, how to mitigate data input biases in AI, and recommendations for regulators creating rules on the use of AI in healthcare.

MobiHealthNews: Can you tell our readers about Microsoft’s AI for Good Lab?

Juan Lavista Ferres: The initiative is completely philanthropic. We partner with organizations around the world, and we provide them with our AI skills, our AI technology and our AI knowledge, and they provide the subject matter experts.

We create teams that combine those two efforts and, collectively, we help them solve their problems. This is extremely important because we’ve seen that AI can help a lot of these organizations with a lot of these problems, but unfortunately there’s a big gap in AI skills, especially in nonprofits and government organizations working on these issues. They generally do not have the capacity or structure to hire or retain the talent that is needed, and that is why we decided to make a philanthropic investment to help the world with these problems.

We have a lab here in Redmond, a lab in New York and a lab in Nairobi. We also have people in Uruguay and postdocs in Colombia. We work in many areas, and health is one of them, a very important area for us. We do a lot of work in medical imaging, such as CT scans and X-rays, and in areas where we have a lot of unstructured data, such as text. We can use AI to help these doctors learn more or understand problems better.

MNH: What are you doing to ensure that AI doesn’t cause more harm than good, especially when it comes to inherent data biases?

Ferres: That is something that is in our DNA; it is essential for Microsoft. Even before AI became a trend over the past two years, Microsoft had been investing heavily in areas like responsible AI. Every project we have goes through a very exhaustive responsible AI review. That’s also why it’s so critical to us that we never work on a project without a subject matter expert on the other side. And not just any experts in the field; we try to choose the best. For example, we are working on pancreatic cancer with Johns Hopkins University, with some of the best doctors in the world working on cancer.

The reason this is so important, especially as it relates to what you mentioned, is that these experts are the ones with the best understanding of how the data was collected and of any potential bias. But even with that, we go back through our responsible AI review and make sure the data is representative. We just published a book about this.

MNH: Yes. Tell me about the book.

Ferres: In the first two chapters I talk a lot about potential biases and the risks they carry, and unfortunately there are a lot of bad examples for society, particularly in areas like skin cancer screening. A lot of skin cancer models have been trained on the skin of white people, because that is typically the population with the most access to doctors and the population that skin cancer most often affects, and that is why other skin tones end up insufficiently represented in the training data.
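As a rough illustration of the kind of representativeness check Ferres describes, here is a minimal sketch in Python. The dataset, the skin_tone column and the lesion_metadata.csv file are hypothetical, chosen only for the example; this is not the lab’s actual tooling.

```python
# Sketch: audit a labeled skin-lesion dataset for subgroup representation.
# Assumes a metadata CSV with hypothetical columns "skin_tone" (e.g. a
# Fitzpatrick group) and "label" (1 = malignant). Names are illustrative only.
import pandas as pd

df = pd.read_csv("lesion_metadata.csv")

# Share of each skin-tone group in the training data.
group_share = df["skin_tone"].value_counts(normalize=True)

# Positive-label rate per group: large gaps can signal sampling bias
# rather than true differences in disease prevalence.
label_rate = df.groupby("skin_tone")["label"].mean()

report = pd.DataFrame({"share_of_data": group_share, "positive_rate": label_rate})
print(report.sort_values("share_of_data", ascending=False))
```

If one group accounts for nearly all of the rows, a model trained on this data may perform poorly on everyone else, which is exactly the failure mode described for skin cancer screening.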

So, we do a very thorough review. Microsoft has been leading the way, in my opinion, in responsible AI. We have a chief responsible AI officer at Microsoft, Natasha Crampton.

Additionally, we are a research organization, so we will publish the results. We’ll go through peer review to make sure we’re not missing anything on this, and in the end, our partners are the ones who will understand the technology.

Our job is to make sure they understand all of these risks and potential biases.

MNH: You mentioned that the first two chapters look at the issue of potential biases in the data. What does the rest of the book address?

Ferres: The book has around 30 chapters, and each chapter is a case study; there are sustainability case studies and health case studies. These are real case studies that we have worked on with partners. But in the first three chapters I give an overview of some of the potential risks and try to explain them in a way that is easy for people to understand. Many people have heard about biases and data collection problems, but they sometimes find it difficult to realize how easily these problems can arise.

We must also understand that, even beyond bias, just because something can be predicted does not necessarily mean it is causal. Predictive power does not imply causality. People often understand and repeat that correlation does not imply causation, but sometimes they don’t realize that predictive power also doesn’t imply causality, and that even explainable AI doesn’t imply causality. That’s really important to us. Those are some of the examples I cover in the book.
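To see why predictive power does not imply causality, consider a toy simulation (synthetic data, not an example from the book): a marker that has no causal effect on the outcome, but shares a hidden confounder with it, still predicts the outcome far better than chance.

```python
# Sketch: predictive power without causality, using synthetic data.
# A hidden confounder drives both a "marker" and the outcome; the marker
# predicts the outcome well despite having no causal effect on it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
confounder = rng.normal(size=n)                      # e.g. underlying disease severity
marker = confounder + rng.normal(scale=0.5, size=n)  # caused BY the confounder, not causal itself
outcome = (confounder + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    marker.reshape(-1, 1), outcome, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC of the non-causal marker: {auc:.2f}")  # well above the 0.5 chance level
```

Intervening on the marker here would change nothing about the outcome, yet a model built on it scores well, which is the trap Ferres warns about.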

MNH: What recommendations do you have for government regulators regarding creating rules for the implementation of AI in healthcare?

Ferres: I’m not the right person to talk about the regulation itself, but I can tell you that, in general, regulators need to understand two things very well.

First, what AI is and what it is not: what is within the power of AI, and what is not. I believe that a very good understanding of the technology will always help you make better decisions. We believe that any technology can be used for good and for ill, and in many ways it is our social responsibility to ensure that we use technology in the best way, maximizing the likelihood that it will be used for good and minimizing the risk factors.

So from that perspective, I think there’s a lot of work to be done to ensure that people understand the technology. That’s rule number one.

Listen, we as a society need to understand the technology better. What we see, and what I see personally, is that it has enormous potential. We need to make sure we maximize that potential but also use it correctly. And that requires governments, organizations, the private sector and nonprofits to start by understanding the technology and its risks, and to work together to minimize those risks.
