
Soumya Ray: How Can Artificial Intelligence Improve Medicine?

September 14, 2021

When Soumya Ray arrived in the United States in 1998, the Human Genome Project was ramping up, and the researchers involved needed solutions to several intractable problems. “The amount of data was immense,” Ray explained, adding that the machines that read the data “were not entirely reliable. You needed a system that could correct the errors introduced in the reading process.” Artificial intelligence, in the form of bioinformatics, delivered that solution. “That’s when bioinformatics was born.”

Ray has worked at the intersection of artificial intelligence and medical science ever since, producing dozens of research papers and textbook chapters, teaching and advising undergraduate and graduate students, and continuing his research into artificial intelligence and machine learning. His work explores subjects ranging from spam filters to medical robots.

Ray teaches “Introduction to Artificial Intelligence” in the Online Master of Science in Computer Science program at Case Western Reserve University. We spoke to him about the course, teaching online and the past, present and future of artificial intelligence.

What is artificial intelligence? Are we teaching computers to think like humans, or is AI something different?

A big part of intelligence is making sense of very complex inputs. That’s actually a core property of an intelligent system; if it couldn’t do that, you couldn’t really call it intelligent. As humans, we look at something and immediately understand what it is. That requires a huge amount of processing. We’re not conscious of it, but it’s happening, and it’s not automatic. A baby doesn’t do that; it’s something that has to be learned.

That’s why people ask, “How is it possible for a computer to do as well as the human brain?” But it’s not about doing as well or better; it’s about doing something different.

The brain has biological and evolutionary constraints. It was shaped by what really mattered for survival. Human intelligence evolved so that we specialize in recognizing patterns that prevent us from being killed and eaten by lions. Because of that, we’re good at some things but not so good at others. When it comes to processing a large data set, well, there’s no lion chasing you in a large data set. At least I hope not!

AI employs algorithms to build intelligent systems. These are useful in many ways: finding patterns in huge data sets, for example. You could teach people to find those patterns, but it would take a great deal of time and expense to train them and then to complete the work. Artificial intelligence makes a lot more sense economically, and it’s also faster.
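
To make the idea of pattern-finding concrete, here is a minimal sketch, assuming a standard clustering algorithm (k-means, via scikit-learn) and made-up two-dimensional data; it illustrates the kind of pattern discovery Ray describes, not anything from his own work:

```python
# Illustrative only: k-means clustering discovering two groups in
# synthetic data. The library choice and the data are assumptions,
# not something discussed in the interview.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic "patterns": points scattered around two centers.
data = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5.0, 5.0), scale=0.5, size=(100, 2)),
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)  # recovered centers, close to (0,0) and (5,5)
```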

You specialize in AI applications for healthcare and bioinformatics. Those were among the earliest uses for AI, weren’t they?

Yes, people have been trying to use AI systems in medical diagnosis for a very long time. In fact, MYCIN, one of the earliest systems, dates back to the 1970s. The technology has led us in many different directions.

When I arrived in the United States in 1998, the Human Genome Project was ramping up. It was the first large-scale effort to map the human genome. It was obvious that the process could not be manual. The amount of data was immense; the human genome contains roughly 3 billion base pairs. A manual approach would have required decades to complete. Additionally, the machines that read the base pairs were not entirely reliable. You needed a system that could correct the errors introduced in the reading process.

That’s how bioinformatics was born—the beginning of the big data era in the natural sciences. This was the test case, a huge bet that paid off magnificently. AI was critical to its success. The algorithms and technology made possible what humans could not do on their own.
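
The error-correction idea Ray describes can be sketched in miniature: if the same stretch of DNA is read several times, a per-position majority vote cancels out random reading errors. The toy code below assumes pre-aligned reads of equal length and is far simpler than any real sequencing pipeline:

```python
# Toy sketch of the error-correction principle: several noisy machine
# reads of the same DNA stretch are combined by per-position majority
# vote. Real pipelines are far more sophisticated; this only shows why
# redundant reads let you correct random errors.
from collections import Counter

def consensus(reads):
    """Return the majority base at each position across aligned reads."""
    return "".join(
        Counter(bases).most_common(1)[0][0]
        for bases in zip(*reads)
    )

# Three reads of the same fragment, each with a different single error.
reads = [
    "ACGTACGT",
    "ACGAACGT",   # error at position 3
    "ACGTACCT",   # error at position 6
]
print(consensus(reads))  # ACGTACGT: both errors voted out
```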

So, fast forward to today. The faculty at Case Western Reserve have a National Science Foundation grant to study AI technologies in surgical robots. It’s very cutting-edge. Current generations of surgical robots don’t have AI in them, but we can look ahead to a day when they will. It’s not as simple as developing the technology. There are other issues to consider, including how surgeons effectively use these new technologies without losing their own skills in the process.

Other faculty at CWRU are studying image-based diagnosis. We can now use AI methods to read X-rays and CAT scans, but there’s a lot of work to do in this area. How do we optimize our results? How do we improve the rate of correct diagnoses while driving down both false positives and false negatives?
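
The trade-off Ray mentions is usually tracked with sensitivity and specificity. Here is a small sketch with hypothetical confusion-matrix counts (the numbers are invented for illustration, not results from any CWRU study):

```python
# Standard diagnostic metrics from confusion-matrix counts. Raising
# sensitivity often costs specificity as a decision threshold moves,
# which is the trade-off discussed above.

def diagnostic_rates(tp, fp, tn, fn):
    """Compute sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate: sick patients caught
    specificity = tn / (tn + fp)  # true-negative rate: healthy patients cleared
    return sensitivity, specificity

# Made-up counts for a hypothetical X-ray classifier on 1,000 scans.
sens, spec = diagnostic_rates(tp=85, fp=40, tn=860, fn=15)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# sensitivity=0.85, specificity=0.96
```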

What will students learn in “Introduction to Artificial Intelligence”?

Students develop a unified view of the AI field without digging too deep into any specific component. It’s meant to be a broad survey because AI is a big field with many different pieces. We cover all the basics and foundations, then take a closer look at some subfields. At the end, you’ll have the framework so that you can systematically fill in the details through more advanced AI courses, if you choose to do so.

Even though it’s an introductory course, we explore complex and abstract ideas and how they are implemented in various applications. I present several case studies. For example, we study how NASA uses AI technologies in its rovers. These vehicles roam Mars and other distant environments where regular communication with Earth is impossible. It takes a long time to transmit a message over that distance, so a rover has to perceive, plan and act largely on its own.
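
How long is “a long time”? A quick back-of-the-envelope calculation with the speed of light and approximate Earth-Mars distances (rough published values, not figures from the interview) makes the point:

```python
# Rough arithmetic behind "it takes a long time to transmit a message":
# one-way light-travel time between Earth and Mars. Distances are
# approximate; everything here is back-of-the-envelope.
SPEED_OF_LIGHT_KM_S = 299_792      # km/s
MARS_NEAREST_KM = 54.6e6           # ~closest approach
MARS_FARTHEST_KM = 401e6           # ~farthest separation

for label, dist in [("nearest", MARS_NEAREST_KM), ("farthest", MARS_FARTHEST_KM)]:
    minutes = dist / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: ~{minutes:.0f} min one way")
# nearest: ~3 min, farthest: ~22 min. A rover cannot wait on Earth
# for every decision, so it must plan and act autonomously.
```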

We review case studies in clinical medical diagnosis, autonomous vehicles and Deep Blue (that’s the computer that defeated world chess champion Garry Kasparov in 1997). All these advances remain highly relevant to how AI techniques are used today. Hopefully, they give the students a feel for not just the abstract ideas but how to put them into practice.
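
Deep Blue’s play rested on game-tree search with alpha-beta pruning, run on specialized hardware. The sketch below shows that core idea on a hand-made tree of position scores rather than real chess moves:

```python
# Minimal alpha-beta search over a toy game tree. Deep Blue's actual
# system was a heavily tuned, hardware-accelerated version of this
# basic technique; the tree here is a made-up two-ply example.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the minimax value of a game tree with alpha-beta pruning."""
    if isinstance(node, (int, float)):      # leaf: a position's evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent won't allow this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# A tiny two-ply game: our move, then the opponent's best reply.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, maximizing=True))  # 3: best value against optimal play
```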

What’s the biggest difference between online and in-person learning?

The biggest difference is that we’ve modularized the content for the online course. We’ve broken lectures down into 10-minute segments. It’s really interesting. As I was doing it, I was thinking: “Can this actually be done? Can something really be explained in 10 minutes without awkward cutoffs and omissions?” And the answer is yes. It got me thinking about changing the way I teach the in-person course, because it’s sometimes hard for people to pay attention for long stretches.

Otherwise, the courses are very similar. In fact, we plan to offer the courses side by side, so both groups get the same experience: same assessments, same projects and assignments, everything.

What does the future hold for AI?

There’s limitless potential. The advances we’ve made in health-related, data-driven approaches over the past decades are now making their way into other fields. Some of them are even repeating the same mistakes we made. It’s very interesting to see. The big questions are: what do people want to do with AI, and which technologies will come into play? We can develop many general use cases. AI’s applications are quite broad.

The Online MS in Computer Science program at Case Western Reserve University’s Case School of Engineering follows a broad curriculum that spotlights artificial intelligence, databases and data mining, security and privacy, and software engineering. Live online classes supplemented by recorded lectures emphasize hands-on methods to promote effective learning. Graduates develop essential leadership skills and benefit from a robust alumni network. If that sounds like the right program for you, why not apply today?