Google computers trained to detect cancer
New approach achieved 89 percent accuracy, compared to 73 percent for doctors
Google is using the power of computer-based reasoning to detect breast cancer, training the tool to look for cell patterns in slides of tissue, much the same way that the brain of a doctor might work.
New findings show that this approach — enlisting machine learning, predictive analytics and pattern recognition — achieved 89 percent accuracy, surpassing the 73 percent scored by a human pathologist.
“We showed that it was possible to train a model that either matched or exceeded the performance of a pathologist who had unlimited time to examine the slides,” technical lead Martin Stumpe and product manager Lily Peng wrote in a Google Research blog post.
Pathologists have always faced a huge data problem in obtaining an accurate diagnosis. A massive amount of information — slides containing cells from tissue biopsies, thinly sliced and stained — must be scanned in search of any abnormal cells. And time is of the essence.
There can be many slides per patient. And each slide contains more than 10 gigapixels when digitized at 40 times magnification, according to the Google team.
“Imagine having to go through a thousand 10-megapixel photos, and having to be responsible for every pixel,” the team wrote.
As a result, even well-trained doctors may arrive at different conclusions, or miss the small percentage of images crucial to identifying pathologies. For example, agreement in diagnosis for some forms of breast cancer and prostate cancer can be as low as 48 percent, according to Google.
Google’s approach is to input vast amounts of data into its system, then train it to look for patterns.
It’s called “deep learning” — a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate what needs to be done.
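The idea of an algorithm that “programs itself” from labeled examples can be illustrated with a toy sketch. This is not Google’s model — it is a minimal, hypothetical logistic classifier trained by gradient descent on made-up two-feature “cell” measurements, showing how parameters are adjusted from examples rather than hand-coded:

```python
# Toy illustration (not Google's system): a classifier learns its own
# parameters from labeled examples instead of following hand-written rules.
import math
import random

random.seed(0)

# Synthetic labeled examples: two measurements per "cell", label 1 = abnormal.
examples = []
for _ in range(200):
    label = random.randint(0, 1)
    # Abnormal examples are drawn from a shifted distribution.
    x1 = random.gauss(2.0 if label else 0.0, 1.0)
    x2 = random.gauss(2.0 if label else 0.0, 1.0)
    examples.append(((x1, x2), label))

w1, w2, b = 0.0, 0.0, 0.0  # parameters the algorithm "programs" itself
lr = 0.1                   # learning rate

for epoch in range(100):
    for (x1, x2), y in examples:
        # Sigmoid gives a probability that the example is abnormal.
        p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        err = p - y  # gradient of the log-loss for this example
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

# Measure how often the trained model labels the examples correctly.
correct = sum(
    (1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b))) > 0.5) == bool(y)
    for (x1, x2), y in examples
)
accuracy = correct / len(examples)
```

The model ends up separating the two distributions well, even though no rule for “abnormal” was ever written down — the same principle, at vastly larger scale, underlies deep learning on pathology slides.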
The Google team found that the system can autonomously learn what pathology looks like. The computer was educated by studying billions of images donated by Radboud University Medical Center in the Netherlands.
Its algorithms were optimized for localization of breast cancer that has spread, or metastasized, to lymph nodes adjacent to the breast.
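Localization on a slide too large to classify in one pass is typically done patch by patch. The sketch below is an assumption about the general technique, not Google’s exact pipeline: a (here, tiny stand-in) slide is tiled into fixed-size patches, each patch is scored by a model, and the scores form a heatmap marking suspected tumor regions:

```python
# Sketch of patch-based tumor localization. The "slide" is a small 8x8
# grid of intensities standing in for a gigapixel image; a block of high
# values plays the role of metastatic tissue.
slide = [[0] * 8 for _ in range(8)]
for r in range(4, 8):
    for c in range(4, 8):
        slide[r][c] = 9  # "tumor" occupies the bottom-right quadrant

PATCH = 4  # patch size in pixels (real pipelines use e.g. 256x256)

def score_patch(patch):
    """Placeholder for a trained model: mean intensity as a 'tumor score'."""
    flat = [v for row in patch for v in row]
    return sum(flat) / len(flat)

# Slide the patch window over the image and record a score per patch.
heatmap = []
for r in range(0, 8, PATCH):
    row_scores = []
    for c in range(0, 8, PATCH):
        patch = [slide[r + i][c:c + PATCH] for i in range(PATCH)]
        row_scores.append(score_patch(patch))
    heatmap.append(row_scores)
```

The resulting 2x2 heatmap scores the bottom-right patch highest, pointing a reviewer directly at the suspicious region — the same role the algorithm’s per-patch predictions play on real lymph-node slides.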
Google Research’s work offers the latest proof that artificial intelligence no longer exists just in the realm of science fiction, but can be used to help humans.
Research teams embedded throughout Google tackle tough problems, applying data mining and artificial intelligence to challenges like these.
Previously, the team applied deep learning to automatically detect retinal damage in photos of the eyes of patients with diabetes.
Other companies are also exploring the field. IBM, which developed the Watson technology that triumphed on the TV show “Jeopardy!,” has shown that its system can autonomously learn what a pathology looks like, such as abnormal narrowing in a coronary artery, according to Technology Review, published by the Massachusetts Institute of Technology. The computer’s education is being sped up by the study of 30 billion images from hospitals, pharmaceutical companies and clinical research organizations, which IBM acquired in its $1 billion purchase of Merge Healthcare, the journal reported.
Computers won’t replace doctors, Google said. They lack a human’s breadth of knowledge and experience. For example, a computer can’t detect abnormalities that it hasn’t been trained to classify, such as different types of cancer.
But perhaps a computer could automatically alert the doctor to the most important images. Or it could help doctors more easily and accurately measure tumor size — a factor that is linked to prognosis.
“To ensure the best clinical outcome for patients, these algorithms need to be incorporated in a way that complements the pathologist’s workflow,” the Google team wrote.
Originally posted on mercurynews.com