Cancer’s Crystal Ball: Personalized Tumor Models To Guide Treatment
Poised to make a difference
When Kristin Swanson’s father was being treated for lung cancer, doctors collected no shortage of data on his disease. They scanned his chest, regularly drew blood, and biopsied his tumor to study the cancerous cells. But each test told a different, sometimes contradictory, story about the cancer. And when Swanson asked the doctors about her dad’s prognosis, their predictions often seemed rooted in averages for all lung cancer patients, rather than being informed by any of the test results.
“I realized,” says Swanson, who started her career as an applied mathematician, “that there were all these different pieces of data, and no one was bringing them together.”
The experience motivated Swanson to focus her research on developing ways to integrate data on a patient’s cancer into personalized models of tumor growth. Such models, her research group at Northwestern University has shown, can better predict how a tumor will respond to different treatments and drugs than any one piece of data alone.
Swanson isn’t the only researcher trying to build such models. Mathematicians, physicists, and engineers with diverse backgrounds have realized that their expertise in studying complex systems can help them make sense of cancer. So they’re creating models of the physical forces on tumors; developing equations to describe how cancers grow and spread; and using mathematical approaches to study how the molecular pathways in cancer cells interact.
“The reality is that no matter how complicated the molecular biology is, tumors are physical systems that obey the laws of physics,” says Vittorio Cristini, PhD, professor and director of computational biology in the pathology department at the University of New Mexico Cancer Center.
The power of the models and equations lies in the fact that data on any given patient’s tumor can be plugged into the formulas and the resulting output—whether it’s a prediction of a drug’s benefit, a survival prognosis, or a description of the tumor’s growth—will be personalized to that patient. The models haven’t yet led to major changes in how doctors treat cancer outside of clinical trials, but they’re poised to make a difference.
Modeling Tumor Growth
Swanson has focused her initial modeling efforts on gliomas, aggressive brain tumors with few treatment options. Gliomas rarely spread to other organs, making them an appealingly simple tumor type to model. But it's also notoriously hard to predict the prognosis for patients with glioblastoma multiforme, so there's plenty of room for improvement in the clinical realm. Brain MRI scans can reveal some aspects of a tumor's size, but little else.
“Cancer is by definition a dynamic disease,” says Swanson. “So it doesn’t make sense to judge it with scans at single time points.”
In 2010, Swanson reported in Physics in Medicine and Biology that by creating a growth model of a patient’s glioma from a series of brain MRIs, she could predict whether the tumor would shrink in response to radiation therapy. She’s now working with clinicians at Northwestern and other medical institutions to optimize how this model can guide the radiation therapy dose chosen for each patient and to create an iPad app that would put the models into the hands of doctors.
Her research team has also adapted the model for other situations. From two MRIs taken at least five days apart, they create a mathematical description of the growth kinetics and shape of a patient's tumor. They can then use the model to project the tumor's size at any later date.
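In Swanson's published work, the growth model underlying this approach is a proliferation-invasion (reaction-diffusion) equation, in which tumor cells both diffuse outward and multiply in place. A minimal one-dimensional sketch of that idea follows; the parameter values and the detection threshold are illustrative assumptions, not fitted patient data.

```python
import numpy as np

def project_tumor_profile(D=0.1, rho=0.05, days=120, L=100.0, n=400):
    """Evolve a 1-D proliferation-invasion (Fisher-KPP-type) equation,
        dc/dt = D * d2c/dx2 + rho * c * (1 - c),
    the reaction-diffusion form used in Swanson-style glioma models.
    D (mm^2/day) and rho (1/day) here are illustrative, not fitted."""
    dx = L / n
    dt = 0.2 * dx**2 / (2 * D)            # stable explicit time step
    x = np.linspace(0, L, n)
    c = np.exp(-x**2 / 10.0)              # small initial lesion at x = 0
    for _ in range(int(days / dt)):
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        lap[0] = 2 * (c[1] - c[0]) / dx**2     # no-flux boundary at x = 0
        lap[-1] = 2 * (c[-2] - c[-1]) / dx**2  # no-flux boundary at x = L
        c = np.clip(c + dt * (D * lap + rho * c * (1 - c)), 0.0, 1.0)
    return x, c

def detectable_radius(x, c, threshold=0.16):
    """Radius where cell density crosses an assumed imaging-detection
    threshold (the 0.16 value is illustrative)."""
    above = np.where(c >= threshold)[0]
    return x[above[-1]] if above.size else 0.0
```

Solutions of this equation approach a traveling wave whose speed is 2*sqrt(D*rho), so two tumor radii measured a known number of days apart are enough to pin down the patient-specific pair (D, rho), after which the profile can be projected to any later date.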
Their most recent study, published in May 2013 in Cancer Research, used these modeled projections to study the effectiveness of chemotherapy and radiation combinations in 63 glioma patients. Standard response metrics comparing the size of a tumor before and after treatment are poor predictors of overall survival, Swanson says. But using models based on routine scans the patients already had, her group was instead able to compare the projected size of a tumor without treatment to the tumor's actual size after treatment. The resulting metric, dubbed "Days Gained," captures not just the net change in a tumor's size but also its growth speed. Patients with a "Days Gained" result of more than 117 days after their initial therapy were the most likely to survive long-term.
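The arithmetic behind a score of this kind is simple to sketch. Assuming, purely for illustration, a linear radial growth projection r(t) = r0 + v*t fitted from the pre-treatment scans (not the published formula), "Days Gained" asks on what day the untreated projection would have reached the size actually observed after treatment:

```python
def days_gained(v, r0, r_post, t_post):
    """Hedged sketch of the 'Days Gained' idea, not the published metric.
    v      -- radial growth velocity (mm/day) fit from pre-treatment scans
    r0     -- tumor radius (mm) at the start of therapy (day 0)
    r_post -- radius observed on the post-treatment scan
    t_post -- day of the post-treatment scan
    The untreated projection r(t) = r0 + v*t reaches r_post on day
    (r_post - r0) / v; the score measures how far behind that schedule
    the treated tumor actually is."""
    t_reach = (r_post - r0) / v
    return t_post - t_reach
```

Under these assumptions, a tumor that shrank from 20 mm to 18 mm by day 60 while projected to grow at 0.1 mm/day scores 80 days gained. The same two millimeters of shrinkage is worth more days for a slowly growing tumor than for a fast one, which is how the metric folds growth speed into the score.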
Earlier this year, Swanson laid out the current status of the field she calls mathematical neuro-oncology in a Frontiers in Oncology review. Through metrics like "Days Gained," she wrote, models are helping identify patients who would be better served by deviations from the standard of care. But it will take more doctors and institutions buying into the benefits of models before such model-based personalized care becomes routine. Already, though, Swanson says she's seen growing acceptance of modeling among clinicians.
“A dozen years ago, I gave presentations on modeling tumors and was routinely laughed at by oncologists,” says Swanson. “Now that we’re getting real clinical results and have cohorts of patients, we’re being listened to.”
Master Equations of Cancer
While Swanson primarily models how tumors grow, Cristini is more concerned with mathematically describing how molecules from the outside can infiltrate a tumor. Whether or not drugs can reach the deepest, densest parts of a tumor, he thinks, is a driving factor in whether the drug can effectively fight the cancer.
“The physics of transport might be the single most important mechanism for drug resistance,” he says.
By modeling the environment in and around a tumor, he's found, he can predict whether a treatment will succeed based on how well drugs perfuse into the tumor. And such predictions, like those Swanson has made with her glioma models, can help clinicians personalize their choices among therapy options and dosages.
In August 2013 in PNAS, Cristini and his collaborators published the results of a study on colorectal cancers that had spread to the livers of 10 patients. Using microscope slides containing samples of the liver tumors after chemotherapy, the scientists calculated the distribution and sizes of blood vessels that ran through each tumor. Then, they analyzed which tumor cells, and how many, fell into the so-called “kill radius,” the zone where cells had been successfully killed by chemotherapy. Working backward, the team was able to generate a mathematical equation linking blood vessel characteristics to the resulting kill radius. The equation can now be used prospectively to calculate what dosage of chemotherapy is required to penetrate the entire tumor.
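The paper's actual equation was derived from the tumors' measured vessel geometry. Purely as an illustration of the working-backward logic, one can assume drug concentration decays exponentially with distance from a vessel and invert for the dose that covers the whole tumor; the decay form and the penetration-length parameter here are assumptions for the sketch, not Cristini's published model.

```python
import math

def kill_radius(dose, c_kill, penetration_length):
    """Illustrative model (not the published equation): drug concentration
    decays as c(r) = dose * exp(-r / penetration_length) with distance r
    from a vessel; the kill radius is where c(r) falls to the lethal
    threshold c_kill."""
    if dose <= c_kill:
        return 0.0
    return penetration_length * math.log(dose / c_kill)

def dose_for_full_coverage(r_max, c_kill, penetration_length):
    """Invert the relation: the dose needed so the kill radius spans the
    largest vessel-to-cell distance r_max seen on the tumor slides."""
    return c_kill * math.exp(r_max / penetration_length)
```

Even in this toy version, the exponential decay means that doubling the required kill radius squares the required dose-to-threshold ratio, which hints at why the densest, most poorly vascularized regions of a tumor are so hard to reach.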
Cristini has applied his mathematical models of drug diffusion not only to the liver but also to tumors in the brain and breast. In PLOS ONE in April 2013, he showed that the inability of immunotherapy drugs to reach every part of a breast cancer explains why some tumors are unresponsive to the therapy. Most clinicians and biologists, Cristini says, had assumed that differences in how tumor cells respond to drugs at the molecular level, rather than differences in the drugs' ability to reach those cells, were to blame for the differing outcomes.
Cristini’s goal is to develop what he calls “master equations of cancer.” Every physical parameter of a tumor, he says, can be described through physics and mathematics. And, as researchers like him are increasingly showing, many of these physical attributes are closely linked to differences between tumors and treatment success rates.
Modeling Molecular Pathways
But the complexity of cancers doesn't lie only in physical properties that can be extracted from scans. Tumors are also diverse at the molecular and genetic levels. And modeling is ripe for understanding how a tumor's molecular attributes influence its physical properties.
With multi-scale modeling, says Thomas Deisboeck, MD, associate professor of radiology at Massachusetts General Hospital and Harvard University, researchers integrate data not only from scans, but also from isolated cells, tumor biopsies, and even blood samples. "But getting that type of data consistently even for one patient is spotty, let alone trying to get a big data set," he says. And that's what's holding the field back.
To increase the power of existing data sets, he'd like to see more collaboration within the field and the establishment of standards and common markup languages that work for multi-scale models, he and his colleagues wrote in a 2013 commentary in Cancer Informatics. A new markup language called TumorML, he says, is poised to make a difference by working for models at both the macroscopic and microscopic levels.
The more data researchers integrate, though, the more data they have to store and process for modeling. And that presents a challenge. “If a model only runs at Sandia or Los Alamos because it requires so much computing power,” he says, “then it’s not very practical for most clinicians to use.”
Once multi-scale models are perfected, Deisboeck expects them to be used not only to guide decisions on individual patients, but to generate hypotheses on how novel cancer drugs will affect every aspect of a tumor.
“It’s all about target validation,” he says. “With a model, you can ask how targeting a particular protein would change the behavior of the rest of a cellular system.”
If there are five drug options for a particular cancer, he explains, a multi-scale model could predict which drug or drug combinations—even what order and dosages to give the drugs in—would best help a particular patient.
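As a sketch of what that decision support might look like, suppose a validated patient-specific model can score any candidate regimen. Ranking the options then reduces to enumerating drugs and orderings and sorting by the model's predicted benefit; the scoring function below is a hypothetical stand-in, not a real multi-scale simulation.

```python
from itertools import permutations

def rank_regimens(drugs, predicted_benefit, max_len=2):
    """Enumerate single drugs and ordered combinations up to max_len
    drugs, score each with the model-supplied predicted_benefit(sequence)
    callable, and return them best-first. Order matters, so ('A', 'B')
    and ('B', 'A') are scored separately."""
    candidates = []
    for k in range(1, max_len + 1):
        candidates += list(permutations(drugs, k))
    return sorted(candidates, key=predicted_benefit, reverse=True)
```

The hard part, of course, is the predicted_benefit function itself; that is exactly the slot a multi-scale tumor model would fill.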
A hundred years ago, predicting a hurricane before it made landfall was nearly impossible. Today, sophisticated satellite measurements and a plethora of data are plugged into models that predict when and where and with how much force a hurricane will hit. That’s the metaphor that Thomas Yankeelov, PhD, associate professor of radiology and cancer biology at Vanderbilt-Ingram Cancer Center, uses to talk about where tumor modeling stands now and where it’s going. The technologies to access quantitative data from scans have already been developed; now, it’s a matter of correlating available data to outcomes.
Yankeelov’s lab at Vanderbilt University is working with oncologists to model how breast tumors respond to neoadjuvant therapy—drugs given before surgery with the goal of eliminating the cancer. By scanning patients before and after a neoadjuvant drug is given, they’re developing equations that may be able to predict better than individual scans whether or not the neoadjuvant therapy will be effective at getting rid of cancer cells.
Like the models others are developing, Yankeelov's work ultimately aims to better guide treatment decisions. The challenge is getting physicians to buy into using the technology, or at least, initially, to integrate it into clinical trials.
“Modeling can help us design better informed clinical trials and gauge better whether treatments are working,” he says. His collaborations with physicians help bring the technology closer to those uses, transitioning from bench to bedside.
Today, a patient being treated for cancer will likely hear the same predictions that have been used in the past on the odds of treatment working for their tumor—based on averages. But as models make their way toward the clinic, these predictions will start to change. And for doctors and patients alike, that could prove a useful forecast.