Identifying and Overcoming Skepticism about Biomedical Computing

Modelers should take the lead.

Jim DeLeo, PhD, NIH Computer Scientist

Many collaborators[1] with whom modelers[2] work have little or no training in modeling[3], so it is natural that they may be cautious, intimidated or uninterested; such attitudes give rise to skepticism[4]. Although, ideally, collaborators would learn more about modeling, it is understandable that they do not: they are busy keeping up with their own rapidly changing specialty fields, have little time, or are simply not interested. Identifying and overcoming such skepticism is important if biomedical computing is to be of greater value to society, and so I would like to suggest here that we, the modelers, take the lead in addressing and reducing it.

I hope we can agree that there really is no well-established "modeling community." Typically, modelers are renegade individualists who are fuzzy members of the fuzzy subsets of different modeling disciplines such as computer science, statistics, bioinformatics, analytics and others. It would be helpful if these renegades would transcend their silos, overcome their self-oriented competitive urges and establish more cooperative relationships with one another and with their collaborators. This objective has motivated the NIH Biomedical Computing Interest Group (BCIG) since its inception 10 years ago. BCIG's mission is to encourage, support and promote good and appropriate computing methodology and technology in all aspects of biomedical research, development and patient care, and it is open to everyone with an interest in this mission. I propose that we form other, geographically dispersed BCIG groups and network them electronically. Are you interested? I would be happy to help facilitate this.

Conference participation, tutorial production and distribution, crowdsourcing and multi-institutional team building are examples of what we can do to improve relationships and extend the choice of, and access to, computational methodologies. For example, BCIG is helping to formulate a panel for a workshop on "Proper Methods for Evaluating Performance of Computational Intelligence Methods and How to Encourage Use of These Evaluation Methods," proposed for the 2012 World Congress on Computational Intelligence; a sketch of one such evaluation method appears below. As another example, BCIG is about to put in place a mechanism for its subscribers to brainstorm on broad biomedical computing topics, a kind of local crowdsourcing operation. The first topic will be "Machine Learning and Statistics: the Interface."
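
To make the workshop topic concrete, here is one widely used evaluation method of the kind such a panel might discuss: stratified k-fold cross-validation, which reports a mean score and its spread rather than a single, possibly optimistic number. This is a minimal sketch in Python with scikit-learn; the synthetic dataset and the choice of classifier are illustrative assumptions, not anything taken from the workshop itself.

```python
# Minimal sketch: stratified 5-fold cross-validation with scikit-learn.
# The synthetic dataset and classifier choice are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for a biomedical dataset: 500 cases, 20 features.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

# Stratification keeps the class balance the same in every fold.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0),
                         X, y, cv=cv, scoring="roc_auc")

# Report a mean and a spread, not one flattering number.
print(f"ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```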


We modelers also need to integrate and standardize our style of thinking as well as our terminology and nomenclature. Many other fields do this as they begin to mature. Statisticians, computer scientists and bioinformaticians think differently from one another. Even within modeler subgroups, individuals think differently about their approaches to modeling. We need to focus on concept consilience and common ontologies!


Modelers should try to convince collaborators that modeling is meaningful even when the models may be imperfect. The key is to demonstrate success in significant collaborative biomedical projects—in particular (given current priorities) in translational medicine projects, i.e., projects with results that have a direct and important positive impact on health care. Although many collaborators may not be skeptical per se, some fail to see the value of using modeling in their fields. This can be framed as a challenge for modelers. They can explore these fields and find better ways to introduce modeling. I can point to several examples where computational modeling demonstrated the potential to have significant impact on medicine and biology, particularly with respect to translational medicine. For example, I have developed methodologies that predict glucose tolerance test results, breast cancer, and adverse drug reactions with accuracies suitable for clinical use.
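
As a concrete illustration of the kind of demonstration that can win collaborators over, the sketch below trains a simple classifier on the public Wisconsin breast cancer dataset bundled with scikit-learn and reports its discrimination on held-out cases. This is a hedged, minimal example, not the methodology behind the results mentioned above; the model and split parameters are assumptions chosen for brevity.

```python
# Minimal sketch: a hold-out-validated classifier on the public
# Wisconsin breast cancer dataset (not the author's methodology).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Hold out 30% of cases so performance is measured on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = make_pipeline(StandardScaler(),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# ROC AUC on the held-out set is the number worth reporting.
probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```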


Unfortunately, there have been cases in which modeling has produced overhyped, misleading or flawed outcomes. Years ago a modeler claimed that his artificial neural network (ANN) computer program could predict whether a patient presenting at an emergency room with certain symptoms and findings should be admitted to the ICU. He claimed that his ANN could do a better job than human experts faced with the same task, but his performance statistics were based only on the data used to train the ANN; he had no hold-out data for testing and validation. This is the kind of ill-designed, hyped work that gives modeling a bad name. Like all good science, modeling needs good statistical oversight, including proper testing and validation, yet modelers often skip this step. We must correct that. When proper testing and validation are missing, it lends strong support to certain groups (e.g., some fundamentalist, turf-protecting statisticians) who feel that these new-fangled tools from computer science threaten their professional identity. Computer scientists and other modelers must learn to validate their models properly, according to the standards set by good classical statistical methodology. I know of other horror stories of modeling misuse; in one, a physician used evolutionary computing to fit data in an application where a simple linear regression model would have been sufficient.
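
The hold-out pitfall in the ANN story is easy to demonstrate. The sketch below, a hypothetical stand-in using a small neural network on synthetic data (not the actual ER-admission system), shows how accuracy measured on the training data flatters the model relative to accuracy on cases it has never seen.

```python
# Hypothetical illustration of the pitfall: scoring a model on its
# own training data. Synthetic data stands in for real patients.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small, noisy synthetic dataset: 300 "patients", 25 findings each.
X, y = make_classification(n_samples=300, n_features=25,
                           n_informative=5, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(100,), max_iter=2000,
                    random_state=0)
ann.fit(X_train, y_train)

# The training score is the flattering, misleading number;
# the held-out score is the one that should be reported.
print(f"Accuracy on training data: {ann.score(X_train, y_train):.2f}")
print(f"Accuracy on held-out data: {ann.score(X_test, y_test):.2f}")
```

The same discipline applies to the second horror story: fitting the simple linear model first, as a baseline, would have shown that the evolutionary search was unnecessary.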


I have suggested here that we, the modelers, take the lead in addressing the skepticism associated with biomedical computing and that we do what we can to reduce it. I have suggested several specific things we can do in this regard, namely (1) create other BCIG groups like the NIH BCIG and network them, (2) engage in conference participation, tutorial production and distribution, crowdsourcing and multi-institutional team building, (3) integrate and standardize our style of thinking, our terminology and our nomenclature, (4) demonstrate success in projects, particularly translational medicine projects, and (5) avoid overhyped, misleading and flawed outcomes.


Footnotes:

1. Physicians, biologists and others who work in biomedical research and health care delivery.

2. The fuzzy, heterogeneous collection of individuals who work with all types of computational tools used under the general rubric "biomedical computing".

3. Developing algorithms and computer programs to solve specific problems.

4. Any questioning attitude toward knowledge, facts, or opinions stated as facts, or any doubt regarding claims.

DETAILS

Jim DeLeo has been a computer scientist for over 40 years, during which he has designed, developed and implemented innovative computational solutions to medical, space exploration and defense problems. Presently at the NIH, he works collaboratively with most of the NIH institutes and centers, other government agencies, universities and industry. His current work is inspired by the NIH Roadmap's translational medicine theme and is directed toward building intelligent computational systems that have practical impact in improving patient care.



2 Comments

Skepticism

Dr. DeLeo ought to revisit his policy on refuting skepticism. Probably the most successful advertising for science was that of Thomas Henry Huxley in his proselytizing for Darwin's ideas on evolution. Huxley, no mean scientist himself and a President of the Royal Society, was on the stump for years with the message to maintain your "thätige Skepsis" (active doubt), a phrase he took from Goethe, the poet-scientist. By inviting his audiences of working folk, merchants, matrons and gentlemen in mid-nineteenth-century England to "doubt" and question and analyze until an understanding emerged, he "sold" evolution. His label, "Darwin's bulldog", was pejorative perhaps, but his enthusiasm and imagery reached tens of thousands.
Maybe Dr. DeLeo ought to wander out of his own silo, too. He makes no mention of IMAG, the Interagency Modeling and Analysis Group. It includes staffers from 17 NIH institutes, along with NSF, DOE, EHS, NASA and other agencies, and it aids in encouraging and organizing the development and application of multi-scale integrative modeling in the interests of human health. More collaboration is needed.

Standardized thought and ontologies.

I agree that the thinking and naming in fields tend to converge over time. But I disagree that we should coerce them into convergence by intentionally standardizing our thinking and tools.

Standardized tools (and thought) are a primary barrier to agile science. Many complain about silos and stovepiped organizations and express a desire for interdisciplinary research, etc. And many of those turn right around and complain about baroque or peculiar tools or paradigms that have evolved in various domains. Those tools and methods usually arise because they work! And that efficacy should be respected, not standardized or homogenized away.

What is really necessary is not standardized artifacts, but a systemic organization of the people doing the research. Interdisciplinary work requires a collection of specialized people plus an organizational structure that allows specialists to wander outside their domain plus some people who specialize in wandering amongst domains.

Humans are the most flexible component of interdisciplinary research. Harvest and capitalize on human flexibility and leave the tools to evolve in whatever diverse directions they must to facilitate efficacy and efficiency. Then remember to "use the right tool for the right job", an adage much older than computation!

Despite this small nit, I agree completely with the main point. The spread of computation and its proper use as a tool depend fundamentally on clear advocacy (without hyperbole) by modelers and on the demonstrable success of particular models and modeling methods.
