Computing the Ravages of Time: Using Algorithms To Tackle Alzheimer’s Disease

Biomarker research, genetics, and imaging are all coming into play

In 1906, at a small medical meeting in Tübingen, Germany, physician Alois Alzheimer gave a now-famous presentation about a puzzling patient. At age 51, Auguste D.’s memory was failing rapidly. Confused and helpless, she was growing inarticulate and fearful of her family, Alzheimer reported. Auguste had died earlier that year, about four years after coming under Alzheimer’s care.

 

During the autopsy Alzheimer found dramatic shrinkage in Auguste’s brain, with many nerve cells dead or dying—plus two kinds of microscopic deposits that he had never seen before. He summed it up in his presentation abstract: “All in all, we are faced obviously with a peculiar disease process.”

 

Now, a century later, about 5 million people in the United States have Alzheimer’s disease, at a cost of more than $100 billion annually. About one in every eight people 65 years and older has been diagnosed with the disease. With lifespans continuing to lengthen and waves of baby boomers hitting prime-risk ages, the number of Alzheimer’s patients could triple by the time today’s college students enter retirement.

 

Thus far, no clinical treatment has been shown to stop Alzheimer’s neurodegeneration. In addition to searching for new pharmaceutical targets, however, researchers are grappling with other disease fundamentals: how plaques and tangles form in the brain, how best to detect the disease early, before cognitive decline starts, and how to predict a person’s genetic risk.

 

The stage is set for computational approaches to Alzheimer’s, says Arthur Toga, PhD, a professor of neurology at the University of California, Los Angeles. The slippery, highly variable nature of the disease demands sensitive tools, an aging population creates the urgency, and new technology provides the power to meet those demands. “In some sense,” he says, “we’re now set for a perfect storm for Alzheimer’s disease research.”

 

Computational tools are extending researchers’ reach at all scales. Molecular dynamics simulations visualize the protein clumps in the brain that experiments can’t capture. With unprecedented ease, data-mining methods sift through the rapidly accumulating information about the proteome and genome. And sophisticated imaging analyses reveal changes in the structure and functioning of the entire brain.

 


PROTEIN DYNAMICS: Getting at the Cause of Alzheimer’s

In the 1960s, researchers were finally able to use new electron microscope technology to see the molecular structure of the two types of mysterious lesions that Alzheimer first noticed in his patient’s cerebral cortex—the so-called senile plaques and neurofibrillary tangles. The plaques, it turns out, consist mainly of amyloid beta peptides, while the tangles consist of abnormal forms of the tau protein. How these two proteins influence each other is not well known.

 

[Image: In Alzheimer’s disease, beta amyloid, a protein fragment snipped from amyloid precursor protein (APP), clumps together and mixes with other molecules, neurons, and non-nerve cells. Plaques develop, as seen here, in the hippocampus and in other areas of the cerebral cortex. Courtesy of the National Institute on Aging.]

Some researchers postulate, however, that aggregates of amyloid beta—seen as senile plaques in their final form—are the proximal cause of Alzheimer’s disease, and that the tangles and other neuropathological changes are side effects of the gone-haywire amyloid beta assembly. Known as the amyloid cascade hypothesis, this idea suggests that understanding amyloid self-assembly could help crack open the puzzle of how Alzheimer’s disease starts in the first place.

 

Researchers trying to study amyloid beta through experimental approaches run into problems, however, because many amyloid beta aggregates are unstable and short-lived. Computer simulations, on the other hand, provide the chance to study small amyloid beta aggregates in full atomic-resolution glory. Over the past two decades computational power has increased, allowing ever better “all-atom” molecular dynamics simulations over short timescales. And for longer timescales, coarse-grained protein models have been developed that boil a large number of degrees of freedom down to a more manageable few, for instance by representing each amino acid with a small set of “beads.”

 

[Image: Healthy tau proteins stabilize microtubules, which themselves support neurons. In Alzheimer’s disease, damaged tau begins to pair with other threads of tau and form tangles, as seen here. The microtubules disintegrate, and the neurons’ support system collapses. Courtesy of the National Institute on Aging.]

H. Eugene Stanley, PhD, professor of physics and physiology at Boston University and director of the university’s Center for Polymer Studies, models the folding and aggregation of amyloid beta peptides with a variety of approaches. In recent work, Stanley, Brigita Urbanc, PhD, senior research associate in physics at Boston University, and their students simulated these peptides using a coarse-grained, four-bead protein model, in which each amino acid is represented by three backbone beads and one side chain bead. Urbanc, Stanley and colleagues have been especially interested in investigating differences between the two most common protein forms seen in senile plaques: amyloid beta 40 and amyloid beta 42.
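To make the coarse-graining concrete, here is a minimal sketch of what a four-bead representation might look like as a data structure, with each residue reduced to three backbone beads plus one side-chain bead. It is an illustration only, not the group’s discrete molecular dynamics code, and the choice to give glycine no side-chain bead is an assumption of this sketch.

```python
# Illustrative four-bead coarse-graining: each residue becomes three backbone
# beads (N, CA, C) plus one side-chain bead. A sketch only, not the group's
# simulation code; treating glycine as having no side-chain bead is an
# assumption made here for simplicity.

from dataclasses import dataclass

BACKBONE_BEADS = ("N", "CA", "C")

@dataclass
class Bead:
    residue_index: int
    residue: str   # one-letter amino acid code
    name: str      # "N", "CA", "C", or "SC" for the side chain

def coarse_grain(sequence: str) -> list:
    """Map a peptide sequence onto its four-bead representation."""
    beads = []
    for i, aa in enumerate(sequence):
        beads.extend(Bead(i, aa, name) for name in BACKBONE_BEADS)
        if aa != "G":                      # glycine gets no side-chain bead here
            beads.append(Bead(i, aa, "SC"))
    return beads

# Standard one-letter sequence of amyloid beta 42
ABETA42 = "DAEFRHDSGYEVHHQKLVFFAEDVGSNKGAIIGLMVGGVVIA"
print(f"{len(ABETA42)} residues -> {len(coarse_grain(ABETA42))} beads")
```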

 

Their results, published in the Proceedings of the National Academy of Sciences in 2004, showed that the amyloid beta 40 and amyloid beta 42 peptides first folded into collapsed coil structures, then assembled into chains of different lengths. During the simulation, Stanley says, amyloid beta 42 tended to form longer chains, and the amyloid beta 40 shorter ones—in proportions consistent with laboratory results.

 

More recently, Stanley and his colleagues refined their simulation model to include electrostatic interactions between pairs of charged amino acids. Published in Biophysical Journal in June 2007, their results point to a specific spot on the amyloid beta 42 chains—the C-terminal region—that may be crucial for the molecule to aggregate. This suggests that inhibitors targeting this region could prevent chain formation or change the structure of the assemblies to reduce their toxicity, Stanley says.

 

These in silico analyses are useful, Stanley points out, because they lead to predictions that lab researchers can test in vitro. And because computer simulations can reveal crucial, three-dimensional details of amyloid beta molecules, they can also aid in designing and testing drug molecules specifically for this target. “It’s easier to design a key if you know the exact, three-dimensional contours of the lock,” he says.

 


PROTEOMICS: Seeking Biomarkers to Help Diagnose Alzheimer’s

[Image: Amyloid Precursor Protein (APP) is associated with the cell membrane, the thin barrier that encloses the cell. After it is made, APP sticks through the neuron’s membrane, partly inside and partly outside the cell.]

Before Alzheimer’s disease can be treated, of course, it needs to be spotted—and the sooner the better. Evidence suggests that molecular mechanisms of the disease are at work early, perhaps even several years before neurons start dying and cognition starts to decline.

 

Yet tests that can accurately and reliably detect the disease at early stages have been hard to come by. As researchers understand more about the proteins involved in the disease process, they are also starting to investigate whether any of these molecules could serve as an Alzheimer’s biomarker.

 

The answer isn’t likely to be found in a single protein, however. The obvious candidates for biochemical markers—amyloid beta 40, amyloid beta 42, and the hyperphosphorylated tau protein—are indeed found at elevated levels in Alzheimer’s patients, but they are also found in patients with other neurological diseases as well as in some normal controls.

 

Some researchers are therefore taking a big-picture, proteomic approach. They’re looking for a combination of proteins whose expression levels in blood plasma or cerebrospinal fluid might yield a biochemical signature of Alzheimer’s disease in its early stages.

 

[Image: Enzymes act on the APP and cut it into fragments of protein, one of which is called beta amyloid.]

The lab of Tony Wyss-Coray, PhD, associate research professor of neurology at Stanford University, recently collaborated with Satoris, Inc., a biotechnology company Wyss-Coray co-founded, on such a project. Their focus: signaling proteins in plasma. Sandip Ray, chief scientific officer and cofounder of Satoris, came up with the idea. Using supervised learning software called Predictive Analysis of Microarrays (PAM), the researchers studied plasma expression levels for 120 immune response factors and other signaling proteins from an initial set of 43 Alzheimer’s disease subjects and 40 age-matched unaffected controls. The algorithm homed in on a subset of 18 proteins that seemed to be characteristic and predictive of Alzheimer’s disease.
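PAM implements the nearest-shrunken-centroids classifier: each protein’s class means are pulled toward the overall mean, and proteins whose class means collapse together drop out of the predictor. The sketch below shows the same idea with scikit-learn’s NearestCentroid on simulated data rather than the group’s actual analysis; the shrinkage threshold and the synthetic effect built into the data are assumptions, and in practice the threshold would be tuned by cross-validation.

```python
# PAM-style sketch: nearest shrunken centroids keep only the proteins whose
# class centroids survive shrinkage toward the grand mean. The data here are
# simulated stand-ins for 120 plasma proteins measured in 43 Alzheimer's and
# 40 control samples.

import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
n_ad, n_ctrl, n_proteins = 43, 40, 120
X = rng.normal(size=(n_ad + n_ctrl, n_proteins))
X[:n_ad, :18] += 1.0                 # pretend 18 proteins are truly informative
y = np.array(["AD"] * n_ad + ["control"] * n_ctrl)

clf = NearestCentroid(shrink_threshold=1.0)   # threshold set by hand for the demo
clf.fit(X, y)

# Proteins whose shrunken centroids still differ between classes form the signature
diff = np.abs(clf.centroids_[0] - clf.centroids_[1])
print(f"{int((diff > 1e-12).sum())} proteins retained in the predictor")
```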

 

Individually, each protein could not accurately classify the subjects as either a case or a control. But taken all together, the proteins’ expression signature appeared to be good at predicting disease status, Ray says.

 

The researchers tested the 18-protein predictor on an independent test set of 92 subjects, which, like the training set, was drawn from seven different patient centers to minimize possible center biases, Ray says. The predictor reached a total accuracy of 89 percent.

 

[Image: The beta amyloid fragments begin coming together into clumps outside the cell, then join other molecules and non-nerve cells to form insoluble plaques. Courtesy of the National Institute on Aging.]

The group went on to evaluate the expression signature’s predictive abilities in a set of 47 patients with mild cognitive impairment, a condition that sometimes precedes Alzheimer’s disease. The expression signature predicted that 27 of these patients would later develop Alzheimer’s disease, and indeed, 20 of the 27 were diagnosed with the disease within six years. Overall, the predictor achieved an estimated 91 percent sensitivity and 72 percent specificity.
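Sensitivity here is the fraction of eventual converters the signature flags, and specificity is the fraction of non-converters it correctly clears. The short calculation below only illustrates those definitions: apart from the 20-of-27 figure, the counts are hypothetical, chosen to be consistent with the reported percentages.

```python
# Worked example of the sensitivity/specificity arithmetic. Only the "20 of 27
# flagged patients converted" figure comes from the study; the remaining counts
# are assumed, picked to match the reported ~91% sensitivity and ~72%
# specificity among the 47 mild-cognitive-impairment patients.

def sensitivity(tp, fn):
    return tp / (tp + fn)        # converters correctly flagged

def specificity(tn, fp):
    return tn / (tn + fp)        # non-converters correctly cleared

tp, fp = 20, 7                   # 27 flagged, 20 truly converted
fn, tn = 2, 18                   # hypothetical split of the other 20 patients

print(f"sensitivity = {sensitivity(tp, fn):.0%}")   # 91%
print(f"specificity = {specificity(tn, fp):.0%}")   # 72%
```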

 

Biologically, the 18 proteins seem to point to a systemic, not isolated, dysregulation in neuronal support, immune response, cell growth and cell death in Alzheimer’s disease patients several years before clinical symptoms appear, says Markus Britschgi, PhD, a postdoctoral fellow in Wyss-Coray’s lab and presenting author of a poster on the work in June at the Alzheimer’s Biomarkers Meeting in Washington, D.C. The work has recently been accepted for publication in Nature Medicine. “But what we don’t know at this time is whether these dysregulations are due to processes in the brain or processes only in the periphery,” he says.

 

   
GENOME-WIDE ASSOCIATIONS: Tying Genes to Alzheimer’s

Studies of twins suggest that genetic factors account for up to 80 percent of the risk of Alzheimer’s disease. Yet only three genes have been found in which mutations likely cause the disease through simple Mendelian inheritance: APP, which encodes the amyloid beta precursor protein, and PSEN1 and PSEN2, which encode presenilin 1 and 2. Mutations in these genes cause familial early-onset forms of the disease.

 

Most people, however, develop Alzheimer’s disease after the age of 65 and do not have such a strong history in the immediate family. For this form, the most important known gene is ApoE, which encodes apolipoprotein E. Yet ApoE doesn’t convey the whole picture: only about half of late-onset cases have a copy of the high-risk allele.

 

[Image: Typical conformations of a folded monomer, dimer, and pentamer of amyloid beta 42 in the absence (a, c, and e) and presence (b, d, and f) of electrostatic interactions. Simulations by H.E. Stanley and colleagues suggest that the C-terminal region (marked here by a blue sphere) plays a key role in the formation of amyloid beta 42 oligomers—and the relative importance of this region increases in the presence of electrostatic interactions. Drugs targeting this area may be able to prevent the oligomers from forming or perhaps reduce their toxicity in the brain. Courtesy of Sijung Yun. Reprinted with permission from the Biophysical Society, Biophysical Journal 92, 4064-4077 (2007).]

The search is on for other Alzheimer’s susceptibility genes, but hard results have been elusive so far. It has been suggested that surveying at least 300,000 single nucleotide polymorphisms (SNPs) across the whole human genome might be necessary for studies of genetically complex phenotypes. Until recently, however, published studies had not looked at more than 100,000 SNPs at a time.

 

Technology is changing that. “We’re entering the era of high-density genome-wide association studies,” says Eric Reiman, MD, executive director of Banner Alzheimer’s Institute in Phoenix, Arizona. Thanks to advances over the past decade—in computing power, microarray technology and analysis tools, and human genome maps, for instance—genome-wide association studies are suddenly becoming feasible and successful (see the other feature story in this issue of BCR).

 

Their benefits extend beyond simple efficiency. The methods, which use high-throughput processes to examine about half a million genomic markers, can test many SNPs independent of any biases related to a researcher’s favorite gene. “What’s exciting about hypothesis-free genome-wide studies is that they can help uncover new mechanisms that people haven’t thought about before,” Reiman says.

 

The first high-density genome-wide association study of Alzheimer’s disease was published in Neuron in June 2007 by a 15-institution international team led by Reiman and Dietrich Stephan, PhD, associate director at the Translational Genomics Research Institute in Phoenix. That research was supported by 20 of the National Institute on Aging’s Alzheimer’s Disease Centers.

 

Using samples from 861 subjects with late-onset Alzheimer’s disease and 550 elderly unaffected controls, they genotyped about 500,000 SNPs. In more than 1,000 of these cases and controls, the diagnoses were verified at autopsy. In three rounds of analyses, the researchers found six promising SNPs, all from a single gene, that were significantly associated with the disease in subjects carrying the high-risk ApoE epsilon 4 allele. The SNPs all lay within the GRB-associated binding protein 2 (GAB2) gene.
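The core computation behind such a scan is conceptually simple: for every SNP, compare allele counts in cases and controls, then correct for the enormous number of tests. The sketch below is a generic illustration on simulated genotypes, not the staged, ApoE-stratified analysis the team actually ran; the allele frequency, the reduced SNP count, and the Bonferroni threshold are assumptions of the sketch.

```python
# Generic case-control GWAS sketch: a chi-square test on minor vs. major allele
# counts for each SNP, followed by a Bonferroni multiple-testing threshold.
# Genotypes are simulated (coded 0/1/2 minor-allele counts); with purely random
# data, no SNP should survive the correction.

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n_cases, n_controls, n_snps = 861, 550, 10_000   # 10,000 SNPs keeps the demo quick
genotypes = rng.binomial(2, 0.3, size=(n_cases + n_controls, n_snps))
is_case = np.r_[np.ones(n_cases, bool), np.zeros(n_controls, bool)]

def allelic_p_value(g, case_mask):
    """Chi-square test on the 2x2 table of minor vs. major allele counts."""
    minor_cases, minor_ctrls = g[case_mask].sum(), g[~case_mask].sum()
    major_cases = 2 * case_mask.sum() - minor_cases
    major_ctrls = 2 * (~case_mask).sum() - minor_ctrls
    return chi2_contingency([[minor_cases, major_cases],
                             [minor_ctrls, major_ctrls]])[1]

pvals = np.array([allelic_p_value(genotypes[:, j], is_case) for j in range(n_snps)])
print(f"{(pvals < 0.05 / n_snps).sum()} SNPs pass the corrected threshold")
```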

 

In this particular study, the most significant SNP on GAB2 was associated with an overall four-fold increased risk for Alzheimer’s disease, Reiman says. And people who carried both the epsilon 4 allele and the GAB2 high-risk allele had a 24-fold increase in risk for Alzheimer’s disease.

 

The study’s results need to be replicated with independent data, Reiman cautions. But for now they allow for possible mechanisms to be tested—investigating, for instance, whether the normal form of the GAB2 protein protects vulnerable neurons from tangles, he says.

 

The researchers have deposited all of their data into the public domain. “We have just begun to have enough letters in the genetic book of life to understand the genetic story of Alzheimer’s disease and other common phenotypes,” Reiman says.

 


Letting Intermediate Phenotypes Stand In for Alzheimer’s in Genomic Studies

One problem that genetic studies of complex diseases can run into is simply finding the right people to study. Clinical diagnoses of Alzheimer’s disease in particular are not always accurate, and small errors in identifying the cases and controls in a study can mask or skew real genetic associations in the results.

 

One way to overcome this is to work with endophenotypes: intermediate quantitative traits that stand in for a more complex disease phenotype. Finding a good endophenotype for Alzheimer’s isn’t simple, however. It must be a trait that is heritable, that is associated with the causes and risks of Alzheimer’s disease—which itself is still a mystery—and that ideally is normally distributed within the population, says Alison Goate, PhD, professor of psychiatry, genetics and neurology at Washington University Medical School in St. Louis.

 

With the right endophenotype, however, the power of a genetic study can jump dramatically, Goate says. “If your quantitative trait represents something that is highly correlated with the disease but controlled by a small number of genes, then it should be easier to find those genes with the quantitative trait,” she says.

 

Amyloid beta peptide levels are a natural endophenotype candidate, Goate says, because they are highly correlated with the presence of Alzheimer’s disease and also with high-risk alleles in APP, ApoE, PSEN1 and PSEN2.

 

Goate and her colleagues are working with cerebrospinal fluid levels of two of the most common forms of the peptide, amyloid beta 40 and amyloid beta 42, plus the ratio of amyloid beta 42 to amyloid beta 40.

 

As part of an ongoing study, they recently looked at a set of 300 subjects in which two-thirds had a family history of Alzheimer’s disease but were themselves unaffected and one-third had a diagnosis of mild Alzheimer’s disease.

 

From a list of 19 candidate SNPs selected from the AlzGene database’s meta-analysis, nine were significantly associated with the amyloid beta endophenotype, with eight showing directions of association that were consistent with existing meta-analyses. “This is promising, because it suggests that these associations are likely to be real,” Goate says.
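At its simplest, a quantitative-trait test of this kind regresses the endophenotype on the number of minor alleles each subject carries and asks whether the slope differs from zero. The sketch below runs that regression on simulated data; the subject count echoes the study, but the allele frequency, effect size, and noise level are invented for illustration.

```python
# Sketch of a quantitative-trait association test: regress an endophenotype
# (a stand-in for the CSF amyloid beta 42/40 ratio) on each subject's
# minor-allele count at one candidate SNP. All numbers are simulated.

import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
n_subjects = 300
allele_count = rng.binomial(2, 0.25, size=n_subjects)       # genotype coded 0/1/2
# Simulate a modest additive genetic effect plus measurement noise
ab42_40_ratio = 0.10 - 0.01 * allele_count + rng.normal(0, 0.02, n_subjects)

fit = linregress(allele_count, ab42_40_ratio)
print(f"effect per allele = {fit.slope:.4f}, p = {fit.pvalue:.2g}")
```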

 

The group is still collecting more samples, Goate says, and they hope to reach the point where they have a large enough sample to try out the amyloid beta endophenotypes in a broader set of SNPs across the entire genome.

 


IMAGING: Capturing the Brain on Screen to Diagnose and Track Alzheimer’s

[Image: Normalized array measurements of 120 plasma signaling proteins from 43 Alzheimer’s disease patients (yellow) and 40 non-demented controls (blue) were analyzed with the statistical program called Significance Analysis of Microarrays (SAM) to discover significant differences in protein concentrations. Samples are arranged in columns and proteins in rows. Increased expression in patients versus controls is shown in shades of red, reduced expression is shown in shades of green, and median expression is shown in black. Courtesy of Tony Wyss-Coray.]

Brain imaging has long played a role in the diagnosis of Alzheimer’s disease by helping physicians exclude the possibility of brain tumors or other ailments. More recently, however, researchers have become interested in using imaging tools for broader purposes: understanding the disease, detecting it at early stages, and tracking its progress over time.

 

As the tangles and plaques of Alzheimer’s disease creep across a brain, its structure changes in subtle ways. With a skilled eye, radiologists examining brain magnetic resonance images one by one can quickly categorize the spread and degree of atrophy in the brain. But researchers would like to use assessments that rely less on subjective evaluations of skilled experts.

 


Voxel-Based Methods to Catch Early Signs of Disease

Some researchers are developing machine learning approaches that focus on individual voxels. Clifford Jack, MD, a professor of radiology at Mayo Clinic and postdoctoral fellow Prashanthi Vemuri, PhD, are investigating one such pattern classification method.

 

The technique uses a support vector machine algorithm, which aims to find a combination of brain image voxels that can best distinguish images of Alzheimer’s patients from unaffected controls, Vemuri says. Their results, from a set of images of 380 Alzheimer’s disease subjects and unaffected controls, were presented at the Human Brain Mapping meeting in June 2007.

 

The researchers first narrowed their attention to those brain regions that showed evidence of atrophy in Alzheimer’s disease subjects. Within these regions, their tool found a subset of voxels that best classified the subjects into cases and controls. Altogether, the algorithm winnowed 10,000 voxels down to an essential set of 300, Vemuri says.
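A rough outline of that kind of pipeline, feature selection followed by a linear support vector machine, is sketched below with scikit-learn; it is not the Mayo group’s actual code, and the random stand-in data, voxel counts, and signal strength are assumptions.

```python
# Voxel-based classification sketch: pick the most discriminative voxels, then
# train a linear support vector machine. Random numbers stand in for real
# gray-matter maps; feature selection sits inside the pipeline so it is re-fit
# within every cross-validation fold.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_subjects, n_voxels = 380, 10_000
X = rng.normal(size=(n_subjects, n_voxels))       # stand-in voxel intensities
y = rng.integers(0, 2, size=n_subjects)           # 1 = Alzheimer's, 0 = control
X[y == 1, :300] -= 0.3                            # pretend 300 voxels show atrophy

clf = make_pipeline(SelectKBest(f_classif, k=300), SVC(kernel="linear"))
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```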

 

And these voxels, it turns out, form regional clusters that mirror the typical spread of neurofibrillary tangles. This provides an extra intuitive affirmation, Jack says. But the quantitative validation is what really counts: the method achieved 85 percent sensitivity and 85 percent specificity. Adding information about age, gender, and ApoE genotype further boosted both scores to 90 percent.

 

The process takes less than 15 minutes per case to run on a desktop computer. “Ten years ago, it might have required a supercomputer to do it,” Jack says. “People in medical imaging are just now taking advantage of improved software available to the public.”

 


Modeling Brain Contours to Find Alzheimer’s

Another approach to analyzing structural brain images is to take a step back from the trees and look at the forest. Rather than analyzing data on individual voxels, some methods model the overall contours of brain regions, an approach that characterizes the shape of subcortical and cortical structures.

 

John Csernansky, MD, a professor of psychiatry and neurobiology, and Lei Wang, PhD, a research assistant professor, both at the Washington University School of Medicine in St. Louis, along with Michael I. Miller, PhD, a professor of biomedical engineering and electrical and computer engineering at Johns Hopkins University, are working with surface-based methods that stem from classical mechanics.

 

[Image: With anatomical modeling tools, Csernansky, Miller, Wang and colleagues are able to capture regional changes in hippocampal shape from Alzheimer’s disease. This image compares subjects with very mild Alzheimer’s disease (as a group) to nondemented controls. Regions colored cool (purple and blue) are smaller in the Alzheimer’s group compared to the controls; regions colored yellow and green are unchanged. The researchers found significant changes in the CA1 subfield and the subiculum (labeled). Courtesy of Lei Wang, PhD.]

When brain regions of Alzheimer’s patients atrophy over time, they change shape in complicated ways. Miller has pioneered methods based on the principles of computational anatomy—which include tools such as large-deformation high-dimensional brain mapping—to model these variations.

 

The techniques assume that differences in brain contours can be captured by “morphing” one brain anatomy into another through high-dimensional diffeomorphic transformations: smooth, invertible mappings that gradually deform one shape into the other. Essentially, Miller says, brain matter is modeled as if it had the physical properties of a viscous liquid. Since sets of differential equations describe the transformation, group differences can be efficiently characterized.
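The flow idea behind those transformations can be shown with a toy example: a smooth velocity field, integrated over time, carries a set of contour points to smoothly deformed positions. The sketch below shows only that flow step with an invented velocity field; the matching and optimization that real large-deformation diffeomorphic mapping performs are omitted entirely.

```python
# Toy illustration of shape "morphing" by flowing points along a smooth
# velocity field. Only the flow step is shown; real diffeomorphic mapping also
# optimizes the field so that one anatomy matches another. The field below is
# made up purely for demonstration.

import numpy as np

def velocity(points):
    """A smooth, invented velocity field over the plane."""
    x, y = points[:, 0], points[:, 1]
    return 0.2 * np.column_stack([np.sin(np.pi * y), np.cos(np.pi * x)])

def flow(points, n_steps=100, dt=0.01):
    """Carry the points along the field with small forward-Euler steps."""
    p = points.copy()
    for _ in range(n_steps):
        p = p + dt * velocity(p)
    return p

# A circle standing in for a hippocampal contour
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.column_stack([np.cos(theta), np.sin(theta)])
deformed = flow(contour)
print("mean point displacement:", np.linalg.norm(deformed - contour, axis=1).mean().round(3))
```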

 

The group has applied their methods in a variety of settings. In a longitudinal study of 44 subjects published in 2003 in Neuroimage, the researchers used patterns of change in hippocampal shape over two years of follow-up to distinguish subjects with mild Alzheimer’s disease from unaffected elderly controls. And in a study of 49 subjects published in 2005 in Neuroimage, variation in the shape of a particular part of the hippocampus surface could predict whether a subject would go on to develop mild Alzheimer’s during five years of follow-up—and if so, how long it took for cognitive effects to show up.

 

With thousands of data points collected on each hippocampal surface and only a relatively small number of subjects, these methods demand some form of data reduction, Csernansky says. Early studies used principal components analysis to home in on the most informative areas of the brain surface. More recently, however, the group has been working to make their results more interpretable to clinicians by using a simplified anatomical template of the hippocampus.
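The data-reduction step itself is standard: project each subject’s thousands of surface measurements onto a few principal components before comparing groups, as in the sketch below. The dimensions echo the studies described here, but the data are simulated and the choice of ten components is arbitrary.

```python
# Sketch of the data-reduction step: compress thousands of surface measurements
# per subject into a handful of principal-component scores. Simulated numbers
# stand in for the real hippocampal surface data.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_subjects, n_surface_points = 135, 5000
surface_deformations = rng.normal(size=(n_subjects, n_surface_points))

pca = PCA(n_components=10)
scores = pca.fit_transform(surface_deformations)   # one 10-number summary per subject
print("variance explained:", pca.explained_variance_ratio_.round(3))
```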

 

In a study of 135 subjects published in 2006 in Neuroimage, patterns of surface variation in particular hippocampal substructures could distinguish subjects with very mild Alzheimer’s disease from elderly controls. In particular, changes in two specific areas of the hippocampus surface, one in the CA1 subfield and the other near the subiculum, significantly increased the odds that a subject had very mild Alzheimer’s disease.

 

The group is looking now at how particular substructures change over time in Alzheimer’s patients as compared to the normal aging population. And they hope to combine their own measures of surface deformations with other types of data, such as functional images or PIB-PET scans, Wang says. “With this type of metadata, we can understand how the disease progresses and also do a better job of prediction,” he says.

 


Functional Imaging to See the Alzheimer’s Brain in Action

[Image: Since the early 1980s, researchers have used PET scans to help distinguish normal and Alzheimer’s diseased brains, and work is ongoing to develop PET-based biomarkers for early-stage diagnosis. Here, two PET scans show the difference between a brain with advanced-stage Alzheimer’s (right) and a normal brain (left). Courtesy of the National Institute on Aging, www.nia.nih.gov.]

With functional brain imaging, researchers can investigate the clinical aspects of Alzheimer’s disease: how does the brain behave differently when it’s affected by the disease?

 

Functional magnetic resonance imaging provides some of the most detailed clues to this question. Many fMRI studies have pointed to particular brain areas that show impaired functioning in Alzheimer’s disease patients. But some researchers are now interested in how the entire brain might also change and adapt as the disease progresses.

 

Michael Greicius, MD, an assistant professor of neurology at Stanford University, is particularly interested in how brain regions connect and communicate among themselves. Recently, he and Kaustubh Supekar, a biomedical informatics graduate student also at Stanford, turned to analyzing a large network of brain regions for mathematical characteristics that were first used to describe social networks.

 

Their approach uses small-world measures, which have also been used to analyze a variety of other networks, including the Internet, global airline routes, and “six-degrees-of-separation” human networks. In social groups, a network node would be a person; in functional brain networks, a node represents a particular region of the brain.

 

Previous work has suggested that normal brains, like human social networks, exhibit small-world characteristics. This means that they contain many tight clusters of nodes, yet any two nodes are connected by a surprisingly short chain of links, so information can travel across the network in just a few steps. Greicius and his colleagues wanted to see whether there were any small-world differences between Alzheimer’s disease brains and unaffected brains.

 

The group recorded resting-state fMRI brain activity in 36 Alzheimer’s disease patients and unaffected elderly controls, capturing a brain volume every two seconds for six minutes. They then looked at activity in 90 separate regions of the brain—tens of thousands of voxels for each brain region—and created a time series of activity for each region. They could then calculate the connectivity, or the amount of mutual information, between each pair of the 90 nodes.
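One way to turn such time series into small-world numbers is sketched below: correlate the regional signals, threshold the matrix into a graph, and compare its clustering and path length with a random graph of the same size. This is a generic illustration on random data, not necessarily the exact connectivity measure the Stanford group used; the percentile threshold and the single random reference graph are simplifying assumptions.

```python
# Small-world sketch: build a functional connectivity graph from regional time
# series, then compare clustering and path length against a size-matched random
# graph. Random numbers stand in for the fMRI signals.

import numpy as np
import networkx as nx

def largest_component(G):
    """Restrict to the largest connected component so path lengths are defined."""
    nodes = max(nx.connected_components(G), key=len)
    return G.subgraph(nodes).copy()

rng = np.random.default_rng(5)
n_regions, n_timepoints = 90, 180            # one volume every 2 s for 6 minutes
ts = rng.normal(size=(n_regions, n_timepoints))

corr = np.corrcoef(ts)                       # region-by-region connectivity
np.fill_diagonal(corr, 0)
threshold = np.percentile(np.abs(corr), 90)  # keep the strongest 10% of links
G = largest_component(nx.from_numpy_array((np.abs(corr) > threshold).astype(int)))

C, L = nx.average_clustering(G), nx.average_shortest_path_length(G)
R = largest_component(nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0))
C_rand, L_rand = nx.average_clustering(R), nx.average_shortest_path_length(R)

# Small-world networks: clustering well above random, path length close to random
print(f"small-worldness sigma = {(C / C_rand) / (L / L_rand):.2f}")
```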

 

The Alzheimer’s disease patients, they discovered, had significantly less regional connectivity and displayed more impaired small-world functioning than did healthy controls. The results were presented at the Human Brain Mapping meeting in June, 2007.

 

It’s not yet clear what the results mean biologically, Greicius says, but for now that’s fine. “Intellectually it’s less satisfying if there’s not a clear biological interpretation, but from a practical, clinical standpoint we’re agnostic as to what’s driving the results, as long as they’re reproducible and accurate,” he says. A measure based on regional connectivity was able to distinguish Alzheimer’s disease patients from healthy controls with 73 percent sensitivity and 80 percent specificity.

 

The next step is to see whether data reduction tools could construct a simpler global network—based on 20 or 30 regions, say, rather than 90—that could better classify individual subjects as having Alzheimer’s disease or not, Greicius says.

 


Challenges for the Future

Future challenges for computational work in Alzheimer’s disease research will likely center around the usual suspects, researchers say: data, people, and money.

 

“From the computational standpoint, researchers need more powerful ways to glean information from an increasing array of complex datasets,” says Eric Reiman of the Banner Alzheimer’s Institute, “and they need new ways to characterize the relationships among these potentially complementary datasets.”

 

Indeed, simply using study subjects recruited in different ways from different clinical centers poses a real problem for the integrity of results, says Clifford Jack, MD, of the Mayo Clinic. “The incompatibility of these patient groups is a huge confounder in our field that’s not well recognized,” he says.

 

What’s more, the old, single-lab approaches to research likely won’t survive in today’s über-connected environment. “Increasingly, researchers from different scientific teams must work together to address their problems in a more fundamental way than any one team could do by itself,” Reiman says. “That is both the challenge and opportunity now at hand.”

 

Researchers might need to be jacks-of-all-trades, or at least forge connections with colleagues across campus. “We need computational researchers to enrich our information, but then that information needs to be transformed back into something that biologists and clinicians can comprehend,” says John Csernansky of Washington University. “It takes time and a willingness to struggle together for a common understanding.”

 

But real stumbling blocks to success in Alzheimer’s disease research may lurk from sources beyond control. “It won’t be from a lack of smart people, a lack of insights, a lack of new and useful things to do,” says Jack. “The number one problem will be money.”

 

Nevertheless, the field is a trendsetter of sorts, bringing together an unprecedented diversity of disciplines, data and people. “Alzheimer’s disease is one of the gold standards of this research trend,” says Arthur Toga of University of California Los Angeles. “It’s motivated lots of people to try to do science in this way. That’s very exciting.”

 

ADNI: Bringing it all together
 

Complex, degenerative diseases such as Alzheimer’s can’t easily be captured in small, cross-sectional studies. Researchers need large sample sizes to reach the statistical power necessary to tease apart subtle interactions, and ideally they would like to follow subjects over time in order to account for the large variability in how different individuals age.

 

To that end, the National Institute on Aging, in combination with the National Institute of Biomedical Imaging and Bioengineering, the pharmaceutical industry, and private foundations, has been supporting a five-year, $60-million longitudinal study that started in 2004. Known as the Alzheimer’s Disease Neuroimaging Initiative (ADNI), the project is led by principal investigator Michael Weiner, MD, of the University of California, San Francisco. The ambitious, highly collaborative undertaking aims to track the progress of Alzheimer’s disease and its precursors, and to develop validated biomarkers for Alzheimer's disease clinical trials.

 

The study is following 400 subjects with mild cognitive impairment, 200 subjects with Alzheimer’s disease, and 200 elderly controls approximately every six months for two to three years, at about 50 sites across the nation. Researchers are collecting a variety of information from the subjects: MR images, clinical ratings, neuropsychological test results, and blood and urine samples from all the subjects, as well as [18F]-2-fluoro-deoxy-D-glucose (FDG) PET scans from half the subjects, cerebrospinal fluid from at least 20 percent of the subjects, and Pittsburgh Compound-B (PIB) PET scans from nearly 100 subjects. The data are immediately deposited into a repository that is freely available to the public.

 

As of June 2007, 804 subjects have been enrolled at 57 sites. Thousands of raw and processed images (scrubbed of the subjects' identities) have already been posted at UCLA’s Laboratory of Neuro Imaging website (http://loni.ucla.edu/ADNI). Researchers expect that all studies and analyses—much of the work computational—will be completed by the end of 2010.


