Metrum Minutes

Cloud Computing at ACoP

by Tim Bergsma

Already widely adopted in the finance community, cloud computing is gaining acceptance in the pharmaceutical industry as well. At this year’s American Conference on Pharmacometrics, Metrum Research Group staff presented a poster on the topic and helped lead a related panel discussion.

Download Metrum Research Group’s Poster

For pharmacometricians, perhaps the chief advantage of the cloud is the ability to match the scale of the infrastructure to the analytical problem. For instance, simultaneously running parts of a large model across multiple on-demand virtual machines in the cloud can greatly reduce analysis time. In fact, some scientists are reporting that the increase in speed can change the way they approach a problem, as a new class of questions becomes a practical reality.

The most frequently voiced concerns in the pharma community with respect to cloud infrastructure are data security and analytical reproducibility. Is my data safe “out there”? Will my model run the same if my virtual machine is re-deployed on a different “real” machine? Fair questions, both. But the outlook is optimistic. ACoP participants noted that large pharmaceutical companies already entrust large sections of confidential operations to the cloud. And overall, reproducibility benefits from the very specific software configuration recorded in a machine image, possible hardware effects notwithstanding.

Metrum’s poster illustrates one proven configuration for doing pharmacometrics in the cloud. Because it is mostly based on free software, it should be easy to adopt and adapt. A real plus is that mission-critical aspects can be scripted: an administrative convenience that also simplifies compliance.
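As a purely illustrative sketch of what such scripting can look like (not a description of Metrum’s actual configuration), the Python snippet below launches a small fleet of on-demand virtual machines from a fixed machine image using the boto3 AWS library; the image ID, instance type, key name, and instance counts are hypothetical placeholders.

```python
# Illustrative only: launch identical worker VMs from a pre-built machine
# image, so that every run uses the same recorded software configuration.
# The AMI ID, instance type, and key pair below are hypothetical.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # machine image with the analysis stack baked in
    InstanceType="c5.xlarge",
    MinCount=4,                        # size the fleet to the analytical problem
    MaxCount=4,
    KeyName="my-keypair",
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "project", "Value": "pmx-cloud-demo"}],
    }],
)

for instance in instances:
    print("launched", instance.id)
```

Because the launch is a script rather than a sequence of console clicks, it can be version-controlled and re-run exactly, which is where much of the administrative and compliance benefit comes from.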


Hepatitis C Viral Dynamic Modeling and Clinical Pharmacology

by Kyle Baron

An article in a recent issue of Hepatology illustrates an insightful application of hepatitis C viral dynamic modeling and clinical pharmacology. Guedj et al. (Hepatology 2012 55: 1030-1037) present an HCV dynamic model of viral load versus time during mericitabine therapy. Rather than the clear bi-phasic viral load profile seen with interferon and the protease inhibitors (with a characteristic sharp initial drop in viral load), the viral load decline during mericitabine therapy is more complicated: either a slower bi-phasic profile or a mono-phasic profile. The authors propose an interesting modeling solution to gain insight into mericitabine’s antiviral activity.

Mericitabine (RG7128, Roche/Genentech) is a cytosine nucleoside analog which, when activated to its tri-phosphate form, inhibits the HCV NS5B RNA-dependent RNA polymerase. Nucleoside analogs are well known to require metabolic processing through a series of intracellular phosphorylation steps before they become active in tri-phosphate form. In general, these phosphorylation steps can take some time, may be subject to a rate-limiting phosphorylation step, and may delay the time to maximal drug efficacy. Could this account for the different viral load profiles?

Guedj et al. propose a “varying effectiveness” (VE) viral dynamic model that is similar to a previously published model. In the VE model, the antiviral efficacy parameter is allowed to increase gradually from one value at the start of therapy to a terminal value some time later. This gradual development of antiviral efficacy is meant to mimic the gradual accumulation and activation of mericitabine tri-phosphate inside the hepatocyte. In addition, the authors allow antiviral activity to decline after treatment is stopped, so the viral rebound at the end of therapy can be modeled.
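To make the structure concrete, here is a minimal Python/SciPy sketch of a standard target-cell-limited HCV dynamic model with a time-varying effectiveness term. The exponential form of the effectiveness ramp and all parameter values are illustrative assumptions for this note, not the parameterization or estimates reported by Guedj et al.

```python
# Minimal sketch of a "varying effectiveness" HCV dynamic model.
# T = target cells, I = infected cells, V = viral load; all rates per day.
# Parameter values are illustrative placeholders only.
import numpy as np
from scipy.integrate import odeint

def ve_model(y, t, s, d, beta, delta, p, c, eps1, eps2, k):
    T, I, V = y
    # antiviral effectiveness ramps from eps1 at t = 0 toward eps2 with rate k
    eps = eps2 + (eps1 - eps2) * np.exp(-k * t)
    dT = s - d * T - beta * V * T          # target cell production, death, infection
    dI = beta * V * T - delta * I          # infected cell gain and loss
    dV = (1.0 - eps) * p * I - c * V       # virion production (blocked by eps) and clearance
    return [dT, dI, dV]

params = (1e4, 0.01, 1e-7, 0.14, 10.0, 6.0, 0.38, 0.99, np.log(2) / 2.0)
t = np.linspace(0.0, 14.0, 200)            # two weeks of therapy
y0 = [1e7, 1e5, 1e6]                       # illustrative initial conditions
sol = odeint(ve_model, y0, t, args=params)
log10_viral_load = np.log10(sol[:, 2])
```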

The VE model seemed to fit the data well. Antiviral efficacy was very low at the start of therapy (a 38% block of viral production) but increased over time and with dose (94-99.8% block), with a half-life of about 1.5 to 3 days. There was also a dose-related increase in the rate of decline in antiviral activity at the end of therapy. This VE model helped to establish and quantify the dose-dependent antiviral effectiveness of mericitabine as well as the time course over which antiviral activity develops. My questions about the modeling results relate to parameter identifiability. The viral clearance rate had to be fixed (6/hour), and it is not clear if or how this could change the efficacy estimates or the pharmacodynamic time course. Also, the estimate for the infected cell death rate (delta) seems very small (0.023/day) compared to other published estimates. Maybe it is not unreasonable given that the patients had previously failed therapy, but I wonder how reliable this parameter estimate is.

These questions notwithstanding, I think the modeling results from Guedj et al. provide an interesting example of using mechanistic modeling in combination with basic pharmacology knowledge to gain quantitative insight into the antiviral activity of mericitabine.

About the Author:

Kyle Baron joined MetrumRG in 2010 as a Research Scientist, working with the systems biology group to develop Bayesian hepatitis C viral dynamic models to help quantify antiviral drug effects and predict long-term viral response rates.


Marcum Tech Top 40

by Mary Delaney

Metrum Research Group was selected as one of the fastest growing technology companies in Connecticut at the 2012 Marcum Tech Top 40 awards ceremony at the Oakdale Theatre in Wallingford on September 27th.

Of the six award categories (Advanced Manufacturing, Energy/Environment/Green Technology, IT Services, Life Sciences, New Media/Internet/Telecom, and Software), Metrum Research Group was recognized in Life Sciences.

At Metrum Research Group, we are all very excited and honored to be included in the Marcum Tech Top 40, and we congratulate the other recognized companies.

To learn more about the Marcum Tech Top 40, please click here.


Alzheimer’s Association International Conference

by James Rogers

“Prevention … we have never uttered that word at the Alzheimer’s Association International Conference, and here we actually highlighted three clinical trials in the planning for prevention of Alzheimer’s …”. This comment from Dr. Maria Carrillo gives voice to the enthusiasm and interest of the attendees at the standing-room-only Alzheimer’s Association International Conference session entitled “Collaboration for Alzheimer’s Prevention: Common Issues Across Presymptomatic Treatment Trials”, in which the A4, API, and DIAN prevention trials were discussed.

Download Metrum Research Group’s Poster

Among the many interesting and important design considerations for these trials are the choices of cognitive endpoints. As I understand it, both A4 and API will employ novel composite endpoints based on existing neuropsychological instruments in order to assess changes in cognition (it wasn’t clear to me what cognitive endpoint(s) will be used in the DIAN study, but in any event that study is powered primarily to detect changes in biomarkers). While I’m not privy to the specific deliberations in constructing these novel composite endpoints, there will presumably be some consideration of the statistical/psychometric methodology known as Item Response Theory (IRT).

What problem is IRT methodology solving? In the context of studying cognitive decline, IRT addresses the fact that different subscores of, say, the ADAS-cog are more or less informative at different stages of the disease, yet all are related to a single construct (“cognition”) that we understand to be in decline over the entire symptomatic spectrum of the disease. By providing insight into the relationship between disease severity and the different subscores, IRT analysis can suggest a “brew your own” composite endpoint tailored to the specific stage of disease that you are studying (in theory, this could even be done adaptively on an individual-specific basis, a possibility that was acknowledged several years ago in the AD community).
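For readers less familiar with IRT, here is a minimal sketch of the core idea, using the common two-parameter logistic model for a single binary item (generic notation, not the specification behind any of the endpoints discussed above):

```latex
% Probability that subject i responds correctly to item j,
% given the subject's latent cognitive ability theta_i:
P(Y_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}}
% a_j: discrimination (how sharply item j separates ability levels)
% b_j: difficulty (the ability level at which item j is most informative)
```

Items whose difficulty lies near the ability range of the study population carry the most information, which is exactly the intuition behind tailoring a composite endpoint to a particular stage of disease.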

Where is this line of thinking going next? In addition to mixing and matching subscores from different neuropsychological assessments to construct novel cognitive endpoints, would it also make sense to include biomarker and imaging endpoints as additional “items”? How about functional endpoints? (The underlying theoretical construct would then no longer be just “cognition” but “disease severity” more generally.) Why not? The value here would go beyond simply constructing new and more sensitive endpoints: it could potentially provide a framework for translating upstream observations (say, changes in hippocampal volume) into quantitative predictions about downstream consequences (say, in activities of daily living).

What do you see as the value of IRT analyses in studying AD? Let me know your thoughts.


PaSiPhIC Conference

by Jonathan French

Recently, I was asked to speak at this year’s PaSiPhIC conference about different approaches to meta-analysis. As I was putting together my presentation, I began to wonder: How can we best leverage traditional meta-analysis methods in a model-based drug development framework?

Model-based meta-analysis is growing in use, and for good reason. It allows us to use data in the public domain (e.g., extracted from journal articles, conference abstracts, and posters) to quantitatively describe dose-response and disease progression, facilitating more efficient clinical study designs, drug development programs, and cost-effectiveness comparisons.

Download Metrum Research Group’s Presentation

However, there is a rich history of meta-analysis methods and applications in the statistics literature, and it seems that we in the pharmacometrics community should be taking better advantage of them.

Models ranging from a simple pairwise random effects meta-analysis to multi-arm network meta-regression models and multivariate meta-analysis models have been used extensively in the comparative effectiveness and outcomes research literature. These models can be fit easily using common statistical software packages. More importantly, unlike MBMA models, they have well-understood assumptions.
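As a point of reference, the simplest of these, the pairwise random-effects model, fits in a few lines (generic textbook notation, not a model from any specific analysis mentioned here):

```latex
% Observed treatment effect y_i in study i (e.g., a log odds ratio or mean difference):
y_i = \theta + u_i + \varepsilon_i, \qquad
u_i \sim N(0, \tau^2), \qquad
\varepsilon_i \sim N(0, s_i^2)
% theta : overall (pooled) treatment effect
% tau^2 : between-study heterogeneity
% s_i^2 : within-study sampling variance, usually treated as known
```

Network meta-regression and multivariate meta-analysis extend this same structure to multiple treatments and multiple correlated endpoints across studies.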

While they certainly don’t answer all of the drug development questions we can address with MBMA, I think there’s an important role for traditional meta-analysis models in the MBDD world. Arguably, these models allow modelers to answer the simple, direct questions of interest quickly and easily (and with fewer assumptions than MBMA), freeing up the time needed to build more sophisticated MBMA models that address the more complex and wide-ranging issues.


Metrum Research Group Presents at PAGE 2012

by Mary Delaney

Last week, Bill Gillespie and Elodie Plan journeyed to Venice, Italy, representing Metrum Research Group at the Population Approach Group Europe (PAGE 2012) meeting. Bill and Elodie presented posters of their recent work and communicated the features of METAMODL (www.metamodl.com/) to conference-goers.

Download Metrum Research Group’s Poster

Bill and Elodie were inspired as always by the conference, which featured stimulating oral presentations emphasizing significant advances in the field of pharmacometrics. Among the topics shared with conference attendees were methodological or applied findings concerning Markov, item-response, and variability models.

Hopefully you had the opportunity to attend this high-level conference on pharmacometrics, but for those of you who missed it, here is an outline of some of the highlights.

Population analysis practice, and therefore the characterization of clinical endpoints, can be further improved with the use of features such as these:

- Markov models enable the dependency between pharmacodynamic observations to be taken into account (see the sketch after this list).
- Item-response models acknowledge the composite structure of many efficacy scores.
- Variability models allow dynamic distribution transformation of the residual error.
Reducing the impact of assumptions and tailoring the model to the nature of the data leads to valid inferences on which clinical decisions can be made.
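As a minimal illustration of the first point above (generic notation, not any particular presenter’s model), a first-order Markov model replaces the usual assumption of conditionally independent observations with an explicit dependence of each pharmacodynamic observation on the previous one:

```latex
% First-order Markov dependence for observations Y_{i1}, Y_{i2}, ... in individual i:
P\bigl(Y_{ij} \mid Y_{i1}, \ldots, Y_{i,j-1}\bigr) = P\bigl(Y_{ij} \mid Y_{i,j-1}\bigr)
% i.e., given the immediately preceding observation, earlier history adds no information
```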

There are more exciting conferences to look forward to this year in the ever-evolving field of pharmacometrics. Stay tuned!


Metrum Research Group Presents at Bio-IT World 2012

by Jeffrey Hane

Jeff Hane of MetrumRG and Adam Kraut of The BioTeam presented details of MetrumRG’s ground-breaking infrastructure in the cloud at Bio-IT World 2012 (Boston, MA). The talk, ‘Building A Scalable Pharmacometrics Platform in the Cloud’, was delivered in the Cloud Computing Track on April 25. Grab your copy of the slide deck today.


What Decisions Benefit from Model-Based Meta-Analysis (MBMA)?

by Jeffrey Hane

The primary rationale for model-based meta-analysis (MBMA) is to improve decision-making by better leveraging prior information from multiple sources. Decision-makers generally attempt to consider such prior information, but they usually do so in a relatively qualitative manner, and each individual decision-maker is usually aware of only a subset of the prior information. MBMA seeks to make the process more quantitative and comprehensive. The process and results of MBMA may be made visible (that is, transparent) to the decision-makers. The end result is that the decision-makers are better informed, and they can contribute their knowledge to the modeling process, leading to better, more trusted models and model-based inferences.

You may ask: what decisions benefit from MBMA?

Dose-selection and proof-of-concept decisions: Improved quantitative comparisons with competing treatments may permit better selection of a dosing regimen that performs comparably to or better than the competing treatment. Alternatively, the MBMA results may demonstrate that no dosing regimen of the new drug performs favorably relative to competitors. If so, MBMA may support a better and earlier decision to terminate development.

Clinical trial design decisions: Models used for clinical trial simulation reflect a more comprehensive range of evidence and knowledge. MBMA is particularly valuable in cases where no clinical efficacy data are yet available for the new treatment, but quantitative predictions for efficacy-related measurements are possible by leveraging data from related compounds.