Agent-Based Modeling (ABM) is a synthetic, or constructive, modeling style whose primary objective is to construct collections of composite computational actors whose collective behavior generates phenomena similar to those of the model's referent (the system being studied).
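To make that definition concrete, here is a minimal toy sketch of the style, not drawn from any of the papers below: a majority-rule opinion model in which each agent follows a trivially simple local rule, yet the population as a whole tends toward a collective pattern (consensus) that no single agent encodes. All names and parameters are invented for illustration.

```python
import random

def step(opinions, rng):
    """Synchronous update: each agent adopts the majority opinion
    among three randomly sampled agents (a deliberately simple local rule)."""
    return [1 if sum(rng.choice(opinions) for _ in range(3)) >= 2 else 0
            for _ in opinions]

def run(n_agents=100, n_steps=50, seed=42):
    """Run the toy ABM and return the final opinion of every agent."""
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(n_steps):
        opinions = step(opinions, rng)
    return opinions

final = run()
# The population-level outcome is an emergent property of the local rule,
# not something any individual agent computes.
consensus_fraction = sum(final) / len(final)
```

The point of the sketch is the division of labor: the modeler specifies only agent-level rules; the referent-like phenomenon, here the drift toward consensus, is generated rather than prescribed.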
New paper: Agent-based modeling: a systematic assessment of use cases and requirements for enhancing pharmaceutical research and development productivity, published in Wiley Interdisciplinary Reviews: Systems Biology and Medicine. It walks through use cases extracted and derived from a number of more tightly focused modeling and simulation papers. It also lays out a vision for an analog-based model repository, including the features such a repository requires in order to satisfy those use cases. In particular, we discuss the need for knowledge embedded inside models.
Relational grounding facilitates development of scientifically useful multiscale models has been published in Theoretical Biology and Medical Modeling. This paper is a "heads up" for biological modelers, and indeed for modelers in all domains. We're trying to take a methodological step forward in the M&S discipline by calling for explicit treatment of how the symbols within a model are grounded. Many domains already have a handle on this and are actively researching technologies like XML and "ontologies" for embedding semantic knowledge into a computation. And while those efforts are percolating into M&S, many of them are tightly focused on the engineering and the technology. In this paper, we try to refocus on the purpose of such embedding. I view it as an exercise in systems engineering, a discipline designed to help one see both the forest and the trees.
New paper: Moving beyond in silico tools to in silico science in support of drug development research has been published in Drug Development Research. This is a survey of a class of synthetic biological models, many from the BioSystems Group at UCSF. In it, we lay out what it means for a model to be an analog, as distinct from a mere simulation, and make the case for why analogs are required for scientific modeling.
New paper: Cloud Computing and Validation of Expandable In Silico Livers has been published in BMC Systems Biology. This paper lays out our move from a local hardware cluster to Amazon's EC2 platform. It's pretty straightforward. The most interesting thing in the project was eliminating an artifactual oscillation by increasing the sample size, something we could not have done without either buying more hardware or moving to the cloud.
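The statistical intuition behind that fix can be sketched in a few lines. This is a generic Monte Carlo illustration, not ISL code: averaging more independent stochastic runs shrinks the run-to-run wobble in the estimate roughly as 1/&#8730;N, which is why extra (cloud-provisioned) capacity can smooth out an artifact that a small local sample cannot.

```python
import random
import statistics

def noisy_measurement(rng):
    """Stand-in for one stochastic simulation run (a hypothetical toy)."""
    return 1.0 + rng.gauss(0.0, 0.5)

def mean_of_n(n, rng):
    """Average n independent runs into one Monte Carlo estimate."""
    return sum(noisy_measurement(rng) for _ in range(n)) / n

rng = random.Random(0)
# Spread of estimates built from 10 runs vs. 1000 runs each.
sd_small = statistics.stdev([mean_of_n(10, rng) for _ in range(200)])
sd_large = statistics.stdev([mean_of_n(1000, rng) for _ in range(200)])
# sd_large is roughly 10x smaller than sd_small (sqrt(1000/10) = 10).
```

With hardware fixed, the sample size is capped; on a rented cluster, N becomes a dial you can turn until artifactual fluctuations fall below the signal of interest.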
New paper: Tracing Multiscale Mechanisms of Drug Disposition in Normal and Diseased Livers has been published in J. Pharmacology and Experimental Therapeutics (FastForward Articles). This paper discusses in some detail how hypothesis formulation and falsification (failed validation) can be done at a fine grain when only coarse-grained validation data are available. Because our in silico liver (ISL) is an analog built in software, we can trace its internals. And because the internals of the analog were designed to map to the internal structure and dynamics of its referent (wet-lab liver perfusion experiments), traces of the ISL become detailed hypotheses about the internals of the liver. Those detailed hypotheses, however, are not falsifiable except to the extent that they fail to reproduce the coarse validation data; nothing can be done about that until we design wet-lab experiments to perform on real livers. In the meantime, though, we can alter the ISL mechanisms so that the coarse-grained data match those taken from wet-lab experiments under different conditions. In this case, we built three ISLs that generate the outflow profiles for a drug and a sucrose marker for: 1) normal healthy livers, 2) alcohol-damaged livers, and 3) carbon tetrachloride-damaged livers. With the traces for each of the three ISLs, based upon the validated (i.e., not proven true, of course, but proven true enough) mechanisms of the ISL, we can formulate 'proto-theories' for the translation of an experimental liver from a healthy to a diseased (cirrhotic) state.
Note that the particulars of the 'proto-theories' suggested by these traces are not as sophisticated as those that might be generated by an expert hepatologist. In fact, such 'proto-theories' may even seem bizarre or patently false to such an expert (though I believe ours don't seem so to the experts). Indeed, as Box's aphorism says,
... all models are wrong; the practical question is how wrong do they have to be to not be useful.
The point is not to build computer programs that attempt to compete with the experts' hypothesis formulation. This is not an AI project. The point is to build devices, with whatever tools are available (including computers), that make the experts more efficient and effective. By formulating these 'proto-theories' about the translation of healthy livers into diseased livers (and vice versa), models like the ISLs provide a foil, a sounding board, to help sharpen the theories developed by the experts.
Survey paper: At the Biological Modeling and Simulation Frontier has been published in Pharmaceutical Research, Volume 26, Number 11 / November, 2009. This paper delves deeply into modeling and simulation methodology. But it's not one of those papers filled with box-and-arrow figures that yap on and on about useless abstractions! No. It starts from the practical task of simulating five different fine-grained biological systems and builds out from there, through our oft-cited co-simulation method, and out to a clear and, I think, practical explanation of the roles for deduction, induction, and abduction. The fundamental threads of symbolic grounding and model robustness in this paper grew out of the previous paper in Complexity.
More good news! Our paper: Evaluating an Hepatic Enzyme Induction Mechanism Through Coarse- and Fine-grained Measurements of an In Silico Liver has been published in Complexity, Vol. 14, No. 6. We even landed the cover, thanks to Tony's fantastic artwork! This paper has two main prongs: 1) it demonstrates a concrete example of Petroski's principle and, more generally, the scientific method; and 2) it measures a system, in this case the simulation and its referent, from multiple aspects, one coarse and one fine. The trick is that the referents (isolated perfused rat livers) are measured only at the coarse grain. Although we couldn't add enough detail (due to space restrictions) to lay out explicitly how indirect validation can be done at the fine grain, the article discusses the methodology for doing so. In addition, we describe a nice counterintuitive result, typical of ABMs of complex systems, where decreasing the likelihood of a metabolic event led to increased extraction because the liver compensated by creating more enzymes.
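The compensation loop behind that result can be caricatured in a few lines. This is a deliberately crude toy, not the ISL mechanism, and it does not claim to reproduce the paper's quantitative result; every rate constant here is invented. It shows only the feedback direction: when unmetabolized ("escaped") drug up-regulates enzyme synthesis, lowering the per-event metabolism probability drives the steady-state enzyme level up.

```python
def simulate(p, steps=2000):
    """Toy induction loop (hypothetical parameters): escaped drug
    up-regulates enzyme synthesis; enzymes decay at a first-order rate.
    Returns (steady-state enzyme level, overall extraction fraction)."""
    E = 1.0                      # enzyme level, arbitrary units
    extracted = dosed = 0.0
    for _ in range(steps):
        dose = 1.0
        f = min(1.0, E * p)      # fraction of this dose metabolized
        extracted += dose * f
        dosed += dose
        escaped = dose * (1.0 - f)
        E += 0.5 * escaped - 0.05 * E  # induction minus decay
    return E, extracted / dosed

E_hi, f_hi = simulate(p=0.5)     # high per-event probability
E_lo, f_lo = simulate(p=0.1)     # low per-event probability
# Lowering p triggers compensation: E_lo settles well above E_hi.
```

In this linear toy the compensation is only partial; the over-compensation the paper reports, where extraction actually increases, emerges from the ISL's richer spatial mechanisms, which is exactly why such results are hard to anticipate without running the agent-based model.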