The following material relates to a new class of event-driven models that is currently under development at UCSF in the Hunt Lab.
Advances in computer science have opened the door to entirely new strategies for simulating and modeling biological systems. To test current concepts about system and organ function within organisms, in normal and disease states, we need a new class of discrete, event-driven simulation models that achieve a higher level of biological realism across multiple scales while remaining flexible enough to represent different aspects of the biology. We have developed and are implementing such models. The models use Object-Oriented Design and are agent-based. Four projects are underway, and new projects are being considered. The domains of the four projects are the blood-brain barrier, the liver (currently focused on clearance and metabolism of drugs), the intestinal barrier to absorption, and tumor spheroids (an in vitro cancer model).
The following are excerpts from liver project draft documents.
Well-designed experiments on biological components anticipate a very narrow range of plausible outcomes. Such experiments hone the specific outcome to so fine a grain that one is intentionally examining only a very tiny slice of that component's potential behavior. The naturally occurring component is confined by the experimental design and methodology to essentially a single well-defined function, or to exhibiting a single attribute. Consequently, an in silico experiment, in order to validate against the in vitro experimental results, need only model the responses that characterize that tiny, well-defined function.
However, the same biological components can easily be manipulated in completely different experimental procedures to study some other well-defined function that may bear little resemblance to the one exhibited in the first experimental context. This open-endedness of interpretation is characteristic of living systems, but in silico models do not have this property. In fact, many in silico models cannot easily be reused in another experimental context because their logic is inherently tied to the experimental methodology and results that motivated their creation. FURM represents progress toward more realistic in silico models: it introduces methods that enable flexible reinterpretation and reuse of any given component within the system's framework.
FURM, and the in silico IPRL (isolated perfused rat liver) model that instantiates it, is a step toward the goal of building in silico biological systems whose behavior can more closely mimic that of living biological systems at whatever level of granularity the problem at hand demands. Other efforts toward this goal consist mostly of the integration and concurrent use of expert-designed models; the IUPS Physiome Project [Hunter 2002] and the disease models developed by Entelos [Stix 2003] are two well-known examples. Experts construct a sub-model of a specific biological component that has a pre-specified set of biological functions or roles. Several sub-models may be integrated to create one large model that is intended to behave like the corresponding biological system. If the set of biological responses simulated by the components is incomplete, then one or more new, redesigned sub-models must be built and integrated into the larger model; no capability for automated reparameterization or redesign is provided. FURM makes progress toward such a capability.
Hunter, P.J., Nielsen, P.M.F., and Bullivant, D. The IUPS Physiome Project, in Bock, Gregory, and Goode, Jamie A. (Eds.), 'In Silico' Simulation of Biological Processes (Novartis Foundation Symposium No. 247), Wiley, Chichester, 207-221, 2002.
Stix, Gary. Reverse-Engineering Clinical Biology. Scientific American, 28-30, February 2003.
FURM depends upon Object-Oriented Design (OOD). Objects are instances of classes and have both state and behavior. However, some OOD principles are purposefully broken to support the method. For example, when encapsulation is not adhered to strictly, we can support certain measurement and control operations that strict encapsulation would prevent.
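As an illustrative sketch only (the document specifies no implementation language; Python and all class and attribute names below are hypothetical, not FURM code), relaxing strict encapsulation lets a separate measurement agent read a model object's internal state without routing through the object's public interface:

```python
class FunctionalUnit:
    """Toy model object; class and attribute names are illustrative only."""
    def __init__(self):
        self._bound_solute = 0  # nominally private state

    def absorb(self, amount):
        self._bound_solute += amount


class Probe:
    """Measurement agent that deliberately reaches past encapsulation,
    reading internal state the public interface does not expose."""
    @staticmethod
    def read(obj, attr):
        return getattr(obj, attr)


unit = FunctionalUnit()
unit.absorb(3)
print(Probe.read(unit, "_bound_solute"))  # observes internals directly
```

The design choice here is that measurement is performed by a separate object, so the model object itself stays free of observation logic.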
Agents are objects that have the ability to add, remove, and modify events. Philosophically, they are objects that have their own motivation and can initiate causal chains, as opposed to merely participating in a sequence of events something else initiated. However, we will usually refer to agents in the former, more generic sense. Models are agents whose purpose is to mimic some other agent or system.
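The distinction can be sketched in a few lines, assuming a minimal discrete-event scheduler (Python; all names here are hypothetical stand-ins, not the actual FURM implementation). The agent does not merely respond to events: each firing adds the next event in its own causal chain to the event list.

```python
import heapq
import itertools


class Scheduler:
    """Minimal discrete-event queue ordered by simulation time."""
    def __init__(self):
        self._queue = []
        self._tie = itertools.count()  # breaks ties between same-time events

    def add(self, time, action):
        heapq.heappush(self._queue, (time, next(self._tie), action))

    def run(self, until):
        fired = []
        while self._queue and self._queue[0][0] <= until:
            time, _, action = heapq.heappop(self._queue)
            action(time)
            fired.append(time)
        return fired


class PulseAgent:
    """Agent in the stronger sense: each firing initiates the next link
    in its own causal chain by adding a future event."""
    def __init__(self, scheduler, period):
        self.scheduler = scheduler
        self.period = period

    def start(self, t0):
        self.scheduler.add(t0, self.fire)

    def fire(self, t):
        self.scheduler.add(t + self.period, self.fire)  # modifies the event list


sched = Scheduler()
PulseAgent(sched, period=2).start(0)
fired = sched.run(until=5)
print(fired)  # the agent fires at t = 0, 2, 4
```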
Many scientists engaged in biological modeling adopt as an important guiding principle some version of the original Occam's razor. Two versions are frequently encountered: a) use the simplest explanation consistent with the data, and b) use the simplest model consistent with the data. An often-encountered extension of b is to prefer the model that gives the best “fit” to the data using the fewest parameters. However, upon examination we find that the following is the more accurate statement of Occam's razor: a problem should be stated in its basic and simplest terms. In science, the simplest theory that fits the facts of a problem, with the fewest critical assumptions, is the one that should be selected. A model should be viewed with caution if it has been simplified by making assumptions that require important data about the system, or properties of the system, to be ignored. The challenge is to systematically improve the model by accounting for as much of the available data as possible, while systematically reducing the number of assumptions.
Models often mimic only particular, isolated phenomena. The data set treated is a small subset of the real and potential behavior space, so the models that treat those data have solution sets that are very small subsets of the real and potential behavior space of the referent. Hence, the choice of data to treat, plus adherence to Occam's razor (especially in the two earlier versions), leads to brittle models that may serve a specific need well but are very limited in their power and usefulness.
If we interpret Occam's razor as being guided by the experimental procedure (the problem statement), rather than by broad-spectrum phenomena or by some slice of data with implications for phenomena, then the principle is more useful. Problem-driven (as opposed to data-driven) use of Occam's razor allows for more flexible models that draw on various kinds of data, various formulations of hypotheses, and various other heuristics. Data set-driven use of Occam's razor restricts the models built to match a data set to that particular data set and its peculiarities.
Modelers always use an iterative technique to approach a valid model. Often, however, the evolution of the model is ad hoc and not reproducible. By embracing the standard of always iterating and creating new models, FURM encourages the creation of evolutionary operators for models. The robustness and efficacy of these operators depend, fundamentally, on a meaningful parameterization of the model. When starting on a new functional unit to be modeled, an initial, unrefined parameterization is chosen based on available experimental data. Components are added according to that initial parameterization. If the resulting models are not satisfactory, any given piece of the model may be modified or reparameterized with minimal impact on the other components of the model. This continues until the model provides reasonable coverage of the targeted solution space, according to the measures defined.
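To make the refinement loop concrete, here is a minimal sketch (Python; the mutation operator, the measure, and all names are hypothetical illustrations, not FURM's actual operators): a simple evolutionary operator perturbs one parameter at a time, and a change is kept only if the defined measure improves.

```python
import random


def mutate(params, rng):
    """Hypothetical evolutionary operator: perturb one parameter value."""
    key = rng.choice(sorted(params))
    out = dict(params)
    out[key] *= rng.uniform(0.9, 1.1)
    return out


def refine(params, measure, steps=500, seed=0):
    """Iteratively reparameterize, keeping only changes that improve
    the measure (lower = closer to the referent data)."""
    rng = random.Random(seed)
    best, best_score = params, measure(params)
    for _ in range(steps):
        candidate = mutate(best, rng)
        score = measure(candidate)
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score


# Hypothetical measure: distance of a simulated outflow from an observed value.
observed = 4.2
measure = lambda p: abs(p["rate"] * p["volume"] - observed)
initial = {"rate": 1.0, "volume": 1.0}
best, error = refine(initial, measure)
print(best, error)
```

Because only improving changes are accepted, the loop is reproducible given the seed, in contrast to the ad hoc evolution the text cautions against.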
Once an adequate initial parameterization is found, the parameter space is searched for solution sets that are partitioned according to defined measures. Bounds for the parameter space are then set to indicate the regions of the solution set for which this model validates.
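A hedged sketch of that step (Python; the grid, the measure, and the tolerance are hypothetical examples): scan a parameter grid, keep the points that validate against the measure, and report the per-parameter bounds of the validated region.

```python
def validation_bounds(grid, measure, tol):
    """Scan a parameter grid, keep points whose measure falls within
    tolerance, and report per-parameter bounds of the validated region."""
    valid = [p for p in grid if measure(p) <= tol]
    if not valid:
        return None
    return {k: (min(p[k] for p in valid), max(p[k] for p in valid))
            for k in valid[0]}


# Hypothetical two-parameter space with a single scalar measure.
grid = [{"rate": r / 10, "volume": v / 10}
        for r in range(5, 30) for v in range(5, 30)]
measure = lambda p: abs(p["rate"] * p["volume"] - 4.2)
bounds = validation_bounds(grid, measure, tol=0.2)
print(bounds)
```

The returned bounds indicate the region of parameter space for which this model validates; points outside them fail the measure at the chosen tolerance.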
When the time comes to change the model in some way, there are two options: change the parameterization or change the implementation. Changes to the parameterization fundamentally alter the model and require the complete process of refinement described above. Changes to the implementation, including the structure of the model, can also be made without changing the parameterization. If those changes lead to large shifts in the solution set produced by the model, then the parts of the implementation that were changed should be isolated and added to the parameter set. Eventually, this process leads to a terminal set of parameter values and an operator set of combinations of those parameters, producing a prescriptive encoding and a set of evolutionary operators for the model. Different models need not have the same parameterization. They only need the same underlying space onto which their respective solution sets can be projected.
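One way to picture isolating a changed piece of the implementation and adding it to the parameter set (Python; the uptake functions and all names are hypothetical examples, not FURM code): a structural choice between two implementations is lifted into a parameter, so structural variants can be searched in exactly the same way as value variants.

```python
# Two alternative implementations of the same functional step.
def linear_uptake(conc, p):
    return p["k"] * conc


def saturable_uptake(conc, p):
    return p["vmax"] * conc / (p["km"] + conc)


IMPLEMENTATIONS = {"linear": linear_uptake, "saturable": saturable_uptake}


def step(conc, params):
    """One model step; params["uptake"] selects the implementation, so a
    structural change is explored the same way as a value change."""
    return IMPLEMENTATIONS[params["uptake"]](conc, params)


p_linear = {"uptake": "linear", "k": 0.5}
p_saturable = {"uptake": "saturable", "vmax": 2.0, "km": 1.0}
print(step(2.0, p_linear), step(2.0, p_saturable))
```

Once the implementation choice is a parameter value, the same refinement and search machinery applies to it, which is the sense in which the process yields a prescriptive encoding for the model.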
For additional details email a request to Prof. Hunt.