Vensim Help

Models are representations of reality, or of our perceptions of reality. To validate the usefulness of a model, it is important to determine whether things that are observed in reality also hold true in the model. This validation can be done using formal or informal methods to compare measurements with model behavior. Comparisons can be made by examining time series data, checking whether conditions correspond to qualitative descriptions, testing the sensitivity of a model's assumptions, and deriving reasonable explanations for model-generated behavior and behavior patterns.

Another important component of model validation is the detailed consideration of assumptions about structure. Agents should not require information that is not available to them in order to make decisions. Causal connectedness must be strictly enforced, and material must be conserved.
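Material conservation, for example, holds whenever a stock changes only through its explicit flows. As a minimal sketch in Vensim's equation notation (the variable names here are illustrative, not taken from any particular model):

    Inventory = INTEG( production rate - shipment rate , initial inventory )

Every unit entering or leaving Inventory is accounted for by one of its two flows; material cannot appear or disappear anywhere else in the structure.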

Between the details of structure and the overwhelming richness of behavior, a great deal can be said about a model that is rarely acted upon. If you were to complete the sentence "For a model or submodel to be reasonable, when I _____________ it should _________," you would see that there are many things you could do to a model to find problems and build your confidence in it.

In most cases, some of the requirements for a model to be reasonable are so important that they do get tested. Often these statements apply not to a complete model but to a small component of it, or even to a single equation. In such cases you, as a model builder, can draw on your own experience and on the work of others concerning the behavior of generic structures and specific formulations.

Ultimately, however, most of the things that need to be true for a model to be reasonable are never tested. With traditional modeling techniques, the testing process requires cutting out sectors, driving them with different inputs, changing basic structure in selected locations, running many simulations, and reviewing the output. Even when this is done, it is often done on a version of the model that is later revised, and the effect of the revisions is never tested.

Reality Check equations give you a language for specifying what is required for a model to be reasonable, and the machinery to test automatically for conformance to those requirements. The specifications you make are not tied to a particular version of a model. They are separate from the normal model equations and do not interfere with the normal functioning of the model. Pressing a button lets you see whether or not the model violates the constraints that reality imposes.
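To give a sense of the flavor of such specifications, a Test Input drives the model into a chosen condition and a Constraint states what must then hold. The sketch below uses hypothetical variable names, and the exact syntax should be checked against the Reality Check equation reference:

    no food :TEST INPUT: food supply = 0
    no food no births :THE CONDITION: food supply = 0 :IMPLIES: population births = 0

When the check is run, the model is simulated with the Test Input applied, and any violation of the stated implication is reported, regardless of which version of the model equations is currently loaded.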