Pitfalls of modelling

One of the main advantages of using simulation during the design process is its flexibility: changing a parameter to see its effect is often just a matter of altering a numerical value and re-running the simulation. The downside is that this flexibility is easily misused, and it is all too easy to run a simulation model that is either physically impossible to achieve or makes little sense when compared with the real system being modelled. It is also easy to fall into the trap of trusting the output of a model without question, forgetting that a model is only as good as the person who created it and entered its parameters. In short, your simulation may not be modelling what you think it is: although the model may appear to be functioning correctly and without numerical error, the results can still be incorrect or inappropriate for a multitude of reasons.

Conversely, a model may give results that are perfectly correct but may have been ‘over-engineered’, in the sense that too much detail has been entered into the model. This is undesirable, as increased model complexity means increased model run time and/or memory usage. We categorise such discrepancies as errors in model design.

As the purpose of simulation is usually to predict the behaviour of a design or system before construction, it is imperative that the results generated by the simulation closely agree with those of the actual device or system. Such confidence in the simulation output can only be gained by comparing the generated results to experimental results obtained from the device or system being modelled. This process is often termed validation.

How closely the simulation should match the experimental results will depend on the accuracy required, though a commonly accepted discrepancy between experiment and simulation is in the region of 5%. In some cases, close agreement is only necessary at certain points in the simulation.
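As a minimal sketch of such a comparison (the helper below and its 5% default tolerance are illustrative, not part of any standard tool), the simulated and measured values at matching points can be checked as relative discrepancies:

```python
import numpy as np

def validate(sim: np.ndarray, exp: np.ndarray, tol: float = 0.05) -> bool:
    """Return True if simulated values agree with experimental values
    to within relative tolerance `tol` (default 5%) at every point.
    `sim` and `exp` must be sampled at the same points, and `exp`
    must be non-zero wherever it is compared."""
    discrepancy = np.abs(sim - exp) / np.abs(exp)
    return bool(np.all(discrepancy <= tol))

# e.g. two component temperatures, simulated vs. measured (degC)
print(validate(np.array([49.8, 61.2]), np.array([50.0, 60.0])))  # True
```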

When validating a simulation, it is important to start with a simple experimental set-up that can easily be replicated in the simulation. That way, if the simulation and experimental results do not agree initially, it is much easier to determine why.

Once a simulation model has been fully and successfully validated, it can then be used to predict the behaviour of a real device as many times as required. However, should any of the model parameters change drastically, it may be necessary to re-validate the simulation.

To be confident in a simulation, it is usually necessary to perform two or more different validations, to be sure that the simulation agrees with experiment in several different cases. For example, in a PCB thermal model, it may be appropriate to validate against several different component sizes within the same PCB, or to compare results for three different ambient temperatures, as in the sketch below.
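As an illustrative continuation of the hypothetical validate() helper sketched above, a multi-case validation might simply check every available experimental data set and require all of them to pass:

```python
import numpy as np

# Illustrative component temperatures (degC) for the PCB example above:
# the same board, simulated and measured at three ambient temperatures.
# (All numbers are made up for the sketch; validate() is defined earlier.)
cases = {
    "ambient 25 degC": (np.array([48.1, 62.3]), np.array([47.5, 61.0])),
    "ambient 40 degC": (np.array([63.0, 77.4]), np.array([61.8, 76.0])),
    "ambient 55 degC": (np.array([78.2, 92.9]), np.array([76.5, 90.4])),
}

# The model counts as validated only if every case agrees within tolerance.
validated = all(validate(sim, exp) for sim, exp in cases.values())
print("validated" if validated else "re-examine the model")
```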

An important skill in modelling is to be able to identify the source of any discrepancies or errors within a model, whether they are numerical errors or errors in the model design.



Sources of error

The reasons why the results generated by a simulation and those obtained through experiment may not agree can be broken down into four categories. One of these, error arising from model discretisation, is examined below.

Problems with model discretisation can be isolated from other sources of error by performing what is termed ‘model convergence’. This involves running the same model several times but using different values of the time step Δt and/or the spatial steps Δx, Δy and Δz, and comparing the results generated in each case. For example, the initial value of Δt in a thermal model may be 1 s. To check for any transient error we could re-run the model with a smaller Δt of 0.5 s and compare the results generated. If the results agree sufficiently closely (say within 5%), then the model is considered to have converged, and one can be confident that no significant transient error is present. Figure 1 shows a set of possible convergence curves in a thermal model, showing the temperature of an electronic component against time for three different numerical time steps, Δt = 1, 0.5 and 0.25 s.

Figure 1: Set of convergence curves


The curves in Figure 1 suggest that the initial value of Δt = 1 s was too large, because, when the same model is re-run with Δt = 0.5 s, the component temperature curve is significantly different. Moving to a smaller value of 0.25 s makes only a small difference, hence we conclude that a Δt value of 0.5 s produces only a small amount of transient error. The same kind of procedure can be used to isolate discretisation errors due to the values of Δx, Δy and Δz, but in that case the convergence curves would be plotted against spatial position rather than against time.
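To make the procedure concrete, the sketch below automates the time-step convergence check on a deliberately simple lumped-capacitance thermal model integrated with explicit Euler. The model, its parameter values and the use of the final temperature as the comparison quantity (rather than the whole curve) are all illustrative assumptions, not taken from any particular device:

```python
def component_temperature(dt: float, t_end: float = 300.0) -> float:
    """Explicit-Euler solution of the lumped thermal model
    C * dT/dt = P - (T - T_amb) / R, returning the component
    temperature at t_end. All parameter values are illustrative."""
    C, R = 20.0, 3.0        # heat capacity (J/K), thermal resistance (K/W)
    P, T_amb = 2.0, 25.0    # dissipated power (W), ambient temperature (degC)
    T = T_amb
    for _ in range(int(round(t_end / dt))):
        T += dt * (P - (T - T_amb) / R) / C
    return T

# Halve dt until doing so changes the result by less than 5%:
# at that point the current dt is considered to have converged.
dt = 1.0
result = component_temperature(dt)
while True:
    trial = component_temperature(dt / 2)
    if abs(trial - result) / abs(result) <= 0.05:
        break               # halving dt changes little: dt is adequate
    dt /= 2
    result = trial

print(f"Converged: dt = {dt} s gives {result:.2f} degC at t = 300 s")
```

In practice the comparison would be made over the whole temperature curve, as in Figure 1, and the same loop structure applies when refining Δx, Δy and Δz.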
