Design for Thermal Issues

Unit 11: Approaches using simulation software

Figure 1 is a reminder of how the Units on thermal modelling fit together, from which you can see that this Unit follows on from the discussion of computational methods for thermal modelling explored in Unit 9. What we tried to do there was to indicate how thermal problems could at least be scoped by methods that do not involve simulation software.

Figure 1: Units on thermal modelling

Units on thermal modelling

 

Our aim in this Unit is to look at the next higher level of complexity, where models of the system are analysed using special-purpose software. In most cases, these programs tackle conduction, convection and radiation in three dimensions, dividing the system into “control volumes” of different sizes, and using numerical methods to solve complex differential equations that are derived from basic physical principles, such as conservation of energy, mass and momentum.

Whilst the text starts by looking generally at the principles of such software, the bulk of this Unit takes the form of an introduction to the way in which Flomerics simulation software is used, both to provide a specific example of how such software works and to give you the information on the tools that you will need to tackle the assignments.

Unit contents

For convenience, the text for this Unit has been split into two parts, the detail of the FLOTHERM software implementation continuing on the next page. The links above will take you directly to whichever page is required, and [ back to top ] will return you to this page from either section.

Introduction to thermal simulation tools

As was apparent from the revised product development cycle discussed in Unit 4, employing a combination of intuition, manual calculations, spreadsheets and a CFD tool will suffice for most thermal analysis situations. Intuition comes with experience and is thus outside the scope of what can be achieved here(!), and the use of manual calculations and spreadsheets was covered in Unit 9. Which leaves the CFD tool . . .

Where CFD tools fit in

Recommended viewing

As one of our resources for this module, we are making available part of a presentation that one of the course authors made to a group of engineers attending a one-day awareness seminar. In this (after lunch) session, Murray MacCallum is describing the ways in which thermal analysis tools can be used for modelling and simulation. The presentation has been “tidied up” a little, but is very much a live event, which is why we haven’t provided a script.

 

Murray’s presentation shows CFD tools in the context of a very wide range of approaches – towards the top, but not perhaps at the very highest level of complexity, as you will see in the section “Extending the scope of the simulation”.

CFD tools tend to be focused on solving just thermal problems, but there is a range of tools even within this broad category, some concentrating on board level analysis, whilst more sophisticated products function at the system level.

Recommended reading

We recommend that you read PCB thermal analysis: power dissipations and thermal boundary conditions, where Eric Madsen reflects on the challenges of thermal analysis.

 

Some of the points Madsen makes are particularly worth thinking about:

Madsen concludes with the comment: “Each tool has its place in PCB thermal analysis, and you may find that more than one tool is required for your application”. As we say later, should you need to employ thermal simulation for your own work, we strongly advise you to evaluate a range of software.

What CFD tools do

We have seen in Unit 5 a derivation for the heat diffusion equation, based on the conservation of energy, which applies to conduction. When we come to consider convection as well, we hit an altogether higher level of complexity, and detailed theoretical models become almost impossible to solve. However, based on comparable equations for the conservation of mass and momentum, we can derive similar partial differential equations that describe the time-averaged properties for parameters such as fluid velocity, fluid temperature and pressure.

Our solution has to deal simultaneously with all the relevant differential equations, and tackling these from first principles is not a process for the faint-hearted! Which is why we have deliberately focused on using software tools where the underlying principles are embedded and not made obvious to the user, who is thus left free to concentrate on comparing the consequences of design decisions and adopting best practice thermal design.

Quote

“All you have to do is draw a picture on the computer of your circuit board and indicate . . . the locations of heat sources, the material properties of the board and components, and the shape and locations of the air vents. These are called boundary conditions . . . the program . . . divides up the three-dimensional space you drew into thousands of little pieces, and for each piece, or grid cell, it writes a set of energy balance and mass balance equations . . . which it proceeds to solve [by a series of successive approximations until these converge] on a solution.”

Tony Kordyban, Hot Air Rises and Heat Sinks, pp33–34

 

Kordyban’s description of the process – defining the elements and the boundaries, and creating a grid – is accurate, but the exercise is far from trivial. Also, the results produced by the simulation are highly dependent on the model the user creates, so life is unfortunately not usually quite as simple as Tony Kordyban suggests. Or did he have his tongue firmly in his cheek?!

The quotation above is deliberately edited, and one key piece of information we omitted about the solution process is that the equations need to be solved not once but many, many times, so the total time taken may be very significant. Whilst modern algorithms and computers have made the task easier, it will frequently be “the next morning” before a converged solution becomes available.
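To make the idea of cell-by-cell energy balances solved by successive approximation more concrete, here is a small sketch (in Python) that solves steady-state conduction on a two-dimensional grid of cells by repeated sweeps, stopping when the largest change per sweep falls below a tolerance. It is a toy, conduction-only illustration of the principle – a real CFD solver also couples the mass and momentum equations – and every number in it (grid size, conductivity, dissipation, tolerance) is an arbitrary example value.

```python
# Toy illustration of iterative solution of cell energy balances:
# steady 2-D conduction on a uniform Cartesian grid, solved by repeated sweeps.
# Boundary cells are held at ambient; one interior cell carries a heat source.
# All numbers are arbitrary and chosen only to make the example run quickly.

import numpy as np

nx, ny = 20, 20            # grid cells in each direction
dx = 1e-3                  # cell size, m (1 mm)
k = 2.0                    # thermal conductivity, W/m.K (a PCB-like value)
t_ambient = 25.0           # boundary temperature, degC
q_cell = 0.5               # heat dissipated in the "component" cell, W
thickness = 1.6e-3         # board thickness, m

T = np.full((nx, ny), t_ambient)
source = np.zeros((nx, ny))
source[10, 10] = q_cell / (dx * dx * thickness)   # volumetric source, W/m^3

tol = 1e-4
for sweep in range(20000):
    max_change = 0.0
    # visit every interior cell; each cell temperature is set so that conduction
    # to its four neighbours balances its internal heat source
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            new_T = 0.25 * (T[i-1, j] + T[i+1, j] + T[i, j-1] + T[i, j+1]
                            + source[i, j] * dx * dx / k)
            max_change = max(max_change, abs(new_T - T[i, j]))
            T[i, j] = new_T
    if max_change < tol:          # "converged" when the updates become negligible
        break

print(f"converged after {sweep + 1} sweeps")
print(f"hottest cell (monitor point): {T.max():.1f} degC")
```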

How CFD tools work

CFD tools manipulate the equations relating to each of the grid cells that make up their internal model of the overall system. Although there are many differences between different tools, they also have much in common.

Typically there will be separate definition stages for:

Most tools will have libraries of common components to simplify the input task. Some, but not all, tools will be able to import mechanical features of the design. This is always quicker than drawing the design using the (often rudimentary) drawing tools within the simulation package. However, it may introduce unwanted and unhelpful detail, which at best will increase the number of grid cells and slow down the computation, and at worst may totally defeat the solution engine.

The grid definition is likely to be automated, but fitting the grid to the requirement, both mechanically and in terms of getting higher detail at the points of most interest, is something that generally requires at least “massaging” by the Design Engineer in order to get a model that will solve within the time available.

Having input the information, the software will generate the appropriate energy and mass balance equations, and attempt their solution. This is always an iterative process, and is usually based on an initial “first guess”, after which successive iterations of the calculations yield results that approach the final value.

However, given the complexity of the calculation, and the complex effects of turbulence, an exact solution will not result. Rather, a position will be reached where the result is known to within an acceptable tolerance and uncertainty. But this “convergence” cannot be relied upon in every case, particularly if the system is too complex.

The tool then has the task of presenting the results to the designer. Although the steady-state temperatures of key components and any monitoring points will be available in tabular form, generally the most useful presentation mode is graphical – coloured contour lines showing temperature, and arrows representing air flow. The temperature distribution is particularly useful, but the arrow plot can be misleading. Read van Wijk’s paper for an illustration of the different types of flow visualisation that you may encounter.

Recommended reading

Beyond the arrow plot – New methods for flow visualization, Jarke van Wijk, Electronics Cooling, January 1999
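As a purely illustrative aside, the sketch below produces the kind of display described above – filled temperature contours with arrows representing air flow – from made-up analytic fields rather than from any CFD output, simply to show the presentation style.

```python
# Illustrative plot only: filled temperature contours with velocity arrows,
# generated from invented analytic fields rather than a real CFD solution.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 0.2, 60)          # a board-sized region, m
y = np.linspace(0.0, 0.1, 30)
X, Y = np.meshgrid(x, y)

# synthetic "hot spot" temperature field and a simple circulating flow pattern
T = 25.0 + 60.0 * np.exp(-((X - 0.12)**2 + (Y - 0.05)**2) / 0.001)
U = -np.sin(np.pi * Y / 0.1) * 0.2
V = np.cos(np.pi * X / 0.2) * 0.05

fig, ax = plt.subplots(figsize=(7, 3.5))
contours = ax.contourf(X, Y, T, levels=20, cmap="jet")
fig.colorbar(contours, ax=ax, label="Temperature (degC)")
step = 3                               # thin the arrows so the plot stays readable
ax.quiver(X[::step, ::step], Y[::step, ::step],
          U[::step, ::step], V[::step, ::step], color="black")
ax.set_xlabel("x (m)")
ax.set_ylabel("y (m)")
ax.set_title("Synthetic temperature contours with air-flow arrows")
plt.tight_layout()
plt.show()
```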

 

How the schemes operate in practice can be seen from two papers that we referenced in earlier Units.

Supplementary information

Thermal Simulation of Telecommunications Racks, Wim Nelemans, June 2002 (PDF file, 1.82MB),
or revisit the presentation at this link which was recommended in the Module overview, and is our adaptation of Wim’s paper.

CFD and EDA tools: The Interoperability of FLOTHERM® and Board Station®/AutoTherm®: Concurrent Design of a Motorola PowerPC™ RISC Microprocessor-based Microcomputer, Kromann, Pimont and Addison, 5th International FLOTHERM User Conference, Paris, September 1996 (PDF file, 704KB)

 

CFD tools can be applied at all levels from components to system, and in different amounts of detail. Read Valenta’s paper for an example of a simulation applied to a ball grid array. Note the close correspondence between measurement and calculation, and the use that can be made of this modelling approach, both for setting dissipation limits for the package, and optimising the thermal performance of an assembly.

Recommended reading

Thermal Modelling of Ball Grid Arrays, Pavel Valenta, 5th International FLOTHERM User Conference, Paris, September 1996

 

So far, we have concentrated on the common features of packages, but one respect in which packages may differ significantly is in the nature of the grid. The package we have chosen happens to use a Cartesian system, where the cells are cuboids of different aspect ratios – much more about this later in the Unit!

But this is not the only way of tackling the problem, and sometimes it is not the best way of modelling a physical space. For example, you can fill a volume with tetrahedra, or use a hexagonal mesh. Yet other programs concentrate on managing the cell boundaries, rather than the cell volume. For those who are interested in understanding more about how the computations are carried out, and about the differences between finite element, finite difference, finite volume and boundary element methods, see our supplementary note on Numerical methods.

Note that the results that you get depend on the model that you build, so it is important to cross-check the output of your simulation package against other methods, and to validate it by physical measurement.

Extending the scope of the simulation

Most simulations will provide information on temperature and airflow, both in the steady state and during expected changes, such as switch-on or fan failure. Prudent designers, or those operating close to the margins, may also carry out some statistical analysis to account for likely variations in components.

But this kind of thermal simulation is really rather narrow. For example, it is helpful to know the effect of the thermal environment on the reliability of the mechanical assembly and on the function of the circuitry. For the latter, the thermal simulation needs to be linked to a simulation of the complete electronic system, so that its performance can be analysed in its thermal environment, influenced both by external sources of heat and by heat generated within the system itself. This linkage allows the designer to examine the sensitivity of functional parameters to local temperature fields, and thus verify the stability of the system. For this, the thermal simulation will probably have to include some transient analysis as well as a steady-state evaluation at the extremes of the boundary environment.

One of the challenges of this kind of multiple simulation is exchanging information between the modelling packages. An example of this is given in Multiphysics Modelling for Electronics Design, where John Parry and his colleagues at Flomerics talk about the benefits of a fully-integrated software environment, based on a common data model and user interface, which integrates thermal, stress and EMC calculations. They suggest using this early in the design stage of an electronics product to provide fast solutions to thermal, stress and EMC issues.

Quote

“Like so many simulation methods, the analysis provides insights into the design, not design solutions. Designers must understand what they are doing and why.”

Parry, Bailey and Addison, Multiphysics Modelling for Electronics Design

 

The integration between different analysis tools is highlighted by the practical examples discussed in the following papers. Note how the SAE paper shows links between thermal, structural and optical analysis, and the CALCE presentation shows both the link to the detailed design, and the way in which a common data set is used to produce simulations of both thermal and environmental response, looking both at thermal cycling stress and at the response to vibration. This mirrors the fact that in the real world, especially in severe environments such as aerospace and automotive, challenges don’t come singly!

Recommended reading

Integrated Analysis of Thermal/Structural/Optical Systems, B. Cullimore, T. Panczak, J. Baumann, Victor Genberg, and Mark Kahan, Society of Automotive Engineers, 2002 (PDF file, 106KB)

Introducing CalcePWA 4.0, University of Maryland, 2003 (PDF file, 7.25MB)

 

[ back to top ]


Some guidelines for thermal simulation tools

In this section we comment on just three guidelines, concerned with using the simplest possible approach, expecting and allowing for some uncertainty in the simulation results and taking a structured approach to setting up the model within the software. There are (at least) two other generic guidelines about which we should remind you:

Key information

It is very important that you appreciate the basis on which the modelling is carried out, so that you can anticipate those areas where there may be a divergence between the model and reality – for example, marginal situations where turbulence may or may not be present, depending on the exact conditions.

 

Keep it simple!

Often, software packages allow for several different levels of complexity. A good example of this is the treatment of air boundaries in a model, such as on the surface of a PCB. The first level of complexity might be to use a convective heat transfer coefficient that represents the air flow as a single bulk property. The next level might use a temperature-dependent value of convective heat transfer coefficient, and a further level may actually model different types of air flow, such as laminar or turbulent. With each increase in model complexity, a greater amount of realism can be achieved, but at the expense of a slower-running simulation. It is therefore up to the modeller to choose the level of complexity required to achieve results that adequately describe the device or system being modelled. A valid approach is to begin with a simple model and then successively increase the level of complexity, noting the change in results each time. This way, it is possible to see what each added level of complexity actually contributes.
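To put some numbers on the first two levels of complexity described above, the sketch below estimates the surface temperature of a small vertical board dissipating a fixed power, first with a fixed heat transfer coefficient and then with a simple temperature-dependent natural-convection correlation (h ≈ 1.42(ΔT/L)^0.25 for a vertical plate in air – a textbook approximation, not the treatment used by any particular package). All the numbers are arbitrary; the point is simply that the more complex level requires an iterative solution of its own.

```python
# Two levels of convective-boundary complexity for a vertical board in still air:
#   level 1 - a single fixed heat transfer coefficient h
#   level 2 - a temperature-dependent h from a simplified natural-convection
#             correlation for a vertical plate in air, h = 1.42*(dT/L)**0.25
# Textbook-style approximations with arbitrary numbers, for illustration only.

Q = 5.0          # power dissipated, W
L = 0.10         # board height, m
A = 0.10 * 0.16  # surface area of one face, m^2 (one face only, for simplicity)
t_ambient = 25.0 # degC

# Level 1: fixed h
h_fixed = 10.0   # W/m^2.K, a typical still-air guess
dT_fixed = Q / (h_fixed * A)
print(f"fixed h = {h_fixed:.0f} W/m2K  ->  surface at {t_ambient + dT_fixed:.1f} degC")

# Level 2: h depends on the (unknown) temperature rise, so iterate to consistency
dT = 20.0                                   # first guess for the temperature rise, K
for _ in range(50):
    h = 1.42 * (dT / L) ** 0.25             # W/m^2.K, vertical plate in air
    dT_new = Q / (h * A)
    converged = abs(dT_new - dT) < 0.01
    dT = dT_new
    if converged:
        break
print(f"temperature-dependent h = {h:.1f} W/m2K  ->  surface at {t_ambient + dT:.1f} degC")
```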

Keeping things simple is certainly worth keeping in mind when tackling any thermal calculation, whether by hand or computer simulation. Tony Kordyban (probably our favourite guru) puts it very cogently when he reminds us in the quotation below of what is a thermal parallel to Pareto’s famous observation (the so-called ‘80:20 Rule’).

Quote

“The first thing you do in a thermal simulation is gather up all the information about the board you want to analyse: the bill of material, mechanical drawings, component specifications.

“The second thing you do is throw most of that information away. The trick is not to throw away anything important. You throw away information because simulation is a simplification of reality. A circuit board might have more than 1,000 components. There is no way to include all of them in a thermal simulation – and there really is no need. Most of those thousand components are tiny chip capacitors and resistors that put out only about as much heat as a flea walking on a dog. They don’t get hot, and they don’t heat up other parts. Most of the power is given off by 10 or 20 components. These are the parts you need to know the temperatures of, the ones that cause all the thermal trouble.

“Step 2A is to pick out all of the high power components from the Bill of Materials. Step 2B is to look for temperature-sensitive components, like crystal oscillators. These components may not generate much heat, but they may have a specially low operating temperature limit, so you want to know how hot they get anyway. Step 2C is to toss out everything else and start simulating.”

Tony Kordyban, Hot Air Rises and Heat Sinks, pp116–7

 

Plan for the inevitable uncertainties

Investigating how a design is affected by factors such as design layout, material type and ambient conditions allows us to improve our design and to test how any variability in design (due to inevitable component tolerance) will affect performance. This avoids costly errors and rework, and can be done at an early stage, without having to build anything. If we take a typical board assembly, by varying the parameters we can answer questions such as:

This type of modelling is referred to as “sensitivity analysis”, because we are examining the sensitivity of the solution to variations in the input parameters. A helpful way of presenting the results is to produce a comparative series of curves or values for each parameter. Figure 2 is an example of this, showing a series of simulation-generated curves giving the maximum temperature of an embedded resistor of dimensions 0.3×0.3 mm, for different values of heat transfer coefficient to the air and for a range of board sizes. The heat transfer coefficient was varied between 10 W·m⁻²·K⁻¹ (representing a PCB operating in still air under natural convection) and 50 W·m⁻²·K⁻¹ (representative of high-power fan-assisted cooling or forced convection).

Figure 2: Sensitivity of component temperature to heat transfer coefficient in a range of PCB sizes.

Sensitivity of component temperature to heat transfer coefficient
in a range of PCB sizes.

 

The curves in Figure 2 show that the heat transfer coefficient has a marked effect on component and board temperature only when the board is small.

Sensitivity analysis can also be used to optimise a simulation model. For example, when setting up a model, it is sometimes difficult to know how much detail is required to provide results with the desired accuracy. Sensitivity analysis can be used to determine what effect different levels of detail in a model will have on the output results. Using this approach, in some cases models can be greatly simplified, whilst in others the results may indicate that a model needs to be more detailed.
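To show how such a sweep might be scripted, the sketch below varies the heat transfer coefficient and board size for a crude lumped model of a single heat source on an isothermal board, in the spirit of Figure 2. The model and all its numbers are invented for illustration – the curves in Figure 2 came from a full simulation, not from anything this simple.

```python
# Sensitivity sweep in the spirit of Figure 2: maximum temperature of a small
# heat source versus heat transfer coefficient, for several board sizes.
# Uses a crude lumped model (isothermal board, both faces convecting), which is
# NOT the CFD model behind Figure 2 - it only shows how a sweep can be scripted.
import numpy as np

Q = 1.0                                  # component power, W
t_ambient = 25.0                         # degC
h_values = np.linspace(10.0, 50.0, 5)    # W/m^2.K, still air through to fan-cooled
board_sides = [0.02, 0.05, 0.10]         # square board edge lengths, m

print("h (W/m2K)  " + "  ".join(f"{side*1000:>4.0f} mm board" for side in board_sides))
for h in h_values:
    # both faces convect; board assumed isothermal at the component temperature
    temps = [t_ambient + Q / (h * 2 * side * side) for side in board_sides]
    print(f"{h:8.0f}   " + "  ".join(f"{t:12.1f}" for t in temps))
```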

Exercise

A transient simulation modelling the temperature of a component as it heats up is found to agree with experimental results to within 5% in the steady state. However, a much larger error between simulation and experiment occurs in the transient as the device heats up. List and explain the possible causes of such a transient error and how you might rectify each possible problem.

After you have written your answer, click here

 

Recommended reading

In this particular case, the uncertainty that we introduced was on the amount of cooling available. But a real situation may well have other uncertainties. As Ake Malhammar points out in his paper Uncertainties in Thermal Design, a number of areas of uncertainty can lead to significant error, both in the simulation itself and in the conclusions we draw.

 

Key information

Variation and uncertainty don’t come singly, so the designer will have to make a judgement on the likely interaction between variables and on any common causes that might influence several parameters simultaneously.

 

Creating the model

The first step towards creating a project in a modelling package is to determine what is to be achieved!

Establish the requirements

At the beginning of a project, it is of great benefit to take the time to assess exactly what is to be modelled before starting. A sketch that shows the major features and dimensions will help to scope the case, and allows the user to decide how much needs to be included.

At the outset, identify the key features of the model. It is in these areas that it may be necessary to concentrate more grid cells to increase the resolution of the results. Also place monitor points in key areas: these allow the variables to be tracked at particular locations during the solution process, quickly giving feedback as to whether the results are sensible, and increasing confidence when leaving a simulation running for an extended period.

Keep it simple

The general recommendation when modelling anything thermally is, in the first instance, to keep it as simple as possible. The addendum to this is that the model should be no more complex than is needed for the temperature resolution being sought. This concept is often overlooked, and leads to overly-complicated models that may not converge in a thermal simulator.

Three practical suggestions:

Extent of the solution domain

The solution of the variables is calculated within a cuboid of fluid known as the ‘solution domain’. The size of this will depend on the case in question. Another consideration in setting the solution domain is whether only the inside of a box is required, or whether some of the environment around the box needs to be included. Indeed, it may not even be necessary to model the whole box.

Component or board

Modelling a component or board will not generally require the model to be extended to a solid box surface. This type of simulation may well be investigating methods of local heat removal, so a comparison can be made by treating the objects as suspended in free air, or by defining a particular airflow over the objects in question.

Computer or electronics module

When modelling a whole system such as a computer or electronics module the casing itself will form an important part of the model and will need to be represented. Often the solution domain will end at the casing and the user will need to define the outside temperature and how the heat is transmitted from the exterior of the box – the ambient setting. This is normally acceptable for cases that have forced ventilation or are sited away from significant obstructions.

Heat transfer on the sides of the box

If the outside of the box is designed to aid the heat transfer processes (for example fins) then it is difficult to define a heat transfer coefficient and the structure will have to be modelled explicitly. Hence the solution domain will have to be extended beyond the box.

Objects near the box

If there are local objects near the box in question, then the solution domain will need to be extended to include these. A good example of this would be tabletop electronics with vents in the bottom. Here the gap between the bottom of the box and the table will have a significant effect on airflow within the system and will have to be included in the solution domain.

[ back to top ]


Practical simulation tools

A number of “heavy-duty” simulation packages are in widespread use, all with broadly similar capabilities, but detailed differences in operation and in their level of integration with other software tools. After considerable evaluation of what was commercially available, and in particular the packages operating at a similar level made by Fluent (Icepak®) and Blue Ridge Numerics (CFdesign), the authors selected software produced by Flomerics as the core thermal analysis tools for the module. This decision, which was especially complicated due to the high quality of the other offerings available, was made for the following reasons:

  1. CFD analysis is available at all levels:
    • Component (FLOPACK)
    • Board (FLO/PCB)
    • System (FLOTHERM) – reduced component and board modelling is also possible.
  2. The user interface is good and easily understood by Windows users.
  3. The learning curve for the software is reasonably short.
  4. Although XP-based, it can be configured for remote access over Citrix.
  5. FLOTHERM has a large market share in the electronics industry (85% in Europe; 50% in the US), so is a tool that you may come across in your work.
  6. It is a stand-alone tool, although it may be interfaced with the main vendor CAD/CAM tools (for example, Mentor, Cadence, Zuken, Pro/Engineer, AutoCAD).

For the module assignments, you will be using Flomerics software, and thus gain familiarity with what is a professional “top-end” tool. However, always bear in mind:

Should you need to employ thermal simulation in your own work, we would strongly advise you to review the options and evaluate a range of software.

Supplementary information

There are many suppliers of CFD software, and a wide variety of solutions, ranging from open-source programs to complex and expensive tools. The tools vary enormously in the way that they work, in their ease of use, and in their cost.

Take a look at this ANSYS web page CFD codes list – commercial products. Whilst the list is not complete, and appears quite dated, the breadth of entries illustrates the many different types of solution, and the ways in which academic groups have developed simulations for different purposes.

When examining any tool, take into account the original intention of the product, its ease of adaptation to your situation, the “friendliness” of the interface, and the support available either direct from the supplier or through the user community.

 

[ back to top ]


FLOTHERM overview

The Flomerics suite of thermal simulation tools has three components, which are overviewed in the sub-sections that follow. We start with FLOTHERM, not only because this was the first program to be developed, but also because it remains the principal tool in the suite. FLOPACK is a support package, aimed at improving the quality of device modelling; FLO/PCB uses the same principles as FLOTHERM, but has a different interface, and presents as a user-friendly, cut-down and less expensive version of the parent package, targeted at designers of board assemblies.

FLOTHERM has a high-performance CFD software kernel that may be employed for both thermal modelling and simulation, and which is specifically designed to investigate thermal issues within electronic systems, sub-systems and packages. FLOTHERM can be used at four different levels:

FLOTHERM can handle:

By importing basic structural and component information directly from a CAD database, FLOTHERM gives the designer a foundation on which to model a variety of different cooling scenarios, and to evaluate the impacts of changing such design aspects as the location and/or size of fans, vents, baffles, and air filters. The modelling package can also be used to simulate differences in ambient temperature conditions and in thermal load parameters, such as power supplies, electronic components, heat sinks and board materials.

The designer is able to predict real-world system performance quickly and accurately, and to observe the effects of design changes on thermal behaviour; this allows product parameters to be optimised during the earliest stages of the design process, before building and testing the prototype.

General principle of operation

FLOTHERM simulates heat transfer (by conduction, convection and radiation) and air flow (natural and forced convection) using the Navier-Stokes equations¹. Named after Claude-Louis Navier and George Gabriel Stokes, these are a set of partial differential equations that describe the motion of fluids, and which are derived by applying the three conservation laws of mass, momentum and energy to an arbitrary ‘control volume’ in which values of temperature, pressure and velocity are calculated.

1 Much more information on the Navier-Stokes equations at http://www.navier-stokes.net/.
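For reference, a commonly quoted simplified form of these conservation equations – for an incompressible fluid with constant properties – is set out below. The formulation actually solved by FLOTHERM (which includes buoyancy and turbulence treatment) is more elaborate, so treat this only as a reminder of the structure of the equations.

```latex
% Conservation of mass, momentum and energy for an incompressible,
% constant-property fluid (a commonly quoted simplified form).
\begin{align}
  \nabla \cdot \mathbf{u} &= 0
    && \text{(mass)} \\
  \rho \left( \frac{\partial \mathbf{u}}{\partial t}
        + (\mathbf{u} \cdot \nabla)\,\mathbf{u} \right)
    &= -\nabla p + \mu \nabla^{2}\mathbf{u} + \rho \mathbf{g}
    && \text{(momentum)} \\
  \rho c_p \left( \frac{\partial T}{\partial t}
        + \mathbf{u} \cdot \nabla T \right)
    &= k \nabla^{2} T + q'''
    && \text{(energy)}
\end{align}
```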

The space that is being represented in the model (called the ‘solution domain’) is split up into a set of non-overlapping, contiguous finite volumes over each of which the conservation equations are expressed in algebraic form. These finite volumes are referred to as ‘control cells’, ‘grid cells’, or quite simply as ‘cells’. The use of these arrays of grid cells is a key aspect of FLOTHERM’s operation, and the software employs a Cartesian grid system as shown in Figure 3:

Figure 3: 3-D solution grid and solution domain

3-D solution grid and solution domain

 

The more grid cells in the model, the more points are calculated, and the better will be the resolution of the case, but at the expense of additional computer overheads to calculate the results.

The conservation equations are both non-linear and ‘coupled’. That is, the value of a variable depends on surrounding values of that variable and also on other variables. Hence the equations need to be solved in an iterative manner, until the errors in the conservation equations are at an acceptable level (Figure 4).

Figure 4: Error convergence with multiple iterations

Error convergence with multiple iterations

 

The calculated values for field variables such as temperature and pressure are located at the centres of the grid cells. However, the values of the velocity components are located at the faces of the grid cells, as shown in Figure 5.

Figure 5: Grid cell representation

Grid cell representation

 

The finer the grid used, the greater the number of grid cells, and the more closely the algebraic equations approximate to the differential equations from which they originated.

A key concept in grid configuration within FLOTHERM is the aspect ratio: the relationship between the length (L1) and width (L2) of a grid cell. Thus:

aspect ratio = L1/L2

As a general rule, this ratio should be as close to unity as possible:

In addition to the above, abrupt transitions from large to small grid cells should be avoided. One of the most common user errors is to create overly small grid cells, which greatly increases the chance of aspect ratio violations. These points should be borne in mind when configuring grid cell sizes in the assignments.

This is illustrated in Figure 6.

Figure 6: Aspect ratios

Aspect ratios
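As a simple illustration of the aspect ratio rule, the sketch below takes the grid-line positions of an invented non-uniform Cartesian grid and reports the cells with the worst aspect ratios. The grid and the warning threshold are arbitrary example values, not limits prescribed by FLOTHERM.

```python
# Check cell aspect ratios on a hypothetical non-uniform 2-D Cartesian grid.
# Grid-line positions and the warning threshold are arbitrary example values.
import itertools

x_lines = [0.0, 1.0, 2.0, 2.1, 2.2, 4.0, 8.0]    # mm, grid-line positions in x
y_lines = [0.0, 0.5, 1.0, 1.5, 3.0, 6.0]         # mm, grid-line positions in y
threshold = 10.0                                  # flag cells with worse ratios

dx = [b - a for a, b in zip(x_lines, x_lines[1:])]
dy = [b - a for a, b in zip(y_lines, y_lines[1:])]

worst = 0.0
for (i, w), (j, h) in itertools.product(enumerate(dx), enumerate(dy)):
    ratio = max(w, h) / min(w, h)                # aspect ratio = L1/L2 (>= 1)
    worst = max(worst, ratio)
    if ratio > threshold:
        print(f"cell ({i},{j}): {w:.2f} x {h:.2f} mm, aspect ratio {ratio:.1f}")
print(f"worst aspect ratio in the grid: {worst:.1f}")
```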

 

Problems with aspect ratios can be alleviated by the use of the grid smoothing tool within the software. This operation adds grid lines to ease the transition between problematic areas and thus reduce the aspect ratios. This is illustrated in Figure 7.

Figure 7: Grid smoothing

Grid smoothing

Click here for a larger view of the image
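One way to picture what grid smoothing does is as the insertion of extra grid lines until neighbouring cell widths never change by more than a chosen factor. The sketch below does this in one dimension for illustration; the expansion factor and the starting grid are arbitrary, and FLOTHERM’s own smoothing algorithm is not necessarily implemented this way.

```python
# One-dimensional illustration of grid smoothing: insert extra grid lines so
# that adjacent cell widths differ by no more than a chosen expansion factor.
# The factor and the starting grid are arbitrary; FLOTHERM's own smoothing
# tool is not necessarily implemented this way.

def smooth(lines, max_ratio=1.5):
    lines = sorted(lines)
    changed = True
    while changed:
        changed = False
        widths = [b - a for a, b in zip(lines, lines[1:])]
        for i in range(len(widths) - 1):
            small, large = sorted((widths[i], widths[i + 1]))
            if large / small > max_ratio:
                # split the larger of the two cells in half to ease the transition
                k = i if widths[i] > widths[i + 1] else i + 1
                lines.append(lines[k] + widths[k] / 2.0)
                lines.sort()
                changed = True
                break
    return lines

coarse = [0.0, 0.1, 0.2, 0.3, 2.0, 6.0, 14.0]     # abrupt fine-to-coarse transition
print("before:", coarse)
print("after: ", [round(x, 3) for x in smooth(coarse)])
```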

Adding fine grid over the entire solution domain increases the computational burden (and therefore solution time) for no real benefit. For this reason, the finest grid is normally reserved for regions of the domain where gradients of the variables are expected to be greatest, and where greater precision in the thermal solution is required in a small area. The technique known in FLOTHERM as ‘localising the grid’ enables the user to add grid in specific areas and not in others. Grids can be localised on objects, assemblies or regions and may be ‘nested’, as illustrated in Figure 8, which shows one grid localised around an enclosure within the solution domain (outlined in red), and another nested and localised around a heatsink assembly (shown in black).

Figure 8: Grid localisation and nesting

Grid localisation and nesting

 

A typical grid mesh is shown in Figure 9: this illustrates the non-uniform distribution of cells across a sub-assembly. The grid is densest across regions of high detail (here, a small internal assembly) and coarsest across low-detail areas (outside the sub-assembly box).

Figure 9: Cell mesh

Cell mesh

 

[ back to top ]


FLOPACK overview

FLOTHERM and FLO/PCB allow users to assemble models from libraries that contain a large number of thermal models for existing components, avoiding the need to create them from scratch. However, as you will have noticed in Unit 10 (Real parts), packages exist in many variants, and layout designers will know that even the best libraries need to be supplemented by details of additional parts.

FLOPACK is designed to generate reliable, accurate thermal models of IC components, test boards, standard test harnesses and other associated parts with the minimum of effort, and consists of a collection of ‘Smart Part’ modules installed on a central web server. Users enter data describing the device using a standard browser: for instance, for a Ball Grid Array IC package, users would enter:

These parameters are then used by FLOPACK to generate a FLOTHERM model that can be downloaded to the user’s local machine and analysed using FLOTHERM.

In order to simplify matters for the user, only the bare minimum of necessary data needs to be input manually. FLOPACK then makes “intelligent guesses” for the rest of the information in the design sheet, using built-in manufacturing and design rules common to most component suppliers, to generate a reasonably accurate package model from a reduced set of input parameters. Many part families are supported, including microBGA, PBGA, CBGA, TBGA, BOC, QFN, QFP, PLCC and bare die with multiple heat sources.

The tool is web-based because:

A number of modelling options are provided for each package, allowing the user to make simplifications to the models where appropriate. For example, a solder ball array might be represented either as an assembly of individual elements (accurate, but computationally less efficient) or as a single block with lumped thermal properties.
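To give a feel for what ‘lumped thermal properties’ means in the solder ball example above, the sketch below estimates an effective through-thickness conductivity for a ball array treated as a single block, by area-weighting the solder and the air between the balls. This is a common back-of-the-envelope approximation, not FLOPACK’s compact-modelling method, and all the dimensions and material values are invented for illustration.

```python
# Back-of-the-envelope "lumped" model of a BGA solder ball array: replace the
# array with a single block whose effective conductivity is an area-weighted
# mix of solder and the air between the balls.
# A common hand approximation, not FLOPACK's method; all values are invented.
import math

n_balls = 256                 # ball count (16 x 16 array)
pitch = 1.0e-3                # ball pitch, m
ball_diameter = 0.6e-3        # m
k_solder = 50.0               # W/m.K (typical order for a tin-based solder)
k_air = 0.026                 # W/m.K (air between the balls)

block_area = n_balls * pitch * pitch                      # footprint of the array block
ball_area = n_balls * math.pi * (ball_diameter / 2) ** 2  # solder cross-section
fill_area = block_area - ball_area

# parallel heat paths through the ball layer (through-thickness direction)
k_effective = (k_solder * ball_area + k_air * fill_area) / block_area
print(f"effective through-thickness conductivity ~ {k_effective:.1f} W/m.K")
```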

Supplementary information

More detailed information on FLOPACK can be found at http://www.flopack.com, but you will probably want to refer to this only after you have read Unit 12, learnt about ‘detailed’ and ‘compact’ model types (FLOPACK will create both two-resistor and DELPHI compact models), and been through the FLOPACK and SmartParts walkthroughs.

[ back to top ]


FLO/PCB overview

FLO/PCB is essentially a reduced and simplified version of the board analysis section of the FLOTHERM tool, aimed at facilitating collaboration between product marketing, electronic engineers and mechanical engineers on board assembly design, particularly during the conceptual phase of the design process.

A significant change in approach is that FLO/PCB promotes a conceptual design process that is derived from the functional block diagram, where changes made to the diagram are reflected in the physical layout and thermal representation. Key software features are:

Supplementary information

More detailed information on FLO/PCB can be found at http://www.flopcb.com/.

Exercise

Now visit our Supporting material on Flomerics software and go through the preliminary walk-through on FLO/PCB. We are starting with the intermediate-level tool in order to give you an insight into how the packages as a whole work, and what the issues are.

It must be stressed that it is very useful for the new user to progress through the package tutorials that are contained within the tool. These are a structured set of exercises that facilitate a sequential development and understanding of the features of FLO/PCB in the shortest possible time.

Expect to take around 5–6 hours for this preliminary session, including familiarising yourself with the software tool by going through the tutorial material.

 

When you have completed the FLO/PCB walk-through, go to the next page for the following sections.

[ back to top ]