So far in our discussion of board fabrication and assembly we have only briefly mentioned the testing of the product, although we have placed appropriate emphasis on the way in which processes are controlled. In this Unit, our attention moves to the general issue of “test” in its widest meaning, embracing electrical, mechanical, visual and other assessments. We will be looking at why we carry out test, as well as the kinds of test we perform, concentrating primarily on the testing of bare boards and assemblies and leaving the consideration of system test to Unit 9, although the basic concepts remain the same.
When we carry out any kind of “test” on an electronic component or assembly, we are looking for defects and not just faults. In other words, we hope to screen out potential problems, not just the ones that cause the product to fail to perform at the point when the test is being carried out. And we do this because defects result in added cost: not only the cost of labour, materials, equipment and retesting, but also the total cost of any unresolved failures (including rework).
As well as adding these immediate costs, defects may:
Worse still, defects increase potential unreliability, both because repairs (especially repaired joints) may give the product a higher failure rate and because some faulty units do not get screened out.
The whole issue of “the cost of quality” will be familiar to you both from other studies and from experience, but it is well worth reminding ourselves that the true costs of quality are substantially higher than those we might measure in a manufacturing context.
If your company supplies products to an end-user, how might you measure the cost of poor quality other than as the direct cost of dealing with customer returns and meeting warranty claims?
The hidden costs of poor quality are very significant, so inspection and test are not isolated processes which occur after manufacture has finished. They are vital elements in manufacture, whose role is to ensure quality rather than just identify defects, and in consequence should be fully integrated into production. What we are trying to do in the test and inspection process is to

We have already used the terms “inspection” and “test” as if taking for granted that everyone would naturally have the same clear view of their meanings! In practice, this is far from being true. For example, you will often find “test” used (as in our Unit title) to embrace almost every kind of activity that has the aim of detecting and eliminating defects and reducing their level of incidence. Yet there are real differences between the two terms, and not just in the dictionary definition:
A test is the act of using something or doing something to an item to find out whether it is working correctly or how effective it is.
To inspect is to look at something carefully to discover information especially about quality or correctness.
Cambridge International Dictionary of English
This particular distinction helps us appreciate why inspection equipment may be able to detect a defect that is not shown by test equipment: inspection looks at the defect directly; test equipment detects not the defect itself, but the effect that the defect produces on the UUT’s performance. Take the case of a faulty solder joint where correct wetting has not occurred. Visible to the eye, this defect may appear as an immediate electrical test fault, but equally may only result in a fault condition if the assembly is flexed or temperature-cycled.
Whilst we can draw a distinction between the fault that appears and the defect that causes the fault, we also need to remember that not every type of defect can be detected by inspection. Particular examples are defects that are internal to components or hidden by them, as in the case of joints on an area array device. So typically we will need to use a combination of inspection and test, looking for defects both directly and indirectly.
This distinction between inspection and test also has an impact in the key area of locating the defect; typically inspection equipment will indicate directly the location of a defect, whereas test results will need a degree of interpretation. In the case of a solder short, for example, electrical test can determine which nodes have been shorted, but locating that short on the board in order to repair it will need reference to design detail, and there may be multiple possible defect locations. As we will see later, the blend of inspection and test that is appropriate for any application will depend on the “fault spectrum”, the range of faults that present on a particular assembly, and on their relative frequency of occurrence.
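To illustrate the interpretation step, here is a minimal sketch (with entirely hypothetical net names and pad coordinates) of how shorted nodes reported by electrical test might be narrowed down to candidate bridge locations using design data:

```python
from itertools import product

# Hypothetical pad coordinates (mm) for two nets reported as shorted by
# electrical test; in practice these would come from the CAD design data.
NETLIST = {
    "NET_A": [(10.0, 5.0), (42.0, 5.0)],
    "NET_B": [(10.5, 5.2), (80.0, 30.0)],
}

def short_candidates(netlist, net1, net2, max_gap=1.0):
    """Return pad pairs of the two shorted nets that lie within
    max_gap mm of each other -- the most plausible bridge locations."""
    pairs = []
    for (x1, y1), (x2, y2) in product(netlist[net1], netlist[net2]):
        if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= max_gap:
            pairs.append(((x1, y1), (x2, y2)))
    return pairs
```

Even this toy version shows the point made above: the electrical result ("NET_A shorted to NET_B") only becomes a repair location after reference back to the design detail, and there may be more than one candidate.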
Because “inspection” and “test” are terms that are often confused, and sometimes used interchangeably, for this Unit we decided to draw the distinction between them as follows:
In general, test methods involve making contact with the UUT, whereas inspection is primarily non-contact. However, this has to be interpreted in terms of electrical contact, because many forms of mechanical inspection involve physical contact between the UUT and the measuring instrument or gauge.
This is not the only way of dividing up the spectrum of defect-detecting activities. For example, Stig Oresje of Agilent prefers to draw a distinction between two sorts of test, ‘structural’ and ‘electrical’. The structural category includes AOI, X-ray, MDA, and ICT “stuck at” tests. The electrical tests include ICT and flying probe, and the categories of embedded test such as 1149.1 and BIST, as well as functional test. Don’t worry about the jargon at this stage, but review later whether you agree with his view.
Both inspection and test are intended to be non-destructive in a production context, although the application of power can sometimes inadvertently destroy units undergoing electrical test. On occasion, however, destructive tests are used, for example in the diagnosis of failed parts (such as by Scanning Electron Microscopy) or for establishing the ability of assemblies to withstand accelerated life conditions. Depending on the exact nature of the test, destruction may be a consequence of the method used (as with SEM), or merely describe the probability that the device quality has been impaired (as with accelerated life testing). When we come to look at equipment test in Unit 9, and specifically at accelerated life testing, we will see that some tests actually reduce expected product life. Of course this is only acceptable if the reduction is small, and the benefit of testing outweighs it.
Defects can cause:
In other words, defects can appear now or later! Table 1 lists some defects in soldered assemblies which fall into these two categories of ‘immediate’ and ‘retarded’. Note that in the table no distinction is made regarding the origin of the defects, which may be any of the production processes or materials, or even the design of the assembly.
| Immediate defect = malfunctioning of an assembly when powered-up for the first time | Retarded defect = initially functioning, but failure occurs later in life |
|---|---|
| not all tracks present | board failure (for example, delamination) |
| not all conductors acceptable | |
| component missing | component failure |
| component not acceptable | |
| solder short | displacement of solder balls causing short circuits |
| initial open joint | opening of non-soldered joint; fatigue cracking of soldered joint |
| low value of insulation resistance | corrosion or similar effect |
| definitely wrong! | you never know! |
If an assembly malfunctions, or does not function at all, because of the inadequacy of a soldered joint or component, this is an evident defect which has to be reworked to make the assembly operable. It is much more difficult to decide whether a joint is a ‘poor joint’, that is, one which has the potential to produce subsequent malfunction.
It is also difficult to decide whether a solder paste deposit has sufficient volume to give a sufficient joint once reflowed. Whilst there are established theoretical relationships between the amount of solder and the reliability of the joint, making an accurate link between the amount of solder paste and the joint volume is more difficult, as this depends on the lead geometry. There is also the problem of making suitable measurements in the test time allocated.
However, variability in joint size can be assessed readily, and is crucial in the case of area arrays, where lean or missing inner joints cannot be repaired. For that reason it is not uncommon for AOI to be used after printing specifically for critical areas where BGAs and similar devices are to be mounted.
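As an illustration of the kind of check involved, the sketch below compares a measured paste deposit against the theoretical volume given by aperture area and stencil thickness; the 70–130% acceptance band is purely illustrative, not a standard limit:

```python
def paste_volume_ok(measured_mm3, aperture_area_mm2, stencil_thickness_mm,
                    min_fraction=0.7, max_fraction=1.3):
    """Compare a measured paste deposit against the theoretical volume
    (aperture area x stencil thickness). The 70-130% window is an
    illustrative acceptance band, not a value from any standard."""
    nominal_mm3 = aperture_area_mm2 * stencil_thickness_mm
    return min_fraction <= measured_mm3 / nominal_mm3 <= max_fraction
```

Note that, as the text explains, passing such a volume check does not guarantee a reliable joint: the link between paste volume and joint volume depends on the lead geometry.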
It is difficult on a simple one-dimensional scale to express quality in terms of the potential reliability of an assembly. At one extreme is the ‘ideal’ assembly; given only a small deviation from the ideal, the quality is not reduced by a measurable degree; as the deviation increases, the reliability reduces and functionality may also degrade.
At some point, which will depend both on the nature of the assembly and its application, and on the person making the decision, this reduction in quality will be judged to be a ‘defect’ rather than a flaw:
Two definitions from IPC-AI-640 [1]
Defect: A nonconformance to specification in the product, detectable by an automatic inspection system, that violates specification limits and may render the product unfit for use.
Flaw: A nonconformance in a product that is detectable but does not violate specification limits or make the product unfit for use.
‘A defect is always a flaw, but a flaw is not always a defect’
[1] This standard is now ‘obsolete without replacement’, but the definitions are still valid.
With visual inspection, the appearance of the inspected item, such as a soldered joint, is usually compared with given samples, drawings or photographs, but the consistency of inspector judgement is a cause for concern. This situation is shown schematically in Figure 1.
Two more definitions from IPC-AI-640
False alarms An anomaly, indicated by an inspection system as a defect, that is not truly a defect. ‘False alarm rates’ are given as percentages of defects called out by the system that, upon review, are judged invalid.
Escapes Opposite of false alarm. Defects that are not seen by an inspection system. ‘Escape rates’ are percentages of valid defects that a system passes.
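These two rates are straightforward to compute once the inspection system's calls can be compared with verified results. The sketch below, using made-up item identifiers, follows the IPC-AI-640 usage quoted above:

```python
def inspection_rates(calls, truths):
    """Given the items an inspection system called defective (calls) and
    the items that are truly defective (truths), return the false-alarm
    rate (fraction of calls judged invalid on review) and the escape
    rate (fraction of true defects the system passed)."""
    calls, truths = set(calls), set(truths)
    false_alarm = len(calls - truths) / len(calls) if calls else 0.0
    escape = len(truths - calls) / len(truths) if truths else 0.0
    return false_alarm, escape
```

For example, if the system flags items 1–4 but only 2, 3 and 5 are genuinely defective, half its calls are false alarms and one of the three real defects escapes.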
Figure 2 raises many questions:
Automatic inspection solves the grey-area problem only to a limited extent, because it is a problem of principle. An automatic inspection system is not necessarily any better than a human inspector when it comes to assessing reliability from joint appearance. In most cases such a system judges the joint by only a few rather simple accept/reject criteria, such as the amount of solder in the joint and bridges between adjacent leads.
Whereas quality gradually changes as the deviation from the preferred state increases, the decision about what to do is a step function: to repair or not to repair. In practice, it is difficult to decide where the boundary should be placed between
‘good’ = leave as it is, or
‘bad’ = corrective action needed
A frequent strategy is to classify defects into one of three groups (Figure 2):
Major defects always need to be fixed; cosmetic defects should lead only to process improvement and not be reworked; what to do with minor defects has always been a subject of debate!
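The three-group decision can be sketched as a simple threshold rule. The numeric limits below are placeholders only; in practice the criteria come from the applicable acceptance standard and the product's application:

```python
def classify(deviation, minor_limit=0.2, major_limit=0.5):
    """Classify a measured deviation from the ideal (0.0 = perfect, on a
    normalised scale). The limits are purely illustrative.
    Returns 'cosmetic', 'minor' or 'major'."""
    if deviation >= major_limit:
        return "major"     # always needs to be fixed
    if deviation >= minor_limit:
        return "minor"     # rework decision is the debatable case
    return "cosmetic"      # feed back into process improvement; no rework
```

The step-function nature of the decision is visible in the code: quality degrades continuously with deviation, but the output jumps between three discrete actions.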
Historically the rework decision depended on the type of product and the environment in which it would operate. For example, military users were very demanding about solder joint standards, and would frequently insist that imperfect joints were reworked, even though the safety margin in such joints was still more than adequate. With surface mount technology, however, joints are smaller, with less safety margin, and need to be properly made in order to be adequate for any purpose. In consequence there is much less distinction between the requirements for different SM products.
What would you say are the main practical problems associated with inspection and test, and how might these be overcome?
We have seen that there is considerable scope for lack of clarity, so an important element in the contract between manufacturer and customer is the test specification, a written set of instructions that describes the tests and shows how an agreed test strategy is implemented.
The level of detail included in the test specification should be comprehensive, but limited to the level required to implement the tasks described. For example, it is not necessary to describe the detailed operation of a particular function. A test specification is typically a utilitarian document, describing only the operations to be carried out on a circuit and the results expected. As a general guide, specifications for final tests would include the following points:
For individual tests, the test specifications will add more detail, as in this example:
| Test | Example detail |
|---|---|
| In-Circuit Test | Measure resistance between Node 1 and Node 2; return measurement in ohms. Ensure reading is between: Min 50 ohms, Max 100 ohms |
| Functional test | Apply logical 1 to Pin 5; read output from Pin 7. Ensure Pin 7 is logical 0 |
| Stress screening | Apply functional test whilst maintaining ambient temperature at 50°C |
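A test specification of this kind translates naturally into a machine-readable form. The sketch below encodes the in-circuit resistance step as data and checks a measurement against it; the field names are illustrative, not any particular ATE vendor's format:

```python
# A machine-readable version of the example in-circuit test step above.
# Field names are hypothetical, chosen only for this illustration.
SPEC = [
    {"name": "ICT resistance Node1-Node2", "min_ohms": 50.0, "max_ohms": 100.0},
]

def run_ict_step(step, measured_ohms):
    """Return (passed, message) for one in-circuit resistance step,
    applying the min/max limits from the specification."""
    ok = step["min_ohms"] <= measured_ohms <= step["max_ohms"]
    verdict = "PASS" if ok else "FAIL"
    return ok, f'{step["name"]}: {measured_ohms} ohms -> {verdict}'
```

Keeping the limits as data rather than burying them in code mirrors the role of the written specification: the expected results are stated explicitly and can be reviewed with the customer.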
Test specifications detailing complex sets of signals, particularly those used for boundary scan applications may be written with the help of software packages that are also able to carry out support functions such as test simulation and verification.
In many cases the requirements of a customer will be articulated in generic specifications as well as the set of test instructions for a specific circuit. This is particularly true in high-reliability applications, where users have definite ideas as to the standards to be applied. However, wherever possible, the minimum possible number of generic standards should be used, because this reduces the possibility of error by applying consistent decision criteria, and also helps with operator training.
Many early standards derived from military/aerospace practice, but, for commercial and professional applications, frequent use is made of the range of internationally-accepted standards produced by IPC. For example, there are standards for printed circuit boards (IPC-A-600G: Acceptability of Printed Boards), assemblies (IPC-A-610C: Acceptability of Electronic Assemblies), and cables/harnesses (IPC/WHMA-A-620: Requirements and Acceptance for Cable and Wire Harness Assemblies). A useful list of the IPC standards and how they are related is available on the IPC website.
Detailed consideration of standards is beyond the scope of this Unit, but several useful ideas derive from IPC definitions:
‘Acceptable’ means that no repair is needed. This does not, however, imply that the result is perfect, or incapable of improvement, but only that the expected reliability meets the requirements. Acceptable is also referred to as ‘good’, but this term is even more misleading.
‘Defect’ means that repair is necessary, either necessary for immediate electrical function, or for reasons of reliability. There are two possible reasons for defects:
‘In control’ means that no action is required. This refers to the results of a process that is controlled, and operating within its process window.
‘Out of control’ means that process action is required. This refers to the results of a process that is out of control, and operating outside its process window. Although this may not yet produce defects, the risk is considerable that it may do so in the short term, unless the process is adjusted.
A test specification from an external customer may, in some cases, put restrictions on how much (if any) repair is permitted, and may require a formal documented procedure for dealing with the “out of control” situation. More typically, however, these issues are matters within the control of the manufacturer, but this does not mean that a manufacturer should ignore information from test!
“Test is the window on the manufacturing process.”
Nigel Adams (Aeroflex, Stevenage) at SMART Group Test Day, October 2005
Test/inspection is used throughout the manufacturing process, taking a number of different forms. In this brief section we are focusing on the different approaches, rather than on the practicalities. In the four sections that follow, we will then be looking at techniques for inspection and test and at key aspects of the test sequence.
Mechanical test may take a number of forms, and is not restricted to the “shake, rattle and roll” types of environmental test that we will see in Unit 9. For example, components and assemblies must lie within their specified dimensions, so appropriate means of measurement have to be deployed. For small components, these may take the form of gauges; for larger parts, micrometers and vernier gauges may be used, or the image of the part projected onto a measuring screen, as in the “shadowgraph” type of optical inspection equipment.
Mechanical tests don’t just apply to the “form” of the component or assembly, but also to its “fit”. Sometimes parts that lie within their nominal specification will not fit together unless some compliance is allowed for in the design. The mating of connectors is a key example of such a potential problem area.
But mechanical test does not stop there. For example, we may need to verify the force needed to actuate a switch, both for conventional mechanical switches and membranes. Or we may need to measure the rigidity of an assembly, or conversely the force needed to flex an intentionally flexible assembly, whether a flexible printed circuit or a wire harness. And don’t forget that, in some cases, the weight of a unit may also be important.
Visual standards represent an equally diverse area, looking both at aspects of interconnection that will affect electrical performance, such as solder joints, and at aspects of board “quality” that are more difficult to define, such as the cosmetic standard. In many cases, the testing applied will depend critically on the application. An example of a visual parameter with a mechanical element is the alignment of light-emitting components such as lamps or LEDs. Because the human eye is very sensitive to slight discrepancies, the alignment of components may be critical, whereas no such requirement applies to ordinary passive components.
Electrical tests can be applied to components, to modules, to assemblies, and to the completed system. At the lower end, the concentration is on the conformance of the interconnect or component, gradually changing to a focus on functionality as the UUT becomes more complex. The shift to a wider view is typically accompanied by a reduction in the detail of information collected and in the “coverage” of the test. In other words, at the system level it will be impracticable to test every element of the system, but our expectations of quality rely on more detailed work having been carried out earlier in the overall test sequence.
In our section on electrical test techniques, we will be looking at some generic techniques for measuring individual components located within a board assembly (often referred to as In-Circuit Test), distinguishing this from so-called “functional test”, where the intention is to verify that the module or assembly under test reacts in an appropriate way to electrical stimuli.
Not all electrical test is carried out externally by applying test vectors to the UUT. Particularly as systems become more complex, the designer will frequently embed within the circuit some ability to self-test. Also, depending on the application, systems may include some method of monitoring activity, both hardware excursions out-of-limits and software malfunction.
An important aspect of the total system test relates to conformance and safety testing for the system. For example, does the overall system meet the appropriate standards (such as EMC and safety) required for the product to be CE marked?
But conformance may also apply at a component level. For example, if we are making the claim that the overall system is lead-free, does this apply to every component? This is not just a matter of testing (chemical analysis; energy dispersive X-ray) but may be more a matter of asking the right questions of suppliers and keeping adequate records, especially when attempting to make the task economically viable for the smaller manufacturer.
Conformance may also mean providing evidence of reliability, and this may be tackled in several ways. Typically some verification of the manufacturing standard will be required, whether this is from analysis of test coupons provided during board fabrication or from maintaining records on assembly process controls.
Typically those customers who are concerned about long-term reliability will also require early-life failures to be removed by stress testing. This might be a simple “burn-in” test, where the product is operated for a period at elevated temperature, but more frequently it will involve a sophisticated battery of accelerated stress testing, as described in Unit 9.
So far, with the exception of the EMC test, which is typically carried out only on a prototype, the tests that we have considered would be applied either to all production items, or at the least to a considerable sample. There are two further applications of test/inspection in the widest sense that we should not forget:
For both these applications the aim is to understand how, when and why UUTs fail, so that appropriate corrective action may be taken. And for both the range of available techniques is wider, and it also becomes possible to consider methods that are destructive.
This section deals with ways of collecting information from the visual appearance of the UUT when illuminated with light from the visible spectrum – other techniques are considered in the next section. But remember that optical inspection is no longer confined to an “inspector” with magnifier or microscope; nowadays considerable use is made of automated optical inspection (AOI) methods.
Checking out the visual appearance of a product makes good sense when the product is to be handled by the end-user, but is arguably not needed in cases such as printed circuit assemblies that are hidden in the bowels of the system. Our contention, however, is that electrical test on its own is insufficient, and that some degree of inspection is helpful. This is a topic to which we will return at the end of the Unit.
Before reading further, and considering some of the practicalities of inspection, try to produce as complete a list as possible of the purposes for which one might carry out visual inspection of an assembly.
The assessment of the quality of a board assembly includes factors other than components and solder joints. Examples are:
Overall, visual inspection is an extremely complex task, in which large quantities of available information on spatial relationships, form, texture and colour have to be selectively collected and analysed, and consistent rules (the ‘quality criteria’) must be applied in order to make consistent decisions. It is not surprising that computer vision systems are generally aids to the inspection process rather than substitutes for it.
Visual inspection is a difficult task, made more difficult still by using the wrong equipment or an unsuitable environment: the ideal is quiet surroundings with good lighting. High reject rates lead to a high incidence of missed rejects, and fatigue is another major factor, which needs to be compensated for by altering the work pattern and using a prompt or check-list. Otherwise one can focus on the expected and miss the obvious!
IPC-AI-640 made the comment that “. . . product complexity and high production rates make for low inspection accuracy. Pressed for time, and facing a surface that is finely detailed and extremely monotonous in colour and topographical features, a human inspector can frequently miss defects. If inspection accuracy were as low as 80%, a not unreasonable figure, and one encountered in the PCB industry, then 20% of inspected products will have defects not caught by an inspector, or defects cited that are not truly defects. Missed defects may cause costly rejections further along in the manufacturing process; improperly cited defects may result in good product being scrapped or frequent and costly material review.”
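The arithmetic behind this observation is easily modelled. The sketch below estimates the defect level escaping a chain of inspection stages, under the simplifying assumption that each stage independently catches a fixed fraction of the defects it sees:

```python
def escapes_after_stages(defects_ppm, accuracies):
    """Defect level (ppm) remaining after a chain of inspection stages,
    assuming each stage independently catches the given fraction of the
    defects reaching it. A deliberately crude model of the IPC-AI-640
    point quoted above."""
    remaining = defects_ppm
    for accuracy in accuracies:
        remaining *= (1.0 - accuracy)
    return remaining
```

With the 80% accuracy figure cited by IPC, a 10,000 ppm incoming defect level leaves 2,000 ppm escaping a single inspection stage; a second, equally accurate, stage would still pass 400 ppm.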
In the sections that follow, we have included some guidelines as to how the inspection process can be performed in appropriate ways, and commented on the part that AOI can play in reducing the error rate, both for missed defects and “false fails”.
To identify surface mount solder joint defects a minimum magnification of ×10 is recommended. However, some operators prefer magnifiers to microscopes due to their greater depth of focus, especially when there is a single product to be inspected over a long production run. There are three types of microscope commonly used for different aspects of visual examination:
Stereo/zoom types with a magnification of ×10 to ×30. These are the most used type for inspecting solder joints and other structural and orientation features. For large boards, a deep-throat stand should be used, in order to allow room for holding the board at an angle so as to be able to view all parts of the assembly. Practice is needed to learn how to keep the board in focus whilst manipulating it through the different viewing angles.
Stereo microscopes fitted with angled mirrors. These permit viewing from three sides of the joint and were developed to reduce the level of operator skill required. This type of microscope is excellent for detailed inspection of individual joints which have already been identified as suspect by other means, or for prototype work, but may not be really acceptable for routine production inspection because of their slower speed.
Measuring microscopes with graticules are used primarily to determine the magnitude of dimensional and displacement errors on the board assembly. A magnification range from ×5 up to ×50 or ×100 is usually sufficient.
Whatever type of microscope is used, correct lighting is essential to give good results. For the more complex tasks, both bright and dark field illumination should be available. In other words, it should be possible to vary separately the levels of illumination on object and background. For some applications, and to avoid glare, the use of polarised light and polarised eyepieces is also recommended.
Despite their considerably higher cost, stereo projectors are worth considering for use as part of indexed component comparator systems on assemblies which can be inspected by viewing at a fixed angle to the plane of the board assembly (typically 60º or 90º). They are less suitable for inspecting SM solder joints.
Both monochrome and colour cameras are now widely used for inspection purposes. Given a good quality optical ‘front end’ and lighting, and a monitor of appropriate resolution and size, well-defined images can be obtained which are less tiring to view.
A significant advantage is that more than one person can view the image, and data can be recorded for later discussion in-house, or with customers or suppliers. This is of benefit to marketing, purchasing and the training department.
There are two occasions where TV cameras give problems:
TV cameras are of course the basis of the automated inspection systems which have been developed in recent years. These are generally even more complex than placement vision systems, much of the reason lying in the need to build up a detailed three-dimensional image of the assembly.
Think about any solder joint inspection that you may have carried out personally, or seen carried out, and relate this to the greater complexities of a real assembly. Then make a list of what you think would be the requirements for an automated vision system.
An AOI system has the same five basic components as any other machine vision system:
The schematic view in Figure 3 shows cameras looking down on a simplistic assembly:
Some systems have four angled cameras (which saves rotating the assembly under test) and take a number of views with lighting from different angles, processing perhaps 100 images for each area of the board. Although rotating the assembly can be avoided, it is still necessary to have X-Y positioning. Whilst this also serves to move the board under the AOI head, the main reason is that typical fields of view cover little more than a 25mm circle.
The details of how AOI systems operate vary between makers, but data for analysis is generally collected by combining the movements of part and camera, and by using sophisticated variable lighting, which is frequently ‘adaptive’, that is, self-adjusting to give the best contrast and resolution.
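To see why X-Y positioning is unavoidable, a rough count of camera positions needed to cover a board can be made from the field-of-view size. The inscribed-square simplification and the overlap figure below are assumptions made purely for illustration:

```python
import math

def fields_of_view(board_w_mm, board_h_mm, fov_mm=25.0, overlap=0.2):
    """Rough count of X-Y camera positions needed to tile a board,
    treating the ~25 mm circular field of view as an inscribed square
    and allowing a fractional overlap between adjacent fields.
    Illustrative only -- real AOI planners are far more sophisticated."""
    step = (fov_mm / math.sqrt(2)) * (1.0 - overlap)  # usable step per move
    nx = math.ceil(board_w_mm / step)
    ny = math.ceil(board_h_mm / step)
    return nx * ny
```

Even a modest 160 mm × 100 mm board needs nearly a hundred camera positions under these assumptions, which is why inspection time and stage speed are significant specifications for AOI equipment.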
The image processing software built into the AOI equipment must carry out the following basic operations:
These board markings may include what are referred to as “X-outs”, a term used to describe the marking of a bad board within a multiple panel, indicating that the defective circuit should not be processed any further. Ensuring that the product is clearly identified as unsuitable for use was traditionally carried out at the board level by marking a large X with an indelible pen!
[The rationale for proceeding with assembly on a panel with defective circuits is waste avoidance, although the material saving has to be weighed against the extra costs in processing a panel that will yield only a limited number of good circuits. These costs derive from the inescapable facts that the pads will be screened with solder paste, and the circuit will use up the same reflow and cleaning capacity as a good circuit, even though it is possible for assembly equipment to detect the marked circuit, and not populate it. For this reason, most volume manufacturers will set limits on the percentage of partially-defective panels that they will accept from the fabricator.]
In the assembly process, AOI was initially used primarily for post-reflow inspection of the quality of solder joints by checking for:
However, AOI has limitations, because it only analyses visible features – “If you cannot see it, you can’t inspect it”. With Ball Grid Arrays (BGAs), µBGAs, Chip Scale Packages (CSPs), flip chips and other hidden-connection devices, manufacturers have to opt for X-ray systems to analyse the critical solder joints of these new-generation packages.
Automated vision systems with any degree of sophistication are expensive (£100k+), but continuing improvements in computing power have made it possible to scrutinise a complete board in 10–20s. And developments in less costly systems mean that AOI is now often seriously considered for supporting assembly yields as well as carrying out final inspection.
Typical ways of using AOI systems are after printing, to check paste deposits, and after placement, to verify that all components are being placed correctly before the reflow stage. The choice of position for a particular line will depend both on the most common defect causes and on the impact of specific defects. For example, with column BGAs, inspecting the paste deposit to verify that no pads are short of paste may significantly improve both yield and joint reliability, whereas assemblies with components that are frequently supplied misorientated in their tape (such as pre-programmed integrated circuits) may benefit more from positioning the AOI system after placement. Of course, in theory, AOI can be put at multiple stages in the process, but this is always provided that the anticipated yield improvements will justify the capital and engineering costs involved.
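The question of whether yield improvements justify the capital and engineering costs comes down, in its simplest form, to a payback estimate. The sketch below is a naive model with placeholder figures, not a substitute for a real business case:

```python
def aoi_payback_months(capital_cost, boards_per_month, defects_per_board,
                       catch_rate, cost_per_escaped_defect):
    """Naive payback estimate for adding an AOI stage: months until the
    avoided downstream failure cost equals the capital outlay. Every
    input here is a placeholder; real cases must also include
    programming effort, floor space and operator time."""
    monthly_saving = (boards_per_month * defects_per_board
                      * catch_rate * cost_per_escaped_defect)
    return capital_cost / monthly_saving
```

For instance, a £120k system on a line building 20,000 boards a month, catching 80% of a 5% per-board defect incidence at £30 per escaped defect, would pay back in about five months on this crude reckoning.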
AOI systems are not a total panacea, but provide a solution that is usually cost-effective for inner layer inspection during board fabrication, and can often be justified for inspection during assembly. But they are not perfect, and have certain limitations, as summarised in Table 3.
|main advantages||main disadvantages|
|applicable to many processes||poor solder joint quality detection|
|no fixturing||high false fail/pass rate4|
|no access requirement||non-electrical test|
|relatively fast program development5||line of sight only|
Bernard Sutton, October 2005
4 AOI has a high false fail/pass rate, which can be hundreds of ppm, despite attempts by the industry to drive this down.
5 An AOI test procedure typically takes a half-day to programme, and a few days to refine.
As well as looking at a test piece using visible light, we can apply a range of different sorts of “illumination” to a product. Most of these, with the exception of electron microscopy used for reject investigation, are non-destructive. Of these other inspection techniques, the most commonly encountered is X-ray inspection.
X-ray inspection relies on the UUT being built of materials of varying transparency to X-rays. Metals, especially heavier elements such as lead, are much more effective in blocking X-rays than polymeric and ceramic materials, so that the internals of a package become visible. Fortunately, although many electronic assemblies are no longer permitted to contain lead, there is still sufficient contrast between high-tin solders and the surrounding materials to make continued use of X-ray inspection viable.
Traditional X-ray technology used an X-ray source and a conventional film, protected from light, but not from the X-rays collected after passing through the UUT – often a human body! Whilst familiar from its medical use, this method has the disadvantage that there is no immediate feedback of the results, because the film needs to be processed. Modern equipment, both medical and in electronics, uses a different detection principle, based on an image intensifier feeding through to a camera.
The simplest implementation is referred to as “transmission” mode X-ray technology, where the radiation is passed vertically through the UUT to an X-ray detector. This does not provide any information about the vertical position of components, but does allow the horizontal elements of component dimensions to be evaluated, so that features such as joint outlines, bridges and voids can be assessed.
Although the image seen represents the X-ray cross-section of the joint rather than its surface features, X-ray equipment can prove very useful for checking features of solder joints not visible by other means, for example the wetted areas of solder beneath a lead or pad, or connections to a BGA.
Usually referred to as transmission AXI (Automated X-ray Inspection), this technology is a quick way of producing a high-resolution image that is suitable for an analysis of fine-pitch components, and effective for locating solder defects.
As with AOI, more information is available if the UUT can be viewed in three dimensions rather than two. Building up a three-dimensional image can be approached in two ways:
Of these, the second is the more common approach, the off-axis angle being around 12°.
Using a 3-D system is slower than a 2-D system, with test times typically 5–15% longer, as well as more costly. However, it gives more information about defects, irrespective of their position under UUT features that might otherwise screen them.
Overall, X-ray inspection systems are effective in monitoring joint quality, and can also screen for solder splash and solder flow shorts as well as open joints, even beneath BGAs.
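The way an X-ray image reveals joint quality can be illustrated with a toy calculation. This is not any vendor's algorithm, just a sketch of the principle: solder attenuates X-rays strongly (dark pixels), while voids let more radiation through (bright pixels), so a crude void estimate is the fraction of bright pixels inside the joint outline. The function name and threshold are assumptions.

```python
# Illustrative sketch only: estimate the voided fraction of a solder joint
# from a transmission X-ray image, where solder is dark and voids are bright.

def void_percentage(joint_pixels, threshold):
    """joint_pixels: iterable of grey levels (0=black .. 255=white) inside
    the joint outline; returns the % of pixels classed as void."""
    pixels = list(joint_pixels)
    voids = sum(1 for p in pixels if p > threshold)
    return 100.0 * voids / len(pixels)

# A 'joint' of 8 pixels, two of them bright (voided):
print(void_percentage([30, 42, 38, 210, 225, 35, 40, 33], 128))  # 25.0
```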
There is a somewhat unexpected design issue with X-rays: because the whole of the assembly is imaged in cross-section, you have to consider whether parts on the underside might impede a clear view of the vital joints. Components such as tantalum capacitors are particularly effective screens of X-rays.
The following figures show typical images from X-ray equipment on an assembly line:
These photographs were kindly provided by Dage Precision Industries
Standalone X-ray systems, particularly those which don’t operate in real time, do not eliminate the need for pass/fail decisions by operators/inspectors but can form a reliable and efficient inspection method, used in conjunction with optical inspection.
More recent systems are more capable and, used as the input device to image processing software, X-ray systems are gradually becoming an element of the main production line, although an expensive one. Part of the reason for this is increased awareness of the need to monitor the solder joints under area arrays, particularly since the move to lead-free solders.
However, X-ray systems are not for everybody. For example, the systems are slow, and not ideally suited to automotive use, where the industry is looking at 15s/board test time! The main advantages and disadvantages of X-ray inspection as a technique are summarised in Table 4.
|main advantages||main disadvantages|
|non-contact test||low speed technique|
|detects solder opens and shorts||high false failure/false pass rate|
|inspects hidden joints (BGA/CSP/flip-chip)||difficult to test reworked joints, since these have a radically different geometry|
|detects potential faults due to solder quality||high cost per board|
|can measure insufficient/excessive solder||long program development times|
|process monitor for solder voids||no electrical verification|
As you will know if you have visited your local disco, or seen early advertisements for white shirts, a product viewed under ultraviolet light may present a totally different appearance to that seen in visible light. Deliberately adding materials that will fluoresce to coatings makes it easy to check for imperfections, and this technique is applied both within board fabrication and for the assessment of the integrity of conformal coatings.
At the other end of the spectrum, infrared is used for hot spot detection. In this case, we are looking at the radiation emitted by the UUT, and the inspection instrument takes the form of an infrared thermal imaging camera, which is used to build up a high-definition picture of the temperature distribution on an operating UUT. By comparing the image of the part being tested with a known good assembly, it is possible to locate areas that are hotter than expected.
Whilst a more typical application is to check that thermal interface compound has been correctly applied between a power semiconductor and a heat sink, the technique can be used at much lower power levels, as indicated in the quotation below.
Omega makes an IR temperature sensor that can detect changes in temperature to within 1°C and you can see the holographic temperature spread over very large surfaces. We use it to detect electrical shorts. We can see the short if it generates as little as a few milliwatts. We have even traced inner layer shorts on a 24-layer card.
The cost of infrared imaging will depend on the surface area of coverage, the definition of the image, and the minimum temperature difference that the system will detect. At the top end of the market, equipment can detect temperature differences of as little as 0.01°C, though this is at some cost.
Infrared has even been suggested as an alternative to ICT for final board test, being claimed to be able to make fast assessments of even densely-populated boards. In the ‘Infrared Verification’ (IRV) test, power is applied to an assembly, which heats up and radiates in a characteristic manner. The infrared image of the UUT is then compared with a thermal signature previously created using statistical techniques from a number of known good boards (defined as those that have passed functional test). IRV is reputedly capable of detecting a range of defects including shorts, opens, missing components, defective components and misaligned components.
For the sake of completeness, we are making reference in this section to two techniques used primarily in fault investigation, namely acoustic imaging and scanning electron microscopy (SEM). In the first of these, very high frequency sound waves (typically 10MHz) are coupled to the unit under investigation by immersing transducer and subject in liquid. The signal detected, which may be transmitted through the part or reflected from it, will contain information about imperfections in some joints, and delamination in items such as ceramic capacitors. For more information, search the web for "Scanning Acoustic Microscopy".
SEM is a technique generally employed on decapsulated semiconductors and similar small assemblies, scanning the part under observation with a focused electron beam and examining the results in a number of ways. More about this technique is given in our paper Investigative techniques. As with all inspection and test techniques, it is most important to choose the method best suited to the requirement, which is why our paper concludes with the table repeated below: the technique you choose depends on your application as well as your budget, and there is no universally applicable method that is guaranteed 100% effective.
|technique||main advantages||main disadvantages|
|optical microscope||easy to use||limited depth of field|
|IR microscope||silicon is IR transparent||moderate resolution|
|SEM||large depth of field; number of analysis attachments||surface topography only: no transparency in bulk materials|
|X-ray radiography||good spatial resolution; quick, easy interpretation||limited to observation of anomalies with significant changes in mass density|
|acoustic microscope||will detect any defect which results in air gap: ideal for cracks, delamination, and voids; spatial resolution moderate but well-suited to packages; moderate time to operate||correlation to metallographic cross-sections may be required to establish accurate interpretation|
|sectioning||provides detailed section of anomalies||time-consuming; limited to single plane|
In the last two sections we have considered three methods of inspection, by operator, using AOI and with X-rays. But how good would each of these be at detecting typical defects on the boards, with solder joints, or with components?
Before you look at the table we drew, try and produce your own table:
As with inspection, there are two main reasons for carrying out electrical testing during manufacture:
Whilst the second of these is crucial from the point of view of the end-customer, and tests could therefore be applied at the very end of the process, assemblers need to monitor functionality throughout the assembly process. This makes it easier to trace faults and cheaper to repair them. The task of reducing the number of faults, which is another aim of assembly houses, has to be left to controlling individual processes and to visual inspection – by the time that electrical test is possible on more than just single components or boards, all the joints have been made.
Apart from the specialised test equipment used for the verification of the value and function of individual components, four main types of test system are commonly found in the product build process:
The first three of these are considered in more detail in the sections that follow; the last is related to reliability issues, which are dealt with in Unit 9.
The issues associated with bare boards and cable harnesses are essentially the same, although the technology and scale are different. The requirement is generally to detect and locate two types of fault:
Boards and cable assemblies differ in scale and in complexity, so the way in which the tests are carried out will be different. The distinction is particularly marked when it comes to the style of automation and the number of connections made. For example, with a simple cable harness, where the expected fault level is low, a simple “buzz” test of continuity or isolation using a pair of probes is not uncommon. Although this is conceptually the same as the flying probe tester we will meet later, the speed is vastly different!
Continuity testing applies a voltage across a pair of contacts and measures the current passing. This is used to calculate the resistance of the interconnect, which is assessed against preset pass/fail criteria. For a board, maximum acceptable resistance values may range from 5Ω to 200Ω, allowing for the high resistance of thin traces and vias. For a wire loom, typical values are significantly lower.
The shorts or insulation resistance test is carried out in a similar way, but pass/fail threshold values are set much higher, typically not less than 100kΩ to 2MΩ for a board, though much higher insulation resistances would be expected for a cable assembly.
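The pass/fail logic of the two measurements can be sketched in a few lines. This is a minimal illustration, assuming the Ohm's law calculation described above; the function names are our own, and the default limits simply echo the illustrative figures in the text.

```python
# Sketch of the continuity and isolation judgements described above:
# resistance is derived from applied voltage and measured current
# (R = V/I), then compared with preset pass/fail limits.

def measured_resistance(volts, amps):
    return float('inf') if amps == 0 else volts / amps

def continuity_pass(volts, amps, max_ohms=200.0):
    """A net is continuous if its resistance is below the limit."""
    return measured_resistance(volts, amps) <= max_ohms

def isolation_pass(volts, amps, min_ohms=2e6):
    """Two nets are isolated if the leakage resistance is above the limit."""
    return measured_resistance(volts, amps) >= min_ohms

print(continuity_pass(0.2, 0.05))    # 4 ohms -> True
print(isolation_pass(100.0, 1e-6))   # 100 Mohm -> True
print(isolation_pass(100.0, 1e-3))   # 100 kohm leak -> False
```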
Occasionally at the board level, and more frequently for cable assemblies, we need to check that there is no leakage between two points, even when a significant voltage is applied between them. This is particularly important if isolation is required to ensure operator safety for an equipment. The test used is the ‘hi pot’ test, a contraction of High Voltage Potential, and the equipment is essentially a high-voltage DC power supply with control electronics for setting voltage, current, and voltage step/dwell times. This test generally involves making connections, applying a high voltage (500V, 1,000V or even higher) for a sustained dwell time, and measuring the current flow with a sensitive instrument.
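The step/dwell sequence of a hipot test can be illustrated as follows. This is a hedged sketch of the procedure just described, not any instrument's firmware: `measure_leakage` stands in for the real instrument reading, and the default voltages and current limit are illustrative assumptions.

```python
# Toy model of a hipot sequence: step the voltage up towards the target,
# dwell and measure leakage at each step, and fail immediately if the
# leakage current exceeds the programmed limit.

def hipot_test(measure_leakage, target_volts=1000.0, step_volts=250.0,
               limit_amps=1e-3):
    """Step up to target_volts; return (passed, volts_reached)."""
    volts = 0.0
    while volts < target_volts:
        volts = min(volts + step_volts, target_volts)
        leakage = measure_leakage(volts)  # stands in for the dwell + reading
        if leakage > limit_amps:
            return False, volts
    return True, volts

# A UUT modelled as 10 Mohm of insulation resistance: I = V / 10e6
print(hipot_test(lambda v: v / 10e6))                      # (True, 1000.0)
# A UUT that breaks down above 500 V:
print(hipot_test(lambda v: 1.0 if v > 500 else v / 10e6))  # (False, 750.0)
```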
The term ‘in-circuit test’ is used to describe the electrical test of individual components after their assembly to an interconnecting substrate, for example a printed wiring board. It is not intended to test the function of the circuit, but serves to confirm that all components have been assembled and are correctly interconnected and thereby gives a high level of confidence that the assembly will work to specification.
ICT uses probes to apply voltages and to sense the resulting signals, using test points designed into the assembly as shown in Figure 11.
Using only a relatively small number of probes at a time, simple (and thus fast) tests can check for continuity, isolation, and the values of passive components. In the case of an assembly, the same tests will show whether solder joints have been made and components are correctly connected through bond wire, bond pad, lead and solder pad.
How far beyond this the equipment can go depends on the number of probes that can be deployed simultaneously. If we use a method that applies all the probes at once, then the scope of test is less limited. For example, even for passive components, we can get a more accurate measurement of value by techniques such as back-driving6, which effectively isolate the component under test from connections made to other components. We can also apply power to a circuit and measure the transfer characteristics of both digital and analogue components.
Whilst the type of measurement equipment built into a typical ICT is able to check the functionality of groups of devices, it is less capable when it comes to verifying the functionality of individual devices directly, although most ICT equipment is supplied with extensive device libraries that are used to generate test instructions, in order to achieve better fault coverage when testing “clusters” of devices.
As with any electrical test, in-circuit testing follows a strict procedure, both to get the maximum information from the test and to protect the test equipment. The tester is fitted with a set of spring-loaded probes that access as many circuit nodes as possible, to maximise the number of components that are exercised by the test. The sequence of tests is fairly standard, and is indicated in Figure 12.
More information about the test sequence involved in a typical ICT situation is to be found at this link.
ICT provides instant diagnosis of assembly and component faults, although there are some limitations:
The initial system procurement is a major investment, so, in order to save the future possible expense of upgrading, assemblers will select a machine with sufficient test node positions to cater for the designs that they expect to make within the foreseeable future.
The equipment used to test individual components may be a dedicated ICT machine, but a similar function is, rather confusingly, also offered by the “Manufacturing Defect Analyser”, a piece of equipment sometimes referred to as an “analogue in-circuit tester”. If you review the specifications of typical equipment, you will find that each will test a component in isolation, and carry out complex test sequences. In fact, there is a “fuzzy area” between the two types of system: MDAs cost typically £35k, and have perhaps 300–400 test points, with limited test capabilities; ICTs are more expensive (of the order of £85k), but can be fitted with many more connections, have the ability to isolate components better by using techniques such as back-driving, and can power up the assembly and individually exercise the integrated circuits.
MDAs do not normally apply power to the board being tested, and this precludes thorough inspection of digital integrated circuits and other active devices such as operational amplifiers. However, making the (high-probability) assumption that the ICs are good greatly reduces the cost of equipment and support costs, such as programming. Added to a relatively short test time, which can be as low as 5s/board, the low cost of ownership has made MDAs very attractive, especially for smaller companies. Some of the advantages and disadvantages are summarised in Table 6.
|main advantages||main disadvantages|
|low capital costs||no functional digital verification|
|low programming and program maintenance cost||limited ability to load firmware|
|high throughput||usually no test coverage indication|
|easy to follow diagnostics||fixturing costs|
|fast shorts and opens testing||access issues|
|probing required (flux residues)|
Whatever kind of electrical test we carry out, we need to make connections to the UUT. With a simple “buzzer” test, we need only two hand-held probes; for functional test we will need a number of probes, but many other points can be accessed through whatever connections are normally made to the product, such as the connectors. It is at the level of in-circuit test that accessing the nodes creates the greatest probing “challenge”.
The extent of the problem, and the resulting cost, will depend on the number of nodes, their separation, whether the connections are on component leads or direct on the board, and whether or not connections need to be made on both sides of the board. Where this is necessary, it means making a clamshell fixture, where probes are applied from both sides (Figure 13). This is, however, expensive and difficult, and should be avoided, especially where fine-pitch probing is required. If at all practicable, test probes should be placed on only one side of the board, preferably the low-profile side.
Whether single- or double-sided probing is used, the probe head itself is normally a separately-tooled item, which interfaces with standard multiple sockets through to the test equipment. Typically machines come with fixed numbers of connections per module, but it is possible to retrofit additional modules if required.
More detailed consideration of probing is contained in our paper on ICT fixtures.
In-circuit test is reported as taking at best 5–7 days for fixture manufacture and programming, and 2–4 weeks is more average. The delay, and the associated high cost, means that the use of full ICT, where all the nodes are brought out to probes, cannot always be afforded. What alternatives are there?
Whilst there is no real answer to this question for the case where every position must be probed simultaneously, a radically different approach is possible if we reduce the number of simultaneous probe points. Whilst this sacrifices some elements of back-driving and functional test, the benefit of accepting a lower number of probes is that it allows us to use a “flying probe” tester. This uses test probes that are free to move (fly) around the UUT and are guided to specific XY locations on its surface, in a similar way to a placement machine or circuit board drill. As with a placement head, the nominal height of the point being probed is preset in software, and a small amount of “over-travel” is used to apply a small downward force.
Flying probe testers have considerably lower set-up costs than those that use multiple-pin arrays, and are attractive for small volume manufacture. However, a significant disadvantage is that at least some of the probes have to be moved across the board between successive tests, so that the test time is significantly longer than with a fixed probe array. The overall effect is that test times are at best 6 min/board, and the production rate might reduce to as few as 1–3 boards/hour when the tester is handling a range of different boards.
We have already referred to the limitations of flying probes as reflected in the range of tests that can be applied; a further limitation is that there is no “far shorts” coverage.
The flying-probe market has risen in popularity in recent years. For example, in 2000, the market was reported as growing by 40%, a much faster rate than the test market as a whole. Can you suggest reasons for this?
So which approach should we take for our application: ICT with full nodal access and a ‘bed of nails’ test fixture, or a flying probe tester? Our decision is not just about cost. Whilst a ‘bed of nails’ test fixture adds significant NRE cost, as well as the time needed to build and debug fixtures and tests, there are some positives:
By comparison, flying probe test is quick and easy to set up, and there is no test fixture to design, manufacture and purchase. However, flying probe testers are significantly slower than a fixture-based system when testing a given circuit board. This is because a test fixture makes all points of contact with the circuit board simultaneously, whilst a flying probe tester can only make the contacts consecutively.
But our decision cannot just be a response to the volume of production. Even though flying probe testers are slow, they can generally test circuit boards that have very small features and dense circuitry that cannot be tested reliably or easily using a test fixture. This means that flying probe testers may be needed, despite their slow speed, if our products have closely-spaced test nodes. In his paper The Increasing Density and Decreasing Spacing Between Test Points in PCBs, Rigo suggests that the decision is normally made by answering the following questions:
The Everett and Charles Technologies web site is a fruitful source of information on probing and testing.
Functional test examines the board for correct operation, verifying that the unit under test (UUT) has the correct ‘transfer function’, that is the correct output response is achieved for a given set of input stimuli. This concept is equally applicable to both digital and analogue assemblies.
There are three basic ways of creating a functional test equipment:
Of these basic ways, the first has been quite popular as a means of verifying board operation at the actual operating speed of the circuit. Typically used for circuits whose performance goes way beyond conventional test kit, so-called ‘performance testers’ duplicate (or at least emulate) the actual operating environment in which the board will find itself in the final product.
‘Hot bed’ or ‘hot mock up’ testers are normally one-of-a-kind testers used to verify that the board under test actually operates in the final product. Often, a hot bed tester consists of the entire product except the board that is being tested. The board is inserted into the tester and if the product appears to operate properly, the board is assumed to be good. Although diagnosis of failed boards has usually to be performed manually by a skilled technician or debug engineer who thoroughly understands the design and operation of the board, the technique is still very popular.
The ‘ready-to-use’ analogue test equipment category falls into two sections, based on capital cost. At the top end are sophisticated stand-alone systems with configurations similar to in-circuit test, with probes making contact with the board fed through a switchable matrix to in-built test equipment. Most ATE systems of this type come complete with sophisticated computer control and software.
Bench-top testers are smaller test systems (hence their name) designed to provide analogue and digital, in-circuit and functional test capability at a modest price. This solution is particularly favoured by smaller assemblers, as offering a cost-effective means of carrying out the more basic tests.
Generally, they consist of a small computer, a general-purpose card cage and a variety of stimulus and measurement boards that fit into the cage. The user can meet specific test requirements by “mixing and matching” from this selection of boards. Bench-top testers are often used for final system test in place of a hot-bed tester.
The bus-controlled system option for creating a functional test equipment is certainly the most flexible, and allows the test engineer to do automatically all the things that he/she could do on the bench with an array of test equipment. Typically interfaced into custom probe fixtures, the test equipment will be brought together under local computer control, and connected together by one or more data buses.
Originally a number of types of data bus were developed, each with different features to suit different ATE applications. Defining a data bus unique to your own instrument seemed a good idea at first, but only until users realised that this restricted their choice of peripherals! For this reason interconnecting items of test equipment generally relies on standard interfaces.
Many of the early data buses were borrowed from other fields, and one of those, the EIA RS-232C standard data communications interface, is still occasionally found among lower-end equipment. However, a more common standard is the IEEE 488.1 data bus. First defined by Hewlett-Packard (the Hewlett-Packard Interface Bus, HP-IB) specifically for interfacing programmable measurement instruments, the standard has now become world-wide as IEC 625. Whatever the name of the standard, all are fundamentally compatible, though with some variations in connectors. They are often known as GPIB, or General-Purpose Interface Bus. GPIB systems are able to interface with a very wide range of peripherals, from the obvious signal generators, oscilloscopes, pulse generators and spectrum analysers to more esoteric items such as a programmable screwdriver to adjust pre-set components!
The limitations of GPIB lie in the space it takes – peripheral instruments often need to be pulled together in a rack in order to contain their spread – and in the system data speed. Up to 15 instruments, referred to by GPIB as ‘devices’, can be connected to one ‘controller’ computer. The limitation on cable length is no more than 2 metres between each device and a maximum of 20 metres between controller and any one device. Data is transferred at 1MByte/s. This data rate is restricted by cable length and other issues, and the HS488 proposal to increase to 8 MByte/s has been controversial. Another way round the problem of data transfer is to use the GPIB bus for instrument control and a second interface such as Ethernet for data transfer, using proprietary protocol and data format.
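The GPIB topology limits just quoted can be captured as a simple configuration check. This is a sketch of the rules as stated in the text, not a tool from any GPIB vendor; the function name is our own.

```python
# Sketch of the GPIB limits described above, as a configuration check:
# at most 15 devices per controller, no more than 2 m of cable per hop,
# and no more than 20 m in total from the controller to any device.

def gpib_config_ok(cable_lengths_m):
    """cable_lengths_m: cable run (metres) to each device, in daisy-chain
    order from the controller. Returns True if within the bus limits."""
    if len(cable_lengths_m) > 15:
        return False                        # too many devices for one controller
    if any(seg > 2.0 for seg in cable_lengths_m):
        return False                        # a single hop exceeds 2 m
    return sum(cable_lengths_m) <= 20.0     # total run to the farthest device

print(gpib_config_ok([1.0, 2.0, 1.5]))   # True
print(gpib_config_ok([3.0]))             # False: hop too long
print(gpib_config_ok([2.0] * 16))        # False: too many devices
```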
Another modular (and faster) ATE format derives from a 1987 consortium of test equipment manufacturers, which started with the existing VME computer bus, and upgraded and extended it to include specifications of module size and performance. The result was the VXI bus (short for ‘VME bus extensions for instrumentation’), which forms the basis of complete, high-speed, automatic test equipment systems in modular form. VXI bus data transfer rates are in the region of 1 Gbyte/s and modules are in a range of standard sizes. Looking to avoid incompatibility with existing equipment where possible, the bus was designed to allow fairly simple interface modules to be used to connect GPIB or VME bus instruments. The VXI bus has become the basis of many automatic test equipment systems which combine both test fixturing and programmable modules within a single framework. Both enhanced speed and smaller size have given it an advantage over other bus systems.
Our brief review is not intended to be definitive, and bus-based modular systems are a complete topic in AMI4957 Test Strategies. For more information on products from a typical supplier, and an insight into LabVIEW, a main player in control software for modular systems, we recommend you to visit the National Instruments web site.
Whether ‘home brew’, commercial equipment, or assembled from modules, most ATE systems concentrate on confirming functionality and have poor diagnostic resolution. There are several ways of making it easier to use functional test equipment to diagnose faults. For example, guided probes allow conditions in other parts of the circuit to be monitored. In other cases, guidance can be provided to the technician either from electronic simulations of the circuit or from statistical information indicating which faults with the observed characteristics have occurred most frequently in the past.
ICT has the fundamental requirement that the test equipment is able to make physical contact with the board under test. However, as PCBs have become more complex, and devices have shrunk in size, direct physical access has become limited. With considerable foresight, a group known as the Joint Test Access Group (JTAG) was formed in the 1980s to develop an idea known as ‘Boundary Scan’. Boundary scan is designed to test one of the most fragile parts of a PCB, the ‘boundary’ between the board and a silicon component. This is achieved by adding extra functionality to a device, in the form of a number of boundary cells surrounding the core logic of the device. These cells provide a ‘virtual’ test point on each of the device’s pins.
The main function of boundary scan, to test the boundary of the device, is known as Extest (EXternal TEST) (Figure 16).
But, using the same virtual test points switched to a different mode of operation, the boundary scan is also able to test the internal operation of the components. This is known as Intest (INternal TEST).
Boundary scan is defined by the IEEE 1149.1 standard7, and its use requires compatible test equipment, which is available in a number of forms from PC cards through stand-alone test boxes up to cards which may be plugged into standard ICT equipment.
Whilst boundary scan is a powerful concept, it needs to be designed in from a very early stage of electronic design, because using boundary scan requires that all the components used should be 1149.1-enabled, and the PCB itself must include a 4-wire serial interface, as defined by the standard. This serial interface is connected to each component through its Test Access Port (TAP).
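The Extest principle can be illustrated with a toy simulation. This is emphatically not the full 1149.1 protocol (there is no TAP state machine or instruction register here): it simply models driving a known pattern from one device's output boundary cells and capturing it at the next device's input cells, with an open interconnect modelled as stuck-at-0. All names are our own.

```python
# Toy illustration of the Extest idea: any driven bit that fails to arrive
# at the neighbouring device's input boundary cells points at an open
# interconnect between the two packages.

def extest_check(driven_bits, board_nets):
    """driven_bits: pattern launched from device A's output boundary cells.
    board_nets: one function per interconnect, modelling what device B's
    input cell captures. Returns the indices of failing bits."""
    captured = [net(bit) for net, bit in zip(board_nets, driven_bits)]
    return [i for i, (d, c) in enumerate(zip(driven_bits, captured)) if d != c]

good_net = lambda b: b   # intact track passes the driven value through
open_net = lambda b: 0   # open circuit: the input cell captures 0

nets = [good_net, good_net, open_net, good_net]
print(extest_check([1, 0, 1, 1], nets))  # bit 2 never arrives: [2]
```

In a real chain the driven and captured values are shifted serially through the TAP, but the pass/fail comparison is essentially this.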
Boundary scan is a powerful technology that is not only capable of testing single devices, but may be extended to test clusters of devices and even a whole assembly. Simplistically, using boundary scan could significantly change the cost of test, reducing requirements for test probe access leading to simpler boards and less complex, cheaper fixtures as well as significantly reducing test times.
Boundary scan is an attractive technique, but one that has to be designed into the product from scratch, and with the major drawback that only a limited number of compliant devices is available. Nor can it be used in isolation, because power should only be applied to the UUT once any short-circuit faults have been screened out.
Built-in Self-Test (BIST) is another technique that has long development times and significant cost because of the modifications needed to the silicon devices. For example, Cisco allows 8% of silicon area for BIST purposes.
The technique is very much “what it says on the tin”, with self-test and self-monitoring built into the device by adding appropriate hardware and software. BIST can apply to the entire assembly, and this is how it was originally envisaged, but the approach is more commonly applied to elements of the circuit function, rather than the whole.
Despite the cost, and significant development time, both “distributed BIST” and boundary scan are making headway, particularly at the high-reliability and high-complexity end of the market. The boundary scan approach in particular has seen many additions which can be used at system level, or in a debug mode to test the rest of the product. For example, one might use a JTAG port to control proprietary BIST.
Whilst both boundary scan and BIST need some commitment to use, there is a parallel to software, in that one does not always have to design from scratch, but can re-use older elements in new designs, always assuming that these have been properly documented and de-bugged.
Whichever kind of tester is used, there will always be a need for fixtures and test programmes. Both of these need to be designed and procured, and there are also maintenance and operating costs that continue over the life of the product and the tester. Probe and other handling damage requires maintenance, but often a more significant cost is that each engineering change made to an assembly may require changes to both test fixturing and test program. At any point in time a manufacturer might have tens or even hundreds of board types in production, each incurring retooling costs at different stages of their product cycle.
Some general comments on fixture costs:
Both verification and diagnostic test programs must also be changed when the board design changes, and the costs incurred include test engineering time to determine and document the changes, and programming time to implement and verify/debug the resulting program changes.
Generally, verification test programs and diagnostic programs that find operational faults are more expensive than test programs for testers that diagnose defective parts and assembly workmanship errors.
A key advantage of In-Circuit Test is that the generation of test programmes can be largely automated. Testing for shorts and for analogue component values can be done using programmes created automatically from the net list by test generators. Many of these will carry out sufficient circuit analysis to create the optimal test, setting test limits and deciding how to use surrounding probes to achieve the best level of isolation.
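As a sketch of what such a test generator does at its simplest, the fragment below derives one isolation test per pair of nets from a toy net list. The net and pin names, and the 100 kΩ threshold, are invented for illustration; commercial generators perform far more circuit analysis than this.

```python
from itertools import combinations

# Toy net list: net name -> pins on that net (names invented).
netlist = {
    "VCC":  ["U1.14", "C1.1", "R1.1"],
    "GND":  ["U1.7", "C1.2"],
    "NET1": ["U1.1", "R1.2"],
}

def generate_shorts_tests(netlist):
    """Produce one isolation test per pair of nets: probe the first pin
    of each net and verify the resistance between them is high."""
    tests = []
    for a, b in combinations(sorted(netlist), 2):
        tests.append({"probe_a": netlist[a][0],
                      "probe_b": netlist[b][0],
                      "expect": "resistance > 100 kohm"})
    return tests

tests = generate_shorts_tests(netlist)
print(len(tests))  # 3 nets -> 3 pairwise isolation tests
```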
[ back to top ]
So far in this unit we have looked at generic types of inspection and test system, and inevitably whilst doing this we will have indicated the most common applications of the techniques within the product build process. But in this brief section we are looking more formally at the different levels at which these techniques are applied for component testing, bare board testing, and testing the populated assembly. There is of course a fourth member of the sequence, system level test, but our application of functional testing ideas to the whole assembly will be deferred until the wider context of Unit 9.
Component testing is a very specific field that really lies beyond the scope of this unit. However, despite the range of technologies, there are some common features in that it is important:
For simple passive components, the techniques used are similar to those adopted in ICT, although the voltages applied, for example for leakage current testing, may be significantly higher. Similarly, for active devices, circuitry similar to that used in ICT and MDA may be appropriate.
However, whether for active or passive devices, component tests will have tighter tolerances than can be the case when a part is measured in-circuit, and equipment is usually designed to give much more accurate parametric information, in order to avoid rejecting components that appear in error to lie outside the acceptable tolerance band.
For more complex semiconductors, full testing of function may not be practicable within an acceptable timeframe, and there may also be difficulties in making a sufficient number of connections. Both these aspects can be addressed by building into the silicon some capability for self-test and diagnosis, in the same way as was discussed previously under boundary scan.
For a discussion of component testing, in particular as it applies to active devices, see the DTI Electronics Design publication A Guide to Testing and Design for Test, which is downloadable as a PDF file at this link.
[ back to top ]
As well as testing the added components, we also need to ensure the performance of the printed circuit board, physically the largest component of all. So appropriate tests are carried out at all stages in the board fabrication process:
The first inspection process is to check the quality of the inner layers before lay-up and bonding. Given the fine feature size, and the importance of the clearance between features, not just their electrical isolation, AOI is a “natural” for this application. The pattern may be compared with a ‘golden’ or ‘Known Good Board’, or with reference information derived directly from the CAD data.
Once the multilayer has been laminated, the internal pattern is visible only by X-ray, but it is important to ensure that vias are correctly positioned. Here X-ray inspection is a valuable tool for identifying and measuring drill offset. Figure 19, Figure 20 and Figure 21 are typical X-rays taken during the inspection of the inner layers for drill registration.
Images provided by A&D Group (Glenbrook Technologies)
Once the board has been completed, an important stage is to check the electrical integrity of the circuit, using as a reference point the electronic net-list provided as part of the CAD package from the designer. The test is important because populating a defective board can lead to product faults that are difficult to diagnose and impossible to rectify.
What is generally referred to as ‘Bare-Board Test’ (BBT) may be carried out with some kind of bed of nails, or a flying probe tester, depending on volume of manufacture. While the techniques differ in time and cost, both are equally reliable, given the relatively flat nature of the substrate, and its freedom from any projections, such as components.
The type of probing system will also depend on the complexity of the board. In most cases double-sided fixtures will be required, and some designs may need to be tested in several passes, each with its own test fixture.
Typical bare board tests will be for continuity and isolation between all appropriate combinations of nodes, as identified in the net-list. Typically the resistance value that corresponds to an acceptable connection will be set by the fabricator at a value that maximises the likelihood that defective vias are detected. Unfortunately, isolation tests are insufficient to discriminate between tracks that are separated by the intended distance and those that are almost bridged as a result of etching defects, because of the high intrinsic surface resistivity of the laminate. Removing such faults is the task of AOI, both in the internal layers and externally.
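A minimal sketch of the pass/fail classification just described might look like the fragment below; the threshold values are purely illustrative, not real fabricator limits.

```python
# Sketch of bare-board test classification. Thresholds are illustrative.

CONTINUITY_MAX_OHMS = 10.0       # a good track/via chain must read below this
ISOLATION_MIN_OHMS  = 2.0e6      # separate nets must read above this

def classify(measurement):
    kind, ohms = measurement["kind"], measurement["ohms"]
    if kind == "continuity":
        return "pass" if ohms < CONTINUITY_MAX_OHMS else "open"
    return "pass" if ohms > ISOLATION_MIN_OHMS else "short"

readings = [
    {"kind": "continuity", "ohms": 0.4},    # healthy via chain
    {"kind": "continuity", "ohms": 85.0},   # marginal or cracked via
    {"kind": "isolation",  "ohms": 5.0e9},  # clean laminate
    {"kind": "isolation",  "ohms": 120.0},  # etch bridge
]

print([classify(r) for r in readings])  # -> ['pass', 'open', 'pass', 'short']
```

Note that the almost-bridged tracks described in the text would read like the “clean laminate” case here, which is exactly why isolation testing must be backed up by AOI.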
Where the multilayer includes embedded components, then the bare board test can be extended to verify the electrical properties, but an automated X-ray inspection may be included to check their shape and size. At the same time, this technique can be used to check internal registration. Unfortunately, the transmission approach used by most BBT X-ray systems precludes direct examination of the quality of through-connects, though these can be examined in detail by 3-D X-ray techniques.
As well as assessing the shipped product itself, board fabricators will also place heavy reliance on test coupons as a way of assuring the quality of production. A test coupon consists of a group of non-functional layout items that are used to test the manufacturing process. Lying within the panel, and thus fully representative of the item produced, the coupon is separate from the electrical circuits and outside the actual board outline, so that it can be removed from the circuits before they are sent for assembly. A typical coupon will contain features such as the narrowest possible track, tracks that are as close together as allowed by the design technology, the smallest allowable holes, and so on.
The tests carried out on the coupon will depend on the application and the quality standards. Typical tests will include a cross-section of any via holes, to check metallisation, and equally destructive tests for solderability and resistance to soldering conditions. In the case of boards that have a requirement for controlled impedance, this measurement is frequently carried out on a test coupon with well-defined characteristics, rather than on the circuit elements themselves. Should the coupon fail any tests, then it is likely that the board as a whole has not been manufactured correctly. If it passes the tests, it does no more than increase the tester’s confidence in the state of the rest of the bare board.
[ back to top ]
At this stage in the product build cycle, nominally good components and boards have been passed through to the board assembly process. Here test is an important element throughout manufacture.
Draw the PCB assembly process from goods-in, through process checks and final test to despatch. For each stage insert all the relevant test and inspection stages. You should have something similar to this one.
‘Get it right early!’ It is never too early to commence testing. Inspection and testing should start at component receipt (goods-inward), with subsequent monitoring at each process stage through production, to end-of-line (final product). Out-of-box audit inspection is also frequently carried out as a final monitoring operation. The figure in the answer to the preceding activity is a reminder of the inspection and test stages through which an assembly passes.
Each of these test stages has its part to play in ensuring that a defective product does not progress along the production line, but using all the stages is also important because the focus of each specific test is on a different part of the defect spectrum. The overall objective is to detect most of the faults without necessarily having to test every item produced. Many of these tests will thus be sample tests, the exception being final functional test, which is generally carried out on all shipped parts and aims to provide 100% defect coverage – although in reality, time may not allow testing of all possible faults.
Test and inspection stages are carefully placed throughout the assembly process, partly to avoid adding value to an item that is already defective, but more importantly to allow information about defects to be collected and fed back into the process in the form of Statistical Process Control (SPC) data to help ensure the process performs at the best possible levels.
Once quality has been built into the whole process, through design, procurement and manufacture, we can consider the regimes for the individual inspection and test stages. Space does not allow us to consider all the test and inspection tasks, but an early and key process is the screening of solder paste, where it is important to ensure that the solder paste applied to the board is in the correct quantities, in the right places and has been uniformly applied.
Solder paste inspection may be integrated as an on-line process step within the printer, or alternatively performed off-line on a small sample of boards emerging from the solder paste printing process, selected by an operator.
The quality of the joint after reflow is related to the volume of paste deposited, so ideally our inspection system should measure both the height and area of the deposit. Volume can be measured directly using a 3-D inspection system, but 2-D systems, which measure only the area covered and the alignment of the paste, are both cheaper to install and have measurement times more compatible with the cycle time required of the print process.
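The difference between the two approaches can be made concrete with a little arithmetic. The aperture dimensions, tolerances and deposit figures below are all invented for illustration: a deposit with the right footprint but printed too thin passes a 2-D area check yet fails a 3-D volume check.

```python
# Illustrative comparison of 2-D (area-only) and 3-D (volume) paste checks.
# All dimensions and acceptance limits are invented for this example.

def paste_volume(area_mm2, height_mm):
    return area_mm2 * height_mm

# Nominal deposit for a hypothetical stencil aperture.
nominal = paste_volume(area_mm2=0.25, height_mm=0.15)

# Deposit covering the correct area, but printed too thin.
deposit = {"area_mm2": 0.25, "height_mm": 0.09}
volume = paste_volume(**deposit)

area_ok   = abs(deposit["area_mm2"] - 0.25) / 0.25 < 0.10   # 2-D check
volume_ok = abs(volume - nominal) / nominal < 0.25           # 3-D check

print(area_ok, volume_ok)  # -> True False
```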
An alternative way of ensuring quality is to combine a top-down camera-based system on the printer with an off-line paste measurement system, such as the light-section microscope whose principle is described in Figure 23.
Given a correct paste deposit, the next opportunity for test/inspection to enhance yield occurs at post-placement inspection (PPI). This is used to screen boards prior to reflow, and checks for missing and misaligned components. Checking at this stage makes good economic sense, as defects can be corrected more easily than after reflow. PPI is most commonly implemented using Automated Optical Inspection (AOI) equipment.
After-reflow inspection may be required to check for problems induced by the reflow process, such as tombstoning. This is amenable to manual visual inspection of a sample, or to AOI if the whole batch needs screening. Testing after reflow is much less important than testing after wave soldering, where the defect rate is much higher. But here there is a case for some manual inspection, to prevent the occasional board with major defects, such as solder flooding or gross solder shorts, overwhelming AOI and (especially) ICT equipment.
In addition to AOI's frequent use for assessing the quality and conformance of the completed assembly, AXI is increasingly being used. When first introduced, AXI was an off-line inspection system complementing AOI. However, with improved technology, AXI has become faster and cheaper, and can be used in-line to detect a number of defects that AOI cannot (for example, bad joints under BGAs).
At the end of the line, the completed card needs to be checked electrically to be sure that:
Here our testing will combine both ICT tests on individual parts and functional test. As with bare board testing, both flying probe testers and multiple-pin arrays may be used for making connection. Arguably, the only real difference between the test equipment used is that the controlling electronics is more sophisticated for the assembled board.
In a typical application, both in-circuit and functional test equipment will be used. This is because in-circuit test has a shorter cycle time, and is better able to pinpoint defects, so that it is cost-effective to use an ICT plus repair loop to ensure that the slower functional test is applied only to assemblies that appear to be assembled correctly.
Of course, in-circuit test works well on passive components and simple silicon devices such as transistors and diodes, but less well on System-On-Chip devices such as microcontrollers. This is partly because of their complexity, but also reflects difficulties in accessing all the connections.
Functional test equipment is frequently specific to each assembly, particularly when the function is “mixed signal” that is combining both digital and analogue elements. We have already looked at how such equipment is assembled, and seen the reason for the several drawbacks of functional test:
However, despite these problems, functional test is carried out on almost all assemblies and is considered a necessary part of the test strategy as it provides the final verification that the product performs to its specification.
[ back to top ]
So far we have looked in some detail at the way that test and inspection are used as ways of ensuring quality, and have indicated what constitutes best practice. However, best practice may not be affordable, and we need to have a test strategy that will provide the best possible results within our budget. Doing this requires that we have our processes under control as far as possible, so that we know the most likely types of failure, and can “spread the net” appropriately to catch defects.
Our aim, whether by inspection or test, is to detect and remove all faulty parts. However, as has been hinted at earlier, it is not always possible to detect all faults at an economic price. Nor, in fact, is it even always possible to detect defects with specific tests.
A particularly good example of this is that of a decoupling capacitor. Where do decoupling capacitors go? The answer of course is that they are fitted between power and ground, at points on the board dictated by the need for them to decouple specific components. In order to improve clarity, decoupling capacitors may be drawn in the top corner of the circuit diagram, but they won’t work if you lay them out that way! However, from the electrical test point of view, it becomes almost impossible to measure individual capacitors, because of the presence of so many parallel paths. In-Circuit Test won’t detect the omission, and functional test rarely has the time to look in detail at all the issues, yet the absence of just one capacitor may cause a circuit to fail in the presence of an interference field.
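A little arithmetic shows why. With, say, twenty identical capacitors in parallel between power and ground (the values and tolerance below are invented for illustration), removing one changes the measured capacitance by only 5% – well inside normal part tolerance, so the omission is invisible to an in-circuit measurement.

```python
# Why ICT struggles to spot one missing decoupling capacitor: probing
# between power and ground measures all the capacitors in parallel.
# Component values and tolerance are illustrative.

n_caps, c_each = 20, 100e-9   # twenty 100 nF parts
tolerance = 0.20              # +/-20% part tolerance

c_all = n_caps * c_each
c_missing_one = (n_caps - 1) * c_each

deviation = (c_all - c_missing_one) / c_all   # 5% change in reading
print(deviation < tolerance)  # -> True: inside tolerance, so undetectable
```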
The answer to the inevitable question of how to ensure product quality is to supplement electrical test by visual inspection, which (especially when automated) will detect missing components with ease.
The coverage that you get with any test depends firstly on the type of test technique, and secondly on the time allowed:
One way of illustrating the way in which different kinds of test detect different types of fault is shown in Figure 24. This diagram derives from work by Stig Oresje from Agilent, who has carried out significant work on the fault spectrum. Although this way of illustrating the information shows what different processes can cover, keep in mind that this assumes that each of the test/inspection processes captures all the faults. As we have seen, particularly with inspection, the percentage of the faults that escape detection will depend on the time available, the level of training, and a number of operator factors. In addition, the diagram is silent about the issue of test “escapes”.
The study by Stig Oresje covered over 1 billion solder joints, spread over multiple EMS and OEM manufacturing sites.
The split shown can be expected to shift with the change to lead-free – as an example, the percentage of tombstoning has been reported as increasing.
During your study of this topic you will have looked at a number of possible faults and examined different ways of screening them. But how capable are the methods?
Before you turn to the next section, we should like you to complete the table below with your assessment of the effectiveness of each technique for detecting a fault of the type shown. Indicate the relative effectiveness of each by a number (1 = high . . . 3 = low) or use n/a to show that the technique is inapplicable at that stage.
[ back to top ]
Engineers are constantly faced with compromise, and the test process is no exception. Part of the test engineer’s job description is making sure the board and assembly can be tested thoroughly, efficiently and cost effectively. Decisions made when designing the test process will be based on:
In a typical case, cost will be measured fairly directly as indicated below, whilst the overall effectiveness of the test strategy will be assessed by its impact on that very visible parameter, the observed yield.
Figure 25 shows a typical arrangement for manufacturing inspection and test flow of a board assembly. In this case AOI is the first inspection after assembly, followed by MDA, then Functional Test. All units that pass move on to the following stages and are eventually shipped. Units that fail move to the diagnosis and repair station, after which they are resubmitted to the MDA. The proportion of units failing at each station is d.
A model for the manufacturing and test cost per unit, based on the simplified cost process presented in Chapter 10 of Test Engineering by Patrick O’Connor, is:
The observant will have noted that the repair loop feeds back into the line after AOI. This is partly because any repair will have been checked visually by the operator, but more because repaired joints tend to have more solder than joints made on the first pass, so could actually fail AOI!
Of course, this model is a simplification, even mathematically, but the key simplification is the assumption that all diagnosis and repair operations are the same. Repairs will take a variable amount of time, depending on the type of fault: for example, it takes much longer to replace a BGA than a chip resistor. And the fault spectrum for the three tests will differ, with simple components tending to fail at AOI/MDA, and the more complex during functional test.
Not only is the repair time a simplification, but even more seriously the diagnosis time will depend on the information produced by the tester. AOI typically identifies the problem at a component level; MDA will sometimes give component information, but more often only locate the fault in a specific area of the board; functional testers rarely give direct diagnostic information, and may require significant highly-skilled intervention to locate the fault.
Viewed from a manufacturing standpoint, the model needs to be extended, and supplemented with real information on the fault spectrum and on diagnosis and repair times.
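To make the discussion concrete, here is a deliberately simple per-unit cost model in the spirit of the flow of Figure 25. It is not O'Connor's formula – all costs and fail fractions are invented – and it embodies exactly the simplification criticised above: every failure is assumed to be diagnosed, repaired and retested once, successfully.

```python
# Illustrative (not O'Connor's) per-unit cost model for an
# AOI -> MDA -> functional-test flow with a diagnosis-and-repair loop.
# All costs and fail fractions d are invented for this sketch.

def cost_per_unit(stages, repair_cost):
    """stages: list of (test_cost, fail_fraction d). Each failure incurs
    one diagnosis/repair and one retest at the same stage."""
    cost = 0.0
    for test_cost, d in stages:
        cost += test_cost                       # every unit is tested
        cost += d * (repair_cost + test_cost)   # failures: repair + retest
    return cost

stages = [(1.00, 0.05),   # AOI:  cheap test, highest fallout
          (2.50, 0.03),   # MDA
          (8.00, 0.02)]   # functional test: dearest test, lowest fallout

print(round(cost_per_unit(stages, repair_cost=15.0), 2))
```

Extending the sketch with per-stage repair costs and diagnosis times, as the text suggests, is straightforward: make `repair_cost` a per-stage value in each tuple.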
Even more crucial than cost, from the point of view of most manufacturing managers, is the yield of good parts, and this is critically dependent on the average rate of defects per board. For a full understanding of the situation we need to use probability theory, but most will have an intuitive grasp that the yield for the final assembly will depend on the number of possible failure sites and the probability of failure at each site. We have grown up with this concept for explaining why, in traditional manufacturing, the more complex an assembly, the higher the chance that there will be some type of defect. Where the probability of a specific failure is independent of the others, the overall probability is given by multiplying together the probabilities of each part being good.
As an example, if we assemble 100 components on a board, and we know the probability that each will be good is 0.998, then the probability that the board will be free from component defects is 0.998 raised to the power of 100, or 0.819. If we also assume that the bare board has a probability of being good of 0.98, then the yield becomes 0.802, or barely over 80%.
If we also know that our wave soldering process gives us a 95% yield, then our overall yield into the first stage of test will be given by:
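Multiplying the figures together, under the stated assumption of independent defect probabilities:

```python
# The yield arithmetic of the worked example above.
p_component = 0.998    # probability each placed component is good
n_components = 100
p_bare_board = 0.98
p_wave_solder = 0.95   # wave soldering process yield

component_yield = p_component ** n_components   # ~0.819
board_yield = component_yield * p_bare_board    # ~0.802
overall_yield = board_yield * p_wave_solder

print(round(component_yield, 3),
      round(board_yield, 3),
      round(overall_yield, 3))  # -> 0.819 0.802 0.762
```

So fewer than four boards in five reach the first test stage defect-free, despite every individual probability being close to 1.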
Each step in the process has an error rate associated with it, and the defects introduced are cumulative. Of course, this only has a direct effect on cost, because many of the processes will include a repair step. This is shown in Figure 26, which illustrates what Brendan Davis, in his book The Economics of Automatic Testing (ISBN 007707792X, unfortunately out of print), calls a ‘yield progression’.
For a test stage, we have three possible yields, because faults escape detection, as shown by Figure 27. The difference between the actual faults per board, and the detected faults per board results in the apparent yield of the test stage potentially being higher than the real yield.
And the perception of yield is also affected by the fault coverage. Davis draws a distinction between true yield and apparent yield, commonly modelled as:
yield = e^(−FPB) and apparent yield = e^(−FPB × FC)
where FPB = faults per board, and FC is the fault coverage.
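The distinction can be checked numerically. The code below assumes the Poisson defect model commonly used for this relationship, in which true yield is e^(−FPB) and apparent yield is e^(−FPB × FC); the figures chosen are illustrative only.

```python
import math

# Poisson defect model (an assumption here): with fault coverage FC < 1,
# some faults escape, so the yield a test reports exceeds the true yield.

def true_yield(fpb):
    return math.exp(-fpb)

def apparent_yield(fpb, fc):
    return math.exp(-fpb * fc)

fpb, fc = 0.5, 0.8   # half a fault per board, 80% fault coverage
print(round(true_yield(fpb), 3), round(apparent_yield(fpb, fc), 3))
# -> 0.607 0.67
```

With 100% fault coverage the two figures coincide, which is the intuition behind the “escapes” discussion above.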
Without dwelling on the detail, it is clear that, when reviewing reported data, we need to be particularly careful to understand the basis on which it was taken. In particular, in relation to defect rates, we need to ascertain whether the defect rate refers just to the single process under consideration, or includes defects caused by the material, the design and the reliability of the elements.
Early in this section we included some figures for defect rates, and must point out that these are for the purposes of illustration, and do not represent current experience. Measures of defect rates, or more often ‘process capability’, which can be seen as the inverse of defect rate, often concentrate only on the process, as in the work being carried out at www.ppm-monitoring.com, a study that does not include faults due to material, design or reliability.
Those who would like to use real figures, including material defects, would benefit from a visit to the International Electronics Manufacturing Initiative (iNEMI) site. Their Test Strategy Project developed a spreadsheet-based economic test-strategy-model tool for comparing test strategies. The model allows inputs for yields or DPMO values out of the manufacturing process, board volumes, board cost, etc., then calculates the total cost of various strategies. The tool is available for free from iNEMI at www.inemi.org/cms/projects/ba/test_strat.html.
The combination of test strategies that yields the best result will vary with the product and with the test/inspection techniques available. This is shown in Figure 28, which takes two models for quality costs. In the “traditional process”, where the costs of failure are high, the total quality costs reach a minimum at less than 100% conformance, simply because the cost of finding the last fraction of a percent of defective parts is too expensive to be justified by the resulting quality saving. Where it is possible to use a greater degree of automated testing, the total quality costs may reach a minimum at 100% conformance.
Of course, this view is somewhat simplistic, because it assumes that automated systems allow 100% inspection, and will allow no defects to pass. Nevertheless, this quality cost model approach can be useful in understanding why automated screening can give customer benefits.
Automated systems have control benefits also: “. . . manual inspection is not ideally suited to electronic data processing as a means of collecting process and quality information. A point coming to be increasingly appreciated by manufacturers is that a process lends itself much more to quick and accurate control if the data from the process – number of pieces to a control point, number and type of defects found, etc. – can be put into a computer file.”
How we apply test and inspection procedures most effectively will depend on what the failures are, so an important element in any test strategy is to maximise the feedback, and in-circuit testers in particular are viewed by assemblers as more than a test function, but as a potent means of providing process monitoring. A great deal of specific information about components and the effect of the process is available, since ICT examines every component on every board. In an ideal world this is used to give feedback data for component specification and procurement, as well as improving the assembly process.
Collecting data from test and repair is also important as it enables “hints” to be suggested to an operator for improving efficiency for diagnosis. However, this data has to be based on the system subsequently testing OK, having been repaired properly.
Unfortunately, a great deal of useful data remains unused not because it is uncollected, but because it is not analysed. Here there is no real substitute for making full use of computer aids. The benefits of using analysis software are seen in a number of areas:
[ back to top ]
We have already seen some evidence of a move from fixed probe assemblies towards flying probes, but what other developments are taking place? We have grouped these under three headings: Technology trends; Flying without wires – which considers other ways of tackling the test problem – and finally, in A change of focus, we look at a radically different approach to assuring quality from the customer perspective.
You will be aware from your own experience that there are many changes in the nature of electronic products, and these may be expected to have an impact on the test challenge and appropriate test strategies. In their November 2002 web seminar “Designing test strategies for modern PCB assembly” Teradyne identified the following technology drivers:
The conclusion that Teradyne drew is worth quoting, which is that “No single test solution covers the entire fault spectrum – a combination of methods is required to ensure full fault coverage. An integrated test approach with complementary value-added by each test station is more often the optimal strategy.”
There are many more good things in this seminar, which we encourage you to read if you have time. You can find it as a 1.31MB PDF file at http://www.teradyne.com/atd/resource/recording/testStrategy/testStrategies.pdf.
Of course, these changes have been some time coming, and a rather earlier quotation is cited below as still worth reading, as a distillation of many of the points. Note the final paragraph in particular . . .
First, there is the continuing trend towards finer pitch and higher density. Ten to fifteen years ago, this was regarded as a result of the transition to surface mount technologies. Now, we see multilayer laminate structures for IC packaging which extend this demand towards finer pitch and higher peak local density.
A second and equally important trend is the demand for improved measurement parameters and measurement modes in electrical test. Customers (and test engineers) demand increased low-resistance continuity measurement accuracy, embedded component measurement, high-resolution measurement of embedded resistance, and on-product verification of RF impedance performance of specific signal traces in substantial volume.
It is also worth noting that the number of electronic interconnect substrates requiring verification continues to increase, as does the average complexity of these products. Put simply, the amount of test measurement we need to perform is increasing and, as noted above, the quality of these measurements must improve accordingly.
Finally, it is important to note that the cost of conducting no testing has increased. Many interconnect customers are now requiring bare board manufacturers to shoulder the responsibility for consequential damages resulting from defective substrates.
David Wilkie (Everett Charles Technologies) interviewed by The Board Authority, March 1999
A high-volume fixture-based tester can achieve 25,000 test points per second, whereas flying probe systems are much slower, typically limited to around 20 test points per second. A number of proposals have therefore been made for ‘fixture-less’ test systems, making contact by electron beam, plasma or laser, and promising an increase in performance over moving probes of as much as 10:1.
All these technologies have been promoted over the past ten years, and as yet none have developed beyond “emerging technology”. The reason is probably that the methods have common major shortcomings, to a greater or lesser degree:
Whilst the ideal “directed particle beam tester” may still lie just round the corner, little progress has been reported in recent years, and one suspects that in part this is due to significant improvements made in flying-probe systems as a practical way of achieving the ‘fixture-free’ objective.
Every supplier wants to give his customer a product that will meet his/her needs, and typically this will be done by combining functional test – in some respects the only way of ensuring compliance – with sufficient other checks to ensure a high-quality standard of build.
But this may not be enough in today’s world. For example, have we found all the potential defects that a customer may find when he/she opens the box? Here the test engineer needs to think laterally, in order to understand the customer perspective. And this is one of the reasons why an “out of the box” test has for many years been part of the manufacturing process for products such as cell phones.
For a larger product, Dell have been reported as carrying out sample testing on products as shipped. These are opened up, and every aspect of the customer experience considered, from the identification on the box, to customer documentation, down to the level of whether the buttons work in a satisfactory way.
You will observe that at least some of these aspects of the product might well not have featured on any formal specification. In consequence, the role of the quality engineer becomes much wider, in seeking to align the design, the manufacturing process and the control documentation to what the customer actually wants. This “customer experience test” is an example of the change in approach by customers, and has been reported as particularly significant for the automotive industry.
Of course, one only has a limited test/inspection resource available, and the other side of the coin, reported from several multi-nationals, is a concentration on their own quality issues, and an expectation that their suppliers will deal with the quality issues that they have presented. In other words, if you ship components or sub-assemblies to a large company, you will be expected to investigate any problems and put them right, and there might be less agreement than formerly that the problem is one where both parties will seek a mutual resolution. Customer focus cuts both ways!
[ back to top ]