If your company supplies products to an end-user, how might you measure the cost of poor quality other than as the direct cost of dealing with customer returns and meeting warranty claims?
The hidden costs of poor quality are very significant, so inspection and test are not isolated processes which occur after manufacture has finished, but vital elements integrated into production, whose role is to ensure quality rather than just identify defects.
Only when quality has been built into the whole process (Figure 1), through design, procurement and manufacture, should the regimes for separate inspection and test stages be considered.
‘Inspection’ and ‘test’ are terms which are sometimes used interchangeably. The distinction drawn in this document is that:
Both inspection and test are intended to be non-destructive in a production context, although the application of power can sometimes inadvertently destroy units undergoing electrical test. On occasion, however, destructive tests are used, for example in the diagnosis of failed parts (such as by Scanning Electron Microscopy) or for establishing the ability of assemblies to withstand accelerated life conditions. Depending on the exact nature of the test, destruction may be a consequence of the method used (as with SEM), or merely describe the probability that the device quality has been impaired (as with accelerated life testing).
Defects result in added cost in labour, materials, equipment and retesting, to which must be added the total cost of any unresolved failures (including rework). Defects may also:
Worse still, defects increase potential unreliability, both because repaired joints have a higher failure rate and because some faulty units do not get screened out.
Defects can cause:
In other words, defects can appear now or later! Table 1 lists some defects in soldered assemblies which fall into these two categories of ‘immediate’ and ‘retarded’. Note that in the table no distinction is made regarding the origin of the defects, which may be any of the production processes or materials, or even the design of the assembly.
‘Immediate’ defect = malfunctioning of an assembly when powered up for the first time
‘Retarded’ defect = initially functioning, but failure occurring later in life
| ‘Immediate’ defect | ‘Retarded’ defect |
| --- | --- |
| not all tracks present | board failure (for example, delamination) |
| not all conductors acceptable | |
| component missing | component failure |
| component not acceptable | |
| solder short | displacement of solder balls causing short circuits |
| initial open joint | opening of non-soldered joint; fatigue cracking of soldered joint |
| low value of insulation resistance | corrosion or similar effect |
| definitely wrong! | you never know! |
If an assembly malfunctions, or does not function at all, because of the inadequacy of a soldered joint or component, this is an evident defect which has to be reworked to make the assembly operable. It is much more difficult to decide if a joint is a ‘poor joint’, that is, one which has the potential to produce subsequent malfunction.
It is also difficult to decide whether a solder paste deposit has sufficient volume to give a sound joint once reflowed. While there are established theoretical relationships between the amount of solder and the reliability of the joint, making an accurate link between the amount of solder paste and the joint volume is more difficult, as this depends on the lead geometry. There is also the problem of making suitable measurements in the test time allocated.
However, variability in joint size can be assessed readily, and is crucial in the case of area arrays, where lean or missing inner joints cannot be repaired. For that reason it is not uncommon for AOI to be used after printing specifically for critical areas where BGAs and similar devices are to be mounted.
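As a rough illustration of the volume relationship discussed above, the paste deposited through one stencil aperture can be estimated from the aperture area and the stencil thickness. All figures here are assumed purely for the example; real paste is roughly half metal by volume, so the reflowed joint is smaller than the printed deposit.

```python
# Rough estimate of the solder delivered by one stencil aperture.
# All dimensions are illustrative, not taken from any real design.
aperture_l_mm = 2.0          # aperture length
aperture_w_mm = 0.25         # aperture width
stencil_t_mm = 0.15          # stencil foil thickness
metal_fraction = 0.5         # typical metal content of paste, by volume

deposit_mm3 = aperture_l_mm * aperture_w_mm * stencil_t_mm
joint_mm3 = deposit_mm3 * metal_fraction

print(f"paste deposit: {deposit_mm3:.4f} mm^3")
print(f"solder after reflow: ~{joint_mm3:.4f} mm^3")
```

Even this simple calculation ignores the lead geometry mentioned above, which is why linking paste volume to joint reliability remains difficult in practice.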
It is difficult on a simple one-dimensional scale to express quality in terms of the potential reliability of an assembly. At one extreme is the ‘ideal’ assembly; given only a small deviation from the ideal, the quality is not reduced by a measurable degree; as the deviation increases, the reliability reduces and functionality may also degrade.
At some point, which will depend on the nature of the assembly and its application, and on the person making the decision, this reduction in quality will be judged to be a ‘defect’ rather than a flaw:
Two definitions from IPC-AI-640¹
Defect: A nonconformance to specification in the product, detectable by an automatic inspection system, that violates specification limits and may render the product unfit for use.
Flaw: A nonconformance in a product that is detectable but does not violate specification limits or make the product unfit for use.
‘A defect is always a flaw, but a flaw is not always a defect’
1 This standard is now ‘obsolete without replacement’, but the definitions are still valid.
With visual inspection, the appearance of the inspected item, such as a soldered joint, is usually compared with given samples, drawings or photographs, but the consistency of inspector judgement is a cause for concern. This situation is shown schematically in Figure 2.
Two more definitions from IPC-AI-640
False alarms An anomaly, indicated by an inspection system as a defect, that is not truly a defect. ‘False alarm rates’ are given as percentages of defects called out by the system that, upon review, are judged invalid.
Escapes Opposite of false alarm. Defects that are not seen by an inspection system. ‘Escape rates’ are percentages of valid defects that a system passes.
Figure 2 raises many questions:
Automatic inspection solves the grey area problem only to a limited extent, because it is a problem of principle. An automatic inspection system is no better than a human inspector when it comes to assessing reliability from joint appearance: in most cases such a system judges the joint against only a few rather simple accept/reject criteria, such as the amount of solder in the joint and bridges between adjacent leads.
Whereas quality gradually changes as the deviation from the preferred state increases, the decision about what to do is a step function: to repair or not to repair. In practice, it is difficult to decide where the boundary should be placed between
‘good’ = leave as it is, or
‘bad’ = corrective action needed
A frequent strategy is to classify defects into one of three groups (Figure 3):
Major defects always need to be fixed; cosmetic defects should lead only to process improvement and not be reworked; what to do with minor defects has always been a subject of debate!
Historically the rework decision depended on the type of product and the environment in which it would operate. For example, military users were very insistent about solder joint standards, and would frequently insist that imperfect joints were reworked, although the safety margin in such joints was still more than adequate. With surface mount technology, however, joints are smaller, with less safety margin, and need to be properly made in order to be adequate for any purpose. In consequence there is much less distinction between the requirements for different SM products.
What would you say are the main practical problems associated with inspection and test, and how might these be overcome?
The primary use of standards, many of which have been accepted internationally, is in applying consistent criteria for decisions. Several useful ideas derive from IPC definitions:
‘Acceptable’ means that no repair is needed. This does not, however, imply that the result is perfect, or incapable of improvement, but only that the expected reliability meets the requirements. Acceptable is also referred to as ‘good’, but this term is even more misleading.
‘Defect’ means that repair is necessary, either necessary for immediate electrical function, or for reasons of reliability. There are two possible reasons for defects:
‘In control’ means that no action is required. This refers to the results of a process that is controlled, and operating within its process window.
‘Out of control’ means that process action is required. This refers to the results of a process that is out of control, and operating outside its process window. Although this may not yet produce defects, the risk is considerable that it may do so in the short term, unless the process is adjusted.
Before reading further, try to produce as complete a list as possible of the purposes for which one would carry out visual inspection of an assembly.
Inspection may be carried out by operators or machine vision systems. Whichever method is chosen, consistent rules (the ‘quality criteria’) must be applied in order to make consistent decisions.
The assessment of the quality of a board assembly includes factors other than components and solder joints. Examples are:
Overall, inspection is an extremely complex task, in which large quantities of available information on spatial relationships, form, texture and colour have to be selectively collected and analysed. It is not surprising that computer vision systems are generally aids to the inspection process rather than substitutes for it.
Visual inspection is a difficult job, made more difficult by using the wrong equipment or an unsuitable environment. The ideal environment is quiet, with good lighting. High reject rates also lead to a high incidence of missed rejects. Fatigue is another major factor, which needs to be compensated for by altering the work pattern and using a prompt or check-list; otherwise one can focus on the expected and miss the obvious!
If inspection efficiency is as low as 80% (a not unreasonable figure, and one encountered in the industry), 20% of inspected products will have undetected defects, or defects found that are not really defects. Missed defects may cause costly rejections later in the process; when defects are wrongly attributed, good product may be scrapped or reworked unnecessarily. It is best, however, to err on the side of caution: escaped defects are potentially much more serious than ‘false alarms’ because of the possibility of scrapping product at higher build levels, or having field failures.
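The arithmetic behind such figures can be sketched in a few lines. The board count, defect rate and false-alarm rate below are assumed purely for illustration; ‘false alarm rate’ is treated here as the fraction of reported defects that turn out to be invalid, following the IPC-AI-640 definition quoted earlier.

```python
def inspection_outcomes(boards, defect_rate, efficiency, false_alarm_rate):
    """Estimate escapes and false alarms for an inspection step.

    boards           -- number of assemblies inspected
    defect_rate      -- average true defects per board
    efficiency       -- fraction of true defects the inspector catches
    false_alarm_rate -- fraction of reported defects that are invalid
    """
    true_defects = boards * defect_rate
    caught = true_defects * efficiency
    escapes = true_defects - caught              # passed on downstream
    # total call-outs = valid catches plus invalid call-outs
    reported = caught / (1 - false_alarm_rate)
    false_alarms = reported - caught
    return escapes, false_alarms

escapes, false_alarms = inspection_outcomes(
    boards=1000, defect_rate=0.5, efficiency=0.80, false_alarm_rate=0.10)
print(f"escaped defects: {escapes:.0f}")
print(f"false alarms: {false_alarms:.1f}")
```

With 500 true defects and 80% efficiency, 100 defects escape; a 10% false-alarm rate adds roughly 44 invalid call-outs on top of the 400 valid ones.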
To identify surface mount solder joint defects a minimum magnification of X10 is recommended. However, some operators prefer magnifiers to microscopes due to their greater depth of focus, especially when there is a single product to be inspected over a long production run. There are three types of microscope commonly used for different aspects of visual examination:
Stereo/zoom types with a magnification of X10 to X30. These are the most used type for inspecting solder joints and other structural and orientation features. For large boards, a deep-throat stand should be used, in order to allow room for holding the board at an angle so as to be able to view all parts of the assembly. Practice is needed to learn how to keep the board in focus whilst manipulating it through the different viewing angles.
Stereo microscopes fitted with angled mirrors. These permit viewing from three sides of the joint and were developed to reduce the level of operator skill required. This type of microscope is excellent for detailed inspection of individual joints which have already been identified as suspect by other means, or for prototype work, but may not be really acceptable for routine production inspection because of their slower speed.
Measuring microscopes with graticules are used primarily to determine the magnitude of dimensional and displacement errors on the board assembly. A magnification range from X5 to X50 (or X100) is usually sufficient.
Whatever type of microscope is used, correct lighting is essential to give good results. For the more complex tasks, both bright and dark field illumination should be available. In other words, it should be possible to vary separately the levels of illumination on object and background. For some applications, and to avoid glare, the use of polarised light and polarised eyepieces is also recommended.
Despite their considerably higher cost, stereo projectors are worth considering for use as part of indexed component comparator systems on assemblies which can be inspected by viewing at a fixed angle to the plane of the board assembly (typically 60° or 90°). They are less suitable for inspecting SM solder joints:
Both monochrome and colour cameras are now widely used for inspection purposes. Given a good quality optical ‘front end’ and lighting, and a monitor of appropriate resolution and size, well-defined images can be obtained which are less tiring to view.
A significant advantage is that more than one person can view the image, and data can be recorded for later discussion in-house, or with customers or suppliers. This is of benefit to marketing, purchasing and the training department.
There are two occasions where TV cameras give problems:
TV cameras are of course the basis of the automated inspection systems which have been developed in recent years. These are generally very much more complex even than placement vision systems, much of the reason for this lying in the need to build up a detailed three-dimensional image of the assembly.
Think about any solder joint inspection that you may have carried out personally, or seen carried out, and relate this to the greater complexities of a real assembly. Then make a list of what you think would be the requirements for an automated vision system.
The schematic view in Figure 4 shows cameras looking down on a simplistic assembly. Using a single camera above gives just a plan view, as shown in the left-hand inset; the angled camera on the right gets a generally better view, but uncovers more defects only if it is rotated so that all sides of a component can be viewed.
The very simplest systems use a single camera, but (in combination with the right vision analysis software) even these make it possible to identify missing, misplaced or misorientated components.
With information from angled cameras, and the right lighting, it is possible to build up a view not just of assembly defects, but also of solder joint volumes. Some systems have four angled cameras (saves rotating the assembly under test) and take a number of views with lighting from different angles, processing almost 100 images for each area of the board.
Whilst rotating the assembly can be avoided, X-Y positioning is still necessary. Although this also serves to move the board under the AOI head, the main reason is that typical fields of view cover little more than a 25mm circle.
The details of how AOI systems operate vary between makers, but data for analysis is generally collected by combining the movements of part and camera, and using sophisticated variable lighting, which is frequently ‘adaptive’, that is, self-adjusting to give the best contrast and resolution.
Automated vision systems with any degree of sophistication are expensive (£100k+), but continuing improvements in computing power have made it possible to scrutinise a complete board in 10–20s. AOI is now often seriously considered for supporting assembly yields as well as carrying out final inspection. On some lines AOI will also be used to check paste deposits, and to verify before reflow that all components have been placed in the correct position.
IPC-AI-640 made the comment that ‘. . . product complexity and high production rates make for low inspection accuracy. Pressed for time, and facing a surface that is finely detailed and extremely monotonous in colour and topographical features, a human inspector can frequently miss defects. If inspection accuracy were as low as 80%, a not unreasonable figure, and one encountered in the PCB industry, then 20% of inspected products will have defects not caught by an inspector, or defects cited that are not truly defects. Missed defects may cause costly rejections further along in the manufacturing process; improperly cited defects may result in good product being scrapped or frequent and costly material review.’
Many of these more extensive AOI systems will be used on the kind of state-of-the-art product for which improvements in inherent failure rates of components and processes combine with automated 100% testing to allow 100% inspection to provide the optimum quality cost (Figure 5). This contrasts with the traditional processes, where the cost of finding the last fraction of a percent of defective parts is too expensive to be justified by the resulting quality saving.
Automated systems have control benefits also: ‘. . . manual inspection is not ideally suited to electronic data processing as a means of collecting process and quality information. A point coming to be increasingly appreciated by manufacturers is that a process lends itself much more to quick and accurate control if the data from the process – number of pieces to a control point, number and type of defects found, etc. – can be put into a computer file.’
X-ray equipment can prove very useful for checking features of solder joints not visible by other means, e.g. the wetted areas of solder beneath a lead or pad, or connections to a BGA, although the image seen represents the X-ray cross-section of the joint rather than its surface features.
Standalone X-ray systems, particularly those which don’t operate in real time, do not eliminate the need for pass/fail decisions by operators/inspectors but can form a reliable and efficient inspection method, used in conjunction with optical inspection. More recent systems are more capable, and used as the input device to image processing software, X-ray systems are gradually becoming an element of the main production line.
There is a slightly unexpected design issue with X-rays: because the image is a projection through the whole assembly, you have to consider whether parts on the underside might obscure a clear view of the vital joints. Components such as tantalum capacitors are particularly effective screens against X-rays.
In the last section we have looked at three methods of inspection, by operator, using AOI and with X-rays. But how good would each of these be at detecting typical defects on the boards, with solder joints, or with components?
Before you look at the table we drew, try and produce your own table:
There are two main reasons for carrying out electrical testing during manufacture:
Whilst the second of these is crucial from the point of view of the end customer, and tests could therefore be applied at the very end of the process, assemblers need to monitor product functionality throughout the assembly process. This makes it easier to trace faults and cheaper to repair them. The task of reducing the number of faults, which is another aim of assembly houses, has to be left to controlling individual processes and to visual inspection – by the time that electrical test is possible, all the joints have been made.
Three main types of test system are commonly used:
The first two of these are considered in more detail in the sections that follow; the third is dealt with in Reliability and screening.
The term in-circuit test is used to describe the electrical test of individual components after their assembly to an interconnecting substrate, e.g. a printed wiring board. It is not intended to test the function of the circuit, but serves to confirm that all components have been assembled and are correctly interconnected and thereby gives a high level of confidence that the assembly will work to specification.
The test is carried out by using a set of spring loaded probes at circuit node points so that as many components as possible are exercised. The sequence of tests is fairly standard, and is indicated in Figure 6.
The exact mechanism that ICT equipment uses to measure one component in the near presence of many others is beyond the scope of this course. Typically, however, a number of connections are accessed simultaneously: two or four make the measurement, whilst others are used to set the remainder of the circuitry so that it interferes as little as possible with the measurement being made. Even so, the measurement made of a passive component may vary considerably from its nominal value, and this has to be taken into account when setting the targets for the measurement equipment.
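The effect of surrounding components, and of ‘guarding’ them out, can be illustrated with a simple resistor network. The values, and the idealised assumption that guarding removes the parallel path completely, are chosen purely for the sketch.

```python
def parallel(a, b):
    """Resistance of two resistors in parallel."""
    return a * b / (a + b)

# R1 is the component we want to measure; R2 and R3 in series form an
# alternative path through the surrounding circuitry (all in ohms).
R1, R2, R3 = 10_000.0, 4_700.0, 4_700.0

# Plain two-terminal measurement: the parallel path shunts R1.
unguarded = parallel(R1, R2 + R3)

# Guarded measurement: the node joining R2 and R3 is driven to the
# same potential as the measurement node, so (ideally) no current
# flows through the parallel path and R1 is seen on its own.
guarded = R1

error_pct = 100 * (R1 - unguarded) / R1
print(f"unguarded reading: {unguarded:.0f} ohm ({error_pct:.0f}% low)")
```

Real guarding is imperfect, which is one reason measured values can still differ considerably from nominal, as noted above.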
ICT provides instant diagnosis of assembly and component faults, although there are some limitations:
Accessing the nodes creates a considerable probing ‘challenge’. In some cases for surface mount technology circuits this means making a clamshell fixture, where probes are applied from both sides (Figure 7). This is, however, expensive and difficult, and should be avoided, especially where fine-pitch probing is required. Wherever practicable, all test probes should be placed on one side of the board, preferably the low profile side.
Whether single or double sided probing is used, the probe head itself is normally a separately-tooled item, which interfaces with standard multiple sockets through to the test equipment. Typically machines come with fixed numbers of connections per module, but it is possible to retrofit additional modules if required.
The initial system procurement is a major investment, so, in order to avoid the possible future expense of upgrading, assemblers will select a machine with sufficient test node positions to cater for the designs that they expect to make within the foreseeable future.
In-circuit testers are viewed by assemblers not merely as a test function, but as a potent means of providing process monitoring. A great deal of specific information about components and the effect of the process is available, since ICT examines every component on every board. In an ideal world this is used to give feedback data for component specification and procurement, as well as for improving the assembly process.
MDAs, sometimes called analogue in-circuit testers, are similar to in-circuit testers in that they examine the board construction. However, unlike ICTs, which can power up the assembly and individually exercise the integrated circuits, MDAs do not normally apply power to the board being tested.
Whilst this precludes thorough testing of digital integrated circuits and other active devices such as operational amplifiers, making the high-probability assumption that the ICs are good greatly reduces the cost of the ATE and of support activities such as fixturing and programming.
Functional test examines the board for correct operation, verifying that the unit under test (UUT) has the correct ‘transfer function’, that is the correct output response is achieved for a given set of input stimuli. This concept is equally applicable to both digital and analogue assemblies.
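The ‘transfer function’ idea can be sketched in a few lines of code: apply each stimulus, and compare the UUT's response with the expected output. The UUT stand-in and its truth table below are invented for the example; in a real tester the function would drive and read the actual board.

```python
# Stand-in for driving the real board: here, a simple 2-input AND stage.
def uut_response(a, b):
    return a and b

# The expected 'transfer function', expressed as a truth table.
expected = {
    (0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1,
}

def functional_test(respond, table):
    """Return the list of stimuli for which the UUT's output is wrong."""
    return [stim for stim, out in table.items() if respond(*stim) != out]

failures = functional_test(uut_response, expected)
print(failures)   # an empty list means the UUT passes
```

The same pattern applies to analogue assemblies, with the equality check replaced by a tolerance comparison on the measured response.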
There are three basic ways of creating a functional test equipment:
Of these basic approaches, the first has been quite popular as a means of verifying board operation at the actual operating speed of the circuit. Typically used for circuits whose performance goes well beyond that of conventional test equipment, so-called ‘performance testers’ duplicate (or at least emulate) the actual operating environment in which the board will find itself in the final product.
‘Hot bed’ testers are normally one-of-a-kind testers used to verify that the board under test actually operates in the final product. Often, a hot bed tester consists of the entire product except the board that is being tested. The board is inserted into the hot bed tester and if the product appears to operate properly, the board is assumed to be good. Diagnosis of failed boards is almost always performed manually by a skilled technician or debug engineer who thoroughly understands the design and operation of the board.
This category falls into two sections, based on capital cost. At the top end are sophisticated stand-alone systems with configurations similar to in circuit test, with probes making contact with the board fed through a switchable matrix to in-built test equipment. Most ATE systems of this type come complete with sophisticated computer control and software.
Bench-top testers are smaller test systems (hence their name) designed to provide analogue and digital, in-circuit and functional test capability at a modest price. This solution is particularly favoured by smaller assemblers, as offering a cost-effective means of carrying out the more basic tests.
Generally, they consist of a small computer, a general-purpose card cage and a variety of stimulus and measurement boards that fit into the cage. The user can meet specific test requirements by “mixing and matching” from this selection of boards. Bench-top testers are often used for final system test in place of a hot-bed tester.
The third option for creating functional test equipment is certainly the most flexible, and allows the test engineer to do automatically all the things that he or she could do on the bench with an array of test equipment. Typically interfaced to custom probe fixtures, the test instruments are brought together under local computer control and connected by one or more data buses.
Originally a number of types of data bus were developed, each with different features to suit different ATE applications. Defining a data bus unique to your own instrument seemed a good idea at first, but only until users realised that this restricted their choice of peripherals!
For this reason, interconnecting items of test equipment generally relies on standard interfaces. Many of the early data buses were borrowed from other fields, and one of those, the RS-232C serial data communications standard, is still found on lower-end equipment. However, a more common standard is IEEE 488.1. This data bus was defined by Hewlett-Packard (as the Hewlett-Packard Interface Bus, HP-IB) specifically for interfacing programmable measurement instruments, and has since been adopted worldwide as IEC 625. Whatever the name of the standard, all the variants are fundamentally compatible, though with some differences in connectors. They are often known collectively as GPIB, the General-Purpose Interface Bus.
GPIB systems are able to interface with a very wide range of peripherals, from the obvious signal generators, oscilloscopes, pulse generators and spectrum analysers to more esoteric items such as a programmable screwdriver to adjust pre-set components! The limitations of GPIB lie in the space it takes – peripheral instruments often need to be pulled together in a rack in order to contain their spread – and in the system data speed.
Up to 15 instruments, referred to in GPIB terms as ‘devices’, can be connected to one ‘controller’ computer. Cable lengths are limited to no more than 2 metres between devices, with a maximum of about 20 metres of cable in total. Data is transferred at up to 1 Mbyte/s; this rate is restricted by cable length and other issues, and the HS488 proposal to increase it to 8 Mbyte/s has been controversial. Another way round the problem of data transfer is to use the GPIB bus for instrument control and a second interface, such as Ethernet, for data transfer, using a proprietary protocol and data format.
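A small sanity check on a proposed bus layout can make these limits concrete. The function below is purely illustrative (not part of any real GPIB library) and uses the commonly quoted IEEE-488 figures: up to 15 devices, no more than 2 m per cable run, and around 20 m of cable in total.

```python
# Illustrative GPIB topology check; limits are the commonly quoted
# IEEE-488 figures, and the function name is invented for this sketch.
GPIB_MAX_DEVICES = 15
GPIB_MAX_PER_RUN_M = 2.0
GPIB_MAX_TOTAL_M = 20.0

def gpib_setup_ok(cable_lengths_m):
    """cable_lengths_m: one cable run per device, in metres."""
    if len(cable_lengths_m) > GPIB_MAX_DEVICES:
        return False                       # too many devices on the bus
    if any(run > GPIB_MAX_PER_RUN_M for run in cable_lengths_m):
        return False                       # an individual run is too long
    return sum(cable_lengths_m) <= GPIB_MAX_TOTAL_M

print(gpib_setup_ok([1.0, 2.0, 1.5]))      # a modest three-instrument rack
print(gpib_setup_ok([1.5] * 16))           # too many devices
```

Checks like this are why GPIB instruments are usually pulled together into a single rack, as noted earlier.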
Another modular (and faster) ATE format derives from a 1987 consortium of test equipment manufacturers, who started with the existing VME computer bus, and upgraded and extended it to include specifications of module size and performance. The result was the VXI bus (short for ‘VME bus extensions for instrumentation’), which forms the basis of complete, high-speed, automatic test equipment systems in modular form. VXI bus data transfer rates are in the region of 1 Gbyte/s and modules are in a range of standard sizes. Looking to avoid incompatibility with existing equipment where possible, the bus was designed to allow fairly simple interface modules to be used to connect GPIB or VME bus instruments.
The VXI bus has become the basis of many automatic test equipment systems which combine both test fixturing and programmable modules within a single framework. Both enhanced speed and smaller size have given it an advantage over other bus systems.
Whether ‘home brew’, commercial equipment, or assembled from modules, most ATE systems have been designed to make it easy to diagnose failed assemblies. Sometimes the aids are tools such as guided probes, where the conditions of other parts of the circuit can be monitored and guidance provided from electronic simulations of the circuit. In other cases the information may be statistical, advising the test technician which faults with the observed characteristics have occurred most frequently in the past.
Another requirement which should never be overlooked in functional testing is the need to provide power to the unit under test. Typically this is done by having an array of programmable power supplies which can be switched to appropriate pins.
As most board assembly test fixtures operate with either spring contacts or mating connectors, or both, it is a prime requirement that the board can be easily loaded onto the fixture in a repeatable and consistent manner. In order to achieve this, a few basic rules should be followed:
Experience has shown that, for a board assembly with little natural testability and very high board density, the addition of test pads could mean adding as much as 15% board real estate. More typically, however, boards with fewer than 400 nodes may require up to 5% more space.
If the board layout is not designed for testability, then fixturing will be a compromise between what is required and what is practical.
Maintenance and operating costs continue over the life of the product and the tester. Probe and other handling damage requires maintenance, but often a more significant cost is that each engineering change made to an assembly may require changes to both test fixturing and test program. At any point in time a manufacturer might have tens or even hundreds of board types in production, each incurring retooling costs at different stages of their product cycle.
Some general comments on costs:
Both verification and diagnostic test programs must be changed when the board design changes. Costs here include test engineer time to determine and document changes, and programmer time to implement and verify/debug the resulting program changes.
Generally, verification test programs and diagnostic programs that find operational faults are more expensive than test programs for testers that diagnose defective parts and assembly workmanship errors.
A key feature of In-Circuit Test is that the generation of test programmes can be very much automated. Testing for shorts and for analogue component values can be done from programmes created automatically from the net list by test generators. Many of these will do sufficient circuit analysis to create the optimal test, setting test limits and deciding how to use surrounding probes to create the best level of isolation.
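A minimal sketch of how a shorts test might be generated from a net list is shown below. The nets, pin names and output wording are invented for the example; commercial test generators do far more analysis, as noted above.

```python
# Toy net list: net name -> list of component pins on that net.
# All names here are invented for the illustration.
netlist = {
    "VCC":  ["U1.14", "R1.1", "C1.1"],
    "GND":  ["U1.7",  "C1.2"],
    "NET3": ["U1.2",  "R1.2"],
}

def generate_shorts_tests(nets):
    """Emit one continuity check per pair of distinct nets: the
    resistance between them should be high (i.e. no short)."""
    names = sorted(nets)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]

for a, b in generate_shorts_tests(netlist):
    print(f"check {a} <-> {b}: expect open circuit")
```

Even this naive pairwise approach shows why the test programme can be derived mechanically from CAD data; real generators also prune redundant pairs and set measurement limits per component.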
An option on some equipment is what is referred to as ‘vectorless test’. The term means that no ‘test vectors’ are required: in other words, there is no need to develop test patterns, the sets of test inputs which are applied to the product in order to identify faults and distinguish correct system behaviour from incorrect. This is particularly helpful in the case of mixed-signal circuits, where automatic generation of test vectors is less well established than for digital integrated circuits.
One vectorless test strategy² is indicated in Figure 8. The test involves making contact to pairs of pins, and the sequence is:
2 The description given is of the MultiScan™ system used by Teradyne’s Spectrum 8800 series; other test system vendors will have equivalent strategies.
Whilst no information on the function of the device under test is required, pin assignations are still needed. However, this information is available from the CAD system. The greater advantage from the manufacturing point of view is that, as with ICT for passive components, any problem found is traceable to specific pins, and is therefore much easier to deal with.
So far, we have implicitly assumed test solutions which have a test driver/receiver channel on every pin, or the equivalent provided by some method of multiplexing. Vectorless tests are also available using inductive or capacitive probes to stimulate the package. Again, less device-specific information is needed than would be the case for a full exercising of the device under test.
Our aim, whether by inspection or test, is to detect and remove all faulty parts. However, as was hinted at in Figure 5, it is not always possible to detect all faults at an economic price. Nor, in fact, is it even always possible to detect defects with specific tests.
A particularly good example of this is the decoupling capacitor. Where do decoupling capacitors go? The answer, of course, is that they are fitted between power and ground, at points on the board dictated by the need to decouple specific components. They may be drawn in the top corner of the circuit diagram, in order to improve clarity, but they won’t work if you lay them out that way! From the electrical test point of view, however, it is almost impossible to measure individual capacitors, because of the presence of so many parallel paths. In-Circuit Test won’t detect the omission, and functional test rarely has the time to look in detail at all the issues, yet the absence of just one capacitor may cause a circuit to fail in the presence of an interference field.
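The arithmetic behind this is easy to see. In the sketch below (illustrative values only), removing one of fifty parallel 100 nF capacitors changes the capacitance measured across the rails by only 2%, well inside normal part tolerance:

```python
# Sketch: why one missing decoupling capacitor is invisible to an in-circuit
# measurement across the power rails. Illustrative values only.

n_caps = 50          # decoupling capacitors in parallel between VCC and GND
c_each = 100e-9      # 100 nF each

c_all     = n_caps * c_each        # parallel capacitances simply add
c_missing = (n_caps - 1) * c_each  # one capacitor omitted

change = (c_all - c_missing) / c_all
print(f"relative change: {change:.1%}")   # 2.0% -- far below a typical
                                          # +/-10% or +/-20% part tolerance
```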
The answer to the inevitable question of how to ensure product quality is of course to supplement electrical test by visual inspection, which (especially when automated) will detect missing components with ease.
The coverage that you get with any test depends firstly on the type of test technique, and secondly on the time allowed.
The overall situation is summarised in Figure 9. The outer circle represents the totality of defects present on an assembly; the smaller circles represent schematically the test coverage of visual inspection, In-Circuit Test and functional test. Their relative positioning indicates the typical situation that:
Bearing in mind that the inspection and test costs for the methods are different, ICT being particularly effective and low cost, the challenge is to select the best balance of tests that will pull out as many defects as possible for the lowest incurred cost.
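The trade-off can be pictured with a toy model. In the sketch below, each test method covers an invented subset of a notional 100-defect population, with an invented relative cost; the combined coverage of a strategy is the union of the individual coverages:

```python
# Sketch of the Figure 9 idea: each test covers a subset of the total defect
# population, and the aim is high combined coverage per unit cost. The defect
# subsets and cost figures below are invented for illustration.

defects = set(range(100))  # the totality of defects on an assembly

coverage = {                        # (defects found, relative cost per board)
    "visual":     (set(range(0, 55)),  1.0),  # strong on workmanship faults
    "ict":        (set(range(30, 80)), 0.5),  # cheap; shorts, component values
    "functional": (set(range(60, 95)), 3.0),  # catches dynamic faults
}

def evaluate(strategy):
    """Return (fraction of all defects found, total relative cost)."""
    found = set().union(*(coverage[t][0] for t in strategy))
    cost = sum(coverage[t][1] for t in strategy)
    return len(found) / len(defects), cost

cov, cost = evaluate(["ict", "visual"])
print(f"coverage {cov:.0%} at relative cost {cost}")  # coverage 80% at relative cost 1.5
```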
‘Get it right early!’ It is never too early to commence testing. Inspection and testing should start at component receipt (goods-inward), with subsequent monitoring at each process stage through production, to end-of-line (final product). Out-of-box audit inspection is also frequently carried out as a final monitoring operation. Figure 1 is a reminder of the inspection and test stages through which an assembly passes.
The Test and Quality Control ‘gates’ should be logically placed to collect and evaluate process defect data at the optimum places and time.
Open gate – a test or quality control (QC) sampling gate (including all requisite work instructions, tools, jigs etc. and knowledge) which exists but is not used.
Closed gate – a test or quality control (QC) sampling gate which is currently used (that is, samples product and collects defect data).
Quality Control gates may be open or closed depending on process performance. The general rule is that if the process elements ‘downstream’ of the gate are determined to be functioning consistently correctly (that is, producing quality product) then the gate should be opened.
100% testing is not always affordable or necessary, and parts and processes are frequently ‘sampled’; that is, only a small (but statistically significant) number of parts is examined. The defect rates observed can be related to the whole population using published standards, and the results used to identify trends in the general quality situation. This may be supplemented by ‘patrol inspection’, where inspection personnel are trained to perform short tours of specific locations by prescribed routes and record the number of potential errors or defects seen.
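The relationship between sample size and the confidence that can be claimed about the whole population can be sketched as follows (a standard binomial bound, not taken from any particular published sampling standard):

```python
# Sketch: relating a zero-defect sample to the whole population. If n parts
# are sampled and none is defective, a true defect rate p would have gone
# undetected with probability (1 - p)^n; setting this to alpha and solving
# gives the upper bound below (the basis of the well-known 'rule of three',
# p below roughly 3/n at 95% confidence).

def defect_rate_upper_bound(n, confidence=0.95):
    """Upper confidence bound on the defect rate after n defect-free samples."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n)

for n in (30, 100, 300):
    print(f"n={n:4d}: defect rate below {defect_rate_upper_bound(n):.2%}")
```

As expected, quadrupling the sample size roughly quarters the defect rate that can be ruled out, which is why the decision to reduce inspection levels needs a substantial body of data behind it.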
As trends develop, it may be deemed acceptable to reduce the level of inspection/test, but confidence must be at a high level for this major decision to be made.
Optimum determination, positioning and effective use of all QC gates should be decided by regular analysis, discussion and agreement at manufacturing department meetings.
The review should also generate suggested corrective actions, agree actions and time-scales, and set performance indicators for areas of special investigation.
The traditional role of inspection and test was to certify product conformance to specification, with fault diagnosis a secondary function. The emphasis has now shifted to enable the process to be corrected as soon as a fault occurs, rather than identifying it at a later time and having to correct and rework whole batches.
In high-volume production environments, the tasks of inspection and test take place on-line, allowing real-time collection and analysis of production performance data. If faults exceed pre-set levels, alarms may be given and the process stopped. In lower volume environments, the benefit of collecting and analysing production performance is still such a significant factor that the task cannot be neglected, whether it is conducted manually or automatically. Most modern inspection and test equipment is designed to be integrated to enable the ready collection and analysis of this data.
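A minimal sketch of such a pre-set alarm level, using a simple p-chart-style control limit (the fault rates and sample size below are invented for illustration):

```python
# Sketch: the kind of on-line check a test station can run. The observed
# fault fraction is compared with a control limit derived from the
# historical fault rate (a simple p-chart upper limit,
# p + 3*sqrt(p*(1-p)/n)). All figures are invented for illustration.

import math

def upper_control_limit(p_bar, sample_size):
    """3-sigma upper limit for the fault fraction of a sample."""
    return p_bar + 3.0 * math.sqrt(p_bar * (1.0 - p_bar) / sample_size)

p_bar = 0.02          # historical fraction of boards failing
ucl = upper_control_limit(p_bar, sample_size=200)   # about 0.05

observed = 0.06       # today's sample: 12 failures in 200 boards
if observed > ucl:
    print("ALARM: fault rate above control limit - stop and investigate")
```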
The advantages of computerised systems over manual systems are considerable, and can radically reduce the costs and improve the overall efficiency:
Accuracy: Information captured at source, as it is created, is intrinsically more accurate. Manual methods that rely on filling in record or status sheets are far more variable, and open to abuse and carelessness.
Speed: Information transfer is usually instantaneous. Collection and subsequent analysis of data does not have to wait until the end of the week, or even of the shift.
Ease of use: Computerised database systems allow ready analysis of information. Analysis tasks which would not even be contemplated in a manual operation can be readily performed. This gives both the vital trend data and the specific information required to focus action on the causes of defects.
During your study of this topic you will have looked at a number of possible faults and examined different ways of screening them. But how capable are the methods?
Before you turn to the next section, we should like you to complete the table below with your assessment of the effectiveness of each technique for detecting a fault of the type shown. Indicate the relative effectiveness of each by a number (1 = high … 3 = low), or use ‘n/a’ to show that the technique is inapplicable at that stage.