The public today expects medical devices to perform to a high level. At the same time, the medical device business has become highly regulated. For devices to succeed in this environment, their production testing must combine compliant procedures with an excellent technical strategy.
Delivering a perfectly functioning device to the customer from a manufacturing process with less than perfect yields is the goal of electronic testing. The quality of the end product is given by Q = Y^(1-T), where Q is the proportion of devices passing the specification, Y is the inherent yield of the manufacturing process with respect to the specification and T is the test coverage.
Test coverage is a general term for the proportion of possible defects detected by any test scheme. Note that a perfect manufacturing process using perfect components does not need electronic tests. The partnership of excellent process control and high test coverage is required because test coverage of 100% is impossible.
The equation assumes a black box process and a black box test strategy where neither ‘knows’ about the internal details of the other. In practice, medical device manufacturing processes are tightly controlled and have high yields. Tests for medical devices are high coverage, but not perfect. Improving the quality of medical devices delivered to the patient requires exploring the equation. A successful test strategy must match test coverage to parameters that will reveal the defects most likely to occur in the manufacturing process.
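To make the trade-off concrete, the relationship can be evaluated for illustrative numbers (a sketch only; the yield and coverage figures below are invented, not drawn from any real process):

# Quality relationship Q = Y^(1 - T), evaluated for illustrative figures.
def delivered_quality(process_yield, test_coverage):
    """Fraction of shipped devices expected to meet specification."""
    return process_yield ** (1.0 - test_coverage)

Y, T = 0.95, 0.90   # assumed 95% process yield and 90% test coverage
print(f"Delivered quality: {delivered_quality(Y, T):.4f}")      # about 0.9949
print(f"Escaping defects:  {1 - delivered_quality(Y, T):.2%}")  # about 0.51%

Raising either the process yield or the test coverage pushes the delivered quality towards one, which is the partnership the equation describes.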
The simple view that each medical device has a set of functional specifications and that each device produced must be tested to those specifications is both ineffective and inefficient. An exhaustive test for the complete specifications of an implantable cardioverter defibrillator (ICD), for example, would take significant energy from the device’s battery and reduce its useful life. Exhaustive testing to functional specifications is best left to the design validation tests that are a required part of device certification.
The design team is a key source of knowledge for the test strategy, since they can shed light on the black box. Design input helps structure the test to match the hardware. It is also important to align test execution to device operation, preventing improper or potentially damaging test conditions from occurring. A design failure modes, effects and criticality analysis (FMECA) is the source of test requirements that mitigate risks inherent in the device.
REALISTIC STRATEGY
A realistic and effective test strategy requires several sets of test requirements based on component and manufacturing process specifications as well as critical functional specifications of the device. Test requirements describe and delineate setup conditions, stimuli and expected response parameters for the individual experiments that constitute test steps. Test requirements are based on an understanding, or model, of real physical phenomena. The model is never perfect, so coverage measured against the test requirements never perfectly reflects coverage of real defects. Even a test that covers 100% of the requirements does not deliver a perfect product.
A set of test requirements is needed because testing occurs repeatedly at different stages of manufacture, from component acceptance to final assembly. Each stage has a different set of specifications as well as different physical constraints on the test process.
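As a sketch of how such requirements might be captured inside an automated test system (the field names and values here are purely illustrative, not taken from any standard or actual device), each test step can be recorded with its setup conditions, stimulus and expected response window:

from dataclasses import dataclass, field

@dataclass
class TestRequirement:
    """One test step: setup conditions, a stimulus and the response window that defines a pass."""
    name: str
    stage: str                                  # e.g. "component", "board", "final assembly"
    setup: dict = field(default_factory=dict)   # setup conditions for the experiment
    stimulus: str = ""                          # description of the applied stimulus
    response_min: float = 0.0                   # lower limit of the expected response
    response_max: float = 0.0                   # upper limit of the expected response
    units: str = ""

    def passes(self, measured: float) -> bool:
        return self.response_min <= measured <= self.response_max

# Illustrative instance only (invented limits).
req = TestRequirement(
    name="standby current drain",
    stage="final assembly",
    setup={"mode": "quiescent"},
    stimulus="none",
    response_min=0.0,
    response_max=12e-6,
    units="A",
)
print(req.passes(8e-6))   # True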
MICROPROCESSOR TESTING
An automatic ICD has a complex microcomputer at its core. Testing the millions of transistors in a custom microprocessor requires intimate access to their interconnections. Including built-in self-test (BIST) improves the opportunity for test access, but testing the microprocessor thoroughly is most effectively performed before the chip is installed on the printed circuit board of the ICD. Common models for integrated circuit tests are based on logic gate functions and their interconnections. Several test methods are available in industry. A highly effective test should use several of these methods with high coverage for each to compensate for the inaccuracies of each model in describing the real semiconductor physics.
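The gate-level fault model can be illustrated with a minimal sketch: an invented two-gate network, a single stuck-at fault injected on an internal node, and a search for the input vectors that distinguish the faulty circuit from the good one (real fault simulation works on far larger netlists, but the principle is the same):

from itertools import product

def nand(a, b):
    return int(not (a and b))

def circuit(a, b, c, stuck_node=None, stuck_value=None):
    """Tiny invented network: y = NAND(NAND(a, b), c). A fault pins one node to a constant."""
    n1 = nand(a, b)
    if stuck_node == "n1":
        n1 = stuck_value
    return nand(n1, c)

# Input vectors that detect 'internal node n1 stuck-at-0'.
detecting = [v for v in product((0, 1), repeat=3)
             if circuit(*v) != circuit(*v, stuck_node="n1", stuck_value=0)]
print(detecting)   # [(0, 0, 1), (0, 1, 1), (1, 0, 1)]

Coverage of a test set is then the fraction of modelled faults for which at least one applied vector appears in such a detecting list.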
Once the components are attached to the printed circuit board, the focus should shift to the mounting and interconnection process. The most effective model at this level includes the common defects of assembly (for example, wrong component, missing component, open interconnect, shorted interconnect) rather than component defects. Physical access to the printed circuit board allows the direct measurement of components or small groups of components using modern test equipment. Direct measurement means high diagnostic resolution: no additional troubleshooting is required when the test pinpoints the defective component or interconnection.
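A hedged sketch of how one in-circuit measurement might be interpreted against that defect model (the thresholds are placeholders chosen for the illustration, not real test limits):

def classify_resistor(measured_ohms, nominal_ohms, tolerance=0.05):
    """Rough interpretation of a single in-circuit resistance measurement (illustrative limits)."""
    if measured_ohms < 1.0:
        return "shorted interconnect / solder bridge"
    if measured_ohms > 100 * nominal_ohms:
        return "open interconnect or missing component"
    if abs(measured_ohms - nominal_ohms) / nominal_ohms > tolerance:
        return "wrong or out-of-tolerance component"
    return "pass"

print(classify_resistor(9.8e3, 10e3))   # pass
print(classify_resistor(0.2, 10e3))     # shorted interconnect / solder bridge

Because the measurement is tied to a specific component location, a failing verdict already names the repair, which is the diagnostic resolution referred to above.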
Most ICD printed circuit boards contain sufficient circuitry to allow functional testing, but functional testing is an indirect way of finding common manufacturing defects, and additional troubleshooting is usually required to identify specific repairs. Functional testing may be necessary to establish important functions, such as circuit trim or battery current drain, that are influenced by the parameters of many components.
The electronic design of complex circuits anticipates variation in component values, but circuits are seldom designed to operate with the improbable combination of extreme specification values known as ‘worst case’. More commonly, a statistically likely variation of values is assumed. It is therefore possible for a correctly designed circuit assembled from in-specification components to fall outside its functional specifications. Functional testing is necessary, and leaving all of it to the final assembled device drives up the rework cost. Implantable medical devices are ordinarily hermetically sealed (welded titanium cans for ICDs), which makes repair difficult.
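A simple Monte Carlo sketch makes the point; the divider, tolerances and functional window below are invented purely for the illustration:

import random

def divider_output(vin=3.0, r1_nom=10e3, r2_nom=10e3, tol=0.01):
    """Output of a resistive divider with each resistor drawn independently from a +/-1% spread."""
    r1 = r1_nom * (1 + random.uniform(-tol, tol))
    r2 = r2_nom * (1 + random.uniform(-tol, tol))
    return vin * r2 / (r1 + r2)

samples = [divider_output() for _ in range(100_000)]
low, high = 1.49, 1.51   # invented functional window, tighter than the worst-case part spread
outside = sum(1 for v in samples if not (low <= v <= high)) / len(samples)
print(f"Boards built from in-tolerance parts that still miss the window: {outside:.1%}")

Every part is within its own specification, yet a measurable fraction of assemblies falls outside the functional window, which is why some functional testing cannot be avoided.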
Limited access to the complex circuitry in a finished device makes functional testing an appropriate choice. At this stage in the manufacture, the number of defects has been greatly reduced by eliminating defective components and repairing defective interconnections found by previous testing. For reasons previously mentioned, the functional testing needs to be limited. The focus for device level functional testing shifts from diagnostic resolution to highest efficiency of coverage. That is, stimuli are set up which produce responses that involve major sections of the circuitry, and as much data is gathered from each response as possible.
For example, consider two key parameters of a defibrillator shock output: pulse width and amplitude. Together they determine the energy delivered to the heart, but they are commonly individually programmable and individually specified. A specification-based test might charge and fire several pulses at varying pulse widths, measuring the output with a timing instrument. A second series of tests would then charge and fire several pulses at varying amplitudes, measuring the response with a voltmeter.
An effective test strategy combines the two by varying the independent parameters simultaneously and measuring the output pulses with both instruments. Half the number of shocks drains half the energy from the battery. An effective strategy adds more layers by commanding the shocks through different mechanisms (induced shock, commanded stat shock and so on) to exercise diverse control or telemetry circuits at the same time.
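A sketch of that combined loop is shown below; the test-system interface here is a stand-in class invented for the illustration, and the rectangular-pulse energy formula is a simplification of a real truncated capacitive discharge:

class SimulatedTester:
    """Stand-in for a real test-system interface; echoes programmed values with small errors."""
    def program_shock(self, width_ms, amplitude_v):
        self._w, self._a = width_ms, amplitude_v
    def trigger(self, mechanism):
        self._mechanism = mechanism
    def measure_pulse_width(self):      # would come from a timing instrument
        return self._w * 1.002
    def measure_amplitude(self):        # would come from a voltmeter or digitiser
        return self._a * 0.998

def check_shock(tester, width_ms, amplitude_v, mechanism, load_ohms=50.0):
    """One shock yields a width reading, an amplitude reading and a derived energy figure."""
    tester.program_shock(width_ms, amplitude_v)
    tester.trigger(mechanism)
    width = tester.measure_pulse_width()
    amplitude = tester.measure_amplitude()
    energy_j = amplitude**2 * (width / 1000.0) / load_ohms   # crude rectangular-pulse estimate
    return width, amplitude, energy_j

tester = SimulatedTester()
for width, amplitude, mechanism in [(4.0, 100.0, "induced"),
                                    (8.0, 400.0, "commanded stat"),
                                    (12.0, 750.0, "induced")]:
    print(check_shock(tester, width, amplitude, mechanism))

Each shock in the loop exercises a different trigger mechanism while feeding both instruments, so the same battery drain buys width, amplitude and energy data at once.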
The test strategy is a product-level exercise coordinating the entire test set for all levels of manufacturing. It selects the defect models used at each level to optimise the correlation with the physical defects most likely to occur. Individual test goals are set by the strategy to ensure overall effectiveness. It is impossible to calculate an overall test coverage because the individual goals are measured against different models: fractions cannot be added unless a common denominator can be found, and here there is none.
IMPLEMENTING AUTOMATED TESTS
An automated tester is an electronic system consisting of instruments, a computer and the software to control test execution. For complex electronic devices, an intervening interface test adapter (ITA) is usually required, because medical devices are designed to interface with the body rather than with standard instrument buses. Test system design varies from industry-standard commercial testers to custom designed systems. The capabilities of the test systems employed at each level of manufacture determine the options available in the test strategy. On the other hand, they place constraints on the test engineer, restricting implementation methods.
The ability to create a stimulus or accurately measure a response challenges commercially available testers in the ICD test environment. ICDs are electronic devices characterised by extremes of voltage and current. A shock output sufficient to defibrillate a human heart can be about a thousand volts and several amperes. At the other extreme, the sensing circuitry detecting intrinsic heart waves works in millivolts, and the standby current of the battery-powered device is in the low microamps or nanoamps. Selecting instruments that can span these extremes often means merging disparate bus and programming standards into a single system. The software languages and computer interfaces that control these systems are, of necessity, tailored.
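One common way of pulling instruments on different buses into a single program is a VISA layer. A minimal sketch follows, assuming PyVISA is installed; the resource addresses and SCPI commands are placeholders for whatever the actual instruments accept:

import pyvisa

rm = pyvisa.ResourceManager()

# Placeholder addresses: one GPIB instrument and one LAN-connected instrument.
dmm = rm.open_resource("GPIB0::22::INSTR")
scope = rm.open_resource("TCPIP0::192.168.1.50::INSTR")

print(dmm.query("*IDN?"))      # identification query common to SCPI instruments
print(scope.query("*IDN?"))

dmm.write("CONF:VOLT:DC 10")   # placeholder SCPI command: select a DC voltage range
print(dmm.query("READ?"))      # take and return one reading

for resource in (dmm, scope):
    resource.close()
rm.close()

The same pattern extends to serial, USB and PXI instruments, which is usually where the tailoring of languages and interfaces mentioned above begins.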
Implementing automated tests is a matter of concurrent hardware and software design. The test set for a particular place in the manufacturing flow (for example, a printed circuit board test) is defined by the test specifications allocated for that place by the test strategy. Test specifications are then broken down into software and hardware specifications for the test system software and interface hardware, respectively.
There are some system specifications that derive from the production test environment rather than the medical device design. For example, compliance with security and data integrity aspects of 21 CFR Part 11 introduces a distinct set of requirements. Coding, mechanical design and electrical design proceed from all these specs to create the elements of the test system. In a mature environment, where several medical devices have been designed, manufactured and tested, common blocks of hardware and software are re-used to simplify the process. A part 11-compliant user interface or test results database is likely to be re-used, if possible.
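As a sketch of one such re-usable element, a test-results log can be made tamper-evident by chaining each record to the previous one with a hash; this illustrates the kind of data-integrity measure Part 11 is concerned with, not a compliance recipe, and every field name below is invented:

import hashlib
import json
from datetime import datetime, timezone

def append_result(log, operator, test_name, value, verdict):
    """Append one record, chained to the previous entry so later alteration is detectable."""
    prev_hash = log[-1]["hash"] if log else ""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator": operator,
        "test": test_name,
        "value": value,
        "verdict": verdict,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)
    return record

results = []
append_result(results, "operator_01", "standby current drain", 8.2e-6, "pass")
append_result(results, "operator_01", "shock amplitude", 748.5, "pass")
print(results[-1]["hash"])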
VALIDATION PROCESS
Once all the pieces are in place, the test system (including hardware and software) must be validated. Once again, the easy approach of simply validating the results against the requirements is ineffective and inefficient.
Excellent detailed processes exist for validating software and hardware. The structure and approach used to apply these methods is key, and the constraints of a test system offer both a challenge and an opportunity in tailoring common practice. The entire test system could be validated as a whole.

However, system complexity makes that a daunting task for most automated test systems. A divide-and-conquer validation approach is more effective and efficient: consider the test system first as a set of blocks for unit testing. An integration testing exercise that involves the entire system is necessary, but it should be the culmination of a coordinated validation protocol, not the entire plan.
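As a sketch of the block-by-block approach, a small stand-alone software block (an invented limit-checking routine here) can carry unit tests that validate it independently of the rest of the system:

import unittest

def within_limits(value, low, high):
    """Re-usable software block: pass/fail verdict for one measured value against its limits."""
    return low <= value <= high

class WithinLimitsTests(unittest.TestCase):
    def test_inside_window(self):
        self.assertTrue(within_limits(5.0, 4.5, 5.5))

    def test_outside_window(self):
        self.assertFalse(within_limits(4.4, 4.5, 5.5))
        self.assertFalse(within_limits(5.6, 4.5, 5.5))

    def test_boundaries_are_inclusive(self):
        self.assertTrue(within_limits(4.5, 4.5, 5.5))
        self.assertTrue(within_limits(5.5, 4.5, 5.5))

if __name__ == "__main__":
    unittest.main()

Integration testing then concentrates on the interfaces between already-validated blocks rather than re-proving each block from scratch.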
FUNCTIONAL BLOCKS
Instruments and the operating system are examples of blocks that are delivered as a complete functional unit by a vendor. These need validation for the intended use and are often re-used from test system to test system.
Begin by defining the vendor’s specifications or subset of vendor’s specifications that apply to this application. If this block is to be re-used, broaden the selection of specifications to cover the full range of use anticipated and document the rationale for selection.
Work with the vendor to identify any evidence of validation to these specifications they may be able to provide. As a last resort, conduct testing to verify the performance of the block against any uncovered portions of the specifications. Document and review the results. Finally, review the vendor’s capability to make sure its development practices are sound, particularly in configuration management, change control, design verification, production testing and calibration.
There is a great opportunity to identify custom-developed hardware or software blocks for stand-alone validation. A robust specification of each block’s functions and interfaces to other blocks enables re-use without repeating the validation. Configuration management of the stand-alone blocks is crucial to maintaining a validated status. Inside a stand-alone block, the validation procedures can be the same as for any custom hardware or software. Test applications for a software block are usually unique because of their dependence on product features and models.
Interface test adapters are hardware blocks with the same characteristic. Begin with requirements-based testing. Use a risk-based approach to select from the large set of software or hardware testing techniques available. Test systems operate in a highly constrained environment that reduces the risk of occurrence and increases the likelihood of detection of certain classes of faults. For example, test instruments have a tightly defined software interface and often contain extensive error checking on input messages. The range of message possibilities to be tested can be reduced to match the instrument capabilities if error handling is adequately verified.
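A short sketch of leaning on that instrument-side error checking, assuming a SCPI instrument reached through a session object like the PyVISA resources above (the exact reply format varies by instrument, so the parsing here is an assumption):

def drain_error_queue(instrument):
    """Read the SCPI error queue until it reports code 0 ('no error'); raise if anything was queued."""
    queued = []
    while True:
        reply = instrument.query("SYST:ERR?")   # typical reply: '0,"No error"'
        code = int(reply.split(",")[0])
        if code == 0:
            break
        queued.append(reply.strip())
    if queued:
        raise RuntimeError(f"Instrument rejected a command: {queued}")

# Usage after each configuration step, with 'dmm' open as in the earlier sketch:
# dmm.write("CONF:VOLT:DC 10")
# drain_error_queue(dmm)

If the instrument’s own error handling has been verified in this way, the validation effort does not need to exercise every malformed message the test software could conceivably send.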
HIGH-LEVEL TEST STRATEGY
A technical test strategy applied at the highest level of implantable medical device development coordinates the production testing of components, subassemblies and the final product. Specific requirements for the hardware and software that implement each test level flow from the measurement methods and goals set forth in the test strategy.
Validation of each product test system begins with these requirements and divides the system into blocks without losing traceability to the original plan. Validation methods are customised using risk-based analysis to fit each block, and each test system is integration tested. The test strategy provides a high-level reference for technical reviews of the finished system and its validation that keep overall effectiveness in view amid the details.