Computer Validation in Pharmaceutical Industry


Thread Starter

V.C. Mehta

Computerised System Validation - An Understanding and Approach

The pharmaceutical industry is facing ever-tightening regulatory constraints, increased pressure to shorten new product development times, escalating R&D costs, shortening product life cycles, and reforms from governments eager to constrain healthcare budgets. The race to discover, develop, and market new drugs is a fiercely competitive one, requiring significant investments in both time and money.
Automation remains critical to optimizing the drug manufacturing process. With the latest advances in automation, information, and business systems, pharmaceutical enterprise architectures continue to evolve.
Computer technology has changed the framework of business in every industry, transforming the way businesses operate internally as well as how they interface with customers and external businesses. Many companies today see computer technology as vital to delivering products and services to the marketplace. The pharmaceutical industry is one of many transformed by computers and software. Computer technology fulfills automation and control functions as well as facilitating information management.
It is in this area that the rapid pace of computer technology has presented challenges.
In general, GAMP (Good Automated Manufacturing Practice) recognizes automated systems in a broad sense, including automated manufacturing equipment, control systems, automated laboratory systems, manufacturing execution systems, and computers running laboratory or database systems. An automated system generally consists of hardware, software, and network components, together with the controlled functions and associated documentation. GAMP has recently released its latest guideline, GAMP 4, which specifically encompasses 21 CFR Part 11 issues as well.
This paper does not cover all of this; it briefly attempts to address basic issues pertaining to the following systems.
a) Programmable Logic Controller Validation
b) Computer System Validation
c) Software Validation



Programmable Logic Controller (PLC) Validation
Up until the not-too-distant past, industrial equipment was generally controlled with relays. Electricians would wire them together, people would push buttons, and machines would operate. During the '60s or thereabouts, companies began to implement some of the functionality of the relays in software. In doing so, they invented a programming language called Ladder Logic. It was designed so electricians of that day could understand it; it mirrored the nomenclature in use at the time. The PLC, though, is a general-purpose computer dedicated to input and output (I/O) functions: it runs simple algorithms and is dedicated to control applications.
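To make the idea concrete, the scan cycle that every PLC repeats (read the inputs, evaluate the logic, write the outputs) can be sketched in a few lines of Python. This is only an illustration: the tag names and the single interlock rung are hypothetical, not taken from any real system.

# Minimal sketch of a PLC scan cycle; the two inputs, one output,
# and the interlock rule are hypothetical.

def read_inputs():
    # A real PLC samples its input card here; we stub fixed values.
    return {"start_pb": True, "high_level_sw": False}

def evaluate_logic(inputs, state):
    # Ladder-logic equivalent of one rung: run the pump only while
    # the start push-button is held and the tank is not at high level.
    state["pump_run"] = inputs["start_pb"] and not inputs["high_level_sw"]
    return state

def write_outputs(state):
    # A real PLC energizes its output card here.
    print("pump_run =", state["pump_run"])

state = {"pump_run": False}
for _ in range(3):  # a real PLC repeats this scan indefinitely
    state = evaluate_logic(read_inputs(), state)
    write_outputs(state)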
My first experience with this was probably typical of what many people have experienced. Engineering developed a system, and the specifications were not available. The equipment was there for validation. The system needed to be validated right away, but there was a shortage of time and resources. There was also no data available, and my team had to persuade and cajole the equipment vendor to provide some semblance of design documentation. Most of the time was spent 'reading' the program and 'deciphering' it against the process needs. Further time was spent translating this 'data' into validation documentation.
PLC validation can at times be tedious or frustrating, but it doesn't have to be hard or painful. If you organize your validation program and have the proper things in place before you get started, you can get through it. And, when you're done, you can actually have a validation package that people can look at, understand, and get something from. It can be done.
What Makes PLC Validation Unique?

There are some differences between PLC and "standard" validation, and these differences apply equally well to software validation in general. What makes software validation so different is the complete absence of physical components. Normally, with equipment or machines, you can walk out, put your hands on the parts, identify them physically, and measure them by some means.

That isn't the case with software, including PLC software. You can never actually touch, see, sense, or perceive the true software. Any time you're looking at a screen display of the software, or a printout of the software, all you are seeing is an interpretation, by another device, of what the software actually looks like.

So the big challenge in PLC validation, and validation in general, is to somehow give reality to these components. Your challenge as a validation professional is to document that these components are valid and responsive to their users' needs.

Another significant difference in the validation of PLCs is that validation starts at the beginning of development. With traditional equipment, it is conceivable that the engineering department could design something, the facilities department could buy it and install it, and then the same could be validated.

That won't work with PLC validation. Validation has to be part of the program from the beginning, or you're going to have a huge amount of work at the end. And you probably won't be happy with the results.

Even a simple PLC program and system is probably more complex than the most complex piece of equipment you'll ever validate. A tablet press may have a few thousand moving components, but a PLC program with many thousands of lines of Ladder Logic - if you consider each line to be a component - is immensely more complex.

That is another challenge for the people in validation. You can't be expected to look at every component of the PLC. It certainly isn't done when you look at a piece of equipment. With equipment, you look at its functionality, and that's what we need to do with PLCs. We need to look at the functionality of the software.

Don't forget the expertise of the developer. The person who programs the PLC knows very little about the process, current Good Manufacturing Practice (cGMP), or anything else that matters to validation personnel. The developer is an engineer, an electrician, or a software specialist working from a set of specifications, and as that software is written, points are going to be missed, some of them very important.

The Wrong Way to do PLC Validation

Though awareness has started spreading in the industry, most of the time there is no link between DQ (Design Qualification) and IQ (Installation Qualification), or, as is more often the case, no DQ is available for the PLC system at all!

The typical development cycle goes like this. The end user asks the machine vendor to provide an automated machine, without defining what is wanted in the automation portion, and wants it real fast. The machine vendor in turn passes generic information to its automation vendor to automate certain parameters. This vendor is typically a seller of some MNC-branded PLC hardware and software. The job is entrusted to a local system integrator with limited or no exposure to cGMP norms or Good Documentation Practices (GDP).
The system, which doesn't perform as desired, gets tweaked during the installation/commissioning phase to make it do what the user originally wanted. Then the system gets handed over to validation, who are told to get it validated "real quick" to take care of production needs.
When you do it the wrong way, you discover problems during validation. That is the worst possible time to discover something amiss because, as usual, the validation function sits at the tail end of the timeline and everybody is interested in getting the job over with and getting production started.
Changes are then required, because you discover problems and, worst of all, defects stay hidden in the system. If the right development work hasn't been done up front, you can't avoid the fact that there are going to be hidden problems that you won't find until much further down the road. That is an inconvenience for validation, but a potential disaster for the business (especially if it leads to a recall).

Worst of all, the system will fail to meet the user’s needs. The opportunity is there, during the development phase, to make these software control systems meet the user’s needs. It only requires a little bit of paperwork up front and communication. It’s truly unfortunate when all of the effort to put a system in place is expended, and it doesn’t do what anybody really needed it to do.

One of the biggest challenges with validation of PLCs is to really understand what each piece of code means; it is not like reading a conventional computer program. With PLCs, the developer has to sit down and explain the addresses, alarms, inputs/outputs, interfaces, and so forth so that you can understand.
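One practical way to capture that explanation is a documented address map. The sketch below shows the kind of table a developer might walk validation staff through; every address, tag, and description is illustrative, not from any particular PLC.

# Hypothetical PLC address map; all addresses, tags, and
# descriptions are illustrative.
IO_MAP = {
    "I:0/0":  ("start_pb",      "Start push-button, momentary, NO contact"),
    "I:0/1":  ("high_level_sw", "Tank high-level switch, trips alarm A101"),
    "O:0/0":  ("pump_run",      "Transfer pump contactor"),
    "B3:0/4": ("alarm_latched", "Internal bit: high-level alarm latched"),
}

for address, (tag, description) in IO_MAP.items():
    print(f"{address:8} {tag:14} {description}")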

Involving validation experts early in the process will result in few changes to the system during development, and that’s always preferable. Hopefully, you’ll have no hidden defects. If the validation and testing plan has been carried out properly, there shouldn’t be any, and the system will meet the user’s needs.

It costs no more to do it the right way than to do it the wrong way. The challenge is getting the people to make the investment up front rather than correcting things later on.

This way of doing things requires validation personnel to be proactive in talking to the people who develop and create these things in their facilities. These people need to be convinced that it is in their best interests to be open with you, to tell you what's going on, and to spend time with you explaining the system.

Computer Validation
Computer validation wasn't so much "introduced" as "formalized." The necessity to work in a disciplined, common-sense way to deliver the end user's requirements has always existed. New company policies, procedures, and standards, however, have been introduced.
Questions abound about how to interpret the regulatory requirements for computerized systems. What constitutes an electronic record? Which systems need validation? To what level does a particular system need validation?
Let's first put aside the mandatory requirements (two-token access, electronic audit trails, etc.) and answer the following question:
“Are you very confident that at any point in time your computerized system is doing exactly the job it was intended to do in an acceptable way and can you prove this to an independent person?”
By picking random times throughout the specification, design, implementation, and usage of a computerized system, this question raises many more questions that force the need for control onto a project.
An interesting question to ponder is:
“Does more documented evidence mean a higher degree of assurance?”
If the validation is being performed to satisfy an external regulatory authority rather than to improve the implemented system, then in all probability important points are missed and maybe you are spending your money unwisely.
There is always a fixed amount of money available for any project, and it must cover both the project and the validation costs.
The premise that validation will save money is true, but for the first few projects (for some, this may be quite a few) the validation costs include the additional training costs (both direct and indirect) of instilling the company's validation policies.
Is Computer Validation Really Needed?
Computer validation as a regulatory requirement is necessary as it provides the impetus for companies to continually review the quality of software solutions they implement and use.
There is no guarantee for a successful computerized system implementation. If there were, the emphasis on computer validation would not be where it is today.
Very few of us are at a stage where computer validation requirements are performed as second nature. As project teams, we are all in a transitional phase where we need to discover what approach to validation works best for our team and the project. During this phase there will be mistakes. Documents will be created (sometimes with little benefit to the project) because procedures say that they are required. Some documents will be reworked countless times and others will be signed off in haste. Activities will be performed before they have been formally approved to occur and signatures will be missed.
In this phase, validation is a stricter policing exercise because examples need to be set.
None of this means that the concepts of validation do not work. It just means that we have a learning curve to get over and experience to be gained to ensure that future projects benefit from the current mistakes.
During this transitional phase, it is important to note that what may be classified a successful validation may not automatically guarantee a successful computerized system.
We sometimes lose sight of the fact that validation deliverables alone do not guarantee a successful computerized system implementation. They only guarantee a consistent approach within and between projects. It is actually the content of the "project related" deliverables that determines the success of a system.
A validation stage can be deemed successful if it finds the errors or anomalies (e.g., incorrect requirements, wrong design approaches, coding errors) introduced during that stage. If it takes many iterations to fix them (or worse, they never get fixed properly), then that project stage cannot be deemed a success.
Key Components of all Computerized Systems
The main features required from any system are reliability and consistency. A computerized system consists mainly of the following components:
· Software - the most difficult component to understand since it cannot be touched, heard, seen, or smelled.
· Hardware - computers, printers, cabling, modems, scanners, etc.
· People - users, support staff, and developers
· Procedures - ways of integrating the other components to achieve a desired result
What Effect do People Have on These Systems?
The biggest factor in a successful computerized system is trained personnel. The importance of this point in any project cannot be overstated.
Individual training and experience records need to be maintained. For each major project, every person working on that project needs to be assessed as to their current skill and experience for the tasks they are to perform.
Requirements are both explicit and implicit and written to varying degrees of detail. Thought needs to be given to the readers of these documents and their assumed knowledge of the requirements. The skill of being able to write clear, concise requirement documents that are easy to read is now as important as being able to write good code.
Although forms of automatic code generator exist, on the whole, people develop software. Just as we've all read books by good and bad authors, we've all used software developed by good and bad developers. Good software can be recognized by its ease of use, its ability to perform the required process, and its ease of support and maintenance.
Just as books can be written in many languages and styles, so can software. Good software can only be written when the developer is skilled in the software language, the development style (development standards), and the intent of the process that the software will automate.
Very few computerized systems will be used by only a single user, and these people will often have different needs from the same process. It is critical to gather requirements from all the affected user groups early in a project. This stage is often missed, or thought unnecessary, when a company adopts a corporate software solution or implements Commercial Off-The-Shelf (COTS) software.
It is also important that users are adequately trained in the use of the system. As we live in a world where computerized systems are abundant and staff turnover is typically high, the need for regular personnel retraining must be assessed.
A computerized system's end users make the best software testers. It is always easier to test a system for what it is designed to do than to test it for the variety of activities it may encounter for which it was not designed.
Note that it isn’t an auditor’s job to go into the technical depth as to the level of testing performed, just that the qualified personnel have approved the documentation.
As software projects usually involve a team of people, standards across this team are very important to maintain a consistent approach to all tasks. Standards such as document conventions and coding practices may seem obvious, but issues such as defining levels for categorizing testing anomalies can be very subjective.
It is common to define these anomalies into “minor,” “major,” and “critical” deficiencies and have predefined approaches to continuing the testing should these occur.
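As an illustration, such predefined approaches can be written down as explicitly as the sketch below; the three levels and the continue/stop rules are assumptions for the example, not a standard.

# Sketch of anomaly classification with predefined continuation rules.
from enum import Enum

class Severity(Enum):
    MINOR = "minor"        # cosmetic; log it and keep testing
    MAJOR = "major"        # wrong result; keep testing, fix before release
    CRITICAL = "critical"  # safety or data-integrity impact; stop testing

CONTINUE_TESTING = {Severity.MINOR: True, Severity.MAJOR: True,
                    Severity.CRITICAL: False}

def log_anomaly(description, severity):
    # Record the anomaly and report whether testing may continue.
    print(f"[{severity.value.upper()}] {description}")
    return CONTINUE_TESTING[severity]

if not log_anomaly("Batch record printed wrong units", Severity.CRITICAL):
    print("Testing halted pending review.")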
On the whole, software operates in a consistent manner whereas people do not. Variance not only exists between individuals but within individual people. How I react to a situation today may not be the same way I react to the same situation tomorrow.
This should be kept in mind when designing software solutions. If an application requires a certain process flow (that is, screen one must be completed before screen two, etc.), why should the software allow any other ordering of operations? Allowing it not only invites end users to make mistakes, but makes the validation of available (as opposed to acceptable) process paths almost impossible.
All such features make the system easier to use, reducing to some degree user variability. They also make the system easier to validate as testing, training, and on-going support will all be easier.
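A minimal sketch of such enforced ordering, with hypothetical screen names, might look like this; out-of-order access is refused rather than allowed.

# Sketch of enforcing a fixed screen order; screen names are hypothetical.
SCREEN_ORDER = ["login", "batch_setup", "parameter_entry", "confirmation"]

class ProcessFlow:
    def __init__(self):
        self._next = 0  # index of the next screen the user may open

    def open_screen(self, name):
        expected = SCREEN_ORDER[self._next]
        if name != expected:
            raise PermissionError(f"Complete '{expected}' before '{name}'")
        print("Opened screen:", name)
        self._next += 1

flow = ProcessFlow()
flow.open_screen("login")
flow.open_screen("batch_setup")
# flow.open_screen("confirmation")  # would raise: parameter_entry not done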
We all berate software that does not do what we want it to, or what it was developed or purchased to do. Unfortunately, the software doesn't care and continues to do the wrong thing every time we perform that operation. It is far easier to criticize the software or hardware than the people who designed, developed, implemented, and tested the finished product.
For a successful implementation, an open, honest approach should be encouraged and properly managed between all members of the project team. It is far better to ask questions during the requirements and design stages of a project, than after a system has been developed and implemented.
In short, the key to implementing successful computerized system solutions is not in the software or hardware. These days, software and hardware can be purchased, configured, and developed to do virtually anything you can define. The key lies in the selection of the correctly qualified people to perform every task from inception, through development and implementation and continuing with on-going maintenance. To put it simply, “getting the right people for the job.”
Unfortunately, there isn't a huge pool of people trained in good software project management who also intimately understand your requirements.
There seem to be two ways to rectify this:
1. Hire or contract IT people skilled in software project management and train them on your business requirements.
2. Take knowledgeable business users with a working knowledge of computerized systems and train them in good software project management.
A combination of both of the above would seem the most logical approach.
Developing/Implementing Computerized Systems in the Regulatory Industry
For most manufacturers of automated equipment, software is a small part of a wider development effort. The mechanical element is often perceived as the most important, followed by the electrical parts, and only then the controlling software.
For many IT professionals, describing IT concepts in a non-technical manner is not as easy as it sounds.
By its nature, most actual software development involves non-communicative activities. Between hours of coding and debugging, and struggling with the restrictions of the software tools, the actual user requirements may become less focused as the project expands.
Consider the typical V-model defined by GAMP, which describes the accepted approach to software development, and superimpose on it the key personnel groups and their percentage of effort at each stage. The point is that we typically do not involve our programmers at the requirements stage or our users at the coding stage.
This assumes that the users know exactly what they want and that the programmers know exactly how to deliver it. These are two very big assumptions which, if not adequately mediated, can easily result in the wrong job being implemented the wrong way.
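For reference, the pairing of specification stages to the verification stages that test them in the GAMP V-model is commonly rendered as below (a sketch; the effort percentages discussed above vary too much by project to include).

# The GAMP V-model pairs each specification level (left leg) with
# the qualification level that verifies it (right leg); coding sits
# at the vertex of the V.
V_MODEL = [
    ("User Requirements Specification", "Performance Qualification (PQ)"),
    ("Functional Specification",        "Operational Qualification (OQ)"),
    ("Design Specification",            "Installation Qualification (IQ)"),
]

for spec, qual in V_MODEL:
    print(f"{spec:34} <-> {qual}")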
Users feel they cannot request different features because it’s not in the specifications. Developers feel they must focus on delivering what’s in the spec, and are thus reluctant to make their own suggestions for improved functionality.
This highlights the importance of the Project Leader in coordinating the correct information to the entire team. It also highlights the importance of having accurate and easy-to-read project documentation to pass from one phase of the project to the next.
Conclusion
The concepts of computer validation are sound and need to be followed. However, rather than just implementing a set of quality policies and procedures and then policing them, the following considerations should be addressed:
· Scrutinize the “roles and responsibilities” section in the quality/validation plan
· Consider additional training in the non-project areas (communications, document writing, and presentation skills)
· Ensure the personnel leading the requirements development have the necessary skills
· Encourage openness and honesty between all project members
· Foster relationships between users and developers
If every IT project team member is given clear instructions, has the skills to perform the tasks they have been allocated, and can successfully pass this information onto the next person in the project, then this will surely raise the confidence that the computerized system will reliably and consistently perform the tasks it has been documented to perform.

21 CFR Part 11 - The Regulation

The FDA rule governing the use of Electronic Records and Electronic Signatures is one of the most significant new US regulations to affect the pharmaceutical manufacturing industry in recent times.

With the ever-greater use of information technology and computer systems at all stages of manufacturing, more and more operating processes are being automated. As a result, key decisions and actions are being taken through electronic interfaces, with regulatory records being generated electronically.

While recognizing the long-term benefits that 21 CFR Part 11 will bring in permitting technological advances, industry is also faced with applying the rule to existing systems and current projects. With this comes an urgent need to improve understanding of the rule, its interpretation, and its application. But that is a topic for another time.

Francis Lovering

There are many points for discussion here.
1 PLCs
I submit that the points raised about PLCs apply equally to DCS systems. And to LIMS, MES, ERP, and so on.
And I can't quite understand why the paper distinguishes between PLCs, computers, and software. A PLC is a computer running software, IMHO.
2 "The person who programs the PLC knows very little about the process, current Good Manufacturing Practice (cGMP), or anything else that matters as validation personnel."
I suspect that many PLC/DCS programmers would consider this very insulting. All the good process control programmers I know do try to understand the process and end up understanding it very well. If they are working on validated systems they have normally been trained in what cGMP is. They often have opinions about what matters to "validation personnel", some of which might make the validation personnel themselves feel insulted.
3 "The Wrong Way to do PLC Validation .... without defining what is wanted"
This is a key point. Yes, many programmers are working without good functional requirements specifications, and in the face of constant change.
4 "Worst of all, the system will fail to meet the user’s needs
Of course the users Must be involved in the Design process. This is well documented. Many PLC/DCS systems integrators understand this well.
5 "Involving validation experts early in the process will result in few changes to the system during development, and that’s always preferable"
Many have experienced that such involvement actually drags them down in a hail of paperwork, prevents the changes that are necessary to turn a high-level requirement specification into a workable system, and greatly extends the time taken to do even simple tasks. It is only worth starting serious version management when you have got near to a working system. Yes, you need to track the design process and have an audit trail, but be careful you do not make it more important than the actual design.
6 "It costs no more to do it the right way than to do it the wrong way"
Indeed, and a ha'porth of tar, etc. However, involving validation experts is not a solution to this. A senior validation engineer working on a large parenterals pharmaceutical project in which I was involved once said to me that the validation people are the Grade C engineers. He wasn't actually that bad, but what he meant was that the validation should be such that even the low-grade engineers could understand it. A valid point. And also a good reason for them not to be a part of the design process.
7 "None of this means that the concepts of validation do not work"
My understanding is that the concepts of validation are no more or less than those of good software engineering. The validation people concentrate on checking that it has been documented, not on being good software engineers. My impression incidentally is that validation Does add to the cost, good software engineering does not.
8 "Software – the most difficult component to understand since it cannot be touched, heard, seen, or smelled
On the contrary, it can be specified, tested, reviewed, etc., and it is startlingly obvious when it doesn't work. The key is to design it in such a way as to avoid finding out too late that it isn't going to work. Validation, in my experience, is of little help in that endeavour. Good software design, on the other hand, is. That is why I have devoted so much of my time to developing tools to help.
9 "The biggest factor of a successful computerized system is trained personnel. "
If it said Skilled I might agree, and training can raise people's skills. But you cannot train everyone to do anything. And understanding validation is not the only skill; in fact, it is a lesser one!
As you will have gathered, I am a little bit skeptical about validation (not about good software engineering). Indeed, I sometimes wonder if validation is the new Y2K. The problem is that there is no date on which the skeptics will be proved right.
Francis
www.controldraw.com
V
Once upon a time I led a software team at a major vendor (Siemens). Our site (Johnson City, TN) was trying to get ISO-9000 certified. Management put an engineer in charge and sent him to class. He came back on a mission to change everything (including my team's methods and procedures) to get us validated.

He knew nothing about software (plus this was an attempt to validate the whole manufacturing process). Since he did not understand software, he was really uncomfortable with any and all processes employed in development. He wanted to revamp everything in a way that he felt really comfortable with. However, this is not what certification or validation is all about.

Validation is about having a verifiable process in place to track changes. That is all it is about. You have to be able to show what you do and prove that you do what you say you do.

We argued a great deal about what validation was. He wanted to create new processes and new methods. I repeatedly stated that all we had to do was document what we did, do what we said we did, and be able to prove it by showing it electronically.

We had an electronic system for documenting changes, the actions taken, and how we verified that the actions did what we said they were supposed to do.
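As a sketch of the kind of record such a system might hold (the field names are illustrative assumptions, not taken from the actual APT system):

# Illustrative electronic change record; all field names are assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRecord:
    change_id: str
    description: str           # what is to change and why
    action_taken: str          # the actual modification made
    verification: str          # how we proved the action worked
    signoffs: list = field(default_factory=list)
    opened: date | None = None
    closed: date | None = None

rec = ChangeRecord(
    change_id="CR-042",
    description="Support incremental compile and download",
    action_taken="Added dependency tracking to the build step",
    verification="Regression suite plus targeted incremental-build tests",
    opened=date(1993, 5, 1),
)
rec.signoffs += ["originator", "test team"]
rec.closed = date(1993, 6, 15)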

My design team passed the audit hands down, and the methods we employed have been verified by an FDA audit within the past five years (so I am told, since I left the company in 1994).

I saw then that ISO-9000 certification is very similar to FDA validation. When we started on a major new release, we entered into our electronic system a statement of the goals of the new release (i.e., modify the APT system to support incremental compiles and downloads. The system must be able to ......)

We then developed a design and test plan. We entered those into our electronic system.

We then proceeded to design, implement, and perform unit testing until we reached an alpha release point. The test team worked to put together test scenarios that would exercise the system and its boundary checks.

At this point we started tracking problems. All problems were entered into the system, assigned a priority, and assigned to a developer.

Once the software reached a beta release point then the test team started testing.

The controls team started trying to solve real-world problems. The controls team was separate from the test team. Its mission was to exercise the system in a realistic way to shake out bugs that test engineers could not find. There is a difference between what a test team finds and what real users find, because real users are trying to solve problems using the tool while test engineers are trying to find problems in the tool. The controls team tried to simulate real users so that the product would get a shakedown before the release and not after.

This approach worked wonderfully, and while APT was never bug free, each release of APT was of very high quality and seldom had major flaws in the final released version.

All documented problems had to be resolved, the solution documented, and a test plan to verify the fix entered. The change had to be tested and signed off (electronically) by the originator and the test team. The unresolved issue count had to be zero before APT was released to manufacturing for productization.
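That zero-unresolved gate is simple to state in code. A sketch, with an assumed issue structure:

# Sketch of the release gate: every issue resolved and signed off
# by both parties before release; the data structure is illustrative.
REQUIRED_SIGNOFFS = {"originator", "test team"}

def ready_for_release(issues):
    blocking = [i for i in issues
                if not (i["resolved"] and REQUIRED_SIGNOFFS <= i["signed_off"])]
    return len(blocking) == 0

issues = [
    {"id": 101, "resolved": True, "signed_off": {"originator", "test team"}},
    {"id": 102, "resolved": True, "signed_off": {"originator"}},  # blocks release
]
print("Release to manufacturing:", ready_for_release(issues))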

This method worked wonderfully.

A few years ago I did some work at a contact lens manufacturer. One of the APT team members had gone there to help develop their system. He had worked with the company to put in place a system similar to the one used by the APT team.

When a change was needed to the line, a change request was made and assigned. The program was changed, and a report was included in the APT application showing who made the change, what was changed, and where. A test plan was entered and the test results were entered. All of this was included in the actual program, so the changes could be fully tracked over time.

This system was not really comfortable for me at first, but after a while I saw how much it helped. Several times I had to go back to a previous version and then redeploy my changes (due to something being done in the field while I was working on a design change). Having documented what was originally in place and what it was replaced with made redeployment simple.


Validation is what you make it. If you employ good software design methodologies and good engineering practices, document what you do, do what you say you do, and are able to prove it, then you can certify without adding a lot of overhead.

However, the people charged with responsibility for validation (not the test engineers who are responsible for validating the system) usually know nothing about the process or the software. Within the framework of this ignorance, it is safer (for them) to try to build barriers to change.

An auditor is going to look for good practices and answers to questions. If the auditor does not get a good answer to a question, then he or she will probe deeper. Therefore, it is imperative that everyone do the job the way you say they do the job. Anyone on the team should be prepared to show an auditor the process employed and how they use it personally. That is not to say that everyone should be able to show everything. If an auditor asks what you do and how you use the system in your job, then you should have an answer readily at hand. But the answer should be based upon what you really do and not what you think the auditor wants to hear. If in your job you have very little need to interact with the electronic tracking system, you simply explain your job, what you know, and why you do what you do the way you do it. Be able to show examples that prove that is how you really do it.


The system should not be for show, but a tool that makes everybody's job easier and more clear cut.