Software Validation with Mack Soneh - A Deep Dive into Validation
Amber: [00:00:05] Hi, everyone, this is Amber Shao, Founder and CEO of AduroSys, a laboratory data management software company. Welcome to the AduroSys lab software podcast. Joining me today is Mack Soneh. Mack is a principal consultant at Venturescope Consulting, located in Sunnyvale, CA. Mack started software development in 1984 and was introduced to the world of software validation at Bell Laboratories. Since 1995 he has been applying software validation methodologies and techniques in FDA-regulated industries, especially in the in-vitro diagnostics (IVD) field. Companies he has successfully helped by applying these methodologies and techniques include 23andMe, GRAIL (which was acquired by Illumina), Johnson & Johnson, Abbott, and other Silicon Valley start-ups.
Amber: [00:01:01] So in the last episode, we covered some basic concepts about software validation. Today, let's do a deeper dive into the validation process. I deal with both homegrown software and vendor-provided commercial software. I used to think the validation process for them was much the same, but I recently learned that the process may actually vary by the type of software. So between homegrown software and vendor-provided software, how do the validation processes differ in general?
Mack: [00:01:37] Sure. The process you follow for software validation is essentially the same. But depending on whether it's homegrown software or vendor-provided software, the resources and activities that you're involved in or have to support will be different, because there are certain types of validation activities where you simply have to rely on the vendor. You do not have access to certain things because of their proprietary nature, and because of the significant resource allocation required, you can't afford to do such things yourself.
Amber: [00:02:18] Where does validation begin?
Mack: [00:02:20] Yeah, ideally the validation begins in the planning phase. In planning, you have to write a software validation plan, which defines what is to be accomplished through software validation.
Amber: [00:02:34] So you're saying it's better to start at the planning stage, whether you're acquiring or developing the software. But I know there are cases where the software has already been installed or already been developed, and then they have to do the validation. Is that acceptable? Is that another approach, or is it not recommended at all?
Mack: [00:02:56] Well, it is not recommended, but it happens after the fact. So if the system has been installed and people are using it, and you find out it needs to be validated, the thing to do, even while it's being used, is to validate the system; that's best practice and in everyone's best interest. In this case, it is considered retrospective validation, which is better than nothing. It's significantly better than just assuming it's okay, using it, and hoping nothing happens so you can keep going without validation. That's the worst case possible.
Amber: [00:03:37] So it's definitely not recommended.
Amber: [00:03:40] What are some of the major components of the validation activities, and what kinds of documents are created along with those activities?
Mack: [00:03:49] Let me dive into more specifics of what has to be done in software validation. Software validation consists of four major components. The first one is the validation plan. The validation plan has to be drafted, and this is a discovery process, because the plan has to define what has to be accomplished through software validation.
Mack: [00:04:14] Now, the second component is the requirements specification, whether for a homegrown system or a vendor-provided system. Only you as the user or user group can specify the requirements, because even for the same software, every company operates differently. There is no such thing as one company operating exactly the same way as another.
Amber: [00:04:42] Every lab is different.
Mack: [00:04:44] Every lab is different. Therefore, you have to write your own requirements specification. Now, for vendor-provided software, the vendor can provide you a list of software functions. That list of functions is not the same as a requirements specification. The requirements specification describes how your lab is going to use the software by making use of the functions the vendor software provides. So the requirements are user requirements, which are not the same thing as the list of software functions.
Amber: [00:05:21] Right. Going back to your thermometer example: the user requirements document basically indicates that I want to use the thermometer on a patient rather than in soup, right? Even though the vendor who manufactured this thermometer allows you to use it in any case, you have to say very explicitly that I need to use it on a patient, and not in any other case.
Mack: [00:05:48] Yes, that's right.
Amber: [00:05:51] Okay. That's the second component, requirement specification. The third component, so you mentioned there are four components, right?
Mack: [00:05:58] The third component is the software development process. In the industry it's often called the software lifecycle, abbreviated SDLC, which stands for Software Development Life Cycle.
Amber: [00:06:12] This is the part where, I remember, earlier you gave the example that software testing alone is not enough for validation because the design is not covered by the testing. So I'm assuming part of the software development lifecycle documentation will also cover the design of the software. Is that right?
Mack: [00:06:32] That's right. That's right. The software lifecycle breaks down into four distinct phases. The first phase is planning and requirements analysis. Software design and development is the second phase. The third phase is software testing. The fourth phase, after the software has been tested, is when you deploy the system: installation and deployment. So if software testing is your only concern, then you are just focused on phase three, the software testing phase, and as a tester you do not have visibility into the requirements analysis and design phases. In each phase of the software lifecycle, you have to continuously revisit your software hazard analysis. When you are gathering and writing the requirements, you have to ask yourself what could go wrong, and if it goes wrong, what the potential hazard could be. You have to keep asking that question in every phase of the software lifecycle: during development, during testing, and once it's deployed.
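The per-phase hazard question Mack describes ("if this goes wrong, what is the potential hazard?") can be sketched as a simple traceability record. This is only an illustration, assuming a lab keeps such a log in code or a spreadsheet; the phase names follow the transcript, while the record fields and example entries are hypothetical.

```python
# Sketch of a hazard-analysis log that is revisited at every SDLC phase.
# Phase names follow the transcript; fields and entries are illustrative.

SDLC_PHASES = [
    "planning and requirements analysis",
    "design and development",
    "software testing",
    "installation and deployment",
]

def record_hazard(log, phase, failure_mode, potential_hazard):
    """Append one 'what could go wrong' entry, tied to an SDLC phase."""
    if phase not in SDLC_PHASES:
        raise ValueError(f"unknown SDLC phase: {phase}")
    log.append({
        "phase": phase,
        "failure_mode": failure_mode,
        "hazard": potential_hazard,
    })
    return log

hazard_log = []
record_hazard(hazard_log, "planning and requirements analysis",
              "temperature unit left ambiguous in a requirement",
              "misread patient temperature")
record_hazard(hazard_log, "installation and deployment",
              "open network access to the validated system",
              "unauthorized software change")
print(len(hazard_log))  # -> 2
```

The point of tying each entry to a phase is that the same question gets asked again as the project moves forward, rather than only once during testing.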
Amber: [00:07:56] And this also explains why it's better to do the validation right from the beginning of the software development or installation, because those activities will help you refine the design and testing along the way, rather than doing it retrospectively.
Mack: [00:08:13] That's right. And the fourth phase of the lifecycle is the deployment. That's where many of the security issues are implemented.
Amber: [00:08:23] That was all about the software development lifecycle. So moving on to the fourth component of the validation activities, I assume it's going to be the protocols, right, the test scripts? Could you elaborate on that?
Mack: [00:08:37] Sure. So in the software lifecycle, the third phase is the test cycle, and the test cycle or test phase breaks down into the execution of protocols. There are three types of protocols: IQ for installation qualification, OQ for operational qualification, and PQ for performance qualification. People just say IQ, OQ, PQ.
Amber: [00:09:04] Believe it or not, this is the question I get asked the most: what are IQ, OQ, and PQ?
Mack: [00:09:10] Yeah, so IQ is installation qualification. That's the installation of the software and software system into your lab environment. Installation qualification originally comes from installing machinery such as a centrifuge or a sequencer. But for software, the software is installed in the environment where it's being used, in this case the lab environment; or if it's a virtual environment, then the software is installed in the virtual environment and is accessible from your production or clinical labs.
Amber: [00:09:52] So then the operational qualification OQ, what is that?
Mack: [00:09:56] Yes, OQ stands for operational qualification, and this activity is supposed to be performed right after the successful completion of the IQ. Operational qualification is supposed to challenge the functions. Say your thermometer measures from zero degrees Celsius up to a maximum of 100 degrees, in one-degree increments for every point of measurement. OQ will test that entire range from zero to 100 degrees, making sure the reading doesn't all of a sudden jump from 20 degrees Celsius to 30 degrees Celsius. And for medical intended uses, since the OQ is a challenge test, you're supposed to actually test beyond what the specification requires: you test 101 degrees, 102 degrees, and observe how the thermometer behaves once the measurement exceeds its range. Does it break? Does it explode?
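The OQ idea Mack describes, stepping through every point of the specified range and then deliberately probing beyond it, can be sketched as a small test harness. The `Thermometer` class here is a hypothetical simulated stand-in, not any real device API; a real OQ protocol would drive actual hardware or software and document each observation.

```python
# Sketch of an OQ-style challenge test for a thermometer specified to
# read 0-100 degrees Celsius in one-degree increments.
# The Thermometer class is a hypothetical stand-in for a real device.

class Thermometer:
    SPEC_MIN, SPEC_MAX = 0, 100

    def read(self, actual_temp_c: float) -> float:
        """Simulated reading; a real driver would query the hardware."""
        if actual_temp_c < self.SPEC_MIN or actual_temp_c > self.SPEC_MAX:
            raise ValueError("out of specified range")
        return round(actual_temp_c)  # device reports one-degree increments

def run_oq(device: Thermometer) -> dict:
    results = {"in_range_pass": True, "out_of_range_handled": True}
    # Challenge every point of the specified range: the reading must
    # never skip, e.g. jump from 20 straight to 30.
    for t in range(device.SPEC_MIN, device.SPEC_MAX + 1):
        if device.read(t) != t:
            results["in_range_pass"] = False
    # Challenge beyond the specification (101, 102) and record whether
    # the device fails safely instead of silently returning a value.
    for t in (101, 102):
        try:
            device.read(t)
            results["out_of_range_handled"] = False  # silent acceptance is a finding
        except ValueError:
            pass  # controlled rejection is acceptable behavior
    return results

print(run_oq(Thermometer()))  # -> {'in_range_pass': True, 'out_of_range_handled': True}
```

The beyond-range loop is the "challenge" part: the protocol does not assume out-of-spec inputs never happen, it observes and records what the system does when they do.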
Amber: [00:11:15] Ok. And then for the performance qualification, I'm assuming this is related to the intended use, like how you're going to use the system.
Mack: [00:11:24] PQ is the test for the real situation, or a situation expected to be close to real. Close to real means you will actually be measuring temperature: if the device is meant to be used on humans, then you will actually measure human temperature. And because human body temperature doesn't drop to zero degrees or rise to 100 degrees, even though the thermometer is capable of measuring those ranges, your performance qualification tests the realistic environment with the realistic subject, in this case the human body. So you will repeatedly measure a human body, repeatedly verify the accuracy over that range, and make sure it performs accurately.
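The contrast with OQ can be sketched the same way: where OQ swept the full specified range, a PQ-style run repeats measurements under realistic conditions only. Everything here is illustrative, assuming a clinical thermometer, a human-body range of roughly 35-40 degrees Celsius, and a made-up 0.5-degree acceptance criterion; a real PQ protocol would define these from the intended use.

```python
import random

# Sketch of a PQ-style test: repeated measurement under realistic
# conditions (human-body range), not the full 0-100 device range.
# The noise model, range, and tolerance are illustrative assumptions.

TOLERANCE_C = 0.5   # hypothetical acceptance criterion
RUNS = 100          # PQ repeats measurements to show consistent accuracy

def measure(true_temp_c: float) -> float:
    """Simulated device reading with small noise; stands in for hardware."""
    return true_temp_c + random.uniform(-0.1, 0.1)

def run_pq() -> bool:
    random.seed(0)  # reproducible protocol run
    for _ in range(RUNS):
        true_temp = random.uniform(35.0, 40.0)  # realistic human-body range
        if abs(measure(true_temp) - true_temp) > TOLERANCE_C:
            return False  # one out-of-tolerance reading fails the protocol
    return True

print(run_pq())  # -> True
```

The design choice mirrors Mack's point: PQ does not re-prove the whole specification, it demonstrates repeatable accuracy in the conditions the lab will actually face.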
Amber: [00:12:19] Out of all these documents that you have just talked about, which one actually come from vendor if this were for vendor-provided software?
Mack: [00:12:29] Yeah, that's a very good point. If it's vendor-provided software, the vendor will do the IQ and OQ. The vendor can install the software, and operational qualification is making sure the thermometer measures correctly within the predefined specifications, so that's something they can perform. But the vendor has no idea how that thermometer will be used on humans in your special circumstances; they don't know your intended uses. Therefore, for vendor-provided software, the PQ always falls on the lab director and lab manager.
Amber: [00:13:13] So that's probably the other difference between vendor-provided and homegrown software. If this were vendor software, the IQ and OQ would come from the vendor. But if it were homegrown software, those are documents that internal resources have to work on, right?
Mack: [00:13:30] That's right. And there is a recent trend to watch. Since the IQ and OQ have to be executed by the vendor, and vendors don't really want to spend their own resources performing them because those are labor-intensive testing processes, in some cases they will just issue a certification paper instead of performing the IQ and OQ. The vendor claims the IQ and OQ were performed in their own company's development environment, and all you get is a one- or two-page certificate of qualification for the testing performed, not the actual IQ or OQ documentation. So you have to be careful.
Amber: [00:14:22] So that's something to watch out for.
Mack: [00:14:25] Oh yes, absolutely.
Amber: [00:14:29] Yeah, definitely there's a lot of documentation that needs to be produced as part of this validation process. So where does validation end?
Mack: [00:14:41] Validation ends at the successful completion of the IQ, OQ, and PQ, when you state in the validation summary report that validation has been successfully executed and completed. This final validation summary report has to be reviewed and approved by the project stakeholders.
Amber: [00:15:01] And after the validation is completed, if the system continues to be used, does one have to do anything to ensure the system is still maintained in a validated state, or does it not really matter anymore?
Mack: [00:15:14] Yes, that's a crucial question. Once the system has been validated, you've got the seal of approval. Every time a new requirement comes in, your company's change control process has to review it. So you have to have a change control or change management process, and before you change the software, you have to evaluate the nature of the change. If you decide to change the software, whether by implementing a new feature or fixing a bug found after the software has been in use, that change has to be tested, not in your production environment, but first in a testing or quality assurance environment. Those changes have to be performed and reviewed within your change control/change management process, and only then do you introduce the actual software changes into your production system.
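The gates Mack describes, evaluate the change, test it outside production, then release, can be sketched as a simple promotion check. This is a minimal sketch of the idea, assuming made-up field names; a real change control process would live in a quality management system, not a script.

```python
from dataclasses import dataclass, field

# Sketch of a change-control gate: a change must be evaluated, tested
# in a QA environment, and approved before it reaches production.
# The ChangeRequest fields and states are illustrative assumptions.

@dataclass
class ChangeRequest:
    description: str
    evaluated: bool = False          # nature of the change reviewed?
    tested_in_qa: bool = False       # verified in a test/QA environment?
    approvals: list = field(default_factory=list)

def promote_to_production(change: ChangeRequest) -> str:
    # Each gate mirrors a step from the transcript: evaluate first,
    # test outside production, and only then release.
    if not change.evaluated:
        return "rejected: change has not been evaluated"
    if not change.tested_in_qa:
        return "rejected: change not tested in QA environment"
    if not change.approvals:
        return "rejected: no stakeholder approval recorded"
    return "released to production"

cr = ChangeRequest("fix reported bug", evaluated=True,
                   tested_in_qa=True, approvals=["lab director"])
print(promote_to_production(cr))  # -> released to production
```

The ordering of the checks is the point: a change that skips QA testing never reaches the release step, which is how the system stays in a validated state after go-live.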
Amber: [00:16:13] So in a way, this process doesn't end. It only ends maybe if you stop using the software.
Mack: [00:16:19] That's right. Once you retire the software, you're relieved of the responsibility of validating it. And one last comment: once the software is validated and put into production, you have to make sure only the production support team has access to change anything. Everyone else should be blocked from modifying the software. That's a crucial point.
Amber: [00:16:47] Well, thank you so much, Mack, for the deep dive into the validation process. I hope this gave our listeners a better understanding of what validation entails.
Mack: [00:16:57] Oh, sounds good. Yeah, I hope this will be helpful.
Amber: [00:17:01] Thank you.