Pharmaceutical and biotech companies have no doubt that Artificial Intelligence (AI) is here to stay. Nonetheless, the opportunities AI technology offers are being adopted slowly in a rather conservative manufacturing industry, one focused on managing production risks under controls subject to strict regulation. Fear of change? Risk aversion? Likely not.
The pharma and biotech industries have relied for years on physical laws and chemical reactions, all explainable through mathematics and statistics, and there is absolutely no reason why this should change. In the end, AI is all about that: math and stats, just a bit more intricate. The algorithms underlying today's Machine Learning (ML) models are formulated in a non-explicit way, and the continuous increase in computational power, e.g., of GPUs, has made them ever faster at these calculations. However, there is still a "black box" kind of magic behind AI algorithms that the industry tends to avoid, as neither quality control teams nor regulators feel comfortable validating ML models underpinned by such algorithms.
Since AI algorithms are at the core of any ML model, there is a strong need to know that they perform as expected when applied to data with known characteristics. Consider crafting a table: you need a saw, and your first goal is to confirm that the saw at hand is suited to the wood you have selected. In other words, you need to qualify the algorithm (the "saw") to reassure customers that they can create an ML model (a "table") by applying that algorithm to their process data (their "wood").
The final goal of AI algorithm qualification is not only to ensure that an algorithm produces satisfactory results under a given set of conditions, but also to open the "black box" enough to understand the algorithm's limitations and to identify the factors that could contribute to the malfunctioning of the resulting ML model. This approach aligns well with the Quality by Design principles widely applied in GxP-based manufacturing environments, which Bigfinite has adopted for AI algorithms through its own 6-step algorithm qualification policy:
- Definition of the acceptance criteria
- Risk assessment
- Design of the experiment
- Dataset generation
- Execution of the experiments
- Analysis of the results against the acceptance criteria
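To make the six steps concrete, here is a minimal sketch of how such a qualification run might look in code. This is an illustrative assumption, not Bigfinite's actual implementation: the "algorithm" under qualification is an ordinary least-squares fit, the acceptance criteria and noise levels are invented for the example, and the synthetic dataset has known characteristics so the results can be checked against ground truth.

```python
import numpy as np

# Step 1: definition of the acceptance criteria (thresholds are assumptions)
ACCEPTANCE = {"max_rmse": 0.1, "max_coef_error": 0.05}

# Step 2: risk assessment -- documented here as a comment; in practice this
# is a formal review. Example risk: sensor noise degrading the fit.

# Step 3: design of the experiment -- fit a line to data whose true
# parameters are known, at several noise levels.
NOISE_LEVELS = [0.0, 0.01, 0.05]
TRUE_SLOPE, TRUE_INTERCEPT = 2.0, 1.0

# Step 4: dataset generation (synthetic data with known characteristics)
def make_dataset(noise, n=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n)
    y = TRUE_SLOPE * x + TRUE_INTERCEPT + rng.normal(0.0, noise, n)
    return x, y

# Step 5: execution of the experiments
def run_experiment(noise):
    x, y = make_dataset(noise)
    A = np.column_stack([x, np.ones_like(x)])
    (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
    rmse = np.sqrt(np.mean((A @ [slope, intercept] - y) ** 2))
    return slope, intercept, rmse

# Step 6: analysis of the results against the acceptance criteria
results = []
for noise in NOISE_LEVELS:
    slope, intercept, rmse = run_experiment(noise)
    passed = (rmse <= ACCEPTANCE["max_rmse"]
              and abs(slope - TRUE_SLOPE) <= ACCEPTANCE["max_coef_error"]
              and abs(intercept - TRUE_INTERCEPT) <= ACCEPTANCE["max_coef_error"])
    results.append((noise, bool(passed)))

print(results)
```

The value of this structure is that the pass/fail outcome is traceable: each failure points to a specific noise level and criterion, which is exactly the kind of evidence quality teams and regulators expect from a qualification exercise.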
At Bigfinite, we are working on the qualification of the AI algorithms used in our AI widgets, aimed at giving our customers maximum confidence when creating ML models on our GxP AI platform.
ISPE. GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems. ISPE, Tampa, FL, 2008.
Learn more by joining us for a webinar, How to Qualify AI Algorithms, on May 5, 2020.