Data Understanding and Contextualization
Your drug manufacturing process produces thousands of data points, coming from different systems in different formats. On top of that, some of it is GMP data (subject to FDA guidance) and some is not. Your processes all work to support the same outcome, but using the data to make improvements is nearly impossible if every system speaks a different language.
The answer isn’t just in aggregating data. The power comes from giving the data context and understanding it as it relates to everything else coming out of your systems.
First, consider data understanding in a different way: language translation.
If you have a French speaker and a Mandarin speaker, how do they communicate without losing anything in translation? Without a common language, we need the French speaker's words translated into Mandarin or vice versa, and the risk is losing intent, meaning, and context. To reach a common understanding, we need to do a few things:
- Maintain each original language so meaning and intent are preserved: French stays French, and Mandarin stays Mandarin.
- Layer on a translator that maintains the original intent, meaning, and context of the French or Mandarin speaker.
- The translator delivers the same message and context in a language that both speakers understand.
The result: Without changing either language, we have created context and commonality by layering on a translator that can understand the intricacies of both languages.
Keeping it Compliant
Now, imagine this for GMP and non-GMP data and what it requires of your systems and processes. To use the data to make decisions, it has to be GxP compliant. But how do you manage this if not every system is compliant (and not every system needs to be)? The answer is in the “translator.”
Let’s take ERP and production data, for example. Your ERP system is not necessarily validated for a manufacturing process; however, the data going into and coming out of it could be relevant to understanding your manufacturing processes and the materials involved. You also have production data in batch records, and that data is required to be compliant. How do you find connections between the data to help you make decisions, and how do you make sure it stays compliant?
- Use GxP-compliant systems: This doesn’t mean validating your ERP system; it means getting your data into a platform that is GxP compliant and can easily be validated.
- Aggregate your data: Getting your data into a single GxP-compliant data lake is a hugely important step, but it’s not enough. You’ll have your data in one place, but it still won’t speak the same language.
- Break data silos: This is where the magic starts to happen: data contextualization.
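As a rough sketch of what that “translator” layer can look like in code (all system names, field names, and values below are hypothetical, assuming an ERP export and an electronic batch record that share a batch identifier):

```python
# Minimal sketch of a contextualization layer: records from two source
# systems keep their native shape, and a thin "translator" maps each one
# into a shared model so they can be joined on a common key.
# All field names and values here are invented for illustration.

ERP_RECORD = {"MATNR": "API-001", "CHARG": "B-2024-017", "WERKS": "Plant-A"}
BATCH_RECORD = {"batch_id": "B-2024-017", "yield_pct": 87.4, "status": "released"}

def translate_erp(rec):
    """Map ERP field names into the common model without altering the source."""
    return {"material": rec["MATNR"], "batch": rec["CHARG"], "site": rec["WERKS"]}

def translate_batch_record(rec):
    """Map electronic batch record fields into the same common model."""
    return {"batch": rec["batch_id"], "yield_pct": rec["yield_pct"], "status": rec["status"]}

def contextualize(erp_rec, mes_rec):
    """Join the translated records on the shared batch key."""
    erp, mes = translate_erp(erp_rec), translate_batch_record(mes_rec)
    if erp["batch"] != mes["batch"]:
        raise ValueError("records describe different batches")
    return {**erp, **mes}

print(contextualize(ERP_RECORD, BATCH_RECORD))
```

Note that neither source record is changed: like the French and Mandarin speakers, each system keeps its own “language,” and only the translated view is shared.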
Let’s look at two examples of where data contextualization matters in a biopharma manufacturing environment:
- CDMOs often manufacture products for more than one Sponsor at a time, and they want to understand how to make a process more efficient using data only from Customer A. The same equipment may be used for Customer A and Customer B, but the two have different processes, recipes, and products altogether. Without understanding the data output in the context of the batches made for Customer A, how could they use it to make any improvements? Without automatic data contextualization, the answer is: manually. The time that takes is prohibitive, and the work is prone to errors. The better, compliant, and more efficient answer is aggregating data into a GxP-compliant system where data context and understanding are automatic: a place where your data speaks a common language and is filtered and associated with other data for specific uses while meeting security and integrity requirements.
- Take lab data, chromatography specifically. You might run chromatography on different types of equipment, meaning that although the data is the same type, it comes from different systems, and those systems don’t always produce the same file formats. All data coming from the chromatography process is “chromatography data,” holding great possibilities for data analytics. But it is not until the data is transformed into a common structure (in a GxP-compliant way) that this becomes possible.
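To make the chromatography example concrete, here is a minimal sketch of that transformation step. The instrument export formats, field names, and values are all invented for illustration; the point is that a per-system parser maps each format into one shared schema:

```python
# Sketch: two chromatography systems export the same kind of data in
# different file layouts; a small parser per system normalizes both
# into one schema so the peaks are analyzable together.
# File layouts and field names below are hypothetical.

import csv
import io

def parse_system_a(text):
    """Hypothetical System A exports CSV with columns: peak,rt_min,area."""
    rows = csv.DictReader(io.StringIO(text))
    return [{"peak": r["peak"], "rt_min": float(r["rt_min"]), "area": float(r["area"])}
            for r in rows]

def parse_system_b(text):
    """Hypothetical System B exports semicolon-separated lines:
    name;retention_seconds;area -- retention time is converted to minutes."""
    peaks = []
    for line in text.strip().splitlines():
        name, rt_sec, area = line.split(";")
        peaks.append({"peak": name, "rt_min": float(rt_sec) / 60.0, "area": float(area)})
    return peaks

a_export = "peak,rt_min,area\nmain,6.2,1520.0\nimpurity1,7.8,34.5\n"
b_export = "main;372;1498.2\nimpurity1;468;36.1\n"

# After normalization, peaks from both systems share one schema and units.
peaks = parse_system_a(a_export) + parse_system_b(b_export)
```

In a regulated environment, a transformation like this would itself live in a validated, GxP-compliant pipeline with an audit trail; the sketch only shows the structural idea.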