The Pharma Lab Show: Mixtures and Standards

    Welcome to the Pharma Lab Show Live, and thank you for joining us all here in the wonderful Woodlands, Texas.

    Well, what we're going to talk about today, what I'm going to talk about today, is mixtures and standards. And this really comes from my experience of working in the industry for many years, developing lot release methods. And it's quite a scary experience sometimes. And it gives me a chance to scribble on the board and do my diatribe on the state of statistics in analytics at present. So we'll talk about mixtures and standards today. And the first topic I want to cover is that of ideality. So what do I mean by ideality? These are really the requirements that are necessary for a mixture sample to be ideal from the perspective of X-ray powder diffraction.

    So by that I mean that it will give rise to a consistent, reliable and reproducible X-ray pattern. So if I measure the same sample multiple times, or I take aliquots from it and measure multiple aliquots, I get consistent, reproducible and reliable data from those samples. That's what I mean by ideality.

    And so what things do we need to consider to have an ideal mixture or an ideal standard? Well, the first one, and we've covered this a little bit in some of the earlier discussions, is the idea of particle size, which I'll just call PS for short. The usual guideline in X-ray powder diffraction is that the particle size should be less than 40 microns. In practice, I find for drug products that if you can actually aim to get it below 10 microns, which is often not an easy thing to do, then you are closer to that ideal basis. So controlling particle size is something that's very important. But more than that, all components in the mixture should have very similar particle size distributions, if you can manage that. So not only is the control of particle size important; all the components in the mixture should have very similar particle size distributions. And this is due to a phenomenon called micro absorption. Now we haven't discussed that yet; that's more of an advanced topic in powder diffraction. But essentially, when you have mixtures of materials with very different particle sizes, particularly if they have different mass absorption coefficients, it can dramatically affect the reproducibility and the accuracy of the peak intensities that you measure. So to control for micro absorption, even for organics, it's really important that all components have very similar particle size distributions.

    Another consideration is that all components should be homogeneously mixed. And this is a really important consideration. For example, the sample holders which I'd recommend you use are the low background silicon holders with a small, narrow-depth well in them, typically holding about 20 mg of material. So when you're looking at mixtures or large amounts of material, you probably want to take multiple aliquots. So it's important, when you're taking multiple aliquots, that everything is homogeneously mixed. Or if you make one sample, you're just pulling out 20 mg from a much larger bulk. You're not going to get representative results from a mixture unless it's homogeneously mixed at that 20 mg level. So this is another major requirement for ideality.

    The fourth requirement for ideality is that no component has a well-defined habit. And this is to do with preferred orientation. We want none of the components in our mixtures to have well-defined habits. We don't want large needles or plates in the mixture, because that will give us a completely non-ideal system that will vary depending on how the sample is made, who made the sample, etcetera. So the fourth requirement is that no component should have a well-defined habit.

    And the final requirement might surprise you. And that is powder flow, or powder handling. There must be flow in the powder; you should be able to transfer it from weighing vessels into the sample cups, to reproducibly make different samples and to control the mass. You need good flow and handleability. Now, when you look at these requirements, you realize they are very, very similar to the requirements you might go through in the formulation process to make an ideal drug product powder before going into the tablet press, coating, etcetera. So what we can conclude from these requirements is that, most of the time, drug product powder is close to ideal. And for most of the drug products that I've worked with to develop quantitative methods, most of the time the drug product powder itself is really close to ideal for getting very reproducible X-ray diffraction patterns.

    We'll talk a couple of slides further along about what we mean by that, the impact of that ideality and how it affects the X-ray diffraction measurements themselves. So the drug product powder, which is what we're going to be analyzing in reality when we're actually running the method for lot release, is close to ideal. These materials have an ideal matrix most of the time.

    Now let's go and consider the standards that we make in the lab. So these are the standards we're going to use to calibrate our method. And I typically call these boutique standards. So what can we say about the boutique standards that you make, you know, on the bench in your lab, to calibrate this very important lot release quantitative method?

    Well, first of all, when it comes to particle size, it's highly unlikely that you've controlled particle size, at least to get it below 40 microns, and definitely not below 10 microns. That rarely happens on the bench when you're trying to make material. You could try to sieve material, but when you sieve it, you lose a lot of the fraction and you don't really know what you're throwing away. So I'd say no; for most boutique materials we don't control particle size when we make the standards. And therefore, by definition, all components definitely do not have the same particle size distribution, or even similar particle size distributions. Which means we have the challenging issue of micro absorption coming into these standards as we mix them. And are they homogeneously mixed? I would say, again, that rarely is the case. You know, you can imagine someone on the bench stirring something with a spatula; that's not going to homogeneously mix the material. And I've even tried mixing on the bench using Turbula mixers and all sorts of other automated mixing devices. And very often I find it actually causes further phase segregation, or further segregation of the components. So I'd say no; they are definitely not homogeneously mixed at all.

    And very often, you know, the API as received for the analyte has been grown in the lab. It has relatively large crystals and well-defined habits. So no; we don't control for habit, we don't typically do habit control on the standards that we mix. And very often they flow very badly. So as you're weighing, you're mixing, you're transferring them to different sample boats, very often you're losing different fractions of the material each time. So, I mean, who knows what you end up with at the end of the day, after going through all of this boutique playing around. So I would say boutique standards are essentially far from ideal. In fact, I would say you can't even really call them standards.

    And, you know, in my experience, the standards that we require to calibrate our methods are often by far the largest source of error in the method development that we put together. So, you know, faced with this challenging question, and also with the realization that to do quantitative methods we typically are required to use some form of standard, how do we cope with this? How do we determine the magnitude of the error that we're introducing by making these boutique standards?

    So first of all, before we consider alternative techniques, let's think about, you know, sources of error and what they mean for our powder pattern, and how we can tell when we've got very non-ideal standards that we've mixed to calibrate our method. So we'll go on to slide three, or section three, which is sources of error. And this really is: how do we see those errors in the powder pattern, and how can we extract them within our method development to understand the independent sources of error that we may have? And, you know, my point zero, and I mention this pretty much every time I talk about making samples for quantitative methods, is control for mass. What I mean by that is, if you're using the same sample holder, let's say these low background silicon holders with a narrow-depth well, you know they should typically hold round about 20 mg of free-flowing organic powder. So that becomes a requirement. Every sample that you make, let's say, should be between 18 and 19 mg of material packed within that sample holder. So that's what I mean by controlling for mass. You don't just throw the sample in and squish it and then go on your merry way. You control for mass. You make sure you have a very similar amount of material for each of the analyses that you're doing.

    Then, how do we begin to tease apart all these different sources of error to get a handle on the ideality of our standards? Well, the first thing that I recommend is that you take a single aliquot, so the same aliquot, which again would be around about 20 mg, and you measure that multiple times. So that's multiple analyses: we take a single aliquot and we collect the data multiple times. So what would we expect to see in our data when we do that? So if this is two theta, the diffraction variable, and this is intensity, when we run it through the first time we'll have a diffraction peak. So you pick one peak, or you can pick multiple peaks really, but pick something that's very characteristic. And as we run it through multiple times, we're going to have slight differences and variance between the diffraction peaks that you've chosen to characterize. So for this particular peak, we're going to have a standard deviation that's introduced across all these multiple measurements, and we have a mean intensity, so we'll call that I zero, let's say. So we have a typical peak intensity across all these measurements, and we have a standard deviation.

    Now, for X-ray diffraction, the statistics are controlled by Poisson statistics. So, if you have a relatively intense peak, you can say that the standard deviation is essentially the square root of the mean intensity that you found through your multiple measurements. And this is really a good way to test whether your X-ray instrument is functioning correctly. You can put in a single sample, measure it multiple times, look at the variance in the peak height and make sure that the standard deviation, so the square root of the variance, is approximately given by the square root of the mean intensity that you've determined. And again, that's Poisson statistics. And of course we're assuming that we've got high count rates, so we can approximate it by a normal distribution to some extent. So we have our first standard deviation that comes out from that, which is going to be the square root of the mean peak height intensity. Then we can play around with a subset of that: we can take the same aliquot and we can measure it in multiple positions on the sample changer, let's say. Depending on the sort of sample changer that you might be using, you can load the sample at different points on it, load it in, collect the data, and see how your variance changes across multiple positions of the sample changer. And the variance change should be very small, typically smaller than the Poisson statistics. So that's another way of testing instrument performance and how the machine is functioning. We'll get another standard deviation, which in this case I was going to call P for positions; actually, no, let's call it something different. I'll just call it L for the different sample loadings.
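    As a rough illustration of that repeat-measurement check, here is a minimal Python sketch. The peak intensities are hypothetical placeholder numbers; substitute the peak heights you actually measure from repeated runs of the single aliquot.

        import numpy as np

        # Hypothetical peak heights (counts) from measuring the SAME aliquot several times.
        peak_intensities = np.array([10250, 10110, 10432, 10298, 10190, 10365])

        mean_I = peak_intensities.mean()            # I0, the mean peak intensity
        observed_sd = peak_intensities.std(ddof=1)  # sample standard deviation
        poisson_sd = np.sqrt(mean_I)                # expected SD from counting statistics alone

        print(f"Mean intensity I0:     {mean_I:.0f} counts")
        print(f"Observed SD:           {observed_sd:.1f} counts")
        print(f"Poisson SD, sqrt(I0):  {poisson_sd:.1f} counts")

        # If the observed SD is much larger than sqrt(I0), something beyond
        # counting statistics (instrument, loading, sample) is contributing.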

    Now, another study that you can perform is the same aliquot, and this one is a really important one, but multiple preps. And in particular, asking different people to do the preparation. So all the analysts that you're going to be using to run the method in reality should be involved in the method development. And you ask them to prep this single aliquot, so you can see what the impact is of the individual making the sample. So this is multiple preparations, and this one I'll call P. And then the final one is multiple aliquots. So in this case we're taking multiple aliquots from our single mixture, or the single standard that we may have made. And we have a standard deviation on that as well, which we'll call M. And this is really our direct probe of ideality: what is the additional variance in the data that's introduced by taking multiple aliquots, over and above these other variances that we have? So the first two are essentially the instrument variance, the next is the impact of the analyst making multiple samples, with different analysts involved, and then this one is the impact of taking multiple aliquots from the single sample. So for a first pass, this is our probe of the non-ideality of the mixtures we made, and we can compare that to the drug product matrix itself. Now, I'm sure you've seen this in all sorts of statistics textbooks: if you have independent sources of error that are identically distributed and random, then you can combine them together to calculate averages and combined standard deviations. If this applied, then the total standard deviation that we have for our method is very simple to express. It is just a combination, in quadrature, of all of these individual components together, giving us a total effective standard deviation for the method itself.
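    If those four sources really were independent and identically behaved, the combination would just be a quadrature sum. A minimal sketch, with hypothetical sigma values already expressed in the units of the calibration response:

        import numpy as np

        # Hypothetical standard deviations from the four studies described above,
        # all expressed in the same (normalized) response units; substitute the
        # values you extract from your own repeat, loading, prep and aliquot studies.
        sigma_poisson = 0.02   # single aliquot, repeated measurement (counting statistics)
        sigma_L = 0.01         # same aliquot, different sample-changer loadings
        sigma_P = 0.08         # same aliquot, multiple preparations / analysts
        sigma_M = 0.30         # multiple aliquots from one mixture (the ideality probe)

        # Quadrature combination, valid only for independent, identically
        # distributed random errors (an assumption questioned further below):
        sigma_total = np.sqrt(sigma_poisson**2 + sigma_L**2 + sigma_P**2 + sigma_M**2)
        print(f"Combined effective sigma: {sigma_total:.3f}")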

    And again, in my experience, the impact of the non-ideality of standards is by far and away the largest error that you have to deal with. And that directly feeds through to the limit of detection. So assuming a slope of one, and we'll talk later on about how to develop quantitative methods with a slope of one, the limit of detection is about 3.3 sigma; that's this sigma over here. So the limits of detection in your method are directly impacted by the ideality, or non-ideality, of the standards you're using in the method itself. Now, it turns out that we can't really apply this. If you try to work out mean intensities, mean responses, over all these different sources of error, you run into something called Simpson's paradox, which means you get the incorrect answer, because these are not independent, identically distributed sources of error.
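    As a side note on that LOD arithmetic, and before the example of why the pooled sigma can mislead, here is the calculation continued from the hypothetical numbers in the sketch above:

        # Continuation of the sketch above; a slope of one is an assumption discussed later,
        # so the combined sigma reads directly in concentration units (e.g. % w/w).
        slope = 1.0
        LOD = 3.3 * sigma_total / slope
        print(f"Estimated LOD: {LOD:.2f} % w/w")   # roughly 1% w/w for the numbers above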

    So, for example, let's consider this source of error here, which is the preparation, or the analysts themselves who are involved in preparing the standards. And let's say you have three analysts; you know, these sorts of results are quite typical. So this is the mean intensity you get from the different analysts, Analyst 1, 2 and 3, and this is the probability distribution. You often find that the error introduced by the analysts as the sample is made is multimodal, and it's often closely related to the number of analysts involved in developing the method or who are going to be running the method. So clearly this type of error, this multimodal distribution, is completely different from the Poisson statistics; they are very different distributions, very different sources of error. And so, you know, it actually is problematic to combine them all together to get a single average result and a single variance or standard deviation across those. And in fact this one is often impacted by the non-ideality of the material as well. If you have very ideal standards or mixtures, then it doesn't really depend too much on who makes them; you get very similar results.
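    To see why naive pooling is dangerous here, a toy simulation (entirely made-up numbers, not data from any real method) of three analysts whose sample handling shifts their mean response:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical offsets: each analyst's preparation shifts the mean response,
        # while the spread within any one analyst stays close to counting statistics.
        analyst_means = {"Analyst 1": 0.95, "Analyst 2": 1.00, "Analyst 3": 1.12}
        within_sd = 0.02

        responses = {name: rng.normal(mu, within_sd, size=10)
                     for name, mu in analyst_means.items()}
        pooled = np.concatenate(list(responses.values()))

        for name, r in responses.items():
            print(f"{name}: mean = {r.mean():.3f}, sd = {r.std(ddof=1):.3f}")
        print(f"Pooled:    mean = {pooled.mean():.3f}, sd = {pooled.std(ddof=1):.3f}")

        # The pooled SD is dominated by the between-analyst offsets, not by the
        # within-analyst spread; a single mean and SD hides the multimodal structure.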

    But as the non-ideality gets bigger, for example with preferred orientation, micro absorption and inhomogeneity becoming bigger and bigger sources of error, the impact from different analysts typically begins to spread further and further apart. And the multimodal nature of that becomes really quite extreme. Now, what we can do is run the same study for the drug products that we're going to analyze as well. And I will say again, if you have questions, just put them in the chat, or if they come to you after the fact, please just email me or post them on LinkedIn and I'll get back to you. But as I was saying, we can run the same analysis on the drug product material that we're going to be running the release method on. We can certainly now control for mass. We can take a single aliquot from the drug product, we can analyze it multiple times at different positions, we can get different analysts to prepare it, and we can take multiple aliquots. So we can also determine these sources of error and variability directly from the drug product itself. And we can estimate how that would actually impact the LODs. If you do that study, I think you'll find that the ideality of a drug product is orders of magnitude better than any standard that you can mix in the lab itself.

    So, having said that, what can we do about it? Well, we can go through this sort of statistical analysis to identify sources of error, and then you can follow that through into your calibration line, effectively removing sources of error due to non-ideality and the impact of individual analysts. So you can sort of back that out and get a more ideal calibration curve and a more ideal LOD, which you can then apply to the real drug product itself. But there are such things as standardless methods, and there are alternate approaches to making mixtures. And the last one I'll leave as a quiz for you to get back to me with what you think the answer is.

    So let's have a look at slide four, which is standardless methods. And this really is the first response: if the boutique standards we're making in the lab are really problematic and they're not helping us at all in developing a reliable quantitative method, why don't we just dispense with them and run with standardless methods? Now, currently that's not supported, but there are a number of standardless methods that can be applied. One of these is Rietveld, which I'm sure you're all familiar with. But, you know, in my experience running Rietveld and trying to design a method for quantitative analysis, if you're not careful, Rietveld blows up, particularly for very low levels of material. As you start to approach the LOD, your sources of error become extreme and the whole method tends to blow up. So there are ways of controlling that. If you're going to take that approach, I would recommend, one, that you use a fundamental parameters method inside of Rietveld. That effectively controls the peak shape and width, so they become constrained variables within the method itself. And the second thing I'd recommend is that you definitely make it highly constrained. There are so many variables that you can turn on in a Rietveld method; if you just go through blindly, you'll have hundreds of variables that are active, which is really the reason why it diverges when you get down to very low levels of material.

    An alternate approach, which is definitely my preferred approach, is component analysis. And when I talk about component analysis, I'm not really talking about, you know, the traditional methods of PCA or PLS; although it's similar to those, it's also somewhat different in how we implement it for X-ray diffraction measurements. So to do that, you would take multiple batches of material. So for the particular drug product you might be working on, if you have multiple batches, which often isn't the case, sometimes you have one batch to start with, which is definitely problematic. But if you have multiple batches, then you can use the variance between those batches. So you essentially use the variance, which is similar to PCA and PLS, but from the variance you extract out real components. And that is really the difference compared to the more traditional methods, which typically stay in variance space. You're using the variance between the measurements on multiple batches to extract out real diffraction components. And the reason for doing that is that now we can apply Bayesian analysis, because there are a number of things we know about real diffraction components that we can apply as Bayesian constraints; for example, real measured diffraction patterns are always positive. You don't see negative-going peaks or negative data in a measured pattern. So we add Bayesian constraints to the real components, and then the final part is that we apply Vainstein's law, which I'm not sure I can spell. What Vainstein's law essentially says is that for the same elemental composition you can normalize to equal area, and for different elemental compositions you take into account the change in electron density between the materials and then normalize the area. So these are traditional chemometric normalization approaches with a slight twist for X-ray diffraction data, and in fact we're dealing with real components. So the combination of all of these gives an extraordinarily powerful standardless method.
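    To give a rough flavor of positivity-constrained component extraction, here is a generic sketch using non-negative matrix factorization. This is not Rigaku's DD algorithm, just an illustration of the idea; the pattern array is a random placeholder, and the area normalization at the end is only a crude stand-in for the composition-aware, Vainstein-style step described above.

        import numpy as np
        from sklearn.decomposition import NMF

        # X: one background-subtracted powder pattern per batch
        # (n_batches x n_2theta_points); intensities must be non-negative.
        # The random array here is only a placeholder for real measured patterns.
        X = np.abs(np.random.default_rng(1).normal(size=(8, 2000)))

        # Positivity-constrained decomposition: W holds per-batch weights, H holds
        # non-negative component "patterns", enforcing the physical constraint that
        # measured diffraction data never go negative.
        model = NMF(n_components=2, init="nndsvda", max_iter=2000)
        W = model.fit_transform(X)   # (n_batches x n_components)
        H = model.components_        # (n_components x n_2theta_points)

        # Crude area-based normalization of the weights into per-batch fractions;
        # a real diffraction implementation applies composition-aware scaling.
        areas = W * H.sum(axis=1)
        fractions = areas / areas.sum(axis=1, keepdims=True)
        print(fractions)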

    Nevertheless, it's still based on the actual real batches themselves. So it's limited by the number of batches that you have. Some clients, when you're working with them, will actually run multiple smaller scale batches and even deliberately vary some of the components to give you better access, you know, to extracting out the appropriate components for the quantitative analysis. So I think this is a very robust way to move forward for the future, probably more advantageous than using Rietveld-type approaches. And at Rigaku, this is something that we're heavily invested in. We have, you know, our own component analysis method that we call DD, for Direct Derivation. Not the most inspiring name, I must admit, but it's an extraordinarily powerful package. It includes some of these constraints within the modeling, and I've had really amazing quantitative results with it. You know, I should mention again that Rietveld is based on crystal structures. If you're dealing with 100% crystalline formulations, that's fine, it works great. Or if you have some amorphous excipient that you can just ignore, then that can also work pretty well. But if you're dealing with amorphous components, meso-phase components, things you can't model well from a crystal structure, then this type of component analysis is indispensable. And I've used it quite extensively to look at, you know, changes in amorphous components within drug products, and gotten incredibly accurate results.

    And the final thing before I sign off for the day: I'm going to leave you with a conundrum, just to investigate, you know, some other potential alternatives. And I'd like you to get back to me after this and let me know whether you think this works or it doesn't work, and the reasons why. So let's take an example. Let's say we have a single analyte, a nice problem, which I'll call A, and we want to be able to develop an LOD. I want to make a sample to test its LOD at, say, 2% maybe, or something; let's say 1%. So we want to develop an LOD at 1% for the single analyte. But we've got a constraint. The GMP balance that we're using has a limit; the LOQ on the GMP balance is, let's say, two mg. And these are practical constraints when you're in a GMP lab, when you're trying to mix standard materials. So you want to make a sample for an LOD test at 1%. Your LOQ on the GMP balance is two mg. That means that when you combine the analyte with the matrix, the total has got to be at about 200 mg in mass. That's just constrained by the LOQ of the GMP balance itself. And if our sample holders hold 20 mg, then straight away, you know that you need 10 aliquots of material.
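    The arithmetic behind that aliquot count, spelled out with the numbers from the example:

        target_lod_fraction = 0.01    # 1% w/w analyte
        balance_loq_mg = 2.0          # smallest mass reliably weighed on the GMP balance
        holder_capacity_mg = 20.0     # low background silicon holder, roughly 20 mg well

        total_mixture_mg = balance_loq_mg / target_lod_fraction   # 200 mg of mixture needed
        n_aliquots = total_mixture_mg / holder_capacity_mg        # 10 holders' worth
        print(total_mixture_mg, n_aliquots)                       # 200.0 10.0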

    Now, in the traditional approach, you know you're handling small amounts of material, two mg, or your balance may even go lower than that. So you're dealing with tiny amounts of material, you're trying to mix them together, you've got weighing losses, you've got mixing problems, and as you transfer them between weighing papers and mixing boats you're going to lose material at each step. So once you've done all of this, at the end of the day you have really no idea what your mixture actually is before you start your method. So often I find, when you're faced with a very complex problem, one way to help solve it is to go to extreme examples. So in this extreme example, this is the proposal, the hypothesis, right? We take a single sample that's 100% analyte. It weighs about 20 mg, it's easy to weigh and transfer, no mixing issues, no nothing. We just put 20 mg of material in the holder, done. And let's say we make four of those and we collect data on them. So we do multiple preps; you can collect as many as you want. It's just to generate random statistics, really.

    And then the second one that we do is 100% matrix. And let's say we can collect any number of these; we'll collect 20, we make 20 preps of the 100% matrix, just to make sure that we get a good handle on the random statistics involved, a good idea of the errors that we're dealing with. Now, in principle, we can do the mixing offline using a mathematical approach, where we take these data sets, these four and these 20, and we randomly combine them together to try and reproduce as much as we can of the error in the method, to give us the concentrations that we need. And then we can develop a calibration line from that. So, for example, if I take, you know, one analyte data set plus 19 matrix data sets, assuming that they're all about 20 mg of material, then that is a 5% mixture. And I can do that mathematically: combine them, add them together. So this is taking the mixture problem really to an extreme, and assuming that the things don't mix at all and have all these problems. This gets rid of almost all the experimental problems, but it's based on the caveat that we can combine them together mathematically.
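    A minimal sketch of that offline, mathematical mixing, under the stated assumption that every prep is about the same 20 mg mass. The file names and array shapes are placeholders for the pure-analyte and pure-matrix data sets described above, and this sketches the approach as outlined, which, as noted next, still needs a modification to work well.

        import numpy as np

        # Placeholder files holding the pure-phase preps on a common 2-theta grid:
        # analyte_patterns ~ (4, n_points), matrix_patterns ~ (20, n_points).
        analyte_patterns = np.load("analyte_preps.npy")
        matrix_patterns = np.load("matrix_preps.npy")

        rng = np.random.default_rng(7)

        def synthetic_mixture(analyte_fraction, n_total=20):
            """Sum randomly chosen pure-phase preps into one synthetic pattern.

            With equal aliquot masses, k analyte preps plus (n_total - k) matrix
            preps approximates a k/n_total w/w mixture (e.g. 1 + 19 -> 5%).
            """
            k = round(analyte_fraction * n_total)
            a = analyte_patterns[rng.integers(0, len(analyte_patterns), size=k)]
            m = matrix_patterns[rng.integers(0, len(matrix_patterns), size=n_total - k)]
            return np.vstack([a, m]).sum(axis=0) / n_total

        pattern_5pct = synthetic_mixture(0.05)   # the "1 analyte + 19 matrix" example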

    So it turns out this approach as outlined here is not a good one, but a slight modification to it actually gives a very, very robust and powerful way of making mixtures and avoiding a lot of the experimental error. So I'd like you to give some thought to that and send me an email letting me know what your thoughts are, what you think the modification might be that would get this to work very well, and why you think it doesn't work the way it is currently outlined here.

    Well, thank you all again. I'll sign off now and we'll see you at the next Pharma Lab Show, which I think is going to be in two weeks this time. So thank you all for attending. And I don't see any questions in the Q&A, so we'll sign off and say bye for now. Until next time.

    The challenge of making mixture standards for solid state analytical methods - different approaches and how to evaluate them.

    In this episode of The Pharma Lab Show Live, we discuss:

    • Requirements for the ideal solid state mixture standards
    • Laboratory-mixed standards are usually the largest source of error in a method
    • How to evaluate sources of error from measurement apparatus and sample 
    • Standardless methods and component analysis.

     

    Catch up on episodes of The Pharma Lab Show on our Learning Center

    Learn more about Rigaku's Direct Derivation on the SmartLab Studio II software page

    Watch The Pharma Lab Show on select Fridays
    For more insights into the pharmaceutical industry, subscribe to the podcast on Apple Podcasts, Spotify, or wherever podcasts are found.
    Check out the Rigaku Pharmaceutical Technologies Showcase for more information on Rigaku's commitment to the pharmaceutical industry

    Simon Bates, Ph. D.
    Simon Bates serves customers as the VP of Science and Technology with Rigaku Americas. Simon Bates received his PhD in Applied Physics from the University of Hull, utilizing neutron diffraction to study the magnetic properties of rare earth materials. The neutron diffraction work was performed at the Institut Laue-Langevin in Grenoble. For his postdoctoral work in the Department of Physics at the University of Edinburgh, Simon helped design and build high-resolution triple axis X-ray diffraction systems for the study of solid-state phase transformations. Simon continued his work on high-resolution X-ray diffraction systems at both Philips NV and Bede Scientific, where he focused on the development of X-ray diffraction and X-ray reflectivity methods for the measurement and modeling of advanced materials. Before moving to Rigaku, Simon spent 15 years working in contract research organizations (SSCI and Triclinic Labs) studying solid state pharmaceutical materials. In particular, he was directly involved in the development of advanced characterization methods for formulated pharmaceutical products based on the analysis of structure (crystalline, non-crystalline, meso-phase, polymorph, salt, co-crystal...), microstructure (texture, strain, crystal size, habit...) and their functional relationships in the solid state. Simon also holds an appointment as an Adjunct Professor at LIU in the Division of Pharmaceutical Sciences, where he helps teach a graduate course on solid state materials analysis.