By Krish Krishnamurthy

Signal-to-noise, sensitivity, and detectability are important criteria for accuracy and reproducibility in quantitative measurement with any analytical tool. Reports exploiting the quantitative nature of NMR appeared as early as 1953, and quantitation has remained one of the core applications of NMR spectroscopy. Several strategies and tools, such as simple integration, resonance deconvolution, spectral matching, spectral simulation, and time-domain (FID) data decimation, have been developed, reported, and validated. Increasing the S/N, i.e., decreasing the uncertainty in the signal (that is, in the data that describes the signal), is by far the most coveted need in qNMR. The phrase “minimum required signal-to-noise” is itself debatable, since it depends on how one measures that signal-to-noise. Historically, measuring the height of a signal in the spectrum against the rms noise has been widely accepted. The definition of signal-to-noise in the time domain (FID) adds yet another twist to this debate: the FID is, by its very definition, a signal that decays as a function of time (acquisition time), while it is not unreasonable to assume that the random noise remains constant over the same period. In this presentation we will explore how this distinction can be exploited to improve the accuracy and reproducibility of quantitative measurement compared with conventional processing. Practical examples from simple quantitative measurements as well as from time-course studies (such as reaction monitoring) will be presented.
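To make the two S/N definitions in the abstract concrete, here is a minimal Python/NumPy sketch on synthetic data. All parameter values (T2*, offset, noise level, choice of noise region) are illustrative assumptions, not values from the presentation. It computes the conventional frequency-domain S/N as peak height over rms noise, and then shows that the per-point S/N of a decaying FID falls off with acquisition time while the noise level stays constant.

```python
import numpy as np

# --- Synthetic FID: one exponentially decaying resonance plus white noise ---
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)
npts, sw = 4096, 2000.0             # number of points, spectral width (Hz)
t = np.arange(npts) / sw            # acquisition time axis (s)
t2, freq, sigma = 0.5, 250.0, 0.05  # T2* (s), offset (Hz), noise std dev

fid = np.exp(2j * np.pi * freq * t) * np.exp(-t / t2)  # decaying signal
fid += sigma * (rng.standard_normal(npts) + 1j * rng.standard_normal(npts))

# --- Conventional frequency-domain S/N: peak height vs rms of a noise region ---
spec = np.fft.fftshift(np.fft.fft(fid)).real
peak_height = spec.max()
noise_rms = spec[:npts // 8].std()  # signal-free edge of the spectrum
print(f"spectral S/N (peak height / rms noise): {peak_height / noise_rms:.1f}")

# --- Time-domain view: the signal envelope decays, the noise level does not,
# so the per-point S/N of the FID drops steadily with acquisition time.
envelope = np.exp(-t / t2)          # ideal signal amplitude vs time
for frac in (0.0, 0.25, 0.5, 0.75):
    i = int(frac * npts)
    print(f"t = {t[i]:.2f} s: per-point FID S/N ~ {envelope[i] / sigma:.1f}")
```

The contrast the sketch prints, a single spectral S/N figure versus a time-dependent FID S/N, is the distinction the abstract highlights: early FID points describe the signal with much less relative uncertainty than the tail.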

Session #11: Bayes’ed and (Un)Confused: Using Bayesian Statistics and Prior Knowledge to Understand NMR Data and Make Decisions