Most of NMR has finally gone totally digital and now routinely resorts to high-frequency ADC sampling at rates of several tens of MHz, with a tendency towards ever higher sampling frequencies. Such drastic oversampling has clear advantages in simplifying the front-end electronics. It also has many side benefits due to the fact that the subsequent down-conversion to the low-frequency range is done digitally and is therefore theoretically artefact-free. This includes perfect, calibration-free quadrature detection and a drastic reduction of quantization noise. By the way, such techniques are routine in other areas of electronics (military, astronomy, audio, …); NMR is just a latecomer to this world.
High-frequency (HF) oversampling, however, also has some problems of its own. The digital decimation from the HF range to the audio range and the accompanying digital filtering (a combination of CIC and FIR filters) need to be properly implemented in the hardware in order to be completely transparent to the user.
In the case of Bruker spectra, however, a dead time or group delay can be observed in the FID: it begins with very small values, and only after a number of points (usually 60-80) does the normal FID start.
If a plain FT is applied to this FID, we get a spectrum with a lot of baseline wiggles, analogous to a convolution with a sinc function centred in the middle of the spectral window. This can be explained by recalling the time-shift theorem of the Fourier Transform: if the time-domain signal is shifted by n points, the frequency-domain spectrum corresponds to the standard spectrum (of the unshifted FID) multiplied by the linear phase factor exp(-i*2*pi*k*n/N), where k is the frequency index and N the number of points. In other words, we have introduced a very large first-order phase error in the spectrum. For example, if the FID is right-shifted by 60 points (dead time = 60 points), the frequency-domain spectrum will exhibit a first-order phase distortion of 60 * 360 = 21600 degrees.
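A minimal numpy sketch of this effect (the signal parameters are invented purely for illustration): right-shifting a synthetic FID by n points multiplies its spectrum by exactly this linear phase ramp, and multiplying by the conjugate ramp restores it.

```python
import numpy as np

N, n_shift = 4096, 60                      # points and group delay in points
t = np.arange(N)
fid = np.exp(2j * np.pi * 0.05 * t) * np.exp(-t / 500.0)   # one decaying line

# right-shift the FID by n_shift points (the "group delay")
shifted = np.concatenate([np.zeros(n_shift, dtype=complex), fid[:N - n_shift]])

spec_ok = np.fft.fft(fid)
spec_bad = np.fft.fft(shifted)             # carries a linear phase ramp

# first-order phase correction: undo exp(-i*2*pi*k*n/N) bin by bin
k = np.arange(N)
spec_fixed = spec_bad * np.exp(2j * np.pi * k * n_shift / N)

# the corrected spectrum agrees with the unshifted one (up to the truncated tail)
print(np.max(np.abs(spec_fixed - spec_ok)) / np.max(np.abs(spec_ok)))
```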
In order to work around this problem, most NMR software packages read the decimation factor (and the DSP firmware version) from Bruker files and calculate the required phase correction. So far, so good.
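As a hedged sketch of how such a workaround might look, assuming the acqus file carries a ##$GRPDLY parameter (modern Bruker software writes one; older datasets need a lookup table indexed by DECIM and DSPFVS, which is not reproduced here), and assuming the sign and FFT-shift conventions match the processing package's own:

```python
import re
import numpy as np

def read_grpdly(acqus_path):
    """Return the group delay (in complex points) from a Bruker acqus file."""
    with open(acqus_path) as fh:
        for line in fh:
            m = re.match(r"##\$GRPDLY=\s*(-?[0-9.eE+]+)", line)
            if m:
                return float(m.group(1))
    raise ValueError("GRPDLY not found; fall back to a DECIM/DSPFVS lookup table")

def correct_group_delay(fid, grpdly):
    """FT the FID and undo the group delay as a first-order phase ramp."""
    spec = np.fft.fft(fid)
    k = np.arange(fid.size)
    return spec * np.exp(2j * np.pi * k * grpdly / fid.size)
```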
However, the fact remains that Bruker FIDs are not time-corrected. Evidently, Varian also uses oversampling and digital filtering, yet their FIDs are time-corrected, that is, they start at time = 0. If the digital filter is known in advance, which is always the case, the group delay should be compensated in the spectrometer; therefore, in my opinion, this dead time or group delay is a bug in the spectrometer. For this reason, and going back to the title of this article, any input as to why Bruker FIDs are not time-corrected would be greatly appreciated.
1 comment:
As I see it, there is something fishy with the Bruker hardware/firmware.
The delays appear to be related to the way the signals are filtered in the digital receiver. I encountered essentially the same problem about two years ago when I was still connected with the development of the Stelar NMR board [1,2]. We just ran into the delays because we did not think enough beforehand, but at least we analyzed them and resolved the impasse. I should probably write an article about digital receivers and about the many engineering options one has to face when developing one. The fact is that modern digital receivers are still far from a settled and well-researched piece of hardware, especially when it comes to NMR with its extremely broad ideal requirements in terms of freely settable spectral-window center frequencies (0.1-1000 MHz) and widths (50 Hz - 10 MHz). Moreover, the area is in fast evolution and what can be done today was unthinkable five years ago, when we started the SpinWhip development and Bruker started their "2nd generation" hardware.
At present, any particular digital receiver circuitry is buried inside FPGA firmware and there is no way to get hold of it - it looks and feels exactly like a piece of software source code, has the same complexity, and is prone to the same kinds of bugs and algorithmic weaknesses. The only difference is that reverse engineering is not just difficult - it is impossible. So unless the Bruker guys confess their sins, we will never know.
Typically, a digital receiver starts with a digital input from a continuously running high-speed ADC. A few years ago, Bruker was sampling at 20 MHz, Varian at 80 MHz, and we at Stelar decided to go for 105 MHz (today, we could all easily go for 1 GHz and totally forget about any intermediate frequencies). This data flow then has to be down-converted by a digital version of a phase detector (this implies multiplication) and filtered by high-quality low-pass filters which, too, involve multiplication. Since multiplications are [still] too slow to be done at full speed, it is necessary to first reduce the data rate by means of a fast, but low-quality, pre-filter (typically a CIC) which also decimates the data rate. The final filters, however, should be high-quality, which implies a long chain of so-called taps (each tap does a multiplication by a settable coefficient and a sum). Typically, FIR filters are used since, unlike the IIR family, they guarantee a perfectly linear phase distortion which is easy to correct by standard phase-correction procedures. Of the FIRs (mathematically speaking, finite convolution filters) one usually prefers the so-called symmetric ones which are much more efficient and imply true interpolation rather than edge-extrapolation; the drawback is a large data latency (the time a datum spends somewhere in the pipeline). The optimization of the clock rates and types of the decimation and FIR filters and, especially, of their mutual balance is still a subject of heated discussions, since a rigorous mathematical analysis of the whole problem is missing. Consequently, Company secrets worth a broken nickel proliferate.
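To make the above chain concrete, here is a rough numpy/scipy sketch of such a pipeline - digital down-conversion, a crude CIC pre-filter/decimator, and a symmetric (linear-phase) FIR. All rates, filter orders and the test signal are invented for illustration only; a real FPGA design of course works in fixed-point integer arithmetic.

```python
import numpy as np
from scipy import signal

fs_adc = 105e6          # ADC rate (the Stelar figure quoted above)
f_if   = 20.0e6         # intermediate frequency to be down-converted
R_cic  = 64             # CIC decimation ratio
n_taps = 63             # symmetric FIR (odd number of taps)

t = np.arange(2**18) / fs_adc
adc = np.cos(2 * np.pi * (f_if + 5e3) * t)         # a line 5 kHz off-centre

# 1) digital phase detector: multiply by a complex NCO at the IF
nco = np.exp(-2j * np.pi * f_if * t)
baseband = adc * nco

# 2) fast, low-quality pre-filter: 3rd-order CIC, decimate by R_cic
def cic_decimate(x, R, order=3):
    y = x.astype(complex)
    for _ in range(order):                          # cascaded integrators
        y = np.cumsum(y)
    y = y[R - 1::R]                                 # decimation
    for _ in range(order):                          # cascaded combs
        y = np.diff(y, prepend=0.0 + 0.0j)
    return y / R**order                             # normalise the DC gain

stage1 = cic_decimate(baseband, R_cic)
fs1 = fs_adc / R_cic

# 3) final high-quality filter: symmetric (linear-phase) FIR, then a last decimation
b = signal.firwin(n_taps, cutoff=10e3, fs=fs1)
stage2 = signal.lfilter(b, [1.0], stage1)[::4]      # audio-rate output
```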
Whatever the digital receiver design, the fact is that before a valid datum comes out of the data-flow pipeline, it is no longer pertinent to the present time but, in a non-trivial way, to some past moment. The latency D depends strongly upon the adopted design and its parameters, in particular upon the setting of the decimation ratio, the number of FIR taps and the respective clocks. For any particular filter setting the designer - jointly with the control software programmer - can determine exactly the value of D. Since they are the only persons who can do this, they should make sure that when the data are stored in the acquisition or accumulation memory, they correspond as closely as possible to S(t) and not to S(t-D). In particular, the signal at index 0 should correspond as closely as possible to S(0); typically it corresponds to S(t0), where t0 lies somewhere in the interval [-dw,+dw], dw being the dwell time. To guarantee this, the designer should exert an extra effort in terms of implementing a proper dwell-time delay and/or proper memory indexing.
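A minimal sketch of the bookkeeping this implies, using the textbook group delays of a CIC (order N, ratio R, differential delay 1) and of a symmetric FIR, and reusing the illustrative parameter values of the sketch above:

```python
def pipeline_latency_points(R_cic, cic_order, n_taps, fir_decim):
    """Total pipeline group delay, expressed in output dwell times."""
    d_cic = cic_order * (R_cic - 1) / 2.0       # CIC delay, in ADC samples
    d_fir = (n_taps - 1) / 2.0                  # FIR delay, in CIC-output samples
    return (d_cic / R_cic + d_fir) / fir_decim  # converted to output dwell times

# With the example setting of the previous sketch:
D = pipeline_latency_points(R_cic=64, cic_order=3, n_taps=63, fir_decim=4)
print(D)   # ~8.1 output points: drop round(D) points and/or delay the start
           # so that the first stored point corresponds as closely as possible to S(0)
```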
The dirty way around the problem is not to care and to do the data shifting a posteriori by software. In such a case, however, the software has to implement a hardware-dependent formula (or a table) for establishing the value of D for various central frequencies (nuclei) and various filter settings - exactly what you encounter in the case of Bruker data. This is most unprofessional since it binds data-evaluation software to the specific hardware which acquired the data. Ideally, acquired data should have a clear physical meaning independent of the hardware used to collect them.
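For completeness, a hedged illustration of that "dirty" approach: the evaluation software carries a hardware-dependent table, looks up D from the filter setting stored with the data, and shifts the FID after the fact. The table entries and firmware labels below are placeholders, not real Bruker values.

```python
import numpy as np

# (decimation, dsp firmware) -> latency D in points; numbers invented here
LATENCY_TABLE = {
    (32, "fwA"): 71.5,
    (16, "fwA"): 70.0,
}

def shift_fid(fid, decim, firmware):
    """Crude a-posteriori fix: discard the first round(D) points."""
    D = LATENCY_TABLE[(decim, firmware)]
    n = int(round(D))
    # the fractional remainder (D - n) still has to be absorbed as a
    # first-order phase correction after the FT
    return np.asarray(fid)[n:]
```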
What puzzles me most, however, is that the apparent latency D in the examples you show is, in my opinion, much larger than what the above discussion can ever explain: since a symmetric FIR delays the data by roughly half its number of taps, it would require FIRs with around 200 taps, which is way too much by any standard. My opinion is that there is (or was, on some instruments) a fat bug somewhere in the Bruker hardware and/or control software and, instead of solving it, they have learned how to live with it. I know that this smacks of a foul accusation and it may make the Bruker guys angry at me. But I mean it well: if I am wrong, let them just come up with a better explanation so that we will no longer need to grope in the dark. If they do so, I will happily and publicly apologize and withdraw all my suspicions.
Stan Sykora