
cwepr.processing module

Module containing the processing steps of the cwEPR package.

A processing step always operates on a dataset and usually modifies the numerical data contained therein. The result of a processing step is always a dataset, in contrast to analysis steps, where this is not necessarily the case. Typical routine processing steps are normalisation (to area, amplitude, maximum, minimum), and for EPR spectroscopy such things as field and frequency correction.

Processing steps implemented

The processing steps implemented in this module can be separated into those specific for cw-EPR data and those that are generally applicable and were inherited from the ASpecD framework.

Processing steps specific for cw-EPR data

Currently, the following processing steps are implemented:

  • FieldCorrection
  • FrequencyCorrection
  • GAxisCreation
  • AxisInterpolation
  • Normalisation
  • NormalisationOfDerivativeToArea

Implemented but not working as they should:

  • AutomaticPhaseCorrection

General processing steps inherited from the ASpecD framework

Besides the processing steps specific for cw-EPR data, a number of further processing steps that are generally applicable to spectroscopic data have been inherited from the underlying ASpecD framework:

  • ScalarAlgebra

    Perform scalar algebraic operation on one dataset.

    Operations available: add, subtract, multiply, divide (by given scalar)

  • ScalarAxisAlgebra

    Perform scalar algebraic operation on axis values of a dataset.

    Operations available: add, subtract, multiply, divide, power (by given scalar)

  • DatasetAlgebra

    Perform scalar algebraic operation on two datasets.

    Operations available: add, subtract

  • Projection

    Project data, i.e. reduce dimensions along one axis.

  • SliceExtraction

    Extract slice along one or more dimensions from dataset.

  • BaselineCorrection

    Correct baseline of dataset.

  • Averaging

    Average data over given range along given axis.

  • Filtering

    Filter data

Further processing steps implemented in the ASpecD framework can be used as well, by importing the respective modules. In case of recipe-driven data analysis, simply prefix the kind with aspecd:

- kind: aspecd.processing
  type: <ClassNameOfProcessingStep>

Categories of processing steps

Processing steps can be categorised further. The following is an attempt to do that for cwEPR data and at the same time a list of processing steps one would like to have implemented. Besides that, it seems that this list evolves more and more towards a summary of how to properly record and (post-)process cwEPR data.

For more authoritative answers, you may as well have a look into the EPR literature, particularly the “EPR Primer” by Chechik/Carter/Murphy [P:CCM16] and the book on quantitative EPR by the Eatons [P:EEBW10].

Corrections

When analysing cwEPR data, usually, a series of simple correction steps is performed prior to any further analysis. This is particularly important if you plan to compare different datasets or if you would like to compare your spectra with those from the literature (always a good idea, though).

  • Magnetic field correction

    Usually, the magnetic field in an EPR measurement needs to be determined by measuring a field standard in the identical setup, as the actual magnetic field at the sample will usually differ from the field set in the software.

Appropriate magnetic field correction becomes particularly important if you are interested in absolute g values of your sample, e.g. to compare them to literature data or quantum-chemical calculations and to get an idea of where the unpaired spin may predominantly reside (in terms of nuclear species).

  • Microwave frequency correction

Comparing datasets is only possible in a meaningful manner if they are either corrected to the same microwave frequency or their magnetic field axes are converted into a g axis.

  • Microwave phase correction

Usually, cwEPR spectra are not recorded with quadrature detection, i.e., with both absorptive and dispersive signal components. However, using the Hilbert transform, one can reconstruct the dispersive signal (imaginary component) and correct the phase of the microwave source this way.

  • Baseline correction

However carefully measurements are performed, baselines are quite often encountered. There are two different kinds of baseline that need to be corrected in different ways. Drifts of some kind can usually be handled by fitting a (low-order) polynomial to the data and afterwards subtracting it.

Particularly for low-temperature data, weak signals, and large magnetic field sweep ranges, the resonator background can become quite dramatic. Here, usually the only viable way is to record the empty resonator independently under conditions as identical as possible to those used for the actual sample (but with a slightly broader field range to compensate for the different microwave frequency) and afterwards subtract this dataset (empty resonator, i.e. resonator background signal) from the signal of the actual sample.
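
Such a chain of corrections might look as follows in a recipe. This is only a sketch: all parameter values are placeholders and depend on your actual data, and the field offset would typically be determined beforehand using a field standard.

- kind: processing
  type: FieldCorrection
  properties:
    parameters:
      offset: 0.35

- kind: processing
  type: FrequencyCorrection
  properties:
    parameters:
      frequency: 9.5
      kind: proportional

- kind: processing
  type: BaselineCorrection
  properties:
    parameters:
      order: 1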

Algebra

Comparing datasets often involves adding, subtracting, multiplying or dividing the intensity values by a given fixed number. Possible scenarios where one wants to multiply the intensity values of a cwEPR spectrum include comparing spectra resulting from a single species with those known to contain two species, accounting for different (known) concentrations, and the like.

Of course, dividing the intensity of the spectrum by its maximum intensity is another option. However, this would be a normalisation to maximum (not always a good idea; usually normalising to area or amplitude is better), and this is handled by a different set of processing steps (see below).

This type of simple algebra is quite different from adding or subtracting datasets. Whereas simple algebra really is a one-liner in terms of implementation, handling different datasets involves ensuring commensurable axis dimensions and ranges, to say the least. Dataset algebra is available in this module as well.

Normalisation

Normalising data to some common characteristic is a prerequisite for comparing datasets among each other.

There are a number of normalisations that are common for nearly every kind of data, and as such, these normalisation steps should probably eventually be implemented within the ASpecD framework. These are:

  • Normalisation to maximum

    Simply divide the intensity values by their maximum

Often used as a very simple “normalisation” approach. Whether it is appropriate depends highly on the situation and the focus of the representation, but usually, other methods such as normalisation to amplitude or area are better suited.

  • Normalisation to minimum

    Simply divide the intensity values by their minimum

    The same as for the normalisation to maximum applies here. Furthermore, normalising to the minimum usually only makes sense in case of prominent negative signal components, as in first-derivative spectra in cwEPR spectroscopy.

  • Normalisation to amplitude

Divide the intensity values by the absolute value of the difference between maximum and minimum intensity value

    Usually better suited as a simple normalisation than the naive normalising to maximum or minimum described above. However, it strongly depends on what you are interested in comparing and want to highlight.

  • Normalisation to area

    Divide the intensity values by the area under the curve of the spectrum

Not as easy as it looks for first-derivative cwEPR spectra, as here, you are usually interested in normalising to the same area (i.e., integral of the curve) of the absorptive (zeroth-derivative or zeroth-harmonic) spectrum.

    At least given appropriate measurement conditions (no saturation, no line broadening due to overmodulation, proper phasing), the cwEPR signal intensity should be proportional to the number of spins in the active volume of the resonator/probehead. Therefore, with all crucial experimental parameters directly affecting the signal strength being equal (microwave power, modulation amplitude), normalising to same area should be the most straight-forward way of comparing two spectra in a meaningful way.

    Bear in mind, however, that spectra with strongly different overall line width will have dramatically different minima and maxima, making comparison of this kind sometimes less meaningful.
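
As a sketch in recipe form, normalising a dataset to its area might look as follows, using the Normalisation step described in the module documentation below (the kind value stems from the underlying ASpecD class; for first-derivative cwEPR spectra you will usually rather use the dedicated NormalisationOfDerivativeToArea step that integrates the data first):

- kind: processing
  type: Normalisation
  properties:
    parameters:
      kind: area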

Besides these rather general ways of normalising spectra (although described above particularly with cwEPR data in mind), there are some other normalisations more particular to cwEPR spectroscopy:

  • Normalisation to same number of scans

Some spectrometers (probably only older ones) usually summed the intensities of the individual scans, rather than dividing by the number of scans afterwards, making comparison of spectra with different numbers of scans quite tricky.

    Make sure you know exactly what you do before applying (or not applying) such normalisation if you would like to do some kind of (semi-)quantitative analysis of your data.

  • Normalisation to same receiver gain

The preamplifiers in the signal channel (as the digitising unit in cwEPR spectrometers is usually called) usually have a gain that can be adjusted to the signal strength of the actual sample. Of course, this setting will have a direct impact on the intensity values recorded (usually something like mV).

    Comparing spectra recorded with different receiver gain settings therefore requires the user to first normalise the data to the same receiver gain setting. Otherwise, (semi-)quantitative comparison is not possible and will lead to wrong conclusions.

A note on the side: Adjusting the receiver gain for each measurement is highly recommended, as setting it too high will make the signal clip and distort the signal shape, and setting it too low will result in data with (unnecessarily) poor signal-to-noise ratio.

Working with 2D datasets

2D datasets in cwEPR spectroscopy, huh? Well, yes, more often than one might expect in the beginning. There are the usual suspects such as power sweeps and modulation amplitude sweeps, each automatically varying one parameter in a given range and recording a spectrum for each value.

There are, however, other types of 2D datasets that are quite useful in cwEPR spectroscopy. Some vendors of EPR spectrometers offer no simple way of saving each individual scan in a series of accumulations. However, this may sometimes be of interest, particularly as a single “spike” due to some external event or other malfunction may otherwise ruin your entire dataset, however long it might have taken to record it. Therefore, one way around this limitation is to perform a 2D experiment with repeated field scans, saving each scan as a row of a 2D dataset.

Generally, there are at least two different processing steps of interest for 2D datasets:

  • Projection along one axis

    Equivalent to averaging along that axis

    If recording multiple scans of one and the same spectrum for better signal-to-noise ratio, but saving each scan individually within a row of a 2D dataset, this is the way to get the dataset with improved signal-to-noise ratio originally intended.

May as well be used for rotation patterns, i.e., angular-dependent measurements, if there turns out to be no angular dependence in the data. In this case, at least the measurement time is not wasted, as you end up with a dataset with a clearly better signal-to-noise ratio than initially intended.

  • Extraction of a slice along one dimension

    Having a 2D dataset, we may often be interested in only one slice along one dimension.

    Typical examples would be comparing two positions of the goniometer (zero and 180 degree would be an obvious choice) or slices with similar parameters for different datasets.

More complicated and probably more involved processing of 2D datasets would be to (manually) inspect the individual scans and decide which of those to average, e.g. in case of one problematic scan in between, be it due to external noise sources or spectrometer problems.
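
As a sketch in recipe form, averaging all individually saved scans of such a 2D dataset by projecting along the scan axis might look as follows. Note that whether the scans reside along the first or the second axis depends on how your data were recorded; the axis index here is therefore only an assumption.

- kind: processing
  type: Projection
  properties:
    parameters:
      axis: 1

Extracting a single scan instead would use the SliceExtraction step; see the examples in the module documentation below.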

Handling multiple datasets

Comparing multiple datasets by plotting them in one and the same axis is a rather simple way of handling multiple datasets. However, usually, you would like to perform much more advanced operations on multiple datasets, such as adding and subtracting one from the other.

This may sound pretty simple at first, but it is actually pretty demanding in terms of implementation, as internally you need to check for quite a number of things, such as commensurable axes and ranges. However, this is a rather general problem for all kinds of datasets, hence this functionality may eventually get incorporated into the ASpecD framework.

Particularly in EPR spectroscopy, each measurement will have a unique microwave frequency for which the data were recorded. Therefore, to combine the numerical values of two datasets (subtract, add, average), you will first need to correct them for the same microwave frequency. This will generally result in different field axes for different datasets. Furthermore, some vendors like to record data with non-equidistant field axes as well, making handling of those datasets additionally messy.

  • Subtract a dataset from another dataset

    Ensure the datasets are compatible in terms of their axes (dimension, quantity, unit, common area of values), subtract the common range of values and return only the subtracted (i.e., usually truncated) dataset.

A common use case for subtracting a dataset from another would be an independently recorded resonator background signal, or some other background signal such as the “glass signal” (from impurities in the glass tube you’ve used). A recipe sketch for such a background subtraction is given after this list.

Other, more advanced applications may involve subtracting the spectrum of a single species from that of a spectrum consisting of this and other species. However, in such a case be aware of the fact that the spectrum containing more than one species may not be a simple superposition of the spectra of the two independent species.

  • Add a dataset to another dataset

    Ensure the datasets are compatible in terms of their axes (dimension, quantity, unit, common area of values), add the common range of values together and return only the summed (i.e., usually truncated) dataset.

  • Average two datasets

    Ensure the datasets are compatible in terms of their axes (dimension, quantity, unit, common area of values), average the common range of values together and return only the averaged (i.e., usually truncated) dataset.

A common use case: you performed several independent measurements of the same sample (with otherwise similar/comparable parameters) and would like to average them for a better signal-to-noise ratio.
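
A sketch of such a background subtraction in recipe form might look as follows, assuming two already frequency-corrected datasets loaded under hypothetical labels. The common field range is extracted first, and afterwards the background is subtracted from the sample spectrum:

- kind: multiprocessing
  type: CommonRangeExtraction
  apply_to:
    - sample        # hypothetical label of the sample dataset
    - background    # hypothetical label of the background dataset
  results:
    - sample_common
    - background_common

- kind: processing
  type: DatasetAlgebra
  properties:
    parameters:
      kind: minus
      dataset: background_common
  apply_to:
    - sample_common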

Other processing steps

There may well be further types of processing steps the authors are currently not aware of or didn’t dare to document here.

Note to developers

Processing steps can be based on analysis steps, but not vice versa! Otherwise, we would get cyclic dependencies, which obviously should be avoided in order to keep the code working.

Implementing own processing steps is rather straight-forward. For details, see the documentation of the aspecd.processing module.

Bibliography

P:CCM16

Viktor Chechik, Emma Carter, and Damien Murphy. Electron Paramagnetic Resonance. Oxford University Press, Oxford, UK, 2016.

P:EEBW10

Gareth R. Eaton, Sandra S. Eaton, David P. Barr, and Ralph T. Weber. Quantitative EPR. Springer, Wien, 2010.

Module documentation

What follows is the API documentation of each class implemented in this module.

class cwepr.processing.FieldCorrection

Bases: aspecd.processing.SingleProcessingStep

Correct magnetic field axis by a linear offset.

Perform a linear field correction of the data with a correction value previously determined.

parameters['offset']

Offset to be added to the field axis values. Should be given in the unit of the axis.

Type

float

See also

cwepr.analysis.FieldCalibration

Determine offset value for a magnetic field calibration

Changed in version 0.2: Renamed parameter correction_value to offset
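
A minimal recipe sketch (the offset value is a placeholder; in practice it would be determined beforehand, e.g. using cwepr.analysis.FieldCalibration):

- kind: processing
  type: FieldCorrection
  properties:
    parameters:
      offset: 0.35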

class cwepr.processing.FrequencyCorrection

Bases: aspecd.processing.SingleProcessingStep

Convert data of a given frequency to another given frequency.

This is used to make spectra comparable. There are two methods for frequency correction; the one to use has to be given in the parameters section.

  • calculation via the Zeeman splitting (proportional)

All magnetic field points will be recalculated to keep their g value. This preserves field-dependent interactions such as g anisotropy, but artificially scales all field-independent shifts. Hyperfine interaction (HFI) or zero-field splitting (ZFS) constants are not preserved.

  • Addition of an offset value (offset)

As the EPR spectrum should be somewhat centered on the signal of interest, the central point will be corrected for the new frequency. The difference between the old and the new magnetic field will be added to all other magnetic field points. This preserves constant energy shifts such as (first-order) HFI or ZFS, but artificially scales field-dependent interactions such as g anisotropy. g factors other than that of the reference point are not preserved. (Thanks to B. Corzilius for the input)

Examples

- kind: processing
  type: FrequencyCorrection
  properties:
    parameters:
      kind: proportional
      frequency: 9.63

self.parameters['frequency']

Frequency to correct for.

Default: 9.5

self.parameters['kind']

Method used for frequency correction. Can be offset or proportional.

Default: proportional

Changed in version 0.4: Choice between the two methods of frequency correction

static applicable(dataset)

Check applicability.

class cwepr.processing.GAxisCreation

Bases: aspecd.processing.SingleProcessingStep

Change magnetic field axis to g axis.

Particularly when comparing EPR spectra recorded at different frequency bands, the only sensible way to directly compare these spectra is to transform their magnetic field axis to a g axis.

Note

If you only want to have a g axis appearing in your plots (additionally to your magnetic field axis), you can tell the plotters to add a g axis at the opposite side of your axes. See the documentation of the plotters in the cwepr.plotting module for more details.

Changed in version 0.2: axis quantity is set to “g value”, correct calculation of g values
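
In recipe form, a minimal invocation without further parameters might look as follows (the magnetic field axis is converted using the resonance condition g = hν / (μ_B · B)):

- kind: processing
  type: GAxisCreation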

class cwepr.processing.AutomaticPhaseCorrection

Bases: aspecd.processing.SingleProcessingStep

Automatic phase correction via Hilbert transform.

Important

Experimental state: Other methods have proven to provide a better and more reliable phase correction.

Todo

Does not work properly. It already gives wrong values for simulated data without hyperfine coupling. Reimplement with another method…

Adapted from the MATLAB functionality in the cwEPR-toolbox.

class cwepr.processing.NormalisationOfDerivativeToArea

Bases: aspecd.processing.SingleProcessingStep

Normalise a spectrum to the area under the curve.

As typical cw-EPR spectra are derivative spectra, calculating the area under the curve involves an integration step beforehand. This is done here as well.

Note

If the integrated spectrum has a baseline shift, this is currently not accounted for.
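
A minimal recipe sketch (no further parameters are documented for this step):

- kind: processing
  type: NormalisationOfDerivativeToArea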

class cwepr.processing.Normalisation

Bases: aspecd.processing.Normalisation

Normalise data.

In addition to the kinds implemented in the parent class, this class provides the following normalisations:

  • receiver_gain

    Normalise data to identical receiver gain

  • scan_number

    Normalise data to same number of scans

For an extended documentation of the kinds implemented directly in ASpecD, see the corresponding documentation: aspecd.processing.Normalisation.

Some details for the two additional kinds of normalisation are given below.

Due to the logarithmic scale of the receiver gain (in dB), at least in Bruker spectrometers, it has to be converted to a linear scale. It is calculated as follows:

receiver gain = 10^(receiver gain in dB/20)

Source: Stefan Stoll, EasySpin source code, according to Xenon Manual 2011
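
For example, a receiver gain setting of 60 dB corresponds to a linear factor of 10^(60/20) = 1000, whereas 40 dB corresponds to a factor of 100. Hence, two otherwise identical spectra recorded with these two settings differ by a factor of ten in their raw intensity values.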

The normalisation with respect to the number of scans is necessary to make spectra in which the intensity is the sum over the individual scans comparable to those in which it is averaged.

Important

Know what you are doing, as depending on the software used for recording your data, the data are already normalised with respect to the number of scans.

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

To normalise your dataset(s) with respect to the receiver gain used:

- kind: processing
  type: Normalisation
  properties:
    parameters:
      kind: receiver_gain

To normalise your dataset(s) with respect to the number of scans that have been recorded:

- kind: processing
  type: Normalisation
  properties:
    parameters:
      kind: scan_number

class cwepr.processing.AxisInterpolation

Bases: aspecd.processing.SingleProcessingStep

Interpolate axes to a given number of equidistant field points.

Iterates over the axes, takes the first axis that is not equidistant, and interpolates both this axis and the corresponding data.

parameters['points']

Number of points that should be interpolated to.
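
A recipe sketch might look as follows (the number of points is an arbitrary placeholder):

- kind: processing
  type: AxisInterpolation
  properties:
    parameters:
      points: 1024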

class cwepr.processing.ScalarAlgebra

Bases: aspecd.processing.ScalarAlgebra

Perform scalar algebraic operation on one dataset.

As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.ScalarAlgebra class for details.

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

In case you would like to add a fixed value of 42 to your dataset:

- kind: processing
  type: ScalarAlgebra
  properties:
    parameters:
      kind: add
      value: 42

Similarly, you could use “minus”, “times”, “by”, “add”, “subtract”, “multiply”, or “divide” as kind - resulting in the given algebraic operation.

class cwepr.processing.ScalarAxisAlgebra

Bases: aspecd.processing.ScalarAxisAlgebra

Perform scalar algebraic operation on the axis of a dataset.

As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.ScalarAxisAlgebra class for details.

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

In case you would like to add a fixed value of 42 to the first axis (index 0) of your dataset:

- kind: processing
  type: ScalarAxisAlgebra
  properties:
    parameters:
      kind: plus
      axis: 0
      value: 42

Similarly, you could use “minus”, “times”, “by”, “add”, “subtract”, “multiply”, “divide”, and “power” as kind - resulting in the given algebraic operation.

class cwepr.processing.DatasetAlgebra

Bases: aspecd.processing.DatasetAlgebra

Perform scalar algebraic operation on two datasets.

As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.DatasetAlgebra class for details.

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

In case you would like to add the data of the dataset referred to by its label label_to_other_dataset to your dataset:

- kind: processing
  type: DatasetAlgebra
  properties:
    parameters:
      kind: plus
      dataset: label_to_other_dataset

Similarly, you could use “minus”, “add”, “subtract” as kind - resulting in the given algebraic operation.

As mentioned already, the data of both datasets need to have identical shape, and comparison is only meaningful if the axes are compatible as well. Hence, you will usually want to perform a CommonRangeExtraction processing step before doing algebra with two datasets:

- kind: multiprocessing
  type: CommonRangeExtraction
  results:
    - label_to_dataset
    - label_to_other_dataset

- kind: processing
  type: DatasetAlgebra
  properties:
    parameters:
      kind: plus
      dataset: label_to_other_dataset
  apply_to:
    - label_to_dataset

class cwepr.processing.Projection

Bases: aspecd.processing.Projection

Project data, i.e. reduce dimensions along one axis.

As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.Projection class for details.

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

In the simplest case, just invoke the projection with default values:

- kind: processing
  type: Projection

This will project the data along the first axis (index 0), yielding a 1D dataset.

If you would like to project along the second axis (index 1), simply set the appropriate parameter:

- kind: processing
  type: Projection
  properties:
    parameters:
      axis: 1

This will project the data along the second axis (index 1), yielding a 1D dataset.

class cwepr.processing.SliceExtraction

Bases: aspecd.processing.SliceExtraction

Extract slice along one or more dimensions from dataset.

As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.SliceExtraction class for details.

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

In the simplest case, just invoke the slice extraction with an index only:

- kind: processing
  type: SliceExtraction
  properties:
    parameters:
      position: 5

This will extract the sixth slice (index five) along the first axis (index zero).

If you would like to extract a slice along the second axis (with index one), simply provide both parameters, index and axis:

- kind: processing
  type: SliceExtraction
  properties:
    parameters:
      position: 5
      axis: 1

This will extract the sixth slice along the second axis.

And as it is sometimes more convenient to give ranges in axis values rather than indices, even this is possible. Suppose the axis you would like to extract a slice from runs from 340 to 350 and you would like to extract the slice corresponding to 343:

- kind: processing
  type: SliceExtraction
  properties:
    parameters:
      position: 343
      unit: axis

In case you provide the position in axis units rather than indices, the value closest to the actual axis value will be chosen automatically.

For ND datasets with N>2, you can either extract a 1D or ND slice, with N always at least one dimension less than the original data. To extract a 2D slice from a 3D dataset, simply proceed as above, providing one value each for position and axis. If, however, you want to extract a 1D slice from a 3D dataset, you need to provide two values each for position and axis:

- kind: processing
  type: SliceExtraction
  properties:
    parameters:
      position: [21, 42]
      axis: [0, 2]

This particular case would be equivalent to data[21, :, 42] assuming data to contain the numeric data, besides, of course, that the processing step takes care of removing the axes as well.

class cwepr.processing.BaselineCorrection

Bases: aspecd.processing.BaselineCorrection

Subtract baseline from dataset.

As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.BaselineCorrection class for details.

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

In the simplest case, just invoke the baseline correction with default values:

- kind: processing
  type: BaselineCorrection

In this case, a zeroth-order polynomial baseline will be subtracted from your dataset using ten percent to the left and right, and in case of a 2D dataset, the baseline correction will be performed along the first axis (index zero) for all indices of the second axis (index 1).

Of course, often you want to control a little more how the baseline will be corrected. This can be done by explicitly setting some parameters.

Suppose you want to perform a baseline correction with a polynomial of first order:

- kind: processing
  type: BaselineCorrection
  properties:
    parameters:
      order: 1

If you want to change the (percentage) area used for fitting the baseline, and even specify different ranges left and right:

- kind: processing
  type: BaselineCorrection
  properties:
    parameters:
      fit_area: [5, 20]

Here, five percent from the left and 20 percent from the right are used.

Finally, suppose you have a 2D dataset and want to correct the baseline along the second axis (index one):

- kind: processing
  type: BaselineCorrection
  properties:
    parameters:
      axis: 1

Of course, you can combine the different options.

class cwepr.processing.Filtering

Bases: aspecd.processing.Filtering

Filter data.

As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.Filtering class for details.

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Generally, filtering requires providing both a type of filter and a window length. Therefore, for uniform and Gaussian filters, this would be:

- kind: processing
  type: Filtering
  properties:
    parameters:
      type: uniform
      window_length: 10

Of course, at least uniform filtering (also known as boxcar or moving average) is strongly discouraged due to the artifacts introduced. Probably the best bet for applying a filter to smooth your data is the Savitzky-Golay filter:

- kind: processing
  type: Filtering
  properties:
    parameters:
      type: savitzky-golay
      window_length: 10
      order: 3

Note that for this filter, you need to provide the polynomial order as well. To get best results, you will need to experiment with the parameters a bit.