In this short tutorial we will cover some basic aspects of statistics, time-of-flight (TOF) mass spectrometry (MS), and how we can use them to see chemical reactions.

(image from Scientific Reports volume 10, Article number: 2884 (2020)).

image

What is covariance?


Formal explanation


Probability distribution and measurements


Let X and Y be random variables described by a probability density $\rho(X,Y)$, which is normalized to unity ($\int \int \rho(X,Y) dX dY = 1$). The probability density can be thought of as the underlying law of the physical behavior, while X and Y are just the observable symptoms. By measuring X and Y we roll the dice, and the probability of observing the result $(X,Y)$ is $p(X,Y) = \rho(X,Y)dX dY$.

For every variable $F=F(X,Y)$ dependent on $(X,Y)$ we can calculate its expectation value:

$ \langle F \rangle = \int \int F(X,Y) \rho(X,Y) dX dY \ . $



In practice, the probability distribution $\rho(X,Y)$ is usually unknown, and in an experiment we observe, i.e. measure, random realizations of the random variables $(X,Y)$. If we perform N measurements, we will get a set of results

$ \{ (X_i, Y_i) \}_{i=1}^{N} = \{ (X_1,Y_1), (X_2,Y_2), \ldots , (X_N,Y_N) \} \ . $


The probability distribution will be ``encoded'' inside the measurements, therefore we can estimate the expectation value of any variable F as simply as:

$ \langle F \rangle \approx \frac{1}{N} \sum_{i=1}^{N} F(X_i,Y_i) \ . $


There is a lot of useful stuff that can be calculated from this, but we will focus only on a few chosen quantities.
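
For instance, here is a minimal numpy sketch (with an arbitrarily chosen toy density) that checks the sample-mean estimator above against the exact expectation value:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000

    # Draw N samples of (X, Y) from a simple known density: here X and Y are
    # independent standard normals (an arbitrary choice for illustration).
    X = rng.standard_normal(N)
    Y = rng.standard_normal(N)

    # Estimate <F> for F(X, Y) = X**2 + Y by the sample mean.
    F = X**2 + Y
    print(F.mean())   # ~1.0, since <X^2> = 1 and <Y> = 0 for standard normals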

Moments


Moments describe how one of the variables (X or Y) is distributed; using them we can characterize the unknown ``projections'' of $\rho(X,Y)$ onto X or Y:

$ \begin{cases} \rho_X(X) = \int \rho(X,Y) dY \ , \\ \rho_Y(Y) = \int \rho(X,Y) dX \ . \end{cases} $


Distributions $\rho_X(X)$ and $\rho_Y(Y)$ are called marginal densities.

There are quite a few different types of moments, but we will remember only the two main ones.

  • The raw moments are calculated as

    $ \nu_k(X) = \langle X^k \rangle = \int \int X^k \rho(X,Y) dX dY = \int X^k \rho_X(X) dX \ . $


    The most important raw moment is the variable expectation value:

    $ \nu_1(X) = \langle X \rangle = \int X \rho_X(X) dX \approx \frac{1}{N} \sum_{i=1}^{N} X_i \ . $


    It characterizes the ``center of mass'' of the distribution, which, however, is not necessarily the most probable outcome of the measurement.
    Usually $\nu_1(X)$ is denoted as $\mu_X$, and due to the lack of letters in the Latin and Greek alphabets this will make the next item in the list a bit confusing.
  • The central moments are the second important parameters:

    $ \mu_k(X) = \langle (X - \langle X \rangle)^k \rangle = \int (X - \langle X \rangle)^k \rho_X(X) dX \ . $


    The $\mu_1(X) = \langle X - \langle X \rangle \rangle = 0$ is really boring, however

    $ \mu_2(X) = \langle \underbrace{(X - \langle X \rangle)^2}_{X^2 - 2 X \langle X \rangle + \langle X \rangle^2 } \rangle = \langle X^2 \rangle - 2 \underbrace{\langle X \rangle \langle X \rangle}_{\langle X \rangle^2} + \langle X \rangle^2 = \langle X^2 \rangle - \langle X \rangle^2 = \operatorname{var}(X) $


    is really cool and is known as the variance of X, thus denoted $\operatorname{var}(X)$.
    This value is closely related to the standard deviation, which is defined as

    $ \sigma_X = \sqrt{\operatorname{var}(X)} = \sqrt{\langle X^2 \rangle - \langle X \rangle^2} \ . $


    Both variance and standard deviation show how the variable is ``spread'' around the expectation value.

image
The best illustration of these concepts is the Gaussian distribution (see Fig. above).

$ \rho_X(X) = \frac{1}{\sqrt{2\pi}\sigma_X}\exp\left( - \frac{(X - \mu_X)^2}{2 \sigma_X^2} \right) \ . $


The expectation value of X is $\langle X \rangle = \mu_X$ and the variance is $\operatorname{var}(X) = \sigma_X^2$ (i.e. $\sigma_X$ is the standard deviation).

But please always remember the following: the FWHM (full width at half maximum) of a Gaussian distribution is NOT the standard deviation $\sigma_X$.
Since $\exp\left( - \frac{(X - \mu_X)^2}{2 \sigma_X^2} \right) \in (0;1]$, the condition for the FWHM is $\frac{1}{2} = \exp\left( - \frac{(X - \mu_X)^2}{2 \sigma_X^2} \right)$. The solutions of this equation are

$ \tilde{X}_{\pm} = (X - \mu_X)_{\pm} = \pm \sqrt{2 \ln(2)} \sigma_X \ , $


thus, taking the difference between the edges of the Gaussian distribution at half maximum, we get the FWHM:

$ \mathrm{FWHM} = \tilde{X}_{+} - \tilde{X}_{-} = 2 \sqrt{2 \ln(2)} \sigma_X \approx 2.354820045 \cdot \sigma_X \ . $
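
A quick numerical check of these Gaussian relations (a small sketch; the values of $\mu_X$ and $\sigma_X$ are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    mu_X, sigma_X = 5.0, 2.0                      # arbitrary Gaussian parameters
    X = rng.normal(mu_X, sigma_X, 1_000_000)

    print(X.mean())                               # ~ mu_X     (first raw moment)
    print(X.var())                                # ~ sigma_X**2 (second central moment)
    print(2 * np.sqrt(2 * np.log(2)) * X.std())   # ~ FWHM = 2.3548 * sigma_X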



Covariance


If we want to know something about how the variables X and Y are distributed jointly, we can use a quantity called covariance. It is calculated as

$ \operatorname{cov}(X,Y) = \langle (X - \langle X \rangle ) \cdot (Y - \langle Y \rangle ) \rangle = \langle X Y \rangle - \langle X \rangle \langle Y \rangle \ . $


It is fairly simple to compute from the measurements:

$ \operatorname{cov}(X,Y) \approx \underbrace{\frac{1}{N} \sum_{i=1}^{N} X_i Y_i}_{\approx \langle X Y \rangle} - \underbrace{\left(\frac{1}{N} \sum_{i=1}^{N} X_i \right)}_{\approx \langle X \rangle} \cdot \underbrace{\left( \frac{1}{N} \sum_{i=1}^{N} Y_i \right)}_{\approx \langle Y \rangle} \ , $


and such a calculation is usually done by built-in functions of high-level languages (in MATLAB it is cov, and in Python one can use, for example, numpy.cov).
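
As an example, here is a minimal sketch (with made-up correlated data) comparing the ``by hand'' formula above with numpy.cov; note that numpy normalizes by $N-1$ unless bias=True is passed:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 100_000

    # Correlated toy data: Y is X plus independent noise (arbitrary choice).
    X = rng.standard_normal(N)
    Y = X + 0.5 * rng.standard_normal(N)

    # Covariance "by hand", exactly as in the formula above (1/N normalization).
    cov_manual = (X * Y).mean() - X.mean() * Y.mean()

    # Same thing with numpy (bias=True makes numpy normalize by N instead of N-1).
    cov_numpy = np.cov(X, Y, bias=True)[0, 1]

    print(cov_manual, cov_numpy)   # the two numbers agree (~1.0 here)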

To understand the meaning of this relation we need the concept of independent events. The events A and B are independent if $p(A \cap B) = p(A) \cdot p(B)$. In the case of a continuous distribution $\rho(X,Y)$ this means factorizability of the probability density function:

$ \rho(X,Y) = \rho_X(X) \cdot \rho_Y(Y) \ . $


This means that

$ \langle X^n Y^m \rangle = \int \int X^n Y^m \underbrace{\rho_X(X) \cdot \rho_Y(Y) dX dY}_{\rho(X,Y) dX dY} = \\ = \underbrace{\left( \int X^n \rho_X(X) dX \right)}_{\langle X^n \rangle} \cdot \underbrace{\left( \int Y^m \rho_Y(Y) dY \right)}_{\langle Y^m \rangle} = \langle X^n \rangle \langle Y^m \rangle $


therefore $\langle XY \rangle = \langle X \rangle \langle Y \rangle$, and thus for independent X and Y we get $\operatorname{cov}(X,Y) = 0$.

The covariance normalized by the standard deviations is called the Pearson correlation coefficient:

$ \rho_{X,Y} = \frac{\operatorname{cov}(X,Y)}{\sigma_X \cdot \sigma_Y} = \frac{\operatorname{cov}(X,Y)}{\sqrt{\operatorname{var}(X) \cdot \operatorname{var}(Y)}} = \frac{\langle XY\rangle - \langle X\rangle \langle Y \rangle}{\sqrt{(\langle X^2\rangle - \langle X\rangle^2) \cdot (\langle Y^2\rangle - \langle Y\rangle^2)}} \ . $


Statisticians love this one more than the plain covariance; in physics, however, it is exactly the opposite.
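
A one-line check of the Pearson coefficient on the same kind of toy data (numpy.corrcoef is the built-in for this):

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal(100_000)
    Y = X + 0.5 * rng.standard_normal(100_000)    # same toy data as above

    # Pearson correlation rho_{X,Y}; np.corrcoef returns the full 2x2 matrix.
    print(np.corrcoef(X, Y)[0, 1])                # ~0.894 = 1/sqrt(1.25)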

Simple example of how covariance works


image
Let us consider a prototypical quantum cryptography scheme (see Fig. above):

  • a source that produces two electrons flying in opposite directions,
  • two detectors (``A'' and ``B'') that catch the electrons and measure their spin projection on the vertical axis.

In this system there are four possible measurement outcomes:
  1. $\uparrow_A$ and $\uparrow_B$ with spins $s_A=+\frac{1}{2}$ and $s_B=+\frac{1}{2}$,
  2. $\uparrow_A$ and $\downarrow_B$ with $s_A=+\frac{1}{2}$ and $s_B=-\frac{1}{2}$,
  3. $\downarrow_A$ and $\uparrow_B$ with $s_A=-\frac{1}{2}$ and $s_B=+\frac{1}{2}$,
  4. $\downarrow_A$ and $\downarrow_B$ with $s_A=-\frac{1}{2}$ and $s_B=-\frac{1}{2}$,

where the subscript denotes the detector (A or B) at which the observation is made.
There are three basic situations that can happen.

Anticorrelated electrons


The source can produce an entangled pair of electrons in the singlet state $|\psi\rangle = \frac{1}{\sqrt{2}} (|\uparrow_A \downarrow_B\rangle - | \downarrow_A \uparrow_B \rangle)$.
In this case the probabilities of the outcomes will be $p(\uparrow_A \uparrow_B) = p(\downarrow_A \downarrow_B) = 0$ and $p(\uparrow_A \downarrow_B) = p(\downarrow_A \uparrow_B) = \frac{1}{2}$.
The average spin orientations $\langle s_A \rangle$ and $\langle s_B \rangle$ measured at the detectors will be

$ \begin{cases} \langle s_A \rangle = \left(+\frac{1}{2}\right) \overbrace{p(\uparrow_A \uparrow_B)}^{0} + \left(+\frac{1}{2}\right) \overbrace{p(\uparrow_A \downarrow_B)}^{1/2} + \left(-\frac{1}{2}\right) \overbrace{p(\downarrow_A \uparrow_B)}^{1/2} + \left(-\frac{1}{2}\right) \overbrace{p(\downarrow_A \downarrow_B)}^{0} = 0 \\ \langle s_B \rangle = \left(+\frac{1}{2}\right) \underbrace{p(\uparrow_A \uparrow_B)}_{0} + \left(-\frac{1}{2}\right) \underbrace{p(\uparrow_A \downarrow_B)}_{1/2} + \left(+\frac{1}{2}\right) \underbrace{p(\downarrow_A \uparrow_B)}_{1/2} + \left(-\frac{1}{2}\right) \underbrace{p(\downarrow_A \downarrow_B)}_{0} = 0 \\ \end{cases} \ . $


However, the cross-term for the entangled state will be:

$ \langle s_A s_B \rangle = \left(+\frac{1}{2}\right) \left(+\frac{1}{2}\right) \overbrace{p(\uparrow_A \uparrow_B)}^{0} + \left(+\frac{1}{2}\right) \left(-\frac{1}{2}\right) \overbrace{p(\uparrow_A \downarrow_B)}^{1/2} + \\ + \left(-\frac{1}{2}\right) \left(+\frac{1}{2}\right) \overbrace{p(\downarrow_A \uparrow_B)}^{1/2} + \left(-\frac{1}{2}\right) \left(-\frac{1}{2}\right) \overbrace{p(\downarrow_A \downarrow_B)}^{0} = -\frac{1}{4} \ , $


and thus

$ \operatorname{cov}(s_A,s_B) = \langle s_A s_B \rangle - \langle s_A \rangle \langle s_B \rangle = -\frac{1}{4} \ . $


A negative covariance means that if the value of $s_A$ is positive, then the value of $s_B$ is most likely negative.

Non-correlated electrons


If we break the entanglement, we will get independent electrons, which gives equal probabilities for all four outcomes: $p(\uparrow_A \uparrow_B) = p(\downarrow_A \downarrow_B) = p(\uparrow_A \downarrow_B) = p(\downarrow_A \uparrow_B) = \frac{1}{4}$. This does not change the expectation values of the single-detector measurements:

$ \begin{cases} \langle s_A \rangle = \left(+\frac{1}{2}\right) \overbrace{p(\uparrow_A \uparrow_B)}^{1/4} + \left(+\frac{1}{2}\right) \overbrace{p(\uparrow_A \downarrow_B)}^{1/4} + \left(-\frac{1}{2}\right) \overbrace{p(\downarrow_A \uparrow_B)}^{1/4} + \left(-\frac{1}{2}\right) \overbrace{p(\downarrow_A \downarrow_B)}^{1/4} = 0 \\ \langle s_B \rangle = \left(+\frac{1}{2}\right) \underbrace{p(\uparrow_A \uparrow_B)}_{1/4} + \left(-\frac{1}{2}\right) \underbrace{p(\uparrow_A \downarrow_B)}_{1/4} + \left(+\frac{1}{2}\right) \underbrace{p(\downarrow_A \uparrow_B)}_{1/4} + \left(-\frac{1}{2}\right) \underbrace{p(\downarrow_A \downarrow_B)}_{1/4} = 0 \\ \end{cases} \ , $


but the cross-term will now change:

$ \langle s_A s_B \rangle = \left(+\frac{1}{2}\right) \left(+\frac{1}{2}\right) \overbrace{p(\uparrow_A \uparrow_B)}^{1/4} + \left(+\frac{1}{2}\right) \left(-\frac{1}{2}\right) \overbrace{p(\uparrow_A \downarrow_B)}^{1/4} + \\ + \left(-\frac{1}{2}\right) \left(+\frac{1}{2}\right) \overbrace{p(\downarrow_A \uparrow_B)}^{1/4} + \left(-\frac{1}{2}\right) \left(-\frac{1}{2}\right) \overbrace{p(\downarrow_A \downarrow_B)}^{1/4} = 0 \ , $


and we will see the independence of the electrons as a vanishing covariance:

$ \operatorname{cov}(s_A,s_B) = \langle s_A s_B \rangle - \langle s_A \rangle \langle s_B \rangle = 0 \ . $



Positively correlated electrons


However, we can also set the source to produce the electron pair in a triplet state, which can be one of three possibilities:

$ |\psi\rangle = \begin{cases} |\uparrow_A \uparrow_B\rangle \ , \\ \frac{1}{\sqrt{2}}(|\uparrow_A \downarrow_B\rangle + |\downarrow_A \uparrow_B\rangle ) \ , \\ |\downarrow_A \downarrow_B\rangle \ . \end{cases} $


This will give the set of probabilities
$p(\uparrow_A \uparrow_B) = p(\downarrow_A \downarrow_B) = \frac{1}{2}$ and $p(\uparrow_A \downarrow_B) = p(\downarrow_A \uparrow_B) = 0$ (in reality the probabilities will be different; we are oversimplifying here). The $\langle s_A \rangle$ and $\langle s_B \rangle$ will still be zero, but the cross-term

$ \langle s_A s_B \rangle = \left(+\frac{1}{2}\right) \left(+\frac{1}{2}\right) \overbrace{p(\uparrow_A \uparrow_B)}^{1/2} + \left(+\frac{1}{2}\right) \left(-\frac{1}{2}\right) \overbrace{p(\uparrow_A \downarrow_B)}^{0} + \\ + \left(-\frac{1}{2}\right) \left(+\frac{1}{2}\right) \overbrace{p(\downarrow_A \uparrow_B)}^{0} + \left(-\frac{1}{2}\right) \left(-\frac{1}{2}\right) \overbrace{p(\downarrow_A \downarrow_B)}^{1/2} = + \frac{1}{4} \ , $


will be positive, and thus the covariance

$ \operatorname{cov}(s_A,s_B) = \langle s_A s_B \rangle - \langle s_A \rangle \langle s_B \rangle = +\frac{1}{4} \ . $


This positive correlation means that when $s_A$ is positive, $s_B$ is most likely also positive.

All together now!


To summarize:

  • negative covariance = anti-relation between variables (when one is increased, the other is decreased, and vice versa),
  • zero covariance = no (linear) relation between the variables (independent variables always give zero covariance, although the converse is not guaranteed),
  • positive covariance = when one value is increased, the second is increased as well (and vice versa).
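
These three cases can be reproduced with a toy Monte-Carlo simulation (a sketch; the spin pairs are drawn directly from the probabilities stated above):

    import numpy as np

    rng = np.random.default_rng(4)
    N = 200_000

    def spin_cov(pairs, probs):
        """Draw N spin pairs (s_A, s_B) with the given probabilities and
        return the sample covariance cov(s_A, s_B)."""
        idx = rng.choice(len(pairs), size=N, p=probs)
        s = np.array(pairs)[idx]
        sA, sB = s[:, 0], s[:, 1]
        return (sA * sB).mean() - sA.mean() * sB.mean()

    up, dn = +0.5, -0.5
    pairs = [(up, up), (up, dn), (dn, up), (dn, dn)]

    print(spin_cov(pairs, [0.0, 0.5, 0.5, 0.0]))      # singlet:      ~ -0.25
    print(spin_cov(pairs, [0.25, 0.25, 0.25, 0.25]))  # independent:  ~  0
    print(spin_cov(pairs, [0.5, 0.0, 0.0, 0.5]))      # triplet toy:  ~ +0.25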

TOF-TOF covariance maps


What is TOF covariance map?


First, we need to refresh our memory of what time-of-flight (TOF) mass spectrometry (MS) is.
image
The simplest setup (see Fig. above) consists of two plates, one charged positively and one negatively. This capacitor-like construction forms a uniform electric field. When ions are produced at a distance $L$ from the detector (located near the negative plate), they are accelerated by the Lorentz force $\mathbf{F} = q \mathbf{E}$ towards the negative plate. Depending on the mass-to-charge ratio, the time from ion production to registration will differ: the heavier ions arrive later than the lighter ones.
This simple scheme already provides quite a fine mass-to-charge resolution.

In the end, the MS signal is $I(\mathrm{TOF})$, the ion current as a function of TOF. When it is digitized, the continuous signal turns into a discrete, vector-like quantity:

$ \mathbf{I} = (\underbrace{I_1}_{I(\mathrm{TOF}_1)}, \underbrace{I_2}_{I(\mathrm{TOF}_2)}, \ldots, \underbrace{I_M}_{I(\mathrm{TOF}_M)} ) \ , $


where $\mathrm{TOF}_\alpha = \mathrm{TOF}_1 + dt \cdot (\alpha-1)$, $dt$ is the discretization time step and M is the number of TOF points taken.
We can perform N measurements of the TOF MS, which gives us a set of independent results $\{\mathbf{I}_i = (I_{1i}, I_{2i}, \ldots, I_{Mi}) = \{ I_{\alpha i} \}_{\alpha=1}^M\}_{i=1}^{N}$.
Based on this ensemble of points we can calculate an $M \times M$ covariance map consisting of the elements

$ C_{\alpha \beta} = \operatorname{cov}(\underbrace{I_{\alpha}}_{I(\mathrm{TOF}_\alpha)} , \underbrace{I_{\beta}}_{I(\mathrm{TOF}_\beta)}) = \frac{1}{N} \sum_{i=1}^{N} I_{\alpha i } I_{\beta i} - \left( \frac{1}{N} \sum_{i=1}^{N} I_{\alpha i } \right) \cdot \left( \frac{1}{N} \sum_{i=1}^{N} I_{\beta i} \right) \ . $
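
In vectorized numpy this boils down to a few lines (a sketch; traces is a hypothetical $N \times M$ array holding one digitized single-shot spectrum per row):

    import numpy as np

    def covariance_map(traces):
        """traces: (N, M) array, one digitized TOF spectrum I_i per row.
        Returns the M x M covariance map C with 1/N normalization."""
        traces = np.asarray(traces, dtype=float)
        N = traces.shape[0]
        mean = traces.mean(axis=0)                    # <I_alpha>, length M
        C = traces.T @ traces / N - np.outer(mean, mean)
        return C

    # toy usage: 1000 fake shots, 256 TOF bins of random counts
    rng = np.random.default_rng(5)
    C = covariance_map(rng.poisson(1.0, size=(1000, 256)))
    print(C.shape)   # (256, 256)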



From the meaning of the covariance we can expect to see the underlying reactions.

  • If fragments A and B are formed independently, the corresponding part of the map will show $\operatorname{cov}(I(A),I(B)) = 0$.
  • If fragment A is formed from parent ion B, an increase of A depletes the amount (i.e. intensity) of the parent and, vice versa, less fragmentation of B yields less observed A. Therefore in this part of the map we expect $\operatorname{cov}(I(A),I(B)) < 0$.
  • If A and B are produced in the same reaction sequence from the same parent species, an increase/decrease of one ion will be mirrored in the connected fragment, therefore $\operatorname{cov}(I(A),I(B)) > 0$.

However, life is a little bit more interesting than that. :)

Coulomb explosions


The Coulomb explosion is a reaction of the kind

$ \mathrm{AB}^{q+} \rightarrow \mathrm{A}^{q_\mathrm{A}+} + \mathrm{B}^{q_\mathrm{B}+} \ . $


We then have two ions flying away from each other (see Fig. above). According to momentum conservation, the final velocities of the fragments ($\mathbf{v}_A$ for $\mathrm{A}^{q_\mathrm{A}+}$ and $\mathbf{v}_B$ for $\mathrm{B}^{q_\mathrm{B}+}$) are related to the initial velocity of $\mathrm{AB}^{q+}$ ($\mathbf{v}_{AB}$) as

$ \underbrace{(m_A + m_B)}_{m_{AB}} \mathbf{v}_{AB} = m_A \mathbf{v}_A + m_B \mathbf{v}_B \ , $


where the m's are the masses of the parent and the fragments. Let us consider only the projections of the velocities onto the electric field direction (denoted v, see Fig. above):

$ m_{AB} {v}_{AB} = m_A {v}_A + m_B {v}_B \ . $


Each fragment will move towards the detector according to Newton's second law, $m \ddot{x} = F = q E$, where $q$ is the charge of the ion and $E$ is the electric field strength (we consider only the field direction). We can solve this taking the initial position of the ion to be $x=0$ and its initial velocity to be v:

$ x(t) = v \cdot t + \underbrace{\frac{qE}{m}}_{a} \cdot \frac{t^2}{2} \ , $


where a is the ion acceleration in the constant electric field. We need to know the time at which the ion reaches the detector located at a distance L from the reaction region. It is found from $x(t) = L$, which is a quadratic equation for the TOF t:

$ \frac{a}{2} t^2 + v\cdot t - L = 0 \ . $


The solution is

$ t = \frac{\sqrt{v^2 + 2 a L} - v}{a} \ . $


Since the applied field should be strong enough to pull each and every ion towards the detector, we have $v^2 \ll 2 a L$ and can write $\sqrt{v^2 + 2 a L} \approx \sqrt{2 a L}$, thus

$ t = \underbrace{\sqrt{\frac{2L}{a}}}_{t_0} + \underbrace{\left( - \frac{v}{a} \right)}_{\delta t} \ , $


where $t_0 = \sqrt{\frac{2L}{E} \cdot \frac{m}{q}}$ is the time at which an ion with zero initial velocity reaches the detector (it should also be the average arrival time for ions of this sort) and

$ \delta t = - \frac{m}{q} \frac{v}{E} $


is the TOF drift. Obviously, the ions that initially fly towards the detector ($v>0$) reach their destination earlier ($t<t_0$), while those that initially fly away from the detector ($v<0$) arrive later ($t>t_0$).
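
A tiny numerical illustration of $t_0$ and $\delta t$ (a sketch with made-up numbers for $m$, $q$, $E$ and $L$, in SI units):

    import numpy as np

    # Made-up example numbers (SI units): a singly charged ion of mass 28 u
    # (e.g. N2+), a 100 V/mm extraction field and a 10 cm flight distance.
    q = 1.602e-19            # C
    m = 28 * 1.661e-27       # kg
    E = 1.0e5                # V/m
    L = 0.10                 # m

    a = q * E / m            # acceleration in the uniform field
    t0 = np.sqrt(2 * L / a)  # arrival time for an ion born at rest

    for v in (-1000.0, 0.0, +1000.0):      # initial velocity along the field, m/s
        dt = -m * v / (q * E)              # TOF drift delta_t
        print(f"v = {v:+7.1f} m/s  ->  t = {(t0 + dt) * 1e6:.3f} us")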

image

We want to know how the arrival times of $\mathrm{A}^{q_\mathrm{A}+}$ and $\mathrm{B}^{q_\mathrm{B}+}$ are connected (correlated). From the momentum conservation law above we can write

$ {v}_B = \frac{m_{AB} {v}_{AB}}{m_B} - \frac{m_A }{m_B}{v}_A \ . $


The time drift $\delta t$ for $\mathrm{A}^{q_\mathrm{A}+}$ is $\delta t_A = - \frac{m_A}{q_A} \frac{v_A}{E}$, while for $\mathrm{B}^{q_\mathrm{B}+}$ it is

$ \delta t_B = - \frac{m_B}{q_B} \frac{v_B}{E} = \underbrace{ - \frac{m_{AB}}{q_B} \frac{{v}_{AB}}{E} }_{\delta t_{AB}} + \frac{q_A}{q_B} \underbrace{\frac{m_A}{q_A} \frac{v_A}{E}}_{-\delta t_A } = \delta t_{AB} - \frac{q_A}{q_B} \delta t_A \ . $


This gives the TOFs for ions $\mathrm{A}^{q_\mathrm{A}+}$ and $\mathrm{B}^{q_\mathrm{B}+}$:

$ \begin{cases} t_A = t_{0A} + \delta t_A \ , \\ t_B = t_{0B} + \delta t_{AB} - \frac{q_A}{q_B} \delta t_A \ . \end{cases} $


The times $t_A$ and $t_B$ are connected through the variable $\delta t_A$, which reflects the initial velocity of fragment $\mathrm{A}^{q_\mathrm{A}+}$.
Thus the positive correlation (covariance) in the coordinates $(t_A,t_B)$ (i.e. in the coordinates of the covariance map, $(\mathrm{TOF}_\alpha,\mathrm{TOF}_\beta)$) for the ions $\mathrm{A}^{q_\mathrm{A}+}$ and $\mathrm{B}^{q_\mathrm{B}+}$ will look like a straight line with slope $-\frac{q_A}{q_B}$ (see Fig. above). Such straight lines with negative slopes are the clear signatures of Coulomb explosions. If there are intermediate or subsequent reactions, the shapes of the covariance features change in rather weird ways.
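
A quick way to convince ourselves of the $-\frac{q_A}{q_B}$ slope is to simulate many explosion events and fit a line through the resulting $(t_A,t_B)$ drifts (a sketch with arbitrary masses and charges; the parent is assumed to be at rest, $v_{AB}=0$, so the constant offsets $t_{0A}$, $t_{0B}$ are irrelevant for the slope):

    import numpy as np

    rng = np.random.default_rng(6)
    n_events = 10_000

    # Arbitrary fragment parameters (units do not matter for the slope:
    # only the charge ratio q_A / q_B enters).
    m_A, q_A = 14.0, 2.0        # e.g. a doubly charged fragment
    m_B, q_B = 14.0, 1.0        # and a singly charged partner
    E = 1.0
    v_AB = 0.0                  # cold parent molecule: no centre-of-mass drift

    # Random explosion velocity of fragment A along the field axis;
    # momentum conservation fixes v_B for each event.
    v_A = rng.normal(0.0, 1.0, n_events)
    v_B = ((m_A + m_B) * v_AB - m_A * v_A) / m_B

    dt_A = -m_A / q_A * v_A / E
    dt_B = -m_B / q_B * v_B / E

    slope = np.polyfit(dt_A, dt_B, 1)[0]
    print(slope, -q_A / q_B)    # both ~ -2.0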

Partial covariance maps


Free electron lasers (FELs) are really great at making ions. They are also known to have large fluctuations of the pulse energy. Let us consider the yields of two ions, A and B. Assuming that all the ions are formed in one-photon processes, their yields will be proportional to the FEL power (W):

$ \begin{cases} A = a \cdot W \ , \\ B = b \cdot W \ , \end{cases} $


where a and b are the true random ion yields, which depend only on the intramolecular random variables, i.e. a and b are independent of W. This means that $\langle a^n W^m\rangle = \langle a^n \rangle \langle W^m \rangle$.
Let us calculate the covariance of A and B that we will get from the covariance maps:

$ \operatorname{cov}(A,B) = \underbrace{\langle AB\rangle}_{\langle a b \rangle \langle W^2 \rangle} - \underbrace{\langle A \rangle}_{\langle a \rangle \langle W \rangle} \underbrace{\langle B \rangle}_{\langle b \rangle \langle W \rangle} = \langle a b \rangle \langle W^2 \rangle - \langle a \rangle \langle b \rangle \langle W \rangle^2 \ . $


But in reality we want to have

$ \operatorname{cov}(a,b) = \langle a b \rangle - \langle a \rangle \langle b \rangle \ , $


so let us extract it from $\operatorname{cov}(A,B)$. To do that we add and subtract the missing term $\langle a \rangle \langle b \rangle \langle W^2 \rangle$, and by doing so we get

$ \operatorname{cov}(A,B) = \underbrace{\langle a b \rangle \langle W^2 \rangle - \langle a \rangle \langle b \rangle \langle W^2 \rangle}_{\langle W^2 \rangle \cdot \operatorname{cov}(a,b)} + \langle a \rangle \langle b \rangle \underbrace{\left( \langle W^2 \rangle - \langle W \rangle^2 \right)}_{\operatorname{var}(W)} = \langle W^2 \rangle \cdot \operatorname{cov}(a,b) + \langle a \rangle \langle b \rangle \operatorname{var}(W) \ . $


The first term is what we want: it represents the true intramolecular relation between the ions. The second term is called the false covariance: it gives a nonzero covariance even if the ions are in reality unrelated ($\operatorname{cov}(a,b) = 0$).
To isolate the first term we need to express the second one in terms of the observables $A,B,W$.

Let us calculate covariance of A and B with W:

$ \begin{cases} \operatorname{cov}(A,W) = \overbrace{\langle AW\rangle}^{\langle a \rangle \langle W^2 \rangle} - \overbrace{\langle A \rangle}^{\langle a \rangle \langle W \rangle} \langle W\rangle = \langle a \rangle \cdot \operatorname{var}(W) \ , \\ \operatorname{cov}(B,W) = \underbrace{\langle BW\rangle}_{\langle b \rangle \langle W^2 \rangle} - \underbrace{\langle B \rangle}_{\langle b \rangle \langle W \rangle} \langle W\rangle = \langle b \rangle \cdot \operatorname{var}(W) \ , \end{cases} $


therefore

$ \langle a \rangle \langle b \rangle \operatorname{var}(W) = \frac{\operatorname{cov}(A,W) \cdot \operatorname{cov}(W,B)}{\operatorname{var}(W)} \ . $



Now we can calculate $\langle W^2 \rangle \cdot \operatorname{cov}(a,b)$, which is called the partial covariance ($\operatorname{pcov}$), as

$ \operatorname{pcov}(A,B) = \operatorname{cov}(A,B) - \frac{\operatorname{cov}(A,W) \cdot \operatorname{cov}(W,B)}{\operatorname{var}(W)} \ . $


In principle, this value represents the ``true'' covariance of ionic signals.

If we have the same measured data $\{\mathbf{I}_i = (I_{1i}, I_{2i}, \ldots, I_{Mi}) = \{ I_{\alpha i} \}_{\alpha=1}^M\}_{i=1}^{N}$ as when we first introduced the TOF-TOF covariance maps, we have to calculate the following matrix elements:

$ P_{\alpha \beta} = C_{\alpha \beta} - \frac{1}{\frac{1}{N}\sum_{i=1}^N W_i^2 - \left( \frac{1}{N}\sum_{i=1}^N W_i \right)^2} \cdot \\ \cdot \left( \frac{1}{N}\sum_{i=1}^N I_{\alpha i} W_i - \left( \frac{1}{N}\sum_{i=1}^N I_{\alpha i} \right) \cdot \left( \frac{1}{N}\sum_{i=1}^N W_i \right) \right) \cdot \\ \cdot \left( \frac{1}{N}\sum_{i=1}^N I_{\beta i} W_i - \left( \frac{1}{N}\sum_{i=1}^N I_{\beta i} \right) \cdot \left( \frac{1}{N}\sum_{i=1}^N W_i \right) \right) $


where $C_{\alpha \beta}$ is the covariance map.

If we denote the $M \times M$ TOF-TOF covariance matrix as $\operatorname{cov}(\mathbf{I},\mathbf{I})$ and define a vector of length M for the covariance of the ion intensities with the FEL power W:

$ \operatorname{cov}(\mathbf{I},W) = ( \operatorname{cov}(I_1, W), \operatorname{cov}(I_2, W), \ldots , \operatorname{cov}(I_M, W) ) \ , $


then we can write the partial covariance matrix $\operatorname{pcov}(\mathbf{I},\mathbf{I})$ as

$ \operatorname{pcov}(\mathbf{I},\mathbf{I}) = \operatorname{cov}(\mathbf{I},\mathbf{I}) - \frac{\operatorname{cov}(\mathbf{I},W) \otimes \operatorname{cov}(W,\mathbf{I})}{\operatorname{var}(W)} \ , $


where $\otimes$ is the outer product. When using high-level languages (such as MATLAB or Python), please use this or an equivalent vectorized expression; otherwise the calculation of the partial covariance will take forever.
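
A vectorized numpy version of this expression might look as follows (a sketch reusing the covariance_map idea above; traces is again the hypothetical $N \times M$ array of single-shot spectra and W the length-N array of pulse energies; the optional factor s anticipates the scaled correction discussed below):

    import numpy as np

    def partial_covariance_map(traces, W, s=1.0):
        """traces: (N, M) array of single-shot TOF spectra, W: (N,) pulse energies.
        Returns pcov(I, I) = cov(I, I) - s * cov(I, W) (x) cov(W, I) / var(W),
        all with 1/N normalization."""
        traces = np.asarray(traces, dtype=float)
        W = np.asarray(W, dtype=float)
        N = traces.shape[0]

        mean_I = traces.mean(axis=0)                      # <I_alpha>
        mean_W = W.mean()                                 # <W>

        cov_II = traces.T @ traces / N - np.outer(mean_I, mean_I)
        cov_IW = traces.T @ W / N - mean_I * mean_W       # length-M vector
        var_W = (W * W).mean() - mean_W**2

        return cov_II - s * np.outer(cov_IW, cov_IW) / var_W

    # toy usage with fake data: yields roughly proportional to the pulse energy
    rng = np.random.default_rng(7)
    W = rng.gamma(25.0, 0.04, 1000)                       # mean ~1.0, std ~0.2
    traces = rng.poisson(np.outer(W, np.ones(256)))
    P = partial_covariance_map(traces, W, s=1.0)
    print(P.shape)   # (256, 256)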

In reality, the FEL power may not be the only random variable that creates false covariances. The molecular beam can fluctuate, other lasers can also have power variations, the detection devices can drift, and so on. If there are multiple random variables $\{ \xi_k \}_{k=1}^{K} = \vec{\xi}$ that are independent of each other ($\operatorname{cov}(\xi_k, \xi_l) =0$ for $k \neq l$), a generalized partial covariance can be calculated:

$ \operatorname{pcov}(\mathbf{I},\mathbf{I} |\vec{\xi} ) = \operatorname{cov}(\mathbf{I},\mathbf{I}) - \sum_{k=1}^{K} \frac{\operatorname{cov}(\mathbf{I},\xi_k) \otimes \operatorname{cov}(\xi_k,\mathbf{I})}{\operatorname{var}(\xi_k)} \ . $


However, the random variables other than the FEL power are usually unknown, therefore one assumes that their correction terms look just like the FEL-power one, i.e.

$ \frac{\operatorname{cov}(\mathbf{I},W) \otimes \operatorname{cov}(W,\mathbf{I})}{\operatorname{var}(W)} \propto \frac{\operatorname{cov}(\mathbf{I},\xi) \otimes \operatorname{cov}(\xi,\mathbf{I})}{\operatorname{var}(\xi)} \ , $


so to get rid of all the false covariances we can simply take the partial covariance with the correction scaled by a factor $s>1$:

$ \operatorname{pcov}(\mathbf{I},\mathbf{I}) = \operatorname{cov}(\mathbf{I},\mathbf{I}) - s \cdot \frac{\operatorname{cov}(\mathbf{I},W) \otimes \operatorname{cov}(W,\mathbf{I})}{\operatorname{var}(W)} \ . $



Real life example


image

Let us consider an example from the paper O. Kornilov et al., ``Coulomb explosion of diatomic molecules in intense XUV fields mapped by partial covariance'', J. Phys. B: At. Mol. Opt. Phys. 46 (2013) 164028.

The authors ionized poor N2 molecules with intense extreme ultraviolet (XUV) radiation produced by the free electron laser FLASH in Hamburg, thus obtaining the TOF-TOF covariance maps given above. Panels (a) and (b) show the components of the covariance map, namely the cross-term $\langle XY \rangle$ and the product of averages $\langle X \rangle \langle Y \rangle$. Panel (c) is the resulting covariance map, in which we can already see some lines with negative slopes. If we take the FEL power correction map (panel (d)) and subtract it from the covariance, we obtain the partial covariance map (panel (e)) with lots and lots of lines. To correct even further for the other fluctuations, the correction matrix (d) can be scaled with a factor $s=1.1$, which gives an even cleaner map, shown in panel (f).

And this is how it's done :)

Conclusions


  • Covariance maps are not scary at all.
  • They can show the reaction pathways for the ions.

Stuff to read


  • To know at least something, Wikipedia can help.
  • A really good review is:
    Frasinski, L. J. Covariance mapping techniques. Journal of Physics B: Atomic, Molecular and Optical Physics 49, 15 (jul 2016), 152004.
  • A research paper, but with lots of useful theoretical material in it:
    Zhaunerchyk, V., Frasinski, L. J., Eland, J. H. D., and Feifel, R. Theory and simulations of covariance mapping in multiple dimensions for data analysis in high-event-rate experiments. Phys. Rev. A 89 (May 2014), 053418.

P.S.


This tutorial was written for a very specific and advanced audience. So if there are any questions or concerns, please feel free to ask. I'll be glad to help! :)