Material and methods
Dataset
The dataset used consists of EEG signals, i.e. multivariate time series (see Fig. 1a), taken from the PhysioNet database [17]. EEGs are recorded by positioning electrodes at key points on the patient’s head according to a standard scheme: the database we used adopts the international 10–20 system (see Fig. 1c). The EEGs used in this study were collected at the Children’s Hospital Boston and consist of recordings from pediatric subjects with intractable seizures. Subjects were monitored for several days following the withdrawal of anti-seizure medication in order to characterise their seizures and assess their candidacy for surgery. We selected 33 recordings with at least one epileptic event and 33 without epileptic events. All recordings have the same number of channels (electrodes), 23, and the same length, 921,600 samples at a sampling frequency of 256 Hz.
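For readers who wish to reproduce the setup, the following is a minimal sketch of how one such recording can be loaded, assuming the CHB-MIT EDF files from PhysioNet and the MNE-Python library; neither tool nor the file name is prescribed by this paper.

```python
# Minimal sketch: loading one CHB-MIT recording with MNE-Python.
# The file name is illustrative; files are available at
# https://physionet.org/content/chbmit/
import mne

raw = mne.io.read_raw_edf("chb01_03.edf", preload=True, verbose=False)
data = raw.get_data()        # shape: (n_channels, n_samples)
print(data.shape)            # e.g. (23, 921600)
print(raw.info["sfreq"])     # 256.0 Hz
```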
TDA: a new method for data analysis
Consider a set of points G, i.e. our data, embedded in a d-dimensional space \(\mathbb {D}^d\) and assume that those data were sampled from an unknown k-dimensional space \(\mathbb {D}^k\) with \(k \le d\). Our task is to reconstruct the space \(\mathbb {D}^k\) from the dataset G.
In TDA, the elements of G are equipped with a notion of proximity that characterises a coordinate-free metric. Those points are converted into topological spaces called simplicial complexes. Simplicial complexes are made up of building blocks called simplices: points are 0-simplices, line segments are 1-simplices, filled triangles are 2-simplices, filled tetrahedra are 3-simplices and so on (see Fig. 1d).
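To make the notion concrete, here is a small sketch (the names are ours) representing simplices as sets of vertex identifiers; a simplicial complex is a family of simplices closed under taking faces.

```python
from itertools import combinations

# A k-simplex is a set of k+1 vertices; a simplicial complex must contain
# every face (non-empty subset) of each of its simplices.
def closure(simplices):
    """Smallest simplicial complex containing the given simplices."""
    complex_ = set()
    for s in simplices:
        for k in range(1, len(s) + 1):
            complex_.update(frozenset(f) for f in combinations(s, k))
    return complex_

# A filled triangle (2-simplex) on vertices 0, 1, 2 plus an extra edge {2, 3}.
K = closure([(0, 1, 2), (2, 3)])
print(sorted(map(sorted, K)))
# [[0], [0, 1], [0, 1, 2], [0, 2], [1], [1, 2], [2], [2, 3], [3]]
```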
A filtration is a collection of nested simplicial complexes. Building a filtration can be seen as putting on lenses for examining the dataset: different lenses allow different kinds of information to be extracted from the topological space. In this paper we use the Piecewise filtration and the Vietoris–Rips filtration. Choosing a filtration is a crucial step: different filtrations give rise to different conversions of the data points G into simplicial complexes [18,19,20].
Piecewise filtration
The Piecewise filtration, recently introduced by Rucco et al. [21], is used for studying signals. The procedure is based on the well-known concept of a Piecewise Linear (PL) function, \(PL:\mathbb {R}\rightarrow \mathbb {R}\), shown in Fig. 2a, b.
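The exact construction is detailed in [21]. To convey the idea, the sketch below computes the 0-dimensional barcode of the closely related sublevel-set filtration of a sampled PL function, in which connected components are born at local minima and die when they merge at local maxima; it is an illustrative stand-in under our own assumptions, not the algorithm of [21].

```python
import numpy as np

def sublevel_persistence_0d(y):
    """0-dimensional barcode of the sublevel-set filtration of a PL
    function sampled at y[0..n-1]; the global minimum yields the
    essential interval with death = +inf."""
    n = len(y)
    order = np.argsort(y, kind="stable")
    parent = [-1] * n            # -1: sample not yet entered the filtration
    bars = []

    def find(i):                 # union-find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        parent[i] = i            # a new component is born at y[i] ...
        for j in (i - 1, i + 1): # ... unless a neighbour is already present
            if 0 <= j < n and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # elder rule: the younger component (larger birth) dies
                    young, old = (ri, rj) if y[ri] > y[rj] else (rj, ri)
                    if y[young] < y[i]:        # skip zero-length bars
                        bars.append((y[young], y[i]))
                    parent[young] = old
    root = find(order[0])
    bars.append((y[root], np.inf))
    return bars

print(sublevel_persistence_0d([0.0, 2.0, 1.0, 3.0, 0.5]))
# [(1.0, 2.0), (0.5, 3.0), (0.0, inf)]
```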
Vietoris–Rips filtration
The Vietoris–Rips filtration is used for studying Point Cloud Data (PCD). It creates a sequence of simplicial complexes, built on a metric space, used to add topological structure to an otherwise disconnected set of points [22, Chapter III]. Figure 2c–e shows a graphical representation of this approach.
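As an illustration of the definition (all names are ours), the following naive sketch builds the Vietoris–Rips complex at a fixed scale: a simplex is included as soon as all pairwise distances among its vertices are at most eps.

```python
import numpy as np
from itertools import combinations

def vietoris_rips(points, eps, max_dim=2):
    """Naive Vietoris-Rips complex at scale eps, up to dimension max_dim."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    simplices = [(i,) for i in range(n)]
    for k in range(2, max_dim + 2):          # simplices with k vertices
        for s in combinations(range(n), k):
            if all(dist[i, j] <= eps for i, j in combinations(s, 2)):
                simplices.append(s)
    return simplices

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(vietoris_rips(square, eps=1.0))
# vertices plus 4 edges forming a cycle: a 1-dimensional hole
print(len(vietoris_rips(square, eps=1.5)))
# 14: the diagonals and triangles enter, and the hole is filled
```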
Persistent homology
Persistent homology is the combinatorial counterpart of homology, an algebraic object that counts the number of n-dimensional holes in a topological space, the so-called Betti numbers. The filtration process is necessary for the computation of persistent homology. The set of Betti numbers is composed of \(\beta _0\), the number of connected components in a generic topological space K; \(\beta _1\), the number of holes in K; \(\beta _2\), the number of voids in K; and so on. Along the filtration, persistent homology calculates \(k\)-dimensional Betti intervals: a \(k\)-dimensional Betti interval \([t_{start}, t_{end}]\) records the time at which a k-dimensional hole appears in the simplicial complex (\(t_{start}\)) and the time at which it disappears (\(t_{end}\)). The holes that are still present at \(t_{end}= t_{max}\) correspond to persistent topological features [23]. A graphical representation of those intervals is called a persistence barcode and is associated to a filtration. An equivalent representation is the persistence diagram [24]. Additional information returned by the computation of persistent homology is the list of generators, i.e. the simplices involved in the holes. Experimentally, the generators play a crucial role in the description of the data under analysis [25, 26].
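As a hedged illustration of barcodes in practice (not part of this paper’s pipeline, which uses JavaPlex [31]), the ripser Python package can compute the persistence diagram of a noisy circle, where a single long 1-dimensional interval reveals the hole:

```python
# Sketch, assuming the ripser package (https://pypi.org/project/ripser/).
import numpy as np
from ripser import ripser

# Sample 100 noisy points from a circle: one persistent 1-dimensional hole.
theta = np.random.uniform(0, 2 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)] + np.random.normal(0, 0.05, (100, 2))

dgms = ripser(X, maxdim=1)["dgms"]
h1 = dgms[1]                               # 1-dim (birth, death) pairs
print(h1[np.argmax(h1[:, 1] - h1[:, 0])])  # the longest interval: the circle
```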
Persistent entropy
A new entropy measure, called persistent entropy, has recently been introduced for measuring how much the construction of a filtered simplicial complex is “ordered” [27]. Given a topological space K and a set of filtration parameters F, let \(B = \{[x_i , y_i) \mid i \in I\}\), where I is a set of indexes, be the persistence barcode associated to the filtration of K. The persistent entropy H of the filtered simplicial complex is calculated as follows:
$$\begin{aligned} H=-\sum _{i \in I} p_i \log {(p_i)} \end{aligned}$$
where \(p_i=\frac{\ell _i}{L}\), \(\ell _i=y_i - x_i\), and \(L=\sum _{i\in I}\ell _i\). In the case of a persistent interval \([x_i~,~\infty )\), the interval \([x_i~,~m)\) is used instead, where \(m = \max \{F\} + 1\). Moreover, to rescale H to the interval [0, 1] and to compare values from different barcodes, we exploit the stability theorem for H and use the normalised entropy, denoted by \(\mathcal {H}\) and defined as:
$$\begin{aligned} \mathcal {H} = \frac{H}{\log {\ell _{\rm max}}} \end{aligned}$$
where \(\ell _{\rm max}\) is the maximum interval length in the considered barcode [21].
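These formulas translate directly into code; a minimal sketch (the helper name is ours):

```python
import numpy as np

def persistent_entropy(bars, f_max=None):
    """Persistent entropy H of a barcode and its normalised version.
    bars: (x_i, y_i) intervals; infinite endpoints are replaced by
    m = max(F) + 1, supplied here via f_max."""
    bars = np.asarray(bars, dtype=float)
    if f_max is not None:
        bars[np.isinf(bars[:, 1]), 1] = f_max + 1
    lengths = bars[:, 1] - bars[:, 0]        # l_i = y_i - x_i
    p = lengths / lengths.sum()              # p_i = l_i / L
    H = -np.sum(p * np.log(p))               # H = -sum p_i log p_i
    return H, H / np.log(lengths.max())      # normalised: H / log(l_max)

H, Hn = persistent_entropy([(0.0, 2.0), (0.0, 6.0), (1.0, np.inf)], f_max=8)
print(H, Hn)   # approx. 0.974 and 0.469
```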
A new topological classifier for epilepsy
Given the above theoretical framework, let us define a new methodology for the analysis of EEG signals. It can be divided into three steps:
- Step I: preprocessing of the input.
- Step II: computation of \(\mathcal {H}\) using the Piecewise filtration and derivation of a linear topological classifier (LTC).
- Step III: identification of the regions involved in the spreading of the epileptic signals using the Vietoris–Rips filtration.
Step I
Let \(j \in \{1, 2, \ldots , 66\}\) be the index of the EEG recordings, denoted by \(\mathbb {S}^j\). Each \(\mathbb {S}^j\) is composed of 23 one-dimensional signals, \(\mathbb {S}^j=\{S_1^j, S_2^j, \ldots , S_{23}^j\}\), and each \(S_i^j \subset \mathbb {R}^2\) is a PL function. The length of each \(S_i^j\) is N, the number of samples. For each \(S_i^j\) the preprocessing performs two actions:
1. Filtering reduces the noise in the EEG by using a bandpass filter between 1 and 70 Hz, and removes power-line interference using a notch filter between 48 and 52 Hz [28,29,30].
2. Downsampling reduces the time needed for the computation of the topological features during the subsequent steps. The worst-case complexity of computing persistent homology using the JavaPlex tool [31] is cubic in the number of simplices, and this number is linear with respect to the number of points in the case of piecewise complexes. Downsampling should be used if and only if it preserves the main geometrical characteristics of the original signals, that is, their shape. In MATLAB we used the command “decimate” [32]. A sketch of both actions is given below.
After the preprocessing, the signals are denoted by \(\tilde{S}_i^j\).
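A minimal sketch of Step I in Python/SciPy (the paper used MATLAB): the cut-off frequencies come from the text, while the filter orders, the band-stop realisation of the notch, and the downsampling factor q are illustrative assumptions.

```python
import numpy as np
from scipy import signal

FS = 256  # sampling frequency (Hz)

def preprocess(channel, q=2):
    """Step I sketch: band-pass 1-70 Hz, band-stop 48-52 Hz (power line),
    then downsample by q (scipy's decimate mirrors MATLAB's)."""
    bp = signal.butter(4, [1, 70], btype="bandpass", fs=FS, output="sos")
    bs = signal.butter(4, [48, 52], btype="bandstop", fs=FS, output="sos")
    x = signal.sosfiltfilt(bp, channel)   # zero-phase band-pass
    x = signal.sosfiltfilt(bs, x)         # zero-phase notch
    return signal.decimate(x, q)          # anti-aliased downsampling

x = preprocess(np.random.randn(921_600))
print(len(x))   # 460800 samples, i.e. 128 Hz after q=2
```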
Step II
After performing the Piecewise filtration, we computed \(\mathcal {H}\) for each \(\tilde{\mathbb {S}}^j\), thus obtaining a vector of 23 values of \(\mathcal {H}\). Then, we calculated the average value of this vector, \(\widehat{\mathcal {H}^j}\). \(\widehat{\mathcal {H}^j}\) is our one-dimensional feature, able to differentiate signals by looking at their shapes [21].
For comparison, we repeated the procedure on the same dataset using Sample Entropy, a well-established technique in time series analysis [33, 34]. Finally, we trained an \(\mathcal {H}\)-based supervised LTC and an analogous Sample Entropy-based supervised classifier. We randomly divided the dataset into a training (\(70\%\)) and a testing (\(30\%\)) subset and applied 10-fold cross validation (see the sketch below).
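A hedged sketch of this protocol with scikit-learn; the feature values and the choice of logistic regression as the linear model are placeholders, since this section does not fix a specific implementation of the LTC.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

# H_hat: 66 averaged persistent-entropy values, one per recording;
# labels: 1 = at least one epileptic event, 0 = none.  Both would come
# from Steps I-II; random values stand in here.
H_hat = np.random.rand(66, 1)
labels = np.r_[np.ones(33), np.zeros(33)]

X_train, X_test, y_train, y_test = train_test_split(
    H_hat, labels, test_size=0.30, random_state=0)  # 70% / 30% split

clf = LogisticRegression()                  # linear stand-in for the LTC
scores = cross_val_score(clf, X_train, y_train, cv=10)  # 10-fold CV
print(scores.mean(), clf.fit(X_train, y_train).score(X_test, y_test))
```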
Step III
Let us consider our dataset as a PCD: each patient is represented by 23 points in \(\mathbb {R}^{N}\). Assuming that the generators of the persistent holes correspond to the sensors on the head of the patient, we applied the Vietoris–Rips filtration to determine which particular sensors (thus, which areas of the brain) are more “involved”/“significant” in the spreading of epileptic seizures. The standardised Euclidean distance among sensors, see Fig. 3g, is the metric upon which we performed the Vietoris–Rips filtration. This metric is useful when the dataset contains variables with heterogeneous scales, and it is defined as:
$$\begin{aligned} d(\tilde{S},\tilde{S}')= \sqrt{ \sum _{k=1}^N \Big ( \frac{\tilde{S}|_k}{s_k} - \frac{\tilde{S}'|_k}{s_k} \Big )^2} \end{aligned}$$
where \(\tilde{S}|_k\) stands for the y-component of \((x_k,y_k)\) in the channel signal \(\tilde{S}\), and \(s_k\) is the sample standard deviation calculated among the 23 y-components at position k of the recording \(\tilde{\mathbb {S}}\) to which \(\tilde{S}\) belongs.
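Step III can be sketched as follows. SciPy’s 'seuclidean' metric matches the definition above (each coordinate is scaled by the sample standard deviation computed across the 23 points), and the resulting distance matrix is fed to a Vietoris–Rips computation. We use the ripser package here only for illustration; the paper extracted the generators with JavaPlex [31], which ripser does not return directly.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from ripser import ripser

# channels: 23 x N matrix, one preprocessed signal per sensor (Step I
# output); random values stand in for a real recording here.
channels = np.random.randn(23, 4096)

# Standardised Euclidean distance: each position k is scaled by the
# sample standard deviation s_k across the 23 channels.
D = squareform(pdist(channels, metric="seuclidean"))

# Vietoris-Rips persistent homology on the precomputed distance matrix.
dgms = ripser(D, distance_matrix=True, maxdim=1)["dgms"]
print(dgms[0].shape, dgms[1].shape)   # 0- and 1-dimensional intervals
```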