FlipediFlop 9 months ago
parent commit b768394f78
1 changed file with 157 additions and 189 deletions
chapters/chap04/chap04.tex  (+157 −189)

@@ -9,48 +9,51 @@
 % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 \chapter{Experiments   10}
 \label{chap:evaluation}
-In the previous chapters we explained the methods (see~\Cref{chap:methods})
+In the preceding chapters, we explained the methods (see~\Cref{chap:methods})
 based on the theoretical background that we established in~\Cref{chap:background}.
-In this chapter, we present the setups and results from the experiments and
-simulations, we ran. First, we tackle the experiments dedicated to find the
-epidemiological parameters of $\beta$ and $\alpha$ in synthetic and real-world
-data. Second, we identify the reproduction number in synthetic and real-world
-data of Germany. Each section, is divided in the setup and the results of the
-experiments.
+In this chapter, we present the setups and results from the experiments and
+simulations that we ran. First, we discuss the experiments dedicated to
+identifying the epidemiological parameters $\beta$ and $\alpha$ in synthetic
+and real-world data. Second, we examine the reproduction number in synthetic
+and real-world data of Germany. Each section is divided into a description of
+the experimental setup and the results.
 
 % -------------------------------------------------------------------
 
 \section{Identifying the Transition Rates on Real-World and Synthetic Data  5}
 \label{sec:sir}
-In this section we seek to find the transmission rate $\beta$ and the recovery
-rate $\alpha$ from either synthetic or preprocessed real-world data. The
-methodology that we employ to identify the transition rates is described
-in~\Cref{sec:pinn:sir}. Meanwhile, the methods we use to preprocess the
-real-world data is to be found in~\Cref{sec:preprocessing:rq}.
+In this section, we aim to identify the transmission rate $\beta$ and the
+recovery rate $\alpha$ from either synthetic or preprocessed real-world data.
+The methodology that we employ to identify the transition rates is described
+in~\Cref{sec:pinn:sir}, while the methods we use to preprocess the real-world
+data are detailed in~\Cref{sec:preprocessing:rq}.
 
 % -------------------------------------------------------------------
 
 \subsection{Setup   1}
 \label{sec:sir:setup}
 
-In this section we show the setups for the training of our PINNs, that are
-supposed to find the transition parameters. This includes the specific
-parameters for the preprocessing and the configuration of the PINN their
-selves.\\
+In this subsection, we present the configurations for the training of our
+PINNs, which are designed to identify the transition parameters. This
+encompasses the specific parameters for the preprocessing and the
+configuration of the PINNs themselves.\\
 
-In order to validate our method we first generate a dataset of synthetic data.
-We conduct this by solving~\Cref{eq:modSIR} for a given set of parameters.
+In order to validate our method, we first generate a dataset of synthetic data.
+We achieve this by solving~\Cref{eq:modSIR} for a given set of parameters.
 The parameters are set to $\alpha = \nicefrac{1}{3}$ and $\beta = \nicefrac{1}{2}$.
 The size of the population is $N = \expnumber{7.6}{6}$ and the initial amount of
-infectious individuals of is $I_0 = 10$. We simulate over 150 days and get a
-dataset of the form of~\Cref{fig:synthetic_SIR}.\\For the real-world RKI data we
-preprocess the raw data of each state and Germany separately using a
-recovery queue with a recovery period of 14 days. As for the population size of
-each state we set it to the respective value counted at the end of 2019\footnote{\url{https://de.statista.com/statistik/kategorien/kategorie/8/themen/63/branche/demographie/\#overview}}.
+infectious individuals is $I_0 = 10$. We conduct the simulation over 150
+days, resulting in a dataset of the form shown in~\Cref{fig:synthetic_SIR}.\\
+To process the real-world RKI data, we preprocess the raw data for each state
+and for Germany separately, utilizing a recovery queue with a recovery period
+of 14 days. We set the population size of each state to the respective value
+counted at the end of
+2019\footnote{\url{https://de.statista.com/statistik/kategorien/kategorie/8/themen/63/branche/demographie/\#overview}}.
 The initial number of infectious individuals is set to the number of infected
 people on March 9, 2020 from the dataset. The data we extract spans from
-March 09. 2020 to June 22. 2023, which is a span of 1200 days and covers the time
-in which the COVID-19 disease was the most active and severe.
+March 9, 2020 to June 22, 2023, encompassing a period of 1200 days and
+covering the time span during which COVID-19 was at its most active and
+severe.
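+% A minimal sketch of how such a synthetic dataset can be generated with
+% SciPy, assuming \Cref{eq:modSIR} is the standard SIR system; all names are
+% illustrative and the actual implementation may differ.
+%
+% import numpy as np
+% from scipy.integrate import solve_ivp
+%
+% N, I0, alpha, beta = 7.6e6, 10.0, 1 / 3, 1 / 2
+%
+% def sir(t, y):
+%     S, I, R = y
+%     new_infections = beta * S * I / N
+%     recoveries = alpha * I
+%     return [-new_infections, new_infections - recoveries, recoveries]
+%
+% t_eval = np.arange(0, 150)                         # 150 simulated days
+% sol = solve_ivp(sir, (0, 150), [N - I0, I0, 0.0], t_eval=t_eval)
+% S, I, R = sol.y                                    # synthetic S(t), I(t), R(t)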
 
 
 \begin{figure}[h]
     %\centering
@@ -101,16 +104,16 @@ in which the COVID-19 disease was the most active and severe.
     \label{fig:datasets_sir}
 \end{figure}
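+% A sketch of the recovery-queue preprocessing described above, assuming
+% `new_cases` holds the daily number of newly reported infections from the
+% RKI data; names and data layout are illustrative.
+%
+% import numpy as np
+%
+% def sir_from_cases(new_cases, population, recovery_days=14):
+%     S = np.empty(len(new_cases))
+%     I = np.empty(len(new_cases))
+%     R = np.empty(len(new_cases))
+%     s, i, r = float(population), 0.0, 0.0
+%     for t, cases in enumerate(new_cases):
+%         s -= cases                       # newly infected leave S ...
+%         i += cases                       # ... and enter I
+%         if t >= recovery_days:           # after 14 days the cases of day
+%             recovered = new_cases[t - recovery_days]   # t - 14 are removed
+%             i -= recovered
+%             r += recovered
+%         S[t], I[t], R[t] = s, i, r
+%     return S, I, R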
 
 
-The PINN that we employ consists of seven hidden layers with twenty neurons
-each and an activation function of ReLU. For training, we use the Adam optimizer
-and the polynomial scheduler of the pytorch library with a base learning rate
+The PINN that we utilize comprises seven hidden layers with twenty neurons
+each and uses ReLU as its activation function. For training, we employ the
+Adam optimizer and the polynomial scheduler of the PyTorch library with a
+base learning rate
 of $\expnumber{1}{-3}$. We train the model for 10000 epochs to extract the
-parameters. For each set of parameters we do 5 iterations to show stability of
-the values. Our configuration is similar to the configuration, that Shaier
-\etal.~\cite{Shaier2021} use for their work aside from the learning rate and the
-scheduler choice.\\
+parameters. For each set of parameters, we conduct five iterations to
+demonstrate the stability of the values. Our configuration is similar to the
+one that Shaier \etal~\cite{Shaier2021} use for their work, aside from the
+learning rate and the scheduler choice.\\
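+% A sketch of the network and training configuration described above (seven
+% hidden layers of twenty neurons, ReLU, Adam, PyTorch's polynomial scheduler,
+% base learning rate 1e-3, 10000 epochs). The data tensor is a placeholder and
+% the ODE residual of \Cref{eq:modSIR} is only indicated in a comment.
+%
+% import torch
+% import torch.nn as nn
+%
+% layers = [nn.Linear(1, 20), nn.ReLU()]            # input: time t
+% for _ in range(6):                                # six further hidden layers
+%     layers += [nn.Linear(20, 20), nn.ReLU()]
+% model = nn.Sequential(*layers, nn.Linear(20, 3))  # output: S, I, R
+%
+% alpha = nn.Parameter(torch.tensor(0.1))           # trained alongside the
+% beta = nn.Parameter(torch.tensor(0.1))            # network weights
+%
+% optimizer = torch.optim.Adam(list(model.parameters()) + [alpha, beta], lr=1e-3)
+% scheduler = torch.optim.lr_scheduler.PolynomialLR(optimizer, total_iters=10000)
+%
+% t = torch.linspace(0, 1, 150).reshape(-1, 1)      # normalised time grid
+% data = torch.rand(150, 3)                         # placeholder for the SIR data
+%
+% for epoch in range(10000):
+%     optimizer.zero_grad()
+%     loss = torch.mean((model(t) - data) ** 2)     # data loss; the ODE residual
+%     loss.backward()                               # containing alpha and beta is
+%     optimizer.step()                              # added as in \Cref{sec:pinn:sir}
+%     scheduler.step()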
 
 
-In the next section we present the results of the simulations conducted with the
+The following section presents the results of the simulations conducted with the
 setups that we describe in this section.
 
 % -------------------------------------------------------------------
@@ -126,17 +129,18 @@ setups that we describe in this section.
     \label{fig:reprod}
 \end{figure}
 
-In this section we describe the results, that we obtain from the conducted
-experiments, that we describe in the preceding section. First we show the
-results for the synthetic dataset and look at the accuracy and reproducibility.
-Then we present and discuss the results for the German states and Germany.\\
+In this section, we present the results that we obtain from the experiments
+described in the preceding section. We begin by examining the results for the
+synthetic dataset, focusing on accuracy and reproducibility. We then proceed
+to present and discuss the results for the German states and Germany.\\
 
 The results of the experiment regarding the synthetic data can be seen
 in~\Cref{table:alpha_beta_synth} and in~\Cref{fig:reprod}.~\Cref{fig:reprod}
-shows the values of $\beta$ and $\alpha$ of each iteration compared to the true
+depicts the values of $\beta$ and $\alpha$ for each iteration in comparison to the true
 values of $\beta=\nicefrac{1}{2}$ and $\alpha=\nicefrac{1}{3}$. In~\Cref{table:alpha_beta_synth}
-we present the mean $\mu$ and standard variation $\sigma$ of both values across
-all 5 iterations.\\
+we present the mean $\mu$ and standard deviation $\sigma$ of both values across
+all five iterations.\\
 
 \begin{table}[h]
     \begin{center}
@@ -145,23 +149,25 @@ all 5 iterations.\\
             \hline
             0.3333        & 0.3334        & 0.0011           & 0.5000       & 0.5000       & 0.0017          \\
         \end{tabular}
-        \caption{The mean $\mu$ and standard variation $\sigma$ across the 5
+        \caption{The mean $\mu$ and standard deviation $\sigma$ across the 5
             independent iterations of training our PINNs with the synthetic dataset.}
         \label{table:alpha_beta_synth}
     \end{center}
 \end{table}
-From the results we can see that the model is able to approximate the correct
-parameters for the small, synthetic dataset in each of the 5 iterations. Even
-though the predicted value is never exactly correct, the standard deviation is
-negligible small and taking the mean of multiple iterations yields an almost
-perfect result.\\
+
+The results demonstrate that the model is capable of approximating the correct
+parameters for the small, synthetic dataset in each of the five iterations.
+While the predicted values are not precisely accurate, the standard deviation
+is sufficiently small, and taking the mean of multiple iterations produces an
+almost perfect result.\\
 
 In~\Cref{table:alpha_beta} we present the results of the training for the
-real-world data. These are presented from top to bottom, in the order of the
-community identification number, with the last entry being Germany. $\mu$ and
-$\sigma$ are both calculated across all 5 iterations of our experiment. We can
-see that the values of \emph{Hamburg} have the highest standard deviation, while
-\emph{Mecklenburg Vorpommern} has the smallest $\sigma$.\\
+real-world data. The results are presented from top to bottom, in the order of
+the community identification number, with the last entry being Germany. Both
+the mean $\mu$ and the standard deviation $\sigma$ are calculated across all
+five iterations of our experiment. We can observe that the values of
+\emph{Hamburg} have the highest standard deviation, while \emph{Mecklenburg Vorpommern}
+has the lowest $\sigma$.\\
 
 \begin{table}[h]
     \begin{center}
@@ -186,39 +192,47 @@ see that the values of \emph{Hamburg} have the highest standard deviation, while
             Thüringen              & 0.0952        & 0.0011           & 0.1248       & 0.0016          \\
             Germany                & 0.0803        & 0.0012           & 0.1044       & 0.0014          \\
         \end{tabular}
-        \caption{Mean and standard variation across the 5 iterations, that we
+        \caption{Mean and standard deviation across the five iterations that we
             conducted for each German state and for Germany as a whole.}
         \label{table:alpha_beta}
     \end{center}
 \end{table}
 
-In~\Cref{fig:alpha_beta_mean_std} we visualize the means and standard variations
-in contrast to the national values. The states with the highest transmission rate
-values are Thuringia, Saxony Anhalt and Mecklenburg West-Pomerania. It is also,
-visible that all six of the eastern states have a higher transmission rate than
-Germany. These results may be explainable with the ratio of vaccinated individuals\footnote{\url{https://impfdashboard.de/}}.
-The eastern state have a comparably low complete vaccination ratio, accept for
-Berlin. While Berlin has a moderate vaccination ratio, it is also a hub of
-mobility, which means that contact between individuals happens much more often. This is also a reason for Hamburg being a state with an above national standard rate of transmission.
-\\
-
-
-
-We visualize these numbers in~\Cref{fig:alpha_beta_mean_std},
-where all means and standard variations are plotted as points, while the values
-for Germany are also plotted as lines to make a classification easier. It is
-visible that Hamburg, Baden-Württemberg, Bayern and all six of the states that
-lie in the eastern part of Germany have a higher transmission rate $\beta$ than
-overall Germany. Furthermore, it can be observed, that all values for the
-recovery $\alpha$ seem to be correlating to the value of $\beta$, which can be
-explained with the assumption that we make when we preprocess the data using the
-recovery queue by setting the recovery time to 14 days.
-\begin{figure}[h]
+\begin{figure}[t]
     \centering
     \centering
     \includegraphics[width=\textwidth]{mean_std_alpha_beta_res.pdf}
+    \caption{Mean and standard deviation of $\beta$ and $\alpha$ for each
+        German state in comparison to the national values.}
     \label{fig:alpha_beta_mean_std}
 \end{figure}
 
+means and standard deviations in comparison to the national values. It is
+noteworthy that the states of Saxony-Anhalt and Thuringia have the highest
+transmission rates of all states, while Bremen and Hessen have the lowest
+values for $\beta$. The transmission rates of Hamburg, Baden Württemberg,
+Bavaria, and all eastern states lay above the national rate of transmission.
+Similarly, the recovery rate yields comparable outcomes. For the recovery rate,
+the same states that exhibit a transmission rate exceeding the national value,
+have a higher recovery rate than the national standard, with the exception of
+Saxony.It is noteworthy that the recovery rates of all states exhibit a
+tendency to align with the recovery rate of $\alpha=\nicefrac{1}{14}$, which is
+equivalent to a recovery period of 14 days.\\
+
+It is evident that there is a correlation between the values of $\alpha$ and
+$\beta$ for each state. States with a high transmission rate tend to have a
+high recovery rate, and vice versa. The correlation between $\alpha$ and
+$\beta$ can be explained by the implicit definition of $\alpha$ through the
+recovery queue with its constant recovery period of 14 days. This might result
+in the PINN not learning $\alpha$ as a standalone parameter but rather as a
+function of the transmission rate $\beta$. This phenomenon occurs because the
+transmission rate determines the number of individuals that get infected per
+day, and the recovery queue moves a proportional number of people to the
+removed compartment. Consequently, a number of people defined by $\beta$ move
+to the $R$ compartment 14 days after they were infected.\\
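+% To make the coupling explicit (a sketch of the argument, in the notation of
+% the SIR model): with the recovery queue, the removals on day $t$ are exactly
+% the new infections of day $t-14$,
+% \[
+%     \frac{dR}{dt}(t) \;\approx\; \beta\,\frac{S(t-14)\,I(t-14)}{N},
+% \]
+% so the removal term is a delayed copy of the infection term rather than an
+% independent $\alpha I(t)$, which ties the identified $\alpha$ to $\beta$.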
+
+This issue can be addressed by reducing the SIR model, thereby eliminating the
+significance of the $R$ compartment size. In the following section, we present
+our experiments for the reduced SIR model with time-independent parameters.
+
 % -------------------------------------------------------------------
 
 \section{Reduced SIR Model   5}
@@ -234,128 +248,82 @@ are described in~\Cref{sec:pinn:rsir}.
 
 \subsection{Setup    1}
 \label{sec:rsir:setup}
-In this section we describe the choice of parameters and configuration for data
-generation, preprocessing and the neural networks. We use these setups to train
-the PINNs to find the reproduction number on both synthetic and real-world data.\\
-
-For validation reasons we create a synthetic dataset, by setting the parameters
-of $\alpha$ and $\beta$ each to a specific value, and solving~\Cref{eq:modSIR}
-for a given time interval. We set $\alpha=\nicefrac{1}{3}$ and
-$\beta=\nicefrac{1}{2}$ as well as the population size $N=\expnumber{7.6}{6}$
-and the initial amount of infected people to $I_0=10$. Furthermore, we set our
-simulated time span to 150 days.We will use this dataset to show, that our
-method is working on a simple and minimal dataset.\\ For the real-world data we
-we processed the data of the dataset \emph{COVID-19-Todesfälle in Deutschland}
-to extract the number of infections in the whole of Germany, while we used the
-data of \emph{SARS-CoV-2 Infektionen in Deutschland} for the German states. For
-the preprocessing we use a constant rate for $\alpha$ to move individual into
-the removed compartment. First we choose $\alpha = \nicefrac{1}{14}$ as this is
-covers the time of recovery\footnote{\url{https://github.com/robert-koch-institut/SARS-CoV-2-Infektionen_in_Deutschland.git}}.
-Second we use $\alpha=\nicefrac{1}{5}$ since the peak of infectiousness is
-reached right in front or at 5 days into the infection\footnote{\url{https://www.infektionsschutz.de/coronavirus/fragen-und-antworten/ansteckung-uebertragung-und-krankheitsverlauf/}}.
-Just as in~\Cref{sec:sir} we set the population size $N$ of each state and
-Germany to the corresponding size at the end of 2019. Also, for the same reason
-we restrict the data points to an interval of 1200 days starting from March 09.
-2020.
-\begin{figure}[h]
-    %\centering
-    \setlength{\unitlength}{1cm} % Set the unit length for coordinates
-    \begin{picture}(12, 14.5) % Specify the size of the picture environment (width, height)
-        \put(0, 10){
-            \begin{subfigure}{0.3\textwidth}
-                \centering
-                \includegraphics[width=\textwidth]{I_synth.pdf}
-                \caption{Synthetic data}
-                \label{fig:synthetic_I}
-            \end{subfigure}
-        }
-        \put(4.75, 10){
-            \begin{subfigure}{0.3\textwidth}
-                \centering
-                \includegraphics[width=\textwidth]{datasets_states/Germany_I_14.pdf}
-                \caption{Germany with $\alpha=\nicefrac{1}{14}$}
-                \label{fig:germany_I_14}
-            \end{subfigure}
-        }
-        \put(9.5, 10){
-            \begin{subfigure}{0.3\textwidth}
-                \centering
-                \includegraphics[width=\textwidth]{datasets_states/Germany_I_5.pdf}
-                \caption{Germany with $\alpha=\nicefrac{1}{5}$}
-                \label{fig:germany_I_5}
-            \end{subfigure}
-        }
-        \put(0, 5){
-            \begin{subfigure}{0.3\textwidth}
-                \centering
-                \includegraphics[width=\textwidth]{datasets_states/Nordrhein_Westfalen_I_14.pdf}
-                \caption{NRW with $\alpha=\nicefrac{1}{14}$}
-                \label{fig:schleswig_holstein_I_14}
-            \end{subfigure}
-        }
-        \put(4.75, 5){
-            \begin{subfigure}{0.3\textwidth}
-                \centering
-                \includegraphics[width=\textwidth]{datasets_states/Hessen_I_14.pdf}
-                \caption{Hessen with $\alpha=\nicefrac{1}{14}$}
-                \label{fig:berlin_I_14}
-            \end{subfigure}
-        }
-        \put(9.5, 5){
-            \begin{subfigure}{0.3\textwidth}
-                \centering
-                \includegraphics[width=\textwidth]{datasets_states/Thueringen_I_14.pdf}
-                \caption{Thüringen with $\alpha=\nicefrac{1}{14}$}
-                \label{fig:thüringen_I_14}
-            \end{subfigure}
-        }
-        \put(0, 0){
-            \begin{subfigure}{0.3\textwidth}
-                \centering
-                \includegraphics[width=\textwidth]{datasets_states/Nordrhein_Westfalen_I_5.pdf}
-                \caption{NRW with $\alpha=\nicefrac{1}{5}$}
-                \label{fig:schleswig_holstein_I_5}
-            \end{subfigure}
-        }
-        \put(4.75, 0){
-            \begin{subfigure}{0.3\textwidth}
-                \centering
-                \includegraphics[width=\textwidth]{datasets_states/Hessen_I_5.pdf}
-                \caption{Hessen with $\alpha=\nicefrac{1}{5}$}
-                \label{fig:berlin_I_5}
-            \end{subfigure}
-        }
-        \put(9.5, 0){
-            \begin{subfigure}{0.3\textwidth}
-                \centering
-                \includegraphics[width=\textwidth]{datasets_states/Thueringen_I_5.pdf}
-                \caption{Thüringen with $\alpha=\nicefrac{1}{5}$}
-                \label{fig:thüringen_I_5}
-            \end{subfigure}
-        }
+This section outlines the selection of parameters and configuration for data
+generation, preprocessing, and the neural networks. We employ these setups to
+train the PINNs to identify the reproduction number on both synthetic and
+real-world data.\\
+
+For the purposes of validation, we create a synthetic dataset by setting the
+parameter $\alpha$ and the reproduction number each to a specific value and
+solving~\Cref{eq:reduced_sir_ODE} for a given time interval. We set
+$\alpha=\nicefrac{1}{3}$ and $\Rt$ to the values shown
+in~\Cref{fig:synthetic_I_r_t}, as well as the population size
+$N=\expnumber{7.6}{6}$ and the initial amount of infected people to $I_0=10$.
+Furthermore, we set our simulated time span to 150 days. We use this dataset
+to demonstrate that our method works on a simple and minimal dataset.\\
+To obtain a real-world dataset of the infectious group, we process the dataset
+\emph{COVID-19-Todesfälle in Deutschland} to extract the number of infections
+in Germany as a whole. For the German states, we use the data of \emph{SARS-CoV-2
+    Infektionen in Deutschland}. In the preprocessing stage, we employ a constant
+rate $\alpha$ to move individuals into the removed compartment. For each state,
+we generate two datasets with different recovery rates. First, we choose
+$\alpha = \nicefrac{1}{14}$, which aligns with the time of recovery\footnote{\url{https://github.com/robert-koch-institut/SARS-CoV-2-Infektionen_in_Deutschland.git}}.
+Second, we use $\alpha=\nicefrac{1}{5}$, as the peak of infectiousness is
+reached around 5 days into the infection\footnote{\url{https://www.infektionsschutz.de/coronavirus/fragen-und-antworten/ansteckung-uebertragung-und-krankheitsverlauf/}}.
+As in~\Cref{sec:sir}, we set the population size $N$ of each state and Germany
+to the corresponding size at the end of 2019. Furthermore, for the same reason
+we restrict the data points to an interval of 1200 days, beginning on March 9,
+2020.\\
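+% A sketch of the constant-rate preprocessing described above, assuming
+% `new_cases` holds the daily new infections; with $\alpha$ set to 1/14 or
+% 1/5, a fixed fraction of the current infectious compartment is removed per
+% day. The case counts used here are placeholders.
+%
+% import numpy as np
+%
+% def infectious_from_cases(new_cases, alpha):
+%     I = np.empty(len(new_cases))
+%     i = 0.0
+%     for t, cases in enumerate(new_cases):
+%         i += cases - alpha * i    # inflow of new cases, constant-rate outflow
+%         I[t] = i
+%     return I
+%
+% rng = np.random.default_rng(0)
+% daily = rng.integers(0, 1000, size=1200).astype(float)   # placeholder counts
+% I_14 = infectious_from_cases(daily, alpha=1 / 14)         # 14-day recovery
+% I_5 = infectious_from_cases(daily, alpha=1 / 5)           # 5-day recovery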
 
 
-    \end{picture}
-    \caption{Visualization of the datasets for the training process.
-        Illustration (a) is the synthetic data. For the real-world data we use a
-        dataset with $\alpha=\nicefrac{1}{14}$ and $\alpha=\nicefrac{1}{5}$ each.
-        (b) and (c) for Germany, (d) and (g) for Nordrhein-Westfalen (NRW), (e) and (h)
-        for Hessen, and (f) and (i) for Thüringen.}
-    \label{fig:i_datasets}
-\end{figure}
-
-For this task the chosen architecture of the neural network consists of 4 hidden
-layers with each 100 neurons. The activation function is the tangens
-hyperbolicus function tanh. We weight the data loss with a weight of
-$\expnumber{1}{6}$ into the total loss. The model is trained using a base
-learning rate of $\expnumber{1}{-3}$ with the same scheduler and optimizer as
-we use in~\Cref{sec:sir:setup}. We train the model for 20000 epochs. Also, we
-conduct each experiment 15 times to reduce the standard deviation.
+\begin{figure}[t]
+    \centering
+    \begin{subfigure}{0.3\textwidth}
+        \centering
+        \includegraphics[width=\textwidth]{I_synth.pdf}
+        \caption{Synthetic infection data}
+        \label{fig:synthetic_I}
+    \end{subfigure}
+    \quad
+    \begin{subfigure}{0.3\textwidth}
+        \centering
+        \includegraphics[width=\textwidth]{I_synth_r_t.pdf}
+        \caption{Synthetic reproduction number}
+        \label{fig:synthetic_I_r_t}
+    \end{subfigure}
+    \vskip\baselineskip
+    \begin{subfigure}{0.67\textwidth}
+        \centering
+        \includegraphics[width=\textwidth]{datasets_states/Germany_datasets.pdf}
+        \caption{Real-world data for Germany}
+        \label{fig:germany_I_14}
+    \end{subfigure}
+    \caption{Visualization of the datasets for the training process: (a) the
+        synthetic infection data, (b) the synthetic reproduction number, and
+        (c) the preprocessed real-world data for Germany.}
+    \label{fig:i_datasets}
+\end{figure}
 
+For this task, the selected neural network architecture comprises four hidden
+layers, each containing 100 neurons. The activation function is the hyperbolic
+tangent (tanh). For the real-world data, we weight the data loss by a factor
+of $\expnumber{1}{6}$ in the total loss. The model is trained using a base
+learning rate of $\expnumber{1}{-3}$, with the same scheduler and optimizer as
+we describe in~\Cref{sec:sir:setup}. We train the model for 20000 epochs. To
+reduce the standard deviation, each experiment is conducted 15 times.\\
 
 
 % -------------------------------------------------------------------
 
 \subsection{Results   4}
 \label{sec:rsir:results}
 
+In this section, we provide the results of our experiments. First, we present
+our findings for the synthetic dataset. Then, we present and discuss the
+results for the real-world data.\\
+
+\begin{figure}
+    \centering
+    \includegraphics[width=\textwidth]{synthetic_R_t_statistics.pdf}
+    \caption{Results of the reproduction number estimation on the synthetic dataset.}
+    \label{fig:synth_r_t_results}
+\end{figure}
+
+
 % -------------------------------------------------------------------