% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Author: Phillip Rothenbeck
% Title: Investigating the Evolution of the COVID-19 Pandemic in Germany Using Physics-Informed Neural Networks
% File: chap02/chap02.tex
% Part: theoretical background
% Description:
% summary of the content in this chapter
% Version: 20.08.2024
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Theoretical Background}
\label{chap:background}
This chapter introduces the theoretical foundations for the work presented in
this thesis. In~\Cref{sec:domain} and~\Cref{sec:differentialEq}, we describe
differential equations and the underlying theory. In these sections, both the
explanations and the approach are based on a book on analysis by
Rudin~\cite{Rudin2007} and a book about ordinary differential equations by
Tenenbaum and Pollard~\cite{Tenenbaum1985}. Subsequently, we employ this
knowledge to examine various pandemic models in~\Cref{sec:epidemModel}. Finally,
we address the topic of neural networks with a focus on the multilayer
perceptron in~\Cref{sec:mlp} and physics-informed neural networks
in~\Cref{sec:pinn}.
% -------------------------------------------------------------------
\section{Mathematical Modelling using Functions}
\label{sec:domain}
To model a physical problem mathematically, it is necessary to define a
fundamental set of numbers or quantities upon which the subsequent calculations
will be based. These sets may represent, for instance, a specific time interval
or a distance. The term \emph{domain} describes these fundamental sets of
numbers or quantities~\cite{Rudin2007}. A \emph{variable} is a changing entity
living in a certain domain. In this thesis, we will focus on domains of real
numbers in $\mathbb{R}$.\\
The mapping between variables enables the modeling of a physical process and may
capture its semantics. We use functions in order to facilitate this mapping. Let
$A, B\subset\mathbb{R}$ be two subsets of the real numbers; then we define a
function as the mapping
\begin{equation}
f: A\rightarrow B.
\end{equation}
In other words, the function $f$ maps elements $x\in A$ to values
$f(x)\in B$. $A$ is the \emph{domain} of $f$, while $B$ is the \emph{codomain}
of $f$. Functions are capable of representing the state of a system as a value
based on an input value from their domain. One illustrative example is a
function that maps a time step to the distance covered since a starting point.
In this case, time serves as the domain, while the distance is the codomain.
% -------------------------------------------------------------------
\section{Mathematical Modelling using Differential Equations}
\label{sec:differentialEq}
Often, the behavior of a variable or a quantity across a domain is more
interesting than its current state. Functions are able to give us the latter,
but do not contain information about the change of a system. The objective
is to determine an effective method for calculating the change of a function
across its domain. Let $f$ be a function and $[a, b]\subset \mathbb{R}$ an
interval of real numbers. The expression
\begin{equation}
m = \frac{f(b) - f(a)}{b-a}
\end{equation}
gives the average rate of change. While the average rate of change is useful in
many cases, the momentary rate of change is more accurate. To calculate it,
we need to narrow the interval down until it is infinitesimally small. For each $x\in[a, b]$
we calculate
\begin{equation} \label{eqn:differential}
\frac{df}{dx} = \lim_{t\to x} \frac{f(t) - f(x)}{t-x},
\end{equation}
if it exists. As Tenenbaum and Pollard~\cite{Tenenbaum1985} define,
$\nicefrac{df}{dx}$ is the \emph{derivative}, which is ``the rate of change of a
variable with respect to another''. The relation between a variable and its
derivative is modeled in a \emph{differential equation}. Differentiating
$\nicefrac{df}{dx}$ yields $\nicefrac{d^2f}{dx^2}$, the function that
calculates the rate of change of the rate of change, called the
\emph{second-order derivative}. Iterating this $n$ times results in
$\nicefrac{d^nf}{dx^n}$, the derivative of $n$-th order. A method for
obtaining a differential equation is to derive it from the semantics of a
problem. For example, in physics a differential equation can be derived from the
law of the conservation of energy~\cite{Demtroeder2021}. Differential equations
find application in several areas, such as engineering, \eg, Kirchhoff's
circuit laws~\cite{Kirchhoff1845}, which describe the relation between the voltage
and current in systems with resistors, inductors, and capacitors; physics, with,
\eg, the Schrödinger equation, which predicts the probability of finding
particles like electrons in specific places or states in a quantum system;
economics, with, \eg, the Black-Scholes equation~\cite{Oksendal2000}, which predicts the price
of financial derivatives, such as options, over time; epidemiology, with the SIR
model~\cite{1927}; and beyond.\\
In the context of functions, it is possible to have multiple domains, meaning
that a function has more than one parameter. To illustrate, consider a function
operating in two-dimensional space, wherein each parameter represents one axis.
Another example would be a function that maps a location variable
and a time variable onto a height. The term \emph{partial differential equations}
(PDEs) describes differential equations of such functions, which contain
partial derivatives with respect to each individual domain. In contrast, \emph{ordinary differential
equations} (ODEs) are differential equations of functions having only
one domain~\cite{Tenenbaum1985}. In this thesis, we restrict ourselves to ODEs.\\
A \emph{system of differential equations} is the name for a set of differential
equations. The derivatives in a system of differential equations each have their
own codomain, which is part of the problem, while they all share the same
domain.\\
Tenenbaum and Pollard~\cite{Tenenbaum1985} provide many examples of ODEs,
including the \emph{Motion of a Particle Along a Straight Line}. Further,
Newton's second law states that ``the rate of change of the momentum of a body
($\text{momentum} = \text{mass} \cdot \text{velocity}$) is proportional to the resultant external
force $F$ acted upon it''~\cite{Tenenbaum1985}. Let $m$ be the mass of the body
in kilograms, $v$ its velocity in meters per second and $t$ the time in seconds.
Then, Newton's second law translates mathematically to
\begin{equation} \label{eq:newtonSecLaw}
F = m\frac{dv}{dt}.
\end{equation}
It is evident that the acceleration $a=\frac{dv}{dt}$, as the rate of change of
the velocity, is part of the equation. Additionally, the velocity of a body is
the derivative of the distance $s$ traveled by that body. Based on these findings,
we can rewrite~\Cref{eq:newtonSecLaw} as
\begin{equation}
F=ma=m\frac{d^2s}{dt^2},
\end{equation}
showing that the force $F$ influences the change of the body's position over
time.\\
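To make this example concrete, the following minimal Python sketch integrates
$F=m\frac{d^2s}{dt^2}$ numerically with a simple forward Euler scheme. The
constant force, the mass, the step size, and the simulated time span are
hypothetical example values chosen only for illustration:
\begin{verbatim}
# Minimal sketch: integrating m * d^2s/dt^2 = F with forward Euler.
# F, m, dt and the number of steps are example values.
F = 2.0    # constant external force in Newtons
m = 1.0    # mass of the body in kilograms
dt = 0.01  # step size in seconds

s, v = 0.0, 0.0        # initial position and velocity
for _ in range(1000):  # simulate 10 seconds
    a = F / m          # acceleration from Newton's second law
    v += a * dt        # velocity is the integral of the acceleration
    s += v * dt        # position is the integral of the velocity

print(f"after 10 s: position {s:.1f} m, velocity {v:.1f} m/s")
\end{verbatim}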
To conclude, note that this explanation of differential equations focuses on the
aspects deemed crucial for this thesis and is not intended to be a complete
explanation of the subject. To gain a better understanding of it, we recommend
the books mentioned above~\cite{Rudin2007,Tenenbaum1985}. In the following
section, we describe the application of these principles in epidemiological
models.
% -------------------------------------------------------------------
\section{Epidemiological Models}
\label{sec:epidemModel}
Pandemics like \emph{COVID-19} have resulted in a significant
number of fatalities. Hence, the question arises: how should we analyze a
pandemic effectively? It is essential to study whether the employed
countermeasures are efficacious in combating the pandemic. Given the unfavorable
public response to measures such as lockdowns, it is imperative to verify
that their efficacy remains commensurate with the costs incurred by those
affected. In the event that alternative and novel technologies were in use, such
as the mRNA vaccines in the context of COVID-19, it is necessary to test their
effect and find the optimal variant. In order to shed light on the
aforementioned events, we need a method to quantify the pandemic along with its
course of progression.\\
The real world is a highly complex system, which makes it challenging to
describe fully in a mathematical model. Therefore,
the model must reduce the complexity while retaining the essential information.
Furthermore, it must address the issue of limited data availability. For
instance, during COVID-19, institutions such as the Robert Koch Institute
(RKI)\footnote[1]{\url{https://www.rki.de/EN/Home/homepage_node.html}} were only
able to collect data on infections and mortality cases. Consequently, we require
a model that employs an abstraction of the real world to illustrate the events
and relations that are pivotal to understanding the problem.
% -------------------------------------------------------------------
\subsection{SIR Model}
\label{sec:pandemicModel:sir}
In 1927, Kermack and McKendrick~\cite{1927} introduced the \emph{SIR Model},
which subsequently became one of the most influential epidemiological models.
This model enables the modeling of infections during epidemiological events such as pandemics.
The book \emph{Mathematical Models in Biology}~\cite{EdelsteinKeshet2005}
reiterates the model and serves as the foundation for the following explanation
of SIR models.\\
The SIR model is capable of describing diseases that are transmitted through
contact or proximity between an individual carrying the illness and a healthy
individual. This is possible due to the distinction between infected individuals
who are carriers of the disease and the part of the population which is
susceptible to infection. In the model, individuals may move between the
mentioned groups, \eg, healthy individuals becoming infected. The model assumes that the
size $N$ of the population remains constant throughout the duration of the
pandemic. The population $N$ comprises three distinct compartments: the
\emph{susceptible} group $S$, the \emph{infectious} group $I$ and the
\emph{removed} group $R$ (hence SIR model). Let $\mathcal{T} = [t_0, t_f]\subseteq
\mathbb{R}_{\geq0}$ be the time span of the pandemic; then,
\begin{equation}
S: \mathcal{T}\rightarrow\mathbb{N}, \quad I: \mathcal{T}\rightarrow\mathbb{N}, \quad R: \mathcal{T}\rightarrow\mathbb{N},
\end{equation}
give the values of $S$, $I$ and $R$ at a certain point in time
$t\in\mathcal{T}$. At every point in time, $S$, $I$, $R$ and $N$ satisfy
\begin{equation} \label{eq:N_char}
N = S + I + R.
\end{equation}
The model makes another assumption by stating that recovered people are immune
to the illness and infectious individuals cannot infect them. The individuals
in the $R$ group are either recovered or deceased, and thus unable to transmit
or carry the disease.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{sir_graph.pdf}
\caption{A visualization of the SIR model, illustrating $N$ being split into the
three groups $S$, $I$ and $R$.}
\label{fig:sir_model}
\end{figure}
As visualized in~\Cref{fig:sir_model}, individuals may transition between
groups based on epidemiological parameters. The
transmission rate $\beta$ is responsible for individuals becoming infected,
while the removal or recovery rate $\alpha$ (also referred to as
$\delta$ or $\nu$, \eg,~\cite{EdelsteinKeshet2005,Millevoi2023}) moves
individuals from $I$ to $R$.\\
We can describe this problem mathematically using a system of differential
equations (see~\Cref{sec:differentialEq}). Thus, Kermack and
McKendrick~\cite{1927} propose the following set of differential equations:
\begin{equation}\label{eq:sir}
\begin{split}
\frac{dS}{dt} &= -\beta S I,\\
\frac{dI}{dt} &= \beta S I - \alpha I,\\
\frac{dR}{dt} &= \alpha I.
\end{split}
\end{equation}
This set of differential equations is based on the following assumption:
``The rate of transmission of a microparasitic disease is proportional to the
rate of encounter of susceptible and infective individuals modelled by the
product ($\beta S I$)'', according to Edelstein-Keshet~\cite{EdelsteinKeshet2005}.
The system shows the change in size of the groups per time unit due to
infections, recoveries, and deaths.\\
The term $\beta SI$ describes the rate of encounters of susceptible and infected
individuals. This term depends on the absolute sizes of $S$ and $I$; thus, Anderson
and May~\cite{Anderson1991} propose a modified model:
\begin{equation}\label{eq:modSIR}
\begin{split}
\frac{dS}{dt} &= -\beta \frac{SI}{N},\\
\frac{dI}{dt} &= \beta \frac{SI}{N} - \alpha I,\\
\frac{dR}{dt} &= \alpha I.
\end{split}
\end{equation}
In~\Cref{eq:modSIR}, the term $\beta SI$ is normalized by $N$, which reflects
real-world contact behavior more accurately~\cite{Anderson1991}.\\
The initial phase of a pandemic is characterized by the infection of a small
number of individuals, while the majority of the population remains susceptible.
Since no individuals have recovered or died yet, the removed group is still
empty. Let $I_0\in\mathbb{N}$ be
the number of infected individuals at the beginning of the disease. Then,
\begin{equation}\label{eq:startCond}
\begin{split}
S(0) &= N - I_{0},\\
I(0) &= I_{0},\\
R(0) &= 0,
\end{split}
\end{equation}
describes the initial configuration of a system in which a disease has just
emerged.\\
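As an illustration of how such a system can be simulated, the following Python
sketch integrates~\Cref{eq:modSIR} starting from the initial configuration
of~\Cref{eq:startCond}. The values of $N$, $I_0$, $\alpha$, and $\beta$ match
the reference configuration of~\Cref{fig:synth_sir}; the time span and solver
settings are example choices:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N, I0 = 7.9e6, 10        # population size and initial infections
alpha, beta = 0.35, 0.5  # reference recovery and transmission rates

def sir_rhs(t, y):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - alpha * I
    dR = alpha * I
    return [dS, dI, dR]

# initial configuration: S(0) = N - I0, I(0) = I0, R(0) = 0
t_eval = np.linspace(0, 150, 151)
sol = solve_ivp(sir_rhs, (0, 150), [N - I0, I0, 0.0], t_eval=t_eval)
S, I, R = sol.y
print(f"peak of I: {I.max():.0f} on day {t_eval[I.argmax()]:.0f}")
\end{verbatim}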
\begin{figure}[h]
%\centering
\setlength{\unitlength}{1cm} % Set the unit length for coordinates
\begin{picture}(12, 9.5) % Specify the size of the picture environment (width, height)
% reference
\put(0, 1.75){
\begin{subfigure}{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{reference_params_synth.pdf}
\caption{$\alpha=0.35$, $\beta=0.5$}
\label{fig:synth_norm}
\end{subfigure}
}
% 1. row, 1. image (low beta)
\put(5.25, 5){
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{low_beta_synth.pdf}
\caption{$\alpha=0.35$, $\beta=0.4$}
\label{fig:synth_low_beta}
\end{subfigure}
}
% 1. row, 2. image (high beta)
\put(9.5, 5){
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{high_beta_synth.pdf}
\caption{$\alpha=0.35$, $\beta=0.6$}
\label{fig:synth_high_beta}
\end{subfigure}
}
% 2. row, 1. image (low alpha)
\put(5.25, 0){
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{low_alpha_synth.pdf}
\caption{$\alpha=0.25$, $\beta=0.5$}
\label{fig:synth_low_alpha}
\end{subfigure}
}
% 2. row, 2. image (high alpha)
\put(9.5, 0){
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{high_alpha_synth.pdf}
\caption{$\alpha=0.45$, $\beta=0.5$}
\label{fig:synth_high_alpha}
\end{subfigure}
}
\end{picture}
\caption{Synthetic data, generated using~\Cref{eq:modSIR} with $N=7.9\cdot 10^6$ and $I_0=10$ for different sets of parameters.
We visualize the case with the reference parameters in (a). In (b) and (c) we keep $\alpha$ constant, while varying
the value of $\beta$. In contrast, (d) and (e) have varying values of $\alpha$.}
\label{fig:synth_sir}
\end{figure}
In the SIR model, the temporal occurrence and the height of the peak (or peaks)
of the infectious group are of paramount importance for understanding the
dynamics of a pandemic. A low peak occurring at a late point in time indicates
that the disease is unable to keep pace with the rate of recovery, resulting
in its demise before it can exert a significant influence on the population. In
contrast, an early and high peak means that the disease is rapidly transmitted
through the population, with a significant proportion of individuals having been
infected.~\Cref{fig:synth_sir} illustrates this effect by varying the values of
$\beta$ or $\alpha$ while simulating a pandemic using a model such
as~\Cref{eq:modSIR}. It is evident that both the transmission rate $\beta$
and the recovery rate $\alpha$ influence the height and time of the peak of $I$.
When the number of infections exceeds the number of recoveries, the peak of $I$
will occur early and will be high. On the other hand, if recoveries occur at a
faster rate than new infections, the peak will occur later and will be low. Thus,
it is crucial to know both $\beta$ and $\alpha$, as these parameters
characterize how the pandemic evolves.\\
The SIR model is based on a number of assumptions that are intended to reduce
the overall complexity of the model while still representing the processes
observed in the real world. For example, the size of a population
in the real world is subject to a number of factors that can contribute to
change. The population is increased by the occurrence of births and decreased
by the occurrence of deaths. One assumption stated in the SIR model is that
the size of the population, $N$, remains constant, as the daily change is
negligible compared to the total population. Other examples include the impossibility
for individuals to be susceptible again after having recovered, or the
possibility for the epidemiological parameters to change due to new variants or the
implementation of new countermeasures. We address the latter
in~\Cref{sec:pandemicModel:rsir}.
% -------------------------------------------------------------------
\subsection{Reduced SIR Model and the Reproduction Number}
\label{sec:pandemicModel:rsir}
\Cref{sec:pandemicModel:sir} presents the classical SIR model. This model
contains two scalar parameters $\beta$ and $\alpha$, which describe the course
of a pandemic over its duration. This is beneficial when examining the overall
pandemic; however, in the real world, disease behavior is dynamic, and the
values of the parameters $\beta$ and $\alpha$ change throughout the course of
the disease. The reason for this lies in events such as the implementation of
countermeasures that reduce the contact between the infectious and susceptible
individuals, the emergence of a new variant of the disease that increases its
infectivity or deadliness, or the administration of a vaccination that provides
previously susceptible individuals with immunity without ever being infected.
As these fine details of disease progression are missed by the constant rates,
Liu and Stechlinski~\cite{Liu2012}, and Setianto and Hidayat~\cite{Setianto2023},
introduce time-dependent epidemiological parameters and the time-dependent reproduction
number to address this issue. Millevoi \etal~\cite{Millevoi2023} present a
reduced version of the SIR model.\\
First, they redefine $\beta$ and $\alpha$ as functions of time on the time interval
$\mathcal{T} = [t_0, t_f]\subseteq \mathbb{R}_{\geq0}$,
\begin{equation}
\beta: \mathcal{T}\rightarrow\mathbb{R}_{\geq0}, \quad\alpha: \mathcal{T}\rightarrow\mathbb{R}_{\geq0}.
\end{equation}
Another crucial element is $D(t) = \frac{1}{\alpha(t)}$, which represents the time
span an infected individual requires to recover. Subsequently, at the initial time point
$t_0$, the \emph{reproduction number},
\begin{equation}
\RO = \beta(t_0)D(t_0) = \frac{\beta(t_0)}{\alpha(t_0)},
\end{equation}
represents the number of susceptible individuals that one infectious individual
infects at the onset of the pandemic. In light of the effects of $\beta$ and
$\alpha$ (see~\Cref{sec:pandemicModel:sir}), $\RO < 1$ indicates that the
pandemic subsides, since each infectious individual infects fewer than one
susceptible individual on average. Note that at the onset the number of
infections is still small, as $I(t_0) \ll S(t_0)$. Further,
$\RO > 1$ leads to the disease spreading rapidly across the population, with an
increase in $I$ occurring at a high rate. Nevertheless, $\RO$ does not cover
the entire time span. For this reason, Millevoi \etal~\cite{Millevoi2023}
introduce $\Rt$, which has the same interpretation as $\RO$, with the exception
that $\Rt$ is dependent on time. The time-dependent reproduction number is
defined as,
\begin{equation}\label{eq:repr_num}
\Rt=\frac{\beta(t)}{\alpha(t)}\cdot\frac{S(t)}{N},
\end{equation}
on the time interval $\mathcal{T}$ with the population size $N$. This definition
combines the epidemiological parameters, which carry information about the spread of the
disease, with the decreasing ratio of susceptible individuals in the
population. In contrast to $\beta$ and $\alpha$, $\Rt$ is not a parameter but
a state variable of the model, which gives information about the reproduction of the disease
for each day. As Millevoi \etal~\cite{Millevoi2023} show, $\Rt$ enables the
following reduction of the SIR model.\\
\Cref{eq:N_char} allows for the calculation of the value of the group $R$ using
$S$ and $I$, with the term $R(t)=N-S(t)-I(t)$. Thus,
\begin{equation}
\begin{split}
\frac{dS}{dt} &= -\alpha\Rt I(t),\\
\frac{dI}{dt} &= \alpha(\Rt-1)I(t),
\end{split}
\end{equation}
is the reduction of~\Cref{eq:modSIR} on the time interval $\mathcal{T}$ using this
characteristic and the reproduction number $\Rt$ (see~\Cref{eq:repr_num}).
Another issue that Millevoi \etal~\cite{Millevoi2023} seek to address is the
extensive range of values that the SIR groups can assume. Accordingly, they
initially scale the time interval $\mathcal{T}$ using its borders to calculate
the scaled time $t_s = \frac{t - t_0}{t_f - t_0}\in[0, 1]$. Subsequently, they
calculate the scaled groups,
\begin{equation}
S_s(t_s) = \frac{S(t)}{C},\quad I_s(t_s) = \frac{I(t)}{C},\quad R_s(t_s) = \frac{R(t)}{C},
\end{equation}
using a large constant scaling factor $C\in\mathbb{N}$. Applying this to the
variable $I$ results in
\begin{equation}\label{eq:reduced_sir_ODE}
\frac{dI_s}{dt_s} = \alpha(t_f - t_0)(\Rt - 1)I_s(t_s),
\end{equation}
which is a further reduced version of~\Cref{eq:modSIR}. This less complex
differential equation results in a less complex solution, as it entails the
elimination of a parameter ($\beta$) and two state variables ($S$ and $R$).
Since the reduced SIR model requires fewer observed quantities, it is
advantageous in situations with limited data, such as when recovery data is
missing.
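The following short Python sketch illustrates the reduced model by
integrating~\Cref{eq:reduced_sir_ODE} on the scaled time axis with a forward
Euler scheme. All numerical values, in particular the constant reproduction
number, $\alpha$, the interval borders, the scaling factor $C$, and the
initial value of $I$, are assumptions made purely for illustration:
\begin{verbatim}
# Sketch: integrating dI_s/dt_s = alpha * (t_f - t_0) * (R(t) - 1) * I_s
# on the scaled time axis t_s in [0, 1]. All values are example choices.
alpha = 0.35
t_0, t_f = 0.0, 150.0
C = 1e6               # constant scaling factor
I_s = 10 / C          # scaled initial infectious group
R_t = 1.05            # assumed constant reproduction number

n_steps = 1000
dt_s = 1.0 / n_steps  # step size on the scaled time axis
for _ in range(n_steps):
    I_s += dt_s * alpha * (t_f - t_0) * (R_t - 1.0) * I_s

print(f"scaled I at t_s = 1: {I_s:.2e}, unscaled: {I_s * C:.0f}")
\end{verbatim}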
% -------------------------------------------------------------------
\section{Multilayer Perceptron}
\label{sec:mlp}
In~\Cref{sec:differentialEq}, we discuss the modeling of systems using differential
equations, illustrating how they can be utilized to elucidate the
impact of a specific parameter on the system's behavior.
In~\Cref{sec:epidemModel}, we show specific applications of differential
equations in an epidemiological context. Solving such systems is crucial and
involves finding a function that best fits the data. Fitting measured data points to
approximate such a function is one of multiple methods to achieve this
goal. The \emph{Multilayer Perceptron} (MLP)~\cite{Rumelhart1986} is a
data-driven function approximator. In the following section, we provide a brief
overview of the structure and training of these \emph{neural networks}. For
reference, we use the book \emph{Deep Learning} by Goodfellow
\etal~\cite{Goodfellow-et-al-2016} as a foundation for our explanations.\\
The objective is to develop an approximation method for a function $f^{*}$,
which could be a mathematical function or a mapping of an input vector to the
desired output. Let $\boldsymbol{x}$ be the input vector and $\boldsymbol{y}$
the label, class, or result. Then, $\boldsymbol{y} = f^{*}(\boldsymbol{x})$
is the function to approximate. In 1958,
Rosenblatt~\cite{Rosenblatt1958} proposed the perceptron, modeling the concept of
a neuron in a neuroscientific sense. The perceptron takes in the input vector
$\boldsymbol{x}$, performs an operation, and produces a scalar result. This model
optimizes its parameters $\theta$ to be able to calculate $\boldsymbol{y} =
f(\boldsymbol{x}; \theta)$ as accurately as possible. As Minsky and
Papert~\cite{Minsky1972} demonstrate, the perceptron is only capable of
approximating a specific class of functions. Consequently, there is a necessity
for an expansion of the perceptron.\\
As Goodfellow \etal~\cite{Goodfellow-et-al-2016} proceed, the solution to this issue is to decompose $f$ into
a chain structure of the form,
\begin{equation} \label{eq:mlp_char}
f(\boldsymbol{x}) = f^{(3)}(f^{(2)}(f^{(1)}(\boldsymbol{x}))).
\end{equation}
This nested version of a perceptron is a multilayer perceptron. Each
sub-function, designated as $f^{(i)}$, is represented in the structure of an
MLP as a \emph{layer}, which contains a linear mapping and a nonlinear mapping
in the form of an \emph{activation function}. A multitude of
\emph{units} (also called \emph{neurons}) composes each layer. A neuron performs the
same vector-to-scalar calculation as the perceptron does. Subsequently, a
nonlinear activation function transforms the scalar output into the activation
of the unit. The layers are staggered in the neural network, with each layer
being connected to its neighbors, as illustrated in~\Cref{fig:mlp_example}. The
input vector $\boldsymbol{x}$ is provided to each unit of the first layer
$f^{(1)}$, which then gives the results to the units of the second layer
$f^{(2)}$, and so forth. The final layer is the \emph{output layer}. The
intervening layers, situated between the first and the output layers, are the
\emph{hidden layers}. The term \emph{forward propagation} describes the
process of information flowing through the network from the input layer to the
output layer, producing the output of the network. The alternating structure of linear
and nonlinear calculation enables MLPs to approximate a broad class of functions. As Hornik
\etal~\cite{Hornik1989} prove, MLPs are universal approximators.\\
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{MLP.pdf}
\caption{An illustration of an MLP with two hidden layers. Each neuron of a layer
is connected to every neuron of the neighboring layers. The arrow indicates
the direction of the forward propagation.}
\label{fig:mlp_example}
\end{figure}
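A minimal sketch of such an architecture, written in PyTorch, may look as
follows. The layer widths, the choice of activation function, and the batch
size are example choices and do not correspond to a model used later in this
thesis:
\begin{verbatim}
import torch
from torch import nn

# Sketch of an MLP with two hidden layers: each layer applies a linear
# mapping followed by a nonlinear activation function.
class MLP(nn.Module):
    def __init__(self, in_dim=1, hidden=20, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),   # f^(1)
            nn.Linear(hidden, hidden), nn.Tanh(),   # f^(2)
            nn.Linear(hidden, out_dim),             # output layer f^(3)
        )

    def forward(self, x):
        # forward propagation: the input flows through all layers
        return self.net(x)

model = MLP()
y_hat = model(torch.rand(8, 1))  # predictions for a batch of eight inputs
\end{verbatim}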
The term \emph{training} describes the process of optimizing the parameters
$\theta$. In order to undertake training, it is necessary to have a set of
\emph{training data}, which is a set of pairs (also called training points) of
the input data $\boldsymbol{x}$ and its corresponding true solution
$\boldsymbol{y}$ of the function $f^{*}$. For the training process, we must
define a \emph{loss function} $\Loss{ }$, using the model prediction
$\hat{\boldsymbol{y}}$ and the true value $\boldsymbol{y}$, which will act as a
metric for evaluating the extent to which the model deviates from the correct
answer. One common loss function is the \emph{mean square error}
(MSE) loss function. Let $N$ be the number of points in the set of training
data. Then,
\begin{equation} \label{eq:mse}
\Loss{MSE} = \frac{1}{N}\sum_{i=1}^{N} ||\hat{\boldsymbol{y}}^{(i)}-\boldsymbol{y}^{(i)}||^2,
\end{equation}
calculates the squared difference between the model prediction and the true value
for each training point and takes the mean across the whole training data set.\\
Ultimately, the objective is to utilize this information to optimize the parameters in order to minimize the
loss. One of the most fundamental and seminal optimization strategies is \emph{gradient
descent}. In this process, the derivatives are employed to identify the location
of local or global minima of a function, which lie where the gradient is
zero. Given that a positive gradient
signifies ascent and a negative gradient indicates descent, we must move the
parameters by a step scaled by the \emph{learning rate} (step size) in the opposite
direction to that of the gradient. The calculation of the derivatives with respect
to the parameters is a complex task, since our function is a composition of
many functions (one for each layer). We can address this issue by taking advantage
of~\Cref{eq:mlp_char} and employing the chain rule of calculus. Let
$\hat{\boldsymbol{y}} = f(\boldsymbol{x}; \theta)$ be the model prediction with the
decomposed version $f(\boldsymbol{x}; \theta) = f^{(3)}(w; \theta_3)$ with
$w = f^{(2)}(z; \theta_2)$ and $z = f^{(1)}(\boldsymbol{x}; \theta_1)$.
$\boldsymbol{x}$ is the input vector and $\theta_1, \theta_2, \theta_3\subset\theta$.
Then,
\begin{equation}\label{eq:backprop}
\nabla_{\theta_3} \Loss{ } = \frac{d\mathcal{L}}{d\hat{\boldsymbol{y}}}\frac{d\hat{\boldsymbol{y}}}{df^{(3)}}\nabla_{\theta_3}f^{(3)},
\end{equation}
is the gradient of $\Loss{ }$ with respect to the parameters $\theta_3$. To obtain
$\nabla_{\theta_2} \Loss{ }$, we continue the chain rule one step further through
$f^{(2)}$ and its parameters $\theta_2$. The name of this method in the context of neural
networks is \emph{back propagation}~\cite{Rumelhart1986}, as it propagates the
error backwards through the neural network.\\
In practical applications, an optimizer often accomplishes the optimization task
by executing back propagation in the background. Furthermore, modifying the
learning rate during training can be advantageous, for instance, making larger
steps at the beginning and minor adjustments at the end. Therefore, schedulers
are algorithms that implement diverse learning rate adjustment
strategies.\\
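The following sketch combines these components into a training loop in PyTorch:
the MSE loss of~\Cref{eq:mse}, an optimizer performing the gradient descent
step via back propagation, and a scheduler that lowers the learning rate during
training. The model, the data, and all hyperparameters are placeholders chosen
for illustration:
\begin{verbatim}
import torch
from torch import nn

model = nn.Sequential(nn.Linear(1, 20), nn.Tanh(), nn.Linear(20, 1))
x = torch.linspace(0, 1, 100).unsqueeze(1)
y = torch.sin(2 * torch.pi * x)   # example training data

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
# larger steps at the beginning, smaller adjustments later
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.5)

for epoch in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # forward propagation and loss
    loss.backward()               # back propagation of the error
    optimizer.step()              # gradient descent step on theta
    scheduler.step()              # adjust the learning rate
\end{verbatim}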
For a more in-depth discussion of practical considerations and additional
details like regularization, we direct the reader to the book
\emph{Deep Learning} by Goodfellow \etal~\cite{Goodfellow-et-al-2016}. The next
section will demonstrate the application of neural networks in approximating
solutions to systems of differential equations.
% -------------------------------------------------------------------
\section{Physics-Informed Neural Networks}
\label{sec:pinn}
In~\Cref{sec:mlp}, we describe the structure and training of MLPs, which are
widely recognized tools for approximating any kind of function. In 1997,
Lagaris \etal~\cite{Lagaris1997} provided a method that utilizes gradient
descent to solve ODEs and PDEs. Building on this approach, Raissi
\etal~\cite{Raissi2019} introduced the methodology under the name
\emph{Physics-Informed Neural Network} (PINN) in 2017. In this approach, the
model learns to approximate a function using provided data points while
leveraging the available knowledge about the problem in the form of a system of
differential equations.\\
In contrast to standard MLP models, PINNs are not solely data-driven. The differential
equation,
\begin{equation}
\boldsymbol{y}=\mathcal{D}(\boldsymbol{x}),
\end{equation}
includes both the solution $\boldsymbol{y}$ and the operator $\mathcal{D}$,
which incorporates all derivatives with respect to the input $\boldsymbol{x}$.
This equation contains the information about the physical properties and dynamics of
$\boldsymbol{y}$. In order to find the solution $\boldsymbol{y}$, we must solve the
differential equation with respect to data related to the problem at hand. As
Raissi \etal~\cite{Raissi2019} propose, we employ a neural network with the
parameters $\theta$. The MLP is then supposed to optimize its parameters such that
its output $\hat{\boldsymbol{y}}$ approximates the solution $\boldsymbol{y}$.
In order to achieve this, we train the model on data containing input-output
pairs with measurements of $\boldsymbol{y}$. The output $\hat{\boldsymbol{y}}$ is
fitted to the data through the mean square error data loss $\mathcal{L}_{\text{data}}$.
Moreover, the data loss function may include additional terms for initial and boundary
conditions. Furthermore, the physics is incorporated through an additional loss
term, the physics loss $\mathcal{L}_{\text{physics}}$, which includes the
differential equation through its residual $r=\boldsymbol{y} - \mathcal{D}(\boldsymbol{x})$.
This leads to the PINN loss function,
\begin{equation}\label{eq:PINN_loss}
\begin{split}
\mathcal{L}_{\text{PINN}}(\boldsymbol{x}, \boldsymbol{y},\hat{\boldsymbol{y}}) &= \mathcal{L}_{\text{data}} (\boldsymbol{y},\hat{\boldsymbol{y}}) + \mathcal{L}_{\text{physics}} (\boldsymbol{x}, \boldsymbol{y},\hat{\boldsymbol{y}})\\
&= \frac{1}{N_t}\sum_{i=1}^{N_t} || \hat{\boldsymbol{y}}^{(i)}-\boldsymbol{y}^{(i)}||^2 + \frac{1}{N_d}\sum_{i=1}^{N_d} || r_i(\boldsymbol{x},\hat{\boldsymbol{y}})||^2,
\end{split}
\end{equation}
with $N_d$ the number of differential equations in the system and $N_t$ the
number of training samples. Utilizing~\Cref{eq:PINN_loss}, the
PINN simultaneously optimizes its parameters $\theta$ to minimize both the data
loss and the physics loss. This makes it a multi-objective optimization problem.\\
Given the nature of differential equations, calculating the loss term
$\mathcal{L}_{\text{physics}}(\boldsymbol{x},\hat{\boldsymbol{y}})$ requires the
calculation of the derivative of the output with respect to the input of
the neural network. As we outline in~\Cref{sec:mlp}, during the process of
back propagation we calculate the gradients of the loss term with respect to a
layer-specific set of parameters denoted by $\theta_l$, where $l$ represents
the index of the respective layer. By employing
the chain rule of calculus, the algorithm progresses from the output layer
through each hidden layer, ultimately reaching the first layer in order to
compute the respective gradients. The term,
\begin{equation}
\nabla_{\boldsymbol{x}} \hat{\boldsymbol{y}} = \frac{d\hat{\boldsymbol{y}}}{df^{(2)}}\frac{df^{(2)}}{df^{(1)}}\nabla_{\boldsymbol{x}}f^{(1)},
\end{equation}
illustrates that, in contrast to the procedure described in~\Cref{eq:backprop},
this procedure, called \emph{automatic differentiation}, goes one step further and
calculates the gradient of the output with respect to the input
$\boldsymbol{x}$. In order to calculate the second derivative
$\frac{d^2\hat{\boldsymbol{y}}}{d\boldsymbol{x}^2}=\nabla_{\boldsymbol{x}} (\nabla_{\boldsymbol{x}} \hat{\boldsymbol{y}} ),$
this procedure must be repeated.\\
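In frameworks such as PyTorch, these input gradients can be obtained with the
automatic differentiation machinery, as the following sketch shows. The model
is a placeholder MLP and the input grid an arbitrary example:
\begin{verbatim}
import torch
from torch import nn

model = nn.Sequential(nn.Linear(1, 20), nn.Tanh(), nn.Linear(20, 1))

x = torch.linspace(0, 1, 50).unsqueeze(1).requires_grad_(True)
y_hat = model(x)

# first derivative of the output with respect to the input
dy_dx = torch.autograd.grad(y_hat, x, grad_outputs=torch.ones_like(y_hat),
                            create_graph=True)[0]
# repeating the procedure yields the second derivative
d2y_dx2 = torch.autograd.grad(dy_dx, x, grad_outputs=torch.ones_like(dy_dx),
                              create_graph=True)[0]
\end{verbatim}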
Above, we present a method by Raissi \etal~\cite{Raissi2019} for approximating
functions through the use of systems of differential equations. As previously
stated, we want to find a
solution for systems of differential equations. In problems where we must solve
an ODE or PDE, we have to find a set of parameters that satisfies the system
for any input $\boldsymbol{x}$. In the context of PINNs, this is an inverse
problem. We have training data from measurements and the corresponding
differential equations, but the parameters of these equations are unknown. To
address this challenge, we implement these parameters as distinct learnable
parameters within the neural network. This enables the network to utilize a
specific value that actively influences the physics loss
$\mathcal{L}_{\text{physics}}(\boldsymbol{x},\hat{\boldsymbol{y}})$. During the
training phase, the optimizer aims to minimize the physics loss, which should
ultimately yield an approximation of the true parameter value fitting the
observations.\\
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{oscilator.pdf}
\caption{Illustration of the movement of an oscillating body in the
underdamped case, with $m=1\,\mathrm{kg}$, $\mu=4\,\mathrm{\frac{Ns}{m}}$ and $k=200\,\mathrm{\frac{N}{m}}$.}
\label{fig:spring}
\end{figure}
In order to illustrate how a PINN works, we use the example of a
\emph{damped harmonic oscillator} taken from~\cite{Moseley}. In this problem, we
displace a body, which is attached to a spring, from its resting position. The
body is subject to three forces: firstly, the inertial force exerted by the
displacement $u$, which points in the direction of the displacement; secondly,
the restoring force of the spring, which attempts to return the body to its
original position; and thirdly, the friction force, which points in the opposite
direction of the movement. In accordance with Newton's second law and the
combined influence of these forces, the body exhibits oscillatory motion around
its position of rest. The system is influenced by the mass of the body $m$,
the coefficient of friction $\mu$ and the spring constant $k$, indicating the
stiffness of the spring. The differential equation,
\begin{equation}
m\frac{d^2u}{dt^2}+\mu\frac{du}{dt}+ku=0,
\end{equation}
shows the relation of these parameters in reference to the problem at hand. As
Tenenbaum and Pollard~\cite{Tenenbaum1985} show, there are three potential solutions to this
problem. However, only the \emph{underdamped case} results in an oscillating
movement of the body, as illustrated in~\Cref{fig:spring}. In order to apply a
PINN to this problem, we require a set of training data. This consists of
pairs of time points and corresponding displacement measurements
$(t^{(i)}, u^{(i)})$, where $i\in\{1, ..., N_t\}$. In this hypothetical case,
we know the mass $m=1\,\mathrm{kg}$, the spring constant $k=200\,\mathrm{\frac{N}{m}}$ and the
initial conditions $u(0) = 1$ and $\frac{du(0)}{dt} = 0$. However, we do
not know the value of the friction coefficient $\mu$. In this case, the loss function,
\begin{equation}
\begin{split}
\mathcal{L}_{\text{osc}}(\boldsymbol{t}, \boldsymbol{u}, \hat{\boldsymbol{u}}) = & ||\hat{u}(0)-1||^2+||\frac{d\hat{u}(0)}{dt}||^2+ \frac{1}{N_t}\sum_{i=1}^{N_t} ||\hat{\boldsymbol{u}}^{(i)}-\boldsymbol{u}^{(i)}||^2 \\
+ & ||m\frac{d^2\hat{u}}{dt^2}+\mu\frac{d\hat{u}}{dt}+k\hat{u}||^2,
\end{split}
\end{equation}
includes the initial conditions, the residual, in which $\mu$ is a learnable
parameter, and the data loss. This shows the methodology by which PINNs are capable
of learning the parameters of physical systems, such as the damped harmonic oscillator.
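A sketch of how this loss could be assembled in PyTorch is given below. The
network architecture, the collocation points, and the equal weighting of the
loss terms are assumptions, and the initial-condition terms are omitted for
brevity:
\begin{verbatim}
import torch
from torch import nn

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                      nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
mu = nn.Parameter(torch.tensor(1.0))   # unknown friction, learnable
m, k = 1.0, 200.0                      # known mass and spring constant

def pinn_loss(t_data, u_data, t_col):
    # data loss on the measured displacements
    loss_data = ((model(t_data) - u_data) ** 2).mean()
    # physics loss: residual of m*u'' + mu*u' + k*u = 0 at collocation points
    t = t_col.requires_grad_(True)
    u = model(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, t, torch.ones_like(du), create_graph=True)[0]
    loss_physics = ((m * d2u + mu * du + k * u) ** 2).mean()
    return loss_data + loss_physics

# mu is optimized jointly with the network parameters
optimizer = torch.optim.Adam(list(model.parameters()) + [mu], lr=1e-3)
\end{verbatim}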
In the following section, we present the approach of Shaier \etal~\cite{Shaier2021}
to find the transmission rate and recovery rate of the SIR model using PINNs.
% -------------------------------------------------------------------
\subsection{Disease Informed Neural Networks}
\label{sec:pinn:dinn}
In the preceding section, we present a methodology, building on the work of Lagaris
\etal~\cite{Lagaris1997} and Raissi \etal~\cite{Raissi2019}, for solving systems of differential equations by employing
PINNs. In~\Cref{sec:pandemicModel:sir}, we describe the SIR model, which models
the relations of susceptible, infectious and removed individuals and simulates
the progress of a disease in a population of constant size. A system of
differential equations models these relations. Shaier \etal~\cite{Shaier2021}
propose a method to solve the equations of the SIR model using a PINN, which
they call a \emph{Disease-Informed Neural Network} (DINN).\\
To solve~\Cref{eq:sir}, we need to find the transmission rate $\beta$ and the
recovery rate $\alpha$. As Shaier \etal~\cite{Shaier2021} point out, there are
different approaches to solving this set of equations. For instance, one can build on
the assumption that at the beginning one infected individual infects $n$ other
people, resulting in $\frac{dS(0)}{dt} = -n$. Then,
\begin{equation}
\beta=-\frac{\frac{dS}{dt}}{S_0I_0}
\end{equation}
would calculate the initial transmission rate using the initial size of the
susceptible group $S_0$ and the infectious group $I_0$. The recovery rate could
then be defined using the number of days $d$ a person remains infectious between the point of
infection and the start of isolation, $\alpha = \frac{1}{d}$. Such analytical
solutions to the SIR model often rely on heuristics and require knowledge
such as the sizes $S_0$ and $I_0$. A data-driven approach such as the one that
Shaier \etal~\cite{Shaier2021} propose does not suffer from these problems, since the
model learns the parameters $\beta$ and $\alpha$ while fitting the training
data, consisting of the time points $\boldsymbol{t}$ and the corresponding
measured sizes of the groups $\boldsymbol{S}, \boldsymbol{I}, \boldsymbol{R}$.
Let $\hat{\boldsymbol{S}}, \hat{\boldsymbol{I}}, \hat{\boldsymbol{R}}$ be the
model predictions of the groups and
$r_S=\frac{d\hat{\boldsymbol{S}}}{dt}+\beta \hat{\boldsymbol{S}}\hat{\boldsymbol{I}}$,
$r_I=\frac{d\hat{\boldsymbol{I}}}{dt}-\beta \hat{\boldsymbol{S}}\hat{\boldsymbol{I}}+\alpha \hat{\boldsymbol{I}}$
and $r_R=\frac{d \hat{\boldsymbol{R}}}{dt} - \alpha \hat{\boldsymbol{I}}$ the
residuals of each differential equation using the model predictions. Then,
\begin{equation}
\begin{split}
\mathcal{L}_{SIR}(\boldsymbol{t}, \boldsymbol{S}, \boldsymbol{I}, \boldsymbol{R}, \hat{\boldsymbol{S}}, \hat{\boldsymbol{I}}, \hat{\boldsymbol{R}}) = &||r_S||^2 + ||r_I||^2 + ||r_R||^2\\
+ &\frac{1}{N_t}\sum_{i=1}^{N_t} \left(||\hat{\boldsymbol{S}}^{(i)}-\boldsymbol{S}^{(i)}||^2 + ||\hat{\boldsymbol{I}}^{(i)}-\boldsymbol{I}^{(i)}||^2 + ||\hat{\boldsymbol{R}}^{(i)}-\boldsymbol{R}^{(i)}||^2\right),
\end{split}
\end{equation}
is the loss function of a DINN, with $\alpha$ and $\beta$ being learnable
parameters. These are represented in the residuals of the ODEs.
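A possible PyTorch sketch of these residual terms with learnable $\alpha$ and
$\beta$ is shown below. The network architecture and the initial parameter
guesses are assumptions and not the configuration used by
Shaier \etal~\cite{Shaier2021}:
\begin{verbatim}
import torch
from torch import nn

# the network maps a time point to the three groups (S, I, R)
model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 3))
alpha = nn.Parameter(torch.tensor(0.1))  # learnable recovery rate
beta = nn.Parameter(torch.tensor(0.1))   # learnable transmission rate

def sir_residuals(t):
    t = t.requires_grad_(True)
    S, I, R = model(t).unbind(dim=1)
    ones = torch.ones_like(S)
    dS = torch.autograd.grad(S, t, ones, create_graph=True)[0].squeeze(1)
    dI = torch.autograd.grad(I, t, ones, create_graph=True)[0].squeeze(1)
    dR = torch.autograd.grad(R, t, ones, create_graph=True)[0].squeeze(1)
    r_S = dS + beta * S * I
    r_I = dI - beta * S * I + alpha * I
    r_R = dR - alpha * I
    return r_S, r_I, r_R

# physics part of the DINN loss: mean squared residuals
# loss_physics = sum((r ** 2).mean() for r in sir_residuals(t_train))
\end{verbatim}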
% -------------------------------------------------------------------