Chapter 13
Time-dependent problems

Context: Many of the problems we encounter, such as the diffusion process already discussed, are time-dependent. So far, we have only treated the stationary solutions of such problems. A treatment of the initial value problem, which explicitly includes the time dependence, requires suitable time-integration algorithms.

13.1 Initial value problems

Typical time-dependent PDEs have the form, \begin {equation} \frac {\partial u}{\partial t} + \mathcal {L} u(\v {r}, t) = f(\v {r}, t), \label {eq:boundary-value-problem} \end {equation} where \(\mathcal {L}\) is an operator of some kind, e.g. \(\mathcal {L}=-\nabla \cdot D\nabla \) for diffusion processes. The corresponding stationary solution, which is usually approached as \(t\to \infty \), is given by \(\mathcal {L} u=f\) (for a time-independent right-hand side \(f\)). In the last chapters, we learned how to numerically calculate such a stationary solution for linear and nonlinear problems.

Equation \eqref{eq:boundary-value-problem} poses an initial value problem: the field \(u(\v {r},t)\) must be specified at a single point in time (usually \(t=0\)), and the equation is then integrated from this point into the future. A selection of such time integration algorithms (also called “time marching” methods) will be discussed in this chapter. Before we get to that, however, we first need to come back to the discretization of the spatial derivatives.

13.2 Spatial derivatives

https://uni-freiburg.cloud.panopto.eu/Panopto/Pages/Embed.aspx?id=f7743809-a4c4-47ee-aa5b-acc1012a8425

We now again approximate the (now time-dependent) function \(u(\v {r}, t)\) by a series expansion \(u_N(\v {r}, t)\). In contrast to the previous chapters, the coefficients are no longer constant but time-dependent, \begin {equation} u_N(\v {r}, t) = \sum _n a_n(t) \varphi _n(\v {r}). \end {equation} Now we take the scalar product of the entire time-dependent PDE Eq. \eqref{eq:boundary-value-problem} with the basis functions \(\varphi _n(\v {r})\) of the series expansion, \begin {equation} \left (\varphi _n, \frac {\partial u_N}{\partial t}\right ) + (\varphi _n, \mathcal {L} u_N) = (\varphi _n, f). \end {equation} In the previous chapters, we have already learned how to calculate the two terms that do not involve the time derivative. For linear equations, we get \begin {align} (\varphi _n, \mathcal {L} u_N) &= \sum _k K_{nk} a_k(t),\\ (\varphi _n, f) &= f_n(t) \end {align}

where \(\t {K}\) is the known system matrix and \(\v {f}(t)\) is the (now potentially time-dependent) load vector. Inserting the series expansion into the remaining scalar product and pulling the time derivative out in front of the time-independent basis functions yields \begin {equation} \sum _k M_{nk} \frac {\dif a_k}{\dif t} + \sum _k K_{nk} a_k(t) = f_n(t), \label {eq:discrete-time-dependent} \end {equation} with \(M_{nk}=(\varphi _n, \varphi _k)\), the mass matrix. This is a system of coupled ordinary differential equations. Thus, we have converted the PDE into a system of ODEs by spatial discretization. For a Fourier basis, the mass matrix is diagonal; for a finite element basis, it is sparsely populated but no longer diagonal. However, we can formally multiply Eq. \eqref{eq:discrete-time-dependent} from the left with \(\t {M}^{-1}\) and obtain, \begin {equation} \frac {\dif \v {a}}{\dif t} = - \t {M}^{-1} \cdot \t {K} \cdot \v {a}(t) + \t {M}^{-1} \cdot \v {f}(t) \equiv \v {g}(\v {a}(t), t). \label {eq:discrete-time-dependent-invmass} \end {equation}

Note: Since the mass matrix does not change over time, \(\t {M}^{-1}\) can be precalculated here. However, since \(\t {M}\) is usually sparse, it may be numerically more efficient to solve the corresponding system of equations in each step instead. This is because the inverse of a sparse matrix is in general no longer sparse: multiplication by the dense \(\t {M}^{-1}\) requires \(\sim N^2\) operations, while solving the system of equations requires only \(\sim N\) operations for the banded matrices that arise, for example, from one-dimensional finite elements. The scaling of the number of operations with the problem size is called the complexity of an algorithm.
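
As an illustration, the following sketch assembles the tridiagonal matrices \(\t {M}\) and \(\t {K}\) for linear finite elements on a uniform one-dimensional grid and prefactorizes the mass matrix instead of forming its dense inverse. This is only a minimal sketch; the parameters N, dx and D are hypothetical placeholders and boundary conditions are ignored.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N, dx, D = 100, 0.01, 1.0  # hypothetical grid size, spacing, diffusivity

# Mass matrix M_nk = (phi_n, phi_k) for linear elements on a uniform grid
M = sp.diags([dx / 6, 2 * dx / 3, dx / 6], [-1, 0, 1],
             shape=(N, N), format='csc')
# System (stiffness) matrix K_nk for the operator L = -d/dx (D d/dx)
K = sp.diags([-D / dx, 2 * D / dx, -D / dx], [-1, 0, 1],
             shape=(N, N), format='csc')

# Factorize M once; back-substitution on a banded matrix costs ~N
# operations, whereas multiplying by the dense inverse would cost ~N^2.
M_lu = spla.splu(M)

def g(a, t, f=None):
    """Right-hand side g(a, t) = M^{-1} (f(t) - K a) of the ODE system."""
    rhs = -K @ a
    if f is not None:
        rhs = rhs + f(t)
    return M_lu.solve(rhs)
```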

13.3 Runge-Kutta Methods

https://uni-freiburg.cloud.panopto.eu/Panopto/Pages/Embed.aspx?id=13ac878b-710a-4f49-9653-acc1012a83f0

13.3.1 Euler Method

Equation \eqref{eq:discrete-time-dependent-invmass} can now be propagated in time. We assume that \(\v {a}(t)\) is known; then we can expand \(\v {a}(t+\Delta t)\) around \(t\) in a Taylor series. This yields \begin {equation} \v {a}(t+\Delta t) = \v {a}(t)+\Delta t \v {g}(\v {a}(t),t)+\mathcal {O}(\Delta t^2), \label {eq:Euler} \end {equation} where \(\mathcal {O}(\Delta t^2)\) denotes quadratic and higher-order terms that are neglected here. Equation \eqref{eq:Euler} can be used directly to propagate the coefficients \(\v {a}\) one step \(\Delta t\) into the future. This algorithm is called explicit Euler integration. The explicit Euler algorithm is not particularly stable and requires very small time steps \(\Delta t\).
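
As a minimal sketch, a single explicit Euler step for the coefficient vector is a one-liner (here g is the right-hand side function from the sketch in Section 13.2):

```python
def euler_step(a, t, dt, g):
    """One explicit Euler step: a(t + dt) = a(t) + dt * g(a(t), t)."""
    return a + dt * g(a, t)
```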

13.3.2 Heun method

Based on the Euler integration, we can construct a simple method with a higher convergence order. The convergence order indicates how the error decreases when the step size is reduced: for a first-order method, the error decreases linearly with the step size; for a second-order method, quadratically.

In the Heun method, the function value at the time \(t+\Delta t\) is first estimated using the Euler method. This means calculating \begin {equation} \tilde {\v {a}}(t+\Delta t) = \v {a}(t)+\Delta t \v {g}(\v {a}(t),t). \end {equation} We then use the trapezoidal rule with this estimated function value to integrate the function over a time step \(\Delta t\): \begin {equation} \v {a}(t+\Delta t) = \v {a}(t)+\frac {\Delta t}{2} \left (\v {g}(\v {a}(t),t) + \v {g}(\tilde {\v {a}}(t+\Delta t),t+\Delta t)\right ). \end {equation} The Heun method has quadratic convergence. Methods that first estimate function values and then correct them are also called predictor-corrector methods.
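
In code, one Heun step could look like the following sketch (again assuming a right-hand side function g as above):

```python
def heun_step(a, t, dt, g):
    """One Heun (predictor-corrector) step."""
    g0 = g(a, t)                     # slope at the start of the step
    a_pred = a + dt * g0             # Euler predictor
    g1 = g(a_pred, t + dt)           # slope at the predicted end point
    return a + 0.5 * dt * (g0 + g1)  # trapezoidal corrector
```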

13.3.3 Automatic time step control

With the help of two integration methods with different convergence orders, an automatic step size control can be realized in which the time step \(\Delta t\) is adjusted so that a certain error is not exceeded. Such methods are particularly attractive when the evaluations of the lower-order method can be reused by the higher-order method, as is the case, for example, with the Euler/Heun pair: the Euler step doubles as the predictor of the Heun step.

When combining two methods (e.g. Euler and Heun), the local error can be estimated from the difference between the two integration results. Given an error bound, the time step can then be adjusted so that the estimated error always remains below this bound.
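
A minimal sketch of such a controller built on the Euler/Heun pair is shown below; the tolerance, the safety factor \(0.9\) and the exponent \(1/2\) (from the \(\Delta t^2\) scaling of the local Euler error) follow common practice, but production integrators additionally bound how fast the step size may change.

```python
import numpy as np

def adaptive_step(a, t, dt, g, tol=1e-6):
    """One accepted Euler/Heun step with automatic step size control."""
    while True:
        g0 = g(a, t)
        a_euler = a + dt * g0                              # order 1
        a_heun = a + 0.5 * dt * (g0 + g(a_euler, t + dt))  # order 2
        err = np.linalg.norm(a_heun - a_euler)             # error estimate
        # The local error of the Euler step scales like dt^2, hence
        # the optimal step size follows from a square root.
        dt_opt = 0.9 * dt * np.sqrt(tol / max(err, 1e-30))
        if err <= tol:
            return a_heun, t + dt, dt_opt  # accept; suggest next step
        dt = dt_opt                        # reject; retry with smaller step
```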

Note: Both the Euler integration and the Heun method belong to the class of Runge-Kutta methods. There is a whole range of Runge-Kutta methods with different orders of convergence. Of particular interest are methods with automatic step size control as described here. SciPy, for example, implements embedded pairs with convergence orders \(2(3)\) and \(4(5)\), known as RK23 and RK45. These can be used through the function scipy.integrate.solve_ivp.
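
For example, assuming the right-hand side g and an initial coefficient vector a0 from the sketches above, a call could look like this (solve_ivp expects the argument order fun(t, y)):

```python
from scipy.integrate import solve_ivp

# Integrate da/dt = g(a, t) from t = 0 to t = 1 with the embedded
# Runge-Kutta 4(5) pair and automatic step size control.
sol = solve_ivp(lambda t, a: g(a, t), (0.0, 1.0), a0,
                method='RK45', rtol=1e-6, atol=1e-9)
# sol.t holds the adaptively chosen time points, sol.y the coefficients.
```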

13.4 Stability analysis

https://uni-freiburg.cloud.panopto.eu/Panopto/Pages/Embed.aspx?id=90f02a91-a31d-47f8-b085-acc101358727

Time propagation methods become unstable if the time step is too large. A step size control is an automated method to prevent such instabilities.

To understand why such instabilities occur, we now analyze the one-dimensional diffusion equation as an example, \begin {equation} \frac {\partial c}{\partial t} = D \frac {\partial ^2 c}{\partial x^2}. \end {equation} A discretization of the spatial derivative with linear finite elements leads to \begin {equation} \frac {\partial c}{\partial t} = \frac {D}{\Delta x^2} \left (c(x-\Delta x) - 2c(x) + c(x+\Delta x)\right ). \label {eq:fediff} \end {equation}

Note: Actually, the mass matrix should appear on the left side of Eq. \eqref{eq:fediff}. We neglect this here and approximate \(\t {M}=\t {1}\). This type of approximation is called a lumped mass model.
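
For illustration, the lumped-mass right-hand side of Eq. \eqref{eq:fediff} on a periodic grid can be written in a few lines (a sketch; D and dx are hypothetical parameters, and the periodic wrap-around via np.roll matches the Fourier analysis below):

```python
import numpy as np

def diffusion_rhs(c, D, dx):
    """D/dx^2 * (c(x - dx) - 2 c(x) + c(x + dx)) on a periodic grid."""
    return D / dx**2 * (np.roll(c, 1) - 2 * c + np.roll(c, -1))
```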

We now write the function \(c(x, t)\) as an expansion in a Fourier basis, i.e. \begin {equation} c(x, t) = \sum _n c_n(t) \exp (ik_nx). \end {equation} This means that terms of the form \(c(x+\Delta x)\) become \begin {equation} c(x+\Delta x,t) = \sum _n c_n(t) \exp \left [ik_n (x+\Delta x)\right ] = \sum _n \exp (ik_n \Delta x) c_n(t) \exp (ik_n x). \end {equation} Equation \eqref{eq:fediff} then decouples into one equation per Fourier mode, \begin {equation} \begin {split} \frac {\partial c_n}{\partial t} &= \frac {D}{\Delta x^2} \left (\exp (-ik_n\Delta x) - 2 + \exp (ik_n\Delta x)\right ) c_n(t) \\ &= \frac {2D}{\Delta x^2} \left (\cos (k_n\Delta x) - 1\right ) c_n(t). \end {split} \end {equation} This equation can be solved analytically for a time interval \(\Delta t\), \begin {equation} c_n(t+\Delta t) = c_n(t) \exp \left [\frac {2D}{\Delta x^2} \left (\cos (k_n\Delta x) - 1\right ) \Delta t \right ], \label {eq:cfl-exact} \end {equation} whereas Euler integration yields \begin {equation} c_n(t+\Delta t) \approx \left [1 + \frac {2D}{\Delta x^2} \left (\cos (k_n\Delta x) - 1\right ) \Delta t\right ] c_n(t). \label {eq:cfl-euler} \end {equation} The value of the term \(\cos (k_n \Delta x)-1\) lies between \(-2\) and \(0\). This means that we can propagate Eq. \eqref{eq:cfl-exact} for arbitrary \(\Delta t\) without the concentration \(c_n(t)\) diverging in time. Except for \(k_n=0\), the coefficients \(c_n(t)\) decrease over time.

For the Euler method, Eq. \eqref{eq:cfl-euler}, this is only the case if \begin {equation} \mu = \frac {D \Delta t}{\Delta x^2} < \frac {1}{2}. \label {eq:diff-cfl} \end {equation} For \(\mu >1/2\), some of the coefficients \(c_n(t)\) grow in magnitude with \(t\) and the algorithm becomes unstable. The dimensionless number \(\mu \) is called a Courant-Friedrichs-Lewy (CFL) number and the condition Eq. \eqref{eq:diff-cfl} is called a CFL condition. The exact form of the CFL condition depends on the PDE and the algorithm.
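
The onset of this instability can be checked directly from the amplification factor \(1 + 2\mu (\cos (k_n\Delta x) - 1)\) of a single Euler step, Eq. \eqref{eq:cfl-euler}; a short sketch:

```python
import numpy as np

# Amplification factor A = 1 + 2*mu*(cos(k_n*dx) - 1) of one Euler step;
# stability requires |A| <= 1 for all modes (worst case: cos = -1).
theta = np.linspace(0, np.pi, 101)  # theta = k_n * dx
for mu in (0.4, 0.6):               # below and above the CFL limit
    A = 1 + 2 * mu * (np.cos(theta) - 1)
    print(f"mu = {mu}: max |A| = {np.abs(A).max():.2f}")
# mu = 0.4: max |A| = 1.00 (stable); mu = 0.6: max |A| = 1.40 (unstable)
```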

Note: The CFL condition says that the maximum time step \begin {equation} \Delta t < \frac {1}{2D} \Delta x^2 \end {equation} depends on the spatial discretization \(\Delta x\). If we make the spatial discretization finer, we must also choose a smaller time step. Halving the spatial discretization reduces the maximum stable time step by a factor of four. This increases the cost of a simulation of the same duration by a factor of \(8\): twice as many grid points times four times as many time steps. Finely resolved simulations therefore quickly become numerically expensive. For methods with automatic step size control, this adaptation of the time step happens automatically.

Copyright © 2017-2020 Andreas Greiner, 2020-2025 Lars Pastewka.