(TU Munich)

Nick Trefethen’s "SIAM 100-Digit Challenge" from 2002 was very much to the heart of Dirk Laurie’s mathematical tastes and, consequently, he became one of the few single-handed winners of this contest. In its wake the two of us teamed up with Stan Wagon (USA) and Jörg Waldvogel (CH) to write a book about the allure of the ten problems and about the huge variety of mathematical and computational approaches available to solve them. Even though, to paraphrase Zhou Enlai, it may still be too early to tell what the impact of this challenge has been, we will revisit some of its aspects from the vantage point of nearly twenty years later: philosophy, problems, solutions, and software, old and new.

(University of Antwerp)

The Nyquist constraint, which is the digital signal processing equivalent of stating that the argument of a complex exponential \(\exp(\phi\Delta)\) with \(\phi\in\mathbb{C}\) and \(\Delta\in\mathbb{R}^+\) can be retrieved uniquely only under the condition that \(|\Im(\phi)|\Delta < \pi\), has governed signal processing since the beginning of the 20th century. In the past two decades this constraint was broken, first with the use of randomly collected signal samples and later for use with uniform samples.

The latter method closely relates to the original exponential fitting algorithm published in 1795 by the French mathematician de Prony. Besides avoiding the Nyquist constraint, the result also solves a number of remaining open problems in exponential analysis, which we plan to discuss.

In the identification, from given samples \(f_k \in \mathbb{C}\), of the nonlinear parameters \(\phi_1, \ldots, \phi_n \in\mathbb{C}\), the linear coefficients \(\alpha_1, \ldots, \alpha_n \in \mathbb{C}\) and the sparsity \(n \in \mathbb{N}\) in the inverse problem \begin{equation} \sum_{j=1}^n \alpha_j \exp(\phi_j k\Delta) = f_k, \qquad k=0, \ldots, 2n-1, \ldots, \qquad \Delta \in \mathbb{R}^+, \qquad (1) \end{equation} several cases are considered hard:

- When some of the \(\phi_j\) cluster, the identification and separation of these clustered \(\phi_j\) becomes numerically ill-conditioned. We show how the problem may be reconditioned.
- From noisy \(f_k\) samples, retrieval of the correct value of \(n\) is difficult, all the more so in the case of clustered \(\phi_j\). Here, decimation of the data offers a way to obtain a reliable estimate of \(n\) automatically.
- Such decimation also makes it possible to divide and conquer the inverse problem. The smaller subproblems are largely independent and can be solved in parallel, improving both complexity and efficiency.
- At the same time, the sub-Nyquist Prony method proves to be robust with respect to outliers in the data. Making use of some approximation theory results, we can also validate the computation of the \(\phi_j\) and \(\alpha_j\).
- The Nyquist constraint effectively restricts the bandwidth of the \(\Im(\phi_j)\). Therefore, avoiding the constraint offers so-called superresolution, or bandwidth extension, or the possibility to unearth higher frequency components in the samples.

All of the above can be generalized in several ways, to the use of more functions besides the exponential on the one hand, and to the solution of multidimensional inverse problems as in (1) on the other.
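As background, the classical 1795 Prony procedure for problem (1) fits in a few lines of numpy. The sketch below is an illustrative implementation under idealized assumptions (noiseless samples, known sparsity \(n\), \(2n\) uniform samples); it is not the speaker's sub-Nyquist, decimated, or validated variants, and the function name `prony` is ours:

```python
import numpy as np

def prony(f, n, delta):
    """Classical Prony fit: recover phi_j and alpha_j from 2n samples f_k.

    Assumes f_k = sum_j alpha_j exp(phi_j * k * delta), noiseless, n known.
    """
    f = np.asarray(f, dtype=complex)
    # The samples satisfy a linear recurrence whose characteristic polynomial
    # has roots z_j = exp(phi_j * delta); solve a Hankel system for its
    # monic coefficients c_0, ..., c_{n-1}.
    H = np.array([[f[k + i] for i in range(n)] for k in range(n)])
    c = np.linalg.solve(H, -f[n:2 * n])
    # np.roots wants coefficients from highest degree down: z^n + c_{n-1} z^{n-1} + ... + c_0
    z = np.roots(np.concatenate(([1.0], c[::-1])))
    phi = np.log(z) / delta          # valid while |Im(phi_j)| * delta < pi
    # Linear coefficients alpha_j from the (generalized) Vandermonde system
    V = np.vander(z, N=2 * n, increasing=True).T   # V[k, j] = z_j^k
    alpha = np.linalg.lstsq(V, f, rcond=None)[0]
    return phi, alpha
```

Note that the recovery of \(\phi_j\) from \(z_j\) via the complex logarithm is exactly where the Nyquist condition \(|\Im(\phi_j)|\Delta < \pi\) enters in the classical setting.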

(University of KwaZulu-Natal)

We revisit problems of double diffusive convection in a ferromagnetic fluid layer with temperature modulation, and of thermal instability in a porous layer saturated by a nanofluid, using a nanofluid model. We review and show how some classical results may be derived from the new configuration. In the case of nonlinear instability, we derive Ginzburg-Landau and Lorenz-type equations appropriate for the nanofluid. We demonstrate how a multi-domain spectral collocation method may be used to solve a system of amplitude evolution equations.
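To illustrate the kind of machinery such a solver composes, here is a minimal single-domain Chebyshev collocation sketch: the standard differentiation matrix construction (following Trefethen's "Spectral Methods in MATLAB"), applied to a simple linear boundary value problem. This is background only, not the speakers' multi-domain method or their amplitude equations:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and nodes x on [-1, 1].

    D @ u approximates u'(x) at the N+1 Chebyshev points; the construction
    is exact for polynomials of degree up to N.
    """
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)            # Chebyshev points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal: negative row sums
    return D, x

# Model problem: u'' = exp(4x) on [-1, 1] with u(-1) = u(1) = 0,
# imposed by deleting the boundary rows and columns of D^2.
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:N, 1:N]
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(D2, np.exp(4 * x[1:N]))
```

A multi-domain variant would build one such matrix per subinterval and couple them through continuity conditions at the interfaces.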

(Stellenbosch University)

It is important in many areas of science and engineering to be able to predict and simulate rare events with very small probabilities, say of the order of \(10^{-10}\), but whose occurrence may have negative or even catastrophic consequences. Examples include internet server overflows, mechanical breakdowns, floods, and financial crashes. Rare events can also have positive effects, for example, when triggering chemical reactions or driving genetic evolution via random mutations. In these examples, it is important not only to estimate the probability of rare events, but also to predict how these events happen following specific trajectories or mechanisms, called transition pathways.

Of course, one can't just simulate a given system to 'see' rare events, since they're rare! New types of simulations are required and usually involve biasing the system artificially to render a rare event typical. In this talk, I will give an overview of recent algorithms that have been developed in statistics, engineering, and physics to efficiently simulate rare events in Markov processes. I will focus on algorithms based on importance sampling, a general biasing technique for which optimality results can be obtained in the context of a theory of rare events known as large deviation theory.
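A textbook instance of the idea, assuming nothing beyond standard numpy: estimate the Gaussian tail probability \(P(X > 4) \approx 3.2\times 10^{-5}\) by exponentially tilting the sampling distribution so the rare event becomes typical, then reweighting by the likelihood ratio. This is an illustrative toy, not any specific algorithm surveyed in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
a, n = 4.0, 100_000          # threshold and sample budget

# Naive Monte Carlo: with P(X > 4) ~ 3.2e-5, a budget of 1e5 standard
# normal samples sees only a handful of hits (often none).
x = rng.standard_normal(n)
naive = np.mean(x > a)

# Importance sampling: draw from N(a, 1), under which {y > a} is typical,
# and reweight each sample by the likelihood ratio
#   phi(y) / phi(y - a) = exp(-a*y + a**2 / 2).
y = rng.standard_normal(n) + a
estimate = np.mean((y > a) * np.exp(-a * y + a**2 / 2))
```

Shifting the mean to the threshold \(a\) is precisely the exponential change of measure that large deviation theory identifies as asymptotically efficient for this event, which is why the reweighted estimator has small relative variance where the naive one fails.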

Participants will be added here soon. For now we encourage participants to register.