(TU Munich)

Nick Trefethen’s "SIAM 100-Digit Challenge" from 2002 was very much to the heart of Dirk Laurie’s mathematical tastes and, consequently, he became one of the single-handed winners of the contest. In its wake the two of us teamed up with Stan Wagon (USA) and Jörg Waldvogel (CH) to write a book about the allure of the ten problems and about the huge variety of mathematical and computational approaches available to solve them. Even though, to paraphrase Zhou Enlai, it may still be too early to tell what the impact of this challenge has been, we will revisit some of its aspects nearly twenty years later: philosophy, problems, solutions, and software, old and new.

(University of Antwerp)

The Nyquist constraint, which is the digital signal processing equivalent of stating that the argument of a complex exponential \(\exp(\phi\Delta)\) with \(\phi\in\mathbb{C}\) and \(\Delta\in\mathbb{R}^+\) can only be retrieved uniquely under the condition that \(|\Im(\phi)|\Delta < \pi\), has governed signal processing since the beginning of the 20th century. In the past two decades this constraint was first circumvented using randomly collected signal samples, and later for uniformly collected samples.

The latter method closely relates to the original exponential fitting algorithm published in 1795 by the French mathematician de Prony. Besides avoiding the Nyquist constraint, the result also solves a number of remaining open problems in exponential analysis, which we plan to discuss.

In the identification, from given values \(f_k \in \mathbb{C}\), of the nonlinear parameters \(\phi_1, \ldots, \phi_n \in\mathbb{C}\), the linear coefficients \(\alpha_1, \ldots, \alpha_n \in \mathbb{C}\) and the sparsity \(n \in \mathbb{N}\) in the inverse problem
\begin{equation}
\sum_{j=1}^n \alpha_j \exp(\phi_j k\Delta) = f_k, \qquad k=0, \ldots, 2n-1, \ldots, \qquad f_k \in \mathbb{C}, \quad \Delta \in \mathbb{R}^+, \qquad (1)
\end{equation}
several cases are considered to be hard:

- When some of the \(\phi_j\) cluster, the identification and separation of these clustered \(\phi_j\) becomes numerically ill-conditioned. We show how the problem may be reconditioned.
- From noisy \(f_k\) samples, retrieval of the correct value of \(n\) is difficult, and more so in case of clustered \(\phi_j\). Here, decimation of the data offers a way to obtain a reliable estimate of \(n\) automatically.
- Such decimation makes it possible to divide and conquer the inverse problem. The smaller subproblems are largely independent and can be solved in parallel, leading to improved complexity and efficiency.
- At the same time, the sub-Nyquist Prony method proves to be robust with respect to outliers in the data. Making use of some approximation theory results, we can also validate the computation of the \(\phi_j\) and \(\alpha_j\).
- The Nyquist constraint effectively restricts the bandwidth of the \(\Im(\phi_j)\). Therefore, avoiding the constraint offers so-called superresolution, or bandwidth extension, or the possibility to unearth higher frequency components in the samples.

All of the above can be generalized in several ways: to the use of functions other than the exponential on the one hand, and to the solution of multidimensional inverse problems as in (1) on the other.
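To fix ideas, here is a minimal sketch of the classical (noise-free, known-\(n\)) Prony scheme for problem (1); the function and variable names are ours, and the decimation, validation, and sub-Nyquist refinements discussed above go well beyond this basic version:

```python
import numpy as np

def prony(f, n, delta):
    """Classical Prony fit: recover phi_j and alpha_j from 2n samples
    f_k = sum_j alpha_j * exp(phi_j * k * delta), assuming exact data."""
    f = np.asarray(f, dtype=complex)
    # Hankel system for the characteristic-polynomial coefficients c_0..c_{n-1}:
    # f_{k+n} + c_{n-1} f_{k+n-1} + ... + c_0 f_k = 0
    H = np.array([[f[k + i] for i in range(n)] for k in range(n)])
    c = np.linalg.solve(H, -f[n:2 * n])
    # z_j = exp(phi_j * delta) are the roots of z^n + c_{n-1} z^{n-1} + ... + c_0
    z = np.roots(np.concatenate(([1.0], c[::-1])))
    phi = np.log(z) / delta          # needs |Im(phi_j)| * delta < pi
    # Linear least squares for the alpha_j via the Vandermonde system
    V = np.vander(z, N=len(f), increasing=True).T
    alpha, *_ = np.linalg.lstsq(V, f, rcond=None)
    return phi, alpha
```

Note that the branch cut in the logarithm is exactly where the Nyquist constraint enters this basic version.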

(University of Washington)

A widespread approach for tsunami simulation is to model the ocean with the shallow water equations (SWE). The steady-state ocean floor deformation that results from a seismic event is often used as the initial ocean surface profile for the SWE, taking advantage of the separation of the tsunami and seismic timescales when the event occurs far from the inundation area of interest. This work focuses instead on tsunamis generated by near-field seismic events and the data that might be measured by proposed seafloor sensors for use in tsunami early warning, as part of a project to study the feasibility and utility of an offshore network of sensors on the Cascadia Subduction Zone (cascadiaoffshore.org). The dynamic seismic/acoustic and tsunami waves are simulated by modeling the ground as an elastic solid that is coupled with an acoustic ocean layer. A gravity term is added to the ocean layer in order to capture the tsunami (gravity waves) generated by the acoustic waves. The AMRClaw and GeoClaw (www.clawpack.org) packages are used to simulate the resulting waves, and to explore some of the issues that arise.
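For reference, in one space dimension the shallow water equations mentioned above take the standard textbook form (with depth \(h\), velocity \(u\), gravitational constant \(g\), and bathymetry \(b\); this is the generic formulation, not necessarily the exact one implemented in GeoClaw):
\[
h_t + (hu)_x = 0, \qquad (hu)_t + \Bigl(hu^2 + \tfrac{1}{2}\,g h^2\Bigr)_x = -\,g\,h\,b_x .
\]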

(Kent State University)

It is important to be able to estimate the error in Gauss quadrature rules. Dirk Laurie introduced anti-Gauss quadrature rules for estimating the error in Gauss rules associated with a positive measure with support on an interval. Several generalizations of anti-Gauss quadrature rules will be described, including generalizations to matrix-valued measures and to quadrature rules associated with multiple orthogonal polynomials. This talk presents joint work with Hessah Alqahtani and Miroslav Pranić.
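As a brief illustration of Laurie's original construction (shown here for the Legendre weight on \([-1,1]\); function names are ours), the \((n+1)\)-point anti-Gauss rule \(A_{n+1}\) is obtained from the Gauss rule's Jacobi matrix by doubling the last recurrence coefficient \(\beta_n\), after which \((A_{n+1}(f)-G_n(f))/2\) serves as an error estimate for \(G_n(f)\):

```python
import numpy as np

def jacobi_legendre(m):
    # Legendre three-term recurrence: alpha_k = 0, beta_k = k^2/(4k^2 - 1)
    k = np.arange(1, m)
    return np.zeros(m), k**2 / (4.0 * k**2 - 1.0)

def golub_welsch(alpha, beta, mu0=2.0):
    # Nodes and weights from the symmetric tridiagonal Jacobi matrix
    J = np.diag(alpha) + np.diag(np.sqrt(beta), 1) + np.diag(np.sqrt(beta), -1)
    x, V = np.linalg.eigh(J)
    return x, mu0 * V[0, :]**2

def gauss_and_antigauss(n):
    a, b = jacobi_legendre(n + 1)
    xg, wg = golub_welsch(a[:n], b[:n - 1])   # n-point Gauss rule
    b_anti = b.copy()
    b_anti[-1] *= 2.0                          # Laurie's modification: beta_n -> 2*beta_n
    xa, wa = golub_welsch(a, b_anti)           # (n+1)-point anti-Gauss rule
    return (xg, wg), (xa, wa)
```

By construction \(A_{n+1}(p) = 2I(p) - G_n(p)\) for polynomials \(p\) of degree at most \(2n+1\), so the averaged rule \((G_n + A_{n+1})/2\) is exact to degree \(2n+1\).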

(University of KwaZulu-Natal)

We revisit problems of double diffusive convection in a ferromagnetic fluid layer with temperature modulation, and of thermal instability in a porous layer saturated by a nanofluid. We review and show how some classical results may be derived from the new configuration. In the case of nonlinear instability, we derive Ginzburg-Landau and Lorenz-type equations appropriate for the nanofluid. We demonstrate how a multi-domain spectral collocation method may be used to solve the resulting system of amplitude evolution equations.
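For orientation, the classical Lorenz equations, to which amplitude equations of the kind mentioned above reduce in the simplest convection setting, read
\[
\dot{X} = \sigma (Y - X), \qquad \dot{Y} = rX - Y - XZ, \qquad \dot{Z} = XY - bZ,
\]
with \(\sigma\) the Prandtl number, \(r\) the scaled Rayleigh number, and \(b\) a geometric factor; the nanofluid versions derived in the talk will of course differ in detail.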

(Stellenbosch University)

It is important in many areas of science and engineering to be able to predict and simulate rare events with very small probabilities, say of the order of \(10^{-10}\), but whose occurrence may have negative or even catastrophic consequences. Examples include internet server overflows, mechanical breakdowns, floods, and financial crashes. Rare events can also have positive effects, for example, when triggering chemical reactions or driving genetic evolution via random mutations. In these examples, it is important not only to estimate the probability of rare events, but also to predict how these events happen following specific trajectories or mechanisms, called transition pathways.

Of course, one can't just simulate a given system to 'see' rare events, since they're rare! New types of simulations are required and usually involve biasing the system artificially to render a rare event typical. In this talk, I will give an overview of recent algorithms that have been developed in statistics, engineering, and physics to efficiently simulate rare events in Markov processes. I will focus on algorithms based on importance sampling, a general biasing technique for which optimality results can be obtained in the context of a theory of rare events known as large deviation theory.
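As a toy illustration of importance sampling (not one of the specific algorithms of the talk), consider estimating the rare-event probability \(P(X > a)\) for a standard Gaussian by shifting the sampling mean to \(a\), which is the exponential tilt suggested by large deviation theory; all names below are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def rare_prob_is(a, n_samples=100_000):
    """Estimate P(X > a) for X ~ N(0,1) by sampling from the tilted
    density N(a, 1) and reweighting by the likelihood ratio."""
    y = rng.normal(loc=a, size=n_samples)   # samples from the tilted law
    w = np.exp(-a * y + 0.5 * a**2)         # ratio phi(y) / phi_a(y)
    return np.mean(w * (y > a))
```

For \(a = 4\), the true probability is about \(3.2 \times 10^{-5}\); crude Monte Carlo with the same budget would see only a handful of hits, while the tilted estimator achieves a small relative error because the "rare" event is typical under the sampling density.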

(Inria & Sorbonne Université)

Blood flow modeling has become quite mature through advances in multidomain and multiscale methods that take into account the interaction between a local change due to disease or intervention and the rest of the circulation. An important step in patient-specific simulations is model parameterization from data. We will show how numerical challenges in these areas have been addressed. We will then illustrate through different applications how hemodynamics modeling can help to ask ‘what if’ questions on the computer to better understand the effect of surgery in different disease states, or even to test novel surgical procedures. Modeling can also be combined with machine learning to anticipate the risk of disease progression. We will present different clinical cases from congenital heart disease, liver disease and acquired cardiovascular disease.

(University of South Australia)

What do the integers, the Sierpiński gasket, compact Riemann surfaces and the Heisenberg group have in common? Each of them is a space of homogeneous type \((X,d,\mu)\): a set \(X\) equipped with a way of measuring the distance between any two points (a quasi-metric \(d\)) and a way of measuring the volume of subsets of \(X\) (a doubling measure \(\mu\)). A familiar example is Euclidean space \(\mathbb{R}^n\) equipped with the Euclidean metric and Lebesgue measure. Spaces of homogeneous type arise in many areas, including several complex variables and Riemannian geometry.
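Concretely, the two structural conditions can be stated as follows: the quasi-metric \(d\) satisfies a quasi-triangle inequality and the measure \(\mu\) a doubling condition,
\[
d(x,y) \le K\bigl(d(x,z) + d(z,y)\bigr), \qquad \mu\bigl(B(x,2r)\bigr) \le C\,\mu\bigl(B(x,r)\bigr),
\]
for all \(x, y, z \in X\) and all \(r > 0\), with constants \(K \ge 1\) and \(C\) independent of the points and radii.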

The Calderón-Zygmund theory in harmonic analysis deals with singular integral operators and the functions on which they act. Early impetus came from problems in partial differential equations and Fourier analysis. Here we focus on the generalisation from functions defined on Euclidean spaces \(\mathbb{R}^n\) to functions defined on spaces \(X\) of homogeneous type. In particular, for general \(X\) the Fourier transform and the group structure of \(\mathbb{R}^n\) are missing.

The goal is to build a Calderón-Zygmund theory on spaces of homogeneous type. I will survey some recent progress towards this goal.
