Research

I am broadly interested in (magneto)hydrodynamic processes and their role in (astro)physical systems. Given my background in computer science, I often conduct and analyze simulations to understand MHD turbulence. My current research questions generally fall into the following categories:

Energy transfer in compressible MHD turbulence

Energy transfer is traditionally (i.e., in the incompressible hydrodynamic regime) expected to happen down-scale, i.e., from large scales to small scales (see also the turbulent cascade below). In magnetohydrodynamics the picture is more complex, as two energy reservoirs exist: kinetic and magnetic. Energy injected into one reservoir is no longer “simply” transferred down-scale; instead, energy exchange between reservoirs (and scales) takes place due to the nonlinear nature of the underlying equations. The process becomes even more complex when compressibility effects are taken into account, as additional channels of energy exchange are introduced.
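Scale-by-scale exchange of this kind is typically quantified with shell-to-shell transfer functions. The following NumPy sketch is an illustration of the idea, not our published analysis code: it computes the kinetic-to-kinetic transfer \(T(Q,K) = -\langle \mathbf{u}_K \cdot (\mathbf{u}\cdot\nabla)\,\mathbf{u}_Q \rangle\) between Fourier shells for a periodic unit box in the incompressible limit. The function names (`shell_filter`, `kinetic_shell_transfer`) and the shell definitions are illustrative assumptions.

```python
import numpy as np

def shell_filter(field_hat, k_mag, k_low, k_high):
    """Inverse transform keeping only Fourier modes with k_low <= |k| < k_high."""
    mask = (k_mag >= k_low) & (k_mag < k_high)
    return np.fft.ifftn(field_hat * mask).real

def kinetic_shell_transfer(u, shells):
    """Shell-to-shell kinetic energy transfer
    T(Q, K) = -< u_K . (u . grad) u_Q >
    for a velocity field u of shape (3, n, n, n) on a periodic unit box."""
    n = u.shape[1]
    k = np.fft.fftfreq(n, d=1.0 / n)                  # integer wavenumbers
    kgrids = np.meshgrid(k, k, k, indexing="ij")
    k_mag = np.sqrt(sum(kg ** 2 for kg in kgrids))
    u_hat = [np.fft.fftn(c) for c in u]
    # spectral derivative along direction with wavenumber grid kg (box length 1)
    deriv = lambda f, kg: np.fft.ifftn(2j * np.pi * kg * np.fft.fftn(f)).real
    T = np.zeros((len(shells), len(shells)))
    for qi, (ql, qh) in enumerate(shells):
        u_Q = [shell_filter(c, k_mag, ql, qh) for c in u_hat]
        # advection term (u . grad) u_Q, component by component
        adv = [sum(u[j] * deriv(u_Q[i], kgrids[j]) for j in range(3))
               for i in range(3)]
        for ki, (kl, kh) in enumerate(shells):
            u_K = [shell_filter(c, k_mag, kl, kh) for c in u_hat]
            T[qi, ki] = -np.mean(sum(u_K[i] * adv[i] for i in range(3)))
    return T
```

A positive entry \(T(Q,K)\) then indicates that shell \(K\) gains kinetic energy from shell \(Q\); the compressible and magnetic channels discussed above add further transfer terms of the same structure.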

Several past studies have analyzed the complex nature of energy transfer in the incompressible MHD regime. We are interested in systems where compressible effects, e.g., shock fronts (from galaxy mergers in galaxy clusters), are potentially an important factor in the overall dynamics. For this reason, we are currently extending established analysis techniques to the compressible regime. First results have recently been published. Eventually, this analysis will allow us to quantify compressibility effects in different regimes and will support the development of appropriate subgrid-scale models.

Subgrid-scale modeling of MHD turbulence

Phenomenologically, (incompressible hydrodynamic) turbulence is often described by the successive break-up of large eddies into smaller and smaller eddies. In this picture, the system is driven on the largest scales, where energy is injected. This energy then cascades down to smaller scales by inertial motions until it eventually dissipates. Even in moderate systems the separation between the largest and smallest scales can already be quite large. In astrophysical systems this separation easily spans many orders of magnitude. The resulting dynamic range is beyond what can reasonably be simulated directly - even on the most powerful supercomputers.
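A back-of-the-envelope estimate makes the dynamic-range problem concrete. Using the standard Kolmogorov scaling (and ignoring prefactors of order unity), the dissipation scale \(\eta\) relates to the driving scale \(L\) via \(L/\eta \sim \mathrm{Re}^{3/4}\), so a 3D uniform grid needs roughly \(\mathrm{Re}^{3/4}\) cells per dimension:

```python
# Kolmogorov phenomenology: L/eta ~ Re^(3/4), so resolving both the driving
# and the dissipation scale on a 3D uniform grid requires ~Re^(3/4) cells
# per dimension, i.e. ~Re^(9/4) cells in total (prefactors ignored).

def cells_required(reynolds):
    per_dim = reynolds ** 0.75
    return per_dim, per_dim ** 3
```

Already at \(\mathrm{Re} = 10^4\) this gives about \(1000\) cells per dimension, i.e., \(10^9\) cells in total; astrophysical Reynolds numbers many orders of magnitude larger push the count far beyond any existing machine.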

One concept to circumvent this problem is a large eddy simulation (LES). In an LES only the largest scales are simulated directly. Scales smaller than the grid spacing \(\Delta\) (i.e., beyond the resolution limit \(1/\Delta\)) are unresolved. However, they can be reintroduced into the simulation by means of a subgrid-scale (SGS) model, which depends only on quantities that are resolved in the simulation.
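As a minimal sketch of what such a closure looks like in practice, the following implements the classic hydrodynamic eddy-viscosity model of Smagorinsky (a traditional baseline, not the nonlinear MHD model discussed below): the SGS stress is modeled as \(\tau_{ij} = -2 (C_s \Delta)^2 |S| S_{ij}\), built entirely from the resolved strain rate. The function name and the commonly quoted constant \(C_s \approx 0.17\) are illustrative assumptions.

```python
import numpy as np

def smagorinsky_stress(u, dx, c_s=0.17):
    """Classic (hydrodynamic) Smagorinsky SGS stress
    tau_ij = -2 (c_s * dx)^2 |S| S_ij
    for a 3D velocity field u with shape (3, n, n, n) and grid spacing dx."""
    # resolved strain-rate tensor S_ij = 0.5 * (du_i/dx_j + du_j/dx_i)
    grad = np.array([[np.gradient(u[i], dx, axis=j) for j in range(3)]
                     for i in range(3)])
    S = 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))
    S_mag = np.sqrt(2.0 * np.sum(S * S, axis=(0, 1)))  # |S| = sqrt(2 S_ij S_ij)
    nu_t = (c_s * dx) ** 2 * S_mag                      # eddy viscosity
    return -2.0 * nu_t * S
```

Note that the model sees only the resolved field `u`; everything below the grid scale enters solely through the model coefficient and the filter width `dx`.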

While the subject of SGS modeling is well established in the incompressible hydrodynamic regime, there is comparatively little research in the compressible magnetohydrodynamic regime. Given that the latter regime is relevant for astrophysics, e.g., in the interstellar medium, we are working on developing new SGS models that are particularly capable of capturing compressibility effects. Our latest nonlinear model has been shown to yield better results than traditional models, e.g., eddy-viscosity or scale-similarity type models, in both a priori tests and a posteriori tests.
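For readers unfamiliar with the terminology: in an a priori test one filters high-resolution data, computes the exact SGS stress \(\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j\), and correlates it point-by-point with a model's prediction (an a posteriori test instead runs a full LES with the model). The following NumPy sketch illustrates the a priori procedure; the helper names (`box_filter`, `a_priori_correlation`) and the simple top-hat filter are illustrative assumptions, not our actual test pipeline.

```python
import numpy as np

def box_filter(f, w):
    """Symmetric top-hat filter spanning 2*(w//2)+1 cells per axis (periodic)."""
    half = w // 2
    for axis in range(f.ndim):
        f = sum(np.roll(f, s, axis=axis) for s in range(-half, half + 1))
        f = f / (2 * half + 1)
    return f

def a_priori_correlation(u, model_tau, w=4):
    """Correlate the exact SGS stress tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j)
    (from filtering high-resolution data u of shape (3, n, n, n)) with a
    model prediction model_tau of shape (3, 3, n, n, n), component-wise."""
    ub = [box_filter(c, w) for c in u]
    corr = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            exact = box_filter(u[i] * u[j], w) - ub[i] * ub[j]
            corr[i, j] = np.corrcoef(exact.ravel(),
                                     model_tau[i, j].ravel())[0, 1]
    return corr
```

A high correlation coefficient indicates that the model reproduces the local structure of the true subgrid stress, which is the sense in which one model "yields better results" than another in a priori comparisons.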

Driving mechanisms in astrophysical systems

Driving, i.e., energy and momentum input, is ubiquitous in astrophysical systems and comes in many different forms, such as feedback from supernovae or from galaxy mergers. One important open question concerns the influence of the driving mechanism on the statistical properties of turbulence. Statistical properties can often be observed directly, e.g., the extent of non-Gaussian statistics of the velocity field in the interstellar medium, and are thus used to deduce physical properties of astrophysical systems. Previous studies have analyzed in detail the influence of the ratio between compressive and solenoidal modes in the driving mechanism. Less attention has been paid to the autocorrelation time of the forcing, i.e., the timescale on which the driving mechanism evolves. We are currently working on understanding and quantifying physically realistic driving mechanisms, i.e., ones that evolve on a finite timescale within the system.
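A forcing with a finite autocorrelation time is commonly modeled as an Ornstein-Uhlenbeck process, whose autocorrelation decays as \(\exp(-t/t_{\mathrm{corr}})\). As an illustrative sketch (assuming the usual exact discrete OU update, not our production driving module), the following evolves a single forcing amplitude:

```python
import numpy as np

def ou_forcing(n_steps, dt, t_corr, sigma, rng=None):
    """Evolve one forcing amplitude as an Ornstein-Uhlenbeck process with
    autocorrelation time t_corr and stationary standard deviation sigma."""
    rng = rng or np.random.default_rng()
    a = np.zeros(n_steps)
    decay = np.exp(-dt / t_corr)          # exact decay factor over one step
    kick = sigma * np.sqrt(1.0 - decay ** 2)
    for i in range(1, n_steps):
        a[i] = a[i - 1] * decay + kick * rng.standard_normal()
    return a
```

The limit \(t_{\mathrm{corr}} \to 0\) recovers delta-correlated (white-noise) forcing, while large \(t_{\mathrm{corr}}\) approaches a quasi-static drive; the physically interesting regime lies in between, where \(t_{\mathrm{corr}}\) is comparable to the dynamical timescales of the system.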

Turbulent magnetic fields in the early Universe

Supermassive black holes of \(10^9\) solar masses have been observed in the early Universe at redshifts \(z>6\). The early formation of such massive objects is still poorly understood, and different scenarios are currently being investigated. We are studying direct collapse scenarios with particular emphasis on the influence of turbulent magnetic fields. We do so by running cosmological simulations with Enzo that employ the novel nonlinear subgrid-scale model for MHD turbulence described above. Differences in the stability of accretion disks and in their fragmentation are expected due to magnetic torques and angular momentum transport by unresolved magnetic fields.

Numerical methods

When relying on simulations to study processes that are otherwise difficult to observe (in experiments or in nature), for example MHD turbulence, it is important to understand the potential impact of the numerical method on the results.

We work with different codes, e.g., Enzo or Athena, and conduct simulations under (almost) identical conditions. This allows us to quantify the effects of different numerical methods and their implementations in practice.

In addition, we are preparing the transition to next-generation, exascale systems. While the final architectures and designs are not yet determined, recent years have shown that the entire environment is quite dynamic, e.g., the cancellation of Intel’s Xeon Phi series, so that performance portability is a key requirement in current code development. Performance portability broadly refers to writing code once that performs well (in the sense of making efficient use of the architecture) on multiple different architectures, e.g., CPUs or GPUs.

We are currently exploring Kokkos, a C++ library that abstracts parallel execution and (hierarchical) memory management. Part of this exploration is porting a modern C++ astrophysical MHD code to this approach, eventually allowing us to run the code on large GPU supercomputers such as Summit in the near future.