Philipp Grete

I am a Marie Skłodowska-Curie Actions Postdoctoral Fellow at the Hamburg Observatory. Previously, I was a postdoctoral research associate in the Department of Physics & Astronomy at Michigan State University.
In general, my interdisciplinary research covers both physics, e.g., (astrophysical) plasma modeling including magnetohydrodynamic processes and their role in (astro)physical systems, and computer science, e.g., parallelization and high performance computing, as well as topics in between such as computational fluid dynamics and numerical methods. Currently, I am working on analyzing and characterizing energy transfer in compressible MHD turbulence, understanding driving and feedback mechanisms in astrophysical systems, and performance portable programming models for exascale computing.
Contact information
Office Room: 105, Direktorenvilla
Email pgrete [at] hs.uni-hamburg.de
Phone +49 40 42838 8536
Address Hamburg Observatory
University of Hamburg
Gojenbergsweg 112
21029 Hamburg
Funding My research is currently supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101030214.


News and upcoming events

May 2023 CUG23 Best Paper Runner-up Award
We (J. K. Holmen, P. Grete, and V. G. Melesse Vergara) submitted a paper to the Cray User Group Meeting 2023 in Helsinki on Early Experiences on the OLCF Frontier System with AthenaPK and Parthenon-Hydro. While I attended the Athena++ Workshop at the Center for Computational Astrophysics, Flatiron Institute, to give an invited talk on AthenaPK, John Holmen presented our accepted paper at CUG23. It was well received and awarded the Best Paper Runner-up Award. It will therefore be published later this year in a special issue of Concurrency and Computation: Practice and Experience. Please contact me if you are interested in a preprint.

Earlier in April, we were invited to give a talk on Parthenon in the EE04: V: Exascale Computational Astrophysics session at the annual April Meeting of the American Physical Society in Minneapolis, MN. The talk, presented by F. Glines, is covered along with the other talks in a more detailed press release.
May 2023 Paper updates
Over the past months, several projects have come to a successful conclusion.

As a matter of dynamical range — scale dependent energy dynamics in MHD turbulence by Grete, O'Shea, and Beckwith (see below) was accepted without any major revision for publication in The Astrophysical Journal Letters. The final full text (open access) can be found at doi:10.3847/2041-8213/acaea7.

A morphological analysis of the substructures in radio relics by Wittor, Brüggen, Grete, and Rajpurohit was published in Monthly Notices of the Royal Astronomical Society. In both observations and simulations of radio relics we find that both the magnetic field and the shock front consist of filaments and ribbons that cause filamentary radio emission (i.e., it is not a projection effect of sheet-like structures). The full text (and associated code) can be found at doi:10.1093/mnras/stad1463 and arXiv:2305.07046.

The Launching of Cold Clouds by Galaxy Outflows. V. The Role of Anisotropic Thermal Conduction by Brüggen, Scannapieco, and Grete was published in The Astrophysical Journal. It is the first (astro)physics application paper featuring simulations with AthenaPK for which we implemented a second-order accurate Runge–Kutta–Legendre super-time-stepping approach for diffusive fluxes to make these simulations feasible. The final full text (open access) can be found at doi:10.3847/1538-4357/acd63e.
Apr 2023 Girls' Day at the Hamburg Observatory
On 27 April we welcomed over 20 girls from high schools to the Observatory for this year's Girls' Day. In addition to projects on spectroscopy and radio observations, we also offered two on computing this year. First, using the supercomputer model I built for the Open Day (see below), we interactively demonstrated domain decomposition, load balancing, and (the limits of) numerical algorithms. Second, using AthenaPK the participants were able to explore simulations using adaptive mesh refinement and even create their own input file to study the interaction of a blast wave with its surroundings, see video.

PS: The video resolution is significantly higher than the small preview. Click on the full-screen button (bottom right) to enlarge.
Dec 2022 INCITE award
I am happy to share that we (i.e., B. O'Shea, P. Grete, F. Glines, D. Prasad) received an INCITE award this year.
Starting in Jan 2023, we will use the first TOP500 exascale supercomputer Frontier to study Feedback and energetics from magnetized AGN jets in galaxy groups and clusters using AthenaPK. The capabilities of Frontier will allow us to conduct the highest dynamical range studies of the diffuse intragroup and intracluster plasmas ever achieved, studying the accretion of gas onto supermassive black holes, the impact of the resulting AGN jets on the interstellar and circumgalactic plasma (cf., the illustration by D. Prasad on the right), and the relationship between these phenomena.
We will also receive support for visualizing the results. Stay tuned for regular updates.
Nov 2022 As a matter of dynamical range — scale dependent energy dynamics in MHD turbulence
I submitted our latest paper in what can now probably be called the As a matter of ... series. While the paper itself is not too long, we present two important results.
First, contrary to hydrodynamic turbulence, cross-scale (i.e., from large to small scales) energy fluxes are neither constant with respect to the mediator (such as advection or magnetic tension) nor with respect to dynamical range (or Reynolds number). Given that we do not observe any indication of convergence, this calls into question whether an asymptotic regime in MHD turbulence exists and, if so, what it looks like, which is highly relevant to the modeling community (and, thus, to interpreting observations).
Second, we demonstrate how to measure numerical dissipation in implicit large eddy simulations (ILES, such as shock-capturing finite volume methods) and that ILES correspond to direct numerical simulations (DNS) with equivalent explicit dissipative terms included.

As usual, the full text of the submitted version is available as a preprint at arXiv:2211.09750.
Oct 2022 Parthenon has a new home and first developer meeting
In early October we organized the first (excluding the kickoff meeting in Jan 2020) larger Parthenon developer meeting. More than 20 people from 7 different institutions attended in person. We had a very productive week with many mid- to long-term strategic discussions. The results and plans forward can be found in various GitHub issues.

One non-technical result is that Parthenon now has a new home, the Parthenon HPC Lab organization on GitHub. The organization hosts not only the Parthenon code itself but also various downstream codes, including the parthenon-hydro miniapp and AthenaPK.
Oct 2022 Open Day at the Observatory
After a two-year break, an Open Day at the Observatory was finally possible again. The program was packed with talks, Q&A sessions, guided tours, interactive exhibitions, public observing, and more.
I built a new Raspberry Pi based supercomputer model similar to SparPi at MSU (see below) and used it for interactive demonstrations of simulations in astrophysics. Again, visitors of all ages enjoyed the various aspects of the setup, ranging from controlling gravity (do things flow up if gravity is negative?) and viscosity (water or honey?) to concepts of parallel computing and how gas/plasma is treated as a fluid in many astrophysical simulations.

In preparation for the Open Day, several generic pictures in the main building were replaced by illustrations of research conducted at the observatory.
This includes an illustration of a current project between Marcus Brüggen, Evan Scannapieco, and me on cloud crushing simulations with magnetic fields and anisotropic (with respect to the local direction of the magnetic field) thermal conduction.
The illustration shows a volume rendering of the plasma density with the (adaptively refined) simulation grid (top left), velocity streamlines in gray and strong vortical motions in orange (bottom left), magnetic field lines that drape around the cloud (bottom right), and an observation of a "jellyfish" galaxy for reference (top right).
A paper covering this project is in preparation.
Sep 2022 Parthenon runs at scale on the first TOP500 exascale supercomputer Frontier

Over the past months, I have been working with John Holmen at OLCF on using the parthenon-hydro miniapp in acceptance testing of Frontier, the first TOP500 exascale supercomputer.
Now I can finally share the results and am happy to report that we reached 92% weak scaling parallel efficiency going from a single node all the way to 9,216 nodes. At that scale the simulation ran on 73,728 GPUs in parallel.
The updated results including additional scaling plots are included in the revised version of the Parthenon paper.
Sep 2022 Teaching paper published
In 2019 I took part in the Professional Development Program of the Institute for Scientist & Engineer Educators as a design team leader, aiming to develop an inquiry activity on Sustainable Software Practices with Collaborative Version Control.
The module was implemented multiple times (both in person and online) and was well received. Therefore, we decided to prepare a more formal write-up.
The results are now published as Frisbie, R. L., Grete, P., & Glines, F. W. (2022). An Inquiry Approach to Teaching Sustainable Software Development with Collaborative Version Control. UC Santa Cruz: Leaders in effective and inclusive STEM: Twenty years of the Institute for Scientist & Engineer Educators.
The article is open access and released under a CC-BY 4.0 license.
Aug 2022 Conferences/workshops
After a long streak of virtual meetings, I was very happy to travel again and have a much more productive exchange with colleagues in person:
  • 6th ICM Theory and Computation Workshop: I presented results from a soon-to-be-submitted paper on As a matter of dynamical range – scale-dependent energy dynamics in magnetized turbulence. Slides are available here.
  • Plasma Physics Meets AI: Workshop on Subgrid-scale Modeling for Turbulence was the first in-person meeting of a new collaboration between the Astroinformatics group at Aalto University, Michigan State University, and the Computer and Information Sciences group at Helsinki University. As the name suggests, one goal of the collaboration is to develop and evaluate more modern approaches to modeling of plasma turbulence in astrophysics (extending the approach in my PhD thesis). For this reason, I gave an overview of Subgrid-scale modeling of magnetized turbulence in astrophysics.
Jun 2022 Turbulence driver in AthenaPK
I finished porting the turbulence driver originally implemented in Athena and K-Athena to AthenaPK. As before, the driver allows control of the ratio of compressive to solenoidal modes in the driving, the energy injection scales and power, and the correlation time of the stochastic process describing the evolution of the driving field.
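For readers curious about the mechanics: the driving field evolves as an Ornstein-Uhlenbeck process in Fourier space. The snippet below is a minimal sketch of the update for a single mode, assuming a simple stationary-variance parameterization; the names (ModeState, UpdateMode) are made up for illustration and do not correspond to the AthenaPK implementation.

```cpp
// Minimal sketch of an Ornstein-Uhlenbeck update for one Fourier mode of the
// driving field. Names and parameterization are illustrative only.
#include <array>
#include <cmath>
#include <random>

struct ModeState {
  std::array<double, 3> a{};  // current forcing amplitude of this mode
};

// Advance one mode by dt: exponential decay toward zero plus a Gaussian kick,
// chosen such that the autocorrelation time equals t_corr and the stationary
// standard deviation equals sigma.
void UpdateMode(ModeState &m, double dt, double t_corr, double sigma,
                std::mt19937 &gen) {
  const double drift = std::exp(-dt / t_corr);
  const double diffusion = sigma * std::sqrt(1.0 - drift * drift);
  std::normal_distribution<double> gauss(0.0, 1.0);
  for (int d = 0; d < 3; ++d) {
    m.a[d] = drift * m.a[d] + diffusion * gauss(gen);
  }
}
```

Roughly speaking, the full driver additionally splits each mode (in Fourier space) into its solenoidal and compressive parts and recombines them with the user-chosen weight before the acceleration field is applied to the gas.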
Apr 2022 GCS Large Scale project proposal accepted
Earlier this year I submitted a computing time proposal on The role of low collisionality in compressible, magnetized turbulence to support my Marie Skłodowska-Curie Actions research.
The proposal was well received, and I was awarded 7.5 million core-h on the JUWELS Booster Module for the simulations and 1 million core-h on the JUWELS Cluster Module (CPU) for post-processing. From May 2022 to April 2023 I will use the time to conduct and analyze simulations with anisotropic transport processes. Here, anisotropic refers to the transport of energy and momentum with respect to the local magnetic field direction.

Texascale days - parthenon-hydro using 458,752 ranks/cores
I got the chance to conduct scaling tests of parthenon-hydro on TACC's Frontera supercomputer. At the largest scale I ran a simulation on 8,192 nodes using almost half a million CPU cores (with one MPI rank per core). At this scale parthenon-hydro still reached 93% weak scaling parallel efficiency (compared to running on a single node). The more detailed results will be included in a revision of the Parthenon paper.
Mar 2022 Second-order, RKL-based operator split super timestepping in AthenaPK
Parabolic terms (like the ones describing thermal conduction or viscosity/momentum diffusion) can put a severe constraint on the explicit timestep in a simulation. To circumvent this issue, various solutions exist, including implicit and super timestepping methods, where multiple subcycles are taken during one large (hyperbolic) timestep. The latter is particularly suitable for highly parallelized, finite volume codes like AthenaPK. For this reason, I have implemented the second-order accurate method by Meyer, Balsara, and Aslam (2014) in AthenaPK in preparation for running simulations with anisotropic transport processes.
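The payoff of the scheme is that the number of sub-stages grows only with the square root of the ratio between the hyperbolic and parabolic timesteps, rather than linearly as for plain subcycling. The following is a rough sketch based on the RKL2 stability limit quoted in Meyer, Balsara, and Aslam (2014), not the AthenaPK code; the two-term recurrence that actually advances the solution through the stages is omitted.

```cpp
// Sketch: choose the number of RKL2 stages needed to cover one hyperbolic
// timestep dt_hyp given the explicit parabolic (diffusive) stability limit
// dt_par, using the RKL2 limit dt <= dt_par * (s^2 + s - 2) / 4.
#include <cmath>

int NumberOfRKL2Stages(double dt_hyp, double dt_par) {
  const double ratio = dt_hyp / dt_par;
  // Smallest integer s with (s^2 + s - 2)/4 >= ratio.
  int s = static_cast<int>(
      std::ceil(0.5 * (-1.0 + std::sqrt(9.0 + 16.0 * ratio))));
  // An odd stage count is commonly used for RKL2.
  if (s % 2 == 0) s += 1;
  // Enforce a minimal stage count for this sketch.
  return s < 3 ? 3 : s;
}
```

For example, a diffusive limit 100 times more restrictive than the hyperbolic timestep then requires only 21 stages instead of 100 plain subcycles.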
Feb 2022 Parthenon paper submitted
The Parthenon collaboration just submitted the first code paper describing the key features of Parthenon - a performance portable block-structured adaptive mesh refinement framework. The preprint is available on the arXiv:2202.12309.
Key features that are described in detail in the paper are
  • High performance by
    • device first/device resident approach (work data only in device memory to prevent expensive transfers between host and device)
    • transparent packing of data across blocks (to reduce/hide kernel launch latency; see the sketch after this list)
    • direct device-to-device communication via asynchronous, one-sided MPI communication
  • Intermediate abstraction layer to hide complexity of device kernel launches
  • Flexible, plug-in package system
  • Abstract variables controlled via metadata flags
  • Support for particles
  • Multi-stage drivers/integrators with support for task-based parallelism
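To illustrate the packing idea mentioned above: instead of launching one small kernel per meshblock (and paying the launch latency every time), the data of many blocks are exposed through a single packed view and updated in one kernel. The following is a generic Kokkos sketch of the concept only; it is not the Parthenon API, and the view layout and function name are assumptions for illustration.

```cpp
// Conceptual sketch (not the Parthenon API): update many meshblocks in a
// single kernel by iterating over a packed, 5D view indexed by
// (block, variable, k, j, i) instead of launching one kernel per block.
#include <Kokkos_Core.hpp>

using Pack = Kokkos::View<double *****>;  // (block, var, k, j, i)

void ScaleAllBlocks(const Pack &pack, double factor) {
  const int nb = pack.extent(0), nv = pack.extent(1);
  const int nk = pack.extent(2), nj = pack.extent(3), ni = pack.extent(4);
  Kokkos::parallel_for(
      "scale_all_blocks",
      Kokkos::MDRangePolicy<Kokkos::Rank<5>>({0, 0, 0, 0, 0},
                                             {nb, nv, nk, nj, ni}),
      KOKKOS_LAMBDA(int b, int v, int k, int j, int i) {
        pack(b, v, k, j, i) *= factor;
      });
}
```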
Jan 2022 parthenon-hydro miniapp public
I am happy to announce that parthenon-hydro is now available on GitHub at https://github.com/pgrete/parthenon-hydro. It is a finite volume, compressible hydrodynamics sample implementation using the performance portable adaptive mesh framework Parthenon and Kokkos. It is effectively a simplified version of AthenaPK demonstrating the use of the Parthenon interfaces (including the testing framework) with a use-case that is more complex than the examples included in Parthenon itself.
Nov 2021 Conference presentations
Last month I got the opportunity to present a pre-recorded talk at the virtual RAS specialist meeting on Galactic magnetic fields: connecting theory, simulations, and observations. My short talk on As a Matter of Tension: Kinetic Energy Spectra in MHD Turbulence is now online and available on YouTube.

This month I also attended the virtual IAU Symposium 362 on The predictive power of computational astrophysics as discovery tool and gave a presentation on Performance Portable Astrophysical Simulations: Challenges and Successes with K-Athena (MHD) and AthenaPK/Parthenon (AMR).
Oct 2021 Welcome Hamburg!
This month I officially started my Marie Skłodowska-Curie Actions Postdoctoral Fellowship at the Hamburg Observatory.
I will study processes in weakly collisional plasmas as they are found, for example, in the intracluster medium with a specific emphasis on effects from anisotropic transport processes such as thermal conduction and viscosity.
To this end, I am in the process of implementing the necessary numerical algorithms in AthenaPK, allowing for detailed simulations on current and next-generation supercomputers.
Aug 2021 Turbulence in the intragroup and circumgalactic medium
Our latest paper by W. Schmidt, J. P. Schmidt and P. Grete has been accepted for publication in Astronomy & Astrophysics, see preprint at arXiv:2107.12125.
We conduct and analyze cosmological zoom-in simulations of dark-matter filaments. One key result is that the scaling law between the turbulent velocity dispersion and thermal energy observed in massive objects, such as galaxy clusters, also extends to lower-mass objects down to the CGM of individual galaxies.

Update (Oct 2021): The paper has now been published, see doi:10.1051/0004-6361/202140920.
Jun 2021 First AthenaPK scaling results
I recently conducted scaling tests on Summit up to 12,288 GPUs (2,048 nodes) using AthenaPK. While the parallel efficiency is not as good as with K-Athena (yet!), the results are nevertheless very encouraging. Using the GPUs still results in a ≥16x speedup compared to using all CPU cores. Moreover, this also pertains to simulations with mesh refinement, which are not supported by K-Athena on GPUs.
Finally, I also tested parallel IO at scale using Parthenon's parallel HDF5 output. Again, the results were very encouraging. Even writing files of 6 TB in size at approximately 350 GB/s did not cause any issues.
Apr 2021 Latest papers published
The latest two papers are now available on the homepage of the respective publishers:
  • P. Grete, B. W. O'Shea, and K. Beckwith 2021 As a Matter of Tension: Kinetic Energy Spectra in MHD Turbulence ApJ 909 148
  • F. W. Glines, P. Grete, and B. W. O'Shea 2021 Magnetized decaying turbulence in the weakly compressible Taylor-Green vortex Phys. Rev. E 103, 043203
LRAC proposal renewed
Despite hitting some roadblocks in the process of adapting to a new supercomputer in the first year, our Leadership Resource Allocation (LRAC) on Frontera has been renewed for a second year. I will continue to run simulations to study the role of low collisionality in compressible, magnetized turbulence with an emphasis on the hot, dilute plasma found in the intracluster medium.

MHD support in AthenaPK
Over the past two months many new features have been added to AthenaPK. This includes first-, second-, and third-order hydrodynamics as well as support for magnetohydrodynamics simulations using the generalized Lagrange multiplier (GLM) divergence cleaning approach by Dedner et al. (2002); see the README in the repository for a complete list of supported features.
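As a brief aside on the cleaning approach: in the mixed GLM formulation, divergence errors are advected away by an additional scalar field psi and damped at the same time. Below is a conceptual sketch of the damping step only; the function name and operator splitting are assumptions for illustration (not the AthenaPK implementation), and the hyperbolic transport part handled in the Riemann solver is omitted.

```cpp
// Conceptual sketch of the damping (source-term) part of mixed GLM divergence
// cleaning (Dedner et al. 2002). Exact integration of
// d(psi)/dt = -(c_h^2 / c_p^2) * psi over one step dt, where c_h is the
// cleaning (transport) speed and c_p controls the damping.
#include <cmath>

inline double DampPsi(double psi, double dt, double c_h, double c_p) {
  return psi * std::exp(-dt * (c_h * c_h) / (c_p * c_p));
}
```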

As before, AthenaPK follows a "device first" design in order to achieve high performance. In other words, there are no explicit data transfers between the host and devices (such as GPUs) because of the comparatively slow data transfer between them.

Finally, a Parthenon team applied and was accepted to the Argonne GPU Hackathon 2021. Over the course of several days we will work with mentors on improving the overall performance with an emphasis on load balancing and mesh refinement on GPUs.
Feb 2021 Marie Skłodowska-Curie Actions
I am happy to share that my proposal on Unraveling effects of anisotropy from low collisionality in the intracluster medium will receive funding under the Marie Skłodowska-Curie Actions. Later this year I will move to Hamburg and work with Marcus Brüggen at the Hamburg Observatory.

AthenaPK is public
I am also happy to share that the (magneto)hydrodynamics code built on top of Parthenon is now publicly available. While it is still in development, second-order hydrodynamics with mesh refinement on devices (such as GPUs) is working.
More details can be found in the repository. As with Parthenon, we welcome any questions and feedback, and we would be happy to hear from you if you want to get directly involved.
Jan 2021 Happy New Year!

I am happy to report that our development code AthenaPK is now capable of running second-order, compressible hydrodynamics with adaptive mesh refinement (AMR) fully on GPUs (see the movie above).
AthenaPK will implement the (magneto)hydrodynamic methods of Athena++ on top of the performance portable AMR framework Parthenon and Kokkos.
We are currently working on further improving AMR performance for small block sizes. Stay tuned for further updates.
Sep 2020 Research update
We recently submitted two papers:
  • First, on the role of magnetic tension in MHD turbulence and how it results in the kinetic energy spectra becoming more shallow by suppressing the kinetic energy cascade, see Grete, O'Shea, and Beckwith arXiv:2009.03342.
  • Second, on the statistics and energy dynamics in the compressible, magnetized Taylor-Green vortex (a canonical transition to turbulence and decay problem) where we also demonstrate the importance of magnetic tension even in the presence of weak magnetic fields, see Glines, Grete, and O'Shea arXiv:2009.01331.
Computational Astrophysics Splinter Meeting Program is online
  • The call for proposals was well received and our Computational Astrophysics session received the largest number of submissions. Thus, we scheduled an additional session on Thursday (24 Sep) morning in addition to the morning and afternoon sessions on Friday (25 Sep).
    The full program, which spans Galaxy Formation and Assembly, Stars and Stellar Clusters, Galaxy Evolution, Protoplanetary Discs and Exoplanets, and Computational Methods and Algorithms, is available online here.
Jul 2020 Splinter Meeting on Computational Astrophysics
I am co-organizing the splinter meeting on Computational Astrophysics as part of the virtual German Astronomical Society Meeting, Sep 21-25, 2020.
Abstract submission is open until 15 August. The general meeting registration is open until 15 September and there is no registration fee.
Splinter abstract: Numerical simulations are a key pillar of modern research. This is especially true for astrophysics, where the availability of detailed spatial and temporal data from observations is often sparse for many systems of interest. In many areas large-scale simulations are required, e.g., in support of the interpretation of observations, for theoretical modeling, or in the planning of experiments and observation campaigns. The need for and relevance of large-scale simulations in astrophysics is reflected in a significant share of 25-30% of the overall German supercomputing time. While the supercomputing landscape has been stable for a long time, it started to change in recent years on the path towards the first exascale supercomputer. New technologies such as GPUs for general purpose computing, ARM based platforms (versus x86 platforms), and manycore systems in general have been introduced and require rethinking and revisiting traditional algorithms and methods. This splinter meeting will bring together experts in computational astrophysics from all fields covering (but not limited to) fluid-based methods (from hydrodynamics to general relativistic magnetohydrodynamics), kinetic simulations, radiation transport, chemistry, and N-body dynamics applied to astrophysical systems on all scales, e.g., supernovae, planetary and solar dynamos, accretion disks, interstellar, circumgalactic, and intracluster media, or cosmological simulations. The goal of this meeting is to present and discuss recent developments in computational astrophysics and their application to current problems. Thus, contributions involving large-scale simulations and new methods/algorithms are specifically welcome. In addition to astrophysical results obtained from simulations, speakers are also encouraged to highlight numerical challenges they encountered and how they addressed those in their codes. These may include, but are not limited to, new algorithms (e.g., higher-order methods), changing HPC environments (e.g., manycore, GPUs, or FPGAs), or data storage (e.g., availability of space, sharing, or long term retention).

K-Athena paper accepted for publication
After a revision that primarily addressed details in the roofline analysis, the K-Athena paper has been accepted for publication in IEEE Transactions on Parallel and Distributed Systems.
Please find the final (early access) version at doi:10.1109/TPDS.2020.3010016 as well as an updated preprint at arXiv:1905.04341.
Apr 2020 Postdoctoral Excellence in Research Award
I am happy to report that I am one of the recipients of this year's MSU Postdoctoral Excellence in Research Award. This annual award for two postdocs at MSU from all disciplines is provided by the Office of the Senior Vice President for Research and Innovation and the MSU Office of Postdoctoral Affairs and recognizes exceptional contributions to MSU and the greater research community.
I will give a brief presentation of my work during the (this year, virtual) reception in mid-May.
Mar 2020 Leadership Resource Allocation on Frontera
  • The proposal on The role of low collisionality in compressible, magnetized turbulence was successful and awarded a Leadership Resource Allocation (LRAC) computing grant on NSF's latest flagship supercomputer Frontera. The total amount of 500,000 CPU node-hours and 50,000 GPU node-hours will allow us to study turbulence in multiple regimes at unprecedented detail. This includes the intracluster medium (ICM) - the hot, diffuse, fully ionized plasma in clusters of galaxies.
Parthenon is public
  • The performance portable structured grid adaptive mesh refinement (AMR) framework Parthenon is now publicly available on GitHub. It extends the idea behind K-Athena to the entire AMR framework of Athena++. The core team currently consists of two teams at Los Alamos National Lab, Jim Stone at IAS, and MSU. All code development is open and coordinated via the GitHub repository. Do not hesitate to get in contact/involved.
Feb 2020 Research update
  • The Kokkos Bootcamp in January was quite a success. We used the meeting to establish a larger collaboration with the long term goal to refactor the entire Athena++ codebase to make use of Kokkos. Part of the effort is to separate the mesh framework and generic infrastructure code from the physics modules. This will allow for a straightforward adoption of an extensible, performance portable (e.g., with GPU support) framework in other projects and by other groups. Stay tuned for more information on project Parthenon.
  • Our latest paper on the role of thermodynamics in magnetohydrodynamic turbulence is now published in The Astrophysical Journal: Grete, O'Shea, and Beckwith (2020) ApJ 889 19.
  • Last year's review paper on "Correlations and Cascades in Magnetized Turbulence" is now publicly available for free through OSTI.
Nov 2019 Research update

Our recent work on non-isothermal MHD turbulence that I presented both in Los Alamos in October and at the Cosmic turbulence and magnetic fields: physics of baryonic matter across time and scales meeting in Cargèse in November has been accepted for publication in The Astrophysical Journal (official link will follow once available).

Moreover, the paper on internal energy dynamics in compressible hydrodynamic turbulence has been accepted for publication in Physical Review E.
Official reference: W. Schmidt and P. Grete, Phys. Rev. E 100, 043116; preprint at arXiv:1906.12228

Finally, the K-Athena tutorial sessions I gave in Los Alamos were well received.
Stay tuned for updates on the code developments.
Oct 2019 The Enzo code paper has been published in The Journal of Open Source Software (JOSS).
This version includes the subgrid-scale model for compressible MHD turbulence I developed.

JOSS is a developer friendly, open access journal for research software packages. (about JOSS)
In addition to a code peer review (mostly focused on improving the quality of the submitted software, e.g., with respect to documentation, functionality, or testing), each accepted submission is assigned a Crossref DOI that can be referenced.
This supports the recognition of the software developers and the overall significant (but still all too often underappreciated) contribution of open source software to science.
Sep 2019 During the German Astronomical Society Meeting in Stuttgart I had the chance to tour SOFIA and see this impressive technical achievement in person.
SOFIA (Stratospheric Observatory for Infrared Astronomy) is a 2.7m telescope mounted on a plane so that observations can be made from high altitude (above most of the water vapor in the atmosphere, which absorbs infrared light).



In addition, I organized a Splinter Meeting on Computational Astrophysics during the conference.
Many different topics were presented, ranging from galaxy mergers, to star formation, to radiative transport methods.
On top of the science presentations we also used the meeting to discuss current trends and developments in computational (astro)physics including GPU computing and FPGAs.
Aug 2019 Research update
In addition to the work on K-Athena/performance portability, several more physics-focused projects have reached maturity over the past months.
  • We (W. Schmidt and P. Grete) extended the energy transfer framework previously applied to compressible MHD to also include internal energy dynamics. This extension allows the detailed study of energy dynamics within and between the kinetic and thermal energy budgets, and we applied it to study compressible hydrodynamic turbulence at various Mach numbers. One main result is an indication of a Mach number dependent natural forcing composition. Preprint available at arXiv:1906.12228.
  • The article on Intermittent fragmentation and statistical variations during gas collapse in magnetized atomic cooling haloes by P. Grete, M. A. Latif, D. R. G. Schleicher, and W. Schmidt has been published in MNRAS. We used a subgrid-scale model for unresolved MHD turbulence in cosmological simulations targeting supermassive black hole seed formation within the direct collapse scenario. We find a turbulent supersonic and super-Alfvénic collapse with an active small-scale dynamo amplifying magnetic fields to dynamically relevant strengths and only intermittent fragmentation, making the direct collapse a viable scenario for the formation of massive objects.
  • Finally, we (P. Grete, B. W. O'Shea, and K. Beckwith) studied the effects of departing from an isothermal equation of state in compressible MHD turbulence. We find that several statistics (e.g., the strong anticorrelation between density and magnetic field strength or the scaling of density variations with Mach number) depend on the adiabatic index, while other statistics exhibit no systematic dependence; this may help to break degeneracies in different observables. Preprint available at arXiv:1908.03989.
May 2019 K-Athena is public!
Last year at the Performance Portability with Kokkos training we started to experiment with combining Kokkos with Athena++ in order to allow for GPU-accelerated simulations. The experiment is a complete success and we are happy to make the resulting K-Athena (= Kokkos + Athena++) code publicly available on GitLab.
Details on the porting experience, roofline models on CPUs and GPUs, and scaling on many different architectures are presented in more detail in the accompanying code paper (arXiv preprint), which is currently under review.
In the paper, we demonstrate that K-Athena achieves about 2 trillion (10^12) total cell-updates per second for double precision MHD on Summit, currently the fastest supercomputer in the world. This translates to updating an MHD simulation with a resolution of 10,000^3 twice per second! At about 80% parallel efficiency using 24,576 Nvidia V100 Volta GPUs on 4,096 nodes, K-Athena achieves a speedup of more than 30 compared to using all available 172,032 CPU cores.
The code release has also been featured on Tech Xplore, and we encourage feedback and/or contribution to the code via the GitLab repository.

Image credit: Carlos Jones/ORNL CC-BY-2.0
Apr 2019 After I took part in the Professional Development Program of the Institute for Scientist & Engineer Educators last year for the first time, I returned this year as a design team leader.
Our design team (consisting of Rachel Frisbie and Forrest Glines — two MSU graduate students) will develop an inquiry activity on Sustainable Software Practices with Collaborative Version Control. We will teach the activity for two REU (Research Experiences for Undergraduates) programs at MSU in the summer.
If you are interested in the teaching material, please contact me. We are happy to share the material and experience.
Mar 2019 Our review paper covering Kris Beckwith's invited talk at ICOPS 2018 on Correlations and Cascades in Magnetized Turbulence (Beckwith, Grete, and O'Shea 2019) has been published in IEEE Transactions on Plasma Science last month.
Similarly, I also gave several talks, e.g., in Budapest, Las Vegas, Berkeley, and UC Santa Cruz, covering our recent results on correlations, energy transfer, and statistics in adiabatic, compressible MHD turbulence.
In addition, I presented first results on our GPU-enabled version of Athena++ using Kokkos. Stay tuned — more information will follow shortly.
Dec 2018 Last year when I gave a talk at the MIPSE (Michigan Institute for Plasma Science and Engineering) seminar in Ann Arbor I was also interviewed. The interview is part of a MIPSE sponsored outreach program to capture some of the importance (and excitement) of plasmas for students and the general public.
The short 3 minute clip is now online on YouTube.

All interviews of the series can be found on the MIPSE YouTube channel.
Aug 2018 Last month I took part in the Performance Portability with Kokkos training at Oak Ridge National Laboratory.
The four-day training was organized by the Exascale Computing Project and covered the Kokkos programming model and library for writing performance portable code. In other words, code is written only once and can afterwards be compiled for different target architectures, e.g., CPUs or GPUs, automatically taking into account architecture-specific properties such as the memory layout and hierarchy. We are currently implementing this approach and will use it for the next project.
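To give an idea of what "written once" looks like in practice, here is a minimal, self-contained Kokkos example (a generic sketch, not code from any of the projects mentioned here): the same loop bodies compile for, e.g., the OpenMP backend on CPUs or the CUDA backend on GPUs, selected at build time.

```cpp
// Minimal Kokkos example: the same code compiles for CPUs or GPUs depending
// on the build configuration (chosen backend).
#include <Kokkos_Core.hpp>
#include <cstdio>

int main(int argc, char *argv[]) {
  Kokkos::initialize(argc, argv);
  {
    const int n = 1 << 20;
    // Allocated in the default memory space of the chosen backend
    // (host memory for CPUs, device memory for GPUs).
    Kokkos::View<double *> x("x", n);

    // Initialize in parallel on the default execution space.
    Kokkos::parallel_for(
        "init", n, KOKKOS_LAMBDA(const int i) { x(i) = 2.0 * i; });

    // Reduce in parallel on the same execution space.
    double sum = 0.0;
    Kokkos::parallel_reduce(
        "sum", n,
        KOKKOS_LAMBDA(const int i, double &partial) { partial += x(i); }, sum);
    std::printf("sum = %g\n", sum);
  }
  Kokkos::finalize();
  return 0;
}
```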

In addition, SparPi, the Raspberry Pi supercomputer model, had another appearance this month at Astronomy on Tap.
I presented a talk on What’s so super in supercomputer? Theory & Practice including a live demonstration. After the presentation, people made ample use of the opportunity to get their hands on the model themselves and directly interact with the live simulation.
May 2018 Our latest paper on Systematic Biases in Idealized Turbulence Simulations has been accepted for publication in The Astrophysical Journal Letters, see Grete et al. 2018 ApJL 858 L19.
We show how the autocorrelation time of the forcing is intrinsically linked to the amplitude of the driving field, and how this affects the presence of (unphysical) compressive modes even in the subsonic regime.
I also presented the results at
Apr 2018 MSU's science festival was a great success. We had a lot of interested people of all ages at our booth.
The assembled supercomputer model featured all components of an actual supercomputer, e.g., power supply units, networking/interconnects, compute nodes, and a frontend. On the frontend we ran TinySPH, an interactive hydrodynamics code that allows for illustrating both high performance computing techniques, such as dynamic load balancing, and physics, by changing fluid parameters such as viscosity or density.
In addition, we offered multiple virtual reality experiences including one that featured a magnetohydrodynamic turbulence simulation. Visitors were able to dive into a turbulent box and experience the rich and complex fluid structure interacting with highly tangled magnetic fields from within.
Feb 2018 I am very happy that I was accepted to participate in the 2018 Professional Development Program of the Institute for Scientist & Engineer Educators. This inquiry based teaching training covers multiple workshops where the participants collaboratively design an inquiry activity that will eventually be implemented in class later this year. The first workshop is the "Inquiry Institute" from March 25-28, 2018 in Monterey, CA.

In addition, I will attend SnowCluster 2018 - The Physics of Galaxy Clusters from March 18-23, 2018 in Salt Lake City, where I will present a poster on our latest results on "Systematic biases in idealized turbulence simulations".
Feb 2018 Parts for a Raspberry Pi based "supercomputer" arrived.
We will use the system for outreach, e.g., to demonstrate high performance computing, and in class for hands-on tutorials. I am currently setting the system up, which consists of 8x Raspberry Pi 3 for a total of 32 cores with 8 GB main memory.
The system will premiere at MSU's science festival.
Visit our booth at the Expo on April 7 for Hands-on Supercomputing and multiple Virtual Reality experiences.
Nov 2017 I presented the latest results on energy transfer in compressible MHD turbulence
  • on Nov 14 in the Astrophysics Seminar at Notre Dame
  • on Nov 15 in the MIPSE seminar (Michigan Institute for Plasma Science and Engineering) at the University of Michigan
A recording of the MIPSE seminar is available here.
Sep 2017 My PhD thesis on Large eddy simulations of compressible magnetohydrodynamic turbulence was awarded this year's Doctoral Thesis Award by the German Astronomical Society.
I presented the work at the Annual Meeting of the German Astronomical Society. See the official press release [pdf] (in German) and the MSU press releases (in English) from the CMSE department and the College of Natural Science.
Sep 2017 Our article on Energy transfer in compressible magnetohydrodynamic turbulence has been accepted for publication. In the paper, we introduce a scale-by-scale energy transfer analysis framework that specifically takes into account energy transfer within and between kinetic and magnetic reservoirs by compressible effects. The paper appears as a featured article in the journal Physics of Plasmas. Moreover, the article is covered by an AIP Scilight.
Sep 2017 We (PI B. W. O'Shea, Co-PIs B. Côté, P. Grete, and D. Silvia) successfully obtained computing time through an XSEDE allocation. I will use the resources to study driving mechanisms in astrophysical systems.
Aug 2017 I took part in the Argonne Training Program on Extreme-Scale Computing. This intensive two-week training is funded by the DOE's Exascale Computing Project and allowed me to gain knowledge and hands-on experience on next-generation hardware, programming models, and algorithms. I can highly recommend this training to everyone involved in high-performance computing.
Jun 2017 I presented first results of our energy transfer study for compressible MHD turbulence and the method itself