Philipp Grete

I am a postdoctoral research associate in the Department of Physics & Astronomy at Michigan State University.
In general, my interdisciplinary research covers both physics, e.g., (astrophysical) plasma modeling, including magnetohydrodynamic processes and their role in (astro)physical systems, and computer science, e.g., parallelization and high performance computing, as well as topics in between, such as computational fluid dynamics. Currently, I am working on analyzing and characterizing energy transfer in compressible MHD turbulence, understanding driving mechanisms in astrophysical systems, and performance-portable programming models for exascale computing.
Contact information
Office BPS 3248
Email grete [at]
Phone +1 (517) 884-5622
Address Department of Physics & Astronomy
Michigan State University
Biomedical Physical Sciences
567 Wilson Road, Room 3248
East Lansing, MI 48824

News and upcoming events

Apr 2021 Latest papers published
The latest two papers are now available on the websites of the respective publishers:
  • P. Grete, B. W. O'Shea, and K. Beckwith (2021), "As a Matter of Tension: Kinetic Energy Spectra in MHD Turbulence", ApJ 909, 148
  • F. W. Glines, P. Grete, and B. W. O'Shea (2021), "Magnetized decaying turbulence in the weakly compressible Taylor-Green vortex", Phys. Rev. E 103, 043203
LRAC proposal renewed
Despite hitting some roadblocks in the process of adapting to a new supercomputer in the first year, our Leadership Resource Allocation (LRAC) on Frontera has been renewed for a second year. I will continue to run simulations to study the role of low collisionality in compressible, magnetized turbulence with an emphasis on the hot, dilute plasma found in the intracluster medium.

MHD support in AthenaPK
Over the past two months many new features have been added to AthenaPK. These include first-, second-, and third-order hydrodynamics as well as support for magnetohydrodynamic simulations using the generalized Lagrange multiplier (GLM) divergence cleaning approach of Dedner et al. 2002; see the README in the repository for a complete list of supported features.
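For context, the GLM scheme of Dedner et al. 2002 evolves an additional scalar field ψ that propagates and damps divergence errors. In schematic form (the exact formulation in AthenaPK may differ in details):

```latex
\frac{\partial \mathbf{B}}{\partial t}
  + \nabla \cdot \left( \mathbf{u}\mathbf{B} - \mathbf{B}\mathbf{u} \right)
  + \nabla \psi = 0 ,
\qquad
\frac{\partial \psi}{\partial t}
  + c_h^2 \, \nabla \cdot \mathbf{B}
  = -\frac{c_h^2}{c_p^2} \, \psi ,
```

where c_h sets the speed at which divergence errors are advected away and c_p controls how quickly they are damped.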

As before, AthenaPK follows a "device first" design in order to achieve high performance. In other words, there are no explicit data transfers between the host and devices (such as GPUs), because such transfers are comparatively slow.

Finally, the Parthenon team applied to and was accepted at the Argonne GPU Hackathon 2021. Over the course of several days we will work with mentors on improving the overall performance, with an emphasis on load balancing and mesh refinement on GPUs.
Feb 2021 Marie Skłodowska-Curie Actions
I am happy to share that my proposal on Unraveling effects of anisotropy from low collisionality in the intracluster medium will receive funding under the Marie Skłodowska-Curie Actions. Later this year I will move to Hamburg and work with Marcus Brüggen at the Hamburg Observatory.

AthenaPK is public
I am also happy to share that the (magneto)hydrodynamics code built on top of Parthenon is now publicly available. While it is still under development, second-order hydrodynamics with mesh refinement on devices (such as GPUs) is working.
More details can be found in the repository. As with Parthenon, we welcome any questions and feedback, and would be happy to hear from you if you want to get directly involved.
Jan 2021 Happy New Year!

I am happy to report that our development code AthenaPK is now capable of running second-order, compressible hydrodynamics with adaptive mesh refinement (AMR) fully on GPUs (see the movie).
AthenaPK will implement the (magneto)hydrodynamic methods of Athena++ on top of the performance-portable AMR framework Parthenon and the Kokkos library.
We are currently working on further improving AMR performance for small block sizes. Stay tuned for further updates.
Sep 2020 Research update
We recently submitted two papers:
  • First, on the role of magnetic tension in MHD turbulence and how it leads to shallower kinetic energy spectra by suppressing the kinetic energy cascade; see Grete, O'Shea, and Beckwith arXiv:2009.03342.
  • Second, on the statistics and energy dynamics in the compressible, magnetized Taylor-Green vortex (a canonical transition to turbulence and decay problem) where we also demonstrate the importance of magnetic tension even in the presence of weak magnetic fields, see Glines, Grete, and O'Shea arXiv:2009.01331.
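As background on why tension matters in both studies: in ideal MHD the Lorentz force splits into a magnetic pressure gradient and a tension term (a standard identity, written in units where factors of 4π are absorbed into B):

```latex
(\nabla \times \mathbf{B}) \times \mathbf{B}
  = \underbrace{(\mathbf{B} \cdot \nabla)\,\mathbf{B}}_{\text{tension}}
  \;-\;
  \underbrace{\nabla\!\left( \tfrac{B^2}{2} \right)}_{\text{magnetic pressure}} .
```

The tension term acts along field lines and resists their bending, which is the mechanism behind the suppressed kinetic energy cascade discussed in the first paper.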
Computational Astrophysics Splinter Meeting Program is online
  • The call for proposals was well received, and our Computational Astrophysics session received the largest number of submissions. We therefore scheduled an additional session on Thursday (24 Sep) morning in addition to the morning and afternoon sessions on Friday (25 Sep).
    The full program, which spans Galaxy Formation and Assembly, Stars and Stellar Clusters, Galaxy Evolution, Protoplanetary Discs and Exoplanets, and Computational Methods and Algorithms, is available online here.
Jul 2020 Splinter Meeting on Computational Astrophysics
I am co-organizing the splinter meeting on Computational Astrophysics as part of the virtual German Astronomical Society Meeting, Sep 21-25, 2020.
Abstract submission is open until 15 August. The general meeting registration is open until 15 September and there is no registration fee.
Splinter abstract: Numerical simulations are a key pillar of modern research. This is especially true for astrophysics, where the availability of detailed spatial and temporal data from observations is often sparse for many systems of interest. In many areas large-scale simulations are required, e.g., in support of the interpretation of observations, for theoretical modeling, or in the planning of experiments and observation campaigns. The need for and relevance of large-scale simulations in astrophysics is reflected in a significant share of 25-30% of the overall German supercomputing time. While the supercomputing landscape has been stable for a long time, it started to change in recent years on the path towards the first exascale supercomputer. New technologies such as GPUs for general purpose computing, ARM-based platforms (versus x86 platforms), and manycore systems in general have been introduced and require rethinking and revisiting traditional algorithms and methods. This splinter meeting will bring together experts in computational astrophysics from all fields, covering (but not limited to) fluid-based methods (from hydrodynamics to general relativistic magnetohydrodynamics), kinetic simulations, radiation transport, chemistry, and N-body dynamics applied to astrophysical systems on all scales, e.g., supernovae, planetary and solar dynamos, accretion disks, interstellar, circumgalactic, and intracluster media, or cosmological simulations. The goal of this meeting is to present and discuss recent developments in computational astrophysics and their application to current problems. Thus, contributions involving large-scale simulations and new methods/algorithms are specifically welcome. In addition to astrophysical results obtained from simulations, speakers are also encouraged to highlight numerical challenges they encountered and how they addressed those in their codes.
These may include, but are not limited to, new algorithms (e.g., higher-order methods), changing HPC environments (e.g., manycore, GPUs, or FPGAs), or data storage (e.g., availability of space, sharing, or long term retention).

K-Athena paper accepted for publication
After a revision that primarily addressed details in the roofline analysis, the K-Athena paper has been accepted for publication in IEEE Transactions on Parallel and Distributed Systems.
Please find the final (early access) version at doi:10.1109/TPDS.2020.3010016 as well as an updated preprint at arXiv:1905.04341.
Apr 2020 Postdoctoral Excellence in Research Award
I am happy to report that I am one of the recipients of this year's MSU Postdoctoral Excellence in Research Award. This annual award for two MSU postdocs from all disciplines is provided by the Office of the Senior Vice President for Research and Innovation and the MSU Office of Postdoctoral Affairs, and recognizes exceptional contributions to MSU and the greater research community.
I will give a brief presentation of my work during the reception (virtual this year) in mid-May.
Mar 2020 Leadership Resource Allocation on Frontera
  • The proposal on The role of low collisionality in compressible, magnetized turbulence was successful and was awarded a Leadership Resource Allocation (LRAC) computing grant on NSF's latest flagship supercomputer, Frontera. The total amount of 500,000 CPU node-hours and 50,000 GPU node-hours will allow us to study turbulence in multiple regimes in unprecedented detail. This includes the intracluster medium (ICM) - the hot, diffuse, fully ionized plasma in clusters of galaxies.
Parthenon is public
  • The performance portable structured grid adaptive mesh refinement (AMR) framework Parthenon is now publicly available on GitHub. It extends the idea behind K-Athena to the entire AMR framework of Athena++. The core team currently consists of two teams at Los Alamos National Lab, Jim Stone at IAS, and MSU. All code development is open and coordinated via the GitHub repository. Do not hesitate to get in contact or involved.
Feb 2020 Research update
  • The Kokkos Bootcamp in January was quite a success. We used the meeting to establish a larger collaboration with the long term goal to refactor the entire Athena++ codebase to make use of Kokkos. Part of the effort is to separate the mesh framework and generic infrastructure code from the physics modules. This will allow for a straightforward adoption of an extensible, performance portable (e.g., with GPU support) framework in other projects and by other groups. Stay tuned for more information on project Parthenon.
  • Our latest paper on the role of thermodynamics in magnetohydrodynamic turbulence is now published in The Astrophysical Journal: Grete, O'Shea, and Beckwith (2020) ApJ 889 19.
  • Last year's review paper on "Correlations and Cascades in Magnetized Turbulence" is now publicly available for free through OSTI.
Nov 2019 Research update

Our recent work on non-isothermal MHD turbulence, which I presented both in Los Alamos in October and at the Cosmic turbulence and magnetic fields: physics of baryonic matter across time and scales meeting in Cargèse in November, has been accepted for publication in The Astrophysical Journal (the official link will follow once available).

Moreover, the paper on internal energy dynamics in compressible hydrodynamic turbulence has been accepted for publication in Physical Review E.
Official reference: W. Schmidt and P. Grete, Phys. Rev. E 100, 043116; preprint at arXiv:1906.12228

Finally, the K-Athena tutorial sessions I gave in Los Alamos were well received.
Stay tuned for updates on the code developments.
Oct 2019 The Enzo code paper has been published in The Journal of Open Source Software (JOSS).
This version includes the subgrid-scale model for compressible MHD turbulence I developed.

JOSS is a developer friendly, open access journal for research software packages. (about JOSS)
In addition to a code peer review (mostly focused on improving the quality of the submitted software, e.g., with respect to documentation, functionality, or testing), each accepted submission is assigned a Crossref DOI that can be referenced.
This supports the recognition of the software developers and the overall significant (but still all too often underappreciated) contribution of open source software to science.
Sep 2019 During the German Astronomical Society Meeting in Stuttgart I had the chance to tour SOFIA and see this impressive technical achievement in person.
SOFIA (Stratospheric Observatory for Infrared Astronomy) is a 2.7m telescope mounted on a plane so that observations can be made from high altitude (above most of the water vapor in the atmosphere, which absorbs infrared light).

In addition, I organized a Splinter Meeting on Computational Astrophysics during the conference.
Many different topics were presented, ranging from galaxy mergers to star formation to radiative transport methods.
On top of the science presentations we also used the meeting to discuss current trends and developments in computational (astro)physics including GPU computing and FPGAs.
Aug 2019 Research update
In addition to the work on K-Athena and performance portability, several more physics-focused projects have reached maturity over the past months.
  • We (W. Schmidt and P. Grete) extended the energy transfer framework previously applied to compressible MHD to also include internal energy dynamics. This extension allows the detailed study of energy dynamics within and between the kinetic and thermal energy budgets, and we applied it to study compressible hydrodynamic turbulence at various Mach numbers. One main result is an indication of a Mach number dependent natural forcing composition. Preprint available at arXiv:1906.12228.
  • The article on Intermittent fragmentation and statistical variations during gas collapse in magnetized atomic cooling haloes by P. Grete, M. A. Latif, D. R. G. Schleicher, and W. Schmidt has been published in MNRAS. We used a subgrid-scale model for unresolved MHD turbulence in cosmological simulations targeting supermassive black hole seed formation within the direct collapse scenario. We find a turbulent, supersonic, and super-Alfvénic collapse with an active small-scale dynamo amplifying magnetic fields to dynamically relevant strengths, and only intermittent fragmentation, making direct collapse a viable scenario for the formation of massive objects.
  • Finally, we (P. Grete, B. W. O'Shea, and K. Beckwith) studied the effects of departing from an isothermal equation of state in compressible MHD turbulence. We find that several statistics (e.g., the strong anticorrelation between density and magnetic field strength, or the scaling of density variations with Mach number) depend on the adiabatic index, while other statistics exhibit no systematic dependence -- this may help to break degeneracies in different observables. Preprint available at arXiv:1908.03989.
May 2019 K-Athena is public!
Last year at the Performance Portability with Kokkos training we started to experiment with combining Kokkos with Athena++ in order to allow for GPU-accelerated simulations. The experiment was a complete success, and we are happy to make the resulting K-Athena (= Kokkos + Athena++) code publicly available on GitLab.
Details on the porting experience, roofline models on CPUs and GPUs, and scaling on many different architectures are presented in the accompanying code paper (arXiv preprint), which is currently under review.
In the paper, we demonstrate that K-Athena achieves about 2 trillion (10¹²) total cell-updates per second for double precision MHD on Summit, currently the fastest supercomputer in the world. This translates to updating an MHD simulation with a resolution of 10,000³ twice per second! At about 80% parallel efficiency using 24,576 Nvidia V100 Volta GPUs on 4,096 nodes, K-Athena achieves a speedup of more than 30 compared to using all available 172,032 CPU cores.
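As a quick sanity check, the quoted throughput converts directly into full-grid updates per second (values taken from the text above; this is plain arithmetic, not a new measurement):

```python
# Values quoted above for K-Athena on Summit.
cell_updates_per_s = 2e12      # ~2 trillion cell-updates per second
cells = 10_000 ** 3            # a 10,000^3 uniform grid holds 1e12 cells

grid_updates_per_s = cell_updates_per_s / cells
print(grid_updates_per_s)      # -> 2.0 full-grid updates per second

# Average throughput per GPU across the 24,576 V100s used in the scaling run.
per_gpu = cell_updates_per_s / 24_576
```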
The code release has also been featured on Tech Xplore, and we encourage feedback and/or contribution to the code via the GitLab repository.

Image credit: Carlos Jones/ORNL CC-BY-2.0
Apr 2019 After I took part in the Professional Development Program of the Institute for Scientist & Engineer Educators last year for the first time, I returned this year as a design team leader.
Our design team (consisting of Rachel Frisbie and Forrest Glines — two MSU graduate students) will develop an inquiry activity on Sustainable Software Practices with Collaborative Version Control. We will teach the activity for two REU (Research Experiences for Undergraduates) programs at MSU in the summer.
If you are interested in the teaching material, please contact me. We are happy to share the material and experience.
Mar 2019 Our review paper covering Kris Beckwith's invited talk at ICOPS 2018 on Correlations and Cascades in Magnetized Turbulence (Beckwith, Grete, and O'Shea 2019) has been published in IEEE Transactions on Plasma Science last month.
In addition, I gave several talks, e.g., in Budapest, Las Vegas, Berkeley, and at UC Santa Cruz, covering our recent results on correlations, energy transfer, and statistics in adiabatic, compressible MHD turbulence.
In addition, I presented first results on our GPU-enabled version of Athena++ using Kokkos. Stay tuned — more information will follow shortly.
Dec 2018 Last year when I gave a talk at the MIPSE (Michigan Institute for Plasma Science and Engineering) seminar in Ann Arbor I was also interviewed. The interview is part of a MIPSE sponsored outreach program to capture some of the importance (and excitement) of plasmas for students and the general public.
The short 3 minute clip is now online on YouTube.

All interviews of the series can be found on the MIPSE YouTube channel.
Aug 2018 Last month I took part in the Performance Portability with Kokkos training at Oak Ridge National Laboratory.
The four-day training was organized by the Exascale Computing Project and covered the Kokkos programming model and library for writing performance portable code. In other words, code is written only once and can afterwards be compiled for different target architectures, e.g., CPUs or GPUs, automatically taking into account architecture-specific properties such as the memory layout and hierarchy. We are currently implementing this approach and will use it in our next project.
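The memory-layout point can be illustrated with a small sketch (plain Python, not actual Kokkos code): the same logical 2D index maps to different linear memory offsets depending on the layout, which is why Kokkos defaults to row-major ("LayoutRight") on CPUs and column-major ("LayoutLeft") on GPUs, where adjacent threads should touch adjacent memory.

```python
# Row-major ("LayoutRight"): the rightmost index j is contiguous in memory,
# which suits a single CPU core sweeping over j in a cache-friendly loop.
def offset_right(i, j, ni, nj):
    return i * nj + j

# Column-major ("LayoutLeft"): the leftmost index i is contiguous, so on a
# GPU adjacent threads (one per i) access adjacent memory (coalescing).
def offset_left(i, j, ni, nj):
    return i + j * ni

ni, nj = 4, 8
# Neighboring j are adjacent in memory for LayoutRight ...
assert offset_right(2, 3, ni, nj) + 1 == offset_right(2, 4, ni, nj)
# ... while neighboring i are adjacent for LayoutLeft.
assert offset_left(2, 3, ni, nj) + 1 == offset_left(3, 3, ni, nj)
```

Writing a kernel once against the abstract index (i, j) and letting the framework pick the layout per target architecture is what makes the single-source approach performance portable.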

In addition, SparPi, the Raspberry Pi supercomputer model, had another appearance this month at Astronomy on Tap.
I presented a talk on What's so super in supercomputer? Theory & Practice, including a live demonstration. After the presentation, people made ample use of the opportunity to get their hands on the model and directly interact with the live simulation.
May 2018 Our latest paper on Systematic Biases in Idealized Turbulence Simulations has been accepted for publication in The Astrophysical Journal Letters, see Grete et al. 2018 ApJL 858 L19.
We show how the autocorrelation time of the forcing is intrinsically linked to the amplitude of the driving field, and how this affects the presence of (unphysical) compressive modes even in the subsonic regime.
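The link can already be seen in a one-dimensional Ornstein-Uhlenbeck process, a common model for stochastic forcing (a schematic reduction; the paper treats the full driving field):

```latex
\mathrm{d}a = -\frac{a}{\tau}\,\mathrm{d}t + \sigma\,\mathrm{d}W_t ,
\qquad
\langle a^2 \rangle_{\mathrm{steady}} = \frac{\sigma^2 \tau}{2} ,
```

so for a fixed diffusion coefficient σ the steady-state amplitude grows with the autocorrelation time τ; the two cannot be varied independently without rescaling.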
I also presented the results at
Apr 2018 MSU's science festival was a great success. We had a lot of interested people of all ages at our booth.
The assembled supercomputer model featured all components of an actual supercomputer, e.g., power supply units, networking/interconnects, compute nodes, and a frontend. On the frontend we ran TinySPH, an interactive hydrodynamics code that allows for both illustrating high performance computing techniques, such as dynamic load balancing, and illustrating physics by changing fluid parameters such as viscosity or density.
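As a toy illustration of the load-balancing idea (a minimal greedy sketch, not TinySPH's actual algorithm): hand each new chunk of work to the currently least-loaded node.

```python
import heapq

def balance(chunks, n_nodes):
    """Assign work chunks (e.g., hypothetical particle counts per region)
    greedily to the least-loaded node; a min-heap tracks current loads."""
    heap = [(0, node) for node in range(n_nodes)]  # (load, node id)
    heapq.heapify(heap)
    assignment = {node: [] for node in range(n_nodes)}
    for cost in chunks:
        load, node = heapq.heappop(heap)   # node with the smallest load
        assignment[node].append(cost)
        heapq.heappush(heap, (load + cost, node))
    return assignment

work = [5, 3, 3, 2, 2, 1]      # hypothetical per-region costs
print(balance(work, 2))        # both nodes end up with a total load of 8
```

In a real SPH code the costs change as particles move, so the same assignment step is rerun periodically, which is what makes the balancing dynamic.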
In addition, we offered multiple virtual reality experiences including one that featured a magnetohydrodynamic turbulence simulation. Visitors were able to dive into a turbulent box and experience the rich and complex fluid structure interacting with highly tangled magnetic fields from within.
Feb 2018 I am very happy that I was accepted to participate in the 2018 Professional Development Program of the Institute for Scientist & Engineer Educators. This inquiry based teaching training covers multiple workshops where the participants collaboratively design an inquiry activity that will eventually be implemented in class later this year. The first workshop is the "Inquiry Institute" from March 25-28, 2018 in Monterey, CA.

In addition, I will attend SnowCluster 2018 - The Physics of Galaxy Clusters from March 18-23, 2018 in Salt Lake City, where I will present a poster on our latest results on "Systematic biases in idealized turbulence simulations".
Feb 2018 Parts for a Raspberry Pi based "supercomputer" arrived.
We will use the system for outreach, e.g., to demonstrate high performance computing, and in class for hands-on tutorials. I am currently setting the system up, which consists of 8x Raspberry Pi 3 for a total of 32 cores with 8 GB main memory.
The system will premiere at MSU's science festival.
Visit our booth at the Expo on April 7 for Hands-on Supercomputing and multiple Virtual Reality experiences.
Nov 2017 I presented the latest results on energy transfer in compressible MHD turbulence
  • on Nov 14 in the Astrophysics Seminar at Notre Dame
  • on Nov 15 in the MIPSE seminar (Michigan Institute for Plasma Science and Engineering) at the University of Michigan
A recording of the MIPSE seminar is available here.
Sep 2017 My PhD thesis on Large eddy simulations of compressible magnetohydrodynamic turbulence was awarded this year's Doctoral Thesis Award by the German Astronomical Society.
I presented the work at the Annual Meeting of the German Astronomical Society. See the official press release [pdf] (in German), and the MSU press releases (in English) from the CMSE department and the College of Natural Science.
Sep 2017 Our article on Energy transfer in compressible magnetohydrodynamic turbulence has been accepted for publication. In the paper, we introduce a scale-by-scale energy transfer analysis framework that specifically takes into account energy transfer within and between kinetic and magnetic reservoirs by compressible effects. The paper appears as a featured article in the journal Physics of Plasmas. Moreover, the article is covered by an AIP Scilight.
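Schematically, such transfer functions measure the energy moved per unit time from shell Q of one reservoir to shell K of another. For example, the kinetic-to-kinetic transfer mediated by advection takes a form like (notation simplified here; the paper gives the exact compressible definitions, which employ w = √ρ u):

```latex
\mathcal{T}_{UU}(Q, K)
  = - \int \mathbf{w}_K \cdot \left( \mathbf{u} \cdot \nabla \right) \mathbf{w}_Q \,\mathrm{d}\mathbf{x} ,
```

where w_K and w_Q denote the fields shell-filtered to wavenumber shells K and Q; analogous terms capture magnetic tension, magnetic pressure, and compressive effects.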
Sep 2017 We (PI B. W. O'Shea, Co-PIs B. Côté, P. Grete, and D. Silvia) successfully obtained computing time through an XSEDE allocation. I will use the resources to study driving mechanisms in astrophysical systems.
Aug 2017 I took part in the Argonne Training Program on Extreme-Scale Computing. This intensive two-week training is funded by the DOE's Exascale Computing Project and allowed me to gain knowledge and hands-on experience on next-generation hardware, programming models, and algorithms. I can highly recommend this training to everyone involved in high-performance computing.
Jun 2017 I presented first results of our energy transfer study for compressible MHD turbulence and the method itself