
Internship: Python Data Processing on Supercomputers for Large Parallel Numerical Simulations (M/F) - INRIA

  • Saint-Martin-d'Hères - 38
  • Internship
  • Partial remote work
  • Local government public service

Job details

Internship: Python Data Processing on Supercomputers for Large Parallel Numerical Simulations.
The job description below is in English.
Contract type: Internship agreement

Required degree level: Bac +4 or equivalent

Role: Research intern

About the centre or functional department

The Centre Inria de l'Université de Grenoble groups together almost 600 people in 23 research teams and 9 research support departments.

Staff is present on three campuses in Grenoble, in close collaboration with other research and higher education institutions (Université Grenoble Alpes, CNRS, CEA, INRAE, ...), but also with key economic players in the area.

The Centre Inria de l'Université Grenoble Alpes is active in the fields of high-performance computing, verification and embedded systems, modeling of the environment at multiple levels, and data science and artificial intelligence. The center is a top-level scientific institute with an extensive network of international collaborations in Europe and the rest of the world.

Context and benefits of the position

The internship will take place in the DataMove team, located in the IMAG building on the Saint-Martin-d'Hères campus (Univ. Grenoble Alpes) near Grenoble, under the supervision of Bruno Raffin (****@****.**), Andres Bermeo (****@****.**) and Yushan Wang ().

The internship lasts at least 4 months and the start date is flexible, but a 2-month delay is required before the internship can start due to administrative constraints. The DataMove team is a friendly and stimulating environment that gathers professors, researchers, PhD and Master's students, all leading research on high-performance computing. Grenoble is a student-friendly city surrounded by the Alps, offering a high quality of life and all kinds of mountain-related outdoor activities.

Assignment

The field of high-performance computing has reached a new milestone, with the world's most powerful supercomputers exceeding the exaflop threshold. These machines will make it possible to process unprecedented quantities of data, which can be used to simulate complex phenomena with superior precision in a wide range of application fields: astrophysics, particle physics, healthcare, genomics, etc.

Without a significant change in practices, the increased computing capacity of the next generation of computers will lead to an explosion in the volume of data produced by numerical simulations. Managing this data, from production to analysis, is a major challenge.

The exploitation of simulation results is based on a well-established compute-store-compute protocol. The capacity gap between compute and file systems makes it inevitable that the latter become clogged. For instance, the Gysela code in production mode can produce up to 5 TB of data per iteration. Storing 5 TB of data at high frequency is clearly not feasible, and loading this quantity of data for later analysis and visualization is also a difficult task. To bypass this difficulty, we rely on an in situ data analysis approach.

We developed an in situ data processing approach, called Deisa, relying on Dask, a Python environment for distributed tasks. Dask defines tasks that are executed asynchronously on workers once their input data are available. The user defines a graph of tasks to be executed. This graph is then forwarded to the Dask scheduler, which is in charge of (1) optimizing the task graph and (2) distributing the tasks for execution on the different workers according to a scheduling algorithm that aims to minimize the graph execution time.
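To give a concrete picture of this model, here is a minimal, generic Dask sketch (not taken from Deisa): the array expressions only build a task graph, and compute() hands that graph to the scheduler, which distributes the chunk-level tasks to the workers.

import dask.array as da

# Build a chunked array: each (1000, 1000) chunk becomes a set of tasks
# in the graph rather than data held in the driver's memory.
x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))

# These expressions only extend the task graph; nothing is computed yet.
y = (x + x.T).mean(axis=0)

# compute() forwards the graph to the Dask scheduler, which optimizes it
# and distributes the tasks to the workers before gathering the result.
result = y.compute()
print(result.shape)  # (10000,)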

Deisa extends Dask so that an MPI-based parallel simulation code can be coupled with Dask. Deisa enables the simulation code to send newly produced data directly into the workers' memories, and notifies the Dask scheduler that these data are available for analysis and that the associated tasks can be scheduled for execution.
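The exact Deisa API is not reproduced here; the snippet below is only a hypothetical sketch of the in situ pattern it enables, where the analysis script expresses lazy Dask computations over a global array whose chunks, with Deisa, already sit in the workers' memories, pushed there by the MPI ranks instead of being read back from files. The analyze function and the synthetic stand-in field are illustrative assumptions, not Deisa code.

import dask.array as da

def analyze(temperature: da.Array) -> None:
    # Per-iteration analysis: builds a lazy task graph over the chunks;
    # with Deisa these chunks are the data just produced by the simulation.
    profile = temperature.mean(axis=(1, 2))
    print(profile.compute())

# Stand-in for one iteration's field (one chunk per "MPI rank"); with
# Deisa the chunks would be pushed by the simulation, not generated here.
field = da.random.random((8, 256, 256), chunks=(1, 256, 256))
analyze(field)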

Compared to previous in situ approaches, which are typically MPI-based, our approach, relying on Python tasks, strikes a good balance between programming ease and runtime performance.

But Dask has one major limitation: its scheduler is centralized, creating a performance bottleneck at large scale. To circumvent this limitation we developed a variation of Deisa (Deisa-on-Ray, or Doreisa) that relies on the Ray runtime. Ray is a framework for distributed tasks and actors that is very popular in the AI community. Ray is more flexible than Dask and supports a distributed task scheduler, making it a more suitable runtime than Dask when targeting large scale.
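As a quick reminder of the Ray model, here is a minimal, generic example (unrelated to Deisa or Doreisa) of a stateless Ray task and a stateful Ray actor; futures (ObjectRefs) can be passed between them without materializing the data at the driver.

import ray

ray.init()

@ray.remote
def square(x):
    # A stateless task, scheduled asynchronously on any worker.
    return x * x

@ray.remote
class Accumulator:
    # A stateful actor, living in its own worker process.
    def __init__(self):
        self.total = 0

    def add(self, value):
        self.total += value
        return self.total

refs = [square.remote(i) for i in range(4)]  # returns ObjectRefs immediately
acc = Accumulator.remote()
totals = [acc.add.remote(r) for r in refs]   # ObjectRefs are resolved by Ray
print(ray.get(totals[-1]))                   # 0 + 1 + 4 + 9 = 14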

What Doreisa achieves is (a toy sketch follows this list):

- The Dask task graph is split into sub-graphs that are distributed to different Ray actors.
- These Ray actors implement a local Dask scheduler. Each Dask task to be executed is turned into a Ray task and handed to the local Ray scheduler. The execution of the Dask task graph is thus distributed, showing significant performance gains.
- If a task requires data that is produced by another task handled by another remote scheduling actor, the Ray scheduler fetches it automatically by relying on the Ray reference mechanism (which can be seen as a kind of distributed smart pointer).
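To illustrate the idea (this is not the Doreisa implementation), the toy sketch below shows a Ray actor acting as a local scheduler for a raw Dask-style sub-graph: each task is submitted as a Ray task, and dependencies are wired through ObjectRefs so that Ray resolves or fetches remote results automatically. The graph keys, the inc/add functions and the sub-graph itself are made up for the example.

import ray
from operator import add

ray.init()

def inc(v):
    return v + 1

@ray.remote
def run_task(func, *args):
    # Execute one Dask task as a Ray task; ObjectRef arguments are
    # resolved to their values by Ray before func is called.
    return func(*args)

@ray.remote
class LocalScheduler:
    # Toy stand-in for a Doreisa-style scheduling actor: walks a Dask-style
    # sub-graph and submits each task to Ray, chaining ObjectRefs.
    def execute(self, subgraph, keys):
        refs = {}

        def submit(key):
            if key in refs:
                return refs[key]
            task = subgraph[key]
            if isinstance(task, tuple) and callable(task[0]):
                func, *args = task
                resolved = [submit(a) if isinstance(a, str) and a in subgraph
                            else a for a in args]
                refs[key] = run_task.remote(func, *resolved)
            else:
                refs[key] = task  # literal value
            return refs[key]

        return [submit(k) for k in keys]

# A tiny Dask-style graph computing z = (x + 1) + y.
subgraph = {"x": 1, "y": 10, "x1": (inc, "x"), "z": (add, "x1", "y")}
sched = LocalScheduler.remote()
(z_ref,) = ray.get(sched.execute.remote(subgraph, ["z"]))
print(ray.get(z_ref))  # 12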

Doreisa has demonstrated significant performance improvements at scale (tested on up to 15,000 cores) over the pure Dask-based approach.

The goal of this internship is to investigate solutions for:

- Further improving performance. In situ analytics often repeats the execution of the same task graph at different iterations. So far, the task graph is processed, split and distributed anew at each iteration, while it could be kept in place across consecutive iterations, saving all the pre-processing steps. Ray has mechanisms that could be leveraged for that purpose, namely compiled graphs and streams (see the sketch after this list).
- Extending functionality. The data the simulation pushes to the analysis is statically defined at initialization time, with no possibility for the analysis to change it during execution. Adding the capability to change the simulation behavior dynamically from the analytics would open the way to more advanced simulation/analytics patterns, such as changing the data extracted from the simulation based on analysis results, or changing some internal state of the simulation based on analytics, for instance for the assimilation of observation data.
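As a rough illustration of the first direction, the sketch below uses Ray's compiled graph API (experimental in recent Ray releases, built from ray.dag.InputNode, bind and experimental_compile; the exact API may change, and the Analyzer actor and synthetic per-iteration data are assumptions): the DAG is built and compiled once, then driven at every iteration without re-submitting or re-splitting a task graph.

import numpy as np
import ray
from ray.dag import InputNode

ray.init()

@ray.remote
class Analyzer:
    def reduce(self, chunk):
        # Stand-in analysis task applied to each iteration's data.
        return float(chunk.sum())

analyzer = Analyzer.remote()

# Build the DAG once: the input node marks the per-iteration data.
with InputNode() as field:
    dag = analyzer.reduce.bind(field)

# Compile once, then execute every iteration; the graph is kept in place
# instead of being re-processed and re-distributed at each time step.
compiled = dag.experimental_compile()
for step in range(3):
    ref = compiled.execute(np.full((4, 4), step, dtype=float))
    print(step, ray.get(ref))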

References

- Deisa and Deisa-on-Ray repositories:
- Ray -
- Dask -
- Ownership: A Distributed Futures System for Fine-Grained Tasks. Stephanie Wang et al. NSDI 2021.
- Ray: A Distributed Framework for Emerging AI Applications. Philipp Moritz et al. 2018.
- Deisa paper: Dask-Enabled In Situ Analytics. Amal Gueroudji, Julien Bigot, Bruno Raffin. HiPC 2021.
- Deisa paper: Dask-Extended External Tasks for HPC/ML In Transit Workflows. Amal Gueroudji, Julien Bigot, Bruno Raffin, Robert Ross. WORKS workshop at Supercomputing 2023.
- Damaris: How to Efficiently Leverage Multicore Parallelism to Achieve Scalable, Jitter-Free I/O. Matthieu Dorier, Gabriel Antoniu, Franck Cappello, Marc Snir, Leigh Orf. IEEE Cluster 2012.
- Integrating External Resources with a Task-Based Programming Model. Zhihao Jia, Sean Treichler, Galen Shipman, Michael Bauer, Noah Watkins, Carlos Maltzahn, Patrick McCormick, Alex Aiken. HiPC 2017.
- Visibility Algorithms for Dynamic Dependence Analysis and Distributed Coherence. Michael Bauer, Elliott Slaughter, Sean Treichler, Wonchan Lee, Michael Garland, Alex Aiken. PPoPP 2023.

Main activities

After studying related work and getting familiar with the existing code, the candidate will start elaborating new solutions. The proposed approach will be iteratively refined through cycles of implementation, experimentation, result analysis, and design improvement. The candidate will have access to supercomputers for the experiments. If the results are promising, we may consider writing and submitting a publication.

Skills

Expected skills include:

- Knowledge of distributed and parallel computing and of numerical simulations
- Python, NumPy, parallel programming (MPI)
- English (working language)

Benefits

- Subsidized meals
- Partial reimbursement of public transport costs
- Leave: for an annual work contract, 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
- Possibility of teleworking (90 days/year for an annual contract) and flexible organization of working hours, subject to team leader approval
- Social, cultural and sports events and activities

Remuneration

€4.35 per hour of actual presence, as of 1 January 2025.

About €590 gross per month (internship allowance).

About Inria

Inria is the French national research institute dedicated to digital science and technology. It employs 2,600 people. Its 215 agile project teams, generally run jointly with academic partners, involve more than 3,900 scientists in meeting the challenges of the digital world, often at the interface with other disciplines. The institute draws on many talents across more than forty different professions. 900 research and innovation support staff help scientific and entrepreneurial projects with worldwide impact to emerge and grow. Inria works with many companies and has supported the creation of more than 200 start-ups. The institute thus strives to meet the challenges of the digital transformation of science, society and the economy.

Published on 22/10/2025 - Ref: 367f6d1cb85975709f338cc6c15de0c7
