
PhD Position F/M Mechanistic Interpretability and Problem-Space Adversarial Attacks for LLM-based Software Vulnerability Detection

INRIA

  • Rennes - 35
  • CDD (fixed-term contract)
  • 36 months
  • Master's level (Bac +5)
  • Public service

Job details

PhD Position F/M Mechanistic Interpretability and Problem-Space Adversarial Attacks for LLM-based Software Vulnerability Detection
The job description below is in English.
Contract type: fixed-term (CDD)

Required degree level: Master's (Bac +5) or equivalent

Position: PhD student

About the centre or functional department

The Inria Centre at Rennes University is one of Inria's eight centres and has more than thirty research teams. The Inria Centre is a major and recognized player in the field of digital sciences. It is at the heart of a rich R&D and innovation ecosystem: highly innovative SMEs, large industrial groups, competitiveness clusters, research and higher education players, laboratories of excellence, a technology research institute, etc.

Context and advantages of the position

This position is part of the ANR PRCI project SecLLM4SVD (Secured Large Language Models in Reliable Software Vulnerability Detection), whose Principal Investigator is Dr. Yufei Han.

Assigned mission

Context and Motivation:

Large Language Models (LLMs) have demonstrated remarkable capabilities in automating software vulnerability detection (SVD) thanks to their ability to process both natural and programming languages. However, a critical reliability concern with state-of-the-art LLMs is their susceptibility to adversarial attacks. Subtle, problem-space modifications to source code, such as variable renaming or dead-code insertion, can mislead the model without changing the code's functionality or underlying vulnerabilities. Furthermore, the opaque, "black-box" nature of LLMs makes it difficult to determine whether they truly grasp code semantics or merely recognize superficial statistical artifacts.
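As a concrete illustration, the problem-space transformations mentioned above (variable renaming and dead-code insertion) can be applied programmatically while provably preserving behaviour. The sketch below uses only Python's standard `ast` module; the sample function, the renaming map, and the inserted dead statement are illustrative, not part of the project.

```python
import ast

SOURCE = """
def copy_buf(src, n):
    dst = [0] * n
    for i in range(n):      # simple copy loop
        dst[i] = src[i]
    return dst
"""

class RenameVars(ast.NodeTransformer):
    """Rename local variables; the code's semantics are unchanged."""
    RENAMES = {"dst": "tmp_a", "i": "idx_0"}

    def visit_Name(self, node):
        node.id = self.RENAMES.get(node.id, node.id)
        return node

def transform(source: str) -> str:
    tree = ast.parse(source)
    tree = RenameVars().visit(tree)
    # Dead-code insertion: prepend a statement with no effect on behaviour.
    dead = ast.parse("_unused_flag = 0").body[0]
    tree.body[0].body.insert(0, dead)
    return ast.unparse(ast.fix_missing_locations(tree))

adversarial = transform(SOURCE)

# Both variants compute the same result on the same input.
env1, env2 = {}, {}
exec(SOURCE, env1)
exec(adversarial, env2)
print(env1["copy_buf"]([1, 2, 3], 3) == env2["copy_buf"]([1, 2, 3], 3))  # True
```

The point of working in the problem space (source text) rather than the feature space is exactly what the check at the end demonstrates: the transformed program remains a valid, functionally identical program, yet its surface form, which an LLM may rely on, has changed.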

Collaboration:

The recruited PhD student will collaborate with Dr. Yuejun Guo at the Luxembourg Institute of Science and Technology.

Responsibilities:
The recruited PhD student will conduct full-time research activities centred on the topic of the thesis.

Supervision:
The recruited PhD student will be supervised by Dr. Yufei Han.

Main activities

Thesis Objectives
This 36-month PhD position aims to bridge the gap between LLM transparency and adversarial robustness. The PhD candidate will spearhead research in two dedicated work packages: WP2 (Mechanistic Interpretability of LLM-based SVD) and WP3 (Problem-space Adversarial Attacks against LLM-based SVD).
Goal 1: Unveiling LLM Decision-Making
The first phase of the thesis will focus on a systematic analysis of how LLMs detect software vulnerabilities. The candidate will:

- Investigate the causal relationships encoded in LLMs' vulnerability detection mechanisms.
- Analyse how specific code properties (e.g., syntactic patterns, data flow structures) trigger vulnerability flags.
- Explore how the attention mechanisms in LLMs encode correlations between code properties and detection outputs, providing human-understandable insights into the LLM logic.
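To make the attention-based analysis above concrete, the following toy sketch computes scaled dot-product attention weights in pure Python and reads off which "code token" a probe query attends to most. The token names and hand-picked two-dimensional embeddings are purely illustrative; the actual work would inspect attention tensors inside a real LLM.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_weights(query, keys):
    """Scaled dot-product attention weights of one query over a set of keys."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    return softmax(scores)

# Toy setup: keys stand for code-token embeddings, the query for a
# "vulnerability?" probe direction (all values chosen by hand).
tokens = ["strcpy", "buf", "len_check", "return"]
keys = [[0.9, 0.1], [0.6, 0.2], [0.1, 0.8], [0.0, 0.1]]
query = [1.0, 0.0]   # aligned with the "dangerous call" direction

weights = attention_weights(query, keys)
top = max(zip(tokens, weights), key=lambda tw: tw[1])[0]
print(top)  # strcpy: the probe attends most to the dangerous API call
```

Reading which tokens receive the largest weights is one simple way to turn raw attention values into the kind of human-understandable correlation between code properties and detection outputs that this work package targets.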

Goal 2: Assessing and Exploiting Vulnerabilities via Adversarial Attacks
Building upon the mechanistic understanding from WP2, the candidate will generate adversarially manipulated source code to systematically mislead LLM-based SVD systems. The candidate will:

- Design and propose advanced problem-space adversarial attacks that preserve code functionality and mimic real-world developer practices.
- Leverage heuristic optimization methods, such as multi-armed bandit algorithms and reinforcement learning, to craft these attacks.
- Develop innovative in-context learning techniques to overcome the limited input windows of LLMs, ensuring efficient and comprehensive evaluations of model robustness.
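As an illustration of the multi-armed-bandit angle, the sketch below runs epsilon-greedy arm selection over a set of candidate code transformations against a mock detector. The arm names and evasion probabilities are invented for the example; in the project, the reward would come from querying the actual LLM-based SVD system.

```python
import random

random.seed(0)

# Arms: candidate semantics-preserving transformations (names illustrative).
ARMS = ["rename_vars", "insert_dead_code", "reorder_decls", "inline_const"]

def mock_detector_evades(arm: str) -> bool:
    """Stand-in for querying the LLM detector: returns True when the
    transformed sample evades detection. Probabilities are made up."""
    evasion_rate = {"rename_vars": 0.10, "insert_dead_code": 0.55,
                    "reorder_decls": 0.20, "inline_const": 0.05}
    return random.random() < evasion_rate[arm]

def epsilon_greedy(rounds=2000, eps=0.1):
    counts = {a: 0 for a in ARMS}
    value = {a: 0.0 for a in ARMS}   # running mean evasion reward per arm
    for _ in range(rounds):
        if random.random() < eps:
            arm = random.choice(ARMS)       # explore a random transformation
        else:
            arm = max(ARMS, key=value.get)  # exploit the current best arm
        reward = 1.0 if mock_detector_evades(arm) else 0.0
        counts[arm] += 1
        value[arm] += (reward - value[arm]) / counts[arm]
    return max(ARMS, key=value.get), value

best, value = epsilon_greedy()
print(best)
```

The bandit framing fits the attack setting because each query to the detector is costly, so the attacker must trade off exploring transformations of unknown effectiveness against exploiting the ones that already evade detection.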

Skills

Candidate Profile and Requirements

To successfully carry out the research objectives of WP2 and WP3, the ideal candidate should possess a strong foundational background in both artificial intelligence and software security. We are looking for candidates who meet the following requirements:

- Educational Background: A Master's degree or equivalent engineering degree in Computer Science, Artificial Intelligence, Cybersecurity, or a closely related discipline.
- Deep Learning Expertise: Solid knowledge and proven project experience in designing, training, and evaluating Deep Neural Network (DNN)-based classification models.
- Program Analysis Proficiency: Demonstrated understanding and practical experience in program analysis. Specifically, the candidate must be familiar with the static analysis of source code using semantic representations, such as Control Flow Graphs (CFG) and Data Flow Graphs (DFG).
- Programming Skills: Strong programming skills in Python and proficiency with standard deep learning frameworks (e.g., PyTorch, TensorFlow). Experience with code parsing and analysis tools (e.g., Tree-sitter, Joern) is highly desirable.
- Additional Assets: Prior exposure to Large Language Models (LLMs), Natural Language Processing (NLP), or Adversarial Machine Learning will be considered a significant plus.
- Soft Skills: Excellent analytical and problem-solving skills, an autonomous and rigorous work ethic, and good communication skills in English for scientific writing and presentation within an international consortium.

Benefits

- Subsidized meals
- Partial reimbursement of public transport costs
- Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
- Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
- Professional equipment available (videoconferencing, loan of computer equipment, etc.)
- Social, cultural and sports events and activities
- Access to vocational training
- Social security coverage

Remuneration

Monthly gross salary: €2,300

Welcome to Inria

About Inria

Inria is the French national research institute for digital science and technology. It employs 2,600 people. Its 215 agile project teams, generally joint with academic partners, involve more than 3,900 scientists in meeting the challenges of digital technology, often at the interface with other disciplines. The institute draws on a wide range of talent across more than forty professions. 900 research and innovation support staff help scientific and entrepreneurial projects with worldwide impact emerge and grow. Inria works with many companies and has supported the creation of more than 200 start-ups. The institute thus strives to meet the challenges of the digital transformation of science, society, and the economy.

Published on 13/03/2026 - Ref: f8d4f69c354e6ae625619cef24d8fcb6
