Hellowork has estimated the salary for this offer.
This salary estimate for the position of PhD Position: Bias, Fairness, and Fidelity in Image and Video Generation Methods (M/F) in Montbonnot-Saint-Martin is calculated from similar offers and INSEE data.
The range varies with experience.
Minimum gross salary: €33,100 / year, €2,758 / month, €18.19 / hour
Estimated gross salary: €43,800 / year, €3,650 / month, €24.07 / hour
Maximum gross salary: €55,000 / year, €4,583 / month, €30.22 / hour
PhD Position: Bias, Fairness, and Fidelity in Image and Video Generation Methods (M/F)
INRIA
- Montbonnot-Saint-Martin - 38
- Fixed-term contract (CDD)
- 36 months
- Master's degree (Bac +5)
- Local government public service
Job details
PhD candidate (M/F) - PhD Position: Bias, Fairness, and Fidelity in Image and Video Generation Methods
Contract type: fixed-term (CDD)
Required degree level: Bac +5 (Master's) or equivalent
Role: PhD candidate
About the centre or functional department
The Centre Inria de l'Université de Grenoble groups together almost 600 people in 22 research teams and 7 research support departments.
Staff is present on three campuses in Grenoble, in close collaboration with other research and higher education institutions (Université Grenoble Alpes, CNRS, CEA, INRAE, ...), but also with key economic players in the area.
The Centre Inria de l'Université Grenoble Alpes is active in the fields of high-performance computing, verification and embedded systems, modeling of the environment at multiple levels, and data science and artificial intelligence. The center is a top-level scientific institute with an extensive network of international collaborations in Europe and the rest of the world.
Context and advantages of the position
Title: Control, Motion Fidelity, and Computational Efficiency in Long-Form Audio-Visual Video Generation
Supervision: Dr Stéphane Lathuilière (INRIA-UGA)
Funding: BPI contract
Context: Background and Motivation
Recent progress in generative AI has revolutionized the creation and manipulation of visual media. Models such as Stable Diffusion, DALL·E, and Sora have demonstrated the ability to generate highly realistic images and videos from textual descriptions. These models are increasingly applied to editing tasks, such as virtual try-on, style transfer, and image restoration, where maintaining both semantic coherence and visual fidelity is crucial.
However, the deployment of these systems also raises serious ethical and technical concerns. Research has shown that generative models can encode and amplify societal biases present in their training data, leading to unfair performance across demographic groups (e.g., gender, race, body type, or age). In editing scenarios, this may manifest as disproportionate errors, inconsistent realism, or stereotypical representations for certain groups. Furthermore, maintaining fidelity, i.e., ensuring the edited output remains consistent with the original input outside modified regions, remains a key challenge. Diffusion models, by design, regenerate entire images from noise, often unintentionally altering unedited regions and compromising visual integrity. Balancing fairness and fidelity within a stochastic generative process is thus both a scientific and ethical frontier for AI research.
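To make the fidelity notion above concrete, here is a minimal sketch of an out-of-mask reconstruction score, i.e., how much an edit alters pixels it was not asked to touch. The function name, the PSNR choice, and the toy data are illustrative assumptions, not metrics defined by this offer.

```python
import numpy as np

def out_of_mask_psnr(original: np.ndarray, edited: np.ndarray, edit_mask: np.ndarray) -> float:
    """PSNR computed only on pixels *outside* the edit mask.

    original, edited: float arrays in [0, 1], shape (H, W, C).
    edit_mask: boolean array of shape (H, W), True where the edit was requested.
    A low value means the model altered regions it was not asked to touch.
    """
    keep = ~edit_mask                          # unedited region
    diff = (original - edited)[keep]           # only unedited pixels
    mse = float(np.mean(diff ** 2))
    if mse == 0.0:
        return float("inf")                    # context perfectly preserved
    return 10.0 * np.log10(1.0 / mse)

# Toy usage: an "edit" that also drifts the background slightly.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                      # region the user asked to edit
edited = img.copy()
edited[mask] = rng.random(edited[mask].shape)  # intended edit
edited[~mask] += 0.01                          # unintended drift outside the mask
print(f"out-of-mask PSNR: {out_of_mask_psnr(img, edited, mask):.1f} dB")
```

A perceptual score such as LPIPS restricted to the same region could replace PSNR; the masking idea is the point of the sketch.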
This PhD will systematically investigate bias, fairness, and fidelity in diffusion-based image and video generation models, particularly within editing tasks. It will develop new frameworks for evaluating, understanding, and mitigating bias while preserving high fidelity in generative outcomes.
Assigned mission
Research objectives:
The overarching aim of this research is to develop a principled framework for understanding, evaluating, and mitigating bias in diffusion-based image and video generation while maintaining high fidelity in editing outcomes. The project begins with a systematic characterization of bias in existing diffusion models. It will analyze how these models perform across different groups defined by age, gender, skin tone, and body morphology, with particular attention to editing quality and consistency. Other biases not related to humans will also be analyzed for general scenes. This involves both quantitative and qualitative analyses, comparing perceptual realism, structural accuracy, and fairness metrics across diverse datasets.
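As an illustration of what such a per-group characterization could look like in practice, the sketch below aggregates an arbitrary per-sample quality score by demographic group and reports the worst-case gap. The group labels, scores, and gap definition are assumptions made for this example, not evaluation choices stated in the offer.

```python
from collections import defaultdict
from statistics import mean

def per_group_quality_gap(records):
    """Audit editing quality across demographic groups.

    records: iterable of (group_label, quality_score) pairs, where the score
    can be any per-sample fidelity/realism measure (higher = better).
    Returns per-group means and the max-min gap as a simple disparity indicator.
    """
    by_group = defaultdict(list)
    for group, score in records:
        by_group[group].append(score)
    group_means = {g: mean(scores) for g, scores in by_group.items()}
    gap = max(group_means.values()) - min(group_means.values())
    return group_means, gap

# Toy usage with made-up scores for two groups.
records = [("group_a", 0.82), ("group_a", 0.80), ("group_b", 0.71), ("group_b", 0.69)]
means, gap = per_group_quality_gap(records)
print(means, f"disparity gap = {gap:.2f}")
```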
Main activities
Methodology
To achieve this, the research will first investigate robust evaluation metrics for fairness and fidelity in generative editing tasks. While traditional measures such as Fréchet Inception Distance (FID) and perceptual similarity scores (CLIP-based or LPIPS) are valuable, they do not capture demographic disparities or context preservation. Therefore, new composite metrics will be developed that integrate demographic parity, perceptual consistency, and semantic coherence. These metrics will form the basis for a systematic bias audit of existing diffusion models in editing tasks like virtual try-on and face retouching.

The second stage of the research will focus on fidelity analysis, emphasizing the preservation of unedited regions. This will include developing new metrics that account for context-specific deviations, measuring how global visual properties, such as lighting or color tone, shift during editing. User studies and psychophysical evaluations will complement quantitative measures, ensuring that technical fidelity aligns with human-perceived consistency.

The final and most substantial component will involve bias mitigation and fidelity enhancement. Several methodological strategies will be explored. One approach will modify conditioning mechanisms to ensure equitable generative quality across demographics by learning balanced feature representations. Another will involve re-weighting training data or applying adversarial fairness constraints that penalize demographic performance gaps. At the same time, novel diffusion control mechanisms, such as mask-preserving denoising schedules and attention modulation, will be developed to maintain high fidelity during editing. The project will explore whether fairness and fidelity objectives can be co-optimized through a unified loss function or multi-objective training regime, potentially establishing a new paradigm for fair generative editing.

The study will begin with static image models and later extend to video diffusion models, which introduce additional challenges of temporal coherence and fairness over time. Temporal fidelity (preserving motion and lighting consistency) and temporal fairness (maintaining equal generative performance across demographics over consecutive frames) will both be evaluated.

Throughout the research, standard diffusion architectures such as Stable Diffusion and ControlNet will be used as baselines. The outcomes will include a benchmark dataset and an open-source evaluation toolkit for fairness and fidelity in generative editing, enabling broader community use and transparency. In addressing the core research questions (how fairness and fidelity can be quantitatively assessed, and how both can be improved without sacrificing visual realism), this project will contribute a comprehensive understanding of ethical and technical reliability in generative AI systems.

References:
- Ho, J., Jain, A., & Abbeel, P. (2020). Denoising Diffusion Probabilistic Models. NeurIPS.
- Rombach, R., et al. (2022). High-Resolution Image Synthesis with Latent Diffusion Models. CVPR.
- Meng, C., et al. (2021). SDEdit: Image Synthesis and Editing with Stochastic Differential Equations. ICLR.
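For illustration only, the following sketch shows the kind of mask-preserving blending that the methodology above alludes to, in the spirit of inpainting-style samplers: at each reverse step, the region outside the edit mask is overwritten with a re-noised copy of the original image. The toy_denoise_step and toy_add_noise functions are hypothetical stand-ins for a real diffusion sampler and forward process; this is a conceptual sketch under those assumptions, not the method the project will develop.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_denoising_loop(x_T, original, edit_mask, denoise_step, add_noise, num_steps):
    """Conceptual mask-preserving sampling loop (inpainting-style blending).

    x_T:        initial noise, shape (H, W, C)
    original:   input image whose unedited regions must be preserved
    edit_mask:  boolean (H, W); True where the edit is allowed
    denoise_step(x, t): one reverse-diffusion step (placeholder)
    add_noise(x, t):    forward process re-noising x to level t (placeholder)
    """
    x = x_T
    for t in reversed(range(num_steps)):
        x = denoise_step(x, t)               # generate freely everywhere
        known = add_noise(original, t)       # original content, re-noised to step t
        m = edit_mask[..., None]             # broadcast mask over channels
        x = np.where(m, x, known)            # outside the mask, keep the context
    return x

# Toy stand-ins so the sketch runs end to end (NOT a real diffusion model).
NUM_STEPS = 50
target = rng.random((32, 32, 3))             # pretend this is what the model wants to draw

def toy_denoise_step(x, t):
    return x + (target - x) / (t + 1)        # drift toward the target

def toy_add_noise(x, t):
    level = t / NUM_STEPS
    return (1 - level) * x + level * rng.normal(size=x.shape)

image = rng.random((32, 32, 3))
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
out = masked_denoising_loop(rng.normal(size=image.shape), image, mask,
                            toy_denoise_step, toy_add_noise, NUM_STEPS)
print("max change outside the mask:", float(np.abs(out - image)[~mask].max()))
```

Because the last blend uses the original image at noise level zero, the context outside the mask is preserved exactly, which is the property the fidelity metrics above are meant to verify.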
Skills
Technical skills and required level: We are seeking a motivated PhD candidate with a strong background in one or more of the following areas:
- Speech processing, computer vision, machine learning
- Solid programming skills
- Interest in connecting AI with human cognition
Prior experience with LLMs, SpeechLMs, RL algorithms, or robotic platforms is a plus, but not mandatory.
Languages: English
Benefits
- Subsidized meals
- Partial reimbursement of public transport costs
- Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
- Possibility of teleworking and flexible organization of working hours
- Professional equipment available (videoconferencing, loan of computer equipment, etc.)
- Social, cultural and sports events and activities
- Access to vocational training
- Social security coverage
About Inria
Inria is the French national research institute for digital science and technology. It employs 2,600 people. Its 215 agile project teams, generally run jointly with academic partners, involve more than 3,900 scientists in meeting the challenges of digital technology, often at the interface with other disciplines. The institute draws on a wide range of talent in more than forty different professions. 900 research and innovation support staff help scientific and entrepreneurial projects with worldwide impact to emerge and grow. Inria works with many companies and has supported the creation of more than 200 start-ups. In this way, the institute strives to meet the challenges of the digital transformation of science, society and the economy.
655 Avenue de l'Europe
38330 Montbonnot-Saint-Martin
Published on 17/12/2025 - Ref: 69232cdd137cda246b3c854e10e6117f