Irina Proskurina
Research Engineer @ Jean Monnet University, Hubert Curien Laboratory
I am currently a Research Engineer at the Hubert Curien Laboratory at Jean Monnet University, supervised by Prof. Antoine Gourru. Previously, I was a PhD student at the ERIC Lab (University of Lyon, Lyon 2), under the supervision of Prof. Julien Velcin and Prof. Guillaume Metzler.
My research focuses on the compression and efficiency of large language models, with particular attention to potentially unsafe generations.
I am also a member of the Cohere Community. Feel free to reach out if you are interested in discussing related topics or potential collaborations!
Before starting my PhD, I earned an MSc in Computer Science through a double-degree program between the Higher School of Economics (HSE University) and the University of Clermont-Ferrand. During this time, I interned at the Learner Corpora Lab at HSE University, where I collaborated with corpus and computational linguists on grammatical error detection in learner essays and contributed to improving the Grazie tool in partnership with JetBrains. Later, I worked at the Laboratory for Models and Methods of Computational Pragmatics, also part of HSE University, where I explored the use of topological data analysis for acceptability judgments.
News
- Pre-print: Fair-GPTQ: Bias-Aware Quantization for Large Language Models
- PhD defense: I defended my PhD thesis, “Towards an Unbiased Compression of Large Language Models”, on March 5, 2026 at the Université de Lyon, France. The manuscript will be available online soon!
- Publications: Histoires Morales: A French Dataset for Assessing Moral Alignment (NAACL 2025 & TALN 2025), HatePrototypes: Interpretable and Transferable Representations for Implicit and Explicit Hate Speech Detection (LREC 2026)
- Talks: Biased Outcomes of Quantization: Debiasing Quantization and Pre-training of LLMs (FAMOUS Workshop, Jean Monnet University), Quantization of Language Models at Scale (Seminar, Hubert Curien Laboratory, Jean Monnet University), National French Reading Group on NLP, Compression-Induced Biases in Large Language Models (EALM Workshop@TALN25), Topological Data Analysis in NLP (Data Science and Statistics Laboratory, University of Brescia)
- Workshops & Winter schools: EALM Workshop at TALN on Evaluation and Analysis of Language Models (organising committee), Tutorials@Lyon NLP Connect, HPLT NLP Winter School on Multilingual Evaluation
- I am organizing a joint event for PhD students in Lyon who are working in NLP. If you’re interested, you can sign up by completing this form. More details on the website: https://lyon-nlp-connect.github.io
- Publications: When Quantization Affects Confidence of Large Language Models? at NAACL
- Talks: Calibration of Quantized Large Language Models (Naver Labs Europe), National French Reading Group on NLP, Conférence sur l’Apprentissage automatique (CAp)
- Workshops & Winter schools: LoRAINNe workshop, Ethics and NLP Workshop, ALPS NLP School
- Publications: 1 paper at the IDA symposium, 1 at BSNLP@EACL, and 1 at BabyLM@CoNLL
- Talks: Seminar on recent advances in NLP and large language models (ERIC Lab, University of Lyon), French National Reading Group on Information Retrieval, Bias in Pruned Transformers (ERIC Lab, DMD research group, University of Lyon)
- Publications: 1 paper at EMNLP 2022 (part of master’s thesis)
- Research internship: Towards an Ethical Compression of Deep Learning Models (ERIC Lab, University of Lyon)
- Seminars: Bias in (Compressed) Language Models @ ANR Dike Day, Rule-based vs neural approaches for grammatical error correction (Learner Corpora Lab @ Higher School of Economics)
- Research collaborations: Grammatical error detection models (Learner Corpora Lab & JetBrains Grazie team), MGPR models with Prof. Hamed Rahimian (Clemson University, USA)
