Avijit Ghosh


Dr. Avijit Ghosh is an Applied Policy Researcher on the Machine Learning and Society Team at Hugging Face 🤗 and an Associate Researcher in the RIET Lab at the University of Connecticut. Previously, he was a Lecturer in the Khoury College of Computer Sciences at Northeastern University. He works at the intersection of machine learning, ethics, and policy, aiming to bring fair ML algorithms into real-world practice. He has published and peer-reviewed several research papers in top ML and AI Ethics venues, and has organized academic workshops as a member of QueerInAI. His work has been covered in the press, including articles in The New York Times, Forbes, The Guardian, ProPublica, Wired, and the MIT Tech Review. Dr. Ghosh has been an invited speaker as a Responsible AI expert at events held by organizations such as SXSW, the Trustworthy ML Initiative, the AI Risk and Vulnerability Alliance, and AI Village. He has also engaged with policymakers, having spoken to US Congressional staffers and to the UK Government Centre for Data Ethics and Innovation. His research and outreach have led to real-world impact, such as helping shape regulation in New York City and prompting Facebook to remove its biased ad targeting algorithm.

View my CV here.

News

Nov 27, 2024 The accepted tiny papers for our NeurIPS 2024 Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI (EvalEval) Workshop are out! Read them here. Congrats to the authors, and looking forward to the workshop!
Nov 14, 2024 “To Err is AI: A Case Study Informing LLM Flaw Reporting Practices,” with Sean McGregor, Allyson Ettinger, Nick Judd, Paul Albee, Liwei Jiang, Kavel Rao, Will Smith, Shayne Longpre, Christopher Fiorelli, Michelle Hoang, Sven Cattell, and Nouha Dziri, has been accepted at the Thirty-Seventh Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-25)!
Oct 15, 2024 I was quoted in another WIRED article, on emotion-detecting glasses, where I point out that mainstream emotion detection tech could be a path to doom if implemented without considering its pitfalls and its impact on human agency.
Sep 13, 2024 “Reducing Population-level Inequality Can Improve Demographic Group Fairness: a Twitter Case Study,” with Tomo Lazovich, Kristian Lum, and Christo Wilson, has been accepted at the FAccTRec Workshop at ACM RecSys 2024!
Sep 6, 2024 I led Hugging Face’s response to the RFC on AI 800-1: Managing Misuse Risk for Dual-Use Foundation Models, highlighting ways to protect the open source AI ecosystem while creating effective safety practices to manage the risks of dual-use foundation models.
Aug 28, 2024 I was quoted in a new WIRED article on wearable AI devices. I talk about hallucinations, overreliance, and the additional hurdles people with non-Western accents often face while interacting with such devices.
Aug 22, 2024 Truly open-source AI should include not just model weights but also training data, code, and thorough documentation. The Open Source Initiative has a new definition of Open-Source AI, and I got to talk to MIT Tech Review about it!
Aug 21, 2024 I commented on a Les Echos article about SB 1047, a new California AI regulation currently up for vote. I raised concerns about potential chilling effects on the open source AI ecosystem due to the bill.
Aug 12, 2024 I was quoted in a new Fortune Magazine article on DEF CON 2024, talking about our push toward structured harm reporting in AI. DEF CON and AI Village this year were a resounding success, and I am looking forward to having more of these conversations with practitioners, civil society, and policymakers toward greater transparency!
Aug 1, 2024 Excited to announce our NeurIPS 2024 Workshop, “Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI,” aka EvalEval 2024! Evaluation is an important governance tool when it is sufficiently grounded, well defined, and motivated by the needs of affected parties. See you at NeurIPS!
Jul 18, 2024 My paper, “Coordinated Flaw Disclosure for AI: Beyond Security Vulnerabilities,” with Sven Cattell and Lucie-Aimée Kaffee, was accepted at AIES 2024! We describe the current landscape of disclosure, the need to extend model cards toward intent and scope, and propose a framework to disclose AI flaws that includes independent adjudication.
Jul 17, 2024 Very happy to see my op-ed about AI Ethics in India in today’s Dainik Bhaskar. It is the largest Hindi daily newspaper in India, and being able to talk about the unique challenges of AI in the Indian context to such a wide audience means a lot to me! 🙂
Jun 2, 2024 I am excited to share my first-ever policy response: Hugging Face’s response to the NIST RFC on AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile! Thanks a ton to my manager/mentor duo, Irene Solaiman and Yacine Jernite, for walking me through this massive effort!
May 28, 2024 I was quoted in a new article at Fast Company about our work in finding demographic biases in Google and Bing image search with real user queries!
Apr 25, 2024 I had a lot of fun speaking at the AI Bias and Ethics panel organized by Out in Tech and MassChallenge. It was great to interact with the thriving queer tech community in the Boston area and highlight important issues of discrimination and social justice in AI and tech in general.
Apr 22, 2024 I participated in a closed-door AI and copyright law workshop at Harvard Law School called Transform Copyright, where I represented Hugging Face and advanced the cause of open source, democratic AI.
Apr 8, 2024 My collaborative art piece, “Because of You” with Eryk Salvaggio, just got accepted to the CVPR 2024 AI Art Gallery!
Mar 16, 2024 My comments on covert racial bias in LLMs appeared in articles at MIT Technology Review and The Guardian.
Jan 23, 2024 “Perceptions in pixels: analyzing perceived gender and skin tone in real-world image search results” has been accepted as a full paper at WWW 2024! Congratulations to my co-authors Jeffrey Gleason, Dr. Ronald E. Robertson and Dr. Christo Wilson.
Jan 12, 2024 I had the distinct honor of moderating and participating in a panel of experts, talking to Congressional staffers and cybersecurity professionals, at the AI Vulnerability Reporting event organized by Hackers on the Hill (HotH), with support from AI Village and AI Vulnerability Database/ARVA!
Oct 12, 2023 I was featured on IndiaTimes, where I talked about India-specific issues that Queer people might face in the current landscape of generative AI.
Aug 10, 2023 I was excited to be involved with AI Village DC and partners at DEF CON 2023 to help organize the largest-ever public Generative AI Red Teaming challenge! I, along with several other attendees and organizers, was mentioned in the New York Times and Forbes!
Jun 15, 2023 “Queer In AI: A Case Study in Community-Led Participatory AI” received BEST PAPER award at ACM FAccT 2023! Congratulations QueerInAI!
May 5, 2023 Paper: “When Fair Classification Meets Noisy Protected Attributes” accepted at AIES 2023!
Apr 10, 2023 Two CRAFT proposals: “Towards an India-first Responsible AI research agenda”, and “Humanitarian AI for the Global South” accepted at ACM FAccT 2023!
Apr 7, 2023 Paper: “Queer In AI: A Case Study in Community-Led Participatory AI” accepted at ACM FAccT 2023!
Mar 12, 2023 I’m speaking at SXSW 2023 in Austin! Talk title: Can There Be AI Art Without An Artist?

The best way to reach me is via email. I am also frequently active on X/Twitter.