Avijit Ghosh


Dr. Avijit Ghosh is an Applied Policy Researcher on the Machine Learning and Society Team at Hugging Face 🤗 and an Associate Researcher in the RIET Lab at the University of Connecticut. He works at the intersection of machine learning, ethics, and policy, aiming to bring fair ML algorithms into real-world practice. He has published and peer-reviewed several research papers in top ML and AI ethics venues, and has organized academic workshops as a member of QueerInAI. His work has been covered in the press, including articles in The New York Times, Forbes, The Guardian, ProPublica, Wired, and the MIT Technology Review. Dr. Ghosh has been an invited speaker as a Responsible AI expert at high-impact events such as SXSW, the MIT Sloan AI Conference, and the Summit on State AI Legislation. He has also engaged with policymakers at various levels in the United States, United Kingdom, and Singapore. His research and outreach have led to real-world impact, such as helping shape regulation in New York City and prompting Facebook to remove a biased ad-targeting algorithm.

View my CV here.

News

Mar 24, 2025 My first ever op-ed! Why handing over total control to AI agents would be a huge mistake, with Margaret Mitchell, Sasha Luccioni, and Giada Pistilli, is out in MIT Technology Review! In it, we examine the gradual but steady ceding of power and control to increasingly autonomous AI agents and ask: how much autonomy is too much?
Mar 19, 2025 Some more coverage of our paper on why fully automating agents is a bad idea! I was quoted in this new piece in Science News Explores on DeepSeek, agents, and open source.
Mar 14, 2025 We submitted Hugging Face’s response to the White House Office of Science and Technology Policy’s request for information on the AI Action Plan. We took this opportunity to (re-)assert the fundamental role that open AI systems and open science play in enabling the technology to be more performant and efficient, to be broadly and reliably adopted, and to meet the highest standards of security. Covered in VentureBeat.
Mar 14, 2025 I was incredibly thrilled to speak on a panel on Open Source AI Regulation at the Summit for State AI Legislation (SSAIL), hosted by the Digital Ethics Center at Yale University. We got into the nitty-gritty of emerging AI regulation, how it might affect open source in particular, and important considerations going forward.
Mar 13, 2025 New paper: In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI. 34 prominent researchers from industry, academia, civil society, and policy came together to draft a proposal for a better, self-sustaining flaw-reporting system for AI, built on coordination and transparency among all the actors involved. Covered in Wired.
Mar 8, 2025 I spoke on the NLP & Generative AI panel at the Harvard Business School Tech Conference, where I reflected on my work at the intersection of technology, ethics, and policy. We talked about policy at Hugging Face, Chinese LLMs, and more!
Feb 25, 2025 We submitted Hugging Face’s response to the United Kingdom Intellectual Property Office’s open consultation on Copyright and AI. We continue to champion data transparency mechanisms and standardized opt-outs, and we contend that text and data mining (TDM) exceptions are required for research organizations and small open source developers to do important work.
Feb 23, 2025 I was quoted in a Business Insider article on AI agents and job boards. I push back against conflating autonomy with agency, and point out that agents with human oversight will augment, not replace, human workers.
Feb 19, 2025 New article: I spoke to TechCircle India about the harms of open-washing and the need for better open standards! We also discussed Hugging Face’s open reproduction of DeepSeek R1.
Feb 17, 2025 New paper - Protecting Human Cognition in the Age of AI, with Anjali Singh, Karan Taneja, and Klara Guan - accepted at the Tools for Thought Workshop at CHI 2025! In this work, we argue that overreliance on generative AI models disrupts traditional learning pathways, and we suggest best practices for teaching, testing, and learning tools that restore these paths.
Feb 7, 2025 I spoke on a panel called “Concrete Applications of AI” at the MIT Sloan School of Management AI Conference. I discussed the broken AI harm-reporting landscape, the importance of evals, safe harbors, structured and coordinated disclosure processes, and our proposed framework (Coordinated Flaw Disclosure) as a path forward.
Feb 7, 2025 New paper! Stop treating “AGI” as the north-star goal of AI research. In this paper, co-authored with a group of incredible scholars, we posit that “AGI” is as nebulous to define as its supposed benefits, and we argue that principled scientific, engineering, and societal needs should drive AI research instead.
Feb 4, 2025 New paper with Margaret Mitchell, Sasha Luccioni, and Giada Pistilli - Fully Autonomous AI Agents Should Not be Developed. In this paper, we argue from a values-based perspective that as autonomy increases, the values that people care about start to break down. Some level of human control needs to always remain in place.
Jan 24, 2025 Another article with my comments about Stargate! In a new Business Insider piece, I talk about competitive hardware moats and reject the false premise that compute equals utility. Teams like DeepSeek have made remarkable progress creatively with fewer resources, and this kind of progress will only become more possible through openness and collaboration!
Jan 22, 2025 New article in Fortune with my comments about Stargate. I talked about the importance of public AI infrastructure, the concentration of power, and the need for openness to counter it, as well as how this mad dash for AGI can actively siphon resources away from problems that technology can already solve today.
Jan 13, 2025 The newest Ethics and Society Newsletter: AI Agents Are Here. What Now? is out! Had a lot of fun writing this with my colleagues: Margaret Mitchell, Giada Pistilli and Sasha Luccioni. Our analyses found that there’s a spectrum of “agent”-ness, and that safety is a key issue, leading to many other value-based concerns.
Jan 3, 2025 My column with predictions for AI in 2025 - in India and Globally - is out in the Dainik Bhaskar in Hindi, Marathi and Gujarati!
Dec 18, 2024 “Quantifying Misalignment Between Agents: Towards a Sociotechnical Understanding of Alignment,” with Aidan Kierans, Hananel Hazan, and Shiri Dori-Hacohen, has been accepted at the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25)!
Dec 15, 2024 I was happy to contribute to an Institute for Security and Technology report on Navigating AI Compliance by tracing failure patterns from history, led by Mariami Tkeshelashvili and Tiffany Saade. The report examines past compliance failures in other industries and how the lessons learned can be applied to AI governance.
Nov 27, 2024 The accepted tiny papers for our NeurIPS 2024 Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI (EvalEval) Workshop are out! Read them here. Congrats to the authors, and looking forward to the workshop!
Nov 14, 2024 “To Err is AI: A Case Study Informing LLM Flaw Reporting Practices,” with Sean McGregor, Allyson Ettinger, Nick Judd, Paul Albee, Liwei Jiang, Kavel Rao, Will Smith, Shayne Longpre, Christopher Fiorelli, Michelle Hoang, Sven Cattell, and Nouha Dziri, has been accepted at the Thirty-Seventh Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-25)!
Oct 15, 2024 I was quoted in another WIRED article, on emotion-detecting glasses, where I point out that mainstream emotion detection tech could be a path to doom if implemented without considering its pitfalls and its impact on human agency.
Sep 13, 2024 “Reducing Population-level Inequality Can Improve Demographic Group Fairness: a Twitter Case Study,” with Tomo Lazovich, Kristian Lum, and Christo Wilson, has been accepted at the FAccTRec Workshop at ACM RecSys 2024!
Sep 6, 2024 I led Hugging Face’s response to the RFC on NIST AI 800-1: Managing Misuse Risk for Dual-Use Foundation Models, highlighting ways to protect the open source AI ecosystem while creating effective safety practices to manage the risks of dual-use foundation models.
Aug 28, 2024 I was quoted in a new WIRED article on wearable AI devices. I talk about hallucinations, overreliance, and the additional hurdles people with non-Western accents often face while interacting with such devices.
Aug 22, 2024 Truly open-source AI should include not just model weights but also training data, code, and thorough documentation. The Open Source Initiative has a new definition of open-source AI, and I got to talk to MIT Tech Review about it!
Aug 21, 2024 I commented on a Les Echos article about SB 1047, a new California AI bill currently up for a vote. I raised concerns about the bill’s potential chilling effects on the open source AI ecosystem.
Aug 12, 2024 I was quoted in a new Fortune Magazine article on DEF CON 2024, talking about our push towards structured harm reporting in AI. DEF CON and AI Village this year were a resounding success, and I am looking forward to having more of these conversations with practitioners, civil society, and policymakers towards greater transparency!
Aug 1, 2024 Excited to announce our NeurIPS 2024 workshop, “Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI,” aka EvalEval 2024! Evaluation is an important governance tool when it is sufficiently grounded, defined, and motivated by the needs of affected parties. See you at NeurIPS!
Jul 18, 2024 My paper Coordinated Flaw Disclosure for AI: Beyond Security Vulnerabilities, with Sven Cattell and Lucie-Aimée Kaffee, was accepted at AIES 2024! We describe the current disclosure landscape, argue for extending model cards to cover intent and scope, and propose a framework for disclosing AI flaws that includes independent adjudication.
Jul 17, 2024 Very happy to see my column about AI ethics in India in today’s Dainik Bhaskar. It is the largest Hindi daily newspaper in India, and being able to talk about the unique challenges of AI in the Indian context to such a wide audience means a lot to me! 🙂
Jun 2, 2024 I am excited to share my first ever policy response - Hugging Face’s response to the NIST RFC on AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile! Thanks a ton to my manager/mentor duo Irene Solaiman and Yacine Jernite for walking me through this massive effort!
May 28, 2024 I was quoted in a new article at Fast Company about our work in finding demographic biases in Google and Bing image search with real user queries!
Apr 25, 2024 I had a lot of fun speaking at the AI Bias and Ethics panel organized by Out in Tech and MassChallenge. It was great to interact with the thriving queer tech community in the Boston area and highlight important issues of discrimination and social justice in AI and tech more broadly.
Apr 22, 2024 I participated in a closed-door AI and copyright law workshop at Harvard Law School called Transform Copyright, where I represented Hugging Face and advanced the cause of open source, democratic AI.
Apr 8, 2024 My collaborative art piece, “Because of You” with Eryk Salvaggio, just got accepted to the CVPR 2024 AI Art Gallery!
Mar 16, 2024 My comments on covert racial bias in LLMs appeared in articles at MIT Technology Review and The Guardian.
Jan 23, 2024 “Perceptions in pixels: analyzing perceived gender and skin tone in real-world image search results” has been accepted as a full paper at WWW 2024! Congratulations to my co-authors Jeffrey Gleason, Dr. Ronald E. Robertson and Dr. Christo Wilson.
Jan 12, 2024 I had the distinct honor of moderating and participating in a panel of experts, talking to Congressional staffers and cybersecurity professionals, at the AI Vulnerability Reporting event organized by Hackers on the Hill (HotH), with support from AI Village and AI Vulnerability Database/ARVA!
Oct 12, 2023 I was featured on IndiaTimes, where I talked about India-specific issues that Queer people might face in the current landscape of generative AI.
Aug 10, 2023 I was excited to work with the AI Village DC and partners at DEF CON 2023 to help organize the largest-ever public generative AI red-teaming challenge! Several attendees and organizers, myself included, were mentioned in the New York Times and Forbes!
Jun 15, 2023 “Queer In AI: A Case Study in Community-Led Participatory AI” received BEST PAPER award at ACM FAccT 2023! Congratulations QueerInAI!
May 5, 2023 Paper: “When Fair Classification Meets Noisy Protected Attributes” accepted at AIES 2023!
Apr 10, 2023 Two CRAFT proposals: “Towards an India-first Responsible AI research agenda”, and “Humanitarian AI for the Global South” accepted at ACM FAccT 2023!
Apr 7, 2023 Paper: “Queer In AI: A Case Study in Community-Led Participatory AI” accepted at ACM FAccT 2023!
Mar 12, 2023 I’m speaking at SXSW 2023 in Austin! Talk title: Can There Be AI Art Without An Artist?

The best way to reach me is via email. I am also frequently active on X/Twitter.