Avijit Ghosh
Dr. Avijit Ghosh is a Technical AI Policy Researcher at Hugging Face 🤗 and a Research Affiliate at the University of Connecticut. He sits at a critical junction between AI research and policy responses to ongoing legislative and regulatory movements. His research examines critical challenges in AI safety: from algorithmic bias to agent autonomy to standardization efforts in AI evaluation and vulnerability disclosure. His work has been covered in the press, including articles in The New York Times, Forbes, The Guardian, ProPublica, Wired, and the MIT Tech Review. His research has influenced AI regulation, established best practices for AI documentation, and advanced the democratization of machine learning technology while ensuring it serves human wellbeing. Dr. Ghosh has been an invited speaker as a Responsible AI expert at various high-impact events such as SXSW, the MIT Sloan AI Conference, and the Summit on State AI Legislation. He has also organized academic workshops as a member of QueerInAI and engaged with policymakers at various levels in the United States, United Kingdom, and Singapore. His work has led to real-world impact, including helping shape regulation in New York City and prompting Facebook to remove its biased ad-targeting algorithm.
News
| Date | News |
|---|---|
| Nov 13, 2025 | New paper: Who Evaluates AI’s Social Impacts? Mapping Coverage And Gaps In First And Third Party Evaluations. This is the first major output from the Evaluating Evaluations Coalition. We analyzed 186 first-party model release documents and 183 post-release evaluation documents across 7 dimensions of social impact. We find that model developers have become less transparent about their social impact evaluation results over time. This strengthens the case for independent third-party evaluations for AI safety. |
| Nov 12, 2025 | New paper: Open Technical Problems in Open-Weight AI Model Risk Management. We present 16 open technical challenges spanning training, evaluation, deployment, and ecosystem monitoring, emphasizing that openness about research and methods—not just weights—is key to rigorous risk management. |
| Nov 5, 2025 | I was on the Responsible AI podcast with Alex Alben, discussing how AI could address human drudgery and why we need to focus on solving immediate, practical problems rather than pursuing speculative frontier challenges. |
| Nov 3, 2025 | I had the distinct honor of speaking on a panel entitled ‘AI’s Effect on the Creative Industries’, organized by All Tech Is Human and hosted by the Consulate General of Canada in New York. We explored the impact of generative AI on the creative industries, and I stressed the importance of holistic labor protections for creatives and of going beyond copyright as a single tool. |
| Nov 1, 2025 | We at Eval-Eval, along with the UK AISI, are hosting a closed-door workshop on the state of evaluations at UC San Diego on December 8th for those attending NeurIPS. Request to join. |
| Oct 17, 2025 | We are launching a weekly paper spotlight series to improve AI evaluations through community engagement. First up: Do Large Language Model Benchmarks Test Reliability? |
| Oct 9, 2025 | I spoke on a panel on “Rights (to Life, Liberty, and the Pursuit of Happiness)” at the UConn Humanities Institute AI Symposium. |
| Oct 6, 2025 | Our new position paper argues that AI for scientific discovery is fundamentally a social problem, showing that culture, incentives, and coordination are the main obstacles to progress. We are launching the Hugging Science Initiative to address these challenges. Read the paper. |
| Sep 24, 2025 | I gave a talk at my alma mater Northeastern University, sharing personal anecdotes about how I have experienced AI, how those experiences fueled my research, and the questions that remain. Event details |
| Aug 11, 2025 | We at Eval-Eval published a blog post on the AI Eval Charts Crisis, examining the challenges and incentive structures behind misleading evaluation visualizations. |
| Jul 16, 2025 | We are officially kicking off the Eval Science Workstream, building a shared scientific foundation for evaluating AI systems that is rigorous, open, and grounded in real-world and cross-disciplinary best practices. Read our blog post. |
| Jul 15, 2025 | New blog post: “What is the Hugging Face Community Building?”, with Yacine Jernite and Irene Solaiman. We explore how the real story of AI development is happening in the open source ecosystem across 1.8M models, 450K datasets, and 560K apps on Hugging Face. |
| Jun 17, 2025 | We are hosting a short interactive workshop on AI, Misrepresentation and Mental Health at both FAccT in Athens and Alt-FAccT in NYC. Sign up here. |
| May 22, 2025 | I was on the Ethical Machines podcast with Reid Blackman, discussing AI, agency, and why we need better scalable oversight mechanisms as the technical infrastructure around us starts to account for agents! |
| May 19, 2025 | I appeared in a segment by Matt Levin on NPR Marketplace! I talk about how we are becoming dependent on GenAI for coding (and other tasks), and how we need better structures so that these tools aid and upskill us instead of replacing us. |
| Apr 18, 2025 | I was quoted in a new article on Politico with my comments about SB 1047 and the need for proportional liability mechanisms. |
| Apr 17, 2025 | I spoke on a panel discussing “Is U.S. Policy Ready for Agentic AI?” hosted by the Center for Data Innovation. |
| Apr 14, 2025 | “It’s not a representation of me”: Examining Accent Bias and Digital Exclusion in Synthetic AI Voice Services has been accepted as a full paper at ACM FAccT 2025! Congratulations to my co-authors Shira Michel, Sufi Kaur, Sarah Elizabeth Gillespie, Jeffrey Gleason, and Dr. Christo Wilson. |
| Apr 10, 2025 | New blog post: Empowering Public Organizations: Preparing Your Data for the AI Era is now out on Hugging Face! In this article, we outline a comprehensive guide to transforming public data into AI-ready formats, empowering government agencies, libraries, and nonprofits to maximize their data’s potential for machine learning applications. |
| Mar 24, 2025 | My first ever op-ed! Why handing over total control to AI agents would be a huge mistake, with Margaret Mitchell, Sasha Luccioni, and Giada Pistilli, is out in MIT Tech Review! In it, we investigate the slow but sure ceding of power and control to increasingly autonomous AI agents and ask: how much autonomy is too much? |
| Mar 23, 2025 | I was quoted in a new article on Business Insider about the hidden costs of reasoning with ChatGPT. I reflect on the common conflation of automation and autonomy. |
| Mar 19, 2025 | Some more coverage of our paper on why fully automating agents is a bad idea! I was quoted in a new piece in Science News Explores on DeepSeek, agents, and open source. |
| Mar 14, 2025 | We submitted Hugging Face’s response to the White House Office of Science and Technology Policy’s request for information on the White House AI Action Plan. We took this opportunity to (re-)assert the fundamental role that open AI systems and open science play in enabling the technology to be more performant and efficient, broadly and reliably adopted, and meeting the highest standards of security. Covered in VentureBeat. |
| Mar 14, 2025 | I was incredibly thrilled to speak on a panel on Open Source AI Regulation at the Summit for State AI Legislation (SSAIL) hosted by the Digital Ethics Center, Yale University. We got into the nitty gritty of emergent AI regulation, how it might impact open source in particular, and important considerations going forward. |
| Mar 13, 2025 | New paper: In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI. 34 prominent researchers from industry, academia, civil society, and policy came together to draft a proposal for a better, self-sustaining system for AI flaw reporting that involves coordination and transparency from all the actors involved. Covered in Wired. |
| Mar 8, 2025 | I spoke on the NLP & Generative AI Panel at the Harvard Business School Tech Conference, where I reflected on my work that sits at the intersection of technology, ethics and policy. We talked policy at Hugging Face, Chinese LLMs and more! |
| Feb 25, 2025 | We submitted Hugging Face’s response to the United Kingdom Intellectual Property Office’s Open Consultation on Copyright and AI. We continue to champion data transparency mechanisms and standardized opt-outs, and we contend that text and data mining (TDM) exceptions are required for research organizations and small open source developers to do important work. |
| Feb 23, 2025 | I was quoted in a Business Insider article on AI agents and job boards. I push back against conflating autonomy with agency, and point out that agents with human oversight will augment, not replace, human workers. |
| Feb 19, 2025 | New article: I spoke to TechCircle India about the harms of open-washing and the need for better open standards! We also talked about Hugging Face’s open reproduction of DeepSeek R1. |
| Feb 17, 2025 | New paper: Protecting Human Cognition in the Age of AI, with Anjali Singh, Karan Taneja, and Klara Guan, accepted at the Tools for Thought Workshop at CHI 2025! In this work, we argue that overreliance on generative AI models disrupts traditional learning pathways, and we suggest best practices for better teaching, testing, and learning tools to restore these paths. |
| Feb 7, 2025 | I spoke on a panel called “Concrete Applications of AI” at the MIT Sloan School of Management AI Conference. I discussed the broken AI harm reporting landscape, the importance of evals, safe harbors, structured and coordinated disclosure processes, and our proposed framework (Coordinated AI Flaws Disclosure) as a path forward. |
| Feb 7, 2025 | New paper! Stop treating “AGI” as the north-star goal of AI research. In this paper, co-authored with a group of incredible scholars, we posit that “AGI” is as nebulous to define as its supposed benefits, and we argue that principled scientific, engineering, and societal needs should drive AI research instead. |
| Feb 4, 2025 | New blog post: Smol but Mighty: Can Small Models Reason well? 🤔 now out on Hugging Face! In this article, I show that “smol” (less than 2B parameters) language models pack impressive performance, competing with models several orders of magnitude larger from just last year. I also find that Chinese-developed models (DeepSeek R1, Qwen) exhibit distinctly different cultural biases than American models. |
| Feb 4, 2025 | New paper with Margaret Mitchell, Sasha Luccioni, and Giada Pistilli: Fully Autonomous AI Agents Should Not be Developed. In this paper, we argue from a values-based perspective that, as autonomy increases, the values people care about start breaking down. Some level of human control always needs to be in place. |
| Jan 24, 2025 | Another article with my comments about Stargate! New article on Business Insider where I talk about competitive hardware moats. I reject the false premise of compute = utility. Teams like DeepSeek have made remarkable progress creatively with fewer resources, and this type of progress will only become more possible through openness and collaboration! |
| Jan 22, 2025 | New article on Fortune with my comments about Stargate. I talked about the importance of public AI infrastructure, the concentration of power, and the need for openness to counter that, as well as how this mad dash for AGI can actively siphon away resources from issues that can be solved with technology in the present. |
| Jan 13, 2025 | The newest Ethics and Society Newsletter: AI Agents Are Here. What Now? is out! Had a lot of fun writing this with my colleagues: Margaret Mitchell, Giada Pistilli and Sasha Luccioni. Our analyses found that there’s a spectrum of “agent”-ness, and that safety is a key issue, leading to many other value-based concerns. |
| Jan 3, 2025 | My column with predictions for AI in 2025 - in India and Globally - is out in the Dainik Bhaskar in Hindi, Marathi and Gujarati! |
| Dec 18, 2024 | “Quantifying Misalignment Between Agents: Towards a Sociotechnical Understanding of Alignment,” with Aidan Kierans, Hananel Hazan, and Shiri Dori-Hacohen, has been accepted at the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25)! |
| Dec 15, 2024 | I was happy to contribute to an Institute for Security and Technology report on Navigating AI Compliance, tracing failure patterns from history, led by Mariami Tkeshelashvili and Tiffany Saade. The report examines past compliance failures in other industries and how we can apply the lessons learned to AI governance. |
| Nov 27, 2024 | The accepted tiny papers for our NeurIPS 2024 Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI (EvalEval) Workshop are out! Read them here. Congrats to the authors, and looking forward to the workshop! |
| Nov 14, 2024 | “To Err is AI: A Case Study Informing LLM Flaw Reporting Practices,” with Sean McGregor, Allyson Ettinger, Nick Judd, Paul Albee, Liwei Jiang, Kavel Rao, Will Smith, Shayne Longpre, Christopher Fiorelli, Michelle Hoang, Sven Cattell, and Nouha Dziri, has been accepted at the Thirty-Seventh Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-25)! |
| Oct 15, 2024 | I was quoted in another WIRED article on emotion-detecting glasses, where I point out that mainstream emotion detection tech could be a path to doom if implemented without considering its pitfalls and impact on human agency. |
| Oct 9, 2024 | I spoke at Arthur AI Fest 2024 on “ERASED BY AI: Personal Experiences of the Psychological Impact of AI Bias.” |
| Oct 8, 2024 | I spoke about Coordinated Disclosure for AI: Beyond Security Vulnerabilities at the Strengthening AI Accountability Through Better Third-Party Evaluations event hosted by Stanford HAI. |
| Sep 13, 2024 | “Reducing Population-level Inequality Can Improve Demographic Group Fairness: a Twitter Case Study,” with Tomo Lazovich, Kristian Lum, and Christo Wilson, has been accepted at the FAccTRec Workshop at ACM RecSys 2024! |
| Sep 10, 2024 | I gave a talk at IIT Kharagpur on “Bridging the Gap: Real-World AI Biases and Responsible Governance Frameworks.” |
| Sep 6, 2024 | I led Hugging Face’s response to the RFC on AI 800-1: Managing Misuse Risk for Dual-Use Foundation Models, highlighting ways to protect the open source AI ecosystem while creating effective safety practices to manage the risks of dual-use foundation models. |
| Aug 28, 2024 | I was quoted in a new WIRED article on wearable AI devices. I talk about hallucinations, overreliance, and the additional hurdles people with non-Western accents often face while interacting with such devices. |
| Aug 22, 2024 | Truly open-source AI should include not just model weights but also training data, code, and thorough documentation. The Open Source Initiative has a new definition of open-source AI, and I got to talk to MIT Tech Review about it! |
| Aug 21, 2024 | I commented in a Les Echos article about SB 1047, a new California AI bill currently up for a vote. I raised concerns about the bill’s potential chilling effects on the open source AI ecosystem. |
| Aug 12, 2024 | I was quoted in a new Fortune Magazine article on DEF CON 2024, talking about our push towards structured harm reporting in AI. DEF CON and AI Village this year were a resounding success, and I am looking forward to having more of these conversations with practitioners, civil society, and policymakers towards greater transparency! |
| Aug 1, 2024 | Excited to announce our NeurIPS 2024 Workshop, “Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI,” aka EvalEval 2024! Evaluation is an important governance tool if it is sufficiently grounded, defined, and motivated by the needs of affected parties. See you at NeurIPS! |
| Jul 18, 2024 | My paper Coordinated Flaws Disclosure for AI: Beyond Security Vulnerabilities, with Sven Cattell and Lucie-Aimée Kaffee, was accepted at AIES 2024! We describe the current disclosure landscape, argue for extending model cards to cover intent and scope, and propose a framework for disclosing AI flaws that includes independent adjudication. |
| Jul 17, 2024 | Very happy to see my column about AI ethics in India in today’s Dainik Bhaskar. It is the largest Hindi daily newspaper in India, and being able to talk about the unique challenges of AI in the Indian context to such a wide audience means a lot to me! 🙂 |
| Jun 2, 2024 | I am excited to share my first ever policy response - Hugging Face’s response to the NIST RFC on AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile! Thanks a ton to my manager/mentor duo Irene Solaiman and Yacine Jernite for walking me through this massive effort! |
| May 28, 2024 | I was quoted in a new article at Fast Company about our work in finding demographic biases in Google and Bing image search with real user queries! |
| May 20, 2024 | I spoke about Coordinated Disclosure for AI: Beyond Security Vulnerabilities at the EQUAL lab at Mila. |
| Apr 25, 2024 | I had a lot of fun speaking at the AI Bias and Ethics panel organized by Out in Tech and MassChallenge. It was great to interact with the thriving queer tech community in the Boston area and highlight important issues of discrimination and social justice in AI and tech in general. |
| Apr 22, 2024 | I participated in a closed-door AI and copyright law workshop at Harvard Law School called Transform Copyright, where I represented Hugging Face and advanced the cause of open source, democratic AI. |
| Apr 8, 2024 | My collaborative art piece, “Because of You” with Eryk Salvaggio, just got accepted to the CVPR 2024 AI Art Gallery! |
| Mar 28, 2024 | I spoke on “Technology Impact on Cybersecurity” at Boston University. |
| Mar 16, 2024 | My comments on covert racial bias in LLMs appeared in articles at MIT Technology Review and The Guardian. |
| Jan 23, 2024 | “Perceptions in pixels: analyzing perceived gender and skin tone in real-world image search results” has been accepted as a full paper at WWW 2024! Congratulations to my co-authors Jeffrey Gleason, Dr. Ronald E. Robertson and Dr. Christo Wilson. |
| Jan 12, 2024 | I had the distinct honor of moderating and participating in a panel of experts, talking to Congressional staffers and cybersecurity professionals, at the AI Vulnerability Reporting event organized by Hackers on the Hill (HotH), with support from AI Village and AI Vulnerability Database/ARVA! |
| Oct 12, 2023 | I was featured on IndiaTimes, where I talked about India-specific issues that Queer people might face in the current landscape of generative AI. |
| Aug 10, 2023 | I was excited to be involved with the AI Village DC and partners at DEF CON 2023 to help organize the largest-ever public Generative AI Red Teaming challenge! Along with several other attendees and organizers, I was mentioned in the New York Times and Forbes! |
| Jun 15, 2023 | “Queer In AI: A Case Study in Community-Led Participatory AI” received BEST PAPER award at ACM FAccT 2023! Congratulations QueerInAI! |
| May 5, 2023 | Paper: “When Fair Classification Meets Noisy Protected Attributes” accepted at AIES 2023! |
| Apr 10, 2023 | Two CRAFT proposals: “Towards an India-first Responsible AI research agenda”, and “Humanitarian AI for the Global South” accepted at ACM FAccT 2023! |
| Apr 7, 2023 | Paper: “Queer In AI: A Case Study in Community-Led Participatory AI” accepted at ACM FAccT 2023! |
| Mar 12, 2023 | I’m speaking at SXSW 2023 in Austin! Talk title: Can There Be AI Art Without An Artist? |