Aatman Vaidya
I am a Master’s student in Computational Linguistics at the University of Tübingen. Previously, I worked as a Research Engineer at Tattle, where I built citizen-centric tools and datasets to understand and respond to online harms.
I work at the intersection of language, technology, and society. My research interests broadly lie in computational social science and human-centered AI, with a focus on measuring and mitigating online harms in low-resource contexts and making digital spaces safer and more inclusive.
Here are some research directions I work on (and am excited about!):
- Platform-Mediated Harms. I study how harmful content spreads across online platforms. My earlier work analyzed the propagation of toxic content on Twitter, Koo, and Gab. I have also conducted a mixed-methods cross-platform study of the illicit betting app ecosystem in India, tracing how deceptive social media promotion (misleading ads, fake celebrity endorsements, deepfakes) connects to documented financial harm experienced by users.
- Human-AI Interaction. I am interested in understanding how people actually use AI systems, and how they seek information through AI versus the Web. My current (ongoing) work collects and analyzes real-world interaction data from the Global South to study how these exchanges unfold in practice across diverse demographic contexts, and what that means for designing human-AI systems that are safer and more socio-culturally aligned.
- Participatory Design & Safety Tools. I believe safety interventions must be grounded in the lived experiences of those most affected by online harm. At Tattle, I worked with gender rights organizations in India using participatory methods to build a dataset on online gender-based violence and co-developed an AI safety benchmark for Hindi, now part of the MLCommons AILuminate suite. These insights directly shaped tools like Uli, a browser plugin that redacts abusive content in low-resource languages, and Feluda, an engine for multilingual and multimodal content analysis.
You can find more information about me in my CV.
updates
| Mar 2026 | Our work on Quantifying the Illicit Ecosystem of Betting Apps in India is accepted at ICWSM 2026! |
|---|---|
| Nov 2025 | Pre-print of our work on Modelling the Spread of Toxicity is out. |
| Oct 2025 | Started a Master’s in Computational Linguistics at University of Tübingen!! |
| Sep 2025 | Spoke on how to implement AI safety guardrails in GenAI systems at the Tech4Dev AI Cohort. (slides) |
| Aug 2025 | Presented at the Tech4Dev AI Cohort Program on incorporating safety in AI-for-good applications. |
| May 2025 | Featured by FOSS United as part of Maintainers Month, where they highlighted 31 Indian FOSS projects over the month. (see Folklore page) |
| Mar 2025 | Gave a lightning talk on Open Source Deepfake Detection at MisinfoCon India 2025. (slides) |
| Mar 2025 | Gave a talk on “Evaluating Indian Language Performance in LLMs” at AI for Global Development by the Agency Fund. (slides) |
| Jan 2025 | Conducted a survey of Indic language capabilities in LLMs. (preprint) |
| Nov 2024 | Worked with MLCommons to develop an AI safety benchmark dataset in Hindi on hate and sex-related crimes in India. |
| Apr 2024 | The Uli Dataset paper was accepted at the Workshop on Online Harms and Abuse at NAACL 2024. (Won the Outstanding Paper Award!!) |
| Dec 2023 | Conducted a shared task on Gendered Abuse Detection in Indic Languages at ICON 2023. |
| Nov 2023 | Our paper “Analysing the Spread of Toxicity on Twitter” was accepted at CODS-COMAD. |
| Nov 2023 | Invited for a panel at the Digital Citizen Summit. |
| Sep 2023 | Our paper “Forecasting the Spread of Toxicity on Twitter” was accepted at IEEE CogMI. |
| Aug 2023 | Started working at Tattle! |
| May 2023 | Went to Cluj, Romania to participate in the semi-final round of the Bosch Future Mobility Challenge, where we built an autonomous driving car. |