I am currently a Staff Research Scientist at Google working in the general area of Privacy and Security.
I graduated in 2014 with a Ph.D. in Computer Science from the Department of Computer Science and Engineering at New York University's Polytechnic School of Engineering (now the Tandon School of Engineering). I completed my Bachelor of Technology in ICT at the Dhirubhai Ambani Institute of Information and Communication Technology (DA-IICT), now Dhirubhai Ambani University (DAU), India, in 2009.
PhD in Computer Science
New York University, School of Engineering
BTech in Information & Communication Technology
Dhirubhai Ambani Institute of Information and Communication Technology (DA-IICT), now Dhirubhai Ambani University (DAU)
I’m a research scientist on the privacy research team at Google. I publish in the areas of privacy & security, artificial intelligence (AI), machine learning, and data mining.
My research combines qualitative and quantitative methods to investigate the privacy preferences and concerns of both users and developers. I then apply large-scale data analysis and machine learning/AI techniques, including LLMs, to design and implement novel privacy and security features.
Most recently, I have been applying and critically analyzing Large Language Models (LLMs) to address fundamental, real-world challenges in cybersecurity and privacy. This work spans two main areas. First, I employ LLMs as a scalable and interpretable foundation for solving complex security and privacy problems, such as high-accuracy IoT device identification from incomplete network metadata. Second, I conduct large-scale empirical studies to understand and mitigate the emerging privacy risks of AI: investigating how users and developers perceive and respond to threats such as implicit attribute inference by LLMs, and studying large-scale, real-world trends in user concerns (e.g., by analyzing millions of Android app reviews) to inform more effective, human-centered mitigation strategies for AI systems.