Good news! I have successfully defended my PhD dissertation titled “Security Analysis of Malicious Socialbots on the Web.” It is available here.
I would like to thank my examination committee members, namely, Konstantin Beznosov (co-advisor), Matei Ripeanu (co-advisor), William (Bill) Aiello, Sidney Fels, and David Lie (external, University of Toronto). I’m also grateful to all my friends and colleagues who have been there for me. Thank you, folks!
While it has been a long and humbling journey, I cannot wait to start a new one. I’ll keep you updated!
Our latest research on identifying automated fake accounts in online social networks has been accepted at the 2015 Network and Distributed System Security Symposium (NDSS’15), to be held in February in San Diego, USA.
In this work, we present Integro, a scalable defense system that helps OSNs detect automated fake accounts using a meaningful user ranking scheme. We implemented Integro on top of Mahout and Giraph, where it scaled nearly linearly. We evaluated Integro against SybilRank, the state of the art in fake account detection, using real-world datasets and a large-scale deployment at Tuenti, the largest OSN in Spain. In particular, we show that Integro significantly outperforms SybilRank in user ranking quality. Moreover, deploying Integro at Tuenti resulted in an order of magnitude higher precision in fake account detection, as compared to SybilRank.
Integro is published as part of Grafos ML, a set of systems and tools for large-scale machine learning and graph analytics on top of Giraph.
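For readers unfamiliar with ranking-based fake account detection, the sketch below illustrates the general idea behind random-walk trust propagation on a social graph, the family of schemes that SybilRank belongs to and that Integro builds on. This is not Integro’s actual algorithm (which additionally leverages victim prediction; see the paper), and the graph, names, and parameters here are made up for illustration.

```python
import math

def rank_users(graph, trusted, iterations=None):
    """Propagate trust from a seed set of known-real accounts.

    graph: dict mapping each user to a list of friends (undirected).
    trusted: set of verified real accounts used as trust seeds.
    Returns degree-normalized trust scores; low scores suggest fakes.
    """
    n = len(graph)
    if iterations is None:
        # Short, O(log n)-step walks keep trust from leaking too far
        # across the few edges connecting fakes to the real region.
        iterations = int(math.ceil(math.log2(n)))
    # Initialize: total trust of 1.0 split among the seed accounts.
    trust = {u: (1.0 / len(trusted) if u in trusted else 0.0) for u in graph}
    for _ in range(iterations):
        new_trust = {u: 0.0 for u in graph}
        for u, friends in graph.items():
            share = trust[u] / len(friends)  # spread trust evenly to friends
            for v in friends:
                new_trust[v] += share
        trust = new_trust
    # Normalize by degree so highly connected users aren't unfairly favored.
    return {u: trust[u] / len(graph[u]) for u in graph}

# Toy graph: alice, bob, carol are real; fake1 and fake2 attach to the
# real region through a single edge (bob befriended fake1).
graph = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "carol", "fake1"],
    "carol": ["alice", "bob"],
    "fake1": ["bob", "fake2"],
    "fake2": ["fake1"],
}
scores = rank_users(graph, trusted={"alice", "carol"})
ranked = sorted(scores, key=scores.get)  # lowest-trust users first
```

Because only one edge bridges the fake accounts to the real region, little trust reaches them in a few propagation steps, so they end up at the bottom of the ranking, which is the intuition these defenses exploit at scale.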
It’s my pleasure to announce that I will be giving a talk at the AAAI 2014 Spring Symposia on March 24th. As part of the Social Hacking and Cognitive Security on the Internet and New Media Symposium, I will be talking about our ongoing research on protecting the social web from abusive automation, socialbots in particular.
This symposium, which is sponsored by AAAI and held in cooperation with the Stanford University Computer Science Department, will convene a diverse group of experts in the broad area of cognitive security (“CogSec”), which includes developing methods that (1) detect and analyze cognitive vulnerabilities and (2) block efforts to exploit such vulnerabilities to influence collective action at multiple scales.