UWashington Distinguished Seminar in Optimization and Data: Fatma Kilinc-Karzan
Topic
Mistake, Manipulation, and Margin Guarantees in Online Strategic Classification
Speakers
Fatma Kilinc-Karzan
Details
We consider an online strategic classification problem in which each arriving agent can manipulate their true feature vector to obtain a positive predicted label, incurring a cost that depends on the amount of manipulation. The learner seeks to predict the agent's true label while having access only to the manipulated features. After the learner releases their prediction, the agent's true label is revealed. While previous algorithms such as the strategic perceptron guarantee finitely many mistakes under a separability assumption on the agents' true feature vectors, we show that they fail to encourage agents to be truthful, resulting in infinitely many manipulations. We show that promoting truthfulness is intimately linked to obtaining an adequate margin on the predictions, and we provide two new algorithms aimed at recovering the maximum-margin classifier in the presence of strategic agent behavior in an online setting. We prove finite mistake and finite manipulation guarantees, as well as convergence to the maximum-margin classifier, for our algorithms under a variety of agent cost structures and minor assumptions. We also provide a generalized version of the strategic perceptron algorithm along with its mistake guarantees for different costs under some assumptions, and we establish that the assumptions made in the earlier literature are necessary to guarantee its finite mistake bound. Our numerical study on real and synthetic data demonstrates that the new algorithms outperform previous ones in terms of the number of mistakes, the number of manipulations, and the margin across various settings. This is joint work with Lingqing Shen, Nam Ho-Nguyen, and Khanh-Hung Giang-Tran.
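To make the interaction concrete, the sketch below simulates the online protocol described in the abstract with a linear classifier, an l2 manipulation cost (rate `cost`), an agent who values a positive label at 1, and a plain perceptron update on mistakes. The agent model, cost parameter, and update rule are illustrative assumptions for exposition only; they are not the strategic perceptron or the new algorithms presented in the talk.

# Minimal sketch of the online strategic classification protocol (illustrative
# assumptions: linear classifier, l2 manipulation cost, plain perceptron step).
import numpy as np

def agent_best_response(x, w, cost=1.0):
    """If the true features x would be predicted negative, the agent moves along
    w just far enough to cross the decision boundary, provided the l2 movement
    costs no more than the (normalized) value of a positive label."""
    score = w @ x
    norm_w = np.linalg.norm(w)
    if score >= 0 or norm_w == 0:
        return x  # already predicted positive, or no informative classifier yet
    dist = -score / norm_w          # l2 distance to the decision boundary
    if cost * dist <= 1.0:          # manipulation is worth the cost
        return x + (dist + 1e-9) * w / norm_w
    return x

def run_protocol(stream, dim, cost=1.0):
    """Online loop: the learner sees only the (possibly manipulated) features,
    predicts, then observes the true label and updates on a mistake."""
    w = np.zeros(dim)
    mistakes = manipulations = 0
    for x_true, y_true in stream:
        x_obs = agent_best_response(x_true, w, cost)
        manipulations += int(not np.array_equal(x_obs, x_true))
        y_pred = 1 if w @ x_obs >= 0 else -1
        if y_pred != y_true:             # true label revealed after prediction
            mistakes += 1
            w = w + y_true * x_obs       # baseline perceptron step on observed features
    return w, mistakes, manipulations

# Example usage on a synthetic linearly separable stream (hypothetical data):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X @ np.array([1.0, -0.5]) >= 0, 1, -1)
w, num_mistakes, num_manipulations = run_protocol(zip(X, y), dim=2)

Running this baseline illustrates the phenomenon the abstract highlights: a mistake-driven update alone does not discourage manipulation, which motivates the margin-based algorithms discussed in the talk.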