D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané, “Concrete problems in AI safety,” arXiv preprint arXiv:1606.06565, 2016.
 O. Ohrimenko, F. Schuster, C. Fournet, A. Mehta, S. Nowozin, K. Vaswani, and M. Costa, “Oblivious multi-party machine learning on trusted processors,” in 25th USENIX Security Symposium (USENIX Security 16), 2016.
 K. P. Murphy, Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
 A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
 I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112.
 H. Drucker, D. Wu, and V. N. Vapnik, “Support vector machines for spam categorization,” IEEE Transactions on Neural Networks, vol. 10, no. 5, pp. 1048–1054, 1999.
 A. K. Jain, M. N. Murty, and P. J. Flynn, “Data clustering: A review,” ACM Computing Surveys, vol. 31, no. 3, pp. 264–323, 1999.
 A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” 2009.
 J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber, “Stacked convolutional auto-encoders for hierarchical feature extraction,” in International Conference on Artificial Neural Networks and Machine Learning, 2011, pp. 52–59.
 D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio, “Why does unsupervised pre-training help deep learning?” Journal of Machine Learning Research, vol. 11, pp. 625–660, 2010.
 V. Chandola, A. Banerjee, and V. Kumar, “Anomaly detection: A survey,” ACM Computing Surveys, vol. 41, no. 3, pp. 15:1–15:58, 2009.
 J. Hu and M. P. Wellman, “Nash Q-learning for general-sum stochastic games,” Journal of Machine Learning Research, vol. 4, pp. 1039–1069, 2003.
 R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 1998.
 D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, 2016.
 C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
 I. Goodfellow, Y. Bengio, and A. Courville, “Deep learning,” 2016, Book in preparation for MIT Press (www.deeplearningbook.org).
 M. Christodorescu and S. Jha, “Static analysis of executables to detect malicious patterns,” in 12th USENIX Security Symposium (USENIX Security 03), 2003.
 J. Zhang and M. Zulkernine, “Anomaly based network intrusion detection with unsupervised outlier detection,” in IEEE International Conference on Communications, vol. 5, 2006, pp. 2388–2393.
 R. Sommer and V. Paxson, “Outside the closed world: On using machine learning for network intrusion detection,” in 2010 IEEE Symposium on Security and Privacy. IEEE, 2010, pp. 305–316.
 J. Cannady, “Next generation intrusion detection: Autonomous reinforcement learning of network attacks,” in Proceedings of the 23rd National Information Systems Security Conference, 2000, pp. 1–12.
 N. S. Altman, “An introduction to kernel and nearest-neighbor nonparametric regression,” The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.
 M. Anthony and P. L. Bartlett, Neural Network Learning: Theoretical Foundations. Cambridge University Press, 2009.
 L. Rosasco, E. De Vito, A. Caponnetto, M. Piana, and A. Verri, “Are loss functions all the same?” Neural Computation, vol. 16, no. 5, pp. 1063–1076, 2004.
 A. Sinha, D. Kar, and M. Tambe, “Learning adversary behavior in security games: A PAC model perspective,” in Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2016, pp. 214–222.
 M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, “Can machine learning be secure?” in Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security. ACM, 2006, pp. 16–25.
 N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” in Proceedings of the 1st IEEE European Symposium on Security and Privacy. IEEE, 2016.
 V. Vapnik and A. Vashist, “A new learning paradigm: Learning using privileged information,” Neural Networks, vol. 22, no. 5, pp. 544–557, 2009.
 M. Kloft and P. Laskov, “Online anomaly detection under adversarial impact,” in International Conference on Artificial Intelligence and Statistics, 2010, pp. 405–412.
 D. Lowd and C. Meek, “Good word attacks on statistical spam filters,” in CEAS, 2005.
 C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” in Proceedings of the 2014 International Conference on Learning Representations. Computational and Biological Learning Society, 2014.
 N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against deep learning systems using adversarial examples,” arXiv preprint arXiv:1602.02697, 2016.
 P. Laskov et al., “Practical evasion of a learning-based classifier: A case study,” in 2014 IEEE Symposium on Security and Privacy. IEEE, 2014, pp. 197–211.
 R. J. Bolton and D. J. Hand, “Statistical fraud detection: A review,” Statistical Science, pp. 235–249, 2002.
 T. C. Rindfleisch, “Privacy, information technology, and health care,” Communications of the ACM, vol. 40, no. 8, pp. 92–100, 1997.
 M. Fredrikson, S. Jha, and T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. ACM, 2015, pp. 1322–1333.
 R. Shokri, M. Stronati, and V. Shmatikov, “Membership inference attacks against machine learning models,” arXiv preprint arXiv:1610.05820, 2016.
 M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, “Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016, pp. 1528–1540.
 D. M. Powers, “Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation,” 2011.
 M. Kearns and M. Li, “Learning in the presence of malicious errors,” SIAM Journal on Computing, vol. 22, no. 4, pp. 807–837, 1993.
 A. Globerson and S. Roweis, “Nightmare at test time: robust learning by feature deletion,” in Proceedings of the 23rd international conference on Machine learning. ACM, 2006, pp. 353–360.
 N. Manwani and P. S. Sastry, “Noise tolerance under risk minimization,” IEEE Transactions on Cybernetics, vol. 43, no. 3, pp. 1146–1151, 2013.
 B. Nelson and A. D. Joseph, “Bounding an attack’s complexity for a simple learning model,” in Proc. of the First Workshop on Tackling Computer Systems Problems with Machine Learning Techniques (SysML), Saint-Malo, France, 2006.
 G. Hulten, L. Spencer, and P. Domingos, “Mining time-changing data streams,” in Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2001, pp. 97–106.
 B. Biggio, B. Nelson, and P. Laskov, “Support vector machines under adversarial label noise.” in ACML, 2011, pp. 97–112.
 M. Mozaffari-Kermani, S. Sur-Kolay, A. Raghunathan, and N. K. Jha, “Systematic poisoning attacks on and defenses for machine learning in healthcare,” IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 6, pp. 1893–1905, 2015.
 B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” in Proceedings of the 29th International Conference on Machine Learning, 2012.
 S. Mei and X. Zhu, “Using machine teaching to identify optimal training-set attacks on machine learners,” in AAAI, 2015, pp. 2871–2877.
 J. Newsome, B. Karp, and D. Song, “Polygraph: Automatically generating signatures for polymorphic worms,” in 2005 IEEE Symposium on Security and Privacy. IEEE, 2005, pp. 226–241.
 R. Perdisci, D. Dagon, W. Lee, P. Fogla, and M. Sharif, “Misleading worm signature generators using deliberate noise injection,” in 2006 IEEE Symposium on Security and Privacy. IEEE, 2006, 15 pp.
 H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, and F. Roli, “Is feature selection secure against training data poisoning?” in Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 2015, pp. 1689–1698.
 B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli, “Evasion attacks against machine learning at test time,” in Machine Learning and Knowledge Discovery in Databases. Springer, 2013, pp. 387–402.
 I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in International Conference on Learning Representations. Computational and Biological Learning Society, 2015.
 S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “DeepFool: a simple and accurate method to fool deep neural networks,” arXiv preprint arXiv:1511.04599, 2015.
 S. Alfeld, X. Zhu, and P. Barford, “Data poisoning attacks against autoregressive models,” in Thirtieth AAAI Conference on Artificial Intelligence, 2016.
 G. Ateniese, L. V. Mancini, A. Spognardi, A. Villani, D. Vitali, and G. Felici, “Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers,” International Journal of Security and Networks, vol. 10, no. 3, pp. 137–150, 2015.
 K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel, “Adversarial perturbations against deep neural networks for malware classification,” arXiv preprint arXiv:1606.04435, 2016.
 A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” arXiv preprint arXiv:1607.02533, 2016.
 W. Xu, Y. Qi, and D. Evans, “Automatically evading classifiers,” in Proceedings of the 2016 Network and Distributed System Security Symposium (NDSS), 2016.
 M. Fredrikson, E. Lantz, S. Jha, S. Lin, D. Page, and T. Ristenpart, “Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing,” in 23rd USENIX Security Symposium (USENIX Security 14), 2014, pp. 17–32.
 F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction APIs,” arXiv preprint arXiv:1609.02943, 2016.
 N. Papernot, P. McDaniel, and I. Goodfellow, “Transferability in machine learning: from phenomena to black-box attacks using adversarial samples,” arXiv preprint arXiv:1605.07277, 2016.
 G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” in NIPS 2014 Deep Learning and Representation Learning Workshop, 2014, arXiv:1503.02531.
 D. C. Liu and J. Nocedal, “On the limited memory bfgs method for large scale optimization,” Mathematical programming, vol. 45, no. 1-3, pp. 503–528, 1989.
 Y. LeCun and C. Cortes, “The MNIST database of handwritten digits,” 1998.
 D. Warde-Farley and I. Goodfellow, “Adversarial perturbations of deep neural networks,” in Advanced Structured Prediction, T. Hazan, G. Papandreou, and D. Tarlow, Eds., 2016.
 R. Huang, B. Xu, D. Schuurmans, and C. Szepesvari, “Learning with a strong adversary,” arXiv preprint arXiv:1511.03034, 2015.
 A. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images,” in Computer Vision and Pattern Recognition (CVPR 2015). IEEE, 2015.
 N. Carlini, P. Mishra, T. Vaidya, Y. Zhang, M. Sherr, C. Shields, D. Wagner, and W. Zhou, “Hidden voice commands,” in 25th USENIX Security Symposium (USENIX Security 16), Austin, TX, 2016.
 L. Pinto, J. Davidson, and A. Gupta, “Supervision via competition: Robot adversaries for learning tasks,” arXiv preprint arXiv:1610.01685, 2016.
 I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
 G. L. Wittel and S. F. Wu, “On attacking statistical spam filters,” in CEAS, 2004.
 Y. Vorobeychik and B. Li, “Optimal randomized classification in adversarial settings,” in Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2014, pp. 485–492.
 D. Lowd and C. Meek, “Adversarial learning,” in Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining. ACM, 2005, pp. 641–647.