Newsroom

Attila A. Yavuz receives second Cisco Award for “Trustworthy and Privacy-Preserving Machine Learning Platforms”

September 29, 2020

Attila A. Yavuz, CSE Assistant Professor, Director of the Applied Cryptography Research Laboratory, and Co-Director of the Center for Cryptographic Research at USF, received the Cisco Award for his research project entitled “Trustworthy and Privacy-Preserving Machine Learning Platforms”. Dr. Yavuz has earned the Cisco Award twice as sole PI; the current award is for $99,980. The Cisco Research Center grants these research awards with the goal of “facilitating collaboration and exploration of new and promising technologies.”

Machine Learning (ML) is an invaluable tool for developing intelligent systems in critical domains such as healthcare, financial analytics, and cyber-security (e.g., intrusion detection, malware classification). Because these systems handle sensitive data (e.g., personal, financial, and location information), ensuring their privacy and security is of paramount importance. Ideally, the sensitive data processed by ML algorithms should remain encrypted at all times for privacy, and its integrity and authenticity must be guaranteed despite persistent threats such as malware. However, there is a substantial research gap in creating ML techniques that are secure and privacy-preserving yet practical: existing approaches either rely on extremely costly encryption methods (e.g., fully homomorphic encryption) or make security assumptions that may not hold in real-world applications (e.g., collusion-freeness, semi-honest attackers). Hence, there is a critical need for ML platforms that can process large-scale data under encryption with high efficiency, even in the presence of active malware.

The objective of this project is to create a new trustworthy machine learning platform that can run in untrusted software environments with high privacy and efficiency. The proposed platform will overcome the efficiency and resiliency limitations of existing secure ML approaches by enabling trustworthy ML functionalities on commodity platforms in the presence of active attackers (e.g., malware, collusion, or corrupted inputs). The key innovation is to combine secure multi-party computation (MPC) with trusted execution environments (TEEs) to mitigate collusion vulnerabilities, malicious inputs, single points of failure, and privacy leakage. On one hand, secure hardware will give MPC resiliency against breaches, collusion, and malicious inputs with high efficiency. On the other hand, MPC will offer robustness and strong privacy guarantees even against curious hardware. The expected outcome is a trustworthy ML framework that realizes a broad range of ML functionalities (e.g., PCA, SVM, EM) with MPC and secure hardware. The project will then build a novel privacy-preserving network intrusion detection platform that can execute ML algorithms on encrypted network traffic without leaking sensitive information. The proposed framework can be extended to other use cases such as financial and healthcare applications. The strategic use of secure hardware to encapsulate specialized MPC techniques is expected to deliver performance orders of magnitude faster than existing approaches while providing compromise resiliency.
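As a concrete illustration of one standard MPC building block that such a platform could pair with secure hardware, the sketch below shows additive secret sharing in Python: two parties jointly compute a sum over their private inputs without either party seeing the other's data. The modulus, function names, and two-party setting are illustrative assumptions for exposition only, not components of the awarded project.

```python
import secrets

# Illustrative prime modulus (assumption; real MPC deployments choose field
# parameters to match their protocol and security level).
P = 2**61 - 1

def share(value, n_parties=2):
    """Split a secret into additive shares that sum to the value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine additive shares to recover the secret mod P."""
    return sum(shares) % P

# Two parties hold private inputs (e.g., feature values in an ML pipeline).
x, y = 42, 100

# Each party splits its input and hands one share to the other party.
x_shares = share(x)
y_shares = share(y)

# Each party locally adds the shares it holds; an individual share looks
# uniformly random, so neither party learns the other's input.
partial_0 = (x_shares[0] + y_shares[0]) % P
partial_1 = (x_shares[1] + y_shares[1]) % P

# Combining the partial results reveals only the sum, not x or y.
assert reconstruct([partial_0, partial_1]) == (x + y) % P
print("secure sum:", reconstruct([partial_0, partial_1]))
```

In the spirit of the project described above, a trusted execution environment would host parts of such share-based computations so that a compromised host cannot tamper with them, while MPC limits what curious or colluding parties, or the hardware itself, can learn; this pairing is the kind of design the sketch is meant to make tangible, not a description of the project's actual protocols.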