Date of Award

2022

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Computer Science

First Advisor

Christian Skalka

Second Advisor

Joseph P. Near

Abstract

We present novel techniques that advance the goal of secure and private machine learning. The widespread use of machine learning poses a serious privacy risk to the data used to train models: data owners are forced to trust that aggregators will keep their data secure and that released models will preserve their privacy. The works presented in this thesis address both problems through approaches based on secure multiparty computation and differential privacy. The novel FLDP protocol leverages the learning with errors (LWE) problem to mask model updates, yielding an efficient secure aggregation protocol that scales easily to large models. Continuing in this vein of scalable secure aggregation, the SHARD protocol uses a multi-layered secret sharing scheme to perform efficient secure aggregation over very large federations. Together, these protocols allow a federation to train models without requiring data owners to trust an aggregator. To ensure the privacy of trained models, we propose immediate sensitivity, a framework for reducing the efficacy of membership inference attacks against neural networks. Immediate sensitivity uses an additive noise mechanism inspired by differential privacy to privatize model updates during training. By determining the scale of the noise from the gradient of the gradient, immediate sensitivity trains more accurate models than the differentially private gradient clipping approach. Each of these works is supported by extensive experimental evaluation.
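
To make the LWE-based masking idea concrete, the following Python sketch shows how masks of the form A·s + e cancel in aggregate when the clients' secrets sum to zero, so the aggregator recovers the sum of updates plus only a small error. Every name and parameter here is an illustrative assumption; this is a toy of the general technique, not the FLDP protocol itself.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 8          # model-update dimension (toy size)
N_CLIENTS = 3
Q = 2**16        # working modulus (hypothetical)
SIGMA = 1.0      # LWE error scale (hypothetical)

# Public LWE matrix shared by all parties (assumed setup step).
A = rng.integers(0, Q, size=(DIM, DIM))

# Client secrets are arranged to sum to zero mod Q, so masks cancel
# in the aggregate.
secrets = [rng.integers(0, Q, size=DIM) for _ in range(N_CLIENTS - 1)]
secrets.append((-np.sum(secrets, axis=0)) % Q)

def mask_update(update, s):
    """Hide an integer-encoded update under an LWE sample A*s + e."""
    e = np.rint(rng.normal(0, SIGMA, size=DIM)).astype(np.int64)
    return (update + A @ s + e) % Q

updates = [rng.integers(0, 100, size=DIM) for _ in range(N_CLIENTS)]
masked = [mask_update(u, s) for u, s in zip(updates, secrets)]

# The aggregator sums masked vectors; A @ (sum of secrets) vanishes
# mod Q, leaving the true sum plus the accumulated small LWE errors.
agg = np.sum(masked, axis=0) % Q
true = np.sum(updates, axis=0) % Q
residual = (agg - true) % Q
residual = np.where(residual > Q // 2, residual - Q, residual)
print(residual)  # small signed error from the e terms
```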
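The layered secret sharing behind SHARD can be sketched in a similar spirit: clients are partitioned into groups, additive shares hide each update within its group, and only group-level sums are combined upward. The group sizes, modulus, and two-layer structure below are assumptions chosen for illustration, not SHARD's actual construction.

```python
import numpy as np

rng = np.random.default_rng(1)
Q = 2**31 - 1  # sharing modulus (hypothetical)

def share(x, n):
    """Split vector x into n additive shares that sum to x mod Q."""
    shares = [rng.integers(0, Q, size=x.shape) for _ in range(n - 1)]
    shares.append((x - np.sum(shares, axis=0)) % Q)
    return shares

# Toy federation: 2 groups of 3 clients, 4-dimensional updates.
groups = [[rng.integers(0, 1000, size=4) for _ in range(3)]
          for _ in range(2)]

# Layer 1: each client shares its update among its group; each member
# sums the shares it receives into a partial aggregate, so no single
# party ever sees an individual update in the clear.
group_sums = []
for group in groups:
    all_shares = [share(u, len(group)) for u in group]
    partials = [sum(s[i] for s in all_shares) % Q
                for i in range(len(group))]
    group_sums.append(np.sum(partials, axis=0) % Q)

# Layer 2: only group-level sums travel up to the final aggregate.
total = np.sum(group_sums, axis=0) % Q
expected = np.sum([u for g in groups for u in g], axis=0) % Q
print(total, expected)  # the two vectors agree
```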
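Finally, a minimal PyTorch sketch of the gradient-of-the-gradient computation behind immediate sensitivity: a double backward pass measures how sharply the parameter-gradient norm responds to the training inputs, and that scale calibrates additive Gaussian noise in place of gradient clipping. The model, the norm used, and the noise multiplier are hypothetical stand-ins; the dissertation's exact calibration may differ.

```python
import torch

torch.manual_seed(0)

# Toy model and batch (hypothetical stand-ins for one training step).
model = torch.nn.Linear(5, 1)
x = torch.randn(8, 5, requires_grad=True)
y = torch.randn(8, 1)

loss = torch.nn.functional.mse_loss(model(x), y)

# First backward pass: parameter gradients, kept in the graph so we
# can differentiate through them.
grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
grad_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))

# "Gradient of the gradient": how the gradient norm changes as the
# inputs are perturbed; its norm serves as the immediate sensitivity.
(sens_grad,) = torch.autograd.grad(grad_norm, x)
sensitivity = sens_grad.norm()

# Scale Gaussian noise by the estimated sensitivity instead of clipping.
sigma = 0.1  # hypothetical noise multiplier
with torch.no_grad():
    noisy_grads = [g.detach() + sigma * sensitivity * torch.randn_like(g)
                   for g in grads]
```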

Language

en

Number of Pages

145 p.
