Paper Title

Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations

Paper Authors

Aditya Golatkar, Alessandro Achille, Stefano Soatto

Paper Abstract

We describe a procedure for removing dependency on a cohort of training data from a trained deep network that improves upon and generalizes previous methods to different readout functions and can be extended to ensure forgetting in the activations of the network. We introduce a new bound on how much information can be extracted per query about the forgotten cohort from a black-box network for which only the input-output behavior is observed. The proposed forgetting procedure has a deterministic part derived from the differential equations of a linearized version of the model, and a stochastic part that ensures information destruction by adding noise tailored to the geometry of the loss landscape. We exploit the connections between the activation and weight dynamics of a DNN inspired by Neural Tangent Kernels to compute the information in the activations.
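To make the two-part procedure described above concrete, the sketch below shows a toy version of "deterministic scrub plus geometry-tailored noise" on a ridge-regression model with a quadratic loss, where the linearized dynamics reduce to a single Newton step toward the retain-only minimizer. This is an illustrative assumption-based example, not the paper's NTK-based algorithm for deep networks; all names (w_scrubbed, lam, sigma) and the choice of an inverse-Hessian noise covariance are hypothetical.

```python
# Hypothetical sketch: scrub a forget cohort from a toy linear model, then add
# noise shaped by the loss geometry. Names and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the full training set is the retain set plus the forget cohort.
X_retain, y_retain = rng.normal(size=(80, 5)), rng.normal(size=80)
X_forget, y_forget = rng.normal(size=(20, 5)), rng.normal(size=20)
X_full = np.vstack([X_retain, X_forget])
y_full = np.concatenate([y_retain, y_forget])

lam = 1e-2  # ridge term keeps the Hessian well conditioned

def ridge_fit(X, y):
    """Closed-form minimizer and Hessian of the quadratic (ridge) loss."""
    H = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(H, X.T @ y), H

# Weights trained on all data (what the deployed model has seen).
w_full, _ = ridge_fit(X_full, y_full)

# Deterministic part: with a quadratic loss, the linearized-dynamics shift is
# a Newton step from w_full toward the retain-only minimizer.
w_retain, H_retain = ridge_fit(X_retain, y_retain)
grad_retain = X_retain.T @ (X_retain @ w_full - y_retain) + lam * w_full
w_scrubbed = w_full - np.linalg.solve(H_retain, grad_retain)

# Stochastic part: Gaussian noise shaped by the loss landscape (flat
# directions with small Hessian eigenvalues receive larger noise), meant to
# destroy residual information about the forgotten cohort.
sigma = 0.1  # noise scale; trades off forgetting strength against accuracy
cov = sigma**2 * np.linalg.inv(H_retain)
w_scrubbed += rng.multivariate_normal(np.zeros_like(w_scrubbed), cov)

print("distance to retrain-from-scratch solution:",
      np.linalg.norm(w_scrubbed - w_retain))
```

In this quadratic toy case the deterministic step alone already lands on the retrain-from-scratch solution; the added noise models the stochastic component that, in the paper's setting, bounds how much information about the forgotten cohort remains extractable per query from the network's input-output behavior.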
