Dog-whistle GANs
This is a free sample of the content available on my new subscription service, Matthew Explains.
In this talk I go over the paper "Generative Adversarial Nets" by Goodfellow et al. from NIPS 2014. This paper is often credited with introducing the technique in its title, which involves training two neural networks in competition: a generator tries to produce outputs that fool a second network, the discriminator, into mistaking them for the original training data. I go over how that works, and add some perspective on its significance in the present day, when we're especially concerned with recognizing neural network output for applications like "plagiarism" detection. I also add some ideas of my own on watermarking and making the output of models covertly recognizable.
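To make the adversarial setup concrete, here is a minimal sketch of the two-player game in plain numpy. Everything here is an illustrative assumption rather than the paper's actual setup: the paper uses multilayer perceptrons on image data, while this toy uses a linear generator and a logistic-regression discriminator on 1-D Gaussian data, just to show the alternating gradient updates.

```python
import numpy as np

# Toy sketch of the GAN training loop (Goodfellow et al., 2014).
# Assumptions: 1-D data, linear generator G(z) = a*z + b, logistic
# discriminator D(x) = sigmoid(w*x + c). Learning rate and step count
# are arbitrary illustrative choices.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Training data": samples from N(4, 1) the generator must mimic.
    return rng.normal(4.0, 1.0, size=n)

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for step in range(2000):
    z = rng.normal(size=32)        # noise fed to the generator
    x_real = sample_real(32)
    x_fake = a * z + b

    # Discriminator: gradient ascent on  E[log D(real)] + E[log(1 - D(fake))].
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient descent on  E[log(1 - D(G(z)))]  -- it wants the
    # discriminator to label its fakes as real.
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean(d_fake * w * z)
    b += lr * np.mean(d_fake * w)

fake_mean = float(np.mean(a * rng.normal(size=10_000) + b))
```

At equilibrium the generator's output distribution should drift toward the real one (mean near 4 here), while the discriminator is driven toward outputting 1/2 everywhere; with models this crude, expect only a rough approximation.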