There’s a really neat new idea for training neural networks that recently came out, known as generative adversarial nets (GANs).
The basic idea of a GAN is to train two networks to compete with each other (hence the name “adversarial”). One network (called the generator) creates images that look just like real images. The other network (called the discriminator) distinguishes between real images and the images the generator produces.
Thus the two networks compete with each other: the generator tries to generate images that fool the discriminator, while the discriminator tries to tell the generator’s images apart from the real ones.
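To make the competition concrete, here is a minimal sketch of the standard GAN losses in NumPy. The discriminator outputs below are made-up numbers for illustration, not values from a trained network; the generator loss uses the common non-saturating form.

```python
import numpy as np

# D(x) is the discriminator's estimated probability that an image is real.
# These outputs are hypothetical, just to show the loss computation.
d_real = np.array([0.9, 0.8, 0.95])   # discriminator outputs on real images
d_fake = np.array([0.1, 0.3, 0.2])    # discriminator outputs on generated images

# The discriminator wants D(real) -> 1 and D(fake) -> 0:
d_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

# The generator wants D(fake) -> 1 (non-saturating generator loss):
g_loss = -np.mean(np.log(d_fake))

print(d_loss, g_loss)
```

In real training, these two losses are minimized in alternation: one gradient step on the discriminator's weights, then one on the generator's, each holding the other network fixed.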
Here’s how to compute true positives, false positives, true negatives, and false negatives in Python using the NumPy library.
Note that we are assuming a binary classification problem here. That is, a value of 1 indicates the positive class, and a value of 0 indicates the negative class. For multi-class problems, this breakdown doesn’t directly apply (you would need one-vs-rest counts per class).
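Here is a short self-contained example (the label arrays are made up for illustration). The trick is to combine boolean masks over the true and predicted labels:

```python
import numpy as np

# Example ground-truth labels and model predictions (1 = positive, 0 = negative).
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])

# Each count is the number of positions where both conditions hold.
tp = np.sum((y_true == 1) & (y_pred == 1))  # predicted positive, actually positive
fp = np.sum((y_true == 0) & (y_pred == 1))  # predicted positive, actually negative
tn = np.sum((y_true == 0) & (y_pred == 0))  # predicted negative, actually negative
fn = np.sum((y_true == 1) & (y_pred == 0))  # predicted negative, actually positive

print(tp, fp, tn, fn)  # -> 2 1 3 2
```

The four counts always sum to the number of examples, which is a handy sanity check.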
I recently finished the audiobook “Napoleon: A Life” by Andrew Roberts. As you may expect from the title, this hefty audiobook (nearly 33 hours) gives great detail on the life of Napoleon Bonaparte.
What struck me most about the life of Napoleon was his ability to cope – no, more than cope, to thrive – in such an environment of confrontation. Napoleon waged wars against the strongest powers in Europe, and even in the midst of massive confrontations, he wrote obsessive letters to people on seemingly trivial topics, detailing their marriages and affairs. Even when opposed by such great forces, he did not shut down, he did not seek escapism, except perhaps for brief periods with his mistresses.
During an odd phase of fascination with American politics (I’m Canadian), I stumbled across the political coverage of the 2016 Idaho primary on Nate Silver’s website (fivethirtyeight.com). Their cold, analytical coverage of the election appealed to me. It turns out Nate also wrote a book about prediction, which, luckily for me, is also available as an audiobook.
The core idea

The takeaway from this book is essentially this: prediction is really hard and most people (and machines) suck at it (except for weather forecasters). More concerningly (is that a word?), most people don’t even know that they suck at it. Oh, and you should use Bayesian statistics to give probabilistic estimates and update your probabilities when you get new information.
This book encouraged me to take a hard look at my own predictions. Do I suck at them? Do I actually understand Bayes’ theorem?
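As a quick self-test, here is Bayes’ theorem applied to the classic rare-event example. All the numbers are hypothetical, chosen only to make the point:

```python
# Hypothetical numbers: a rare event with a 1% prior, and a test that
# detects it 90% of the time but has a 5% false-positive rate.
prior = 0.01                  # P(event)
p_pos_given_event = 0.90      # P(positive | event)
p_pos_given_no_event = 0.05   # P(positive | no event)

# Bayes' theorem: P(event | positive)
#   = P(positive | event) * P(event) / P(positive)
p_positive = (p_pos_given_event * prior
              + p_pos_given_no_event * (1 - prior))
posterior = p_pos_given_event * prior / p_positive

print(posterior)
```

Even with a seemingly accurate test, the posterior comes out to roughly 15% – a positive result is still probably a false alarm, which is exactly the kind of counterintuitive update the book wants you to internalize.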
Here is a combined short summary of my travels to the city of Prague in the Czech Republic, along with corresponding images created using Google’s DeepDream.
What is this DeepDream you speak of?
Basically, DeepDream is a deep neural network that was trained to recognize objects from millions of images. A deep neural network is composed of a stack of layers. These layers learn image filters that, when applied to an image, classify it (e.g., is this an image of a cat or a dog?).
You give DeepDream an image and specify a layer in the neural network. The original image is then slightly perturbed to create a modified image that causes the specified layer in the neural network to be more activated.
Early layers in the neural network are sensitive to low-level concepts like edges and textures in the image. So if you specify an early layer, your image will be modified to have the edges and textures that most activate the selected layer.
Later (or deeper) layers in the neural network are activated when they see higher-level concepts such as faces. So any areas of the original image that slightly look like a face will be modified to look more like a face.
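The perturbation step above is just gradient ascent on the input image. Here is a toy NumPy sketch of that loop, where a hand-written edge detector stands in for a real network layer (a real DeepDream would backpropagate through something like VGG16 instead):

```python
import numpy as np

# Stand-in "layer activation": sum of squared horizontal differences,
# which responds to edges much like an early convolutional layer might.
def activation(img):
    diff = img[:, 1:] - img[:, :-1]
    return np.sum(diff ** 2)

def activation_grad(img):
    # Analytic gradient of the activation with respect to each pixel.
    grad = np.zeros_like(img)
    diff = img[:, 1:] - img[:, :-1]
    grad[:, 1:] += 2 * diff
    grad[:, :-1] -= 2 * diff
    return grad

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))  # toy "image"

before = activation(img)
for _ in range(10):
    img += 0.05 * activation_grad(img)  # ascend the gradient, not descend
after = activation(img)

print(before, after)  # the perturbed image activates the "layer" more
```

Swapping in a deeper layer of a real network changes what the gradient pushes the image toward – from edges and textures to eyes, faces, and dog snouts.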
Okay, now you might ask: but what about Prague? How was your trip? Did you like the city?
Yeah it was nice! Thanks for asking. Did you want to see some pictures? Here’s one of an old building.
Let’s try some deep dreaming on this. We’ll use the neural network known as VGG16 (a famous network that has performed very well in image-recognition competitions). We’ll start by telling VGG16 to modify this image so that one of its middle layers becomes more activated. Specifically, we will activate layer conv3_1 from VGG16 (if you don’t know what conv3_1 means, that’s okay – it’s just a technical detail specifying which layer to use). This gives us this:
Now if we activate a deeper layer, conv5_2, we get this crazy-looking image: