Conditional Image Generation with PixelCNN Decoders – slides

A while ago I presented and attempted to explain this work to our reading group:

van den Oord, A., Kalchbrenner, N., Vinyals, O., Espeholt, L., Graves, A., & Kavukcuoglu, K. (2016). Conditional Image Generation with PixelCNN Decoders. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, & R. Garnett (Eds.), NIPS (pp. 4790–4798). Retrieved from http://arxiv.org/abs/1606.05328

I also dived a bit into their previous work,
van den Oord, A., Kalchbrenner, N., & Kavukcuoglu, K. (2016). Pixel Recurrent Neural Networks. arXiv preprint. Retrieved from http://arxiv.org/abs/1601.06759

While I usually post slides to the web shortly after presenting, this time I've been scared to do so. There are a few critical points from this paper that I still don't understand. And while I told myself that I would spend some time to figure this out, it is now months later, and I've taken no action. Since now is always the time to continue on in spite of the fear, I'll let you, dear Internet, have these slides in all their erroneous ways.

Continue reading “Conditional Image Generation with PixelCNN Decoders – slides”

Convolutional Neural Networks for Adjacency Matrices

We had our work, BrainNetCNN, published in NeuroImage a while ago,

Kawahara, J., Brown, C. J., Miller, S. P., Booth, B. G., Chau, V., Grunau, R. E., Zwicker, J. G., & Hamarneh, G. (2017). BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment. NeuroImage, 146(Feb), 1038–1049. http://doi.org/10.1016/j.neuroimage.2016.09.046

and I’ve meant to do a blog writeup about this. We recently released our code for BrainNetCNN on GitHub (based on Caffe), which implements the proposed filters designed for adjacency matrices.

We called this library Ann4Brains. In hindsight, we could have called this something more general and cumbersome like Ann4AdjacencyMatrices, but I still like the zombie feel that Ann4Brains has.

We designed BrainNetCNN specifically with brain connectome data in mind. Thus the tag line of,

“Convolutional Neural Networks for Brain Networks”

seemed appropriate. However, after receiving some emails about using BrainNetCNN for other types of (non-connectome) data, I’ll emphasize that this approach can be applied to any sort of adjacency matrix, and not just brain connectomes.

The core contribution of this work is the filters designed for adjacency matrices themselves. So we’ll go through each of them. But first, let’s make sure we are clear on what the brain connectome (or adjacency matrix) is.
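To give a rough idea of what a filter designed for an adjacency matrix looks like, here is a minimal numpy sketch of the cross-shaped, edge-to-edge style response. The weights here are random stand-ins, and the loop form is for clarity only; this is not the actual Ann4Brains implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4                      # number of nodes (e.g., brain regions)
A = rng.random((n, n))     # toy adjacency matrix of edge weights

# Random stand-ins for learned filter weights.
w_row = rng.random(n)
w_col = rng.random(n)

# Cross-shaped response: the output for edge (i, j) combines all
# edges in row i (edges touching node i) and all edges in column j
# (edges touching node j), rather than a spatially local patch.
out = np.empty((n, n))
for i in range(n):
    for j in range(n):
        out[i, j] = A[i, :] @ w_row + A[:, j] @ w_col

assert out.shape == A.shape
```

The point of the cross shape is that "neighboring" entries of an adjacency matrix are the edges sharing a node, not the entries that happen to sit next to each other in the matrix, which is why a standard square image filter is a poor fit here.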

Continue reading “Convolutional Neural Networks for Adjacency Matrices”

HP Stream 11 review – running Ubuntu 16

tl;dr: Not recommended for non-technical people. Not recommended as a primary machine. But if you want a small secondary laptop for travel and light work, and if you install Ubuntu on it, this laptop is a surprising treat!

If installing a new operating system terrifies you (it's actually not that hard), buy something else. If it does not, then this is a great little machine. I find myself using this little HP Stream more than my other, more powerful laptop. The utility of a physically light laptop is not to be underestimated.

Note that some reviews claimed this machine runs Windows fine if you remove all the bloatware. So you might get a decent Windows experience if you strip the bloatware at the start.

Now what is this HP Stream 11, you might ask? Well, it's …

A light travel laptop

This machine is light in every sense of the word. Physically, it's a light machine; it's tiny. Color-wise, it's a light, bright blue or purple. Spec-wise, it's very light.

They should have called this HP Light 11.

But light can be good. Sometimes I want a light machine, one I won't worry about if it gets lost, stolen, or dropped and broken. With a light machine, I can fit it into a travel bag and do some rough prototyping before pushing the code to more capable machines.

Continue reading “HP Stream 11 review – running Ubuntu 16”

Mastering the Game of Go – slides [paper explained]

This week I presented to our weekly reading group, this work:

Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., … Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484–489.

To quickly summarize this work…

Basically, they create a policy network, a convolutional neural network that predicts the next move a human player would make from a given board state. They also create a value network, another convolutional neural network, that predicts the outcome of the game (win or lose) given the current board state.
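As a toy sketch of how the two networks' interfaces differ, here are random stand-ins (not the actual trained models from the paper): the policy network maps a board to a distribution over moves, while the value network maps a board to a single outcome estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_network(board):
    # Stand-in for the convolutional policy network: returns a
    # probability distribution over all 19x19 board positions.
    logits = rng.normal(size=board.size)
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def value_network(board):
    # Stand-in for the convolutional value network: returns a scalar
    # in [-1, 1] estimating the game outcome from this board state.
    return float(np.tanh(rng.normal()))

board = np.zeros((19, 19))            # an empty Go board
move_probs = policy_network(board)    # which move would a human make?
outcome = value_network(board)        # is the current position winning?

assert np.isclose(move_probs.sum(), 1.0)
assert -1.0 <= outcome <= 1.0
```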
Continue reading “Mastering the Game of Go – slides [paper explained]”

Dermofit 10-class – differences in ISBI and MLMI accuracy explained

I just got a great question asking why there is a discrepancy in the accuracy reported in our two works:

[ISBI paper, we report 81.8% accuracy over 10 classes]
Kawahara, J., BenTaieb, A., & Hamarneh, G. (2016). Deep features to classify skin lesions. In IEEE ISBI (pp. 1397–1400). Summary and slides here.

[MICCAI MLMI paper, we report 74.1% accuracy over 10 classes]
Kawahara, J., & Hamarneh, G. (2016). Multi-Resolution-Tract CNN with Hybrid Pretrained and Skin-Lesion Trained Layers. In MLMI. Summary and slides here.

We use the same Dermofit dataset, so it may seem surprising that the accuracies we report in the two papers differ. So I thought I would elaborate on why here.
Continue reading “Dermofit 10-class – differences in ISBI and MLMI accuracy explained”

Mendeley crashes on Ubuntu laptop with NVIDIA GPU

On an Ubuntu laptop with an NVIDIA GPU, when you try to open Mendeley, you get this rather unhelpful error:

The application Mendeley Desktop has closed unexpectedly.

I'm sure there are many causes for this error, but one unexpected reason you might get it is your graphics card.

If you have an NVIDIA GPU on your laptop, try switching to your Intel graphics card instead of the NVIDIA one.

To switch to your Intel graphics card, open your terminal and type:

sudo prime-select intel

Then restart Mendeley. Like magic and deep learning, it just seems to work.

(if you need to switch back to your NVIDIA card, just type sudo prime-select nvidia)

TensorFlow – failed call to cuInit: CUDA_ERROR_UNKNOWN

Scenario: you're trying to get your GPU to work with TensorFlow on an Ubuntu laptop. You've already installed TensorFlow, CUDA, and the NVIDIA drivers.

You run Python and import TensorFlow:

import tensorflow as tf

And you see encouraging messages like: "successfully opened CUDA library libcublas.so locally"

But in Python, when you run,

tf.Session()

You get this cryptic error:

failed call to cuInit: CUDA_ERROR_UNKNOWN

Here’s how to fix this.
Continue reading “TensorFlow – failed call to cuInit: CUDA_ERROR_UNKNOWN”

How to normalize vectors to unit norm in Python

There are so many ways to normalize vectors… A common preprocessing step in machine learning is to normalize a vector before passing it into some machine learning algorithm, e.g., before training a support vector machine (SVM).

One way to normalize a vector is to apply l2-normalization, which scales the vector to have a unit norm. "Unit norm" essentially means that if we squared each element in the vector and summed them, the total would equal 1.

(Note that this result is also often referred to as a unit norm, a vector of length 1, or a unit vector.)

So given a matrix X, where the rows represent samples and the columns represent features of the sample, you can apply l2-normalization to normalize each row to a unit norm. This can be done easily in Python using sklearn.

Here's how to l2-normalize vectors to unit norm in Python.

import numpy as np
from sklearn import preprocessing

# Two samples, with 3 dimensions.
# The 2 rows indicate 2 samples,
# and the 3 columns indicate 3 features for each sample.
X = np.asarray([[-1, 0, 1],
                [0, 1, 2]], dtype=float)  # Float is needed.

# Before normalization.
print(X)
# Output,
# [[-1.  0.  1.]
#  [ 0.  1.  2.]]

# l2-normalize the samples (rows).
X_normalized = preprocessing.normalize(X, norm='l2')

# After normalization.
print(X_normalized)
# Output,
# [[-0.70710678  0.          0.70710678]
#  [ 0.          0.4472136   0.89442719]]
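As a quick sanity check, each normalized row should now have an L2 norm of 1, and (using the same X as above) we can reproduce sklearn's result by dividing each row by its norm directly:

```python
import numpy as np
from sklearn import preprocessing

X = np.asarray([[-1, 0, 1],
                [0, 1, 2]], dtype=float)
X_normalized = preprocessing.normalize(X, norm='l2')

# Each row's L2 norm (sqrt of the sum of squared elements) equals 1.
row_norms = np.linalg.norm(X_normalized, axis=1)
assert np.allclose(row_norms, 1.0)

# Manual equivalent, without sklearn: divide each row by its L2 norm.
X_manual = X / np.linalg.norm(X, axis=1, keepdims=True)
assert np.allclose(X_manual, X_normalized)
```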

Now what did this do?
Continue reading “How to normalize vectors to unit norm in Python”

caffe – Check failed: proto.SerializeToOstream(&output)

You suddenly get this error when training/saving a model in Caffe or saving a model in pycaffe.

io.cpp:69] Check failed: proto.SerializeToOstream(&output)
*** Check failure stack trace: ***

Here are two possible reasons for this error

  1. The directory the snapshot is trying to write the .caffemodel into does not exist
  2. You are out of disk space
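A quick pre-flight check in Python can rule out both causes before training. The snapshot path below is a hypothetical placeholder; use the snapshot_prefix from your own solver.prototxt.

```python
import os
import shutil

# Hypothetical snapshot prefix from your solver.prototxt,
# e.g., snapshot_prefix: "snapshots/my_model"
snapshot_prefix = "snapshots/my_model"

# Reason 1: make sure the snapshot directory exists.
snapshot_dir = os.path.dirname(snapshot_prefix) or "."
os.makedirs(snapshot_dir, exist_ok=True)

# Reason 2: make sure there is free disk space for the .caffemodel.
free_gb = shutil.disk_usage(snapshot_dir).free / 1e9
print("Free disk space: %.1f GB" % free_gb)
if free_gb < 1.0:
    print("Warning: low disk space; saving the model may fail.")
```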

Continue reading “caffe – Check failed: proto.SerializeToOstream(&output)”

An Anthropologist on Mars – Oliver Sacks – audiobook review

Title: An Anthropologist on Mars
Author: Oliver Sacks
Narrator: Jonathan Davis
Year: 1995
Tags: non-fiction; clinical; neurology

Overall impressions

This book is in the same spirit as Dr. Sacks's earlier enjoyable book, The Man Who Mistook His Wife for a Hat. Of the two, I preferred his earlier work, but this book, An Anthropologist on Mars, is still definitely worth a read/listen. If you read only one, choose The Man Who Mistook His Wife for a Hat. If you liked that book and want more of the same, then this is it.
Continue reading “An Anthropologist on Mars – Oliver Sacks – audiobook review”