Deep features to classify skin lesions – summary and slides

Me nervously just starting to talk about our approach to skin lesion classification.

We presented our work, “Deep Features to Classify Skin Lesions,” at ISBI 2016 in Prague! And I’m happy to report that our work was awarded runner-up for the Best Student Paper Award 🙂

In this work, we looked at how to classify skin lesions from images captured with a standard digital camera (i.e., non-dermoscopic images). Our approach distinguished among 10 different types of skin diseases across 1300 images and achieved a higher accuracy than previously reported on the same dataset. We did this by applying deep learning (i.e., pretrained convolutional neural networks) to melanoma and non-melanoma skin images.

The 10 different types of skin lesions we want to classify.

An interesting takeaway from this work is that we showed we can take a model (i.e., a convolutional neural network, CNN) that was developed on a large dataset of images of natural objects (e.g., dogs, cats, trees) and use that same model to distinguish among different types of skin lesions. In other words, the CNN learns general parameters that work across widely different domains. That this works well is somewhat surprising, as the images found in ImageNet look very different from the skin images. The CNN’s ability to generalize has also been shown in other works (e.g., Codella et al., Donahue et al.).
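To give a concrete (if simplified) picture of what reusing an ImageNet-trained CNN looks like, here is a minimal sketch of the pretrained-network-as-feature-extractor idea. It uses PyTorch/torchvision with a ResNet-18 and a scikit-learn classifier purely for illustration; this is not the exact network, toolkit, or classifier from our paper:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import LogisticRegression

# Load a CNN pretrained on ImageNet (natural objects) and drop its final
# classification layer, so it outputs a generic feature vector instead of
# ImageNet class scores.
cnn = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
cnn.fc = torch.nn.Identity()
cnn.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path):
    """Return the pretrained CNN's feature vector for one skin image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        feats = cnn(preprocess(img).unsqueeze(0))
    return feats.squeeze(0).numpy()

# Hypothetical labelled lesion images (paths and 10-class labels):
# X = [extract_features(p) for p in train_paths]
# clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
```

Note that the CNN itself is never trained on skin images here; only the small classifier on top ever sees the lesion labels.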

We then improved on these results by creating a feature vector based on image features computed at multiple scales and under different augmentations (e.g., flips and rotations).
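As a rough sketch of that idea (again only illustrative; the scales, augmentations, and pooling in the paper differ), features from several resized and flipped/rotated copies of an image can be pooled into a single descriptor, reusing the `cnn` feature extractor from the sketch above:

```python
import numpy as np
import torch
import torchvision.transforms as T
from PIL import Image

# Normalization only; the image is resized explicitly below so the CNN
# actually sees the lesion at different scales.
normalize = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def multiscale_augmented_features(image_path, scales=(224, 448)):
    """Illustrative only: pool CNN features over scales and simple augmentations."""
    img = Image.open(image_path).convert("RGB")
    feats = []
    for size in scales:
        resized = img.resize((size, size))
        # Original, horizontally flipped, and 90-degree rotated copies per scale.
        for variant in (resized,
                        resized.transpose(Image.FLIP_LEFT_RIGHT),
                        resized.rotate(90)):
            with torch.no_grad():
                x = normalize(variant).unsqueeze(0)
                feats.append(cnn(x).squeeze(0).numpy())
    # Pool (here: average) the per-variant features into one descriptor;
    # concatenating the per-scale features is another reasonable choice.
    return np.mean(feats, axis=0)
```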

Another key idea of this work is that we did not require lesion segmentations. Many works require that the lesion is delineated (segmented) from the background, and then use this segmentation to guide feature extraction. However, segmentation is a hard, unsolved problem. So in this work, we skipped the segmentation step and extracted features directly from the whole image, and still managed to classify the images quite well.

I’ve posted the slides from our ISBI talk below!

Questions/comments? If you just want to say thanks, consider sharing this article or following me on Twitter!