A while ago I presented and attempted to explain this work to our reading group: van den Oord, A., Kalchbrenner, N., Vinyals, O., Espeholt, L., Graves, A., & Kavukcuoglu, K. (2016). Conditional Image Generation with PixelCNN Decoders. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, & R. Garnett (Eds.), NIPS (pp. 4790–4798). Retrieved from http://arxiv.org/abs/1606.05328
I also dug a bit into their previous work: van den Oord, A., Kalchbrenner, N., & Kavukcuoglu, K. (2016). Pixel Recurrent Neural Networks. arXiv. Retrieved from http://arxiv.org/abs/1601.06759
While I usually post slides to the web shortly after, this time I've been scared to do so. There are a few critical points from this paper that I still don't understand. And while I told myself that I would spend some time figuring them out, it is now months later, and I've taken no action. Since now is as good a time as any to press on in spite of the fear, I'll let you, dear Internet, have these slides in all their erroneous ways.
Basically, they create a policy network, a convolutional neural network that predicts the next move a human player would make given a board state. They also create a value network, another convolutional neural network, which predicts the outcome of the game (win or lose) from the current board state. Continue reading “Mastering the Game of Go – slides [paper explained]”
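To make the two roles concrete, here is a toy sketch of the input/output contract of the two networks. This is not the paper's architecture (the real models are deep CNNs trained on expert games and self-play); a single random linear layer stands in for each network, purely to illustrate that the policy maps a board state to a distribution over moves, while the value maps the same state to a scalar win/loss estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
BOARD = 19  # standard Go board size

def policy_network(board_planes, weights):
    """Toy stand-in for the policy CNN: board state -> probability
    distribution over the 361 board points (candidate moves)."""
    logits = board_planes.reshape(-1) @ weights          # (361,)
    exp = np.exp(logits - logits.max())                  # stable softmax
    return exp / exp.sum()

def value_network(board_planes, weights):
    """Toy stand-in for the value CNN: board state -> scalar in
    (-1, 1), where -1 means predicted loss and +1 predicted win."""
    return np.tanh(board_planes.reshape(-1) @ weights)

# Board state encoded as feature planes (own stones, opponent stones, empty).
state = rng.integers(0, 2, size=(3, BOARD, BOARD)).astype(float)
w_policy = rng.normal(scale=0.01, size=(3 * BOARD * BOARD, BOARD * BOARD))
w_value = rng.normal(scale=0.01, size=3 * BOARD * BOARD)

move_probs = policy_network(state, w_policy)  # one probability per board point
outcome = value_network(state, w_value)       # scalar win/loss estimate
```

In the actual system these two outputs are combined inside a Monte Carlo tree search: the policy narrows which moves to explore, and the value scores positions without playing them out to the end.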
I just got a great question asking why there is a discrepancy in the accuracy reported in our two works:
[ISBI paper, we report 81.8% accuracy over 10 classes] Kawahara, J., BenTaieb, A., & Hamarneh, G. (2016). Deep features to classify skin lesions. In IEEE ISBI (pp. 1397–1400). Summary and slides here.
[MICCAI MLMI paper, we report 74.1% accuracy over 10 classes] Kawahara, J., & Hamarneh, G. (2016). Multi-Resolution-Tract CNN with Hybrid Pretrained and Skin-Lesion Trained Layers. In MLMI. Summary and slides here.
In this work, we looked at how to classify skin lesions from images captured with a digital camera (i.e., non-dermoscopic images). Our approach was able to distinguish among 10 different types of skin diseases across 1300 images, achieving an accuracy higher than previously reported on the same dataset. We did this by applying deep learning (i.e., pretrained convolutional neural networks) to melanoma and non-melanoma skin images.
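The general recipe of using a pretrained CNN as a fixed feature extractor and training a simple classifier on top can be sketched as follows. Everything here is hypothetical and self-contained: the "deep features" are synthetic Gaussian clusters rather than actual CNN activations, and a nearest-centroid classifier stands in for whatever classifier a given paper trains on the features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, feat_dim = 10, 64  # 10 skin conditions; small feature dim for the demo

# Synthetic "deep features": in practice these would be activations from a
# CNN pretrained on natural images; here each class is a Gaussian cluster.
centers = rng.normal(size=(n_classes, feat_dim))
X_train = np.vstack([c + 0.1 * rng.normal(size=(20, feat_dim)) for c in centers])
y_train = np.repeat(np.arange(n_classes), 20)

# Train a nearest-centroid classifier on the extracted feature vectors.
centroids = np.vstack(
    [X_train[y_train == k].mean(axis=0) for k in range(n_classes)]
)

def predict(x):
    """Assign a feature vector to the class with the nearest centroid."""
    return int(np.argmin(((centroids - x) ** 2).sum(axis=1)))

# Classify a held-out sample drawn from class 3's cluster.
test_x = centers[3] + 0.1 * rng.normal(size=feat_dim)
pred = predict(test_x)
```

The appeal of this setup is that the pretrained network supplies a rich image representation even when the medical dataset is far too small to train a deep network from scratch.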