Kaggle Submission: In my last post, I explained how I managed to reach a testset accuracy of 97.5%. Since I was not aware that it is still possible to make a Kaggle submission after the compe…| IFT6266-Deep Learning
In my last post, I was evaluating the model presented here and I’ve shown some misclassified images. Among them, there were images of small pets appearing at random locations. For those images, an …
Last Sunday, I presented a model which achieved a testset accuracy of 94.9% (see this post). This network follows the VGG approach (“depth is good”, 3×3 filters), and Batch Normali…
Last Tuesday, I gave a presentation in the IFT6268 class about the Batch Normalization paper. Therefore, I’ll start this blog post with a review of this paper. Then, I’ll present my experiment…
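The core operation the Batch Normalization paper describes (normalize each feature over the mini-batch, then scale and shift with learned parameters) can be sketched in a few lines of NumPy. This is only an illustrative forward pass under my own naming, not the code used in the experiments:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch Normalization forward pass (training mode) for a 2-D
    mini-batch of shape (batch_size, num_features).

    Each feature is normalized to zero mean and unit variance over
    the batch dimension, then rescaled by the learned parameters
    gamma (scale) and beta (shift)."""
    mu = x.mean(axis=0)                 # per-feature batch mean
    var = x.var(axis=0)                 # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

At test time the paper replaces the batch statistics with running averages accumulated during training; that part is omitted here for brevity.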
Well, I spent most of the week working on my code (adding the possibility to use Fuel instead of loading datasets in memory, and moving the scripts to the GPU cluster). But I still have some result…
VGG net: (VGG = Visual Geometry Group, Department of Engineering Science, University of Oxford) A few days ago, I read this article, which presents the work done by the “VGG” team for the…
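The VGG idea that stacks of small 3×3 filters can replace a single larger filter can be checked with a quick receptive-field computation (a sketch; the function name is mine):

```python
def receptive_field(num_layers, kernel=3):
    """Effective receptive field of `num_layers` stacked stride-1
    convolutions with the given kernel size (no pooling).

    Each extra layer grows the receptive field by (kernel - 1),
    so two stacked 3x3 convolutions see a 5x5 region and three
    see a 7x7 region, with fewer parameters than one large filter."""
    return 1 + num_layers * (kernel - 1)
```

For example, three stacked 3×3 layers cover the same 7×7 region as one 7×7 filter, while using 27 weights per channel pair instead of 49 and adding two extra non-linearities.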
For the class project, I decided to work on the “Dogs vs Cats” Kaggle challenge, which was held from September 25, 2013 to February 1st, 2014. Project presentation The challenge consists…
Parametric ReLUs During these last days, I’ve been experimenting with an activation function called Parametric ReLU. Similarly to Leaky ReLU, this activation function does not saturate when z < 0. Se…
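The Parametric ReLU mentioned above is the identity for positive inputs and a learned linear slope for negative ones. A minimal NumPy sketch (the function name and the 0.25 initialization, which is the one used in the PReLU paper, are illustrative — in a real network the slope is a trainable parameter):

```python
import numpy as np

def prelu(z, a=0.25):
    """Parametric ReLU: returns z when z > 0 and a * z otherwise.

    Unlike Leaky ReLU, where the negative-side slope is a fixed
    constant, `a` is learned during training, so the network can
    decide how much signal to let through for z < 0 instead of
    saturating at zero like a plain ReLU."""
    return np.where(z > 0, z, a * z)
```

With a = 0 this reduces to the standard ReLU, and with a small fixed a (e.g. 0.01) it reduces to Leaky ReLU.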