This is part 5 of my series. In the previous post, we looked at both the VGG and GoogLeNet models: we implemented both and discussed their architectural differences in detail. We then used pre-trained versions of the models for transfer learning on the Caltech-256 dataset.| Maybe-Ray
This is part 4 of my series. In the previous post, we explored AlexNet and implemented the model. We then used a pre-trained version of AlexNet for transfer learning on the Caltech-256 dataset.| Maybe-Ray
This is part 2 of my series, following my previous post on LeNet-5, where we implemented and trained it on the MNIST dataset. This post walks through the AlexNet architecture and its implementations in different frameworks. After that, we will use transfer learning on AlexNet so it can learn the Caltech-256 dataset.| Maybe-Ray
For an interactive experience with this post's code, check out this Kaggle notebook.| Maybe-Ray
Disclaimer: This blog post is more of a reference guide and a set of personal notes, so it should not be taken as a tutorial, a beginner's guide to Computer Vision or CNNs, or a deep discussion of the paper Gradient-Based Learning Applied to Document Recognition.| Maybe-Ray
Note: This post gives an overview of the system's design and how it was built. If you're looking for the full technical walkthrough, check out the Kaggle Notebook.| Maybe-Ray
If you had told me a month ago that I’d be working for a startup where I was late for the interview and had put the wrong contact details on my CV, I would h...| Maybe-Ray
( ͡° ͜ʖ ͡°) All these thoughts are purely my own and no one else's (for legal reasons). ...| Maybe-Ray
To cut things short: always use the easiest solution to a particular problem, and once that solution no longer works for the business, reassess wha...| Maybe-Ray