This blog is dedicated to my friends who want to learn AI/ML/deep learning.
Explore the Plant Seedling Classification dataset on Kaggle at https://www.kaggle.com/c/plant-seedlings-classification. It contains training images of seedlings of 12 plant species, organized into one folder per species. Each image has a filename that serves as its unique id. The goal of the competition is to create a classifier capable of determining a plant's species from a photo; for the test set, we need to predict the species of each image.
You can download this code from here.
Start a new Kernel. First, import all the required Python modules.
We can look at the contents of the ../input/train directory to see what it contains. Then we create two functions that convert the string class names of the plant seedlings to integers and back. This is purely for convenience.
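The two conversion functions could look like the sketch below. The species list matches the 12 folder names in the competition's training set; the exact ordering here is an assumption, since any consistent ordering works.

```python
# The 12 species folder names from the Plant Seedlings training set.
# The ordering below is an arbitrary (alphabetical) choice.
SPECIES = ['Black-grass', 'Charlock', 'Cleavers', 'Common Chickweed',
           'Common wheat', 'Fat Hen', 'Loose Silky-bent', 'Maize',
           'Scentless Mayweed', 'Shepherds Purse',
           'Small-flowered Cranesbill', 'Sugar beet']

def species_to_id(name):
    """Convert a species name (folder name) to an integer class label."""
    return SPECIES.index(name)

def id_to_species(label):
    """Convert an integer class label back to the species name."""
    return SPECIES[label]
```

These let us train on integer labels and still write human-readable species names into the submission file at the end.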
Then we set the hyperparameters of the model, such as the number of epochs, the learning rate, and the batch size. Tuning these carefully generally improves the results.
In training a neural network, one epoch means one full pass over the training set. Batch size refers to the number of training examples used in one iteration (one gradient update). Here is a blog that explains learning rate.
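Gathered in one place, the hyperparameters might look like this; the specific values below are assumptions for illustration, not necessarily the ones used in the original kernel:

```python
# Assumed hyperparameter values; reasonable starting points, not tuned.
EPOCHS = 50           # one epoch = one full pass over the training set
BATCH_SIZE = 32       # training examples consumed per gradient update
LEARNING_RATE = 1e-3  # step size for the optimizer
IMG_SIZE = 128        # images are resized to IMG_SIZE x IMG_SIZE
NUM_CLASSES = 12      # one class per plant species
```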
Then we read the training images, resizing each one to 128×128 pixels.
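A minimal loading sketch, assuming Pillow for image handling and PNG files arranged in one folder per species (the function and directory names here are illustrative, not from the original kernel):

```python
from pathlib import Path

import numpy as np
from PIL import Image

IMG_SIZE = 128

def load_image(path, size=IMG_SIZE):
    """Load one image, resize it to size x size, scale pixels to [0, 1]."""
    img = Image.open(path).convert('RGB').resize((size, size))
    return np.asarray(img, dtype=np.float32) / 255.0

def load_training_set(train_dir='../input/train'):
    """Walk the per-species folders; return an image array and labels.

    Folder order defines the integer class ids, so keep it sorted.
    """
    images, labels = [], []
    for class_id, species_dir in enumerate(sorted(Path(train_dir).iterdir())):
        for img_path in species_dir.glob('*.png'):
            images.append(load_image(img_path))
            labels.append(class_id)
    return np.stack(images), np.array(labels)
```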
Then we create the model: we use 3 convolutional layers with the ReLU activation function, and in the final layer we add a softmax activation.
In the context of artificial neural networks, the rectifier is an activation function. It enables better training of deeper networks, compared to the widely used activation functions prior to 2011, i.e., the logistic sigmoid and its more practical counterpart, the hyperbolic tangent. The rectifier is, as of 2018, the most popular activation function for deep neural networks. A unit employing the rectifier is also called a rectified linear unit (ReLU).
The softmax function is often used in the final layer of a neural network-based classifier. Such networks are commonly trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression.
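The two activation functions just described can be written in a few lines of NumPy:

```python
import numpy as np

def relu(z):
    """Rectifier: max(0, z), applied element-wise."""
    return np.maximum(0.0, z)

def softmax(z):
    """Softmax: exponentiate, then normalize so the outputs sum to 1.

    Subtracting the max first is a standard trick for numerical stability;
    it does not change the result.
    """
    e = np.exp(z - np.max(z))
    return e / e.sum()
```

ReLU zeroes out negative pre-activations and passes positive ones through unchanged; softmax turns a vector of raw scores into a probability distribution over the 12 species.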
We use categorical cross-entropy as the loss function and the Adam optimizer.
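Putting the last few steps together, a Keras sketch of such a model might look like the following. The filter counts and pooling layers are assumptions for illustration; the original kernel's exact architecture may differ.

```python
from tensorflow.keras import layers, models, optimizers

def build_model(img_size=128, num_classes=12, learning_rate=1e-3):
    """A small CNN: three convolutional blocks with ReLU activations,
    a softmax output layer, categorical cross-entropy loss, and Adam."""
    model = models.Sequential([
        layers.Input(shape=(img_size, img_size, 3)),
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=learning_rate),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```

Note that categorical cross-entropy expects one-hot encoded labels, so the integer class ids from earlier need to be one-hot encoded before training.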
Then we partition the training data into a 75:25 train/validation split, compile the model, and save it. We also use image augmentation: an ImageDataGenerator produces additional training images by slightly shifting the existing ones.
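The split and the augmenting generator could be sketched like this, assuming scikit-learn for the split; the specific augmentation ranges are assumed values, not the original's:

```python
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def make_split_and_generator(X, y_onehot, batch_size=32):
    """Split 75:25 and return (augmenting train generator, validation data).

    X: array of shape (N, H, W, 3); y_onehot: one-hot labels (N, 12).
    """
    X_train, X_val, y_train, y_val = train_test_split(
        X, y_onehot, test_size=0.25, random_state=42)
    # Assumed augmentation settings: small shifts, rotations, and flips.
    datagen = ImageDataGenerator(
        rotation_range=15,
        width_shift_range=0.1,
        height_shift_range=0.1,
        horizontal_flip=True)
    return datagen.flow(X_train, y_train, batch_size=batch_size), (X_val, y_val)
```

The generator yields a fresh, randomly perturbed batch each iteration, so the model rarely sees the exact same pixels twice, which helps reduce overfitting on a small dataset.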
The next step is to generate matplotlib plots of the training curves and read the test data.
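The plotting step might look like this sketch, assuming a Keras `History.history` dictionary with `loss`/`val_loss` and `accuracy`/`val_accuracy` keys (older Keras versions use `acc`/`val_acc` instead):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, suitable for a Kaggle kernel
import matplotlib.pyplot as plt

def plot_history(history_dict, out_path='training_curves.png'):
    """Plot loss and accuracy curves side by side and save to a PNG."""
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(history_dict['loss'], label='train')
    ax1.plot(history_dict['val_loss'], label='validation')
    ax1.set_title('Loss')
    ax1.set_xlabel('epoch')
    ax1.legend()
    ax2.plot(history_dict['accuracy'], label='train')
    ax2.plot(history_dict['val_accuracy'], label='validation')
    ax2.set_title('Accuracy')
    ax2.set_xlabel('epoch')
    ax2.legend()
    fig.savefig(out_path)
    plt.close(fig)
    return out_path
```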
The output of this is shown below:
The next step is to create the CSV submission file for the test data and upload it to the competition.
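Writing the submission could be sketched as follows, assuming pandas and the competition's two-column format of `file` (the test image filename) and `species` (the predicted class name):

```python
import pandas as pd

def write_submission(filenames, predicted_species, out_path='submission.csv'):
    """Write the two-column CSV the competition expects.

    filenames: test image filenames; predicted_species: class-name strings
    (e.g. from id_to_species applied to the model's argmax predictions).
    """
    df = pd.DataFrame({'file': filenames, 'species': predicted_species})
    df.to_csv(out_path, index=False)
    return out_path
```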
A copy of this blog is posted here