Building a Convolutional Neural Network Using TensorFlow and the Keras API
TensorFlow is an open-source machine learning framework developed by Google. It provides a comprehensive set of tools, libraries, and resources for building and deploying machine learning models. TensorFlow is known for its flexibility and scalability, supporting various types of neural networks and distributed computing.
Keras, on the other hand, is a high-level neural-networks API that runs on top of TensorFlow (earlier versions also supported other backend engines such as Theano and CNTK). It simplifies the process of building and training deep learning models by providing a user-friendly and intuitive interface. Keras allows users to define and configure complex neural network architectures with just a few lines of code.
Together, TensorFlow and Keras form a powerful combination for developing deep learning models, offering flexibility, scalability, and ease of use to researchers and developers in the field of machine learning.
Multi-Class Image Classification:
TensorFlow, in combination with CNNs, excels in multi-class image classification, where images need to be assigned to one of several predefined classes. By leveraging the power of deep learning, CNNs can handle complex visual data and recognize patterns that may be challenging for traditional machine learning algorithms. TensorFlow's ability to handle large datasets and distributed training further enhances its suitability for multi-class image classification tasks.
Let us go deeper into image classification by working through a multi-class dataset:
Dataset:
The dataset is taken from Kaggle. The link to the dataset is provided here: Intel Image Classification | Kaggle
Libraries:
- We import the necessary libraries and modules for data manipulation, visualization, machine learning, and image processing.
- These imports set up the environment for using TensorFlow and Keras to build deep learning models.
- Additional modules are imported for data preprocessing, shuffling, and evaluation.
- Finally, we configure the display settings for plots and visualizations.
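The steps above can be sketched as follows. The exact import set in the original notebook is not shown, so this is a representative list for a pipeline like this one (the scikit-learn utilities cover the shuffling and evaluation steps mentioned above):

```python
# Representative imports for this pipeline (assumed; the article's exact list is not shown).
import os
import random

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.utils import shuffle                 # data shuffling
from sklearn.metrics import classification_report  # evaluation

# Display settings for plots and visualizations
plt.rcParams["figure.figsize"] = (10, 8)
```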
Here we are loading and processing the training images and their corresponding labels for further analysis and model training.
Here we are creating a 5x5 grid of subplots for visualizing a random selection of images from the training dataset. Each subplot displays an image along with its corresponding class label. The purpose is to get a visual overview of the data and verify that the images and labels are correctly loaded and associated with each other.
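One way to build that 5x5 grid; the helper name `show_grid` is illustrative, and the class names are assumed from the Intel dataset:

```python
import random

import matplotlib.pyplot as plt
import numpy as np

class_names = ["buildings", "forest", "glacier", "mountain", "sea", "street"]

def show_grid(images, labels, n=5):
    """Display an n x n grid of randomly chosen images with their class labels."""
    fig, axes = plt.subplots(n, n, figsize=(12, 12))
    for ax in axes.ravel():
        i = random.randrange(len(images))   # pick a random sample
        ax.imshow(images[i])
        ax.set_title(class_names[labels[i]])
        ax.axis("off")                      # hide axis ticks for a cleaner grid
    plt.tight_layout()
    plt.show()
```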
CNN MODEL:
Here we define a sequential model using the Keras API.
The model consists of several convolutional layers followed by max pooling layers to extract features from the input images. The img_size variable represents the desired size of the input images.
The model starts with a 3x3 convolutional layer with 32 filters, followed by a max pooling layer. This pattern is repeated with increasing numbers of filters (64, 128, 256) to capture more complex features at each layer. The strides parameter in the last convolutional layer is set to 2 to reduce the spatial dimensions.
The flattened output from the last max pooling layer is fed into a fully connected layer with 256 units and ReLU activation to further process the extracted features. Finally, a dense layer with 6 units and softmax activation is added for multi-class classification, representing the 6 output classes.
This architecture is commonly used for image classification tasks, where the convolutional layers learn hierarchical representations of the images and the fully connected layers perform the classification based on the learned features.
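A sketch of the architecture described above, assuming an input size of 150x150x3 (the Intel dataset's native resolution) and default pooling of 2x2:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

img_size = 150  # assumed input size for the Intel images

model = models.Sequential([
    # Convolution + pooling blocks with 32, 64, 128, then 256 filters
    layers.Conv2D(32, (3, 3), activation="relu",
                  input_shape=(img_size, img_size, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(256, (3, 3), strides=2, activation="relu"),  # strides=2 shrinks spatial dims
    layers.MaxPooling2D((2, 2)),
    # Classification head
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(6, activation="softmax"),  # 6 output classes
])

model.summary()
```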
To compile the model, we specify the optimizer, loss function, and metrics for evaluation: the Adam optimizer, categorical cross-entropy as the loss function (suitable for multi-class classification), and accuracy as the metric to measure the model's performance during training and evaluation.
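The compile step might look like this (a small stand-in model is built here so the snippet is self-contained; in the article it would be the CNN defined above):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Stand-in model so this snippet runs on its own; the compile settings are what matter.
model = models.Sequential([
    layers.Input(shape=(150, 150, 3)),
    layers.Flatten(),
    layers.Dense(6, activation="softmax"),
])

model.compile(
    optimizer="adam",                   # Adam optimizer
    loss="categorical_crossentropy",    # multi-class loss; labels must be one-hot encoded
    metrics=["accuracy"],               # track accuracy during training and evaluation
)
```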
Here, we train the compiled model using the provided training data for a specified number of epochs. The validation data is used to evaluate the model's performance during training. The callbacks, such as checkpoints and early stopping, help in saving the best model weights and stopping training early if performance stops improving, respectively.
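The checkpoint and early-stopping callbacks could be set up as below; the filename, `patience` value, epoch count, and variable names in the commented `fit` call are illustrative, since the article does not show its exact settings:

```python
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

callbacks = [
    # Save only the weights that achieve the best validation loss so far
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
    # Stop training if validation loss stops improving for 5 epochs
    EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
]

# Hypothetical training call (data variables are placeholders):
# history = model.fit(
#     train_images, train_labels_onehot,
#     validation_data=(val_images, val_labels_onehot),
#     epochs=30, batch_size=32,
#     callbacks=callbacks,
# )
```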
Plotting the Loss Curves:
Here we plot the training and validation loss curves during the model training process. The loss values from the history object, which stores the training history, are used to plot the curves. The loss and val_loss represent the training and validation loss values, respectively, over each epoch. The resulting plot provides insights into the model's convergence and potential overfitting or underfitting. The plt functions are used to configure the plot's title, axes labels, legend, and display the plot.
Here we plot the training and validation accuracy curves during the model training process. The accuracy values from the history object are used to plot the curves, providing insights into the model's performance over each epoch.
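Both curve plots can be produced from the `History` object returned by `model.fit`; the helper name `plot_history` is illustrative:

```python
import matplotlib.pyplot as plt

def plot_history(history):
    """Plot training/validation loss and accuracy from a Keras History object."""
    epochs = range(1, len(history.history["loss"]) + 1)

    plt.figure(figsize=(12, 4))

    plt.subplot(1, 2, 1)
    plt.plot(epochs, history.history["loss"], label="loss")
    plt.plot(epochs, history.history["val_loss"], label="val_loss")
    plt.title("Loss Curves")
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.legend()

    plt.subplot(1, 2, 2)
    plt.plot(epochs, history.history["accuracy"], label="accuracy")
    plt.plot(epochs, history.history["val_accuracy"], label="val_accuracy")
    plt.title("Accuracy Curves")
    plt.xlabel("Epoch")
    plt.ylabel("Accuracy")
    plt.legend()

    plt.show()
```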
Do Check Out:
Our product AIEnsured offers explainability and many more techniques.
To know more about explainability and AI-related articles, please visit this link.
References:
Image classification | TensorFlow Core
Transfer learning and fine-tuning | TensorFlow Core
Rushitha