Transfer Learning for Image Classification with TensorFlow

By: vishwesh

Image classification is an important task in computer vision, which involves assigning a label to an image based on its content. Deep learning models have achieved state-of-the-art performance on this task, but training them from scratch requires a large amount of labeled data and computing resources. Transfer learning is a technique that leverages pre-trained models to solve new tasks with smaller data sets and less computation.

In this article, we will introduce transfer learning for image classification with TensorFlow, a popular deep learning framework. We will cover the following topics:

  • What is transfer learning and why is it useful?
  • How to use pre-trained models in TensorFlow
  • Fine-tuning a pre-trained model for a new task
  • Evaluating and testing the transfer learning model

What is Transfer Learning and Why is it Useful?

Transfer learning is a technique in deep learning that uses a pre-trained model as the starting point for a new task. The pre-trained model has already learned general-purpose features from a large dataset, and those features can be extracted from new data and fed into a small model trained on a much smaller dataset for the new task. Transfer learning is useful because it saves training time and computing resources, and it often improves the performance of the new model by reusing the knowledge captured in the pre-trained weights.
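
To make the idea concrete, here is a minimal sketch of the pattern using tf.keras.applications.MobileNetV2 as the frozen base; this is an alternative to the TensorFlow Hub approach used in the rest of this article, and the 5-class head is just a placeholder:

import tensorflow as tf

# Minimal transfer-learning sketch: a frozen pre-trained base plus a new head.
# MobileNetV2 from tf.keras.applications is used here; the examples below use
# TensorFlow Hub instead. The 5-class head is a placeholder.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,       # drop the original ImageNet classifier
    weights="imagenet",      # reuse the features learned on ImageNet
    pooling="avg",           # global average pooling -> feature vector
)
base.trainable = False       # freeze the pre-trained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),  # new task-specific head
])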

How to Use Pre-trained Models in TensorFlow

TensorFlow provides a library of pre-trained models called TensorFlow Hub. These models are trained on large datasets such as ImageNet and are available for download and use in your own projects. To use a pre-trained model in TensorFlow, you can use the hub.KerasLayer class, which wraps the pre-trained model as a Keras layer.

Here is an example of using a pre-trained model for image classification:

import tensorflow_hub as hub
import tensorflow as tf

# Load a pre-trained feature extractor from TensorFlow Hub
# (hub.KerasLayer layers are non-trainable by default)
model_url = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/5"
feature_extractor = hub.KerasLayer(model_url, input_shape=(224, 224, 3))

# Build a new model for image classification
# (the 1000-unit head matches the 1000 ImageNet classes)
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(1000, activation="softmax")
])

# Compile the model
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

In this example, we load the MobileNetV2 feature-vector model from TensorFlow Hub and use it as a feature extractor. We then build a new model for image classification by adding a dense layer with 1000 units (one per ImageNet class) and a softmax activation function. Finally, we compile the model with the Adam optimizer and the categorical cross-entropy loss function.
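
As a quick sanity check, you can run a random batch through the model to confirm the input and output shapes; the dummy batch below is just a placeholder, not a real image:

import numpy as np

# Run a random (placeholder) batch through the model to verify shapes
dummy_batch = np.random.rand(1, 224, 224, 3).astype("float32")
print(model(dummy_batch).shape)  # expected: (1, 1000)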

Fine-tuning a Pre-trained Model for a New Task

In some cases, the pre-trained model may not be directly applicable to the new task. For example, the pre-trained model may have been trained on a different set of classes than the new task. In these cases, we can fine-tune the pre-trained model by retraining some or all of the layers on the new data set.

Here is an example of fine-tuning a pre-trained model for a new task:

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds

# Load the new dataset; cats_vs_dogs ships with only a "train" split,
# so we carve out our own 80/20 train/test split
(train_dataset, test_dataset), info = tfds.load(
    "cats_vs_dogs",
    split=["train[:80%]", "train[80%:]"],
    as_supervised=True,  # yields (image, label) tuples
    with_info=True,
)

# Preprocess the data set
def preprocess(image, label):
    image = tf.image.resize(image, (224, 224))
    image = tf.cast(image, tf.float32) / 255.0
    label = tf.one_hot(label, 2)
    return image, label

train_dataset = train_dataset.map(preprocess).shuffle(1000).batch(32)
test_dataset = test_dataset.map(preprocess).batch(32)

# Load a pre-trained model from TensorFlow Hub
model_url = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/5"
feature_extractor = hub.KerasLayer(model_url, input_shape=(224, 224, 3))

# Build a new model for fine-tuning
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(2, activation="softmax")
])

# Freeze the feature extractor so that only the new head is trained at first
feature_extractor.trainable = False

# Compile the model
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Train the new classification head on the new dataset (extractor frozen)
history = model.fit(train_dataset, epochs=5, validation_data=test_dataset)

# Unfreeze the feature extractor so its weights can be fine-tuned
feature_extractor.trainable = True

# Re-compile with a much lower learning rate so that fine-tuning
# does not destroy the pre-trained weights
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss="categorical_crossentropy", metrics=["accuracy"])

# Fine-tune the full model on the new dataset
history = model.fit(train_dataset, epochs=10, validation_data=test_dataset)

In this example, we load the cats vs. dogs dataset from TensorFlow Datasets; since it ships with only a single train split, we carve out an 80/20 train/test split ourselves. We then preprocess the data by resizing the images to 224x224, scaling the pixel values to the range [0, 1], and one-hot encoding the labels. We use the MobileNetV2 model from TensorFlow Hub as a feature extractor and add a dense layer with a softmax activation function for binary classification. We freeze the feature extractor layers, compile the model with the Adam optimizer and the categorical cross-entropy loss function, and train only the new classification head for 5 epochs. Finally, we unfreeze the feature extractor layers, re-compile the model with a much lower learning rate, and fine-tune the whole network for another 10 epochs.
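
It can also help to visualize how the fine-tuning run progressed; the sketch below assumes the history object returned by the second model.fit call above:

import matplotlib.pyplot as plt

# Plot training vs. validation accuracy from the fine-tuning run
plt.plot(history.history["accuracy"], label="train accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()
plt.show()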

Evaluating and Testing the Transfer Learning Model

After training the transfer learning model, we can evaluate its performance on the test set and use it for inference on new data.

Here is an example of evaluating and testing the transfer learning model:

import matplotlib.pyplot as plt
import numpy as np

# Evaluate the model on the test set
test_loss, test_accuracy = model.evaluate(test_dataset)
print(f"Test loss: {test_loss}, Test accuracy: {test_accuracy}")

# Make predictions on new data
image_url = "https://www.catster.com/wp-content/uploads/2018/05/A-gray-cat-closing-its-eyes-and-enjoying-the-sunshine.-Photography-by-StocksyUnited.jpg"
image = tf.keras.utils.get_file("image.jpg", image_url)
image = tf.keras.preprocessing.image.load_img(image, target_size=(224, 224))
image = tf.keras.preprocessing.image.img_to_array(image)
image = image / 255.0
image = np.expand_dims(image, axis=0)

prediction = model.predict(image)
class_names = ["cat", "dog"]
predicted_class = class_names[np.argmax(prediction)]
confidence = np.max(prediction)

plt.imshow(image[0])
plt.axis("off")
plt.title(f"Prediction: {predicted_class}, Confidence: {confidence:.2f}")
plt.show()

In this example, we evaluate the model on the test set and print the test loss and accuracy. We then download an image of a cat from the internet and preprocess it in the same way as the training and test data. We use the model's predict method to make a prediction, convert it to a class name and confidence score, and display the image with the predicted class and confidence in the title.
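
If you want to reuse the fine-tuned model later, you can export it. This is a minimal sketch assuming a TF 2.x/Keras 2 environment, where passing a plain directory path to model.save writes TensorFlow's SavedModel format; the path name is an assumption:

# Export the fine-tuned model as a SavedModel (directory name is an assumption;
# assumes TF 2.x / Keras 2, where a plain path selects the SavedModel format)
model.save("cats_vs_dogs_saved_model")

# Reload the model later for inference
reloaded = tf.keras.models.load_model("cats_vs_dogs_saved_model")
print(reloaded.predict(image))  # same preprocessed image as above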

Conclusion

Transfer learning is a powerful technique for image classification that allows us to leverage pre-trained models to solve new tasks with limited data. In this article, we have explored the basics of transfer learning for image classification with TensorFlow, including how to load pre-trained models from TensorFlow Hub, how to fine-tune them for new tasks, and how to evaluate and test the transfer learning model. By following these steps, even beginners can build accurate image classifiers with limited data.
