In this lesson, we guide you through using Docker to deploy a Flask application as part of a model deployment process. You will learn how to install Docker on an Ubuntu server, verify the installation, build a container image for your Flask app, run the container, test the application, and finally push the image to Docker Hub.
Docker runs on Linux, Windows, and macOS, but in this example we focus on an Ubuntu server. The following instructions are adapted from the official Docker documentation.
Before starting the installation, update your package index to ensure you have the latest information from the repositories.
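The commands below sketch the apt-based installation, adapted from the official Docker documentation for Ubuntu; package names and repository setup can change over time, so check docs.docker.com for the current steps before running them.

# Update the package index and install prerequisites
sudo apt-get update
sudo apt-get install ca-certificates curl

# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the Docker apt repository for your Ubuntu release
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine and related tools
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Optional: allow your user to run docker without sudo
# (log out and back in for the group change to take effect)
sudo usermod -aG docker $USER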
To confirm that Docker has been installed correctly, check the version:
docker version
Next, run Docker’s official “Hello World” container to verify that both the client and daemon are functioning properly:
docker run hello-world
A successful run will produce output like:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c1ec31eb5944: Pull complete
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
The Dockerfile below uses a slim official Python image and installs the CPU version of PyTorch to minimize the image size. It then installs Python dependencies, copies your Flask app, and sets up a non-root user for enhanced security.
# Use the official Python base image
FROM python:3.11-slim

# Set the working directory
WORKDIR /opt/app

# Install CPU version of PyTorch
RUN pip install torch==2.4.1 --index-url https://download.pytorch.org/whl/cpu

# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the Flask app code
COPY ./flask_app .

# Create a user and group for running the app
RUN groupadd -r pytorch && useradd --no-log-init -r -g pytorch pytorch

# Change ownership of the app directory
RUN chown -R pytorch:pytorch /opt/app

# Switch to the created user
USER pytorch

# Expose the port that our Flask app is listening on
EXPOSE 8000

# Command to run the Flask app
CMD ["flask", "run", "--host=0.0.0.0", "--port=8000"]
Place this Dockerfile in the same directory as your requirements.txt file and your Flask app. Then, build your Docker image with the following command:
docker build -t mobilenetv3lg-flask:v1.0 .
During the build process, you will see steps that include loading the base image, copying files, installing Python packages via pip, and setting up a non-root user. After the build completes, verify the new image with:
docker images
A sample output might look like this:
REPOSITORY            TAG      IMAGE ID       CREATED              SIZE
mobilenetv3lg-flask   v1.0     1734db15a849   About a minute ago   5.39GB
hello-world           latest   d2c94e258dc9   20 months ago        13.3kB
The 5.39GB image shown above was built before optimization; after rebuilding so that only the CPU dependencies are included, as in the Dockerfile above, you should see a substantial reduction in image size (here, from 5.39GB to 1.34GB).
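If you want to see where the remaining space goes, Docker's built-in history command breaks an image down layer by layer, which is useful when hunting for further size savings:

# Show the size contributed by each layer of the image
docker history mobilenetv3lg-flask:v1.0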
Now that your Docker image is ready, you can run it as a container. The command below maps port 8000 in the container to port 8000 on your local machine:
docker run -p 8000:8000 mobilenetv3lg-flask:v1.0
When the container starts, you may see log messages similar to the following:
2025-01-16 19:05:41,363 - INFO - Loading MobileNetV3 Large pre-trained model...
Downloading: "https://download.pytorch.org/models/mobilenet_v3_large-5c1a4163.pth" to /home/pytorch/.cache/torch/hub/checkpoints/mobilenet_v3_large-5c1a4163.pth
100.0%
2025-01-16 19:05:41,827 - INFO - Model loaded successfully.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8000
...
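When the container runs in the foreground like this, pressing Ctrl+C stops it. While iterating locally, Docker's standard --rm flag is also handy, as it removes the container automatically when it exits:

# Run in the foreground and clean up the container on exit
docker run --rm -p 8000:8000 mobilenetv3lg-flask:v1.0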
For deploying the container in detached mode (running in the background), use the -d flag:
docker run -d -p 8000:8000 mobilenetv3lg-flask:v1.0
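Once the container is running in the background, you can manage it with standard Docker commands; replace the placeholder below with the container ID or name that docker ps reports:

# List running containers and note the container ID or name
docker ps

# Follow the application logs
docker logs -f <container-id>

# Stop the container when you are done
docker stop <container-id>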
With your container running, test the Flask endpoint by sending a POST request with an image payload. The following Python script encodes an image in Base64, constructs a JSON payload, and sends it to the /predict endpoint:
import requests
import base64

# Open the image file and encode it to Base64
with open('dog-1.jpg', 'rb') as img_file:
    base64_string = base64.b64encode(img_file.read()).decode('utf-8')

# Construct the JSON payload
payload = {"image": base64_string}

# Specify the headers
headers = {"Content-Type": "application/json"}

# Send the POST request
response = requests.post("http://127.0.0.1:8000/predict", headers=headers, json=payload)

# Print the response from the server
print("Response JSON:", response.json())
A successful response might show:
Response JSON: {'prediction': 207}
This indicates that the inference request was processed correctly by your Flask application; in the standard ImageNet class index, label 207 corresponds to "golden retriever", which matches the dog image we sent.
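The final step is to publish the image to Docker Hub so it can be pulled from other environments. The sketch below assumes you have a Docker Hub account; replace the <your-dockerhub-username> placeholder with your own username:

# Log in to Docker Hub (you will be prompted for credentials)
docker login

# Tag the local image with your Docker Hub namespace
docker tag mobilenetv3lg-flask:v1.0 <your-dockerhub-username>/mobilenetv3lg-flask:v1.0

# Push the tagged image to Docker Hub
docker push <your-dockerhub-username>/mobilenetv3lg-flask:v1.0

Once the push completes, anyone with access to the repository can pull and run the image with docker run -p 8000:8000 <your-dockerhub-username>/mobilenetv3lg-flask:v1.0.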
To recap, in this lesson you learned how to:

- Build a Docker image using a Dockerfile that packages a Flask application along with CPU-based PyTorch dependencies.
- Run a Docker container and verify its operation by sending a test inference request.
- Tag and push the Docker image to Docker Hub for easy distribution and deployment.
With your Docker image now hosted on Docker Hub, you can seamlessly deploy and share your application across different environments. Next, we will explore containerizing a best-trained model in a lab exercise. Happy containerizing!