In this guide, we explore how to perform integration testing on an AWS EC2 instance by dynamically retrieving the instance’s public IP address or DNS name. This approach removes the need for hard-coded URLs in your Jenkins pipeline, ensuring a more flexible and secure deployment process.
Previously, our Docker image was deployed to an Amazon Elastic Compute Cloud (EC2) instance. Now, we improve upon that setup by integrating dynamic fetching of instance details using the AWS CLI. This enables our Jenkins pipeline to extract the correct endpoint and validate our application’s responsiveness through automated tests.
Before proceeding with dynamic integration testing, you might want to confirm that your Docker container is running on the EC2 instance. Execute the following command on your instance:
```
ubuntu@ip-172-31-25-250:~$ sudo docker ps
CONTAINER ID   IMAGE                                                  PORTS                    NAMES
cab883634d99   siddharth67/solar-system:537efda2bdf4113ff4f77c5ecaf   0.0.0.0:3000->3000/tcp   solar-system
```
This output shows that the container is active and its port mapping is correct.
To enhance this process, we now use a shell script that fetches EC2 instance details dynamically. This adjustment not only adds flexibility but also minimizes manual errors.
AWS CLI Execution
The script begins by running the aws ec2 describe-instances command to retrieve details about your EC2 instances.
Parsing Instance Information
It then filters for the EC2 instance tagged as "dev-deploy" using jq to extract either the public IP address or DNS name.
Endpoint Testing
Once the URL is identified, two tests follow:
A GET request to /live verifies service availability.
A POST request to /planet with JSON data ({"id": "3"}) retrieves planet data.
Validation
The script examines whether the HTTP status code is 200 and if the retrieved planet name is "Earth". If both conditions are met, the tests pass; otherwise, the script exits with an error.
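The steps above can be sketched as a bash script. The script name integration-testing-ec2.sh, the "dev-deploy" tag, port 3000, and the /live and /planet endpoints come from this guide; the exact jq filter and the assumption that "dev-deploy" is the instance's Name tag are illustrative.

```shell
#!/bin/bash
# integration-testing-ec2.sh (sketch) -- endpoint names and the "dev-deploy"
# tag are from this guide; the jq filter details are assumptions.
set -euo pipefail

# Steps 1 & 2: query AWS and extract the public DNS name (or, if it is
# empty, the public IP) of the instance tagged "dev-deploy".
fetch_instance_url() {
    aws ec2 describe-instances --output json | jq -r '
        .Reservations[].Instances[]
        | select(any(.Tags[]?; .Key == "Name" and .Value == "dev-deploy"))
        | if .PublicDnsName != "" then .PublicDnsName else .PublicIpAddress end'
}

# Steps 3 & 4: hit the two endpoints and validate the responses.
run_tests() {
    local url="$1"

    # A GET request to /live must return HTTP 200.
    local live_code
    live_code=$(curl -s -o /dev/null -w '%{http_code}' "http://${url}:3000/live")
    if [[ "$live_code" != "200" ]]; then
        echo "Liveness check failed (HTTP ${live_code})" >&2
        exit 1
    fi

    # A POST request to /planet with {"id": "3"} must return the planet "Earth".
    local planet_name
    planet_name=$(curl -s -H 'Content-Type: application/json' \
        -d '{"id": "3"}' "http://${url}:3000/planet" | jq -r '.name')
    if [[ "$planet_name" != "Earth" ]]; then
        echo "Planet check failed (got: ${planet_name})" >&2
        exit 1
    fi

    echo "Integration tests passed"
}
```

The full script would end with run_tests "$(fetch_instance_url)"; that call is left out here so the two functions can be read (and exercised) independently.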
Below is a representative JSON response from the describe-instances command. Although the response contains multiple details, only the public DNS or public IP is used in our script:
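A trimmed, illustrative example is shown here (the instance ID, addresses, and tag values are placeholders; a real response contains many more fields):

```json
{
  "Reservations": [
    {
      "Instances": [
        {
          "InstanceId": "i-0abc123de456f7890",
          "PublicDnsName": "ec2-18-220-10-20.us-east-2.compute.amazonaws.com",
          "PublicIpAddress": "18.220.10.20",
          "State": { "Code": 16, "Name": "running" },
          "Tags": [
            { "Key": "Name", "Value": "dev-deploy" }
          ]
        }
      ]
    }
  ]
}
```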
Integrate the testing script into your Jenkins pipeline by adding a dedicated stage. Update your Jenkinsfile as follows:
```
stage('Integration Testing - AWS EC2') {
    when {
        branch 'feature/*'
    }
    steps {
        // Optionally print environment variables to verify branch details
        sh 'printenv | grep -i branch'
        // Use the AWS Pipeline Steps plugin to set AWS credentials and region
        withAWS(credentials: 'aws-s3-ec2-lambda-creds', region: 'us-east-2') {
            sh '''
                bash integration-testing-ec2.sh
            '''
        }
    }
}
```
Make sure that your Jenkins controller node has the necessary AWS credentials and that the AWS CLI is properly configured with the correct region. You can verify the installation by running aws --version.
When the Jenkins pipeline runs, it will build the Docker image, deploy it to the AWS EC2 instance, and execute the integration tests, with the results of each check visible in the console logs.
By dynamically fetching your AWS EC2 instance details using the AWS CLI and validating service endpoints via automated tests, you can significantly streamline your integration testing process. This setup enhances security by removing hard-coded URLs and ensures that your Jenkins pipeline accurately reflects the current state of your deployed environment.

Thank you for exploring this integration testing approach on AWS EC2.