Complete DevOps Tutorial with Usage Examples

Table of Contents

  1. What is DevOps?
  2. DevOps Principles & Culture (CALMS)
  3. DevOps Lifecycle
  4. Version Control (Git)
  5. Continuous Integration (CI)
  6. Continuous Delivery & Deployment (CD)
  7. Infrastructure as Code (IaC)
  8. Containerization (Docker)
  9. Container Orchestration (Kubernetes)
  10. Configuration Management (Ansible, Puppet, Chef)
  11. Monitoring & Logging
  12. Cloud Platforms (AWS, Azure, GCP)
  13. Security (DevSecOps)
  14. Testing in DevOps
  15. Best Practices
  16. DevOps Tools Ecosystem (Summary)

1. What is DevOps?

DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is a cultural and professional movement that stresses communication, collaboration, integration, and automation to improve the flow of work between software developers and IT operations professionals.

The core idea is to break down the silos between these traditionally separate teams, allowing for faster, more reliable software releases and a more efficient overall development process.

Key Goals of DevOps:

  • Speed: Shorten the path from idea to production release.
  • Reliability: Ship changes with consistent quality through automated testing and deployment.
  • Collaboration: Break down silos between development, operations, and other stakeholders.
  • Scale: Manage infrastructure and processes that grow with the application.
  • Continuous improvement: Use feedback and metrics to refine both product and process.

2. DevOps Principles & Culture (CALMS)

The success of DevOps is driven by its cultural principles, often summarized by the CALMS acronym:

  • Culture: Foster shared ownership and collaboration between development and operations.
  • Automation: Automate repetitive work across building, testing, deployment, and infrastructure.
  • Lean: Work in small batches, limit work in progress, and eliminate waste.
  • Measurement: Collect metrics on both the pipeline and the product to drive decisions.
  • Sharing: Share knowledge, tooling, and responsibility across teams.

3. DevOps Lifecycle

The DevOps lifecycle is a continuous loop, representing the iterative process of software delivery.

  1. Plan:
    • Define features, requirements, and project scope.
    • Tools: Jira, Azure Boards, Trello.
  2. Code:
    • Develop application code.
    • Tools: Git, GitHub, GitLab, Bitbucket, Azure Repos, VS Code.
  3. Build:
    • Compile source code, run unit tests, package artifacts.
    • Tools: Maven, Gradle, npm, Webpack, Jenkins, GitLab CI, Azure Pipelines, AWS CodeBuild.
  4. Test:
    • Automated testing (unit, integration, end-to-end, performance, security).
    • Tools: JUnit, Selenium, Jest, Cypress, JMeter, SonarQube.
  5. Release:
    • Prepare application for deployment.
    • Tools: Jenkins, GitLab CI, Azure Pipelines, Spinnaker, Octopus Deploy.
  6. Deploy:
    • Deploy applications to various environments (dev, staging, production).
    • Tools: Jenkins, GitLab CI, Azure Pipelines, Kubernetes, Docker, Ansible, Terraform.
  7. Operate:
    • Manage and maintain the deployed application in production.
    • Tools: Kubernetes, Docker, Cloud Platforms (AWS, Azure, GCP).
  8. Monitor:
    • Monitor application performance, infrastructure health, and user experience.
    • Gather feedback and identify issues for the next iteration.
    • Tools: Prometheus, Grafana, ELK Stack, Splunk, Datadog, CloudWatch, Azure Monitor, Google Cloud Monitoring.

4. Version Control (Git)

Version Control Systems (VCS) are essential for managing changes to code, configuration files, and documentation. **Git** is the most widely used distributed VCS.

Key Concepts:

  • Repository: A project's complete history of commits.
  • Commit: A recorded snapshot of changes with a descriptive message.
  • Branch: An independent line of development.
  • Merge: Combining the changes from one branch into another.
  • Staging area (index): Where changes are collected before being committed.
  • Remote: A hosted copy of the repository (e.g., on GitHub or GitLab).

Usage Example: Basic Git Workflow:

# 1. Initialize a new Git repository with 'main' as the default branch (Git 2.28+)
git init -b main my_devops_project
cd my_devops_project

# 2. Create a file
echo "Hello, DevOps!" > README.md

# 3. Add file(s) to the staging area
git add README.md

# 4. Commit changes
git commit -m "Initial commit: Add README file"

# 5. Create a new branch for a feature
git checkout -b feature/new-dashboard

# 6. Make changes on the new branch
echo "Dashboard feature coming soon." >> README.md
git add README.md
git commit -m "Add dashboard placeholder"

# 7. Switch back to the main branch
git checkout main

# 8. Merge the feature branch into main
git merge feature/new-dashboard

# 9. (Optional) Delete the feature branch
git branch -d feature/new-dashboard

# 10. Connect to a remote repository (e.g., GitHub, GitLab, Azure Repos)
git remote add origin https://github.com/youruser/my_devops_project.git

# 11. Push changes to the remote repository
git push -u origin main

GitHub/GitLab/Bitbucket: These are popular web-based platforms for hosting Git repositories, providing collaboration features like Pull Requests, issue tracking, and integrated CI/CD.
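
In a team workflow, instead of merging locally as in step 8 above, you would typically push the feature branch and open a Pull Request for review. A sketch of that flow, assuming the GitHub CLI (`gh`) is installed and authenticated:

# Push the feature branch to the remote
git push -u origin feature/new-dashboard

# Open a pull request from the feature branch into main
gh pr create --base main --head feature/new-dashboard \
  --title "Add dashboard placeholder" \
  --body "Adds a placeholder for the upcoming dashboard feature."

# After review and approval, squash-merge and delete the branch
gh pr merge --squash --delete-branch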

5. Continuous Integration (CI)

Continuous Integration (CI) is a development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The goal is to detect integration errors early and frequently.

Key Aspects of CI:

  • Frequent commits: Developers integrate small changes at least daily.
  • Automated builds: Every push triggers a build in a clean environment.
  • Automated tests: Unit and integration tests run on every build.
  • Fast feedback: Broken builds are reported immediately and fixed promptly.

Common CI Tools:

  • Jenkins
  • GitLab CI
  • GitHub Actions
  • Azure Pipelines
  • AWS CodeBuild
  • CircleCI

Usage Example: Simple CI with Jenkins (Declarative Pipeline):

The `Jenkinsfile` is committed to the root of your Git repository.

// Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git 'https://github.com/your-org/your-repo.git' // Replace with your repository
            }
        }
        stage('Build') {
            steps {
                // Example for a Node.js project
                sh 'npm install'
                sh 'npm run build'
            }
        }
        stage('Test') {
            steps {
                // Example for Node.js (Jest)
                sh 'npm test'
                // Publish test results (requires JUnit plugin in Jenkins)
                junit '**/test-results/*.xml' 
            }
        }
        stage('Archive Artifacts') {
            steps {
                // Example for Node.js build output
                archiveArtifacts artifacts: 'dist/**/*', fingerprint: true 
            }
        }
    }
    post {
        always {
            echo "CI pipeline finished for build ${env.BUILD_NUMBER}"
        }
        failure {
            echo "CI pipeline failed!"
            // mail to: 'devs@example.com', subject: 'CI Build Failed!'
        }
    }
}

In Jenkins: Create a new "Pipeline" item, configure it with "Pipeline script from SCM" so it pulls the `Jenkinsfile` from your Git repository, and enable "Poll SCM" or a GitHub webhook trigger for automatic builds on push.
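
The same CI flow can be expressed in other tools. For comparison, a minimal GitHub Actions workflow for the same Node.js project might look like this (a sketch; the file lives at `.github/workflows/ci.yml` in the repository):

# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # Check out the repository
      - uses: actions/setup-node@v4      # Install Node.js on the runner
        with:
          node-version: 18
      - run: npm ci                      # Install dependencies from the lockfile
      - run: npm run build               # Build the project
      - run: npm test                    # Run the test suite
      - uses: actions/upload-artifact@v4 # Archive the build output
        with:
          name: dist
          path: dist/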

6. Continuous Delivery & Deployment (CD)

Continuous Delivery (CD) is a software engineering approach where teams produce software in short cycles, ensuring that the software can be reliably released at any time. It's an extension of CI, adding automated release and deployment steps.

Continuous Deployment (also abbreviated CD) goes one step further: every change that passes all automated tests is deployed to production automatically, with no manual approval gate. This requires a high degree of confidence in your automated testing and infrastructure.

Key Aspects of CD:

  • Deployment pipeline: Every change flows through automated build, test, and release stages.
  • Always releasable: The main branch is kept in a deployable state at all times.
  • Environment promotion: The same artifact is promoted through dev, staging, and production.
  • Approval gates: Continuous Delivery uses manual approvals; Continuous Deployment removes them.

Common CD Tools:

  • Jenkins
  • GitLab CI
  • Azure Pipelines
  • Spinnaker
  • Argo CD
  • Octopus Deploy

Usage Example: Extending the CI Pipeline for CD:

// Jenkinsfile (Continuous Delivery Pipeline)
pipeline {
    agent any
    stages {
        stage('Checkout') { /* ... same as CI ... */ }
        stage('Build') { /* ... same as CI ... */ }
        stage('Test') { /* ... same as CI ... */ }
        stage('Archive Artifacts') { /* ... same as CI ... */ }

        stage('Deploy to Staging') {
            steps {
                echo "Deploying application to staging environment..."
                // Example: Deploy to a server via SSH (requires SSH Agent plugin and credentials)
                // sshagent(['your-ssh-credential-id']) {
                //     sh "scp -o StrictHostKeyChecking=no target/*.jar user@staging-server:/var/www/html/app.jar"
                // }
                sh 'echo "Deployment to staging simulated."'
            }
        }

        stage('Manual Approval for Production') { // Manual gate for Continuous Delivery
            // 'input' is a stage-level directive in Declarative Pipeline, not a step
            input {
                message "Deployment to staging complete. Proceed to production?"
                ok "Deploy to Production"
                submitter "devops-team,qa-lead" // Users/groups who can approve
            }
            steps {
                echo "Approved for production deployment."
            }
        }

        stage('Deploy to Production') { // Only runs after manual approval
            steps {
                echo "Deploying application to production environment..."
                sh 'echo "Deployment to production simulated."'
            }
            post {
                success {
                    echo 'Production Deployment successful!'
                }
                failure {
                    echo 'Production Deployment FAILED!'
                }
            }
        }
    }
    post { /* ... cleanup and notifications ... */ }
}
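
The same manual gate can be expressed in other CD tools. In GitLab CI, for instance, it becomes `when: manual` on the production job. A minimal sketch (`.gitlab-ci.yml`), using the same simulated deploy steps:

# .gitlab-ci.yml
stages:
  - deploy-staging
  - deploy-production

deploy_staging:
  stage: deploy-staging
  script:
    - echo "Deployment to staging simulated."

deploy_production:
  stage: deploy-production
  script:
    - echo "Deployment to production simulated."
  when: manual            # A human must trigger this job in the GitLab UI
  environment: production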

7. Infrastructure as Code (IaC)

Infrastructure as Code (IaC) is the practice of managing and provisioning computing infrastructure (e.g., networks, virtual machines, load balancers) using configuration files rather than manual hardware configuration or interactive configuration tools.

Key Benefits:

  • Consistency: Identical environments every time, eliminating configuration drift.
  • Versioning: Infrastructure definitions live in Git and can be reviewed and rolled back.
  • Speed: Entire environments can be provisioned or torn down in minutes.
  • Self-documentation: The code itself describes what is deployed.

Common IaC Tools:

  • Terraform (declarative, multi-cloud)
  • AWS CloudFormation (AWS-native)
  • Azure Resource Manager / Bicep (Azure-native)
  • Pulumi (uses general-purpose programming languages)
  • Ansible (also used for provisioning)

Usage Example: Simple EC2 Instance with Terraform:

Create a file named `main.tf` in a new directory.

# main.tf
# Configure the AWS provider
provider "aws" {
  region = "us-east-1" # N. Virginia
}

# Create a VPC
resource "aws_vpc" "my_vpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "my-devops-vpc"
  }
}

# Create a public subnet
resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
  tags = {
    Name = "my-devops-public-subnet"
  }
}

# Create an Internet Gateway
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.my_vpc.id
  tags = {
    Name = "my-devops-igw"
  }
}

# Create a route table for public subnet
resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.my_vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }
  tags = {
    Name = "my-devops-public-rt"
  }
}

# Associate public route table with public subnet
resource "aws_route_table_association" "public_rt_assoc" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.public_route_table.id
}

# Create a security group to allow SSH and HTTP traffic
resource "aws_security_group" "web_ssh_sg" {
  name        = "web_ssh_sg"
  description = "Allow web and SSH traffic"
  vpc_id      = aws_vpc.my_vpc.id

  ingress {
    description = "SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # All protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "my-devops-web-ssh-sg"
  }
}

# Create an EC2 instance
resource "aws_instance" "web_server" {
  ami           = "ami-0abcdef1234567890" # Replace with a valid Amazon Linux 2 AMI for us-east-1
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.public_subnet.id
  vpc_security_group_ids = [aws_security_group.web_ssh_sg.id] # Use SG IDs when launching into a VPC
  key_name      = "your-ssh-key-name" # Replace with your existing SSH key pair name
  user_data     = <<-EOF
              #!/bin/bash
              sudo yum update -y
              sudo yum install -y httpd
              sudo systemctl start httpd
              sudo systemctl enable httpd
              echo "Hello from Terraform!" | sudo tee /var/www/html/index.html
              EOF

  tags = {
    Name = "MyTerraformWebServer"
  }
}

# Output the public IP address of the web server
output "web_server_public_ip" {
  value = aws_instance.web_server.public_ip
}

Execute Terraform:

terraform init     # Initialize Terraform in the directory
terraform plan     # Show what changes will be made (plan)
terraform apply    # Apply the changes to create resources (type 'yes' to confirm)
terraform destroy  # Delete all resources defined in the configuration (type 'yes' to confirm)

Caution: When using IaC tools like Terraform, `terraform apply` creates real cloud resources that incur costs. Always run `terraform plan` first to understand the impact, and run `terraform destroy` to clean up once you're done experimenting.
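
To avoid hard-coding values such as the AMI ID and key pair name (see the `# Replace with ...` comments above), you can promote them to input variables. A minimal sketch; the variable names here are illustrative:

# variables.tf
variable "ami_id" {
  description = "AMI ID for the web server (region-specific)"
  type        = string
}

variable "key_name" {
  description = "Name of an existing EC2 key pair"
  type        = string
}

# In main.tf, reference them as:
#   ami      = var.ami_id
#   key_name = var.key_name

# Supply values at apply time, or in a terraform.tfvars file:
#   terraform apply -var="ami_id=ami-..." -var="key_name=your-key"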

8. Containerization (Docker)

Containerization packages an application and all its dependencies into a single, isolated unit that can run consistently across different environments.

Key Concepts:

  • Image: A read-only template containing the application and all its dependencies.
  • Container: A running, isolated instance of an image.
  • Dockerfile: A text file of instructions for building an image.
  • Registry: A store for sharing images (e.g., Docker Hub, Amazon ECR).
  • Volume: Persistent storage that outlives any single container.

Usage Example: Dockerizing a Simple Node.js App:

Create a directory (e.g., `my-node-app`) with these files:

# my-node-app/app.js
const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Hello from Dockerized Node.js App!');
});

app.listen(port, () => {
  console.log(`App listening at http://localhost:${port}`);
});

# my-node-app/package.json
{
  "name": "my-node-app",
  "version": "1.0.0",
  "description": "A simple Node.js app",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}

# my-node-app/Dockerfile
# Base image with Node.js
FROM node:18-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy package.json and package-lock.json
COPY package*.json ./
# Install Node.js dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Document that the app listens on port 3000
EXPOSE 3000
# Command to run the application
CMD ["npm", "start"]

# Build the Docker image (from the my-node-app directory)
docker build -t my-node-app:1.0 .

# Run the Docker container, mapping host port 80 to container port 3000
docker run -d -p 80:3000 --name node-web-app my-node-app:1.0

# Access the app in your browser: http://localhost/

Docker Compose Example (`compose.yaml`):

# my-node-app/compose.yaml
services:
  web:
    build: .
    ports:
      - "80:3000"
    volumes:
      - ./app.js:/app/app.js # Mount app.js so host edits appear in the container (pair with nodemon for auto-reload)
    networks:
      - app-network
networks:
  app-network:
    driver: bridge

# Run with Docker Compose (from the my-node-app directory)
docker compose up -d

# Access the app at http://localhost/
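
A few follow-up commands for working with the running Compose stack:

# Tail the logs of the 'web' service
docker compose logs -f web

# List the containers managed by this Compose project
docker compose ps

# Stop and remove the containers and networks (add -v to also remove volumes)
docker compose down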

9. Container Orchestration (Kubernetes)

Container Orchestration automates the deployment, scaling, networking, and management of containerized applications.

Key Concepts:

  • Cluster: A set of machines (nodes) managed by the Kubernetes control plane.
  • Node: A worker machine that runs containers.
  • Pod: The smallest deployable unit; one or more tightly coupled containers.
  • Deployment: Declares the desired number of Pod replicas and manages rolling updates.
  • Service: A stable network endpoint that load-balances traffic across Pods.
  • Namespace: A virtual partition of the cluster for isolating resources.

Usage Example: Deploying a simple Nginx app on Kubernetes (using `kubectl`):

This assumes you have a Kubernetes cluster running (e.g., Minikube, Docker Desktop Kubernetes, GKE, EKS, AKS) and `kubectl` configured.

Create a file named `nginx-deployment.yaml`:

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3 # Desired number of Nginx instances
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest # Official Nginx image (pin a specific tag in production)
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer # Exposes the service externally (Cloud provider specific)
                     # For Minikube/Docker Desktop, use NodePort instead if LoadBalancer isn't available
                     # type: NodePort

# Apply the deployment and service
kubectl apply -f nginx-deployment.yaml

# Check deployment status
kubectl get deployment nginx-deployment
kubectl get pods -l app=nginx

# Get service status and external IP/port
kubectl get service nginx-service
# If type is LoadBalancer, wait for EXTERNAL-IP to be assigned.
# If type is NodePort, use 'minikube service nginx-service --url' or similar to get URL.

# Access Nginx via the assigned external IP/Port in your browser.

# Delete the deployment and service
kubectl delete -f nginx-deployment.yaml
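
Orchestration pays off in day-2 operations: scaling and rolling updates are single commands. For example:

# Scale the deployment from 3 to 5 replicas
kubectl scale deployment nginx-deployment --replicas=5

# Roll out a new image version (triggers a rolling update)
kubectl set image deployment/nginx-deployment nginx=nginx:1.27

# Watch the rollout, and roll back if something goes wrong
kubectl rollout status deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment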

10. Configuration Management (Ansible, Puppet, Chef)

Configuration Management automates the consistent setup, management, and deployment of software and configurations on multiple servers. It ensures that systems are configured to a desired state.

Key Benefits:

  • Consistency: Every server is configured the same way, eliminating configuration drift.
  • Repeatability: The same playbook or manifest can rebuild a server from scratch.
  • Idempotency: Applying the same configuration twice leaves the system unchanged.
  • Auditability: Configuration lives in version control alongside application code.

Common Tools:

  • Ansible (agentless, push-based, YAML playbooks)
  • Puppet (agent-based, declarative manifests)
  • Chef (agent-based, Ruby-based recipes)
  • SaltStack (agent or agentless, event-driven)

Usage Example: Simple Web Server Setup with Ansible:

This assumes Ansible is installed on your control machine and SSH access to target servers.

Create `inventory.ini`:

# inventory.ini
[webservers]
web1 ansible_host=your_web_server_ip_1
web2 ansible_host=your_web_server_ip_2
# Add ansible_user=your_ssh_user if different from current user
# Add ansible_ssh_private_key_file=/path/to/your/key.pem if using SSH keys

Create `apache_playbook.yaml`:

# apache_playbook.yaml
---
- name: Configure Apache Web Server
  hosts: webservers # Apply to hosts in the 'webservers' group from inventory
  become: yes       # Run tasks with sudo/root privileges

  tasks:
    - name: Ensure Apache (httpd) is installed
      ansible.builtin.yum: # For Red Hat-based systems (CentOS, Fedora)
        name: httpd
        state: present
      # For Debian-based (Ubuntu): use 'ansible.builtin.apt' instead
      # ansible.builtin.apt:
      #   name: apache2
      #   state: present

    - name: Ensure Apache service is running and enabled on boot
      ansible.builtin.systemd:
        name: httpd # or apache2 for Ubuntu
        state: started
        enabled: yes

    - name: Create index.html for web server
      ansible.builtin.copy:
        content: "Hello from Ansible on {{ inventory_hostname }}!" # Use Jinja2 templating
        dest: /var/www/html/index.html
        owner: root
        group: root
        mode: '0644'

    - name: Ensure firewalld allows HTTP service (Fedora/CentOS)
      ansible.posix.firewalld:
        service: http
        permanent: true
        state: enabled
      # For Ubuntu (ufw):
      # community.general.ufw:
      #   rule: allow
      #   name: http
      #   state: enabled

    - name: Reload firewalld to apply changes
      ansible.builtin.systemd:
        name: firewalld
        state: reloaded
      # For Ubuntu (ufw):
      # community.general.ufw:
      #   state: reloaded

# Run the playbook
ansible-playbook -i inventory.ini apache_playbook.yaml
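
Two useful companions: an ad-hoc ping to verify SSH connectivity, and check mode to preview changes without applying them:

# Verify connectivity to every host in the 'webservers' group
ansible -i inventory.ini webservers -m ansible.builtin.ping

# Dry-run the playbook without changing anything (check mode)
ansible-playbook -i inventory.ini apache_playbook.yaml --check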

11. Monitoring & Logging

Monitoring and logging are essential for understanding application performance and infrastructure health, and for troubleshooting issues in a DevOps environment.

Key Concepts:

  • Metrics: Numeric time-series data (CPU, memory, request rate, latency).
  • Logs: Timestamped event records from applications and infrastructure.
  • Traces: End-to-end records of requests as they cross services.
  • Alerting: Automated notifications when metrics cross defined thresholds.
  • Observability: Combining metrics, logs, and traces to understand system behavior.

Common Tools:

  • Prometheus + Grafana (metrics collection and dashboards)
  • ELK Stack (Elasticsearch, Logstash, Kibana) for log aggregation
  • Datadog, Splunk, New Relic (commercial platforms)
  • CloudWatch, Azure Monitor, Google Cloud Monitoring (cloud-native)

Usage Example: Basic Monitoring with Prometheus & Grafana (Conceptual):

Prometheus Configuration (`prometheus.yml`):

# prometheus.yml
global:
  scrape_interval: 15s # How frequently to scrape targets
scrape_configs:
  - job_name: 'node_exporter' # Monitoring a Linux server
    static_configs:
      - targets: ['your_linux_server_ip:9100'] # node_exporter runs on 9100
  - job_name: 'cadvisor' # Monitoring Docker containers
    static_configs:
      - targets: ['your_docker_host_ip:8080'] # cadvisor runs on 8080

Grafana Dashboard:

# Example PromQL query in Grafana:
node_cpu_seconds_total{mode="idle"} # Idle CPU time
container_memory_usage_bytes{name="my-app-container"} # Memory usage of a container
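
Metrics become actionable when paired with alerting. A sketch of a Prometheus alerting rule, loaded via `rule_files` in `prometheus.yml` (the threshold and durations here are illustrative):

# alert_rules.yml
groups:
  - name: node_alerts
    rules:
      - alert: HighCpuUsage
        # Fire when average non-idle CPU on a host exceeds 90% for 5 minutes
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"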

12. Cloud Platforms (AWS, Azure, GCP)

DevOps practices are heavily intertwined with cloud computing, leveraging cloud services for scalability, flexibility, and automation.

Each cloud provider offers its own suite of services that map to various stages of the DevOps lifecycle. A common DevOps approach involves building cloud-agnostic applications (e.g., using Docker/Kubernetes) or leveraging specific cloud provider's managed services.
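
As a small illustration of the per-provider tooling, here is the same task, listing virtual machines, in each provider's CLI (assuming the `aws`, `az`, and `gcloud` CLIs are installed and authenticated):

# AWS
aws ec2 describe-instances --query "Reservations[].Instances[].InstanceId"

# Azure
az vm list --output table

# Google Cloud
gcloud compute instances list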

13. Security (DevSecOps)

DevSecOps integrates security practices into every stage of the DevOps pipeline, shifting security "left" (earlier in the development cycle) to identify and address vulnerabilities proactively.

Key Principles:

  • Shift left: Find and fix security issues as early as possible in the pipeline.
  • Automation: Make security checks part of CI/CD rather than a manual gate.
  • Shared responsibility: Security is owned by the whole team, not a separate silo.

Examples of DevSecOps Practices:

  • Static Application Security Testing (SAST) on every commit (e.g., SonarQube).
  • Dependency scanning / Software Composition Analysis (e.g., npm audit, OWASP Dependency-Check).
  • Container image scanning before pushing to a registry (e.g., Trivy).
  • Secrets management (e.g., HashiCorp Vault) instead of hard-coded credentials.
  • Dynamic Application Security Testing (DAST) against running environments (e.g., OWASP ZAP).

Usage Example: Container Image Scanning in CI Pipeline:

// Jenkinsfile (DevSecOps stage)
pipeline {
    agent any
    stages {
        // ... previous stages (Checkout, Build) ...

        stage('Build Docker Image') {
            steps {
                script {
                    docker.build("my-app:${env.BUILD_NUMBER}")
                }
            }
        }

        stage('Scan Docker Image') {
            steps {
                // Assuming Trivy is installed on the agent or used in a Docker container
                echo "Scanning Docker image for vulnerabilities..."
                sh "trivy image --severity HIGH,CRITICAL my-app:${env.BUILD_NUMBER}"
                // Fail the build if critical vulnerabilities are found
                // sh "trivy image --exit-code 1 --severity HIGH,CRITICAL my-app:${env.BUILD_NUMBER}" 
            }
            post {
                failure {
                    echo 'Image scan found critical vulnerabilities!'
                    // mail to: 'security-team@example.com', subject: 'Container Vulnerability Alert!'
                }
            }
        }
        // ... subsequent stages (Deploy) ...
    }
}

14. Testing in DevOps

Testing in DevOps is continuous, automated, and integrated throughout the pipeline, rather than being a separate phase at the end.

Usage Example: Running API Tests in CI/CD:

// Jenkinsfile (API Testing stage)
pipeline {
    agent any
    stages {
        // ... previous stages (Build, Deploy to Dev/Test Environment) ...

        stage('Run API Tests') {
            steps {
                echo "Running automated API tests..."
                // Assuming you have a Newman (Postman CLI) collection
                // and a 'api_tests.json' Postman collection in your repo
                sh 'npm install -g newman # Ensure Newman is installed on agent'
                sh 'newman run api_tests.json -e dev_env.json -r cli,junit --reporter-junit-export api-test-results.xml'
                junit '**/api-test-results.xml' // Publish results
            }
            post {
                failure {
                    echo 'API tests failed!'
                }
            }
        }
        // ... subsequent stages ...
    }
}

15. Best Practices

  • Automate everything that is repeated: builds, tests, deployments, infrastructure.
  • Keep everything in version control, including pipelines, IaC, and configuration.
  • Make small, frequent changes; they are easier to test, review, and roll back.
  • Build an artifact once and deploy that same artifact to every environment.
  • Shift testing and security left, as early in the pipeline as possible.
  • Monitor production and feed what you learn back into planning.
  • Conduct blameless postmortems to learn from failures.

16. DevOps Tools Ecosystem (Summary)

The DevOps landscape is vast and constantly evolving. Here's a summary of key tool categories and popular examples:

  • Planning: Jira, Azure Boards, Trello
  • Version Control: Git, GitHub, GitLab, Bitbucket
  • Build: Maven, Gradle, npm, Webpack
  • CI/CD: Jenkins, GitLab CI, GitHub Actions, Azure Pipelines, Spinnaker
  • Testing: JUnit, Selenium, Jest, Cypress, JMeter
  • IaC: Terraform, AWS CloudFormation, Pulumi
  • Configuration Management: Ansible, Puppet, Chef
  • Containers & Orchestration: Docker, Kubernetes
  • Monitoring & Logging: Prometheus, Grafana, ELK Stack, Datadog
  • Security: SonarQube, Trivy, HashiCorp Vault
  • Cloud Platforms: AWS, Azure, GCP

The DevOps Journey: Continuous Improvement!

DevOps is more than just tools; it's a fundamental shift in how organizations approach software delivery. By embracing its principles and leveraging automation across the entire lifecycle, teams can achieve unprecedented speed, quality, and reliability. This tutorial provides a strong foundation. The real learning comes from hands-on practice, building pipelines, experimenting with tools, and continuously optimizing your processes.