
Deploy a Sample Application


In this lesson, we’ll walk through the process of deploying a simple application on your Minikube cluster. This hands-on experience will help you understand how to interact with Kubernetes resources, manage deployments, and expose your application to external traffic. By the end of this lesson, you will have deployed a "Hello World" application on Minikube and accessed it via a web browser.

1. Deploying Your First Application

We’ll start by deploying a simple "Hello World" application using a Kubernetes Deployment. A Deployment manages a set of identical Pods and ensures that the specified number of Pods are running at any given time.

Step 1: Create a Deployment

  • Open your terminal or command prompt and run the following kubectl command to create a Deployment:

    kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
    • Explanation:

      • kubectl create deployment: This command creates a new Deployment.

      • hello-minikube: This is the name of the Deployment.

      • --image=kicbase/echo-server:1.0: Specifies the container image to use for the Pods. Here we're using a simple echo server that responds with the details of each request it receives.
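The imperative command above can also be expressed declaratively. As a sketch (the manifest kubectl generates for you may differ in minor details), the equivalent apps/v1 Deployment YAML looks like this and could be applied with kubectl apply -f:

```shell
# Sketch of the declarative equivalent of `kubectl create deployment`.
cat > hello-minikube-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-minikube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-minikube
  template:
    metadata:
      labels:
        app: hello-minikube
    spec:
      containers:
      - name: echo-server
        image: kicbase/echo-server:1.0
EOF
# Applying requires a running cluster, so it is commented out here:
# kubectl apply -f hello-minikube-deployment.yaml
```

Keeping the manifest in a file makes the Deployment reproducible and reviewable, which matters more as your applications grow beyond a single command.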

Step 2: Verify the Deployment

  • Check that the Deployment has been created and the Pods are running:

    kubectl get deployments
    • Expected Output:

      • You should see your hello-minikube Deployment listed with 1/1 Pods running.

    kubectl get pods
    • Expected Output:

      • This command lists the Pods created by the Deployment. You should see a Pod with a name that starts with hello-minikube, and its status should be Running.

2. Exposing the Application to External Traffic

To access your deployed application from outside the cluster, you need to expose it as a Kubernetes Service. A Service is an abstraction that defines a logical set of Pods and a policy by which to access them.

Step 1: Expose the Deployment

  • Use the following command to create a Service that exposes your Deployment on a specific port:

    kubectl expose deployment hello-minikube --type=NodePort --port=8080
    • Explanation:

      • kubectl expose deployment: This command exposes a Deployment as a Service.

      • --type=NodePort: This exposes the Service on a port on each node in the cluster. The port is allocated from the default NodePort range, 30000-32767.

      • --port=8080: This specifies the port that the application inside the Pods listens on.
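As with the Deployment, kubectl expose has a declarative counterpart. A sketch of the Service manifest it produces (field names per the core v1 Service API; the selector matches the app=hello-minikube label that kubectl create deployment applies to the Pods):

```shell
# Sketch of the declarative equivalent of `kubectl expose deployment`.
cat > hello-minikube-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: hello-minikube
spec:
  type: NodePort
  selector:
    app: hello-minikube
  ports:
  - port: 8080
    targetPort: 8080
EOF
# Applying requires a running cluster, so it is commented out here:
# kubectl apply -f hello-minikube-service.yaml
```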

Step 2: Get the Service Details

  • Retrieve the details of the Service, including the NodePort assigned:

    kubectl get services hello-minikube
    • Expected Output:

      • The output should include a PORT(S) column showing something like 8080:<NodePort>. The <NodePort> is the external port you’ll use to access the application.
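If you want to sanity-check the assigned port from a script rather than by reading the table, the NodePort can be extracted with a JSONPath query and compared against the default range. A sketch, with a hard-coded placeholder standing in for real cluster output:

```shell
# On a live cluster you could fetch the port with:
#   NODE_PORT=$(kubectl get service hello-minikube -o jsonpath='{.spec.ports[0].nodePort}')
# Placeholder value standing in for real cluster output:
NODE_PORT=31452

# NodePorts are allocated from 30000-32767 by default.
if [ "$NODE_PORT" -ge 30000 ] && [ "$NODE_PORT" -le 32767 ]; then
  echo "NodePort $NODE_PORT is in the default range"
fi
```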

Step 3: Access the Application

  • To access the application, you need the IP address of the Minikube node and the NodePort:

    Option 1: Use Minikube Service Command

    • The easiest way to access the application is to run:

      minikube service hello-minikube
      • This command will automatically open your default web browser and direct it to the correct URL.

    Option 2: Manually Access via IP and Port

    • Alternatively, you can manually retrieve the Minikube IP and access the application:

      minikube ip
      • This command will return the IP address of the Minikube node.

      • Open your web browser and go to http://<Minikube_IP>:<NodePort> (replace <Minikube_IP> with the IP address returned and <NodePort> with the NodePort from the kubectl get services command).

    • Expected Output:

      • You should see the echo server's response, which echoes back details of your HTTP request (hostname, headers, and so on).
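Option 2 can also be scripted: capture the node IP and the NodePort, then build the URL. A sketch with placeholder values (on a live cluster, the commented commands would supply the real ones):

```shell
# On a live cluster:
#   MINIKUBE_IP=$(minikube ip)
#   NODE_PORT=$(kubectl get service hello-minikube -o jsonpath='{.spec.ports[0].nodePort}')
# Placeholder values standing in for real cluster output:
MINIKUBE_IP="192.168.49.2"
NODE_PORT="31452"

URL="http://${MINIKUBE_IP}:${NODE_PORT}"
echo "$URL"
# Then fetch it (cluster required):
# curl "$URL"
```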

3. Scaling the Application

Kubernetes makes it easy to scale your application to handle more traffic by increasing the number of Pods.

Step 1: Scale the Deployment

  • Use the following command to scale your Deployment to 3 replicas:

    kubectl scale deployment hello-minikube --replicas=3
    • Explanation:

      • This command tells Kubernetes to maintain 3 replicas (Pods) of your hello-minikube Deployment.

Step 2: Verify the Scaling

  • Check the status of your Deployment to ensure that 3 Pods are running:

    kubectl get deployments
    • Expected Output:

      • The AVAILABLE column should show 3, indicating that all 3 Pods are running.

    kubectl get pods
    • Expected Output:

      • You should see 3 Pods listed, all with a Running status.
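To verify scaling from a script rather than by eye, you can compare ready replicas against desired replicas using JSONPath queries on the Deployment. A sketch, with placeholder values standing in for real cluster output:

```shell
# On a live cluster:
#   DESIRED=$(kubectl get deployment hello-minikube -o jsonpath='{.spec.replicas}')
#   READY=$(kubectl get deployment hello-minikube -o jsonpath='{.status.readyReplicas}')
# Placeholder values standing in for real cluster output:
DESIRED=3
READY=3

if [ "$READY" -eq "$DESIRED" ]; then
  echo "all $DESIRED replicas are ready"
else
  echo "only $READY of $DESIRED replicas ready"
fi
```

The same comparison is what kubectl rollout status performs for you when it waits on a Deployment.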

4. Cleaning Up

Once you’re done experimenting, it’s a good idea to clean up the resources you created to free up system resources.

Step 1: Delete the Service

  • Remove the Service that exposed your application:

    kubectl delete service hello-minikube

Step 2: Delete the Deployment

  • Remove the Deployment, which will also terminate the associated Pods:

    kubectl delete deployment hello-minikube

Conclusion

You’ve successfully deployed a sample application on your Minikube cluster, exposed it to external traffic, scaled it, and cleaned up the resources. This lesson introduced you to the core concepts of managing Kubernetes Deployments and Services, laying the groundwork for more complex application deployments in the future. In the next lesson, we’ll explore how to manage Kubernetes resources using kubectl and dive deeper into Kubernetes commands and operations.


Last updated 9 months ago