Kubernetes Management Design Patterns, 1st ed.
With Docker, CoreOS Linux, and Other Platforms

Author: Deepak Vohra

Language: English

Approximate price: 47.46 €

In Print (Delivery period: 15 days).

Publication date:
Support: Print on demand
Take container cluster management to the next level: learn how to administer and configure Kubernetes on CoreOS, and apply suitable management design patterns such as ConfigMaps, autoscaling, elastic resource usage, and high availability. Other features discussed include logging, scheduling, rolling updates, volumes, service types, and multiple cloud provider zones.
 
The atomic unit of modular container service in Kubernetes is a Pod, a group of containers that share storage volumes and a network namespace. The Kubernetes Pod abstraction enables design patterns for containerized applications similar to object-oriented design patterns. Containers provide some of the same benefits as software objects, such as modularity (packaging), abstraction, and reuse.
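For example, a minimal Pod manifest along the following lines (a hypothetical sketch; the names and images are placeholders, not taken from the book) runs two containers that share the Pod's network namespace and a common emptyDir volume:

    # pod-sketch.yaml -- hypothetical two-container Pod for illustration
    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-pod                  # placeholder name
    spec:
      volumes:
      - name: shared-data               # volume mounted by both containers
        emptyDir: {}
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
      - name: content
        image: busybox
        command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /data

The content container writes a file into the shared volume and the web container serves it; because both containers share the Pod's network namespace, they can also reach each other on localhost.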

CoreOS Linux is used in the majority of the chapters; the other platforms discussed are CentOS with OpenShift, Debian 8 (Jessie) on AWS, and Debian 7 for Google Container Engine.

CoreOS is the main focus because Docker comes pre-installed on CoreOS out of the box. CoreOS:
  • Supports most cloud providers (including Amazon AWS EC2 and Google Cloud Platform) and virtualization platforms (such as VMware and VirtualBox)
  • Provides Cloud-Config for declaratively configuring OS items such as network configuration (flannel), storage (etcd), and user accounts (see the sample cloud-config sketch after this list)
  • Provides a production-level infrastructure for containerized applications including automation, security, and scalability
  • Leads the drive for container industry standards and founded appc 
  • Provides the most advanced container registry, Quay  
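As a simple illustration of the Cloud-Config item above, a cloud-config file for a CoreOS node might look roughly like the following sketch; the discovery URL and SSH key are placeholders, not working values:

    #cloud-config
    coreos:
      etcd2:
        # placeholder discovery token; generate one at https://discovery.etcd.io/new
        discovery: https://discovery.etcd.io/<token>
        advertise-client-urls: http://$private_ipv4:2379
        listen-client-urls: http://0.0.0.0:2379
      flannel:
        interface: $private_ipv4
      units:
        - name: etcd2.service
          command: start
        - name: flanneld.service
          command: start
    users:
      - name: core
        ssh-authorized-keys:
          - ssh-rsa AAAA...             # placeholder public key

A file like this is passed to the node as user data, so etcd, flannel, and user accounts are configured declaratively at first boot rather than by hand afterward.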
 
Docker was made available as open source in March 2013 and has become the most commonly used containerization platform. Kubernetes was open-sourced in June 2014 and has become the most widely used container cluster manager. The first stable version of CoreOS Linux was released in July 2014, and CoreOS has since become one of the most commonly used operating systems for containers.
 

What You'll Learn

  • Use Kubernetes with Docker
  • Create a Kubernetes cluster on CoreOS on AWS
  • Apply cluster management design patterns
  • Use multiple cloud provider zones
  • Work with Kubernetes and tools like Ansible
  • Discover the Kubernetes-based PaaS platform OpenShift
  • Create a high availability website
  • Build a high availability Kubernetes master cluster
  • Use volumes, ConfigMaps, services, autoscaling, and rolling updates
  • Manage compute resources
  • Configure logging and scheduling


Who This Book Is For

Linux admins, CoreOS admins, application developers, and container-as-a-service (CaaS) developers. Some prerequisite knowledge of Linux and Docker is required, as is introductory knowledge of Kubernetes, such as creating a cluster, creating a Pod, creating a service, and creating and scaling a replication controller. For introductory Docker and Kubernetes information, refer to Pro Docker (Apress) and Kubernetes Microservices with Docker (Apress). Some prerequisite knowledge of Amazon Web Services (AWS) EC2, CloudFormation, and VPC is also required.

Introduction
 
Section 1 Platforms
1 Kubernetes on AWS
1.1 Installing a Kubernetes Cluster on AWS
1.2 Creating a Deployment
1.3 Creating a Service
1.4 Accessing the Service
1.5 Scaling the Deployment
1.6 Summary 
2 Kubernetes on CoreOS
2.1 Setting the Environment
2.2 Configuring AWS Credentials
2.3 Installing Kube-aws
2.4 Setting Up Cluster Parameters
    2.4.1 Creating a KMS Key
    2.4.2 Setting Up an External DNS Name
2.5 Creating the Cluster CloudFormation
    2.5.1 Creating an Asset Directory
    2.5.2 Initializing the Cluster CloudFormation
    2.5.3 Rendering Contents of the Asset Directory
    2.5.4 Customizing the Cluster
    2.5.5 Validating the CloudFormation Stack
    2.5.6 Launching the Cluster CloudFormation
2.6 Configuring DNS
2.7 Accessing the Cluster
2.8 Testing the Cluster
2.9 Summary 
3 Kubernetes on Google Cloud Platform
3.1 Setting the Environment
3.2 Creating a Project on Google Cloud Platform
3.3 Enabling Permissions
3.4 Enabling the Compute Engine API
3.5 Creating a VM Instance
3.6 Connecting to the VM Instance
3.7 Reserving a Static Address
3.8 Creating a Kubernetes Cluster
3.9 Creating a Kubernetes Application and Service
3.10 Stopping the Cluster
3.11 Summary
 
Section 2 Administration and Configuration
4 Using Multiple Zones
4.1 Setting the Environment
4.2 Initializing a CloudFormation
4.3 Configuring Cluster.yaml for Multiple Zones
4.4 Launching the CloudFormation
4.5 Configuring External DNS
4.6 Running a Kubernetes Application
4.7 Using Multiple Zones on AWS
4.8 Summary 
5 Using the Tectonic Console
5.1 Setting the Environment
5.2 Downloading the Pull Secret and the Tectonic Console manifest
5.3 Installing the Pull Secret and the Tectonic Console
5.4 Accessing the Tectonic Console
5.5 Using the Tectonic Console
5.6 Removing the Tectonic Console
5.7 Summary 
6 Using Volumes
6.1 Setting the Environment
6.2 Creating an AWS Volume
6.3 Using an awsElasticBlockStore Volume
6.4 Creating a Git Repo
6.5 Using a gitRepo Volume
6.6 Summary
7 Using Services
7.1 Setting the Environment
7.2 Creating a ClusterIP Service
7.3 Creating a NodePort Service
7.4 Creating a LoadBalancer Service
7.5 Summary 
 
8 Using Rolling Updates
8.1 Setting the Environment
8.2 Rolling Update with an RC Definition File
8.3 Rolling Update by Updating the Container Image
8.4 Rolling Back an Update
8.5 Using Only Either a File or an Image
8.6 Rolling Update on a Deployment with a Deployment File
 
9 Scheduling Pods
9.1 Scheduling Policy
9.2 Setting the Environment
9.3 Using the Default Scheduler
9.4 Scheduling Pods without a Node Selector
9.5 Setting Node Labels
9.6 Scheduling Pods with a Node Selector
9.7 Setting Node Affinity
    9.7.1 Setting requiredDuringSchedulingIgnoredDuringExecution
    9.7.2 Setting preferredDuringSchedulingIgnoredDuringExecution
 
10 Configuring Compute Resources
10.1 Types of Compute Resources
10.2 Resource Requests and Limits
10.3 Quality of Service
10.4 Setting the Environment
10.5 Finding Node Capacity
10.6 Creating a Pod with Resources Specified
10.7 Overcommitting Resource Limits
10.8 Reserving Node Resources
 
11 Using ConfigMaps
11.1 The kubectl create configmap Command
11.2 Setting the Environment
11.3 Creating ConfigMaps from Directories
11.4 Creating ConfigMaps from Files
11.5 Creating ConfigMaps from Literal Values
11.6 Consuming a ConfigMap in a Volume
 
12 Setting Resource Quotas
12.1 Setting the Environment
12.2 Defining Compute Resource Quotas
12.3 Exceeding Compute Resource Quotas
12.4 Defining Object Quotas
12.5 Exceeding Object Resource Quotas
12.6 Defining Best Effort Quotas
12.7 Using Quotas
12.8 Exceeding Object Quotas
12.9 Exceeding ConfigMaps Quota
 
13 Using Autoscaling
13.1 Setting the Environment
13.2 Running a PHP Apache Server Deployment
13.3 Creating a Service
13.4 Creating a Horizontal Pod Autoscaler
13.5 Increasing Load
 
14 Configuring Logging
14.1 Setting the Environment
14.2 Getting the Logs Generated by the Default Logger
14.3 Docker Log Files
14.4 Cluster-Level Logging with Elasticsearch and Kibana
    14.4.1 Starting Elasticsearch
    14.4.2 Starting a Replication Controller
    14.4.3 Starting Fluentd Elasticsearch to Collect Logs
    14.4.4 Starting Kibana
 
Section 3 High Availability
15 Using an HA Master with OpenShift
15.1 Setting the Environment
15.2 Installing the Credentials
15.3 Installing the Network Manager
15.4 Installing OpenShift Ansible
15.5 Configuring Ansible
15.6 Running the Ansible Playbook
15.7 Testing the Cluster
15.8 Testing the HA
15.9 Summary
16 Developing a Highly Available Web Site
16.1 Setting the Environment
16.2 Creating Multiple CloudFormations
16.3 Configuring External DNS
16.4 Creating a Kubernetes Service
16.5 Creating an AWS Route 53
    16.5.1 Creating a Hosted Zone
    16.5.2 Configuring Name Servers
    16.5.3 Creating Record Sets
16.6 Testing HA
16.7 Summary

Deepak Vohra is an Oracle Certified Associate and a Sun Certified Java Programmer. Deepak has published in Oracle Magazine, OTN, IBM developerWorks, ONJava, DevSource, WebLogic Developer’s Journal, XML Journal, Java Developer’s Journal, FTPOnline, and DevX.

The only advanced practical guide on using Kubernetes management patterns on CoreOS

Discusses the ease of use provided by Kubernetes in developing and running applications in Pods

Covers cloud platforms such as Amazon AWS EC2 and Google Cloud Platform