Updating an application
Users expect applications to be available all the time, and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling updates allow a Deployment's update to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods are scheduled on Nodes with available resources.
Suppose we have scaled our application to run multiple instances. This is a requirement for performing updates without affecting application availability. By default, the maximum number of Pods that can be unavailable during the update and the maximum number of new Pods that can be created are both one. Both options can be configured as either absolute numbers or percentages (of Pods). In Kubernetes, updates are versioned, and any Deployment update can be reverted to a previous (stable) version.
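Both limits live under the Deployment's update strategy in the manifest; a minimal fragment sketch (the values shown are the defaults described above):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod may be down during the update
      maxSurge: 1         # at most one extra Pod may be created above the desired count
```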
Continue reading “[Kubernetes] – P.6 – Rolling Update”
If the Deployment creates only one Pod for running our application, then when traffic increases we will need to scale the application to keep up with user demand.
Scaling is accomplished by changing the number of replicas in a Deployment
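In the Deployment manifest this is the `replicas` field; a fragment sketch (the count is an example, not from the post):

```yaml
spec:
  replicas: 4   # desired number of Pod instances; change and re-apply to scale
```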
Continue reading “[Kubernetes] – P.5 – Scale”
Overview of Services
Pods have a lifecycle. When a worker node dies, the Pods running on the Node are also lost. A ReplicaSet might then dynamically drive the cluster back to the desired state by creating new Pods to keep your application running.
Consider a back-end system with several replicas. Those replicas are exchangeable; the front-end system should not care which backend replica it talks to, or even whether a Pod is lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.
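A Service provides that stable endpoint by selecting Pods via labels instead of IP addresses; a hedged sketch (the name, label, and ports are assumptions, not from the post):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend            # placeholder name
spec:
  selector:
    app: backend           # matches Pods carrying this label, whatever their IPs
  ports:
  - protocol: TCP
    port: 80               # port the Service exposes inside the cluster
    targetPort: 8080       # port the backend containers listen on (assumed)
```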
Continue reading “[Kubernetes] – P.4 – Services”
- A Pod is a group of one or more application containers (such as Docker or rkt) and includes shared storage (volumes), an IP address and information about how to run them.
A Pod is a Kubernetes abstraction that represents a group of one or more application containers, and some shared resources for those containers. Those resources include:
- Shared storage (as Volumes)
- Networking (as a unique cluster IP address)
- Information about how to run each container (such as the container image version or specific ports to use)
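Those shared resources appear directly in a Pod manifest; a minimal sketch (the name, image, and port are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod             # placeholder name
spec:
  containers:
  - name: app
    image: nginx:1.25      # placeholder container image version
    ports:
    - containerPort: 80    # specific port the container uses
  volumes: []              # shared storage (Volumes) would be declared here
```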
Continue reading “[Kubernetes] – P.3 – Pods and Nodes”
A Deployment is responsible for creating and updating instances of your application
Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes how to create and update instances of your application. Once you've created a Deployment, the Kubernetes master schedules the application instances onto individual Nodes in the cluster.
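A Deployment configuration is typically written as a manifest like the following; a hedged sketch, where the name, label, and image are placeholders rather than the post's actual values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app             # placeholder name
spec:
  replicas: 2              # how many instances to keep running
  selector:
    matchLabels:
      app: my-app          # which Pods this Deployment manages
  template:                # the Pod template used to create instances
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0  # placeholder container image
```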
Continue reading “[Kubernetes] – P.2 – Deploying First App”
Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit.
The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines. To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized.
Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way.
Continue reading “[Kubernetes] – P.1 – Clusters”
With modern web services, users expect applications to be available 24/7, and developers expect to deploy new versions of those applications several times a day. Containerization helps package software to serve these goals, enabling applications to be released and updated in an easy and fast way without downtime. Continue reading “[Kubernetes] – P.0 – What is Kubernetes?”
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker.
Initialize the local environment
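Iteration over Redis keys is done with the cursor-based SCAN command rather than the blocking KEYS command. A minimal sketch of the cursor loop in Python, where `StubRedis` is a hypothetical in-memory stand-in that mimics SCAN's contract (a real client such as redis-py exposes the same cursor/page shape):

```python
class StubRedis:
    """Hypothetical stand-in mimicking Redis SCAN: returns a new cursor
    plus one page of keys; cursor 0 signals the end of the iteration."""

    def __init__(self, keys):
        self._keys = list(keys)

    def scan(self, cursor=0, match=None, count=10):
        page = self._keys[cursor:cursor + count]
        next_cursor = cursor + count
        if next_cursor >= len(self._keys):
            next_cursor = 0          # 0 means iteration is complete
        if match is not None:
            prefix = match.rstrip("*")
            page = [k for k in page if k.startswith(prefix)]
        return next_cursor, page

def iter_keys(client, match=None, count=10):
    """Yield keys one page at a time until the cursor wraps to 0."""
    cursor = 0
    while True:
        cursor, page = client.scan(cursor=cursor, match=match, count=count)
        yield from page
        if cursor == 0:
            break

client = StubRedis([f"user:{i}" for i in range(25)] + ["session:1"])
user_keys = list(iter_keys(client, match="user:*", count=10))
print(len(user_keys))  # 25 keys match "user:*"
```

The same loop works against a live server by swapping `StubRedis` for a real client connection; only the `scan` call's cursor semantics matter.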
Continue reading “Iterate Redis keys using Python”
Performance optimization means making a program run faster; it is closely related to refactoring
Typically, an 80/20 rule applies. A 20% effort is needed to implement the bulk of the program. Then another 80% effort (and budget) is needed to optimize, refactor, extend and/or document the program.
You first need the program to produce correct results (with correct input) before you know what is making it slow. Do not worry about performance during development.
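A hedged illustration of "correctness first, then measure" using the standard library's `timeit`; the two functions are hypothetical examples for this sketch, not code from the post:

```python
import timeit

def squares_loop(n):
    # Straightforward version: append in a loop.
    result = []
    for i in range(n):
        result.append(i * i)
    return result

def squares_comp(n):
    # Candidate optimization: list comprehension.
    return [i * i for i in range(n)]

# Correctness first: both versions must agree before comparing speed.
assert squares_loop(1000) == squares_comp(1000)

# Only then measure; timings vary by machine, so inspect rather than assume.
t_loop = timeit.timeit(lambda: squares_loop(1000), number=1000)
t_comp = timeit.timeit(lambda: squares_comp(1000), number=1000)
print(f"loop: {t_loop:.4f}s  comprehension: {t_comp:.4f}s")
```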
Continue reading “Python Code Optimization”
First, we need to initialize the folder structure for Docker Compose
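A hedged sketch of what a `docker-compose.yml` for a LEMP stack might look like; the service names, image tags, host paths, and credential are assumptions for illustration, not the post's actual files:

```yaml
version: "3"
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d   # assumed folder layout
      - ./src:/var/www/html
  php:
    image: php:fpm
    volumes:
      - ./src:/var/www/html
  mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: secret          # placeholder credential
    volumes:
      - ./mysql/data:/var/lib/mysql
```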
Continue reading “[Docker] Setting up LEMP environment”