EKS Kubernetes Cluster Service

A high-performance, scalable container application management service

How it works

Built around unified multi-cluster management, EKS supports one-click deployment of Kubernetes clusters on the underlying cloud infrastructure and completes node initialization quickly. Depending on your scenario, you can deploy Kubernetes clusters of different sizes, such as a test environment with a single master node or a highly available production environment with multiple master nodes. Each tenant can create multiple Kubernetes clusters and perform management operations on them such as scaling, monitoring, and deletion.

Advantages

Why choose EKS?

Easy to Use

One-click cluster deployment and scaling through the console, with one-stop automated deployment and operation of container applications.

Highly Elastic Experience

Flexible elastic scaling policies allow large numbers of container instances to be scaled out within seconds.

Flexible Deployment Methods

Supports multiple container deployment methods, including specified images, Chart templates, YAML import, and automated pipelines.
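As an illustration of the YAML import method, a minimal Deployment manifest might look like the following; the demo-web name and the nginx image are placeholders, not values prescribed by EKS.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web              # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image; replace with your own
          ports:
            - containerPort: 80

The same workload could equally be packaged as a Chart template or built and deployed through an automated pipeline.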

Cluster High Availability

The cluster control plane supports a highly available three-master configuration. If one control-plane node fails, the cluster remains available, ensuring high availability for your business.

Multi-Architecture Support

Compatible with mainstream chips on both x86 and Arm computing architectures, such as Intel, Phytium, and Kunpeng.

Integrated Design

Deep integration between Container Service and Cloud Foundation unifies resources such as permissions, networks, and storage, helping transform traditional applications to a cloud-native architecture.

Use cases

EKS Use Cases


Continuous Delivery

With DevOps services, the platform automatically completes code compilation, image building, testing, and containerized deployment based on your code source. This provides a one-stop containerized delivery process, greatly improving software release efficiency and reducing release risk.
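EKS's own pipeline configuration format is not shown in this document; purely as an illustration of the flow, the sketch below uses GitLab-CI-style YAML with a hypothetical registry address, image name, and test command.

stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    # build and push an image tagged with the commit SHA (registry is hypothetical)
    - docker build -t registry.example.com/demo-web:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/demo-web:$CI_COMMIT_SHORT_SHA

run-tests:
  stage: test
  script:
    # placeholder test command run inside the freshly built image
    - docker run --rm registry.example.com/demo-web:$CI_COMMIT_SHORT_SHA ./run-tests.sh

deploy:
  stage: deploy
  script:
    # roll the new image out to the existing Deployment in the cluster
    - kubectl set image deployment/demo-web web=registry.example.com/demo-web:$CI_COMMIT_SHORT_SHA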


Batch Tasks

You can create Job workloads in a Kubernetes cluster that execute in sequence or in parallel; both one-off short jobs and periodic jobs are supported. One-off short jobs run once after deployment. Periodic jobs run short jobs on a fixed schedule (for example, at 8 a.m. every day) and are suited to tasks such as regular time synchronization and data backup.
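For example, a periodic job that runs at 8 a.m. every day maps to a standard Kubernetes CronJob; the name, image, and command below are placeholders for illustration.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup          # placeholder name
spec:
  schedule: "0 8 * * *"         # run at 08:00 every day
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: busybox:1.36
              command: ["sh", "-c", "echo 'backing up data...'"]   # placeholder backup command

A one-off short job uses kind: Job instead, whose spec corresponds to the jobTemplate.spec shown above.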


Microservice Architecture Support

A microservice architecture is well suited to building complex applications: a single application is split along different dimensions into multiple manageable microservices, each of which can choose its own development technology and be deployed and scaled independently. Once an application is split into microservices, you only need to focus on iterating each microservice, while the platform provides the scheduling, orchestration, deployment, and release capabilities.
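Concretely, each microservice typically gets its own Deployment (like the one shown earlier) plus a Service for stable in-cluster access, so it can be released and scaled on its own; the order-service name below is hypothetical.

apiVersion: v1
kind: Service
metadata:
  name: order-service           # hypothetical microservice name
spec:
  selector:
    app: order-service
  ports:
    - port: 80                  # port exposed to other services
      targetPort: 8080          # port the container listens on

Scaling one microservice then leaves the others untouched, for example: kubectl scale deployment order-service --replicas=5.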


Elastic Scaling

Workloads are scaled according to access traffic based on configured policies, avoiding both the failures caused by sudden traffic spikes and delayed capacity expansion, and the waste of resources that sit idle during normal operation. Pod-level elastic scaling is triggered when the average CPU or memory usage of a workload's Pods exceeds the configured threshold. When cluster resources are insufficient, cluster nodes can be scaled out quickly to run more containers.
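Pod-level scaling of this kind corresponds to a standard Kubernetes HorizontalPodAutoscaler; the sketch below assumes a Deployment named demo-web and illustrative thresholds of 70% CPU and 80% memory utilization.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-web              # placeholder workload name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # illustrative CPU threshold
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80    # illustrative memory threshold

When the scaled-out Pods no longer fit on existing nodes, node-level expansion adds worker nodes so the newly scheduled containers have somewhere to run.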
