Application Platform Concepts
The capabilities we expect of an enterprise application platform are:
Requirements of the Platform
Service Discovery: The platform should route service requests to the right service instance, taking care of load balancing along the way. See below for what Kubernetes provides.
Configuration: Configuration should be managed centrally. Kubernetes provides this through ConfigMaps and Secrets, so that you do not have to hard-code configuration information into the application.
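As a minimal sketch of the ConfigMap and Secret mechanism (all names and values here are illustrative assumptions, not from a real deployment):

```yaml
# A ConfigMap holding non-sensitive settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: db.example.com
  LOG_LEVEL: info
---
# A Secret for sensitive values.
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: changeme
---
# A pod consuming both as environment variables via envFrom.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-registry/app:1.0   # illustrative image
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secret
```

The application then reads DB_HOST, LOG_LEVEL and DB_PASSWORD from its environment, with no configuration baked into the image.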
Invocation: Applications running in a container can be reached through a Service or an Ingress: a Service routes traffic to your application at OSI layer 4, while an Ingress routes it at layer 7.
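A sketch of both access paths for a hypothetical application labelled app: my-app (the names, host and ports are illustrative):

```yaml
# Layer-4 access: a Service forwarding port 80 to pods labelled app: my-app.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
# Layer-7 access: an Ingress routing HTTP traffic by host and path
# to the Service above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```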
Elasticity: Kubernetes provides Deployments and ReplicaSets to scale your application up or down easily. The Horizontal Pod Autoscaler scales the number of pods based on CPU utilization or other metrics.
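A minimal Horizontal Pod Autoscaler sketch, assuming a Deployment named my-app exists (the name and thresholds are illustrative):

```yaml
# Scale the my-app Deployment between 2 and 10 replicas,
# targeting 80% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```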
Resilience: Kubernetes is architected for resilience: ReplicaSets ensure that the specified number of pods is always running in the cluster, while PersistentVolumes preserve state across pod restarts.
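The ReplicaSet behaviour can be sketched with a Deployment (which manages the ReplicaSet for you); the name and image are illustrative:

```yaml
# Kubernetes will keep 3 pods of this template running at all times,
# replacing any pod that crashes or whose node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0   # illustrative image
```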
Pipeline: The chosen platform should be able to automatically check out the latest code, perform a code review, then build, package and deploy it to the test environment, where automated feature testing and regression testing would be performed.
Authentication: Access to the tools and APIs provided by the platform should be protected by authentication and Role-Based Access Control (RBAC), so that only authorised users can use the features the platform provides.
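A minimal sketch of Kubernetes RBAC, granting a user read-only access to pods in one namespace (the namespace and user name are illustrative assumptions):

```yaml
# A Role describing what may be done: read pods in my-namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# A RoleBinding attaching that Role to an authenticated user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-namespace
subjects:
  - kind: User
    name: jane   # illustrative user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```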
Logging: Kubernetes provides a platform on which to deploy logging stacks such as EFK, which comprises Elasticsearch (the object store where all logs are kept), Fluentd (which gathers logs from the nodes and forwards them to Elasticsearch) and Kibana (a web UI for Elasticsearch).
Monitoring: The platform should collect and expose runtime metrics; on Kubernetes, Prometheus is the usual choice for this.
Tracing: Requests that span multiple services should be traceable end to end; this is covered under Distributed Tracing in the Microservices Design Patterns section below.
What Kubernetes provides
In this section, we give examples of how Kubernetes provides some of the above capabilities.
Service Discovery
Kubernetes supports two primary modes of finding a service:
Environment Variables
DNS
Environment Variables
When a pod runs on a node, the kubelet adds a set of standard environment variables for each active service, of the form {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT. For example, for the service redis-master, the following environment variables are available to the pod:
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
DNS
If you have CoreDNS (or another cluster DNS) available in your Kubernetes cluster, a Service named my-service in namespace my-namespace is reachable on the cluster DNS as my-service.my-namespace. This name resolves to the ClusterIP assigned to the Service.
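Continuing the redis-master example above, a client pod can use the DNS name instead of a hard-coded ClusterIP (the namespace, image and environment variable name are illustrative assumptions):

```yaml
# A client pod reaching the redis-master Service via cluster DNS.
apiVersion: v1
kind: Pod
metadata:
  name: redis-client
spec:
  containers:
    - name: client
      image: redis:7   # illustrative image
      env:
        - name: REDIS_ADDR
          # Service DNS name resolves to the Service's ClusterIP.
          value: redis-master.my-namespace:6379
```

Unlike the environment-variable mechanism, DNS works even for services created after the pod started, which is why it is the generally preferred mode.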
Microservices Design Patterns
In the first half of this page, we collated the set of requirements we expect of an execution platform. My execution platform of choice in this regard is Kubernetes, a platform for running cloud-native applications and microservices. This is because Kubernetes is open source and supported by all the major cloud service providers, such as Amazon, Azure and GCP. It is also deployed in many enterprises through the Red Hat OpenShift Container Platform (ROCP). Since many enterprises already have a partnership with Red Hat, they often have ROCP in place as part of moving internal applications to take advantage of the cloud.
Hence, the applications should be developed as cloud native microservices.
Now that we have established the execution platform (Kubernetes), implementing microservices comes with its own set of challenges and best practices. We have already explained how Kubernetes addresses some of these challenges. I will address the others, listed below, in the Microservices Design Patterns section:
Edge server
Distributed Tracing
Circuit breaker