
CloudOne: Definition and Configuration of Kubernetes Deployment Objects - V 3.6

Definition and Configuration of Kubernetes Deployment Objects

Specific details about the deployment of the application as a set of Kubernetes objects in the OpenShift cluster are controlled through a collection of files used to modify the deployment details based on the environment to which the application is deployed. There is currently only one available method to modify the deployed Kubernetes objects based on the environment, and that is via Helm v2 Charts.

Helm Charts

The default method is to use Helm Charts. A Helm Chart is a group of files organized under a directory, by default located at the top of the Git repository and named for the application by the (usually) 3-letter unique AppCode followed by a hyphen and then the name of the application stack. Helm charts are a hierarchy of files with default values which can be overridden via a templating language, allowing environment-specific values to be defined in files referred to as override files, listed in the YAML structure within azure-pipelines.yml for each specific Workspace or Hostspace. In addition to values within these files, the YAML structure can also set specific override values (what would otherwise be values defined within override files) directly.
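For orientation, the chart directory typically contains something like the following. This is an illustrative layout only: values.yaml, templates/deployment.yaml and templates/route.yaml are referenced later in this document, while the remaining entries are typical Helm chart contents and may differ in an actual chart:

   <appCode>-<stack>/
     Chart.yaml                   # standard Helm chart metadata
     values.yaml                  # default values for the chart
     values.<<HOSTSPACE>>.yaml    # optional override files (file names are flexible under V3)
     templates/
       deployment.yaml
       route.yaml
       configmap.yaml             # e.g. for environment variables (see below)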

Most of the contents of the Helm template files (located under the ./templates sub-directory of the Helm chart directory) are pre-defined and should not be modified; however, certain details within these files can be modified as needed by overriding their preset values. Some examples are:

Property (in YAML format)    Purpose / Reason to Modify
replicaCount                 Sets the number of instances (replicas) of the container to run
resources.limits.cpu         CPU limit (in milli-cores) per container/pod
resources.limits.memory      Memory limit (in MB or GB) per container/pod
resources.requests.cpu       Initial CPU allocation (request, in milli-cores) per container/pod
resources.requests.memory    Initial memory allocation (request, in MB or GB) per container/pod
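
For example, these properties might appear in values.yaml (or in an override file) roughly as follows; the numbers shown are illustrative, not recommended defaults:

   replicaCount: 2
   resources:
     limits:
       cpu: 500m        # CPU limit in milli-cores
       memory: 512Mi    # memory limit
     requests:
       cpu: 250m        # initial CPU allocation
       memory: 256Mi    # initial memory allocation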

The above-listed values are initially set in the values.yaml file as unique values for the specific application component. However, these values can also be defined in other files specified for a given workspace or hostspace. Previously, V2 pipeline conventions used specific values.*.yaml files to override the values in the general values.yaml file: values assigned, for example, in the values.workspace.yaml file would take precedence for deployments into the development workspace, while values assigned in the values.<<HOSTSPACE>>.yaml file (where <<HOSTSPACE>> is the name assigned to a specific hostspace) would take precedence for deployments into that particular hostspace. Under V3 the names of these values files are not pre-determined and can be assigned more flexibly by the YAML structures in azure-pipelines.yml. In this manner, settings like the ones described in the table above can be assigned uniquely for specific environments, allowing, for example, for scalability differences between production and test environments.

Any values assigned in these named override files will take precedence over the assignments in the default values.yaml file.
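
For illustration, an override file can be attached to a specific workspace or hostspace through the overrideFiles list in azure-pipelines.yml, roughly along these lines. This is a hedged sketch: key names other than overrideFiles are illustrative, and the exact V3 schema should be taken from the pipeline reference:

   hostspaces:                      # hypothetical structure; key names are illustrative
     - name: stg-1
       overrideFiles:               # override files applied on top of values.yaml for this hostspace
         - values.stg-1.yaml
       # specific override values may also be set directly here, as noted above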

In addition to override files stored within the source code repository (typically within the Helm Chart directory structure), tasks can be defined in the azure-pipelines.yml file to retrieve Secure Files – files containing secrets (values stored in a vault system and not explicitly visible or maintained within the source code repository) – and these retrieved files of secrets can then be specified as override files as well.
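As a rough sketch, a Secure File can be retrieved with the standard Azure DevOps DownloadSecureFile task; how the downloaded file is then wired in as a Helm override file depends on the CloudOne pipeline templates, so the step and file names below are illustrative only:

   steps:
     - task: DownloadSecureFile@1
       name: secretValues                 # downloaded path then available as $(secretValues.secureFilePath)
       inputs:
         secureFile: values.secrets.yaml  # hypothetical Secure File name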

Note that values stored in the Secure Files area cannot be read – they can be created and replaced by members of the development leads and operations groups, but are not editable or viewable.

Route Definition

An application can be exposed in two different ways: either via an OpenShift Route definition or as a mapping through Ambassador, an edge inbound proxy service.

Ambassador Mapping

When using Ambassador, there are two Ambassador proxy infrastructures available:

  • internal - for services to be exposed within the Netapp/CloudOne network
  • external - for services to be exposed outside of the network

If ambassador.internal.enabled is set to true, an Ambassador route mapping will be created on the internal network under the subdomain prd01i.cloudone.netapp.com for workspaces and for the first stage and production spaces (stg-1 and prd-1), and under the subdomain prd02i.cloudone.netapp.com for the second stage and production spaces (stg-2 and prd-2).

If ambassador.external.enabled is set to true, an Ambassador route mapping will be created and exposed on the external network under the subdomain prd01e.cloudone.netapp.com for workspaces and for the first stage and production spaces (stg-1 and prd-1), and under the subdomain prd02e.cloudone.netapp.com for the second stage and production spaces (stg-2 and prd-2).

As multiple environments will share these same subdomains, the namespace into which the application is deployed will be appended to the application name within the DNS name exposed through Ambassador.
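
For example, an internal-only mapping could be requested from an override file as follows; only the two enabled flags are taken from this document, and any other properties under the ambassador key depend on the chart templates:

   ambassador:
     internal:
       enabled: true     # expose on the internal prd01i/prd02i subdomains
     external:
       enabled: false    # do not expose outside the network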

OpenShift Route

When using OpenShift routes with Helm charts, the route is defined in the templates/route.yaml file of the Helm chart and enabled by setting the YAML variable route.enabled to true in an appropriate override file. If no additional attributes are defined in the template file, the default route for an application is defined as follows:

  • The host part of the URL will be composed of $(serviceName)-$(appCode)-$(spaceName), where

    • $(serviceName) is the name of the application or service
    • $(appCode) is the (usually) 3-letter unique application code
    • $(spaceName) is the namespace, i.e. the name of the workspace or hostspace
  • In the case of a Workspace, the host part will also have a unique build identifier (based on the git pull request) appended after the name of the application or service
  • The subdomain portion of the host name will be one of the following:

    • In Workspaces, it will be: east1.ncloud.netapp.com
    • In Hostspaces, it will depend on the location of the hostspace; for example, it may be npc-us-west-dc61.ncloud.netapp.com, but will differ depending on the OpenShift location.

Additional modifications and attributes can be set for the route, following normal OpenShift Route definition rules, in the templates/route.yaml file of the Helm chart.
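
As an example, the route might be enabled from an override file, and an extra attribute such as TLS termination added in templates/route.yaml. The fragment below follows standard OpenShift Route syntax and is a sketch, not the exact contents of the pre-defined template:

   # In an override file: enable the OpenShift route
   route:
     enabled: true

   # In templates/route.yaml: an additional attribute under the Route spec
   spec:
     tls:
       termination: edge    # terminate TLS at the OpenShift router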

Using ConfigMap structures to pass Environment Variables

In addition to setting values within the deployment space, some applications require environment variables to be set prior to their start-up, and these variables may be either application-specific or environment-specific (e.g. different values in a Staging environment versus a Production environment). For these situations, a ConfigMap can be defined within the Helm charts, with its values supplied, for example, through the values.*.yaml files described above.

A ConfigMap is simply a YAML structure listing key-value pairs used to define variables. The environment variables are assigned by defining a set of keys with values in a ConfigMap, which acts as a lookup structure during deployment. An example of a ConfigMap structure is as follows:

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: env-map
   data:
     ENV_VAR_1: env_value_1
     ENV_VAR_2: env_value_2

The ConfigMap is then referenced in the container specification, in an envFrom: section within the templates/deployment.yaml file of the Helm chart, which inserts all of the ConfigMap's keys as environment variable assignments. This is where the actual environment variables are set, with their values looked up from the ConfigMap. The following is an example using Helm charts:

   spec:
     containers:
       - name: {{ .Chart.Name }}
         # ...
         envFrom:
           - configMapRef:
               name: env-map
         # ...

Using the system of overrideFiles for Helm charts in the workspace and hostspace YAML objects in the azure-pipelines.yml file, different values can be assigned to environment variables through the ConfigMaps for different environments.
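
For instance, the ConfigMap data could be driven from a value that is overridden per environment. The sketch below assumes a .Values.env map in the chart's values files; env is an illustrative name rather than a fixed convention:

   # values.yaml (defaults)
   env:
     ENV_VAR_1: default_value

   # values override file for a hostspace, listed under overrideFiles
   env:
     ENV_VAR_1: production_value

   # templates/configmap.yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: env-map
   data:
   {{- range $key, $value := .Values.env }}
     {{ $key }}: {{ $value | quote }}
   {{- end }}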