Springboot Application Stack - V 4.7
The Springboot application stack is used to build and deploy Java applications in the Springboot framework and produce deployment-ready containers for the CloudOne environment.
The containers produced in this application stack share common requirements with other application stacks deployed to the CloudOne environment. These shared requirements include:
- Security scanning for any vulnerabilities
- Code coverage scanning for test quality
- Staging of the application container in the appropriate Docker container repositories and tagged according to CloudOne conventions
- Completion of Unit Testing and User Acceptance Testing
- Gated approvals before allowing deployments into pre-production and production environments
- Audit trail and history of deployment within the CloudOne CI/CD Pipeline
Because of these shared requirements, the Springboot application stack is provided to support the build and deployment of Java/Springboot applications in a manner that integrates with CloudOne requirements and processes. Deployment begins with a Continuous Integration stage in the pipeline, followed by the Continuous Deployment stages. The Continuous Integration stage builds the application, runs scans to check for test coverage and security vulnerabilities, and stages the container in the appropriate Docker repository, ready for deployment. Subsequent pipeline stages deploy the application to the appropriate target Kubernetes spaces.
Getting Started in the Azure DevOps environment
Refer to the following link to learn about getting started in the Azure DevOps environment: Getting Started in Azure DevOps Environment
Source Code Repository Structure
The structure of the source code repository for the Springboot application stack will contain a source directory tree with the application source code, and one or two directory trees with details for deployment, as follows:
- source - this directory will contain a pom.xml file for use by Maven to build the application and a src sub-directory containing the source code of the application itself
- appCode-serviceName – (named as the AppCode followed by the application or service name) contains the Helm chart deployment materials
Also at the top of the repository is a file called azure-pipelines.yml. This file contains reference to the appropriate version of the CI/CD pipeline logic, some variables unique to the application (e.g. container version) as well as YAML data structures providing key information about the environments into which to deploy the application and the sequence of events to complete the deployment (e.g. dependencies, additional steps to retrieve secrets to be passed to the deployed container, etc).
Additional items in this repository will generally not be modified and should not be changed to avoid risk of breaking the pipeline workflows.
Application Configuration into the CloudOne CI/CD Pipeline
After navigating to the appropriate repository, clone it to a local development environment (i.e., a local workstation) for the actual software development and configuration work for the Springboot application to be deployed. Configuring the pipeline includes setting the appropriate container version and JDK version (reflected in the appVersion and jdkVersion variables under the parameters: section passed to the pipeline code in the extends YAML section) and defining the target workspace and hostspaces, plus any dependencies between hostspaces or prerequisite deployment objects. Once the application is ready to be deployed into the CloudOne workspace and the new application version has been set, create a Git pull request for the updates. Creating this pull request triggers the Continuous Integration pipeline.
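Putting these pieces together, a minimal azure-pipelines.yml might look like the following sketch. The application name, versions, and space names here are hypothetical placeholders, not actual CloudOne values:

```yaml
# Illustrative minimal azure-pipelines.yml (names and versions are placeholders)
resources:
  repositories:
    - { repository: templates, type: git, name: devexp-engg/automation, ref: release/v4.7 }
    - { repository: spaces, type: git, name: spaces, ref: devops/v1.0 }

extends:
  template: devops/springboot.yml@spaces
  parameters:
    appVersion: 1.0.0            # application/container version (placeholder)
    jdkVersion: jdk21            # JDK used to build and run the app
    acceptanceSpace: myapp-stg   # hypothetical space name
    spaces:
      workspace:
        helm:
          overrideFiles: |
            myapp/values.workspace.yaml
```

The later sections of this document cover each of these elements in detail.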
Application Logging
Certain log format guidelines should be followed in order to ensure the capture of logs. All logging from the application should go to standard output (also sometimes referred to as “console” in the context of common logging libraries). In order for log entries to be picked up and collected into the logging collection infrastructure (Splunk), the log entries must match a particular format. That format looks like the following:
[ISO8601-Date] logLevel=level pid=pid thread=thread class=logger-class message=free-style-message
where:
- level is the logging level (INFO, WARN, etc.)
- pid is the process ID
- thread is the thread ID
- logger-class is the class calling the logger (indicates where in the code the log message was issued)
- free-style-message is any additional logging content
The format can be further extended by adding additional key=value pairs before the free-style log message.
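For example, a conforming log entry (with hypothetical values, including an extra requestId key=value pair to illustrate the extension) might look like:

```
[2024-05-14T10:23:45.123Z] logLevel=INFO pid=1 thread=http-nio-8080-exec-1 class=com.example.demo.OrderController requestId=a1b2c3 message=Order created successfully
```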
Continuous Integration and Continuous Delivery Pipelines
Please note that the document “CloudOne Concepts and Procedures” contains more details about the specific flow of the application through the CI/CD pipelines
Once a pull request has been created against the Git repository, it will trigger the start of the CI/CD pipeline in order to build, scan and prepare the application to deploy into the Workspace. Although the CI/CD pipeline flow is automatically triggered, there are times when it needs to be manually run (or re-run) and the pipeline should be examined for the results of its preparation of the application and deployments. Details for examining the CI/CD pipeline and analyzing its results can be found here: Continuous Integration and Continuous Delivery Pipelines
Added JDK21 support
The Springboot catalog now supports the Red Hat OpenJDK 21 image, which is the default in this pipeline release. The available JDK versions are 11, 17, and 21. We have also improved memory usage during build time by adding Maven build parameters to address exit code 137 (out-of-memory) issues in large applications.
Upgrading to Pipeline v4.7
CloudOne DevOps Pipeline v4.7 introduces CyberArk integration with Rancher. Conjur performs certificate-based authentication with each Rancher cluster and securely supplies secrets to each namespace/application within the cluster. Secrets are securely stored in the CyberArk vault. For more information, refer to the Conjur Integration Document.
- Set up CyberArk integration (link)
- Remove the ambassador.yaml file from the templates folder.
- Add the release 4.7 reference in azure-pipelines.yml:
resources:
  repositories:
    - { repository: templates, type: git, name: devexp-engg/automation, ref: release/v4.7 }
    - { repository: spaces, type: git, name: spaces, ref: devops/v1.0 }
Note: The security scan stage was released as a minor update in v4.5. To read more about how this feature works, please refer to <a href="https://selfassist.cloudone.netapp.com/docs/secret-scanning-pipeline">New Security Stage of Pipeline: CrowdStrike Image Scanning and Gitleaks Secret Scanning</a>.
To transition to v4.7, we recommend using the dxpctl tool.
- Access the latest tool here
- Downloadable Version
- Docker Build/Run Locally
Pipeline Definition: azure-pipelines.yml file
The logic for the v4.7 Springboot CI/CD pipeline is a common, shared code base for all applications; however, the configuration that applies the common logic to a specific application is defined in the top-level directory of the application's source code repository, in a file named azure-pipelines.yml.
The azure-pipelines.yml file is a YAML file using standard YAML syntax. For general information about YAML structures, there are many available resources, including the tutorial at this link: YAML Tutorial.
There are certain required data elements that must be defined within the azure-pipelines.yml file as a prerequisite to the CI/CD pipeline running, while other elements are optional and used to modify the standard behavior of the CI/CD pipeline.
More details about the azure-pipelines.yml file can be found here: General Structure of the azure-pipelines.yml File
The reference to the logic for the v4 pipeline for the Springboot application, including the repository and branch, is configured under the resources/repositories variable:
resources:
  repositories:
    - { repository: templates, type: git, name: devexp-engg/automation, ref: release/v4.7 }
    - { repository: spaces, type: git, name: spaces, ref: devops/v1.0 }
Application-Specific Pipeline Configuration
The extends YAML object is a complex object consisting of additional YAML objects. This object is used to extend the pipeline logic (referenced by the repository defined in the resources object) by (a) referencing the correct appstack pipeline entry point (devops/springboot.yml for the Springboot pipeline) and (b) passing a set of YAML objects as parameters to influence the behavior of the pipeline to meet an application team's specific needs.
The extends YAML object consists of 2 objects beneath it:
- template
- parameters
The template YAML object is a single value set to the initial entry point for the v4 pipeline for the Springboot appstack, so it should always be defined as follows:
extends:
  template: devops/springboot.yml@spaces
Please refer to the upgrade document to learn more about springboot.yml@templates, which holds the required metadata for the pipeline.
The parameters YAML object is defined immediately following the template object and at the same indentation level. This is the object that requires the most attention and definition to be set up.
The parameters YAML object includes a couple of variables required for the Springboot application stack. The first of these is the appVersion variable, which defines the version of the application to be built and deployed. The second, jdkVersion, defines the version of Java (the JDK) used to compile and run the application; it accepts one of three valid values: jdk11, jdk17, or jdk21 (the default in this release). The following is an example of how these variables would be defined (including all of the extends object preceding it):
extends:
  template: devops/springboot.yml@spaces
  parameters:
    appVersion: 4.7.x
    jdkVersion: "jdk21"
Dockerfile Changes
In pipeline v4.7, we switched to a different vendor to provide us with our JDK images. This vendor does not include a CMD or ENTRYPOINT configuration, as our last one did. You will need to include the CMD at the bottom of the Dockerfile:
CMD ["java", "-jar", "app.jar"]
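For context, a minimal Dockerfile incorporating this change might look like the sketch below. The base image reference and JAR path are illustrative placeholders, not the actual CloudOne registry values:

```dockerfile
# Illustrative Dockerfile sketch; base image and JAR path are placeholders
FROM registry.example.com/openjdk-21-runtime:latest
WORKDIR /app
COPY target/*.jar app.jar
# Required: the new vendor JDK image defines no CMD or ENTRYPOINT
CMD ["java", "-jar", "app.jar"]
```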
Defining hostspaces for application deployment
Pipeline v4.7 simplifies the structure of azure-pipelines.yml by defining all spaces (workspace, devint, db/hostspaces) as a single list. The older format is still supported, but to continue using it, ref and k8s must be removed from spaces. The newer format is strongly recommended. acceptanceSpace must be specified in the file. For example:
parameters:
  acceptanceSpace: devexp-ae103-stg
  spaces:
    workspace:
      helm:
        overrideFiles: |
          devexp-springboot/values.workspace.yaml
    devexp-ae101-dnt:
      helm:
        overrideFiles: |
          devexp-springboot/values.hostspace.yaml
    devexp-ae103-stg:
      helm:
        overrideFiles: |
          devexp-springboot/values.hostspace.yaml
    devexp-npc02-prd:
      helm:
        overrideFiles: |
          devexp-springboot/values.hostspace.yaml
Add monitoring metrics to Springboot
Spring Boot Actuator has been added to the boilerplate to support monitoring of Springboot APIs. Refer here.
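If an application needs to expose these metrics itself, a typical setup (an assumption; the boilerplate may already include this) adds the spring-boot-starter-actuator dependency to the pom.xml and exposes the desired endpoints in application.yml:

```yaml
# application.yml — illustrative Actuator exposure (the endpoint list is an example)
management:
  endpoints:
    web:
      exposure:
        include: health,info,metrics
  endpoint:
    health:
      show-details: when-authorized
```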
Skipping deployments
Skipping deployments is a handy new feature introduced in the v4.x pipeline. During initial builds or testing, deployments can be excluded for all spaces (workspace, devint, db/hostspaces) even if they are configured in azure-pipelines.yml. The pipeline will then only perform the application build and scans.
extends:
  template: devops/springboot.yml@spaces
  parameters:
    system:
      skipDeploy: true
Horizontal Pod Auto-scaler
Pipeline v4.7 includes configuration of Horizontal Pod Auto-scaling for any deployment. Please refer to the documentation for more details. To configure HPA, use the template below in values.yaml for the required host space. The commented example below starts the deployment with a single pod, which can be scaled up to 3 pods when average CPU usage crosses 70%.
# If you are using the Horizontal Pod Auto-scaler, you must disable replicaCount
replicaCount: 2
# hpa:
#   min: 1
#   max: 3
#   cpu: 70
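With HPA active, the same values.yaml would comment out replicaCount and uncomment the hpa block, assuming the key names shown in the template above:

```yaml
# replicaCount disabled in favor of HPA
# replicaCount: 2
hpa:
  min: 1    # minimum number of pods
  max: 3    # maximum number of pods
  cpu: 70   # target average CPU utilization (%)
```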
PVC for application stacks
Persistent Volume Claims (PVCs) provide support for persistent storage in application stacks deployed in Kubernetes clusters. This feature allows developers to define and configure persistent storage requirements for their applications using PVCs.
persistence:
  enabled: true
  # storageClass: ""
  accessMode: ReadWriteMany
  # subPath: appdata
  size: 25Gi
  mountPath: /data
DR changes
With the introduction of pipeline v4.7, we have implemented DR capabilities that require changes in the Helm chart. Please refer to the document below for the necessary modifications and kindly update your Helm chart accordingly.
https://selfassist.cloudone.netapp.com/docs/getting-started-with-dr
Please refer to the following document to prepare your codebase for disaster recovery.
https://selfassist.cloudone.netapp.com/docs/configmap-ambassador-update
Pod Anti-Affinity
Pipeline v4.7 enhances pod scheduling on the cluster with the addition of pod anti-affinity. For multi-pod applications, this ensures pods are scheduled across different nodes within a cluster.
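Conceptually, the pipeline's Helm templates generate a Kubernetes affinity rule along these lines (a sketch only; the label selector and the exact rule type the pipeline uses are assumptions):

```yaml
# Illustrative pod anti-affinity: prefer spreading replicas across nodes
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-springboot-app        # hypothetical pod label
          topologyKey: kubernetes.io/hostname   # one pod per node where possible
```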
Detailed Pipeline Configuration
The remainder of the configuration work in the azure-pipelines.yml file focuses primarily on defining the target workspace and hostspaces for the springboot application and providing details for these spaces, any additional tasks needed to prepare the deployments, and dependencies that will dictate the sequence of deployments in these spaces beyond the established Cloudone environment deployment approval processes. Details for configuring these elements of the pipeline can be found here: Pipeline Configuration Details
Kubernetes Deployment Objects
In order to deploy an application, a number of Kubernetes objects must be defined and deployed. The definition of these objects is controlled through a set of files in one of two forms: either as Helm charts (the default method) or as Kustomize files. Information about the contents and customization of these Kubernetes deployment objects can be found here: Kubernetes Deployment Objects
Troubleshooting
If something fails to deploy, the information about why the deployment failed (or was not even initiated) will be found in the logs of the CI/CD pipeline and can be tracked down using the methods described earlier in the “Continuous Integration and Continuous Delivery Pipelines” section.
However, additional information may be required either to better troubleshoot a failed deployment or to investigate the runtime behavior of an application that has been successfully deployed. In those cases, much of the information can be found in the Rancher web console. Information about navigating and analyzing information from the Rancher web console can be found here: Navigating the Rancher Web Console