# Advanced cluster management

## Advanced cluster management with OSM client

This guide contains OSM client commands to operate infrastructure and applications following the new declarative framework introduced in Release SIXTEEN.

### OSM client initialization

```bash
export OSM_HOSTNAME=$(kubectl get -n osm -o jsonpath="{.spec.rules[0].host}" ingress nbi-ingress)
echo "OSM_HOSTNAME: $OSM_HOSTNAME"
```

### VIM/Cloud account operations

#### VIM/Cloud account registration

Example for Azure:

```bash
export OSM_CREDS_FOLDER="${HOME}/vims"
source ${OSM_CREDS_FOLDER}/azure-env.rc
osm vim-create --name azure-site --account_type azure \
  --auth_url http://www.azure.com \
  --user "$AZURE_CLIENT_ID" --password "$AZURE_SECRET" --tenant "$AZURE_TENANT" \
  --description "AZURE site" \
  --creds ${OSM_CREDS_FOLDER}/azure-credentials.json \
  --config "{region_name: westeurope, resource_group: '', subscription_id: '$AZURE_SUBSCRIPTION_ID', vnet_name: 'osm', flavors_pattern: '^Standard'}"
```

File `${OSM_CREDS_FOLDER}/azure-env.rc`:

```bash
export AZURE_CLIENT_ID="**********************************"
export AZURE_TENANT="**********************************"
export AZURE_SECRET="**********************************"
export AZURE_SUBSCRIPTION_ID="**********************************"
```

File `${OSM_CREDS_FOLDER}/azure-credentials.json`:

```json
{
  "clientId": "{************************************}",
  "clientSecret": "************************************",
  "subscriptionId": "************************************",
  "tenantId": "************************************",
  "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
  "resourceManagerEndpointUrl": "https://management.azure.com/",
  "activeDirectoryGraphResourceId": "https://graph.windows.net/",
  "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
  "galleryEndpointUrl": "https://gallery.azure.com/",
  "managementEndpointUrl": "https://management.core.windows.net/"
}
```

The JSON credentials file corresponds to the service principal credentials obtained during the service principal creation:

```bash
az ad sp create-for-rbac --role Contributor --scopes /subscriptions//resourceGroups/
```

#### VIM/Cloud account deletion

```bash
osm vim-delete azure-site
```

### Cluster operations

#### Cluster creation

```bash
CLUSTER_NAME=cluster1
CLUSTER_VM_SIZE=Standard_D2_v2
CLUSTER_NODES=1
REGION_NAME=northeurope
VIM_ACCOUNT=azure-site
RESOURCE_GROUP=
KUBERNETES_VERSION="1.30"
osm cluster-create --node-count ${CLUSTER_NODES} --node-size ${CLUSTER_VM_SIZE} \
  --version ${KUBERNETES_VERSION} --vim-account ${VIM_ACCOUNT} \
  --description "Cluster1" ${CLUSTER_NAME} \
  --region-name ${REGION_NAME} --resource-group ${RESOURCE_GROUP}
```

```bash
osm cluster-list
osm cluster-show cluster1
```

When the cluster is created, the field `resourceState` should be `READY`.

#### Getting kubeconfig

Once the cluster is ready, you can get the credentials in this way:

```bash
osm cluster-show cluster1 -o jsonpath='{.credentials}' | yq -P
# Save them in a file
osm cluster-show cluster1 -o jsonpath='{.credentials}' | yq -P > ~/kubeconfig-cluster1.yaml
# Test it
export KUBECONFIG=~/kubeconfig-cluster1.yaml
kubectl get nodes
```

If the credentials are later renewed by the cloud policy, fresh credentials can be obtained with this command:

```bash
osm cluster-get-credentials cluster1
```

#### Cluster scale

```bash
osm cluster-scale cluster1 --node-count 2
```

#### Cluster deletion

```bash
osm cluster-delete cluster1
```

#### Cluster registration

This should be run on a cluster that was not created by OSM:

```bash
CLUSTER_NAME=cluster2
VIM_ACCOUNT=azure-site
osm cluster-register --creds ~/kubeconfig-${CLUSTER_NAME}.yaml --vim ${VIM_ACCOUNT} --description "My existing K8s cluster" ${CLUSTER_NAME}
```

```bash
osm cluster-list
osm cluster-show cluster2
```

When the cluster is registered, the field `resourceState` should be `READY`.
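The readiness checks above (waiting for `resourceState` to become `READY`) can be scripted instead of re-running `osm cluster-list` by hand. A minimal sketch: `wait_ready` is a hypothetical helper, not an OSM command, and it assumes the `-o jsonpath` output format shown earlier in this guide.

```shell
# wait_ready CMD...: poll CMD every 10 s (up to 10 minutes) until it prints READY.
wait_ready() {
  local tries=0
  while [ "$("$@" 2>/dev/null)" != "READY" ] && [ "$tries" -lt 60 ]; do
    tries=$((tries + 1))
    sleep 10
  done
  [ "$("$@" 2>/dev/null)" = "READY" ]
}

# Example usage (hypothetical): block until the registered cluster is ready
# wait_ready osm cluster-show cluster2 -o jsonpath='{.resourceState}' && echo "cluster2 is READY"
```

The same helper works for any of the `osm ... -o jsonpath` readiness checks in this guide, since they all converge on the literal string `READY`.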
#### Cluster deregistration

```bash
osm cluster-deregister cluster2
```

### OKA operations

#### OKA addition

```bash
# git clone --recursive https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages.git
export OSM_PACKAGES_FOLDER="${HOME}/osm-packages"
export OKA_FOLDER="${OSM_PACKAGES_FOLDER}/oka/apps"
osm oka-add jenkins ${OKA_FOLDER}/jenkins --description jenkins --profile-type app-profile
osm oka-add testapp ${OKA_FOLDER}/testapp --description testapp --profile-type app-profile
osm oka-add testacme ${OKA_FOLDER}/testacme --description testacme --profile-type app-profile
```

```bash
osm oka-list
```

When the OKAs are created, the field `resourceState` should be `READY`.

#### OKA deletion

```bash
osm oka-delete testapp
osm oka-delete testacme
osm oka-delete jenkins
```

#### OKA generation for Helm charts

```bash
osm oka-generate jenkins --base-directory okas --profile-type app-profile \
  --helm-repo-name bitnamicharts --helm-repo-url oci://registry-1.docker.io/bitnamicharts \
  --helm-chart jenkins --version 13.4.20 --namespace jenkins
tree okas/jenkins
# Once generated, you can add it with:
osm oka-add jenkins okas/jenkins --description jenkins --profile-type app-profile
```

### Profile operations

#### Listing profiles

```bash
osm profile-list
```

### KSU operations

#### KSU creation from OKA

You must specify the destination profile:

```bash
export OSM_PACKAGES_FOLDER="${HOME}/osm-packages"
export OKA_FOLDER="${OSM_PACKAGES_FOLDER}/oka/apps"
osm ksu-create --ksu testapp --profile mydemo --profile-type app-profile --oka testapp --params ${OKA_FOLDER}/testapp-params.yaml
osm ksu-create --ksu testacme --profile mydemo --profile-type app-profile --oka testacme --params ${OKA_FOLDER}/testacme-params.yaml
osm ksu-create --ksu jenkins --description "Jenkins" --profile mydemo --profile-type app-profile --oka jenkins --params ${OKA_FOLDER}/jenkins-params.yaml
```

```bash
osm ksu-list
```

When the KSUs are created, the field `resourceState` should be `READY`.
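The `--params` files passed to `osm ksu-create` above provide the instantiation-time inputs for each KSU. As a sketch, such a file can carry the `custom_env_vars` directive covered in the reference section of this guide; the file content and the value below are illustrative assumptions, based on the `ECHO_MESSAGE` variable exposed by the `testacme` OKA:

```yaml
# Hypothetical content of testacme-params.yaml (sketch):
# values for the env vars exposed by the OKA's postBuild block
custom_env_vars:
  ECHO_MESSAGE: "Hello from OSM"
```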
#### KSU deletion

```bash
osm ksu-delete testapp
osm ksu-delete testacme
osm ksu-delete jenkins
```

## Tutorial: how to operate infra and apps with OSM declarative framework

The tutorial assumes that you have added a VIM/Cloud account to OSM.

```bash
export OSM_HOSTNAME=$(kubectl get -n osm -o jsonpath="{.spec.rules[0].host}" ingress nbi-ingress)
```

Create a cluster:

```bash
CLUSTER_NAME=mydemo
CLUSTER_VM_SIZE=Standard_D2_v2
CLUSTER_NODES=2
REGION_NAME=northeurope
VIM_ACCOUNT=azure-site
RESOURCE_GROUP=
KUBERNETES_VERSION="1.30"
osm cluster-create --node-count ${CLUSTER_NODES} --node-size ${CLUSTER_VM_SIZE} \
  --version ${KUBERNETES_VERSION} --vim-account ${VIM_ACCOUNT} \
  --description "Mydemo cluster" ${CLUSTER_NAME} \
  --region-name ${REGION_NAME} --resource-group ${RESOURCE_GROUP}
```

Check progress:

```bash
osm cluster-list
```

When the cluster is created, the field `resourceState` should be `READY`.

Get credentials:

```bash
osm cluster-show mydemo -o jsonpath='{.credentials}' | yq -P > ~/kubeconfig-mydemo.yaml
export KUBECONFIG=~/kubeconfig-mydemo.yaml
# Check that the credentials work
kubectl get nodes
```

Refreshing credentials in case they are renewed by the cloud policy:

```bash
osm cluster-get-credentials mydemo > ~/kubeconfig-mydemo.yaml
export KUBECONFIG=~/kubeconfig-mydemo.yaml
```

OKA addition:

```bash
export OSM_PACKAGES_FOLDER="${HOME}/osm-packages"
export OKA_FOLDER="${OSM_PACKAGES_FOLDER}/oka/apps"
osm oka-add jenkins ${OKA_FOLDER}/jenkins --description jenkins --profile-type app-profile
osm oka-add testapp ${OKA_FOLDER}/testapp --description testapp --profile-type app-profile
osm oka-add testacme ${OKA_FOLDER}/testacme --description testacme --profile-type app-profile
```

Check the progress:

```bash
osm oka-list
```

When the OKAs are created, the field `resourceState` should be `READY`.
KSU creation:

```bash
osm ksu-create --ksu testapp --profile mydemo --profile-type app-profile --oka testapp --params ${OKA_FOLDER}/testapp-params.yaml
osm ksu-create --ksu testacme --profile mydemo --profile-type app-profile --oka testacme --params ${OKA_FOLDER}/testacme-params.yaml
osm ksu-create --ksu jenkins --description "Jenkins" --profile mydemo --profile-type app-profile --oka jenkins --params ${OKA_FOLDER}/jenkins-params.yaml
```

Check the progress:

```bash
osm ksu-list
```

When the KSUs are created, the field `resourceState` should be `READY`.

Check in the destination cluster:

```bash
export KUBECONFIG=~/kubeconfig-mydemo.yaml
watch "kubectl get ns; echo; kubectl get ks -A; echo; kubectl get hr -A"
watch "kubectl get all -n testapp"
watch "kubectl get all -n testacme"
watch "kubectl get all -n jenkins"
```

KSU deletion:

```bash
osm ksu-delete testapp
osm ksu-delete testacme
osm ksu-delete jenkins
```

Cluster scale:

```bash
CLUSTER_NAME=mydemo
osm cluster-scale ${CLUSTER_NAME} --node-count 3
```

Check progress:

```bash
osm cluster-list
```

When the cluster is scaled, the field `resourceState` should return to `READY`.

Cluster deletion:

```bash
osm cluster-delete ${CLUSTER_NAME}
```

## Reference: how to prepare OSM Kubernetes Applications (OKA)

### Reminder on some OSM concepts

- KSU (Kubernetes Software Unit):
  - The minimal unit of state to be synced by the workload cluster from a Git repo.
  - It is a set of manifests placed in the Git repo, associated to a profile, which is in turn associated to a cluster.
- OKA (OSM Kubernetes Application): a blueprint for a KSU, a convenient way to encapsulate the logic for a KSU in a package.

### Introduction

There are three layers that contribute to the final result (Kubernetes SW unit) that will be committed to the Git repository, and therefore applied in the workload cluster:

1. Manifests. They are given.
2. Overlays that allow customization of these manifests. They are changes relative to the original manifests.
3. Modifications to the overlays that OSM can make at instantiation time (KSU deployment).

```mermaid
block-beta
  columns 3
  space blockArrowId<["   "]>(down) space
  space A["Manifests"] space
  space B["Overlays"] space
  space C["OSM modifications"] space
  %% A --> B
  %% B --> C
```

Manifests and overlays can be encapsulated in a package called OKA. Examples of OKAs can be found [here](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/oka?ref_type=heads):

- `apps/namespace`: OKA for namespace creation
- `apps/testacme`: OKA based on Kubernetes manifests
- `apps/jenkins`: OKA based on Helm charts

### Structure of an OKA

An OKA consists of two folders, `manifests` and `templates`:

- `manifests` (the first layer) holds the manifests that are given by the vendor, with optional modifications. Two options here:
  - From Kubernetes manifests
  - From Helm charts
- `templates` (the second layer) holds Flux kustomizations (at least one) pointing to the manifests and defining overlays to customize them.
  - Optionally, it can contain auxiliary Kubernetes manifests to be created at instantiation time.

Manifests are not applied directly in the workload cluster; they are mediated by the Flux kustomizations.

Example OKA structures can be seen below:

- `apps/namespace`: OKA for namespace creation

  ```text
  $ tree apps/namespace
  namespace/
  |-- manifests
  |   `-- namespace.yaml
  `-- templates
      `-- namespace-ks.yaml
  ```

- `apps/testacme`: OKA based on Kubernetes manifests

  ```text
  $ tree testacme
  testacme
  |-- manifests
  |   |-- testacme-deploy.yaml
  |   `-- testacme-svc.yaml
  `-- templates
      `-- testacme-ks.yaml
  ```

- `apps/jenkins`: OKA based on Helm charts

  ```text
  $ tree apps/jenkins
  jenkins
  |-- manifests
  |   |-- bitnamicharts-repo.yaml
  |   `-- jenkins-hr.yaml
  `-- templates
      `-- jenkins-ks.yaml
  ```

The OKA with its manifests and templates is stored in the `sw-catalogs` Git repo.
Meanwhile, the final KSU that will be generated at instantiation time from the templates is stored in the `fleet` Git repo, under the appropriate profile.

### How to create objects in the `manifests` folder

#### Option 1. Set of Kubernetes manifests

Just put the manifests in the folder `manifests`. Below is an example for the `apps/testacme` OKA:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    config: myapp
  name: myapp-deployment
  namespace: mynamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      run: myapp
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: myapp
    spec:
      containers:
      - image: docker.io/hashicorp/http-echo:1.0
        imagePullPolicy: Always
        name: myapp
        ports:
        - containerPort: 5678
          protocol: TCP
        resources: {}
        args:
        - "-text=\"hello\""
      imagePullSecrets:
      - name: docker.io
```

```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    config: myapp
    run: myapp
  name: myapp-http
  namespace: mynamespace
spec:
  ports:
  - name: http5678tls
    port: 5678
    protocol: TCP
    targetPort: 5678
  selector:
    run: myapp
  type: ClusterIP
```

#### Option 2. For Helm charts

Two manifests need to be created: one for the Helm repository and one for the Helm release.
Below is an example for the `apps/jenkins` OKA:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: bitnamicharts
  namespace: jenkins
spec:
  interval: 10m0s
  type: oci
  url: oci://registry-1.docker.io/bitnamicharts
```

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
spec:
  chart:
    spec:
      chart: jenkins
      version: '13.4.20'
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: bitnamicharts
        namespace: jenkins
  interval: 3m0s
  targetNamespace: jenkins
  values: {}
```

### How to create the Kustomizations in the `templates` folder

The Flux Kustomization should be prepared to point to the manifests in the `sw-catalogs` repo:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: ${APPNAME}
  namespace: flux-system
spec:
  interval: 1h0m0s
  path: ./apps/jenkins/manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: sw-catalogs
    namespace: flux-system
```

Use the right path pointing to the folder where the manifests of your OKA are located (`./apps/jenkins/manifests` in this case). For the moment, we will skip the meaning of the variable `${APPNAME}`.

Then, we need to define in the Kustomization the overlay patches (second layer) that will be applied to the manifests.

#### How to create overlays in Flux Kustomizations

There are three mechanisms to create overlays in [Flux kustomizations](https://fluxcd.io/flux/components/kustomize/kustomizations/):

- Overlay patches:
  - They follow the mechanisms described [here](https://fluxcd.io/flux/components/kustomize/kustomizations/#patches).
  - They are added with the directive `patches`.
  - Example:

    ```yaml
    patches:
    - target:
        kind: Namespace
        version: v1
        name: mynamespace
      patch: |-
        - op: replace
          path: /metadata/name
          value: finalnamespace
    ```

- Post-build variable substitution: a simple parameterization mechanism that allows replacing values defined in the manifests. It can be compared to Helm values, but is less powerful.
  - It follows the mechanisms described [here](https://fluxcd.io/flux/components/kustomize/kustomizations/#post-build-variable-substitution).
  - It is added with the directive `postBuild`.
  - Manifests should be updated to use the variables that will be substituted through the `postBuild` directive. For instance, the manifests for the `apps/testacme` OKA [here](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/oka/apps/testacme/manifests?ref_type=heads) use the variables `${appname}`, `${target_ns}` and `${echo_message}`.
  - Example:

    ```yaml
    postBuild:
      substitute:
        appname: myappname
        target_ns: mynamespace
        echo_message: Hello everybody
    ```

- High-level directives in the Kustomization:
  - `targetNamespace`: optional field to specify the target namespace for all the objects that are part of the Kustomization.
  - `commonMetadata`: optional field used to specify metadata (labels and annotations) that should be applied to all the Kustomization's resources.
  - `namePrefix` and `nameSuffix`: optional fields used to specify a prefix and a suffix to be added to the names of all the resources in the Kustomization.

#### How to expose parameters that can be defined at instantiation time

The previous section detailed the parameterization mechanism based on `postBuild`, which allows replacing values defined in the manifests. To expose some of these parameters at instantiation time, custom env vars can be defined, and OSM will substitute them at instantiation time. In the example below, three custom env vars are defined: `APPNAME`, `TARGET_NS` and `ECHO_MESSAGE`. As a best practice, we recommend using capital letters for the exposed env vars, to differentiate them from the variables used in the manifests for the `postBuild` substitutions.
```yaml
postBuild:
  substitute:
    appname: ${APPNAME}
    target_ns: ${TARGET_NS}
    echo_message: ${ECHO_MESSAGE}
```

### How to modify kustomization overlays at instantiation time (third layer)

At instantiation time, when KSUs are created, OSM takes the files defined in the `templates` folder and applies a third layer of modifications before adding those files to the `fleet` repo. OSM has some directives that can be applied at instantiation time to do those transformations, which allow:

- Replacement of variables.
- Dynamic generation of objects, e.g. encrypted secrets.
- Addition of extra overlay patches.

#### OSM transformations that can be applied to any KSU to replace values

Exposed variables can be replaced with the directive `custom_env_vars` at instantiation time. By default, there are two pre-defined parameters in OSM that are always replaced:

- The KSU name, which will replace `APPNAME` in the `templates` folder. It is always defined.
- `namespace`, which will replace `TARGET_NS` in the `templates` folder. If not defined, it defaults to `default`.

The rest of the exposed variables can be provided at instantiation time, like this:

```yaml
custom_env_vars:
  ECHO_MESSAGE: hello to everybody
```

#### OSM transformations that can only be applied to Helm releases

There are three directives that can be used at instantiation time to provide values to a Helm release:

- `inline_values`: OSM adds an overlay patch to the Helm release with that content.
- `configmap_values`: values come from a ConfigMap.
- `secret_values`: values come from a Secret.

The directives correspond to the three ways values can be supplied to a Flux HelmRelease object: inline (with `inline_values`), from a ConfigMap (with `configmap_values`) and from a Secret (with `secret_values`). You are not forced to use one or the other; all of them can be used together. What you need to take into account is that the patches are applied in a specific order:

1. `inline_values`
2. `configmap_values`
3. `secret_values`

### Recommended namespace management in OSM

Although it is technically possible to create the namespaces together with the KSUs (as part of the `templates` folder), it is recommended to manage each namespace with an independent KSU (based on the `apps/namespace` OKA). This guarantees that multiple KSUs can be created from the same OKA in the same namespace. Otherwise, if two KSUs are deployed in the same namespace, creation will work (there is no conflict if the namespace already exists), but there will be issues when deleting one of the KSUs, because the namespace cannot be deleted while it is in use by the other KSUs.

### OSM commands to do the operations using the pre-existing OKAs

```bash
export OKA_FOLDER="${OSM_PACKAGES_FOLDER}/oka/apps"
osm oka-add namespace ${OKA_FOLDER}/namespace --description namespace --profile-type app-profile
osm oka-add jenkins ${OKA_FOLDER}/jenkins --description jenkins --profile-type app-profile
osm oka-add testacme ${OKA_FOLDER}/testacme --description testacme --profile-type app-profile
osm ksu-create --ksu namespace --profile mydemo --profile-type app-profile --oka namespace
osm ksu-create --ksu testacme --profile mydemo --profile-type app-profile --oka testacme --params ${OKA_FOLDER}/testacme-params.yaml
osm ksu-create --ksu jenkins --description "Jenkins" --profile mydemo --profile-type app-profile --oka jenkins --params ${OKA_FOLDER}/jenkins-params.yaml
```
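As an illustration of the Helm-release directives described earlier, a params file such as `jenkins-params.yaml` could pass chart values to the `jenkins` HelmRelease through `inline_values`, alongside any exposed env vars. The concrete values below are illustrative assumptions, not required settings:

```yaml
# Hypothetical content of jenkins-params.yaml (sketch):
# values merged into the HelmRelease as an overlay patch
inline_values:
  service:
    type: ClusterIP
```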