# Day 2: VNF Runtime Operations

## Description of this phase

The objective of this section is to provide guidelines for including in the VNF Package all the elements needed to operate it at runtime and, therefore, to reconfigure it on demand at any point by the end user. Typical operations include reconfiguration of services, KPI monitoring, and the enablement of automatic, closed-loop operations triggered by the monitored status.

The main mechanism to achieve reconfiguration in OSM is to build a Proxy Charm and include it in the descriptor. Monitoring and VNF-specific policy management, on the other hand, can be achieved by specifying the requirements in the descriptor (modifying monitored indicators and policies at runtime is not supported in OSM as of version 9).

## Day-2 Onboarding Guidelines

### Adding Day-2 primitives to the descriptor

Day-2 primitives are actions invoked on demand, so the `config-primitive` block is used instead of the `initial-config-primitive` block at the VNF or VDU level.

For example, a VNF-level set of Day-2 primitives would look like this:

```yaml
vnfd:
  ...
  df:
  - ...
    # VNF/VDU Configuration must use the ID of the VNF/VDU to be configured
    lcm-operations-configuration:
      operate-vnf-op-config:
        day1-2:
        - id: vnf_id
          execution-environment-list:
          - id: operate-vnf
            connection-point-ref: vnf-mgmt
            juju:
              charm: samplecharm
          config-primitive:
          - execution-environment-ref: operate-vnf
            name: restart-service
            parameter:
            - name: offset
              default-value: 10
              data-type: STRING
          - name: clean-cache
            parameter:
            - name: force
              default-value: true
              data-type: BOOLEAN
```

### Building Juju-based (charms) or Helm-based execution environments

Juju-based execution environments (charms) and Helm-based ones used for implementing Day-2 primitives are built in exactly the same way as those used for Day-1 primitives.
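For reference, if the execution environment is a charm, each Day-2 primitive declared in the descriptor is expected to be exposed as a Juju action by that charm. The snippet below is a minimal, hypothetical sketch of what the charm's `actions.yaml` could look like for the `restart-service` and `clean-cache` primitives used in the example above; the descriptions and defaults are illustrative and must match your actual charm implementation.

```yaml
# actions.yaml (inside the charm) -- hypothetical sketch matching the descriptor example above
restart-service:
  description: Restart the main service of the VNF (illustrative description)
  params:
    offset:
      description: Value received from OSM when the primitive is invoked
      type: string
      default: "10"
clean-cache:
  description: Clean the service cache (illustrative description)
  params:
    force:
      description: Whether to force the operation
      type: boolean
      default: true
```

The charm then implements a handler for each of these actions, exactly as it does for Day-1 primitives.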

### Adding monitoring parameters

#### Collecting NFVI metrics

In order to collect NFVI-level metrics associated with any given VDU and store them in the OSM TSDB (based on Prometheus), a set of `monitoring-params` should be declared both globally and at the VDU level.

Only CPU, memory, and network metrics are supported as of OSM version 9. For example:

```yaml
vnfd:
  vdu:
  - ...
    monitoring-parameter:
    - id: vnf_cpu_util
      name: vnf_cpu_util
      performance-metric: cpu_utilization
    - id: vnf_memory_util
      name: vnf_memory_util
      performance-metric: average_memory_utilization
    - id: vnf_packets_sent
      name: vnf_packets_sent
      performance-metric: packets_sent
    - id: vnf_packets_received
      name: vnf_packets_received
      performance-metric: packets_received
```

#### Collecting VNF indicators

As of OSM version 9, collection of VNF indicators is done by using Prometheus Exporters running as "execution environments", which translate into pods instantiated in the same K8s cluster where OSM runs. These pods follow the VNF lifecycle (as charms do) and are dedicated to metric collection. The first implementation supports SNMP Exporters, which can collect scalar values provided by any SNMP MIB/OID.

At the VNF package level:

- The only file that needs to be created before building the package is the `generator.yaml` file in the `helm-charts/chart_name/snmp/` folder, just as in this sample VNF Package, where the chart is called `eechart`.
- The required MIBs should be included in the `helm-charts/chart_name/snmp/mibs` folder.
- The rest of the structure inside the helm-chart folder, as shown in the sample package above, also needs to be included; an illustrative layout is sketched after this list.
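
For orientation, the layout of the Helm chart inside the VNF package could look roughly as follows. This is an illustrative sketch only: the contents of the `snmp/` folder are the OSM-specific part, while `Chart.yaml`, `values.yaml` and `templates/` are part of any standard Helm chart.

```
helm-charts/
└── eechart/
    ├── Chart.yaml
    ├── values.yaml
    ├── templates/
    │   └── ...
    └── snmp/
        ├── generator.yaml
        └── mibs/
            └── ...
```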

The `generator.yaml` file follows the same format as in the open-source Prometheus SNMP Exporter project that OSM uses, documented here. In the following example, the interface metrics from IF-MIB are collected using the "public" SNMP community.

```yaml
# generator.yaml file
modules:
  osm-snmp:
    walk: [interfaces]
    lookups:
      - source_indexes: [ifIndex]
        lookup: ifAlias
      - source_indexes: [ifIndex]
        lookup: ifDescr
      - source_indexes: [ifIndex]
        # Use OID to avoid conflict with Netscaler NS-ROOT-MIB.
        lookup: 1.3.6.1.2.1.31.1.1.1.1 # ifName
    auth:
      # Community string is used with SNMP v1 and v2. Defaults to "public".
      community: public
```

Once the `generator.yaml` file has been created and included in the VNF Package, the descriptor needs to define the Helm-based execution environment that will be launched, as well as the `generate_snmp` primitive to be run, which compiles the MIBs and builds the SNMP Exporter pod configuration.

```yaml
vnfd:
  ...
  df:
  - ...
    vnf-configuration-id: default-vnf-configuration

  vnf-configuration:
  - id: default-vnf-configuration
    execution-environment-list:
    - connection-point-ref: vnf-mgmt
      helm-chart: eechart
      helm-version: v2
      id: monitor
      metric-service: snmpexporter
    initial-config-primitive:
    - execution-environment-ref: monitor
      name: generate_snmp
      seq: 2
    config-primitive:
    - execution-environment-ref: monitor
      name: generate_snmp
```

### Adding scaling operations

Scaling operations happen at the VDU level and can be added with an automatic trigger (closed-loop mode driven by monitoring-parameter thresholds) or with a manual trigger.

In both cases, a scaling-aspect section must be added to the VNF Deployment Flavour. The following example enables VDU scaling based on a manual trigger (OSM API or CLI).

```yaml
vnfd:
  df:
  - ...
    scaling-aspect:
    - aspect-delta-details:
        deltas:
        - id: vdu_autoscale-delta
          vdu-delta:
          - id: hackfest_basic_metrics-VM
            number-of-instances: "1"
      id: vdu_autoscale
      max-scale-level: 1
      name: vdu_autoscale
      scaling-policy:
      - cooldown-time: 120
        name: cpu_util_above_threshold
        scaling-type: manual
```

The following example defines a closed-loop scaling operation based on a specific monitoring-parameter threshold. In this case, the `vdu-profile` should specify both `min-number-of-instances` and `max-number-of-instances` to bound the total number of instances (original plus scaled ones).

```yaml
vnfd:
  df:
  - ...
    vdu-profile:
    - ...
      max-number-of-instances: "2"
      min-number-of-instances: "1"
    scaling-aspect:
    - aspect-delta-details:
        deltas:
        - id: vdu_autoscale-delta
          vdu-delta:
          - id: hackfest_basic_metrics-VM
            number-of-instances: "1" # how many instances will be added / removed
      id: vdu_autoscale
      max-scale-level: 1
      name: vdu_autoscale
      scaling-policy:
      - cooldown-time: 120
        name: cpu_util_above_threshold
        scaling-criteria:
        - name: cpu_util_above_threshold
          scale-in-relational-operation: LT
          scale-in-threshold: 10
          scale-out-relational-operation: GT
          scale-out-threshold: 60
          vnf-monitoring-param-ref: vnf_cpu_util
        scaling-type: automatic
        threshold-time: 10
```

More information about scaling can be found in the OSM Autoscaling documentation.

## Testing Instantiation of the VNF Package

Each of the objectives of this phase can be tested as follows:

- Enabling a way of reconfiguring the VNF on demand: primitives can be called through the OSM API, the dashboard, or directly by running the following OSM client command: `osm ns-action [ns-name] --vnf_name [vnf-index] --action_name [primitive-name] --params '{param-name-1: "param-value-1", param-name-2: "param-value-2", ...}'`

- Monitoring the main KPIs of the VNF: if correctly enabled, metrics will automatically start appearing in the OSM Prometheus database. More information on how to access, visualize, and troubleshoot metrics can be found in the OSM Performance Management documentation.

- Enabling scaling operations: automatic scaling should be tested by making the metric reach the corresponding threshold, while manual scaling can be tested with the following command (which also works when the `scaling-type` has been set to `automatic`): `osm vnf-scale [ns-name] [vnf-name] --scaling-group [scaling-group-name] [--scale-in|--scale-out]`. Concrete invocations based on the examples in this section are sketched after this list.
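
As a concrete illustration, the commands below use the primitive, parameter, and scaling-group names from the descriptor examples in this section; the NS name `mons_ns` and the VNF name/index `1` are hypothetical placeholders that must be replaced with your own instance identifiers.

```bash
# Invoke the Day-2 primitive defined in the descriptor example (instance names are hypothetical)
osm ns-action mons_ns --vnf_name 1 --action_name restart-service --params '{offset: "20"}'

# Manually trigger a scale-out of the scaling group defined in the scaling example
# ("1" stands for the name/member index of the target VNF)
osm vnf-scale mons_ns 1 --scaling-group vdu_autoscale --scale-out
```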