# OSM Usage
## Deploying your first Network Service
Before going on, download the required VNF and NS packages: [hackfest_basic_vnf](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_basic_vnf) and [hackfest_basic_ns](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_basic_ns).
### Onboarding a VNF
The onboarding of a VNF in OSM involves preparing and adding the corresponding VNF package to the system. This process also assumes, as a pre-condition, that the corresponding VM images are available in the VIM(s) where it will be instantiated.
#### Uploading VM image(s) to the VIM(s)
In this example, only a vanilla Ubuntu16.04 image is needed. It can be obtained from the following link: <https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img>
You will need to upload the image into the VIM. Instructions differ from one VIM to another (please check the reference for your type of VIM).
For instance, this is the OpenStack command for uploading images:
```bash
openstack image create --file="./xenial-server-cloudimg-amd64-disk1.img" --container-format=bare --disk-format=qcow2 ubuntu16.04
```
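You can verify that the image was uploaded correctly with:
```bash
openstack image list | grep ubuntu16.04
```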
And this one is the appropriate command in OpenVIM:
```bash
#copy your image to the NFS shared folder (e.g. /mnt/openvim-nfs)
cp ./xenial-server-cloudimg-amd64-disk1.img /mnt/openvim-nfs/
openvim image-create --name ubuntu16.04 --path /mnt/openvim-nfs/xenial-server-cloudimg-amd64-disk1.img
```
#### Onboarding a VNF Package
- From the UI:
- Go to 'VNF Packages' on the 'Packages' menu to the left
- Drag and drop the VNF package file `hackfest_basic_vnf.tar.gz` in the importing area.

- From OSM client:
```bash
osm nfpkg-create hackfest_basic_vnf.tar.gz
osm nfpkg-list
```
### Onboarding a NS Package
- From the UI:
- Go to 'NS Packages' on the 'Packages' menu to the left
- Drag and drop the NS package file `hackfest_basic_ns.tar.gz` in the importing area.

- From OSM client:
```bash
osm nspkg-create hackfest_basic_ns.tar.gz
osm nspkg-list
```
### Instantiating the NS
#### Instantiating a NS from the UI
- Go to 'NS Packages' on the 'Packages' menu to the left
- Next to the NS descriptor to be instantiated, click on the 'Instantiate NS' button.

- Fill in the form, adding at least a name, description and selecting the VIM:

#### Instantiating a NS from the OSM client
```bash
osm ns-create --ns_name <ns_name> --nsd_name hackfest_basic-ns --vim_account <vim_account>
osm ns-list
```
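For example, with a VIM account named `openstack1` (the NS name is illustrative):
```bash
osm ns-create --ns_name hf-basic --nsd_name hackfest_basic-ns --vim_account openstack1
osm ns-show hf-basic
```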
## Advanced instantiation: using instantiation parameters
OSM allows the parametrization of NS or NSI upon instantiation (Day-0 and Day-1), so that the user can easily decide on the key parameters of the service without any need of changing the original set of validated packages.
Thus, when creating a NS instance, it is possible to pass instantiation parameters to OSM using the `--config` option of the client or the `config` parameter of the UI. In this section we will illustrate, through some of the existing examples, how to specify those parameters using the OSM client. Since this is one of the most powerful features of OSM, this section is intended to provide a thorough overview of this functionality with practical use cases.
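The same parameters can also be written to a YAML file and passed with the client's `--config_file` option instead of inline; a minimal sketch (the file name and values are illustrative):
```bash
cat > params.yaml <<'EOF'
vld:
- name: mgmtnet
  vim-network-name: mgmt
EOF
osm ns-create --ns_name hf-basic --nsd_name hackfest_basic-ns --vim_account openstack1 --config_file params.yaml
```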
### Specify a VIM network name for a NS VLD
In a generic way, the mapping can be specified in the following way, where `vldnet` is the name of the network in the NS descriptor and `netVIM1` is the existing VIM network that you want to use:
```yaml
--config '{vld: [ {name: vldnet, vim-network-name: netVIM1} ] }'
```
You can try it using one of the examples of the hackfest (**packages: [hackfest_basic_vnf](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_basic_vnf), [hackfest_basic_ns](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_basic_ns)); images: [ubuntu16.04](https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img)**) in the following way:
```bash
osm ns-create --ns_name hf-basic --nsd_name hackfest_basic-ns --vim_account openstack1 --config '{vld: [ {name: mgmtnet, vim-network-name: mgmt} ] }'
```
### Specify a VIM network name for an internal VLD of a VNF
In this scenario, the mapping can be specified in the following way, where `"1"` is the member vnf index of the constituent vnf in the NS descriptor, `internal` is the name of `internal-vld` in the VNF descriptor and `netVIM1` is the VIM network that you want to use:
```yaml
--config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, vim-network-name: netVIM1} ] } ] }'
```
You can try it using one of the examples of the hackfest (**packages: [hackfest_multivdu_vnf](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_multivdu_vnf), [hackfest_multivdu_ns](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_multivdu_ns)); images: [US1604](https://osm-download.etsi.org/ftp/images/tests/US1604.qcow2)**) in the following way:
```bash
osm ns-create --ns_name hf-multivdu --nsd_name hackfest_multivdu-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, vim-network-name: mgmt} ] } ] }'
```
### Specify a VIM network (provider network) to be created with specific parameters (physnet label, encapsulation type, segmentation id) for a NS VLD
The mapping can be specified in the following way, where `vldnet` is the name of the network in the NS descriptor, `physnet1` is the physical network label in the VIM, `vlan` is the encapsulation type and `400` is the segmentation ID that you want to use:
```yaml
--config '{vld: [ {name: vldnet, provider-network: {physical-network: physnet1, network-type: vlan, segmentation-id: 400} } ] }'
```
You can try it using one of the examples of the hackfest (**packages: [hackfest_basic_vnf](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_basic_vnf), [hackfest_basic_ns](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_basic_ns)); images: [ubuntu16.04](https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img)**) in the following way:
```bash
osm ns-create --ns_name hf-basic --nsd_name hackfest_basic-ns --vim_account openstack1 --config '{vld: [ {name: mgmtnet, provider-network: {physical-network: physnet1, network-type: vlan, segmentation-id: 400} } ] }'
```
### Specify IP profile information and IP for a NS VLD
In a generic way, the mapping can be specified in the following way, where `datanet` is the name of the network in the NS descriptor, `ip-profile` is where you have to fill in the associated parameters from the data model ([NS data model](http://osm-download.etsi.org/ftp/osm-doc/etsi-nfv-nsd.html)), and `vnfd-connection-point-ref` is the reference to the connection point:
```yaml
--config '{vld: [ {name: datanet, ip-profile: {...}, vnfd-connection-point-ref: {...} } ] }'
```
TODO: update example with latest Hackfest
You can try it using one of the examples of the hackfest (**descriptors: [hackfest2-vnf](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_vnfd.tar.gz), [hackfest2-ns](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_nsd.tar.gz); images:[ubuntu1604](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/images/US1604.qcow2), presentation: [modeling multi-VDU VNF](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/presentations/20180626%20OSM%20Hackfest%20-%20Session%203%20-%20Modeling%20multi-VDU%20VNF%20v2.pdf)**) in the following way:
```bash
osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account openstack1 --config '{vld: [ {name: datanet, ip-profile: {ip-version: ipv4 ,subnet-address: "192.168.100.0/24", gateway-address: "0.0.0.0", dns-server: [{address: "8.8.8.8"}],dhcp-params: {count: 100, start-address: "192.168.100.20", enabled: true}}, vnfd-connection-point-ref: [ {member-vnf-index-ref: "1", vnfd-connection-point-ref: vnf-data, ip-address: "192.168.100.17"}]}]}'
```
### Specify IP profile information for an internal VLD of a VNF
In this scenario, the mapping can be specified in the following way, where `"1"` is the member vnf index of the constituent vnf in the NS descriptor, `internal` is the name of the internal-vld in the VNF descriptor and `ip-profile` is where you have to fill in the associated parameters from the data model ([VNF data model](http://osm-download.etsi.org/ftp/osm-doc/etsi-nfv-vnfd.html)):
```yaml
--config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, ip-profile: {...} } ] } ] }'
```
TODO: update example with latest Hackfest
You can try it using one of the examples of the hackfest (**descriptors: [hackfest2-vnf](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_vnfd.tar.gz), [hackfest2-ns](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_nsd.tar.gz); images:[ubuntu1604](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/images/US1604.qcow2), presentation: [modeling multi-VDU VNF](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/presentations/20180626%20OSM%20Hackfest%20-%20Session%203%20-%20Modeling%20multi-VDU%20VNF%20v2.pdf)**) in the following way:
```bash
osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, ip-profile: {ip-version: ipv4 ,subnet-address: "192.168.100.0/24", gateway-address: "0.0.0.0", dns-server: [{address: "8.8.8.8"}] ,dhcp-params: {count: 100, start-address: "192.168.100.20", enabled: true}}}]}]} '
```
### Specify IP address and/or MAC address for an interface
#### Specify IP address for an interface
In this scenario, the mapping can be specified in the following way, where `"1"` is the member vnf index of the constituent vnf in the NS descriptor, `internal` is the name of the internal-vld in the VNF descriptor, `ip-profile` is where you have to fill in the associated parameters from the data model ([VNF data model](http://osm-download.etsi.org/ftp/osm-doc/etsi-nfv-vnfd.html)), `id1` is the internal-connection-point id and `a.b.c.d` is the IP address that you have to specify for this scenario:
```yaml
--config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, ip-profile: {...}, internal-connection-point: [{id-ref: id1, ip-address: "a.b.c.d"}] } ] } ] }'
```
TODO: update example with latest Hackfest
You can try it using one of the examples of the hackfest (**descriptors: [hackfest2-vnf](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_vnfd.tar.gz), [hackfest2-ns](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_nsd.tar.gz); images:[ubuntu1604](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/images/US1604.qcow2), presentation: [modeling multi-VDU VNF](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/presentations/20180626%20OSM%20Hackfest%20-%20Session%203%20-%20Modeling%20multi-VDU%20VNF%20v2.pdf)**) in the following way:
```bash
osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account ost4 --config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, ip-profile: {ip-version: ipv4 ,subnet-address: "192.168.100.0/24", gateway-address: "0.0.0.0", dns-server: [{address: "8.8.8.8"}] ,dhcp-params: {count: 100, start-address: "192.168.100.20", enabled: true}}, internal-connection-point: [{id-ref: mgmtVM-internal, ip-address: "192.168.100.3"}]}]}]}'
```
#### Specify MAC address for an interface
In this scenario, the mapping can be specified in the following way, where `"1"` is the member vnf index of the constituent vnf in the NS descriptor, `id1` is the id of VDU in the VNF descriptor and `interf1` is the name of the interface to which you want to add the MAC address:
```yaml
--config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: id1, interface: [{name: interf1, mac-address: "aa:bb:cc:dd:ee:ff" }]} ] } ] } '
```
TODO: update example with latest Hackfest
You can try it using one of the examples of the hackfest (**descriptors: [hackfest1-vnf](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_1_vnfd.tar.gz), [hackfest1-ns](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_1_nsd.tar.gz); images: [ubuntu1604](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/images/US1604.qcow2), presentation: [creating a basic VNF and NS](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/presentations/20180626%20OSM%20Hackfest%20-%20Session%202%20-%20Creating%20a%20basic%20VNF%20and%20NS.pdf)**) in the following way:
```bash
osm ns-create --ns_name hf12 --nsd_name hackfest1-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: hackfest1VM, interface: [{name: vdu-eth0, mac-address: "52:33:44:55:66:21"}]} ] } ] } '
```
#### Specify IP address and MAC address for an interface
In the following scenario, we will bring together the two previous cases.
TODO: update example with latest Hackfest
You can try it using one of the examples of the hackfest (**descriptors: [hackfest2-vnf](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_vnfd.tar.gz), [hackfest2-ns](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_nsd.tar.gz); images:[ubuntu1604](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/images/US1604.qcow2), presentation: [modeling multi-VDU VNF](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/presentations/20180626%20OSM%20Hackfest%20-%20Session%203%20-%20Modeling%20multi-VDU%20VNF%20v2.pdf)**) in the following way:
```bash
osm ns-create --ns_name hf12 --nsd_name hackfest2-ns --vim_account ost4 --config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal , ip-profile: {ip-version: ipv4, subnet-address: "192.168.100.0/24", gateway-address: "0.0.0.0", dns-server: [{address: "8.8.8.8"}] , dhcp-params: {count: 100, start-address: "192.168.100.20", enabled: true} }, internal-connection-point: [ {id-ref: mgmtVM-internal, ip-address: "192.168.100.3"} ] }, ], vdu: [ {id: mgmtVM, interface: [{name: mgmtVM-eth0, mac-address: "52:33:44:55:66:21"}]} ] } ] } '
```
### Force floating IP address for an interface
In a generic way, the mapping can be specified in the following way, where `id1` is the id of the VDU in the VNF descriptor and `interf1` is the name of the interface:
```yaml
--config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: id1, interface: [{name: interf1, floating-ip-required: True }]} ] } ] } '
```
TODO: update example with latest Hackfest
You can try it using one of the examples of the hackfest (**descriptors: [hackfest2-vnf](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_vnfd.tar.gz), [hackfest2-ns](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_nsd.tar.gz); images:[ubuntu1604](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/images/US1604.qcow2), presentation: [modeling multi-VDU VNF](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/presentations/20180626%20OSM%20Hackfest%20-%20Session%203%20-%20Modeling%20multi-VDU%20VNF%20v2.pdf)**) in the following way:
```bash
osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", vdu:[ {id: mgmtVM, interface: [{name: mgmtVM-eth0, floating-ip-required: True }]} ] } ] } '
```
Make sure that the VIM network referenced in `vim-network-name` of the NS package is reachable from outside; otherwise the `floating-ip-required` parameter cannot be fulfilled.
### Multi-site deployments (specifying different VIM accounts for different VNFs)
In this scenario, the mapping can be specified in the following way, where `"1"` and `"2"` are the member vnf index of the constituent vnfs in the NS descriptor, `vim1` and `vim2` are the names of vim accounts and `netVIM1` and `netVIM2` are the VIM networks that you want to use:
```yaml
--config '{vnf: [ {member-vnf-index: "1", vim_account: vim1}, {member-vnf-index: "2", vim_account: vim2} ], vld: [ {name: datanet, vim-network-name: {vim1: netVIM1, vim2: netVIM2} } ] }'
# NOTE: From Release SIX (current master) onwards, add 'wim_account: False' (inside --config) to avoid WIM network connectivity if you do not have a WIM in your system
```
TODO: update example with latest Hackfest
You can try it using one of the examples of the hackfest (**descriptors: [hackfest2-vnf](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_vnfd.tar.gz), [hackfest2-ns](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_nsd.tar.gz); images:[ubuntu1604](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/images/US1604.qcow2), presentation: [modeling multi-VDU VNF](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/presentations/20180626%20OSM%20Hackfest%20-%20Session%203%20-%20Modeling%20multi-VDU%20VNF%20v2.pdf)**) in the following way:
```bash
osm ns-create --ns_name hf12 --nsd_name hackfest2-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", vim_account: openstack1}, {member-vnf-index: "2", vim_account: openstack3} ], vld: [ {name: mgmtnet, vim-network-name: {openstack1: mgmt, openstack3: mgmt} } ] }'
```
### Specifying a volume ID for a VNF volume
In a generic way, the mapping can be specified in the following way, where `VM1` is the id of the VDU, `Storage1` is the volume name in the VNF descriptor and `05301095-d7ee-41dd-b520-e8ca08d18a55` is the volume id:
```yaml
--config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: VM1, volume: [ {name: Storage1, vim-volume-id: 05301095-d7ee-41dd-b520-e8ca08d18a55} ] } ] } ] }'
```
TODO: update example with latest Hackfest
You can try it using one of the examples of the hackfest (**descriptors: [hackfest1-vnf](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_1_vnfd.tar.gz), [hackfest1-ns](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_1_nsd.tar.gz); images: [ubuntu1604](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/images/US1604.qcow2), presentation: [creating a basic VNF and NS](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/presentations/20180626%20OSM%20Hackfest%20-%20Session%202%20-%20Creating%20a%20basic%20VNF%20and%20NS.pdf)**) in the following way:
With the previous hackfest example, according to the [VNF data model](http://osm-download.etsi.org/ftp/osm-doc/etsi-nfv-vnfd.html), you would add the following to the VNF descriptor:
```yaml
volumes:
- name: Storage1
  size: 'Size of the volume'
```
Then:
```bash
osm ns-create --ns_name h1 --nsd_name hackfest1-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: hackfest1VM, volume: [ {name: Storage1, vim-volume-id: 8ab156fd-0f8e-4e01-b434-a0fce63ce1cf} ] } ] } ] }'
```
### Adding additional parameters
Since OSM Release SIX, additional user parameters can be added; they land in `vdu:cloud-init` (Jinja2 format) and/or in `vnf-configuration` primitives (enclosed by `<>`). Here is an example of a VNF descriptor that uses two parameters called `touch_filename` and `touch_filename2`.
```yaml
vnfd:
  ...
  vnf-configuration:
    config-primitive:
    - name: touch
      parameter:
      - data-type: STRING
        default-value: <touch_filename2>
        name: filename
    initial-config-primitive:
    - name: config
      parameter:
      - name: ssh-hostname
        value: <rw_mgmt_ip>  # this parameter is internal
      - name: ssh-username
        value: ubuntu
      - name: ssh-password
        value: osm4u
      seq: '1'
    - name: touch
      parameter:
      - name: filename
        value: <touch_filename>
      seq: '2'
```
And they can be provided with:
```yaml
--config '{additionalParamsForVnf: [{member-vnf-index: "1", additionalParams: {touch_filename: your-value, touch_filename2: your-value2}}]}'
```
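For instance, reusing the basic hackfest NS from the beginning of this page (the NS name and parameter values below are illustrative), the full instantiation command could look like this:
```bash
osm ns-create --ns_name hf-params --nsd_name hackfest_basic-ns --vim_account openstack1 \
  --config '{additionalParamsForVnf: [{member-vnf-index: "1", additionalParams: {touch_filename: "/home/ubuntu/first-touch", touch_filename2: "/home/ubuntu/second-touch"}}]}'
```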
## Understanding Day-1 and Day-2 Operations
VNF configuration is done in three "days":
- Day-0: The machine gets ready to be managed (e.g. import ssh-keys, create users/pass, network configuration, etc.)
- Day-1: The machine gets configured for providing services (e.g.: Install packages, edit config files, execute commands, etc.)
- Day-2: The machine configuration and management is updated (e.g.: Do on-demand actions, like dump logs, backup databases, update users etc.)
In OSM, Day-0 is usually covered by cloud-init, as it just implies basic configurations.
Day-1 and Day-2 are both managed by the VCA (VNF Configuration & Abstraction) module, which consists of a Juju Controller that interacts with VNFs through "charms", a generic set of scripts for deploying and operating software which can be adapted to any use case.
There are two types of charms:
- **Native charms:** the set of scripts run inside the VNF components.
- **Proxy charms:** the set of scripts run in LXC containers in an OSM-managed machine (which could be where OSM resides), which use ssh or other methods to get into the VNF instances and configure them.

These charms can run with three scopes:
- VDU: running a per-vdu charm, with individual actions for each.
- VNF: running globally for the VNF, for the management VDU that represents it.
- NS: running for the whole NS, after VNFs have been configured, to handle interactions between them.
For detailed instructions on how to add cloud-init or charms to your VNF, visit the following references:
- [VNF Onboarding Guidelines, Day-0](https://osm.etsi.org/docs/vnf-onboarding-guidelines/02-day0.html)
- [VNF Onboarding Guidelines, Day-1](https://osm.etsi.org/docs/vnf-onboarding-guidelines/03-day1.html)
- [VNF Onboarding Guidelines, Day-2](https://osm.etsi.org/docs/vnf-onboarding-guidelines/04-day2.html)
Furthermore, you can find a good explanation and examples [in this presentation](http://osm-download.etsi.org/ftp/osm-6.0-six/8th-hackfest/presentations/8th%20OSM%20Hackfest%20-%20Session%207.1%20-%20Introduction%20to%20Proxy%20Charms.pdf)
## Monitoring and autoscaling
### Performance Management
#### VNF Metrics Collection
OSM MON features a "mon-collector" module which collects metrics whenever specified at the descriptor level. For metrics to be collected, they must first exist at either of these two levels:
- NFVI - made available by VIM's Telemetry System
- VNF - made available by OSM VCA (Juju Metrics)
Reference diagram:

##### VIM Metrics
For VIM metrics to be collected, your VIM should support a Telemetry system. As of Release 7.0, metric collection works with:
- OpenStack VIM legacy or Gnocchi-based telemetry services.
- VMware vCloud Director with vRealizeOperations.
The next step is to activate metrics collection in your VNFDs. Every metric to be collected from the VIM for each VDU has to be described first at the VDU level and then at the VNF level. For example:
```yaml
vdu:
- id: vdu1
  ...
  monitoring-param:
  - id: metric_vdu1_cpu
    nfvi-metric: cpu_utilization
  - id: metric_vdu1_memory
    nfvi-metric: average_memory_utilization
...
monitoring-param:
- id: metric_vim_vnf1_cpu
  name: metric_vim_vnf1_cpu
  aggregation-type: AVERAGE
  vdu-monitoring-param:
    vdu-ref: vdu1
    vdu-monitoring-param-ref: metric_vdu1_cpu
- id: metric_vim_vnf1_memory
  name: metric_vim_vnf1_memory
  aggregation-type: AVERAGE
  vdu-monitoring-param:
    vdu-ref: vdu1
    vdu-monitoring-param-ref: metric_vdu1_memory
```
As you can see, a list of "NFVI metrics" is defined first at the VDU level, containing an ID and the corresponding normalized metric name (in this case, `cpu_utilization` and `average_memory_utilization`). Then, at the VNF level, a list of `monitoring-param` entries is defined, each with an ID, name, aggregation-type and its source (a `vdu-monitoring-param` in this case).
###### Additional notes
- Available attributes and values can be directly explored at the [OSM Information Model](11-osm-im.md)
- A complete VNFD example can be downloaded from [here](https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/webserver_vimmetric_autoscale_vnfd.tar.gz).
- Normalized metric names are: `cpu_utilization`, `average_memory_utilization`, `disk_read_ops`, `disk_write_ops`, `disk_read_bytes`, `disk_write_bytes`, `packets_received`, `packets_sent`, `packets_out_dropped`, `packets_in_dropped`
###### OpenStack-specific notes
From Release SIX onwards, MON collects the last measure of the corresponding metric, so no further configuration (e.g. granularity) is needed.
###### VMware vCD specific notes
From Release SIX onwards, MON collects all the normalized metrics, with the following exceptions:
- `packets_in_dropped` is not available and will always return 0.
- `packets_received` cannot be measured. Instead the number of bytes received for all interfaces is returned.
- `packets_sent` cannot be measured. Instead the number of bytes sent for all interfaces is returned.
The rolling average for vROPS metrics is always 5 minutes. The collection interval is also 5 minutes and can be changed; however, vROPS will still report the rolling average for the past 5 minutes, just updated according to the collection interval.
Although it is not recommended, if a more frequent interval is desired, the following procedure can be used to change the collection interval:
- Log into vROPS as an admin.
- Navigate to Administration and expand Configuration.
- Select Inventory Explorer.
- Expand the Adapter Instances and select vCenter Server.
- Edit the vCenter Server instance and expand the Advanced Settings.
- Edit the Collection Interval (Minutes) value and set to the desired value.
- Click OK to save the change.
##### VNF Metrics/Indicators
Metrics can also be collected directly from VNFs using VCA, through the [Juju Metrics](https://docs.jujucharms.com/2.4/en/developer-metrics) framework. A simple charm containing a metrics.yaml file at its root folder specifies the metrics to be collected and the associated command.
For example, the following metrics.yaml file collects three metrics from the VNF, called `users`, `load` and `load_pct`:
```yaml
metrics:
  users:
    type: gauge
    description: "# of users"
    command: who|wc -l
  load:
    type: gauge
    description: "5 minute load average"
    command: cat /proc/loadavg |awk '{print $1}'
  load_pct:
    type: gauge
    description: "1 minute load average percent"
    command: cat /proc/loadavg | awk '{load_pct=$1*100.00} END {print load_pct}'
```
Please note that the granularity of this metric collection method is fixed to 5 minutes and cannot be changed at this point.
After metrics.yaml is available, there are two options for describing the metric collection in the VNFD:
###### 1) VNF-level VNF metrics
```yaml
mgmt-interface:
  cp: vdu_mgmt  # it is important to set the mgmt VDU or CP for metrics collection
vnf-configuration:
  initial-config-primitive:
  ...
  juju:
    charm: testmetrics
    metrics:
    - name: load
    - name: load_pct
    - name: users
...
monitoring-param:
- id: metric_vim_vnf1_load
  name: metric_vim_vnf1_load
  aggregation-type: AVERAGE
  vnf-metric:
    vnf-metric-name-ref: load
- id: metric_vim_vnf1_loadpct
  name: metric_vim_vnf1_loadpct
  aggregation-type: AVERAGE
  vnf-metric:
    vnf-metric-name-ref: load_pct
```
Additional notes:
- Available attributes and values can be directly explored at the [OSM Information Model](11-osm-im.md)
- A complete VNFD example with VNF metrics collection (VNF-level) can be downloaded from [here](https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/ubuntuvm_vnfmetric_autoscale_vnfd.tar.gz).
###### 2) VDU-level VNF metrics
```yaml
vdu:
- id: vdu1
  ...
  interface:
  - ...
    mgmt-interface: true  # it is important to set the mgmt interface for metrics collection
  ...
  vdu-configuration:
    initial-config-primitive:
    ...
    juju:
      charm: testmetrics
      metrics:
      - name: load
      - name: load_pct
      - name: users
...
monitoring-param:
- id: metric_vim_vnf1_load
  name: metric_vim_vnf1_load
  aggregation-type: AVERAGE
  vdu-metric:
    vdu-ref: vdu1
    vdu-metric-name-ref: load
- id: metric_vim_vnf1_loadpct
  name: metric_vim_vnf1_loadpct
  aggregation-type: AVERAGE
  vdu-metric:
    vdu-ref: vdu1
    vdu-metric-name-ref: load_pct
```
Additional notes:
- Available attributes and values can be directly explored at the [OSM Information Model](11-osm-im.md)
- A complete VNFD example with VNF metrics collection (VDU-level) can be downloaded from [here](https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/ubuntuvm_vnfvdumetric_autoscale_vnfd.tar.gz).
As with VIM metrics, a list of metrics is defined first at either the VNF or VDU "configuration" level, each entry containing a name that comes from the metrics.yaml file. Then, at the VNF level, a list of monitoring-params is defined, with an ID, name, aggregation-type and their source, which in this case can be a `vdu-metric` or a `vnf-metric`.
#### Infrastructure Status Collection
OSM MON automatically collects "status metrics" for:
- VIMs: for each VIM that OSM establishes contact with, a metric named `osm_vim_status` is stored in the TSDB.
- VMs: for each VDU that OSM has instantiated, a metric named `osm_vm_status` is stored in the TSDB.
Metrics will be "1" or "0" depending on the element's availability.
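For example, once stored in Prometheus (see the sections below), these status metrics can be queried directly, assuming the default Prometheus port 9091 used elsewhere on this page:
```bash
curl 'http://localhost:9091/api/v1/query?query=osm_vim_status'
curl 'http://localhost:9091/api/v1/query?query=osm_vm_status'
```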
#### System Metrics
OSM collects system-wide metrics directly using Prometheus exporters. The way these metrics are collected is highly dependent on how OSM was installed:
| | OSM on Kubernetes | OSM on Docker Swarm |
|:----:|:-----------------:|:--------------------:|
| Components | Prometheus Operator Chart / Other charts: MongoDB, MySQL and Kafka exporters | Node exporter / cAdvisor exporter |
| Implements | Multiple Grafana dashboards for a comprehensive health check of the system. | Single Grafana dashboard with the most important system metrics.|
The names under which these metrics are stored in Prometheus also depend on the installation; in both cases, Grafana dashboards showing these metrics are available by default.
Please note that the Kubernetes installation requires the optional Monitoring stack.

#### Retrieving OSM metrics from Prometheus TSDB
Once the metrics are being collected, they are stored in the Prometheus Time-Series DB **with an 'osm_' prefix**, and there are a number of ways in which you can retrieve them.
##### 1) Visualizing metrics in Prometheus UI
Prometheus TSDB includes its own UI, which you can visit at `http://[OSM_IP]:9091`.
From there, you can:
- Type any metric name (e.g. `osm_cpu_utilization`) in the 'expression' field and see its current value or a graph over time.
- Visit the Status --> Targets menu to monitor the connection status between Prometheus and MON (through `mon-exporter`).

##### 2) Visualizing metrics in Grafana
Starting in Release 7, OSM includes its own Grafana installation by default (deprecating the former experimental `pm_stack`).
Access Grafana with its default credentials (`admin`/`admin`) at `http://[OSM_IP_address]:3000`. By clicking the 'Manage' option in the 'Dashboards' menu (to the left), you will find a sample dashboard containing two graphs for VIM metrics and two graphs for VNF metrics. You can easily change them or add more, as desired.

###### Dashboard Automation
Starting in Release 7, Grafana dashboards are created by default in OSM. This is done by the "dashboarder" service in MON, which provisions Grafana following changes in the common DB.
|Updates in|Automates these dashboards|
|:--------:|:------------------------:|
|OSM installation|System Metrics, Admin Project-scoped|
|OSM Projects|Project-scoped|
|OSM Network Services|NS-scoped sample dashboard|
##### 3) Querying metrics through OSM SOL005-based NBI
For collecting metrics through the NBI, the following URL format should be followed:
`https://<osm_ip>:<nbi_port>/osm/nspm/v1/pm_jobs/<job_id>/reports/<ns_id>`
Where:
- `<osm_ip>`: the machine where OSM is installed.
- `<nbi_port>`: the NBI port, i.e. 9999.
- `<job_id>`: currently it can be any string.
- `<ns_id>`: the NS ID obtained after instantiation of the network service.
Please note that a token should be obtained first in order to query a metric. More information on this can be found in the [OSM NBI Documentation](12-osm-nbi.md)
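As a minimal sketch of the whole flow, assuming default admin credentials and that the token id is the `id:` field of the NBI's YAML response (all values in angle brackets are illustrative):
```bash
# Obtain an authentication token from the NBI
TOKEN=$(curl -sk -X POST "https://<osm_ip>:9999/osm/admin/v1/tokens" \
  -H 'Content-Type: application/yaml' \
  -d '{username: admin, password: admin, project_id: admin}' \
  | awk '$1 == "id:" {print $2}')

# Query the PM job report for a given NS instance
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://<osm_ip>:9999/osm/nspm/v1/pm_jobs/<job_id>/reports/<ns_id>"
```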
In response, you would get a list of the available VNF metrics, for example:
```yaml
performanceMetric: osm_cpu_utilization
performanceValue:
  performanceValue:
    performanceValue: '0.9563615332000001'
    vduName: test_fet7912-2-ubuntuvnf2vdu1-1
    vnfMemberIndex: '2'
  timestamp: 1568977549.065
```
##### 4) Interacting with Prometheus directly through its API
The [Prometheus HTTP API](https://prometheus.io/docs/prometheus/latest/querying/api/) is always directly available to gather any metrics. A couple of examples are shown below:
Example with a date-range query:
```bash
curl 'http://localhost:9091/api/v1/query_range?query=osm_cpu_utilization&start=2018-12-03T14:10:00.000Z&end=2018-12-03T14:20:00.000Z&step=15s'
```
Example with an instant query:
```bash
curl 'http://localhost:9091/api/v1/query?query=osm_cpu_utilization&time=2018-12-03T14:14:00.000Z'
```
Further examples and API calls can be found at the [Prometheus HTTP API documentation](https://prometheus.io/docs/prometheus/latest/querying/api/).
##### 5) Interacting directly with MON Collector
Prometheus TSDB stores metrics by periodically querying Prometheus 'exporters', which are set as 'targets'. Exporters expose current metrics in a specific format that Prometheus can understand; more information can be found [here](https://prometheus.io/docs/instrumenting/exporters/).
OSM MON features a "mon-exporter" module that exports **current metrics** through port 8000. Please note that this port is not exposed outside the OSM docker network by default.
A tool that understands Prometheus 'exporters' (for example, Elastic Metricbeat) can be plugged in to integrate directly with "mon-exporter". To get an idea of what metrics look like in this particular format, you can:
###### 1. Get into MON console
```bash
docker exec -ti osm_mon.1.[id] bash
```
###### 2. Install curl
```bash
apt -y install curl
```
###### 3. Use curl to get the current metrics list
```bash
curl localhost:8000
```
Please note that as long as the Prometheus container is up, it will continue retrieving and storing metrics, in addition to any other tool/DB you connect to `mon-exporter`.
##### 6) Using your own TSDB
OSM MON integrates Prometheus through a plugin/backend model, so if desired, other backends can be developed. If interested in contributing with such option, you can ask for details at our Slack #service-assurance channel or through the OSM Tech mailing list.
### Fault Management
Reference diagram:

#### Basic functionality
##### Logs & Events
Logs can be monitored on a per-container basis via command line, like this:
```bash
docker logs <container_name>
```
For example:
```bash
docker logs osm_lcm.1.tkb8yr6v762d28ird0edkunlv
```
Logs can also be found in the corresponding volume of the host filesystem: `/var/lib/containers/[container-id]/[container-id].json.log`
Furthermore, there are some important events flowing between components through the Kafka bus, which can be monitored on a per-topic basis by external tools.
##### Alarm Manager for Metrics
As of Release FIVE, MON includes a new module called 'mon-evaluator'. The only use case supported today by this module is the configuration of alarms and the evaluation of metric thresholds, so that the Policy Manager module (POL) can take actions such as [auto-scaling](#autoscaling).
Whenever a threshold is crossed and an alarm is triggered, MON generates a notification and puts it on the Kafka bus so that other components, like POL, can consume it. This event is logged today by both MON (which generates the notification) and POL (which consumes it for its auto-scaling or webhook actions).
By default, threshold evaluation occurs every 30 seconds. This value can be changed by setting an environment variable, for example:
```bash
docker service update --env-add OSMMON_EVALUATOR_INTERVAL=15 osm_mon
```
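If OSM runs on Kubernetes instead of Docker Swarm, a sketch of the equivalent change (assuming MON runs as the `mon` deployment in the `osm` namespace) would be:
```bash
kubectl -n osm set env deployment/mon OSMMON_EVALUATOR_INTERVAL=15
```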
To configure alarms that send webhooks to a web service, add the following to the VNF descriptor:
```yaml
vdu:
- alarm:
  - alarm-id: alarm-1
    operation: LT
    value: 20
    actions:
      alarm:
      - url: https://webhook.site/1111
      ok:
      - url: https://webhook.site/2222
      insufficient-data:
      - url: https://webhook.site/3333
    vnf-monitoring-param-ref: vnf_cpu_util
```
Regarding how to configure alarms through VNFDs for the auto-scaling use case, follow the [auto-scaling documentation](#autoscaling)
#### Experimental functionality
An optional 'OSM ELK' stack is available to allow for events visualization, consisting of the following tools:
- **Elasticsearch** - scalable search engine and event database.
- **Filebeat & Metricbeat** - part of Elastic 'beats', which evolve the former Logstash component to provide generic logs and metrics collection, respectively.
- **Kibana** - Graphical tool for exploring all the collected events and generating customized views and dashboards.
##### Enabling the OSM ELK Stack
If you want to install OSM along with the ELK stack, run the installer as follows:
```bash
./install_osm.sh --elk_stack
```
If you just want to add the ELK stack to an existing OSM installation, run the installer as follows:
```bash
./install_osm.sh -o elk_stack
```
This will install four additional docker containers (Elasticsearch, Filebeat, Metricbeat and Kibana), as well as download a Docker image for an auxiliary tool named [Curator](https://www.elastic.co/guide/en/elasticsearch/client/curator/5.5/index.html) (`bobrik/curator`).
If you need to remove it at some point in time, just run the following command:
```bash
docker stack rm osm_elk
```
If you need to deploy the stack again after being removed:
```bash
docker stack deploy -c /etc/osm/docker/osm_elk/docker-compose.yml osm_elk
```
**IMPORTANT**: As time passes, more events are generated in your system and, depending on your configured searches, views and dashboards, the Elasticsearch database can become very big, which may not be desirable in testing environments. In order to delete your data periodically, you can launch a Curator container that will delete the saved indexes, freeing the associated disk space.
For example, to delete all the data older than the last day:
```bash
docker run --rm --name curator --net host --entrypoint curator_cli bobrik/curator:5.5.4 --host localhost delete_indices --filter_list '[{"filtertype":"age","source":"creation_date","direction":"older","unit":"days","unit_count":1}]'
```
Or to delete the data older than 2 hours:
```bash
docker run --rm --name curator --net host --entrypoint curator_cli bobrik/curator:5.5.4 --host localhost delete_indices --filter_list '[{"filtertype":"age","source":"creation_date","direction":"older","unit":"hours","unit_count":2}]'
```
##### Testing the OSM ELK Stack
1. Download the sample dashboards to your desktop from this link (right click, save link as):
2. Visit Kibana at `http://[OSM_IP]:5601` and:
1. Go to "Management" --> Saved Objects --> Import (select the downloaded file)
2. Go to "Dashboard" and select the "OSM System Dashboard", which connects to other three sub-dashboards (You may need to redefine "filebeat-*" as the default 'index-pattern' by selecting it, marking the star and revisiting the Dashboards)
3. Metrics (from Metricbeat) and logs (from Filebeat) should appear at the corresponding visualizations.

### Autoscaling
#### Reference diagram
The following diagram summarizes the feature:

- Scaling descriptors can be included and tied to automatic reactions to VIM/VNF metric thresholds.
- Supported metrics are both VIM and VNF metrics. More information about metrics collection can be found at the [Performance Management documentation](#performance-management)
- An internal alarm manager has been added to MON through the 'mon-evaluator' module, so that both VIM and VNF metrics can also trigger threshold-violation alarms and scaling actions. More information about this module can be found at the [Fault Management documentation](#fault-management)
#### Scaling Descriptor
The scaling descriptor is part of a VNFD. Like the example below shows, it mainly specifies:
- An existing metric to be monitored, which should be pre-defined in the monitoring-param list (`vnf-monitoring-param-ref`).
- The VDU to be scaled (`aspect-delta-details:deltas:vdu-delta:id`) and the number of instances to scale per event (`number-of-instances`).
- The thresholds to monitor (`scale-in/out-threshold`).
- The VDU's (`vdu-profile:id`) minimum and maximum number of **scaled instances** to produce.
- The minimum time that should pass between scaling operations (`cooldown-time`).
- The maximum number of scaling deltas that can be applied (`max-scale-level`).
```yaml
scaling-aspect:
- aspect-delta-details:
    deltas:
    - id: vdu01_autoscale-delta
      vdu-delta:
      - id: vdu01
        number-of-instances: 1
  id: vdu01_autoscale
  max-scale-level: 1
  name: vdu01_autoscale
  scaling-policy:
  - cooldown-time: 120
    name: cpu_scaling_policy
    scaling-criteria:
    - name: cpu_scaling_policy
      scale-in-relational-operation: LT
      scale-in-threshold: 20
      scale-out-relational-operation: GT
      scale-out-threshold: 60
      vnf-monitoring-param-ref: vnf01_cpu_util
    scaling-type: automatic
    threshold-time: 10
vdu-profile:
- id: vdu01
  min-number-of-instances: 1
  max-number-of-instances: 11
```
#### Example
This will launch a Network Service formed by an HAProxy load balancer and an (auto-scalable) Apache web server. Please check that:
1. Your VIM has an accessible 'public' network and a management network (in this case called "PUBLIC" and "vnf-mgmt").
2. Your VIM has the 'haproxy_ubuntu' and 'apache_ubuntu' images, which can be found [here](https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/images/) (an OpenStack upload sketch follows this list).
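If your VIM is OpenStack, the images can be uploaded following the same pattern shown at the beginning of this page; a minimal sketch (the file names are illustrative):
```bash
openstack image create --file="./haproxy_ubuntu.qcow2" --container-format=bare --disk-format=qcow2 haproxy_ubuntu
openstack image create --file="./apache_ubuntu.qcow2" --container-format=bare --disk-format=qcow2 apache_ubuntu
```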
Get the descriptors:
```bash
git clone https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages.git
```
Onboard them:
```bash
cd osm-packages
osm vnfd-create wiki_webserver_autoscale_vnfd
osm nsd-create wiki_webserver_autoscale_nsd
```
Launch the NS:
```bash
osm ns-create --ns_name web01 --nsd_name wiki_webserver_autoscale_ns --vim_account <vim_account>
osm ns-list
osm ns-show web01
```
Testing:
1. To ensure the NS is working, visit the load balancer's IP on the public network with a browser; the page should show an OSM logo and the active VDUs.
2. To check metrics in Prometheus, visit `http://[OSM_IP]:9091` and look for `osm_cpu_utilization` and `osm_average_memory_utilization` (initial values may take a few minutes to appear, depending on your telemetry system's granularity).
3. To check metrics in Grafana, just visit `http://[OSM_IP]:3000` (`admin`/`admin`); you will find a sample dashboard (the two top charts correspond to this example).
4. To increase CPU load in this example and auto-scale the web server, install Apache Bench on a client within reach (it could be the OSM host) and run it against `test.php`:
```bash
sudo apt install apache2-utils
ab -n 5000000 -c 2 http://<load_balancer_ip>/test.php
# Can also be run in the HAProxy machine:
ab -n 10000000 -c 1000 http://<haproxy_ip>:8080/
# This will stress CPU to 100% and trigger a scale-out operation in POL.
# In this test, scaling will usually go up to 3 web servers before HAProxy spreads the load enough to return to a normal CPU level (with 60s granularity and 180s cooldown).
```
If HAProxy is not started:
```bash
service haproxy status
sudo service haproxy restart
```
Any of the VMs can be accessed through SSH (credentials: `ubuntu`/`osm2021`) for further monitoring (with `htop`, for example), and there is an HAProxy UI at `http://[HAProxy_IP]:32700` (credentials: `osm`/`osm2018`).
## Using Network Slices
To better illustrate how network slicing works in OSM, it is discussed here in the context of a running example.
### Resources
This network slicing example requires a set of resources (VNFs, NSs, NSTs) that are available at the following [link](https://osm-download.etsi.org/ftp/osm-6.0-six/8th-hackfest/packages/):
**VNF:**
- [slice_hackfest_vnfd.tar.gz](https://osm-download.etsi.org/ftp/osm-6.0-six/8th-hackfest/packages/slice_hackfest_vnfd.tar.gz)
- [slice_hackfest_middle_vnfd.tar.gz](https://osm-download.etsi.org/ftp/osm-6.0-six/8th-hackfest/packages/slice_hackfest_middle_vnfd.tar.gz)
**NS:**
- [slice_hackfest_nsd.tar.gz](https://osm-download.etsi.org/ftp/osm-6.0-six/8th-hackfest/packages/slice_hackfest_nsd.tar.gz)
- [slice_hackfest_middle_nsd.tar.gz](https://osm-download.etsi.org/ftp/osm-6.0-six/8th-hackfest/packages/slice_hackfest_middle_nsd.tar.gz)
**NST:**
- [slice_hackfest_nst.yaml](https://osm-download.etsi.org/ftp/osm-6.0-six/8th-hackfest/packages/slice_hackfest_nst.yaml)
- [slice_hackfest2_nst.yaml](https://osm-download.etsi.org/ftp/osm-6.0-six/8th-hackfest/packages/slice_hackfest2_nst.yaml)
### Network Slice Template Diagram
The diagram below shows the Network Slice Template created for this example. As shown in the picture, three network slice subnets are connected by Virtual Link Descriptors (VLDs) through the connection points of the network services. We have one Virtual Link for management, `slice_vld_mgmt`, and two Virtual Links for data, `slice_vld_data1` and `slice_vld_data2`. In the middle, we have a `network-slice-subnet` that interconnects the netslice subnets on both sides.

#### Virtual Network Functions
We use two VNFs in this example. The difference between them is the number of network interfaces used to create connections. While the `slice_hackfest_middle_vnfd` VNF has three interfaces (`mgmt`, `data1`, `data2`), the `slice_hackfest_vnfd` has only two (`mgmt`, `data`). The specifications (1 vCPU, 1 GB RAM, 10 GB disk) and the `image-name` ('US1604') are the same in both VNFs.


#### Network Services
We use two network services in this example. They differ in: 1) the number of interfaces they have; 2) the VNF contained inside the network service; 3) the NS *slice_hackfest_nsd* has two VLDs, one for data and the other for management; 4) the *slice_hackfest_middle_nsd* has three VLDs, one for management and the other two for data1 and data2.
The *slice_hackfest_middle_nsd* contains the `slice_hackfest_middle_vnfd`, and the *slice_hackfest_nsd* contains the `slice_hackfest_vnfd`.
The diagram below shows the `slice_hackfest_nsd` and `slice_hackfest_middle_nsd`, its connection points, VLDs and VNFs.


### Creating a Network Slice Template (NST)
Based on the OSM information model for network slice templates ([here](http://osm-download.etsi.org/repository/osm/debian/ReleaseSIX/docs/osm-im/osm_im_trees/nst.html)), it is possible to start writing the YAML descriptor for the NST.
```yaml
nst:
- id: slice_hackfest_nst
  name: slice_hackfest_nst
  SNSSAI-identifier:
    slice-service-type: eMBB
  quality-of-service:
    id: 1
```
The snippet above contains the mandatory fields for the NST; the `netslice-subnet` and `netslice-vld` sections are described below. When we create an NST, the `id` identifies the network slice template and the `name` is the name given to the NST. Additionally, the required parameter `SNSSAI-identifier` indicates which kind of service runs inside this slice. In OSM there are three types of `slice-service-type`: enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC) and massive machine-type communications (mMTC). Moreover, we add a `quality-of-service` parameter that is related to the 5G QoS Indicator (5QI).
The `netslice-subnet` section shown below is the place to allocate the network services that compose the slice. Each item of the `netslice-subnet` list has:
1. An `id` to identify the netslice-subnet.
2. An `is-shared-nss` boolean flag that determines whether the NSS is shared among the Network Slice Instances that use it.
3. An optional `description`.
4. An `nsd-ref` that references the network service descriptor that forms the netslice subnet.
```yaml
netslice-subnet:
- id: slice_hackfest_nsd_1
  is-shared-nss: false
  description: NetSlice Subnet (service) composed by 1 vnf with 2 cp
  nsd-ref: slice_hackfest_nsd
- id: slice_hackfest_nsd_2
  is-shared-nss: true
  description: NetSlice Subnet (service) composed by 1 vnf with 3 cp
  nsd-ref: slice_hackfest_middle_nsd
- id: slice_hackfest_nsd_3
  is-shared-nss: false
  description: NetSlice Subnet (service) composed by 1 vnf with 2 cp
  nsd-ref: slice_hackfest_nsd
```
Finally, the connections among the `netslice-subnets` are defined in the `netslice-vld` section, as shown below:
```yaml
netslice-vld:
- id: slice_vld_mgmt
  name: slice_vld_mgmt
  type: ELAN
  mgmt-network: true
  nss-connection-point-ref:
  - nss-ref: slice_hackfest_nsd_1
    nsd-connection-point-ref: nsd_cp_mgmt
  - nss-ref: slice_hackfest_nsd_2
    nsd-connection-point-ref: nsd_cp_mgmt
  - nss-ref: slice_hackfest_nsd_3
    nsd-connection-point-ref: nsd_cp_mgmt
- id: slice_vld_data1
  name: slice_vld_data1
  type: ELAN
  nss-connection-point-ref:
  - nss-ref: slice_hackfest_nsd_1
    nsd-connection-point-ref: nsd_cp_data
  - nss-ref: slice_hackfest_nsd_2
    nsd-connection-point-ref: nsd_cp_data1
```
Once the network slice template is ready, you need to onboard the required resources to OSM before uploading the network slice template itself. The following commands help you onboard packages to OSM:
- **VNF package:**
- List Virtual Network Functions Descriptors
- `osm vnfd-list`
- Upload the *slice_hackfest_vnf* package
- `osm vnfd-create slice_hackfest_vnf.tar.gz`
- Upload the *slice_hackfest_middle_vnf package*
- `osm vnfd-create slice_hackfest_middle_vnf.tar.gz`
- Show if *slice_hackfest_vnf* was uploaded correctly to OSM
- `osm vnfd-show slice_hackfest_vnfd`
- Show if *slice_hackfest_vnf* was uploaded correctly to OSM
- `osm vnfd-show slice_hackfest_middle_vnfd`
- **NS package:**
- List Network Service Descriptors
- `osm nsd-list`
- Upload the *slice_hackfest_ns* package
- `osm nsd-create slice_hackfest_ns.tar.gz`
- Upload the *slice_hackfest_middle_ns* package
- `osm nsd-create slice_hackfest_middle_ns.tar.gz`
- Show if *slice_hackfest_nsd* was uploaded correctly to OSM
- `osm nsd-show slice_hackfest_nsd`
- Show if *slice_hackfest_middle_nsd* was uploaded correctly to OSM
- `osm nsd-show slice_hackfest_middle_nsd`
- **NST:**
- List network slice templates
- `osm nst-list`
- Upload the *slice_hackfest_nst.yaml* template
- `osm nst-create slice_hackfest_nst.yaml`
- Upload the *slice_hackfest2_nst.yaml* template
- `osm nst-create slice_hackfest2_nst.yaml`
- Show if *slice_hackfest_nst* was uploaded correctly to OSM
- `osm nst-show slice_hackfest_nst`
- Show if *slice_hackfest2_nst* was uploaded correctly to OSM
- `osm nst-show slice_hackfest2_nst`
With all resources already available in OSM, it is possible to create the Network Slice Instance (NSI) using the `slice_hackfest_nst`. You can find below the help of the command to create a network slice instance:
```text
osm nsi-create --help
Usage: osm nsi-create [OPTIONS]

  creates a new Network Slice Instance (NSI)

Options:
  --nsi_name TEXT     name of the Network Slice Instance
  --nst_name TEXT     name of the Network Slice Template
  --vim_account TEXT  default VIM account id or name for the deployment
  --ssh_keys TEXT     comma separated list of keys to inject to vnfs
  --config TEXT       Netslice specific yaml configuration:
                      netslice_subnet: [
                        id: TEXT, vim_account: TEXT,
                        vnf: [member-vnf-index: TEXT, vim_account: TEXT]
                        vld: [name: TEXT,
                          vim-network-name: TEXT or DICT with vim_account,
                          vim_net entries]
                        additionalParamsForNsi: {param: value, ...}
                        additionalParamsForsubnet: [{id: SUBNET_ID,
                          additionalParamsForNs: {},
                          additionalParamsForVnf: {}}]
                      ],
                      netslice-vld: [name: TEXT,
                        vim-network-name: TEXT or DICT with vim_account,
                        vim_net entries]
  --config_file TEXT  nsi specific yaml configuration file
  --wait              do not return the control immediately, but keep it
                      until the operation is completed, or timeout
  -h, --help          Show this message and exit.
```
To instantiate the network slice template use the following command:
```bash
osm nsi-create \
--nsi_name my_first_slice \
--nst_name slice_hackfest_nst \
--vim_account <vim_account> \
--config 'netslice-vld: [{ "name": "slice_vld_mgmt", "vim-network-name": <vim_network_name> }]'
```
Where:
- `--nsi_name` is the name of the Network Slice Instance: `my_first_slice`
- `--nst_name` is the name of the Network Slice Template: `slice_hackfest_nst`
- `--vim_account` is the default VIM account id or name to be used by the NSI
- `--config` contains the configuration parameters for the slice (a file-based sketch is shown after this list). For example, it is possible to attach the NS management network to an external network of the VIM in order to gain access to the VNFs deployed in the slice. In this case, the `netslice-vld` list contains the name of the VLD `slice_vld_mgmt` to be attached to the external VIM network given by the `vim-network-name` key.
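As with NS instantiation, the slice configuration can also be provided in a YAML file via the `--config_file` option shown in the help above; a minimal sketch (the file name and network name are illustrative):
```bash
cat > slice_params.yaml <<'EOF'
netslice-vld:
- name: slice_vld_mgmt
  vim-network-name: osm-ext
EOF
osm nsi-create --nsi_name my_first_slice --nst_name slice_hackfest_nst --vim_account <vim_account> --config_file slice_params.yaml
```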
The commands to operate the slice are:
- List Network Slice Instances
- `osm nsi-list`
- Delete Network Slice Instance
- `osm nsi-delete <nsi_name>` or `osm nsi-delete <nsi_id>`
The result of the deployment in OpenStack looks like:


The picture above shows three VNFs deployed in OpenStack, connected to the OpenStack management network `osm-ext` and also connected among themselves, following the VLDs described in the network slice template.
### Sharing a Network Slice Subnet
To test the network slice subnet sharing feature, we create a new network slice template that uses the shared netslice subnet from the previous instantiation. The picture below shows the Network Slice Template.

The network slice template used for sharing a network slice subnet is *slice_hackfest2_nst.yaml* and it is available in the [resources](#resources) section.
```yaml
nst:
- id: slice_hackfest2_nst
  name: slice_hackfest2_nst
  SNSSAI-identifier:
    slice-service-type: eMBB
  quality-of-service:
    id: 1
  netslice-subnet:
  - id: slice_hackfest_nsd_2
    is-shared-nss: true
    description: NetSlice Subnet (service) composed by 1 vnf with 3 cp
    nsd-ref: slice_hackfest_middle_nsd
  - id: slice_hackfest_nsd_3
    is-shared-nss: false
    description: NetSlice Subnet (service) composed by 1 vnf with 2 cp
    nsd-ref: slice_hackfest_nsd
  netslice-vld:
  - id: slice_vld_mgmt
    name: slice_vld_mgmt
    type: ELAN
    mgmt-network: true
    nss-connection-point-ref:
    - nss-ref: slice_hackfest_nsd_2
      nsd-connection-point-ref: nsd_cp_mgmt
    - nss-ref: slice_hackfest_nsd_3
      nsd-connection-point-ref: nsd_cp_mgmt
  - id: slice_vld_data2
    name: slice_vld_data2
    type: ELAN
    nss-connection-point-ref:
    - nss-ref: slice_hackfest_nsd_2
      nsd-connection-point-ref: nsd_cp_data2
    - nss-ref: slice_hackfest_nsd_3
      nsd-connection-point-ref: nsd_cp_data
```
The YAML above contains two `netslice-subnet` entries, one with the `is-shared-nss` flag set to true and the other with it set to false. The `netslice-vld` entries connect the `slice_hackfest_middle_nsd` NSS with the management interface, and its data2 connection point with the `slice_hackfest_nsd` via `nsd_cp_data`.
To instantiate this network slice, we use the same command as before, changing `nst_name` to `slice_hackfest2_nst`:
```text
osm nsi-create \
--nsi_name my_shared_slice \
--nst_name slice_hackfest2_nst \
--vim_account <vim_account> \
--config 'netslice-vld: [{ "name": "slice_vld_mgmt", "vim-network-name": <vim_network_name> }]'
```
You can see the result of the instantiation in the picture below:


Only one Network Slice Subnet was instantiated, since the middle Network Slice Subnet is shared with this second NSI.
#### Result of deleting the Network Slice Instance 1
What would happen to the shared Network Slice Subnet and the second Network Slice Instance if we deleted the first Network Slice Instance?
With the command `osm nsi-delete my_first_slice` we delete the first Network Slice Instance. The result is that the (shared) middle Network Slice Subnet now belongs to NSI2, so it is not deleted when NSI1 is deleted. All networks and services created for the middle NSS are kept. The picture below shows the result in OpenStack and the logical result of deleting NSI1:


To remove NSI2, run the command: `osm nsi-delete my_shared_slice`.
## Using Kubernetes-based VNFs (KNFs)
OSM supports Kubernetes-based Network Functions (KNFs). This feature unlocks more than 20,000 packages that can be deployed alongside VNFs and PNFs. This section guides you through the deployment of your first KNF, from the installation of a Kubernetes cluster (in any of several ways) to the selection and deployment of the package.
### Kubernetes installation
The KNF feature requires an operational Kubernetes cluster, and there are several ways to get one running. From the OSM perspective, the Kubernetes cluster is not an isolated element, but a technology that enables the deployment of microservices in a cloud-native way. To handle the networks and facilitate the connection to the infrastructure, the cluster has to be associated with a VIM. There is a special case where the Kubernetes cluster is installed on bare metal, without management of the networking part, but in general OSM considers that the Kubernetes cluster is located in a VIM.
For OSM you can use one of these three different ways to install your Kubernetes cluster:
1. [OSM Kubernetes cluster Network Service](15-k8s-installation.md#installation-method-1-osm-kubernetes-cluster-from-an-osm-network-service)
2. [Self-managed Kubernetes cluster in a VIM](15-k8s-installation.md#installation-method-2-local-development-environment)
3. [Kubernetes baremetal installation](15-k8s-installation.md#method-3-manual-cluster-installation-steps-for-ubuntu)
### OSM Kubernetes requirements
After the Kubernetes installation is completed, check that your cluster has the following components:
1. [Kubernetes Loadbalancer](15-k8s-installation.md): to expose your KNFs to the network
2. [Kubernetes default Storageclass](15-k8s-installation.md): to support persistent volumes.
### Adding a Kubernetes cluster to OSM
In order to test a Kubernetes-based VNF (KNF), you need a K8s cluster, and that K8s cluster is expected to be connected to a VIM network. For that purpose, you will have to associate the cluster with a VIM target, which is the deployment target unit in OSM.
The following figures illustrate two scenarios where a K8s cluster might be connected to a network in the VIM (e.g. `vim-net`):
- A K8s cluster running on VMs inside the VIM, where all VMs are connected to the VIM network
- A K8s cluster running on bare metal, physically connected to the VIM network


In order to add the K8s cluster to OSM, you can use these instructions:
```bash
osm k8scluster-add --creds clusters/kubeconfig-cluster.yaml --version '1.15' --vim <vim_name> --description "My K8s cluster" --k8s-nets '{"net1": "vim-net"}' cluster
osm k8scluster-list
osm k8scluster-show cluster
```
The options used to add the cluster are the following:
- `--creds`: the location of the kubeconfig file containing the cluster credentials
- `--version`: the current version of your Kubernetes cluster
- `--vim`: the name of the VIM where the Kubernetes cluster is deployed
- `--description`: a description for your Kubernetes cluster
- `--k8s-nets`: a dictionary of the cluster networks, where each `key` is an arbitrary name and each `value` is the name of the network in the VIM. In case your K8s cluster is not located in a VIM, you can use `'{net1: null}'`
In some cases, you might be interested in using an isolated K8s cluster to deploy your KNF. Although these situations are discouraged (an isolated K8s cluster does not make sense in the context of an operator network), it is still possible by creating a dummy VIM target and associating the K8s cluster to that VIM target:
```bash
osm vim-create --name mylocation1 --user u --password p --tenant p --account_type dummy --auth_url http://localhost/dummy
osm k8scluster-add cluster --creds .kube/config --vim mylocation1 --k8s-nets '{k8s_net1: null}' --version "v1.15.9" --description="Isolated K8s cluster in mylocation1"
```
### Adding repositories to OSM
You might need to add some repos from which to download the helm charts required by the KNF:
```bash
osm repo-add --type helm-chart --description "Bitnami repo" bitnami https://charts.bitnami.com/bitnami
osm repo-add --type helm-chart --description "Cetic repo" cetic https://cetic.github.io/helm-charts
osm repo-add --type helm-chart --description "Elastic repo" elastic https://helm.elastic.co
osm repo-list
osm repo-show bitnami
```
### KNF Service on-boarding and instantiation
KNFs can be onboarded using Helm Charts or Juju Bundles. The following sections show an example with a Helm Chart and another with a Juju Bundle.
#### KNF Helm Chart
Once the cluster is attached to your OSM, you can work with KNFs in the same way as with any VNF, onboarding and instantiating them. For instance, you can use the example below of a KNF consisting of a single Kubernetes deployment unit based on the OpenLDAP helm chart.
```bash
wget http://osm-download.etsi.org/ftp/Packages/hackfests/openldap_knf.tar.gz
wget http://osm-download.etsi.org/ftp/Packages/hackfests/openldap_ns.tar.gz
osm nfpkg-create openldap_knf.tar.gz
osm nspkg-create openldap_ns.tar.gz
```
You can instantiate two NS instances:
```bash
osm ns-create --ns_name ldap --nsd_name openldap_ns --vim_account <vim_account>
osm ns-create --ns_name ldap2 --nsd_name openldap_ns --vim_account <vim_account> --config '{additionalParamsForVnf: [{"member-vnf-index": "openldap", additionalParamsForKdu: [{ kdu_name: "ldap", "additionalParams": {"replicaCount": "2"}}]}]}'
```
Check in the cluster that the pods are properly created (a `kubectl` sketch follows this list):
- The pods associated to ldap should be using version `openldap:1.2.1` and have 1 replica
- The pods associated to ldap2 should be using version `openldap:1.2.1` and have 2 replicas
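A quick way to verify this from the cluster is with `kubectl` (a sketch; the pod and namespace names depend on your deployment and are illustrative):
```bash
# Locate the OpenLDAP pods created for both NS instances
kubectl get pods --all-namespaces | grep ldap
# Inspect the image version of a given pod
kubectl describe pod <openldap-pod> -n <namespace> | grep Image:
```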
Now you can upgrade both NS instances:
```bash
osm ns-action ldap --vnf_name openldap --kdu_name ldap --action_name upgrade --params '{kdu_model: "stable/openldap:1.2.2"}'
osm ns-action ldap2 --vnf_name openldap --kdu_name ldap --action_name upgrade --params '{kdu_model: "stable/openldap:1.2.1", "replicaCount": "3"}'
```
Check that both operations are marked as completed:
```bash
osm ns-op-list ldap
osm ns-op-list ldap2
```
Check in the cluster that both actions took place:
- The pods associated to ldap should be using version openldap:1.2.2
- The pods associated to ldap2 should be using version openldap:1.2.1 and have 3 replicas
Rollback both NS instances:
```bash
osm ns-action ldap --vnf_name openldap --kdu_name ldap --action_name rollback
osm ns-action ldap2 --vnf_name openldap --kdu_name ldap --action_name rollback
```
Check that both operations are marked as completed:
```bash
osm ns-op-list ldap
osm ns-op-list ldap2
```
Check in the cluster that both actions took place:
- The pods associated to ldap should be using version openldap:1.2.1
- The pods associated to ldap2 should be using version openldap:1.2.1 and have 2 replicas
Delete both instances:
```bash
osm ns-delete ldap
osm ns-delete ldap2
```
Delete the packages:
```bash
osm nspkg-delete openldap_ns
osm nfpkg-delete openldap_knf
```
Optionally, remove the repos and the cluster:
```bash
#Delete repos
osm repo-delete cetic
osm repo-delete bitnami
osm repo-delete elastic
#Delete cluster
osm k8scluster-delete cluster
```
#### KNF Juju Bundle
This is an example of how to onboard a service that uses a Juju Bundle. The service to be onboarded is a MediaWiki comprising a mariadb-k8s database and a mediawiki-k8s frontend.
```bash
wget http://osm-download.etsi.org/ftp/Packages/hackfests/mediawiki_cnf.tar.gz
wget http://osm-download.etsi.org/ftp/Packages/hackfests/mediawiki_cnf_ns.tar.gz
osm nfpkg-create mediawiki_cnf.tar.gz
osm nspkg-create mediawiki_cnf_ns.tar.gz
```
You can instantiate the Network Service:
```bash
osm ns-create --ns_name hf-k8s --nsd_name ubuntu-cnf-ns --vim_account <vim_account>
```
To check the status of the deployment, you can run the following command:
```bash
osm ns-op-list hf-k8s
+--------------------------------------+-------------+-------------+-----------+---------------------+--------+
| id | operation | action_name | status | date | detail |
+--------------------------------------+-------------+-------------+-----------+---------------------+--------+
| 364c1378-ba86-447e-ad00-93fc1bf1bdd5 | instantiate | N/A | COMPLETED | 2020-02-24T13:49:03 | - |
+--------------------------------------+-------------+-------------+-----------+---------------------+--------+
```
To remove the network service, run:
```bash
osm ns-delete hf-k8s
```