References: Configuration Files

Eryph uses YAML files for most configuration settings. The only exception is the startup application settings, which are configured in JSON.

The following configuration files are available:

  • Agent settings
    Agent settings configure the host agents that manage hypervisor hosts. The agent settings can be imported into a host agent from the agent's command line or from the eryph-zero command line.

  • Network Config
    The host network configuration defines how networks are built within eryph.
    This is done by declaring "network providers", which specify which networks are visible to eryph and how they are physically connected. The network configuration is imported into the network agent and can then be synchronized with any agent host from the host agent's command line or from the eryph-zero command line.

Agent settings

Default settings

These settings contain the defaults that are used unless overridden by datastore or environment settings.

settings group name: defaults
setting type: object

  • vms
    Configures or reads the path where the VM data is stored on the host agent.
    It defaults to the Hyper-V VM data path configured in the operating system. Therefore, it can also be changed by editing the Hyper-V defaults.

  • volumes
    Configures or reads the path where disk volumes are stored on the host agent.
    It defaults to the Hyper-V VM disk path configured in the operating system. Therefore, it can also be changed by editing the Hyper-V defaults.
    This setting determines where the local genepool is stored on the host agent.
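Put together, the defaults group could look like this in an agent settings file. This is a sketch assuming the group name maps to a top-level YAML key; the paths are placeholders:

```yaml
# Agent settings - defaults group (illustrative paths)
defaults:
  vms: 'c:\eryph\vms'         # where VM data is stored
  volumes: 'c:\eryph\volumes' # where disk volumes (and the local genepool) are stored
```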

Data store settings

These settings configure which datastores should be available within eryph. Datastores allow you to configure where VM data is stored without exposing the physical path to catlet breeders.

settings group name: datastores
setting type: list of named objects

  • name
    The name of the datastore.

  • path
    Path to the datastore.
    The path has to be accessible for the host agent. It can also be a network share; in this case, access has to be granted to the host's computer account.
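For example, a datastore on a local disk and one on a network share might be declared like this (the names and paths are placeholders):

```yaml
# Agent settings - datastores group (illustrative values)
datastores:
- name: fast
  path: 'd:\eryph\fast'
- name: archive
  path: '\\fileserver\eryph\archive' # the share must allow the host computer account
```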

Environment settings

These settings configure environment names and (optionally) override both the defaults and the datastore paths.
This allows you to separate the storage structure by environment, e.g. to provide a different quality of service per environment. For example, a development or staging environment can be completely separated from production at the storage level.

settings group name: environments
setting type: list of named objects

  • name
    The name of the environment.

  • defaults
    Defaults for the environment.
    Same settings as the defaults settings above. If not configured, the host defaults will be used.

  • datastores
    Datastore overrides by environment.
    Same settings as the datastore settings above. If not configured, the host defaults will be used.
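A combined example for a separate development environment might look like this (all names and paths are placeholders):

```yaml
# Agent settings - environments group (illustrative values)
environments:
- name: dev
  defaults:
    vms: 'd:\eryph\dev\vms'
    volumes: 'd:\eryph\dev\volumes'
  datastores:
  - name: fast
    path: 'd:\eryph\dev\fast'
```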

Network Config

The network configuration file declares one or more network providers. Network providers declare which networks should be accessible to eryph. For overlay networks, the provider networks are not directly exposed to catlets. Instead, eryph creates a virtual network that allows the catlets only limited access to the provider network (e.g. routing to the gateway).

eryph-zero default configuration

Eryph-zero has a built-in default network provider configuration that uses a NAT overlay and therefore doesn't require any configuration. It uses the network 10.249.248.0/22 as the local NAT network, which is expected to be unusual enough not to conflict with any local network.

network_providers:
- name: default
  type: nat_overlay
  bridge_name: br-nat
  subnets: 
  - name: default
    network: 10.249.248.0/22
    gateway: 10.249.248.1
    ip_pools:
    - name: default
      first_ip: 10.249.248.10
      next_ip: 10.249.248.10
      last_ip: 10.249.251.241

Network provider settings

These settings declare network providers.

settings group name: network_providers
setting type: list of named objects

  • name
    The name of the network provider.
    There should be a network provider "default" of type nat_overlay or overlay, as projects use overlay networks by default. If you want to use flat networks, declare them as additional providers and then add them to the project networks.

  • type
    Type of the network provider.
    One of: overlay, nat_overlay, flat
    This configures which type of network is built by the network provider.
    Flat networks are standard Hyper-V networks with virtual switches. The network provider settings are only used to declare how the physical network is built.
    Overlay networks are built using OpenVSwitch/OVN virtual networks. The provider network configuration is used to create virtual routing between the physical and virtual networks. Overlay networks use virtual bridges to separate traffic. On Windows hosts, a bridge is a virtual network adapter.
    NAT overlay networks are a variation of overlay networks that don't require a physical network connection. Instead, a local host network is created with a NAT configuration on the host that allows access to virtual networks from the host. However, NAT overlay networks do not allow access from clients other than the host.

  • bridge_name
    Name of the overlay network bridge.
    The bridge adapter defines where traffic is routed on the virtual network. It should be a name like br-pif, br-lan, etc. The name br-int is reserved for the internal overlay network bridge.

  • bridge_options
    Configures additional options for the network bridge.
    See the bridge_options settings below.

  • adapters
    A string array of network adapter names.
    Each adapter listed here is used as a physical connection from the bridge adapter to the physical network. Multiple adapters can be used for load balancing and failover. NAT overlay networks and flat networks ignore this setting.

  • switch_name
    Name of the virtual network switch for flat networks.
    This configures the switch name which should be used when connecting a catlet to the flat network. NAT overlay and overlay networks ignore this setting.

  • vlan
    VLAN id for connections from catlets.
    This optional setting configures a VLAN that should be used for traffic from catlet networks.

  • subnets
    Subnets provided by the network provider.
    This setting configures the subnets for the network provider. See the subnet settings below.
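As a sketch, an overlay provider connected to a physical network might be declared like this. The adapter names, VLAN id and addresses are placeholders:

```yaml
# Network config - overlay provider (illustrative values)
network_providers:
- name: default
  type: overlay
  bridge_name: br-pif
  adapters:
  - 'Ethernet 1'
  - 'Ethernet 2'   # second adapter for load balancing and failover
  vlan: 100
  subnets:
  - name: default
    network: 192.168.100.0/24
    gateway: 192.168.100.1
    ip_pools:
    - name: default
      first_ip: 192.168.100.10
      last_ip: 192.168.100.240
```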

Bridge Options

settings group name: bridge_options
setting type: object

  • bridge_vlan
    VLAN id for the bridge adapter.
    This configures the VLAN used for the bridge adapter. Traffic from catlet networks is not affected by this setting.

  • vlan_mode
    VLAN mode for the bridge adapter.
    One of: access, native_untagged, native_tagged
    Normally you don't have to use this setting. However, if you have problems accessing the VLAN because traffic is already tagged or untagged on the switch, you can control here whether a VLAN tag should be added or not.

  • default_ip_mode
    Configures whether the bridge adapter should have an IP address.
    One of: dhcp, disabled
    By default, bridge adapters don't need to be enabled on the host/network controller to route traffic for catlet networks. However, if you don't have multiple adapters available to separate host traffic from virtual network traffic, you can also use the bridge for traffic to/from the host/network controller.
    If you set this setting to dhcp, the bridge will be enabled for DHCP traffic. This only happens when the bridge is created.
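Within a provider declaration, the bridge options might be used like this (the VLAN id and mode are illustrative):

```yaml
# Network config - bridge_options within a provider (illustrative values)
network_providers:
- name: default
  type: overlay
  bridge_name: br-lan
  bridge_options:
    bridge_vlan: 10            # host traffic tagged with VLAN 10
    vlan_mode: native_untagged
    default_ip_mode: dhcp      # let the bridge obtain a host IP via DHCP
  adapters:
  - 'Ethernet 1'
```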

Subnet settings

settings group name: subnets
setting type: list of named objects

  • name
    The name of the subnet.
    There should be at least one subnet "default", as this is also the default for project networks.

  • network
    Address of the subnet network in CIDR notation.

  • gateway
    IPv4 address of the subnet's gateway.

  • ip_pools
    Configures the IPv4 pools for this subnet.
    IP pools provide ranges of IP addresses from which eryph assigns addresses. See the ip_pools settings below.

settings group name: subnet/ip_pools
setting type: list of named objects

  • name
    The name of the IP pool.

  • first_ip
    The first IP address in the pool.

  • last_ip
    The last IP address in the pool.

  • next_ip
    The next IP address to be assigned from the pool.
    Normally you don't need to change this setting, as it is only used to initialize the IP pool in case the current IP pool position is unavailable.
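Since ip_pools is a list of named objects, a subnet can presumably declare more than one pool, e.g. to keep part of the range out of automatic assignment. A sketch with placeholder addresses:

```yaml
# Subnet with two IP pools (illustrative values)
subnets:
- name: default
  network: 10.0.10.0/24
  gateway: 10.0.10.1
  ip_pools:
  - name: default
    first_ip: 10.0.10.50
    last_ip: 10.0.10.200
  - name: reserved
    first_ip: 10.0.10.201
    last_ip: 10.0.10.250
```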