Introduction

eryph-zero

Eryph-zero is designed to manage virtual machines on a single Hyper-V host.

It is called eryph-zero because it has zero dependencies, except that you must run it on a Windows system that supports Hyper-V (amd64). This includes Windows 10 and 11, even for enterprise-grade features such as virtual networking.

Installation

eryph-zero can be installed on

  • Windows Server 2016
  • Windows Server 2019
  • Windows Server 2022
  • Windows 10 (22H2) Pro/Enterprise
  • Windows 11 Pro/Enterprise

Preparations

  1. Enable Hyper-V
    The installation script below automatically enables Hyper-V if it is not yet installed, which requires a reboot. To avoid having to run the script twice, enable Hyper-V before running it.
    Follow these instructions to enable Hyper-V on your machine:
    https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v
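
    If you prefer to enable the feature yourself, the linked instructions boil down to the following standard Windows command (not part of eryph), run in an elevated PowerShell; a reboot is required afterwards:

    ```powershell
    # Enable the Hyper-V feature including management tools (reboot required)
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
    ```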

  2. Invitation code
    Beta downloads require an invitation code. If you have not yet received an invitation code, please sign up for the eryph-zero waitlist at https://www.eryph.io.

Installation script

We provide a PowerShell script to download and install eryph:

Security Notice!

Please inspect the file install.ps1 before running any of the commands below. We know it is safe, but you should always check the security and content of any script from the Internet that you are not familiar with.

Install with Command Prompt (cmd.exe)

Run the following command in an elevated (as Admin) command prompt:

@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "[System.Net.ServicePointManager]::SecurityProtocol = 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/eryph-org/eryph/main/src/apps/src/Eryph-zero/install.ps1'))"`

Install with PowerShell

Run the following command in an elevated (as Admin) PowerShell:

Set-ExecutionPolicy Bypass -Scope Process -Force; `
[System.Net.ServicePointManager]::SecurityProtocol = `
[System.Net.ServicePointManager]::SecurityProtocol -bor 3072; `
iex ((New-Object System.Net.WebClient).DownloadString( `
 'https://raw.githubusercontent.com/eryph-org/eryph/main/src/apps/src/Eryph-zero/install.ps1'))

Install with Options

To provide additional options to the installation, use PowerShell with a slightly different command:

Set-ExecutionPolicy Bypass -Scope Process -Force; `
[System.Net.ServicePointManager]::SecurityProtocol = `
[System.Net.ServicePointManager]::SecurityProtocol -bor 3072; `
iex "& { `
 $(irm https://raw.githubusercontent.com/eryph-org/eryph/main/src/apps/src/Eryph-zero/install.ps1) `
 } [YOUR OPTIONS] "

Replace [YOUR OPTIONS] with parameters of install.ps1.
Common options are:

  • -Force: overwrites an existing installation
  • -Email: the email address for your invitation code
  • -InvitationCode: your beta invitation code
  • -Version: a specific version to install. A list of available versions can be found here.
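
For example, a forced reinstallation with your invitation code could look like this (the invitation code and email address are placeholders):

```powershell
Set-ExecutionPolicy Bypass -Scope Process -Force; `
[System.Net.ServicePointManager]::SecurityProtocol = `
[System.Net.ServicePointManager]::SecurityProtocol -bor 3072; `
iex "& { `
 $(irm https://raw.githubusercontent.com/eryph-org/eryph/main/src/apps/src/Eryph-zero/install.ps1) `
 } -Force -InvitationCode YOUR-CODE -Email you@example.com "
```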

Basic Configuration

Eryph-zero comes with reasonable defaults that should work in all environments without additional configuration.
However, you should consider the following topics:

  • Space requirements and storage locations
    Catlets (especially for Windows) take up a lot of space on your hard drive. Even though disk space is not expensive these days, it may still be limited if you install eryph on your laptop. Therefore, you should choose storage locations on a volume with enough free space (at least 20 GB).

    Eryph will store the local genepool with all downloaded artifacts in the default Hyper-V disk folder. On Windows 10/11 this is C:\Users\Public\Documents\Hyper-V\Virtual Disks.
    You can either change this to another folder (search online for guides), or configure a custom path for eryph in the agent settings.

    Please note that if you change the path, you will have to move the eryph folder manually if you have already created catlets.

  • Network access and adapters
    The default configuration of eryph uses virtual networks that allow catlets to communicate only with the Internet and the host.
    Eryph-zero creates 2 virtual network adapters to provide this networking:

    • br-int, which is used for internal communication of virtual networks.
      It works even if it is shown as disabled on the host (which just means that the host has no IP for it).
    • br-nat, which is used for NAT between the host and the virtual networks.
      This adapter maps virtual network traffic to the host IP via network address translation (NAT).
      Do not modify this adapter.

    Do not rename the eryph adapters!

    In general, you should not touch the eryph network adapters at all.
    Above all, never rename them: OpenVSwitch (which eryph uses internally for virtual networking) references adapters by name. This includes the eryph_overlay switch, which cannot be used by other virtual machines and should not be changed.

    If networking fails after changing adapters, please reboot and sync networks afterwards.
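
    A network sync can be triggered from an elevated command prompt with the eryph-zero command line (see Network configuration below):

    ```powershell
    # Re-apply / repair the host network configuration
    eryph-zero networks sync
    ```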

    Eryph-zero's default NAT-based virtual networks are designed to work with any type of surrounding network: your office environment, your home office with wireless, or even a hyperscaler host with its own virtual network. All basic network services (routing, DHCP, DNS) are provided internally and don't change when the host (network) is moved.

    Shared access

    The default network configuration does not allow remote access to the virtual networks.
    If you want to share access to eryph, please read the advanced guides Networking and Shared Access & Security.

  • Client configuration and non-admin access
    By default you can access eryph-zero only as administrator.
    This is caused by the way eryph clients look up credentials:

    1. The client will try to use your default identity client configuration and its credentials.
    2. The client will fall back to a built-in identity client called system-client. This identity client is protected by file system security to allow only administrator access.

    To access eryph without administrator privileges, you must first create your own identity client. See Security for details on how to create and use an identity client.


Concepts

Virtual Machines vs. Catlets

Catlets are virtual machines that are automatically managed by eryph.
The name "catlet" refers to the pets vs. cattle concept in DevOps. However, with a catlet, you can choose how you want to manage your virtual machines - mostly automated, partially automated, fully admin-managed:

  • Catlets can be created and destroyed as quickly as cattle machines in DevOps.
  • Catlets are automatically configured on first boot.
  • They can also be treated like pets to be handled with care (manual configuration).

Build and manage catlets with eryph
Catlets are built from catlet specifications (see also Catlet Specifications). These declare how the catlet should be built and configured.

This process is called breeding and feeding in eryph:

  1. Catlets are first built (bred) with all the attributes they inherit from their parent (which inherits all the attributes from its parent, and so on).
    Catlets can contain information on how to mutate certain attributes, so they can change compared to their parents.
  2. On first boot, catlets are fed with all the fodder their parent likes (declared in the parent catlet) and their own fodder. And like physical beings, a catlet looks like what it eats - so it will be configured as the fodder commands.

To create and manage catlets, we provide a set of PowerShell commands. See Using the Powershell for an introduction.
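
As a minimal sketch, building and starting a catlet from a specification file could look like this (the file name is illustrative, and the exact cmdlet and parameter names may differ - see Using the Powershell for the authoritative reference):

```powershell
# Build (breed) a catlet from a specification file and start it;
# feeding happens automatically on first boot.
Get-Content .\catlet.yaml | New-Catlet
Get-Catlet -Name mycatlet | Start-Catlet
```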

Manage catlets as Virtual Machines
Under the hood, catlets are still simple Hyper-V virtual machines. Therefore, in addition to using eryph to manage them, you can continue to use the Hyper-V commands and tools to manage them. In general, eryph assumes that you have other virtual machines created without eryph on your machine, so you can freely use it side-by-side with classic management.

Changing catlet properties manually is generally allowed, and any changes made manually to the virtual machine will be detected by eryph's inventory and applied to the catlet accordingly. In cases where this might cause inconsistencies with eryph, eryph won't apply further changes on its own (see also Storage Management and Frozen Storage).

Eryph-based virtual machines can also be included in backups, exported, or cloned. To support this, eryph stores information that is not available in the virtual machine itself under a metadata ID. This metadata ID is stored in the comment field of the virtual machine.
If you clone or restore a virtual machine with a metadata ID, it is automatically imported as a catlet by the eryph inventory.
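
On Hyper-V, the comment field is the virtual machine's Notes property, so you can inspect the metadata ID with the standard Hyper-V cmdlets ('mycatlet' is a placeholder name):

```powershell
# Show the eryph metadata ID stored in the VM's comment (Notes) field
(Get-VM -Name 'mycatlet').Notes
```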

You can also use eryph to build machines and then export them to another host without eryph installed. With the exception of virtual networking, which is only available with eryph, the machine will still be fully functional.

Genes and the Genepool

A key feature of any cloud environment is that it supports building machines from pre-built images or templates. Some also provide a way to inject bootstrapping configuration.

The concept of gene and the eryph genepool provide the same:

  • Genes are the artifacts from which catlets are built: volumes, fodder (configurations), and parent catlets.
  • The genepool is the registry of genes that are either publicly available or available to a single organization.

Besides the central genepool for eryph, there is a local genepool on each eryph host. The local genepool contains parent catlets and their volumes, and caches fodder genes. When you create a catlet, eryph downloads the required genes to the local genepool and extracts them if necessary. Further requests for the same catlet will not require any downloads as long as the parents have not changed. If you are offline, you can continue to work because eryph will only use locally available catlets.

Cleanup of the local genepool:
You can clean up the local genepool with the following PowerShell command:

Remove-CatletGene -Unused

Alternatively you can also delete all catlets and then delete the genepool folder (see Default settings for its location).

Storage Management

One of the challenges with native Hyper-V is how to apply proper storage management.
Because it builds on core Windows features with file systems and storage services, Hyper-V storage is extremely flexible, but it lacks a built-in way to abstract between physical storage layout and placement rules. As a result, administrators must manually manage where VM files are stored.

Eryph addresses this by applying rules to the way VM file storage is structured:

  • Each catlet is stored in a unique folder (named after its storage identifier).
    The storage identifier is either randomly generated or manually assigned.
    This ensures that even catlets with the same name never conflict at the storage level.

  • Volumes created for the catlet are created within a unique folder (using the catlet storage identifier by default).

  • When eryph projects are used, the storage folder of the catlet is placed in a folder named p_[projectname]. This ensures that catlets from different projects never conflict on the storage layer.

  • Optionally, eryph allows you to declare datastores.
    Datastores have a name and a path. The name is used to refer to the datastore in catlets. The path is used to place the catlet files in the file system. With datastores, eryph users don't need to know where data is stored. Typically, datastores are used for different qualities of service, such as high-speed SSD storage and slower (and cheaper) storage for archiving.

  • Further customization can be done with environments.
    Environments can be used to separate catlets from each other, even if they have the same configuration except for the environment name. Typical use cases for environments include a staging environment and a production environment. For example, the staging environment can run on one storage and the production environment on another.
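
Putting these rules together, the resulting on-disk layout can be sketched roughly as follows (the datastore path, project name, identifier, and exact file layout are illustrative, not authoritative):

```
D:\fast-storage\            <- datastore path
  p_myproject\              <- project folder
    a1b2c3d4\               <- catlet storage identifier
      ...                   (catlet and volume files)
```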

See also Agent Settings for information on configuring datastores and environments.

Frozen storage:
Eryph detects which environment, datastore, and project a file belongs to by reverse resolving the path according to the rules above. So if a file is manually moved from one datastore, environment, or project to another, eryph will change it accordingly in its own data.

If the path cannot be resolved to a valid path by the rules, it is considered frozen storage. Frozen storage will still be usable, but eryph will not change anything on the frozen storage (e.g. resize or mount a disk). To make it changeable again, the frozen storage must be moved back into the eryph file system structure.

Deleting storage files:
Eryph automatically deletes all files of a catlet when the catlet is deleted, if they use the same storage identifier as the catlet. Only disks created with another storage identifier will be kept.

Projects

Projects are groups of resources in eryph that can be managed independently. In general, resources in one project cannot access resources in another project:

  • Catlets cannot access each other, neither on the storage nor on the network (if overlay networks are used).
  • Users (identity clients) can be assigned to a project in specific roles to restrict access.

There is always a default project that only the built-in system client can access. See Security for more information on how to change this.

Virtual Networks

Hyper-V networking capabilities differ significantly between Windows Server editions and workstations. Windows workstations have a standard switch with built-in host-internal NAT and DHCP, but are limited in configuration. Server editions support full virtual networking, but require complex infrastructure. As a result, they are rarely used in smaller server environments that instead use VLANs to separate the network by security requirements.

Eryph supports virtual networks of any size on all versions of Windows because it uses OpenVSwitch and OVN for networking by default. OVN and OpenVSwitch are feature-rich and mature networking solutions used in large-scale virtualization platforms such as OpenStack. However, neither is easy to use, and both have a steep learning curve.
Therefore, eryph manages OpenVSwitch and OVN internally and eliminates the need to manually manage both. Instead, eryph's networking features can be managed with "network providers" and "project networks".

Catlet IPs
Catlet IPs and hostnames in virtual networks are managed by eryph and are only valid within the virtual network.
To access catlets remotely (from the host or other devices in the network), eryph assigns a second IP to each catlet from a network provider IP range.
To find out the IP addresses of a catlet, you can use the PowerShell command Get-CatletIp:

# This command lists the external IP addresses of all catlets:
Get-CatletIp

# Use the argument -InternalIp to retrieve the internal IP:
Get-CatletIp -InternalIp

Network providers
Network providers declare which networks are available to eryph. With a network provider, you declare the network type (a virtual network with OVN or a physical network), the network addresses, and the available IP ranges.

Eryph supports 3 types of network providers:

  • Flat networks
    These are used for physical networks that are made available to catlets by connecting them to a Hyper-V virtual switch. This is native Hyper-V networking without virtual networking. Flat networks do not provide network isolation.
  • Overlay networks
    This is the default network provider in eryph. The physical network defined in the network provider determines how the virtual networks are routed to the physical network, and IP ranges are used to provide NAT addresses between physical and virtual networks.
  • NAT Overlay Networks
    These are the default networks in eryph-zero. NAT overlay doesn't require a physical network to be defined at all. Instead, a host-only network is defined, which is used only for communication between the eryph-zero host and the virtual networks on it.

See also Network configuration on how to manage network providers with eryph-zero.

Project networks

Project networks declare virtual networks available within a project.
Since each project is independent and, at least for overlay networks, not dependent on the surrounding physical network, they can be freely defined.
As a rule of thumb, for most use cases you will not need to change the default project network. But if you do, you can manage them like catlets with specification files.

Typical cases where you need to change project networks:

  • Change of DNS settings.
  • Mimic physical networks by building the virtual network with the same structure as the physical network.
  • Use of environments that have subnets per environment.
  • Use of flat networks.

IPv6
Currently IPv6 is not supported in eryph, although some configurations already define settings for IPv6. We plan to add IPv6 support in the near future.

Inventory

As explained above, eryph allows catlets to be changed with virtual machine management tools. To detect changes, it runs an inventory on all virtual machines on the host every 10 minutes.

The inventory basically performs the following steps:

  • Detects if a virtual machine has been added that contains a metadata ID. If that happens, the machine is imported as a catlet (the metadata is recreated in this case), as long as the metadata ID is known to eryph.

    The primary use case for this is restoring from a backup.

  • Detects the storage identifier of each storage artifact (see also Storage Management).

  • Imports settings such as memory, CPU, and capabilities.

Security

Access to eryph is protected by a built-in OpenID identity service.

On eryph-zero, this identity service doesn't use user identities, but only client identities.
Identity clients authenticate with a private key generated by the identity service. In addition, clients are restricted by scopes on the operations they can perform in eryph.

There is always a default client in eryph-zero called system-client. Because this client is protected by administrator credentials, it can only be accessed as an administrator.
To allow access as a normal user without administrative privileges, or for remote access, you must create dedicated clients.

To create a client for local access to eryph-zero without administrative privileges, run the following commands:

# This will create an identity client with full access including identity management
New-EryphClient -Name [client name] `
   -AllowedScopes compute:write,identity:write `
   -AddToConfiguration -AsDefault  # adds to client configuration

Or a more secure variant that disables identity management access for yourself by default:

# This will create a client with compute access only
New-EryphClient -Name [client name] `
   -AllowedScopes compute:write `
   -AddToConfiguration -AsDefault

# To access identity commands, you now need to explicitly use the system client:
$systemClient = Get-EryphClientCredentials `
    -SystemClient -Configuration zero

# Use the credential options for client commands:
Get-EryphClient -Credentials $systemClient

Now you can create a project for your user to work with:

New-EryphProject -Name myProject

Or add yourself to the default project:

$systemClient = Get-EryphClientCredentials `
    -SystemClient -Configuration zero
$clientId = (Get-EryphClientConfiguration `
    -Default -Configuration zero).Id
Add-EryphProjectMemberRole -MemberId $clientId `
    -ProjectName default -Role Owner -Credentials $systemClient

The above command to create a client saves it directly to your local client configuration by default.
See Client Configuration for information about managing client configurations.


Command Line

Normally, you manage eryph through a client library such as the PowerShell client.
However, some operations require you to interact directly with eryph-zero from the command line.

  eryph-zero [commands]

Command Input
All commands that support input accept it either from the input pipe or from a file with the argument --inFile.

Command Output
All commands that produce output can redirect it to the output pipe via > or to a file with the argument --outFile.

Installation Management

The commands install and uninstall are used to install or uninstall eryph-zero.
However, you should not use them directly. Instead, install eryph-zero with the installation script above and uninstall it with eryph-uninstaller.exe.

Agent settings

Agent settings commands are used to read and update agent settings in eryph-zero.

eryph-zero agentsettings

Usage:
  eryph-zero agentsettings [command] [options]

Commands:
  get
  import
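
Combined with the input and output redirection described above, a typical round trip is to export the settings, edit them, and import them again (the file name is illustrative):

```powershell
# Export current agent settings to a file, edit it, then re-import it
eryph-zero agentsettings get --outFile settings.yml
# ... edit settings.yml ...
eryph-zero agentsettings import --inFile settings.yml
```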

Genepool authentication

To access private genesets in the eryph genepool, eryph-zero needs to authenticate with the genepool. You can use the following commands to log in to the genepool and create an API key for accessing your private organization.

eryph-zero.exe genepool

Usage:
  eryph-zero genepool [command] [options]

Commands:
  info
  login
  logout
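
For example, to check the current authentication state and then log in to the genepool:

```powershell
eryph-zero genepool info    # show the current genepool authentication state
eryph-zero genepool login   # log in and create an API key for your organization
```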

Network configuration

The network commands of eryph-zero are used to import network providers into eryph-zero and to apply the network changes required on the host for the imported providers. They can also be used to repair the network configuration (sync).

eryph-zero.exe networks

Usage:
  eryph-zero networks [command] [options]

Commands:
  get
  import
  sync
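
Combined with the input and output redirection described above, you can export the current network provider configuration and re-import a changed one (the file name is illustrative):

```powershell
eryph-zero networks get --outFile providers.yml    # export current providers
eryph-zero networks import --inFile providers.yml  # apply changed providers
```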

Network changes and rollbacks
Especially on remotely accessed systems, changing the network configuration can be critical, as you may interrupt your own connection while applying the changes. Therefore, eryph does its best to ensure that the changes are automatically rolled back in case of an error.
Before any changes are made, you will be prompted to confirm that the changes should be applied.

However, you should still be careful when changing networks remotely:

  • If possible, use a dedicated management network connection or network independent remote management (available for many servers).
  • If you only have one network interface and want to use overlay networking on that interface, be sure to enable bridge_options/default_ip_mode.
  • Note that WiFi generally doesn't work well with virtual networks. If you are using WiFi, you should only use nat_overlay.