Introduction

Fernglas is a looking glass for your network that uses BGP and BMP as data sources. It provides a web frontend with several convenient features for discovering and visualizing routing data. It is designed so that the looking glass never needs to access your routers or run queries on their CLI.

The project is currently under heavy development; please keep this in mind when deploying it.

Deployment

Deploying fernglas using NixOS

Requirements:

  • Optional: Set up the binary cache to use prebuilt binaries from our CI

$ nix run nixpkgs#cachix -- use wobcom-public

Add fernglas to your flake inputs:

inputs.fernglas = {
  type = "github";
  owner = "wobcom";
  repo = "fernglas";
};

Import the fernglas NixOS module and declare your configuration.

{ inputs, ... }:

let
  bmpPort = 11019;
in {
  imports = [
    inputs.fernglas.nixosModules.default
  ];

  services.fernglas = {
    enable = true;
    settings = {
      api.bind = "[::1]:3000";
      collectors = {
        my_bmp_collector = {
          collector_type = "Bmp";
          bind = "[::]:${toString bmpPort}";
          peers = {
            "192.0.2.1" = {};
          };
        };
      };
    };
  };

  networking.firewall.allowedTCPPorts = [ bmpPort ];
}

Configure a reverse proxy for the API and a web server to serve the frontend.

{ config, inputs, ... }:

{
  services.nginx = {
    enable = true;
    recommendedProxySettings = true;
    virtualHosts."lg.example.org" = {
      enableACME = true;
      forceSSL = true;
      locations."/".root = inputs.fernglas.packages.${config.nixpkgs.hostPlatform.system}.fernglas-frontend;
      locations."/api/".proxyPass = "http://${config.services.fernglas.settings.api.bind}";
    };
  };

  networking.firewall.allowedTCPPorts = [ 80 443 ];
}

Deploying fernglas using OCI Containers / Docker

We provide two different images. One contains the UI, which is statically built and can be served by any static web server, e.g. nginx, Apache, or Caddy. The other contains the Fernglas software itself and is considered the backend. It exposes an HTTP API for the UI.

Prerequisites

  • OCI runtime
    • You need Docker, podman, or a similar container daemon that can run the OCI containers we provide
  • A working reverse proxy setup
    • Fernglas only exposes HTTP. TLS and, if desired, authentication need to be handled by you.
  • A domain or subdomain
    • Fernglas currently does not support path-based deployments, e.g. example.org/fernglas.

Fernglas Backend

docker pull ghcr.io/wobcom/fernglas:fernglas-0.2.1

You need to write a config file for Fernglas. In the default setup it is expected at /config/config.yaml inside the container. See the chapter on configuration for more information on how to write the collectors configuration.
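A minimal invocation might look like the following sketch. The published port numbers are an assumption based on the example configuration used throughout this document (API on 3000, BMP on 11019); adjust them to match your config file:

$ docker run -d \
    --name fernglas \
    -v $(pwd)/config.yaml:/config/config.yaml:ro \
    -p 3000:3000 \
    -p 11019:11019 \
    ghcr.io/wobcom/fernglas:fernglas-0.2.1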

Fernglas Frontend

docker pull ghcr.io/wobcom/fernglas-frontend:fernglas-0.2.1

By setting serve_static: true in the config, the backend will also serve the bundled frontend files from the same web server as the API.

Alternatively, you can use the fernglas-frontend image as a base and serve the files with your own web server. The files need to be exposed at / of your domain, while the /api/ path should be proxied to the API server.
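One way to do this without running the image is to copy the files out of it with docker create and docker cp. Note that the path inside the image used below is an assumption; inspect the image to find the actual location of the frontend files:

$ docker create --name fernglas-frontend-tmp ghcr.io/wobcom/fernglas-frontend:fernglas-0.2.1
$ docker cp fernglas-frontend-tmp:/usr/share/fernglas-frontend ./frontend  # path is an assumption
$ docker rm fernglas-frontend-tmp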

Manual Setup

Backend

Download the statically linked binaries and place them at /usr/local/bin/. Make them executable.

$ sudo mkdir -p /usr/local/bin
$ wget -O- https://github.com/wobcom/fernglas/releases/download/fernglas-0.2.1/fernglas-static-0.2.1-x86-64-linux.tar.xz | sudo tar -C /usr/local/bin -xJ

File: /etc/fernglas/config.yml

api:
  bind: "[::1]:3000"
collectors:
  - collector_type: Bmp
    bind: "[::]:11019"
    peers:
      "192.0.2.1": {}

systemd service with hardening options:

File: /etc/systemd/system/fernglas.service

[Unit]
Description=fernglas looking glass
After=network.target

[Service]
ExecStart=/usr/local/bin/fernglas /etc/fernglas/config.yml
Environment=RUST_LOG=warn,fernglas=info
Restart=always
RestartSec=10
DynamicUser=true
DevicePolicy=closed
MemoryDenyWriteExecute=true
NoNewPrivileges=true
PrivateDevices=true
PrivateTmp=true
ProtectControlGroups=true
ProtectHome=true
ProtectSystem=strict

[Install]
WantedBy=multi-user.target

Optionally, add AmbientCapabilities=CAP_NET_BIND_SERVICE if your configuration requires binding to privileged ports.

Enable and start the service:

$ sudo systemctl enable --now fernglas.service

Don't forget to open the appropriate firewall ports if necessary!
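For example, with firewalld or ufw, opening the BMP port from the example configuration would look like this:

# firewalld
$ sudo firewall-cmd --permanent --add-port=11019/tcp
$ sudo firewall-cmd --reload

# ufw
$ sudo ufw allow 11019/tcp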

Frontend and Reverse Proxy

To serve the bundled frontend files from the same web server as the API, set serve_static: true in the config.

Alternatively download the prebuilt frontend tar. Extract it to /usr/local/share/fernglas-frontend.

$ sudo mkdir -p /usr/local/share/fernglas-frontend
$ wget -O- https://github.com/wobcom/fernglas/releases/download/fernglas-0.2.1/fernglas-frontend-0.2.1.tar.xz | sudo tar -C /usr/local/share/fernglas-frontend -xJ

Set up your reverse proxy / webserver. A configuration for nginx might look like this:

server {
	# we expect that you know how to set up a secure web server on your platform

	location / {
		root /usr/local/share/fernglas-frontend;
	}
	location /api/ {
		proxy_pass http://[::1]:3000; # match the api.bind setting from your fernglas config
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_set_header X-Forwarded-Host $host;
		proxy_set_header X-Forwarded-Server $host;
	}
}

Configuration

The example configuration uses ports 3000 and 11019 to expose the API and to collect the BMP data stream, respectively. You can change these ports if needed, but the BMP port must be reachable from your routers: typically you bind it to [::]:11019 and verify reachability from the outside. Note: You should also list the IP addresses of the expected peers in the config file, to ensure no unauthorized party can stream BMP data to your machine.
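To verify, you can check locally that the collector is listening and then test reachability from the router side of the network (the hostname below is a placeholder for your looking glass):

# on the looking glass host
$ ss -tln | grep 11019

# from a host on the router side
$ nc -vz lg.example.org 11019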

To hook up routers to your looking glass, you will have to configure either a BMP (BGP Monitoring Protocol) or BGP session between your router and the looking glass.

For both the BGP and BMP collectors, multiple instances can be created (listening on different ports, etc.) and per-peer configuration can be provided based on the client IP.

If multiple collectors collect data under the same hostname (as reported by the BMP or BGP peer, or set via name_override), the data is combined in the frontend. This can be used to build complex views of the Pre/Post-Policy Adj-In and LocRib tables using multiple BGP sessions (a sketch of such a setup follows the example below). When using BMP, everything should 'just work'.


collectors:

  # BMP collector that listens on port 11019 and accepts all incoming connections
  - collector_type: Bmp
    bind: "[::]:11019"
    default_peer_config: {}

  # BMP collector that listens on port 11020 and accepts incoming connections only from select client IPs
  - collector_type: Bmp
    bind: "[::]:11020"
    peers:
      "192.0.2.1": {}
      "192.0.2.2":
        name_override: router02.example.org

  # BGP collector that listens on port 1179 and accepts all incoming connections
  - collector_type: Bgp
    bind: "[::]:1179"
    default_peer_config:
      asn: 64496
      router_id: 192.0.2.100

  # BGP collector that listens on the privileged port 179 and accepts incoming connections only from select client IPs
  - collector_type: Bgp
    bind: "[::]:179"
    peers:
      "192.0.2.1":
        asn: 64496
        router_id: 192.0.2.100
      "192.0.2.2":
        asn: 64496
        router_id: 192.0.2.100
        name_override: router02.example.org
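As a sketch of the combined-hostname setup mentioned earlier (ports, addresses, and names here are illustrative only, not taken from the project docs): two BGP collectors on different ports receive separate sessions from the same router, e.g. one carrying the Pre-Policy Adj-In view and one the LocRib view, and the shared name_override merges them into a single router in the frontend:

collectors:
  # first session from the router, e.g. exporting the Adj-In view
  - collector_type: Bgp
    bind: "[::]:1179"
    peers:
      "192.0.2.1":
        asn: 64496
        router_id: 192.0.2.100
        name_override: router01.example.org
  # second session from the same router, e.g. exporting the LocRib view
  - collector_type: Bgp
    bind: "[::]:1180"
    peers:
      "192.0.2.1":
        asn: 64496
        router_id: 192.0.2.100
        name_override: router01.example.org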

Valid options for BMP peer config:

  • name_override (optional): Use this string instead of the sys_name advertised in the BMP initiation message

Valid options for BGP peer config:

  • asn (required): AS Number advertised to peer
  • router_id (required): Router ID advertised to peer
  • name_override (optional): Use this string instead of the hostname advertised in the BGP hostname capability
  • route_distinguisher (optional): Routes belonging to this route-distinguisher are advertised in the default table. See VRF/Routing-Instances for more information

Junos using BMP

routing-options {
    bmp {
        station looking-glass {
            station-address 2001:db8::100;
            station-port 11019;
            local-address 2001:db8::1;
            connection-mode active;
            route-monitoring {
                pre-policy { exclude-non-feasible; }
                post-policy { exclude-non-eligible; }
                loc-rib;
            }
        }
    }
}

Be aware that in some older Junos versions the BMP implementation is buggy and causes memory leaks in the routing process.

Tested Junos Release   Known Issues
20.2R3-S3.6            PR1526061: BGP Monitoring Protocol may not release IO buffer correctly
21.4R3-S2.3            PR1713444: The rpd process may crash when BMP socket write fails or blocks
21.4R3-S4.9            ✅ None
22.2R3.15              ✅ None

bird2 using BGP

protocol bgp bgp_lg from bgp_all {
  local as 64496;
  source address 2001:db8::1;
  neighbor 2001:db8::100 port 1179 as 64496;
  multihop 64;
  rr client;
  advertise hostname on;

  ipv6 {
    add paths tx;
    import filter { reject; };
    export filter { accept; };
    next hop keep;
  };
  ipv4 {
    add paths tx;
    import filter { reject; };
    export filter { accept; };
    next hop keep;
  };
}
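After committing the BIRD configuration, you can verify that the session towards the looking glass comes up using birdc's standard protocol inspection (bgp_lg is the protocol name from the example above):

$ birdc show protocols all bgp_lg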

VRF/Routing-Instances

VRFs/Routing-Instances are supported out of the box and do not need any configuration for BMP and MP-BGP sessions.

BGP Session in VRF

If the BGP session (on the router) belongs to a routing-instance, Fernglas should be configured to belong to the same routing-instance:

# config.yml
collectors:
    bgp_peer:
        collector_type: Bgp
        bind: "192.0.2.1:179"
        default_peer_config:
            asn: 64496
            router_id: 192.0.2.1
            # same RD as the peer
            route_distinguisher: 192.1.2.5:100

If this configuration is missing, routes are added as if they belonged to the default routing-instance rather than their actual routing-instance (and are consequently matched when querying for routes of the default routing-instance).

Appendix

NixOS specialArgs pattern

Borrowed from here

Problem: you want to get the home-manager NixOS module from the home-manager flake into a NixOS config.

The home-manager documentation says this: https://nix-community.github.io/home-manager/index.html#sec-flakes-nixos-module

{
  description = "NixOS configuration";

  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
    home-manager.url = "github:nix-community/home-manager";
    home-manager.inputs.nixpkgs.follows = "nixpkgs";
  };

  outputs = inputs@{ nixpkgs, home-manager, ... }: {
    nixosConfigurations = {
      hostname = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [
          ./configuration.nix
          home-manager.nixosModules.home-manager
          {
            home-manager.useGlobalPkgs = true;
            home-manager.useUserPackages = true;
            home-manager.users.jdoe = import ./home.nix;

            # Optionally, use home-manager.extraSpecialArgs to pass
            # arguments to home.nix
          }
        ];
      };
    };
  };
}

I find this ugly, because it forces you to put the home-manager include (modules = [ ... home-manager.nixosModules.home-manager ... ];) into flake.nix, since only there is the inputs or home-manager attrset in scope.

In "old" configurations, with niv or plain fetchTarball, you would have done this in the configuration.nix of the respective host(, or a common.nix, if it should be included on all hosts) imports = [ <home-manager/nixos> ]; actually anywhere in any nixos config file

The solution is the following:

flake.nix


{
  description = "NixOS configuration";

  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
    home-manager.url = "github:nix-community/home-manager";
    home-manager.inputs.nixpkgs.follows = "nixpkgs";
  };

  outputs = inputs@{ nixpkgs, ... }: {
    nixosConfigurations = {
      hostname = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        specialArgs = { inherit inputs; };
        modules = [
          ./configuration.nix
        ];
      };
    };
  };
}

configuration.nix

{ pkgs, lib, config, inputs, ... }:

{
  networking.hostName = "foo";
  [...]

  imports = [
    inputs.home-manager.nixosModules.home-manager
  ];
  home-manager.useGlobalPkgs = true;
  home-manager.useUserPackages = true;
  home-manager.users.jdoe = import ./home.nix;
}

specialArgs means that the inputs attrset is now available in the module arguments of every NixOS module. Just add it to the function parameters wherever you need it, like you do with pkgs, lib, and config. specialArgs also means that, in contrast to _module.args, this parameter to the module system is fixed and cannot be changed by NixOS modules themselves. This prevents infinite recursion when using values from the inputs attrset in NixOS module imports (which is exactly what we want to do).

Like this, using flake inputs in NixOS configs becomes much easier and more natural. In many cases you can rewrite "old" configs 1:1 to this new pattern without moving the includes to flake.nix.