
LUX

Lain Uplink eXchange

Problem

When hosts are not bound to static IPs, or a home deployment lacks proper network equipment, keeping track of where machines can be reached becomes difficult. Lain Uplink eXchange aims to resolve these issues by providing distributed host information resolution.

LUX - The solution

Nodes discover other nodes through pre-configured neighbors, while every node holds the full set of host information, so the data is equally distributed across the network.

Host information can be anything implemented as a LuxOption. The current implementation can determine and encapsulate information such as

  • Hostname
  • WAN IPv4/IPv6
  • Host's network interfaces

Nodes provide convenient ways of accessing and managing this information, such as

  • XML RPC
  • DNS frontend

Nodes are also capable of calling external scripts whenever host information is updated. This is the primary goal: to dynamically update IPFW/PF/nftables rules and/or reconfigure tunnel devices.

Layout

A LUX network can be configured as shown here

   [HOST test-laptop] --exterior--> [NODE Linux]
                                       ||
                                       ||
   [NODE DFly BSD]_<-interior->________**________<-interior->_[NODE FreeBSD]
         ^
         |
     <interior>
   [NODE OpenBSD]  <-exterior-- [HOST openbsd-host]

Exterior and Interior

Exterior channels are used for host-to-node communication. Interior channels are used for node-to-node sync. Sync packets are large, since they carry the state of all hosts, therefore:

THE INTERIOR LINK MUST HAVE A HIGH MTU OR ALLOW IP FRAGMENTATION. If this is not done, large packets will be dropped and sync will be lost.
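
Before relying on sync, it can help to verify that large packets actually make it across the interior link. The interface name and peer address below are placeholders taken from the example configs, not fixed values.

# probe the interior path with a large payload and the Don't Fragment bit set
ping -c 3 -M do -s 1472 10.1.0.254    # Linux (iputils)
ping -c 3 -D -s 1472 10.1.0.254       # BSD

# if the link supports it, raise the MTU of the tunnel interface
ifconfig tun0 mtu 9000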

Setup

To begin with, install the binary appropriate for your platform, found in releases.

Then you need the first, initial node, from which later nodes and hosts will be created through RPC commands.

A LUX node config must look like this

<?xml version="1.0" encoding="UTF-8"?>
<node>
    <keystore>/var/lux/lux-node.dat</keystore>
    <id>{YOUR ID}</id>
    <log level="debug"></log>
    <rpc>unix:///var/run/lux-node.sock</rpc>
    <dns>127.0.0.1:9953</dns>
    <interior>127.0.0.1:9979</interior>
    <interior>10.1.0.254:9979</interior>
    <exterior>127.0.0.1:9980</exterior>
    <sync>1</sync>
</node>

Note that the keystore is an important file.

To generate the first node's keystore and obtain its ID, use lux --node --config <path to xml config> --bootstrap

Then put the ID into the config.
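
A first run might look like this. The config path is only an example, and the exact output format of --bootstrap is not documented here, so treat this as a sketch.

# generate the keystore and print the new node's ID
lux --node --config /etc/lux/node.xml --bootstrap

# copy the printed UUID into the <id> element of the config
vi /etc/lux/node.xml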

Adding hosts

A host must be added via the node's RPC, so that the node keystore holds the host's key.

lux --rpc unix:///var/run/lux-node.sock --rpc-new-host /tmp/host-keystore.dat

The output will show the ID of the new host, which must be used in the host config.

(Currently, the newly created keystore in /tmp is useless due to limitations and work in progress.)

Copy the node's keystore to the host's keystore location

cp /var/lux/lux-node.dat /var/lux/lux-host.dat

Then the host must be configured like this

<?xml version="1.0" encoding="UTF-8"?>
<host>
    <keystore>/var/lux/lux-host.dat</keystore>
    <id>{host ID from rpc-new-host}</id>
    <hostname>acer-laptop</hostname>
    <option type="wan">
        <wan method="identme"></wan>
    </option>
    <option type="netif"></option>
    <heartbeat>1</heartbeat>
    <node>
        <id>{NODE'S ID}</id>
        <exterior>{NODE'S EXTERIOR CHANNEL IP:PORT}</exterior>
    </node>
</host>
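
With the config in place, the host can be started and its registration checked from the node. The host config path below is a placeholder; the commands themselves are the ones documented in the Running and RPC sections.

# on the host machine
lux --host --config /var/lux/lux-host.xml

# on the node, verify the host shows up
lux --rpc unix:///var/run/lux-node.sock --rpc-get-hosts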

Adding new neighboring nodes

The procedure is similar to creating a host

lux --rpc unix:///var/run/lux-node.sock --rpc-new-node /tmp/new-keystore.dat

The new keystore must be used in the neighbor node's config.

The new node must be configured like this

<?xml version="1.0" encoding="UTF-8"?>
<node>
    <keystore>/var/lux/new-keystore.dat</keystore>
    <id>{new node ID from RPC output}</id>
    <log level="debug"></log>
    <rpc>unix:///var/run/lux-neighbor.sock</rpc>
    <dns>127.0.0.1:9953</dns>
    <interior>10.1.0.6:9979</interior>
    <exterior>10.1.0.6:9980</exterior>
    <sync>1</sync>
    <neighbor>
        <id>{ID of initial node}</id>
        <address>{exterior IP:Port of initial node}</address>
    </neighbor>
</node>
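
The keystore produced by --rpc-new-node then has to end up on the neighbor machine. A possible sequence, with placeholder hostnames and paths:

# copy the new keystore to the neighbor machine
scp /tmp/new-keystore.dat neighbor:/var/lux/new-keystore.dat

# start the neighbor node there with its own config
lux --node --config /etc/lux/neighbor-node.xml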

Update hook

Hooks can be added inside the node tag of the config

    <hook>
        <id>48d14b3a-e737-4914-9f14-d9906eebbf82</id>
        <script>/etc/lux/hooks/hook.ksh</script>
    </hook>

Where <id> is the ID of the host, and <script> is the full path to an executable.

The node will execute the script and write an XML state object to its stdin, which looks like this

<?xml version="1.0" encoding="UTF-8"?>
<host id="4cc879d8-b8e8-4276-869f-69e571f98538" hostname="openbsd-host">
    <state>
        <wan>
            <addr4 />
            <addr6 />
        </wan>
        <netif>
            <if name="lo0" idx="3">
                <addr type="ip6ll">fe80::1</addr>
            </if>
            <if name="vio0" idx="1">
                <addr type="ip6ll">fe80::d47d:22ff:fe32:9456</addr>
                <addr type="ip6ula">fd10:101::ed54:556f:5c9f:6c8d</addr>
                <addr type="ip6ula">fd10:101::679e:a520:6bc2:d451</addr>
                <addr type="ip4">10.1.0.7</addr>
            </if>
        </netif>
    </state>
</host>

And an example hook script

#!/bin/ksh

# read the XML state object that the node writes to stdin
host=$(</dev/stdin)

# extract the IPv4 address of interface vio0 with xmlstarlet
ip4=$(echo "$host" | xml sel -t -v '/host/state/netif/if[@name="vio0"]/addr[@type="ip4"]')

echo "$ip4" > /tmp/ip_for_config.txt

BE AWARE OF SHELL EXPANSION EXPLOITS!
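
Since the primary goal is updating firewall rules, a hook will typically feed the extracted address into the packet filter. A minimal sketch for PF is shown below; the table name lux_hosts is made up for illustration and must exist in pf.conf (e.g. table <lux_hosts> persist).

#!/bin/ksh

# read the state object from stdin and pull out the IPv4 address of vio0
host=$(</dev/stdin)
ip4=$(echo "$host" | xml sel -t -v '/host/state/netif/if[@name="vio0"]/addr[@type="ip4"]')

# replace the contents of the PF table with the new address
[ -n "$ip4" ] && pfctl -t lux_hosts -T replace "$ip4"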

DNS frontend

A LUX node can export host information via DNS over UDP. The frontend implements A and AAAA requests and can easily be integrated with Unbound / Dnsmasq.

.lux is the TLD and should be used as the "search" domain in Unbound / Dnsmasq.

host.lux OR host.wan.lux -> resolves the WAN IP of host

host.eth0.lux -> resolves the IP of the host's network interface eth0

subdomain.host.wan.lux, subdomain.host.eth0.lux -> same pattern, but with a subdomain. Note that the full 4-label notation is required when using subdomains.
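
The frontend can be queried directly to verify it is answering. The hostname and interface below come from the examples earlier in this document, and the port matches the <dns> element of the node config.

# WAN address of the host
dig @127.0.0.1 -p 9953 openbsd-host.lux A

# address of a specific network interface
dig @127.0.0.1 -p 9953 openbsd-host.vio0.lux A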

Running

lux --node --config <path to node xml>

lux --host --config <path to host xml>

RPC / CLI

lux --rpc unix:///var/run/lux-node.sock --rpc-get-hosts

lux --rpc unix:///var/run/lux-node.sock --rpc-get-keys

lux --rpc unix:///var/run/lux-node.sock --rpc-get-routes

And others. For reference, see --help.
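
The RPC commands compose with ordinary shell tools; for example, a crude way to watch the node's host table (a sketch, not a built-in feature):

# refresh the host list every 5 seconds
while sleep 5; do
    clear
    lux --rpc unix:///var/run/lux-node.sock --rpc-get-hosts
done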

"RFC"

Warning: this is an old RFC. It may not be correct or relevant.

Lain Uplink eXchange

        ______________                   ______________
        |            |                   |            |
        |   Node A   |     Interior      |   Node B   |
        |            |  <------------>   |            |
        |   State    |      State        |   State    |
        ______________                   ______________

               ^                                ^
            E  |                                | I
            x  |                                | n
            t  |                                | t
            e  |                                | e
            r  |                                | r
            i  |                                | i
            o  |                                | o
            r  |                                | r
               |                                |
        ______________                   ______________
        |            |                   |            |
        |   Host 1   |                   |   Host 2   |
        |            |                   |            |
        |            |                   |            |
        ______________                   ______________

Node

Each node receives heartbeats from various hosts via exterior connections and registers their status, name, WAN IP and uptime. Hosts can only query information about other hosts via interior connections. The state must remain the same on all nodes so that failover can be configured.

The node can also provide a DNS server front-end for easy integration with DNS resolvers such as unbound, dnsmasq or systemd-resolved.

Exterior and Interior

An exterior connection is made over a medium with low trustworthiness, e.g. the WAN/Internet, which is why the encryption layer protects against replay attacks and ensures the uniqueness of packets.

Interior connections are established over a medium with high trustworthiness, e.g. VPN tunnels such as OpenVPN or Tailscale. Only interior connections can be used to synchronize state between nodes.

Host

Each host sends a heartbeat and thus transmits its state - such as WAN IP, uptime and resource utilization. Hosts can send a heartbeat via both exterior and interior connections, but information requests and state synchronization can only happen via interior connections, because at the time of a request or sync the exterior uplink information, such as the WAN IP, may not be readable; therefore, the interior connection must always be available and cheap.

State

A host's state consists of:

  • Hostname(+.lux)
  • WAN IP

The state of the node is a table of the states of the hosts + their last heartbeat time. The state of the node must also contain the generation ID, which must be guaranteed to be unique within the last 128 generations. A new generation should only take place if one of the hosts has a new heartbeat.

Sync state broadcast

Once consensus has been reached, the state must be synchronized across all nodes. To achieve this, neighbouring nodes must be discovered and registered.

The sending process is as follows.

  1. The list of neighbors must already exist (through neighbor discovery).
  2. The sync packet is formed and contains the list of nodes it is addressed to: all neighbors (except the sending node).
  3. The sync packet is sent to all nodes via interior connections.
  4. So that the transmission does not loop and stays short, each node must merge in its own neighbor list when the sync packet arrives - this way neighbor discovery also takes place, the node's neighbor list is updated and/or new nodes are added to the broadcast.
  5. Each node must remember the generation ID of the sync packet and ignore all further sync packets with the same generation ID.

This procedure causes a tolerable yet noticeable amount of packet storms, but it also serves to resend packets if they are lost on the network path.

Encryption

The symmetric cipher AES-256 is used for communication from host to node and from node to node.

Each node has a node key and the node stores the host key for each host. The host must be configured with its host key, which is provided by the node.

The node key is only used for node-to-node communication and must be kept secret from everything except other nodes.

Identification

Each host and each node has its own unique UUID, which is used for packet addressing.

Software architecture

  • Configs are defined as INI files.
  • A daemon runs continuously and operates the protocol; it also provides a UNIX socket for CLI configuration.
  • A CLI communicates over the UNIX socket and issues commands.