Startup nebula ansible role

* Provide most options for nebula config
* Provide ability for future modifications to use other distros
* Provide information on usable variables in README
This commit is contained in:
Louis Abel 2024-04-18 18:30:15 -07:00
commit 1be345119f
Signed by: label
GPG key ID: 2A6975660E424560
23 changed files with 921 additions and 0 deletions

5
.ansible-lint Normal file

@ -0,0 +1,5 @@
---
# .ansible-lint
warn_list:
- yaml[line-length]
...

4
.yamllint.yml Normal file

@ -0,0 +1,4 @@
---
rules:
line-length: disable
...

86
README.md Normal file

@ -0,0 +1,86 @@
# ansible-role-nebula
This role helps set up Nebula on applicable nodes in the RESF. Most settings for this role are specific to the RESF and its projects, but it is perfectly possible to use it on your own. Note that it relies specifically on `rocky-release-core` being installable, which means Rocky Linux will work without issues. Fedora Linux will also work, as nebula is available in its base repositories. Other distributions may not work.
If there are issues with this role for your use case, please file an issue, or open a PR if you would like to enhance it.
## Requirements
Requirements are as follows:
* Enterprise Linux 9+ or Fedora Linux
* Ansible collections: community.general
* ansible-core >= 2.14
## Role Variables
The settable variables for this role are documented in the Variables section below and in `defaults/main.yml`. Variables that must be set per host (e.g. via hostvars or group vars) are called out in their descriptions.
## Dependencies
* `community.general`
* `ansible.posix` (only for tests)
## Variables
This is not an all-inclusive list. For additional variables, check `defaults/main.yml`.
| Variable | Default Value | Required | Description |
|---------------------------------------------|--------------------------------|-------------|-------------|
| `nebula_am_lighthouse` | `false` | Conditional | Sets this node as a lighthouse. |
| `nebula_lighthouse_interval` | 60 | No | How often (in seconds) a node reports to a lighthouse. |
| `nebula_routable_ip` | | No | The public routable IP that nebula needs to know about. If not set, it will be determined automatically. |
| `nebula_ip` | | Yes | IP required by nebula in the form of `X.X.X.X/X` (for example, `10.100.0.44/24`). |
| `nebula_ca_host` | | Yes | The hostname of the host which should be used as a CA. Exactly one (1) MUST be a CA. Required if `nebula_am_lighthouse` is `true`. |
| `nebula_is_ca` | `false` | Conditional | If the host is the CA or not. If `nebula_ca_host` is not defined, exactly one play host will need this set to `true`. Required if `nebula_am_lighthouse` is `true`. |
| `nebula_is_member` | `true` | Yes | Whether this node is a member of the mesh. |
| `nebula_ca_name` | RESF Nebula CA | Yes | Sets the name of the CA. |
| `nebula_ca_life` | 175200h | No | Sets the life of the CA certificate. |
| `nebula_ca_wait_timeout_secs` | 300 | No | Timeout in seconds for members to wait until the CA is ready to issue certificates. |
| `nebula_nodename` | `{{ ansible_facts.hostname }}` | No | Name of this nebula member. This is determined by the hostname by default. Otherwise, it can be set. |
| `nebula_groups` | `[]` | Conditional | List of groups that a node is assigned to. This is added to the issued certificate for the node. |
| `nebula_listen_host` | 0.0.0.0 | Conditional | The IP of the interface nebula will need to bind to. Default is all IPv4 interfaces. Use `[::]` if you want to enable IPv6. |
| `nebula_listen_port` | 4242 | Conditional | The port to bind to. Default is `4242`, matching the upstream documentation. |
| `nebula_listen_batch` | | No | Max number of packets to pull from the kernel on each syscall. |
| `nebula_listen_read_buffer` | | No | Read socket buffer for the UDP side. Values will be doubled in the kernel. Default is `net.core.rmem_default` on the system. |
| `nebula_listen_write_buffer` | | No | Write socket buffer for the UDP side. Values will be doubled in the kernel. Default is `net.core.wmem_default` on the system. |
| `nebula_listen_send_recv_error` | | No | Nebula will reply to packets it has no tunnel for with a recv_error packet. This helps speed up reconnection when nebula does not shut down cleanly. The caveat is that this can be abused to probe whether nebula is running on a host. |
| `nebula_punchy_punch` | `true` | Conditional | Used for NAT situations. Most cases NAT exists, so this is set to `true`. Enabling this causes the node to send small packets at an interval. |
| `nebula_punchy_respond` | | No | Set this to `true` if the node is unable to receive handshakes and will attempt to initiate one (in the case where hole punching fails in one direction). Useful if a host is behind a difficult NAT (like symmetric NAT). |
| `nebula_punchy_respond_delay` | | No | Set this to the number of seconds to delay before attempting a punch. Only valid if `nebula_punchy_respond` is `true`. |
| `nebula_punchy_delay` | | No | Set this to the number of seconds to delay/slow down punch responses. This is helpful if NAT is unable to handle certain race conditions. Only valid if `nebula_punchy_respond` is `true`. |
| `nebula_cipher` | aes | No | Unless you know what you're doing, avoid touching this setting. Refer to the nebula documentation. |
| `nebula_tun_disabled` | `false` | Conditional | Set to true if you do not want the tunnel up. Most people want a tunnel. |
| `nebula_tun_dev` | rneb01 | No | Set the tunnel device name. |
| `nebula_tun_drop_local_broadcast` | `false` | No | Toggles forwarding of local broadcast packets. This depends on the CIDR in the certificate for the node. |
| `nebula_tun_drop_multicast` | `false` | No | Toggles forwarding of multicast packets. |
| `nebula_tun_tx_queue` | 500 | No | Transmit queue length. Raise this number if there are a lot of transmit drops. |
| `nebula_tun_mtu` | 1300 | No | Default MTU for every packet. Safest setting is 1300 for internet routed packets. |
| `nebula_use_system_route_table` | `false` | No | Exactly as it says, set to `true` if you want to manage unsafe routes directly on the system route table with gateway routes instead of nebula. |
| `nebula_routes` | `[]` | No | List of dictionaries. Use this to create route based MTU overrides. If you have a known path that can support a larger MTU, you can set it this way. |
| `nebula_unsafe_routes` | `[]` | No | List of dictionaries. Allows you to route traffic over nebula to non-nebula nodes. This should be avoided unless you have hosts that cannot run nebula. See nebula documentation. |
| `nebula_logging_level` | info | No | Sets the log level. |
| `nebula_logging_format` | text | No | Formatting of the logs. Can be `text` or `json`. |
| `nebula_logging_disable_timestamp` | `false` | No | Disables timestamp logging. If the output is redirected to some logging system, set to `true`. |
| `nebula_logging_timestamp_format` | | No | Sets the timestamp format. Default is RFC3339 unless format is `text` and is attached to a TTY. |
| `nebula_firewall_conntrack_tcp_timeout` | 12m | No | Sets the connection tracking TCP timeout. |
| `nebula_firewall_conntrack_udp_timeout` | 3m | No | Sets the connection tracking UDP timeout. |
| `nebula_firewall_conntrack_default_timeout` | 10m | No | Sets the default connection tracking timeout. |
| `nebula_firewall_inbound_rules` | `any` | No | List of dictionaries. Sets the appropriate inbound rules for this node. |
| `nebula_firewall_outbound_rules` | `any` | No | List of dictionaries. Sets the appropriate outbound rules for this node. |
| `nebula_pki_disconnect_invalid` | `true` | No | Forcefully disconnects a client if its certificate is expired or invalid. |
| `nebula_pki_blocklist` | `[]` | No | List of certificate fingerprints that should be blocked even if they are valid. |
| `nebula_cert_public_key` | | No | Nebula node public key to use. If defined, no key pair is generated on the CA; this public key is signed and used. Requires `nebula_cert_private_key` to be set. |
| `nebula_cert_private_key` | | No | Nebula node private key to use. If defined, no key pair is generated on the CA; this key is deployed as-is. Requires `nebula_cert_public_key` to be set. |
| `nebula_preferred_ranges` | | No | Sets the priority order for underlay IP addresses. See [the documentation](https://nebula.defined.net/docs/config/preferred-ranges/). |
| `nebula_routines` | | No | Number of thread pairs to run that consume from the tun and UDP queues. The default is `1`, which means there's one tun and one UDP queue reader. The maximum recommended setting is half the available CPU cores. |
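The firewall rule variables take nebula's rule dictionary shape, as seen in `defaults/main.yml`. A minimal sketch (the `group` key and port `443` rule are illustrative examples of standard nebula rule options, not defaults of this role):

```yaml
nebula_firewall_inbound_rules:
  # allow everything from any nebula host (the role default)
  - port: any
    proto: any
    host: any
  # allow HTTPS only from members of a hypothetical "web" nebula group
  - port: 443
    proto: tcp
    group: web
```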
## Example Playbook
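A minimal sketch of a play using this role (the `nebula` group name is a placeholder; per-host settings such as `nebula_ip`, `nebula_is_ca`, and `nebula_am_lighthouse` are expected in hostvars or group vars, as in `tests/test.yml`):

```yaml
- name: Set up the nebula mesh
  hosts: nebula
  become: true
  roles:
    - rockylinux.nebula
```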
## License

GPL-3.0-only (see `meta/main.yml`).
## Author Information
Louis Abel <label@rockylinux.org>

123
defaults/main.yml Normal file

@ -0,0 +1,123 @@
---
################################################################################
# These are the defaults for this role. Commented items are values that can be
# set but are not set automatically. If they are defined, they will be used in
# tasks or templates as necessary.
################################################################################
# nebula high level system items
nebula_version: "1.8.2"
nebula_nodename: "{{ ansible_facts.hostname }}"
# This attempts to do a package installation of nebula. For the case of Rocky
# Linux, the SIG/Core infra repo has it available. EPEL may have it available.
nebula_use_native_package: true
nebula_service_name: "nebula.service"
nebula_config_dir: "/etc/nebula"
# these only apply when native package is set to false and you want to change
# where things go.
nebula_download_dir: "/opt"
nebula_local_bin_dir: "/usr/local/bin"
nebula_pkg_bin_dir: "/usr/bin"
# nebula member configuration items
nebula_is_ca: false
nebula_is_member: true
nebula_ca_name: "RESF Nebula CA"
nebula_ca_life: "175200h"
nebula_ca_wait_timeout_secs: "300"
# nebula_ca_host: somehost.example.com
nebula_groups: []
nebula_am_lighthouse: false
nebula_lighthouse_interval: "60"
# nebula_routable_ip: "X.X.X.X"
# nebula_ip: "X.X.X.X/24"
# nebula listening settings
# leaving buffers unset will use the system settings.
# see: https://nebula.defined.net/docs/config/listen/
nebula_listen_host: "0.0.0.0"
nebula_listen_port: "4242"
# nebula_listen_batch: "64"
# nebula_listen_read_buffer: "10485760"
# nebula_listen_write_buffer: "10485760"
# nebula_listen_send_recv_error: always
# static_map settings
# this role doesn't support DNS names (yet anyway). so these settings are here
# for when we do.
nebula_static_map: false
# nebula_static_map_cadence: "30s"
# nebula_static_map_network: "ip4"
# nebula_static_map_lookup_timeout: "250ms"
# punchy settings - use this for NAT situations. most cases there are NAT
# situations.
# see: https://nebula.defined.net/docs/config/punchy/
nebula_punchy_punch: true
# nebula_punchy_respond: true
# nebula_punchy_respond_delay: "5s"
# nebula_punchy_delay: "1s"
# cipher options
# AES is the default. Most hardware supports this. ALL NODES MUST HAVE THE SAME
# CIPHER OPTION SET.
nebula_cipher: "aes"
# tun settings
# see: https://nebula.defined.net/docs/config/tun/
nebula_tun_disabled: false
nebula_tun_dev: "rneb01"
nebula_tun_drop_local_broadcast: false
nebula_tun_drop_multicast: false
nebula_tun_tx_queue: "500"
nebula_tun_mtu: "1300"
# set this to true if you want to let the system route table handle unsafe
# routes instead of nebula.
nebula_use_system_route_table: false
# Use this to set an MTU override.
nebula_routes: []
# Use this to route nebula traffic to non-nebula nodes. Avoid this in
# normal cases. See documentation.
nebula_unsafe_routes: []
# logging settings
# see: https://nebula.defined.net/docs/config/logging/
nebula_logging_level: "info"
nebula_logging_format: "text"
nebula_logging_disable_timestamp: false
# nebula_logging_timestamp_format: "2006-01-02T15:04:05Z07:00"
# firewall settings
# see: https://nebula.defined.net/docs/config/firewall/
nebula_firewall_conntrack_tcp_timeout: "12m"
nebula_firewall_conntrack_udp_timeout: "3m"
nebula_firewall_conntrack_default_timeout: "10m"
# nebula_firewall_outbound_action: "drop"
# nebula_firewall_inbound_action: "drop"
nebula_firewall_inbound_rules:
- port: any
proto: any
host: any
nebula_firewall_outbound_rules:
- port: any
proto: any
host: any
# nebula certificate configuration items
# nebula_cert_public_key: |
# nebula_cert_private_key: |
nebula_pki_disconnect_invalid: true
nebula_pki_blocklist: []
nebula_nonmanaged_certs_download_dir: "/var/tmp"
nebula_nonmanaged_member_certs: {}
# nebula_ca_config_dir: "/etc/nebula"
# nebula_ca_bin_dir: "/usr/bin"
# nebula_preferred_ranges: []
# nebula_routines: 1
...
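The `nebula_nonmanaged_member_certs` dictionary is keyed by member name; based on the `non-managed.sh.j2` template, each value takes an `ip` (mandatory), an optional `groups` list, and an optional `public_key`. A hypothetical sketch (names, IPs, and the key body are placeholders):

```yaml
nebula_nonmanaged_member_certs:
  laptop01:
    ip: "10.100.0.90/24"
    groups:
      - roaming
    # with no public_key, a key pair is generated on the CA and bundled
    # into the downloaded zip alongside the signed certificate
  router01:
    ip: "10.100.0.91/24"
    # with public_key set, only the signed certificate is produced
    public_key: |
      -----BEGIN NEBULA X25519 PUBLIC KEY-----
      ...
      -----END NEBULA X25519 PUBLIC KEY-----
```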

8
handlers/main.yml Normal file

@ -0,0 +1,8 @@
---
- name: restart_nebula
ansible.builtin.systemd:
name: "{{ nebula_service_name }}"
daemon_reload: true
state: restarted
enabled: true
...

35
meta/main.yml Normal file

@ -0,0 +1,35 @@
---
galaxy_info:
namespace: rockylinux
role_name: nebula
author: Louis Abel
description: Nebula Role for RESF Infrastructure
company: Rocky Enterprise Software Foundation
# If the issue tracker for your role is not on github, uncomment the
# next line and provide a value
# issue_tracker_url: http://example.com/issue/tracker
# Choose a valid license ID from https://spdx.org - some suggested licenses:
# - BSD-3-Clause (default)
# - MIT
# - GPL-2.0-or-later
# - GPL-3.0-only
# - Apache-2.0
# - CC-BY-4.0
license: GPL-3.0-only
min_ansible_version: 2.14
platforms:
- name: EL
versions:
- 8
- 9
- 10
- name: Fedora
versions:
- 40
- 41
galaxy_tags:
- vpn
dependencies: []
...

21
tasks/determine_ca.yml Normal file

@ -0,0 +1,21 @@
---
- name: Create empty list for CA hosts
ansible.builtin.set_fact:
nebula_ca_hosts: []
- name: Find every nebula host based on their host vars
  ansible.builtin.set_fact:
    nebula_ca_hosts: "{{ nebula_ca_hosts + [item] }}"
  when: hostvars[item]['nebula_is_ca'] | default(false) | bool
  loop: "{{ ansible_play_hosts_all }}"
- name: Check that there is only ONE CA host
ansible.builtin.assert:
that:
- nebula_ca_hosts|length == 1
success_msg: "One CA host found: {{ nebula_ca_hosts[0] }}"
fail_msg: "There must be exactly one CA host; found {{ nebula_ca_hosts | length }}"
- name: Ensure that nebula_ca_host is set
ansible.builtin.set_fact:
nebula_ca_host: "{{ nebula_ca_hosts[0] }}"
...
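When `nebula_ca_host` is not set, the tasks above derive it from the `nebula_is_ca` hostvars, so an inventory should mark exactly one host. A hypothetical sketch (hostnames and IPs are placeholders):

```yaml
nebula:
  hosts:
    lighthouse01.example.com:
      nebula_is_ca: true
      nebula_am_lighthouse: true
      nebula_ip: "10.100.0.1/24"
    member01.example.com:
      nebula_ip: "10.100.0.44/24"
```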

14
tasks/determine_ip.yml Normal file

@ -0,0 +1,14 @@
---
- name: Get the public IP of the lighthouse
ansible.builtin.uri:
url: "https://api.ipify.org?format=json"
method: GET
register: public_ip
until: public_ip.status == 200
retries: 6
delay: 10
- name: Set the routable IP fact
ansible.builtin.set_fact:
nebula_routable_ip: "{{ public_ip.json.ip }}"
...
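Because `tasks/main.yml` only imports this file when `nebula_routable_ip` is undefined, a lighthouse with a known public address can skip the ipify lookup entirely by pinning the value, e.g. in host_vars (the address is a placeholder):

```yaml
nebula_routable_ip: "203.0.113.10"
```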

34
tasks/determine_os.yml Normal file

@ -0,0 +1,34 @@
---
- name: Check Red Hat distributions
block:
- name: Check that this system is in the Red Hat family
ansible.builtin.assert:
that:
- ansible_os_family == "RedHat"
success_msg: "This is a RedHat family system"
fail_msg: "This is NOT a RedHat family system. Goodbye."
- name: Supported distributions only
ansible.builtin.assert:
that:
- (ansible_distribution == "Rocky") or (ansible_distribution == "Fedora")
success_msg: "System is supported"
fail_msg: "System is NOT supported"
- name: EL - Check that major versions are valid
when: ansible_distribution == "Rocky"
ansible.builtin.assert:
that:
- ansible_distribution_major_version|int >= 8
success_msg: "Supported major version of Enterprise Linux"
fail_msg: "This major version is not supported"
- name: Fedora - Check that major versions are valid
when: ansible_distribution == "Fedora"
ansible.builtin.assert:
that:
- ansible_distribution_major_version|int >= 39
success_msg: "Supported major version of Fedora"
fail_msg: "This major version is not supported"
...

tasks/install_download.yml Normal file

@ -0,0 +1,18 @@
---
- name: Set specific facts
ansible.builtin.set_fact:
nebula_bin_dir: "{{ nebula_local_bin_dir }}"
- name: Not supported yet
ansible.builtin.debug:
msg: "Downloading nebula without a package manager is not supported yet."
- name: End prematurely
ansible.builtin.fail:
msg: "Exiting."
# Steps to perform:
# -> download
# -> setup appropriate dirs
# -> drop systemd unit
...

45
tasks/install_pkg.yml Normal file

@ -0,0 +1,45 @@
---
- name: Set specific facts
ansible.builtin.set_fact:
nebula_bin_dir: "{{ nebula_pkg_bin_dir }}"
################################################################################
# Fedora Systems Only
- name: Perform steps for Fedora Linux Systems
when: ansible_os_family == "RedHat" and ansible_distribution == "Fedora"
block:
- name: Install Packages
ansible.builtin.package:
state: present
name:
- nebula
################################################################################
# Rocky Linux Systems Only
- name: Perform steps for Rocky Linux Systems
when: ansible_os_family == "RedHat" and ansible_distribution == "Rocky"
block:
- name: Install core release package
ansible.builtin.package:
state: present
name:
- rocky-release-core
- name: Install the nebula package
ansible.builtin.package:
state: present
name:
- nebula
################################################################################
# All other distributions that are RedHat
- name: Perform steps for everyone else
when:
- ansible_os_family == "RedHat"
- ansible_distribution != "Rocky"
- ansible_distribution != "Fedora"
block:
- name: This isn't ready
ansible.builtin.debug:
msg: "This section is not ready. Sorry."
...

35
tasks/main.yml Normal file

@ -0,0 +1,35 @@
---
- name: Determine if system is supported
ansible.builtin.import_tasks: determine_os.yml
- name: Determine if system is the CA
ansible.builtin.import_tasks: determine_ca.yml
when: nebula_ca_host is not defined
- name: Determine the system IP address
ansible.builtin.import_tasks: determine_ip.yml
when:
- nebula_am_lighthouse|bool
- nebula_routable_ip is not defined
- name: Prechecks for everything else
ansible.builtin.import_tasks: precheck.yml
when: nebula_is_member|bool
- name: Install nebula via package manager
ansible.builtin.import_tasks: install_pkg.yml
when: nebula_use_native_package|bool
- name: Install nebula via download
ansible.builtin.import_tasks: install_download.yml
when:
- not nebula_use_native_package|bool
- name: Install nebula CA
ansible.builtin.import_tasks: setup_ca.yml
when: nebula_is_ca|bool
- name: Configure member of mesh
ansible.builtin.import_tasks: setup_member.yml
when: nebula_is_member|bool
...

29
tasks/precheck.yml Normal file

@ -0,0 +1,29 @@
---
- name: Double check that nebula_ca_host is defined
ansible.builtin.assert:
that:
- nebula_ca_host is defined
success_msg: "Alright good, you did not modify this role."
fail_msg: >-
  nebula_ca_host MUST be defined with some value: either set it directly
  as a variable, or define nebula_is_ca as a hostvar on exactly one of
  the hosts you are running this against. There should be no reason to
  reach this failure unless the role was modified.
- name: Double check that nebula_ip is defined
ansible.builtin.assert:
that:
- nebula_ip is defined
success_msg: "nebula_ip has been defined"
fail_msg: "You cannot be part of the mesh without nebula_ip defined"
# In rare cases, we may want to have dedicated certs already defined. Like, if
# for example, you need to rebuild a member of a mesh.
- name: Check that nebula_cert_private/public_key are defined or none
ansible.builtin.assert:
that:
- (nebula_cert_private_key is defined and nebula_cert_public_key is defined) or
(nebula_cert_private_key is not defined and nebula_cert_public_key is not defined)
success_msg: "Certificate key variables are consistent."
fail_msg: "nebula_cert_private_key and nebula_cert_public_key must both be defined or both be unset."
...
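The last check above supports the rebuild scenario from the comment: supplying both halves of an existing keypair so the CA signs the old public key instead of generating a new one. A hypothetical host_vars sketch (the key body and vault variable are placeholders; vault usage is assumed, not required by the role):

```yaml
nebula_cert_public_key: |
  -----BEGIN NEBULA X25519 PUBLIC KEY-----
  ...
  -----END NEBULA X25519 PUBLIC KEY-----
# keep the private key out of plain-text inventory
nebula_cert_private_key: "{{ vault_nebula_private_key }}"
```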

80
tasks/setup_ca.yml Normal file

@ -0,0 +1,80 @@
---
- name: Verify that there isn't a CA key already
ansible.builtin.stat:
path: "{{ nebula_config_dir }}/ca.key"
register: ca_key_check
- name: Verify that there isn't a CA cert already
ansible.builtin.stat:
path: "{{ nebula_config_dir }}/ca.crt"
register: ca_cert_check
- name: Create a nebula CA certificate
ansible.builtin.command:
cmd: '{{ nebula_bin_dir }}/nebula-cert ca -name "{{ nebula_ca_name }}" -duration {{ nebula_ca_life }} -out-key {{ nebula_config_dir }}/ca.key -out-crt {{ nebula_config_dir }}/ca.crt'
creates: "{{ nebula_config_dir }}/ca.key"
when:
- not ca_key_check.stat.exists|bool
- not ca_cert_check.stat.exists|bool
- name: Perform steps for non-ansible members
when: nebula_nonmanaged_member_certs | length > 0
block:
- name: Write out the public keys of non-ansible members if needed
delegate_to: "{{ nebula_ca_host }}"
ansible.builtin.copy:
dest: "{{ nebula_config_dir }}/{{ item.key }}.pub"
content: "{{ item.value['public_key'] }}"
mode: '0600'
when: item.value['public_key'] is defined
loop: "{{ nebula_nonmanaged_member_certs | dict2items }}"
- name: Create nebula certs for non-ansible members
ansible.builtin.template:
src: non-managed.sh.j2
dest: "/var/tmp/{{ item.key }}-generator.sh"
mode: "0755"
owner: root
group: root
loop: "{{ nebula_nonmanaged_member_certs | dict2items }}"
- name: Run the generator
ansible.builtin.command:
cmd: "/bin/bash /var/tmp/{{ item.key }}-generator.sh"
creates: "{{ nebula_config_dir }}/{{ item.key }}.crt"
loop: "{{ nebula_nonmanaged_member_certs | dict2items }}"
- name: Create an archive of certs that do not have a private key
community.general.archive:
format: zip
path:
- "{{ nebula_config_dir }}/ca.crt"
- "{{ nebula_config_dir }}/{{ item.key }}.crt"
dest: "{{ nebula_config_dir }}/{{ item.key }}.zip"
mode: '0600'
owner: root
group: root
when: item.value['public_key'] is defined
loop: "{{ nebula_nonmanaged_member_certs | dict2items }}"
- name: Create an archive of certs that do have a private key
community.general.archive:
format: zip
path:
- "{{ nebula_config_dir }}/ca.crt"
- "{{ nebula_config_dir }}/{{ item.key }}.crt"
- "{{ nebula_config_dir }}/{{ item.key }}.key"
dest: "{{ nebula_config_dir }}/{{ item.key }}.zip"
mode: '0600'
owner: root
group: root
when: item.value['public_key'] is not defined
loop: "{{ nebula_nonmanaged_member_certs | dict2items }}"
- name: Copy the nonmanaged certs
ansible.builtin.fetch:
src: "{{ nebula_config_dir }}/{{ item.key }}.zip"
dest: "{{ nebula_nonmanaged_certs_download_dir }}/{{ item.key }}.zip"
flat: true
loop: "{{ nebula_nonmanaged_member_certs | dict2items }}"
...

97
tasks/setup_member.yml Normal file

@ -0,0 +1,97 @@
---
- name: Perform all member tasks on CA host
delegate_to: "{{ nebula_ca_host }}"
block:
- name: Waiting for CA certificate to be generated (default 5 minutes) if needed
ansible.builtin.wait_for:
path: "{{ nebula_config_dir }}/ca.key"
timeout: "{{ nebula_ca_wait_timeout_secs }}"
- name: Writing public key of member node if applicable
ansible.builtin.copy:
dest: "{{ nebula_config_dir }}/{{ nebula_nodename }}.pub"
content: "{{ nebula_cert_public_key }}"
mode: '0600'
owner: root
group: root
when: nebula_cert_public_key is defined
- name: Create nebula cert generator for ansible members
ansible.builtin.template:
src: managed.sh.j2
dest: "/var/tmp/{{ nebula_nodename }}-generator.sh"
mode: "0755"
owner: root
group: root
- name: Run the member generator
ansible.builtin.command:
cmd: "/bin/bash /var/tmp/{{ nebula_nodename }}-generator.sh"
creates: "{{ nebula_config_dir }}/{{ nebula_nodename }}.crt"
- name: Register CA cert
ansible.builtin.slurp:
src: "{{ nebula_config_dir }}/ca.crt"
register: ca_cert_data
- name: Register client cert
ansible.builtin.slurp:
src: "{{ nebula_config_dir }}/{{ nebula_nodename }}.crt"
register: client_cert_data
- name: Register client key
ansible.builtin.slurp:
src: "{{ nebula_config_dir }}/{{ nebula_nodename }}.key"
register: client_key_data
when: nebula_cert_public_key is not defined
- name: Deploy the CA certificate
ansible.builtin.copy:
dest: "{{ nebula_config_dir }}/ca.crt"
content: "{{ ca_cert_data.content | b64decode }}"
mode: '0600'
no_log: true
- name: Deploy the client certificate
ansible.builtin.copy:
dest: "{{ nebula_config_dir }}/{{ nebula_nodename }}.crt"
content: "{{ client_cert_data.content | b64decode }}"
mode: '0600'
no_log: true
- name: Deploy client key if applicable
ansible.builtin.copy:
dest: "{{ nebula_config_dir }}/{{ nebula_nodename }}.key"
content: "{{ nebula_cert_private_key }}"
mode: '0600'
when: nebula_cert_private_key is defined
no_log: true
- name: Deploy client key generated on CA host
ansible.builtin.copy:
dest: "{{ nebula_config_dir }}/{{ nebula_nodename }}.key"
content: "{{ client_key_data.content | b64decode }}"
mode: '0600'
when: nebula_cert_public_key is not defined
no_log: true
- name: Waiting for a routable IP for nebula to be set on all the lighthouses
ansible.builtin.wait_for:
timeout: 10
retries: 12
delay: 10
when: hostvars[item]['nebula_am_lighthouse']|bool
until: hostvars[item]['nebula_routable_ip'] is defined
loop: "{{ ansible_play_hosts_all }}"
loop_control:
loop_var: item
- name: Push out nebula configuration
ansible.builtin.template:
src: config.yml.j2
dest: "{{ nebula_config_dir }}/config.yml"
mode: '0644'
owner: root
group: root
notify: restart_nebula
...

134
templates/config.yml.j2 Normal file

@ -0,0 +1,134 @@
# Nebula Configuration ({{ ansible_managed }})
# PKI
pki:
ca: {{ nebula_config_dir }}/ca.crt
cert: {{ nebula_config_dir }}/{{ nebula_nodename }}.crt
key: {{ nebula_config_dir }}/{{ nebula_nodename }}.key
{% if nebula_pki_disconnect_invalid %}
disconnect_invalid: {{ nebula_pki_disconnect_invalid }}
{% endif %}
{% if nebula_pki_blocklist|length >= 1 %}
blocklist:
{{ nebula_pki_blocklist | to_nice_yaml(indent=2) | indent(width=4) }}
{% endif %}
# static host map
static_host_map:
{% for host in ansible_play_hosts_all %}
{% if (hostvars[host]['nebula_am_lighthouse']|default(false)) and (hostvars[host]['nebula_is_member']|default(true)) %}
"{{ hostvars[host]['nebula_ip'].split('/')[0] }}": ["{{ hostvars[host]['nebula_routable_ip']|default('NONE') }}:{{ hostvars[host]['nebula_listen_port']|default('4242') }}"]
{% endif %}
{% endfor %}
{% if nebula_static_map %}
static_map:
{% if nebula_static_map_cadence is defined %}
cadence: {{ nebula_static_map_cadence }}
{% endif %}
{% if nebula_static_map_network is defined %}
network: {{ nebula_static_map_network }}
{% endif %}
{% if nebula_static_map_lookup_timeout is defined %}
lookup_timeout: {{ nebula_static_map_lookup_timeout }}
{% endif %}
{% endif %}
# lighthouse configuration
lighthouse:
am_lighthouse: {{ nebula_am_lighthouse }}
interval: {{ nebula_lighthouse_interval }}
hosts:
{% if not nebula_am_lighthouse %}
{% for host in ansible_play_hosts_all %}
{% if (hostvars[host]['nebula_am_lighthouse']|default(false)) and (hostvars[host]['nebula_is_member']|default(true)) %}
- '{{ hostvars[host]['nebula_ip'].split('/')[0] }}'
{% endif %}
{% endfor %}
{% endif %}
# listen configuration
listen:
host: {{ nebula_listen_host }}
port: {{ nebula_listen_port }}
{% if nebula_listen_batch is defined %}
batch: {{ nebula_listen_batch }}
{% endif %}
{% if nebula_listen_read_buffer is defined %}
read_buffer: {{ nebula_listen_read_buffer }}
{% endif %}
{% if nebula_listen_write_buffer is defined %}
write_buffer: {{ nebula_listen_write_buffer }}
{% endif %}
{% if nebula_listen_send_recv_error is defined %}
send_recv_error: {{ nebula_listen_send_recv_error }}
{% endif %}
# punchy
punchy:
punch: {{ nebula_punchy_punch }}
{% if nebula_punchy_respond is defined %}
respond: {{ nebula_punchy_respond }}
{% endif %}
{% if nebula_punchy_respond_delay is defined %}
respond_delay: {{ nebula_punchy_respond_delay }}
{% endif %}
{% if nebula_punchy_delay is defined %}
delay: {{ nebula_punchy_delay }}
{% endif %}
{% if nebula_cipher is defined %}
# cipher
cipher: {{ nebula_cipher }}
{% endif %}
{% if nebula_preferred_ranges|length >= 1 %}
preferred_ranges: {{ nebula_preferred_ranges }}
{% endif %}
{% if nebula_routines is defined %}
routines: {{ nebula_routines }}
{% endif %}
# tun
tun:
disabled: {{ nebula_tun_disabled }}
dev: {{ nebula_tun_dev }}
drop_local_broadcast: {{ nebula_tun_drop_local_broadcast }}
drop_multicast: {{ nebula_tun_drop_multicast }}
tx_queue: {{ nebula_tun_tx_queue }}
mtu: {{ nebula_tun_mtu }}
{% if nebula_use_system_route_table %}
  use_system_route_table: {{ nebula_use_system_route_table }}
{% endif %}
{% if nebula_routes|length >= 1 %}
routes:
{{ nebula_routes|to_nice_yaml(indent=2)|indent(width=4) }}
{% else %}
routes:
{% endif %}
{% if nebula_unsafe_routes|length >= 1 %}
unsafe_routes:
{{ nebula_unsafe_routes|to_nice_yaml(indent=2)|indent(width=4) }}
{% else %}
unsafe_routes:
{% endif %}
# logging
logging:
level: {{ nebula_logging_level }}
format: {{ nebula_logging_format }}
disable_timestamp: {{ nebula_logging_disable_timestamp }}
firewall:
{% if nebula_firewall_outbound_action is defined %}
outbound_action: {{ nebula_firewall_outbound_action }}
{% endif %}
{% if nebula_firewall_inbound_action is defined %}
inbound_action: {{ nebula_firewall_inbound_action }}
{% endif %}
conntrack:
tcp_timeout: {{ nebula_firewall_conntrack_tcp_timeout }}
udp_timeout: {{ nebula_firewall_conntrack_udp_timeout }}
default_timeout: {{ nebula_firewall_conntrack_default_timeout }}
inbound:
{{ nebula_firewall_inbound_rules | to_nice_yaml(indent=2) | indent(width=4) }}
outbound:
{{ nebula_firewall_outbound_rules | to_nice_yaml(indent=2) | indent(width=4) }}

15
templates/managed.sh.j2 Normal file

@ -0,0 +1,15 @@
#!/bin/bash
# Generator for managed certs for {{ nebula_nodename }}
{{ nebula_bin_dir }}/nebula-cert sign \
-name "{{ nebula_nodename }}" \
-ip "{{ nebula_ip | mandatory }}" \
-groups "{{ nebula_groups | join(',') }}" \
-ca-key "{{ nebula_config_dir }}/ca.key" \
-ca-crt "{{ nebula_config_dir }}/ca.crt" \
{% if nebula_cert_public_key is defined %}
  -in-pub "{{ nebula_config_dir }}/{{ nebula_nodename }}.pub" \
{% else %}
  -out-key "{{ nebula_config_dir }}/{{ nebula_nodename }}.key" \
{% endif %}
  -out-crt "{{ nebula_config_dir }}/{{ nebula_nodename }}.crt"


@ -0,0 +1,41 @@
# systemd unit for nebula
# typically part of the package in Rocky Linux and Fedora, but for non-pkg
# installs, we want to keep the config consistent.
[Unit]
Description=Nebula overlay networking tool
After=basic.target network.target network-online.target
Before=sshd.service
Wants=basic.target network-online.target nss-lookup.target time-sync.target
[Service]
ExecReload=/bin/kill -HUP $MAINPID
ExecStart={{ nebula_bin_dir }}/nebula -config {{ nebula_config_dir }}/config.yml
SyslogIdentifier=nebula
#CapabilityBoundingSet=CAP_IPC_LOCK CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SETPCAP CAP_SYS_CHROOT CAP_DAC_OVERRIDE CAP_AUDIT_WRITE
CapabilityBoundingSet=CAP_NET_ADMIN
RestrictNamespaces=yes
WorkingDirectory={{ nebula_config_dir }}
ProtectClock=true
ProtectSystem=strict
ProtectHostname=yes
ProtectHome=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM
NoNewPrivileges=yes
PrivateTmp=yes
UMask=0077
RestrictAddressFamilies=AF_NETLINK AF_INET AF_INET6
DeviceAllow=/dev/null rw
DeviceAllow=/dev/net/tun rw
[Install]
WantedBy=multi-user.target

templates/non-managed.sh.j2 Normal file

@ -0,0 +1,16 @@
#!/bin/bash
# Generator for nonmanaged certs for {{ item.key }}
{{ nebula_bin_dir }}/nebula-cert sign \
-name "{{ item.key }}" \
-ip "{{ item.value.ip | mandatory }}" \
-groups "{{ (item.value.groups | default([])) | join(',') }}" \
-ca-key "{{ nebula_config_dir }}/ca.key" \
-ca-crt "{{ nebula_config_dir }}/ca.crt" \
{% if item.value['public_key'] is defined %}
-in-pub "{{ nebula_config_dir }}/{{ item.key }}.pub" \
{% else %}
-out-key "{{ nebula_config_dir }}/{{ item.key }}.key" \
{% endif %}
-out-crt "{{ nebula_config_dir }}/{{ item.key }}.crt"

45
tests/ansible.cfg Normal file

@ -0,0 +1,45 @@
[defaults]
host_key_checking = False
retry_files_enabled = False
roles_path = ../../
collections_paths = ../../../collections
remote_user = ansible
ansible_managed = RESF
timeout = 3
callbacks_enabled = ansible.posix.profile_roles
[privilege_escalation]
;become=True
;become_method=sudo
;become_user=root
;become_ask_pass=False
[persistent_connection]
[connection]
[colors]
[selinux]
[diff]
[galaxy]
[inventory]
enable_plugins = host_list, virtualbox, yaml, constructed, script, ini, auto
[netconf_connection]
[paramiko_connection]
record_host_keys = False
[jinja2]
[tags]
[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no
pipelining = True
control_path = /tmp/ansible-role-nebula-%%h%%p%%r
retries = 10

2
tests/inventory Normal file

@ -0,0 +1,2 @@
localhost

31
tests/test.yml Normal file

@ -0,0 +1,31 @@
---
- name: Check that nebula hosts are not empty
hosts: localhost
any_errors_fatal: true
tasks:
- name: Check for one host
ansible.builtin.assert:
that: (groups['nebula'] | default([])) | length > 0
fail_msg: "No hosts configured. Ending test."
success_msg: "There are hosts found in the group."
- name: Setup nebula
hosts: nebula
strategy: free
become: true
roles:
- rockylinux.nebula
- name: Verify they can ping
hosts: nebula
strategy: free
tasks:
- name: Ping all nebula hosts
ansible.builtin.command: "ping -W 1 -c 3 {{ hostvars[item]['nebula_ip'].split('/')[0] }}"
changed_when: false
register: ping_check
until: ping_check is succeeded
retries: 15
delay: 10
loop: "{{ ansible_play_hosts_all }}"
...

3
vars/main.yml Normal file

@ -0,0 +1,3 @@
---
# There are no vars here.
...