This website content is no longer under development.
The main purpose of these notes is to provide a reference to assist me with maintaining my home server, including upgrading the existing server or setting up a new one in the future.
There are many reasons to set up a home server and many different options available. For me one of the big reasons is the tinkering and learning associated with such a setup, and there are many other benefits. Perhaps the largest negative is the time invested in this endeavour; it will certainly not be for everyone!
I have published these notes on my public website KPTree.net, for my own access and also for the possible benefit of others. At this time I am not interested in adding advertising to this site. As these are my personal notes, provided without cost, I assume no obligations in any way should anyone use them in full or in part. YOU USE THESE NOTES AT YOUR OWN RISK!
I have used many references from the Internet to assist me with the development of my home server and these notes. In general these reference links are provided in the relevant section of the notes. Many of them are also provided in the KPTree-Miscellaneous Links. The biggest single source of information, and arguably inspiration, has come from Havetheknowhow.com, which is certainly a good starting point if you are interested in a Linux based home server!
The following site also has some info on How to Install and Configure KVM on Ubuntu 18.04 LTS Server.
Hardware - I have censored this for the time being....
A special mention goes to the OpenSprinkler sprinkler controller, which is probably the best network interfaced sprinkler controller available, for both home and commercial use.
Another special mention is SnapRAID. I believe this to be the best solution for a modern home server, giving the best compromise between performance, reliability and power saving. It should be considered that traditional full-time RAID systems require all hard disks to be spinning when in use, compromising the long term reliability of all the included disks and increasing power consumption. A key benefit of many traditional RAID systems is increased bandwidth (speed) due to the use of simultaneous disks; however a modern 3.5" hard disk has a data bandwidth similar to a 1Gb/s Ethernet connection, so the traditional RAID speed benefits are of little value unless a more exotic network arrangement is used. I use an SSD for my main system drive and 2x 6TB hard disks for the main datastore, plus 1 extra 6TB hard disk as a parity drive with SnapRAID. All the 6TB hard disks are programmed to spin down after 20 minutes of no access. Further to this I back up the 2x 6TB disks to external drives intermittently and have an additional 2.5" portable drive with regularly used data and irreplaceable personal data. Some photos, the main irreplaceable data, are with other family members, giving some limited effective offsite data backup. I should consider offsite backup of the irreplaceable data; to be sure, to be sure.
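The spin-down timeout itself can be set with hdparm; a minimal sketch, assuming the data and parity disks appear as /dev/sdb, /dev/sdc and /dev/sdd (the device names are an assumption, check with "lsblk"):

# -S 240 sets the standby (spin-down) timeout to 240 x 5 s = 1200 s = 20 minutes
sudo hdparm -S 240 /dev/sdb /dev/sdc /dev/sdd
# To make this persistent across reboots add a matching entry per disk to /etc/hdparm.conf:
# /dev/sdb {
#     spindown_time = 240
# }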
The home server I have has 4 Intel gigabit NICs. For the past couple of years I have only been using 1 NIC connected to a main 24 port gigabit switch. This is described in the Basic Network Setup below. The home server has 4 drives: 1 SSD system drive, 2 larger data storage drives and 1 drive used as a parity drive for offline RAID of the data storage drives. For most of the time a single NIC will provide sufficient bandwidth between the server and switch. However the server has the capacity to saturate the bandwidth of a single gigabit NIC. To increase effective bandwidth there is an option to bond 2 or more NICs together to combine their bandwidth; this is called NIC bonding. To allow virtual machine NIC access the NIC(s) must be set up in bridge mode. Furthermore bridging NICs can also allow the NICs to act as a switch, obviously where more than one NIC is available. The Full Network Setup section below describes setting up the system with bonded and bridged NICs. Both setups were found to operate well.
Some references are noted below Network Setup Links.
To check available interfaces and names: "ip link"
Ensure the bridge utilities are loaded: "sudo apt install bridge-utils"
Edit the network configuration file: "/etc/network/interfaces" as follows:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
# auto eth0
# iface eth0 inet dhcp

#Basic bridge setup on a NIC to allow virtual machine NIC access
#The DHCP server is used to assign a fixed IP address based upon MAC
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
    bridge_stp off

#No point enabling NICs that are not being used
#auto eth1
#iface eth1 inet manual
#auto eth2
#iface eth2 inet manual
#auto eth3
#iface eth3 inet manual
I tried earlier to use a statically assigned IP setup, but had problems with operation, so I used a setup with DHCP, which worked. I then set up the DHCP server to assign a fixed IP address to the eth0 MAC address.
As noted in the main section I have a server with 4 built-in Intel NICs. To reduce the performance penalty from the limited Ethernet bandwidth of a single NIC, I propose to use 2 NICs in a bonded configuration, use bridging to allow server virtual machines access to the NICs, and use the remaining 2 NICs effectively as a switch.
To check available interfaces and names: "ip link"
Ensure the bridge utilities are loaded: "sudo apt install bridge-utils"
The bonded configuration needs the ifenslave utility loaded: "sudo apt install ifenslave"
My NIC connectors are setup as follows:
IPMI_LAN USB2-1 USB3-1 LAN3(eth2) LAN4(eth3) USB2-0 USB3-0 LAN1(eth0) LAN2(eth1) VGA
Edit the network configuration file: "/etc/network/interfaces" as follows:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5)
# and brctl(8).

# The loopback network interface
auto lo
iface lo inet loopback

#Setup the Bond
auto bond0
iface bond0 inet manual
    hwaddress ether DE:AD:BE:EF:69:01
    post-up ifenslave bond0 eth0 eth1
    pre-down ifenslave -d bond0 eth0 eth1
    bond-slaves none
    bond-mode 4
    bond-miimon 100
    bond-downdelay 0
    bond-updelay 0
    bond-lacp-rate fast
    bond-xmit_hash_policy layer2+3
#bond-mode 4 requires that the connected switch has matching
#configuration

#Start Bond network interfaces in manual
auto eth0
iface eth0 inet manual
    bond-master bond0
auto eth1
iface eth1 inet manual
    bond-master bond0

#Setup Bridge Interface
auto br0
iface br0 inet static
    address 192.168.1.5
    network 192.168.1.0
    netmask 255.255.255.0
    broadcast 192.168.1.255
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    bridge_ports bond0 eth2 eth3
    bridge_stp off
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
The following is a description of some of the parameters:
layer2 and layer2+3 options are 802.3ad compliant, layer3+4 is not fully compliant and may cause problems on some equipment/configurations.
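Once the bond is up its state can be checked from the kernel's bonding status file (using the bond0 name from the configuration above):

cat /proc/net/bonding/bond0
# shows the bond mode, LACP partner details and the link state of each slave NIC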
Bonding Benefits and Limitations
Modern hard disks are generally faster than a 1 Gb/s Ethernet connection, and SSDs significantly so. Yet many individual data demands are significantly slower, e.g. video at 0.5 to 30 Mb/s, audio at 100 to 400 kb/s. Furthermore most external Internet connections are still normally slower than 100 Mb/s, with only larger offices having 1 Gb/s or more of bandwidth. So the biggest speed/time impact is when copying files across a speed limited Ethernet LAN connection or where a server is used to provide information to multiple clients. Ethernet bonding can help improve server performance by sharing multiple simultaneous client connections between the bonded Ethernet connections.
Quoted Wifi speeds are particularly optimistic. The quoted speed is usually the best possible speed achievable. Wifi bandwidth is often shared between many simultaneous users, with each of n users often getting at best a 1/n share of the bandwidth. There are also latency and interference issues with Wifi that can affect performance. Wired LAN Ethernet connections tend to provide more reliable, consistent performance. That being said, Wifi is convenient and in most, but certainly not all, cases fast enough.
This is the setup for my new server with 4 built-in Intel NICs, running Ubuntu 18.04. To reduce the performance penalty from the limited Ethernet bandwidth of a single NIC, I propose to use 2 NICs in a bonded configuration, use bridging to allow server virtual machines access to the NICs, and use the remaining 2 NICs effectively as a switch.
To check available interfaces and names: "ip link"
Netplan does not require the bridge utilities to be loaded, however these utilities can still be used to inspect the bridge: "sudo apt install bridge-utils"
Under netplan the bonded configuration does not need the ifenslave utility loaded, as this utility is dependent upon ifupdown. Do not run "sudo apt install ifenslave".
The netplan website, netplan.io, provides basic information. Another resource is the cloud-init Networking Config Version 2 documentation.
My new server NIC connectors are setup as follows:
IPMI_LAN USB2-1 LAN3(eno3) LAN4(eno4) USB2-0 LAN1(eno1) LAN2(eno2) VGA
The new server board does not have any back USB3 ports. No great loss, never used them yet.
As instructed in the system created yaml file "/etc/netplan/50-cloud-init.yaml", create the file "sudo vim /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg" and add the line "network: {config: disabled}"
Edit the network configuration file: "/etc/netplan/interfaces.yaml" as follows:
network:
  #setup network interfaces
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      dhcp6: no
      optional: true
    eno2:
      dhcp4: no
      dhcp6: no
      optional: true
    eno3:
      dhcp4: no
      dhcp6: no
      optional: true
    eno4:
      dhcp4: no
      dhcp6: no
      optional: true
  #Setup the Bond
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: balance-rr
  #Setup Bridge Interface
  bridges:
    br0:
      addresses: [192.168.1.10/24]
      interfaces: [bond0, eno3, eno4]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
      parameters:
        stp: off
        forward-delay: 9
        hello-time: 2
        max-age: 12
Some additional netplan commands:
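These are the standard netplan subcommands I find most useful:

sudo netplan generate        # parse the yaml and generate the backend configuration without applying it
sudo netplan try             # apply the configuration with an automatic rollback if not confirmed
sudo netplan apply           # apply the configuration
sudo netplan --debug apply   # apply with verbose output for troubleshooting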
The VM netplan yaml configuration file for static LAN IP address: "/etc/netplan/network.yaml" as follows:
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      addresses: [192.168.1.12/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
I also created a bridge definition file for libvirt as recommended by the netplan.io examples:
Create a file br0.xml, "vim ~/br0.xml" and add following to it:
<network>
  <name>br0</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
Next have libvirt add the new network and autostart it:
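Assuming the file from above is at ~/br0.xml, the usual virsh commands for this are:

sudo virsh net-define ~/br0.xml      # add the network definition to libvirt
sudo virsh net-start br0             # start the network now
sudo virsh net-autostart br0         # start the network automatically on boot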
The qemu defined networks can be listed with the command: "virsh net-list --all"
You can list networks with "networkctl list"
The NTP server setup is quite simple; I used the reference from Using chrony on Ubuntu 18.04. I replaced the pool servers with my local ones, "sudo vim /etc/ntp.conf".
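As a sketch, the replacement lines look something like the following; the IP address and pool name are placeholders, and the server/pool directive syntax is shared by ntpd and chrony:

# prefer the local router / local NTP source, fall back to a regional pool
server 192.168.1.1 iburst
pool au.pool.ntp.org iburst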
Some NTP tips:
Use the "timedatectl" command to interogate and modify time and date information as required, mainly used to set local timezone and daylight savings parameters. The following commands are typical:
For 18.04 I decided to go with TigerVNC according to Linuxize How to Install and Configure VNC on Ubuntu 18.04. The main difference is that I cannot be bothered using a secure link in my home private network. So to allow a direct connection add "-localhost no" to the TigerVNC command line, see the GitHub TigerVNC notes: unable to connect to socket: Connection refused(10061) #117
The basic set up is given in have the know how Ubuntu Sever install VNC, with more detailed startup details given in Ubuntu Server: How to run VNC on startup
I prefer a full xfce desktop to a cut down gnome one, so I installed it instead, see How to Install and Configure VNC on Ubuntu 16.04 from Digitalocean.
A concise setup is here VNC server on Ubuntu 18.04 Bionic Beaver Linux.
I basically follow the Have the know how instructions, but instead of "sudo apt install gnome-core", use "sudo apt install xfce4 xfce4-goodies". I have been using vnc4server, not tightvncserver. Also ~/.vnc/xstartup only needs to start xfce, as sketched below.
(Basically startxfce4 &, instead of metacity &, gnome-settings-daemon & and gnome-panel &)
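A minimal ~/.vnc/xstartup along those lines might look like this (a sketch; the xrdb line is optional):

#!/bin/sh
# load X resources if present, then start the full xfce desktop
[ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
startxfce4 &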
The xfce screensaver seems to default to on and uses significant system resources, and is basically unnecessary on a headless server. To disable it, perform something like the steps sketched after the next note:
(The xfce screensavers actually look quite nice, and may make sense on a standard desktop install.)
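On my setup the screensaver appears to be the xscreensaver package pulled in with xfce4-goodies (an assumption, check what is actually running); removing it, or stopping it from autostarting, are the simplest options:

ps -ef | grep -i screensaver      # check whether a screensaver process is running
sudo apt remove xscreensaver      # option 1: remove the package entirely
# option 2: keep it installed but disable it under
# Settings -> Session and Startup -> Application Autostart in the xfce menu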
The xfce default shell seems to be sh (/bin/sh), but I prefer bash (/bin/bash). To check the current shell, type: "echo $SHELL". To use bash simply type "bash". To make it permanent add the line "exec /bin/bash" to the end of "~/.profile" ("vim ~/.profile"). You will need to restart the VNC server for this to take effect.
Some other important tips:
Some preferred graphical programs:
As I have a computer with enough memory I see no need for or value in a SWAP partition. In fact, as I am using an SSD for the system drive, SWAP is a concern for the reliability of this drive. The following is a list of methods to check and disable the SWAP function.
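A sketch of the usual checks and disable steps (the fstab line shown is only an example):

swapon --show       # list any active swap devices or files (no output = no swap)
free -h             # shows swap totals alongside memory
sudo swapoff -a     # turn off all swap immediately
# then comment out the swap line in /etc/fstab so it stays off after a reboot, e.g.:
# /swapfile none swap sw 0 0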
See the cache pressure section of How To Add Swap Space on Ubuntu 16.04 on how to adjust the vm.vfs_cache_pressure parameter.
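For reference, a sketch of checking and adjusting it (the value 50 is just a commonly suggested example):

cat /proc/sys/vm/vfs_cache_pressure       # show the current value (default 100)
sudo sysctl vm.vfs_cache_pressure=50      # change it for the running system
# make it persistent by adding the following line to /etc/sysctl.conf:
# vm.vfs_cache_pressure=50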
The standard BASH colour configuration uses a blue colour for listing directories (ls), which is difficult to read on a black background. While this is the "standard colour", due to this impracticality I have decided to change it.
The personal BASH user configuration file is "~/.bashrc". Simply add the following line to this file: "LS_COLORS='di=1;32' ; export LS_COLORS". The code 1;32 is for a light green colour.
The .bashrc file also has a number of other interesting "features" and options, such as aliases and colour prompts. If you turn on the colour prompt option (force_color_prompt=yes), again the dark blue colour may be difficult to read so I change the prompt color code from 34 to 36.
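For orientation, the colour codes sit inside the PS1 definition in ~/.bashrc; on a stock Ubuntu .bashrc the coloured prompt line looks roughly like the following (it may differ between releases), with the 01;34 for the working directory changed to 01;36 (cyan):

PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;36m\]\w\[\033[00m\]\$ '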
To update the terminal, without logging off type: ". ~/.bashrc" or "source ~/.bashrc". The command "exec bash" will also work.
How To Use Bash History Commands and Expansions on a Linux VPS
I use the VI (or VIM) editor. It comes standard on most Linux and UNIX distributions, or can otherwise be installed. A key feature I configure is the VIM colour scheme, as the standard colour scheme does not work well with the black background terminal windows I prefer to use. Simply create the file ".vimrc" in the home directory ("vim ~/.vimrc") and add the line ":colorscheme desert".
The different VIM colour scheme definition files are located at "/usr/share/vim/vim74/colors"
A powerful text editor, standard in most Linux distributions and available in Windows. Need some time and effort to learn though, particularly if moving from graphical user environment.
Some Informative Links:
Some Quick tips:
There is a lot of existing information published on using Rsync.
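As a quick example of the form I use most (the paths are placeholders):

rsync -avh --progress /path/to/source/ /path/to/dest/     # copy/update source into dest, preserving permissions and times
rsync -avhn --delete /path/to/source/ /path/to/dest/      # -n (dry run) shows what would change, --delete removes files missing from source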
A symlink is a soft or hard link from one directory location to another directory location or file. I am only interested in the soft link. It effectively allows a directory tree to be made from different non-structured directory locations, even across partitions.
Simple use is: 'ln -s "path/directory or file" "path/symlink name"', where the option -s is to create a symlink. See "ln --help" or "man ln" for more information. Another good reference is The Geek Stuff, The Ultimate Linux Soft and Hard Link Guide (10 Ln Command Examples).
To remove symlink 'rm "path/symlink name"'
To list symlink 'ls "path/symlink name"'
To list symlink directory contents 'ls "path/symlink name/"'
Symlink ownership is not particularly important as it has full permissions (777) and file access is determined by real file permissions.
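A quick worked example with hypothetical paths:

ln -s /mnt/data/photos ~/photos    # make ~/photos point at the real directory
ls -l ~/photos                     # shows the link and its target
ls ~/photos/                       # lists the contents of the target directory
rm ~/photos                        # removes only the link, not the target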
Use the built-in clone facility: "sudo virt-clone --connect=qemu://example.com/system -o this-vm -n that-vm --auto-clone", which will make a copy of this-vm, named that-vm, and take care of duplicating storage devices.
To list all defined virtual machines: "virsh list --all"
To dump a virtual machine xml definition to a file: "virsh dumpxml {vir_machine} > /dir/file.xml"
Modify the following xml tags:
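For a manual clone the tags that typically need changing are the following (based on a standard libvirt domain xml, so treat this as an assumption and adjust to suit):

<name>that-vm</name>                                    <!-- new VM name -->
<uuid>...</uuid>                                        <!-- remove or replace so the clone gets its own UUID -->
<mac address='52:54:00:xx:xx:xx'/>                      <!-- new MAC address -->
<source file='/var/lib/libvirt/images/that-vm.qcow2'/>  <!-- new disk image path -->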
To convert the xml file back to a virtual machine definition: "sudo virsh define /path_to/name_VM_xml_file.xml"
The VM xml file can be edited directly using the command "virsh edit VM_name"
To get virtual machine disk image information: "sudo qemu-img info /path_to/name_VM_file.img"
A compacted qcow2 image can be created using the following command: "sudo qemu-img convert -O qcow2 -c old.qcow2 new_compacted_version_of_old.qcow2"
Use fsck to check and repair a file system. The file system must be unmounted when being check and repaired to prevent corruption!
The root file system cannot be unmounted while the system is running normally, so it cannot be checked in place. Two possible options to check it are:
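A sketch of both approaches (the device name is a placeholder):

# option 1: boot from a live/rescue USB, where the root partition is not mounted, then:
sudo fsck -f /dev/sda2
# option 2: force a check at the next boot by adding the kernel parameters
# fsck.mode=force (and optionally fsck.repair=yes), which systemd-fsck acts upon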
Some Keypoints are:
Scans log files and checks for inappropriate password/login activity, and uses the firewall (iptables) to restrict (block for a period of time) the offending sources. So fail2ban limits incorrect authorisation attempts, thereby reducing, but not entirely eliminating, the associated risks and wasted bandwidth. It is primarily used on ports and associated services open to the public. See DigitalOcean How To Protect an Apache Server with Fail2Ban on Ubuntu 14.04 and How Fail2Ban Works to Protect Services on a Linux Server. Also see the Fail2Ban wiki on nftables and Fail2ban Add support for nftables #1118 and Add nftables actions #1292.
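As a small sketch, a local override for the ssh jail might look like this in /etc/fail2ban/jail.local (the values are only examples):

[sshd]
enabled  = true
port     = ssh
maxretry = 5
findtime = 600
bantime  = 3600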
Monit is a small Open Source utility for managing and monitoring Unix systems. Monit conducts automatic maintenance and repair and can execute meaningful causal actions in error situations. The email server instructions from Ex Ratione - A Mailserver on Ubuntu 16.04: Postfix, Dovecot, MySQL, Postfixadmin, Roundcube also include some installation instructions for monit.
Another site with some security tips, How to secure an Ubuntu 16.04 LTS server - Part 1 The Basics
Tripwire checks system files for any changes and alarms / alerts upon changes.
Apt-cacher-ng looks to be a self-contained apt caching server. Basically the apt cacher stores all the relevant apt update and upgrade related files and acts as a proxy server to multiple clients. This is a handy feature to improve speed and reduce Internet bandwidth where a virtual machine server is used with multiple clients. There is another package called apt-cacher, but it depends upon the installation of a separate webserver.
There is also apt-mirror, which retrieves all packages from the specified public repository(s), whereas apt-cacher only retrieves each package when called and stores it for subsequent use by other clients. APT caching looks to be the way to go and apt-cacher-ng the best overall option. I installed apt-cacher-ng on the VM server, not a VM client. The clients are set up to obtain their apt updates and upgrades via the server.
The LinuxHelp web page, How To Set up an Apt-Cache Server using "Apt-Cacher-NG" in Ubuntu 16.04 Server, provides a good description of how to set it up. It is reasonably straightforward. I suggest the use of "sudo systemctl restart apt-cacher-ng", as opposed to the old fashioned "sudo /etc/init.d/apt-cacher-ng restart".
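On each client, pointing apt at the cache is a one-line proxy setting; a sketch, assuming the server address used above and a file name of my own choosing:

# /etc/apt/apt.conf.d/00aptproxy  (the file name is arbitrary)
Acquire::http::Proxy "http://192.168.1.5:3142";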
If the non-default cache directory is not set up correctly the program defaults to "/var/cache/apt-cacher-ng". This quirk is covered in How to change the directory of the apt-cacher-ng downloaded packages in Ubuntu Xenial.
Links to the Apt-Cacher NG home page and Apt-Cacher-NG User Manual.
To access apt-cacher-ng web page: "http://192.168.1.5:3142"
There is an issue with use of apt-cacher and SSL/TLS repositories. A good reference is from packagecloud:blog: Using apt-cacher-ng with SSL/TLS.
Links relating to bridged and bonded Networking
A bridged network allows different networks to be connected, both physical, like NICs or Wifi, and virtual, allowing a virtual machine to connect to a physical network and even be assigned a LAN IP address. Bonding allows physical networking devices such as NICs or Wifi to be bonded to allow increased bandwidth or redundancy. Sadly there seems to be a lot of information out there that is either for older versions of software or for other purposes.