Here’s how you can build your own mail server from scratch.
This document is generated automatically from Mail-in-a-Box’s setup script source code.
If the hostname is not correctly resolvable, sudo can't be used. This will result in errors during the install.
First set the hostname in the configuration file, then activate the setting:
echo box.yourdomain.com > /etc/hostname
hostname box.yourdomain.com
The default Ubuntu Bionic image on Scaleway throws warnings during setup about incorrect permissions (group-writable) set on the following directories.
chmod g-w /etc /etc/default /usr
If the physical memory of the system is below 2GB, it is wise to create a swap file. This will make the system more resilient to memory spikes and prevent, for instance, spam filtering from crashing.
We will create a 1GB file, which should be a good balance between disk usage and buffers for the system. We will only allocate this file if there is more than 5GB of disk space available.
The following checks are performed:

- Check if swap is currently mounted by looking at /proc/swaps.
- Check if the user intends to activate swap on next boot by checking fstab entries.
- Check if a swap file already exists.
- Check if the root file system is not btrfs; it might be an incompatible version with swap files, so the user should handle it themselves.
- Check the memory requirements.
- Check available disk space.
See https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04 for reference
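The two numeric thresholds can be sketched as a small helper; `should_allocate_swap` is a hypothetical name for illustration, not part of the setup script:

```shell
# Hypothetical helper mirroring the thresholds used below:
# allocate swap only if memory < ~1.9GB and free disk > 5GB (both in KB).
should_allocate_swap() {
	local mem_kb=$1 disk_kb=$2
	[ "$mem_kb" -lt 1900000 ] && [ "$disk_kb" -gt 5242880 ]
}

should_allocate_swap 1048576 10485760 && echo "create swap"
```

On a 1GB machine with 10GB of free disk this prints "create swap"; on a 4GB machine it stays quiet.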
SWAP_MOUNTED=$(cat /proc/swaps | tail -n+2)
SWAP_IN_FSTAB=$(grep swap /etc/fstab || /bin/true)
ROOT_IS_BTRFS=$(grep "/ .*btrfs" /proc/mounts || /bin/true)
TOTAL_PHYSICAL_MEM=$(head -n 1 /proc/meminfo | awk '{print $2}' || /bin/true)
AVAILABLE_DISK_SPACE=$(df / --output=avail | tail -n 1)
if
	[ -z "$SWAP_MOUNTED" ] &&
	[ -z "$SWAP_IN_FSTAB" ] &&
	[ ! -e /swapfile ] &&
	[ -z "$ROOT_IS_BTRFS" ] &&
	[ $TOTAL_PHYSICAL_MEM -lt 1900000 ] &&
	[ $AVAILABLE_DISK_SPACE -gt 5242880 ]
then
Allocate and activate the swap file. Allocate in 1KB chunks; doing it in one go could fail on low-memory systems.
dd if=/dev/zero of=/swapfile bs=1024 count=$((1024*1024)) status=none
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
Check if the swap file is now mounted, and if so, activate it on boot:
echo "/swapfile none swap sw 0 0" >> /etc/fstab
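To keep re-runs of setup from appending duplicate entries, the append can be guarded with a grep. This sketch of such an idempotent guard runs against a temporary file rather than the real /etc/fstab:

```shell
# Demonstrated on a scratch file so the real fstab is untouched.
fstab=$(mktemp)
add_swap_entry() {
	grep -q "^/swapfile " "$1" || echo "/swapfile none swap sw 0 0" >> "$1"
}
add_swap_entry "$fstab"
add_swap_entry "$fstab"          # second run is a no-op
grep -c "^/swapfile " "$fstab"   # prints 1, not 2
```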
We install some non-standard Ubuntu packages maintained by other third-party providers. First ensure add-apt-repository is installed.
apt-get update
apt-get install -y software-properties-common
Install the certbot PPA.
add-apt-repository -y ppa:certbot/certbot
Update system packages to make sure we have the latest upstream versions of things from Ubuntu, as well as the directory of packages provided by the PPAs so we can install those packages later.
apt-get update
apt_get_quiet upgrade
Old kernels pile up over time and take up a lot of disk space, and because of Mail-in-a-Box changes there may be other packages that are no longer needed. Clear out anything apt knows is safe to delete.
apt_get_quiet autoremove
Install basic utilities.
- nc: command line networking tool
- nproc: tool to report number of processors
- mktemp

apt-get install -y python3 python3-dev python3-pip netcat-openbsd wget curl git sudo coreutils bc haveged pollinate unzip unattended-upgrades cron ntp fail2ban rsyslog
When Ubuntu 20 comes out, we don't want users to be prompted to upgrade, because we don't yet support it.
tools/editconf.py /etc/update-manager/release-upgrades Prompt=never
rm -f /var/lib/ubuntu-release-upgrader/release-upgrade-available
Some systems are missing /etc/timezone, which we cat into the configs for Z-Push and ownCloud, so we need to set it to something. Daily cron tasks like the system backup are run at a time tied to the system timezone, so letting the user choose will help us identify the right time to do those things (i.e. late at night in whatever timezone the user actually lives in).
However, changing the timezone once it is set seems to confuse fail2ban and requires restarting fail2ban (done below in the fail2ban section) and syslog (see #328). There might be other issues, and it's not likely the user will want to change this, so we only ask on first setup. If the file is missing or this is the user's first time running Mail-in-a-Box setup, run the interactive timezone configuration tool.
dpkg-reconfigure tzdata
service rsyslog restart
This is a non-interactive setup so we can't ask the user. If /etc/timezone is missing, set it to UTC.
echo Etc/UTC > /etc/timezone
service rsyslog restart
/dev/urandom is used by various components for generating random bytes for encryption keys and passwords:
- ssl.sh (which calls openssl genrsa)
- dns.sh
- webmail.sh

Why /dev/urandom? It's the same as /dev/random, except that it doesn't wait
for a constant new stream of entropy. In practice, we only need a little
entropy at the start to get going. After that, we can safely pull a random
stream from /dev/urandom and not worry about how much entropy has been
added to the stream. (http://www.2uo.de/myths-about-urandom/) So we need
to worry about /dev/urandom being seeded properly (which is also an issue
for /dev/random), but after that /dev/urandom is superior to /dev/random
because it's faster and doesn't block indefinitely to wait for hardware
entropy. Note that openssl genrsa
even uses /dev/urandom
, and if it's
good enough for generating an RSA private key, it's good enough for anything
else we may need.
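As a quick illustration (assuming a Unix-like system), reading from /dev/urandom returns immediately, no matter how much entropy the kernel estimates it has:

```shell
# Pull 16 random bytes from /dev/urandom and hex-encode them; never blocks.
head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n'; echo
```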
Now about that seeding issue....
/dev/urandom is seeded from "the uninitialized contents of the pool buffers when the kernel starts, the startup clock time in nanosecond resolution,...and entropy saved across boots to a local file" as well as the order of execution of concurrent accesses to /dev/urandom. (Heninger et al. 2012, https://factorable.net/weakkeys12.conference.pdf) But when memory is zeroed, the system clock is reset on boot, /etc/init.d/urandom has not yet run, or the machine is single CPU or has no concurrent accesses to /dev/urandom prior to this point, /dev/urandom may not be seeded well. After this, /dev/urandom draws from the same entropy sources as /dev/random, but it doesn't block or issue any warnings if no entropy is actually available. (http://www.2uo.de/myths-about-urandom/) Entropy might not be readily available because this machine has no user input devices (common on servers!) and either no hard disk or not enough IO has occurred yet --- although haveged tries to mitigate this. So there's a good chance that accessing /dev/urandom will not be drawing from any hardware entropy and under a perfect-storm circumstance where the other seeds are meaningless, /dev/urandom may not be seeded at all.
The first thing we'll do is block until we can seed /dev/urandom with enough hardware entropy to get going, by drawing from /dev/random. haveged makes this less likely to stall for very long.
dd if=/dev/random of=/dev/urandom bs=1 count=32 2> /dev/null
This is supposedly sufficient. But because we're not sure if hardware entropy is really any good on virtualized systems, we'll also seed from Ubuntu's pollinate servers:
pollinate -q -r
Between these two, we really ought to be all set.
We need an SSH key to store backups via rsync. If it doesn't exist, create one.
ssh-keygen -t rsa -b 2048 -a 100 -f /root/.ssh/id_rsa_miab -N '' -q
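The setup presumably skips generation when the key already exists; here is a sketch of that guard, run against a throwaway directory so nothing under /root is touched (note that -N '' requests an empty passphrase):

```shell
tmp=$(mktemp -d)
key=$tmp/id_rsa_demo
# Only generate if the key file is missing.
if [ ! -f "$key" ]; then
	ssh-keygen -t rsa -b 2048 -a 100 -f "$key" -N '' -q
fi
ls "$key" "$key.pub"
```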
Allow apt to install system updates automatically every day.
APT::Periodic::MaxAge "7";
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::Verbose "0";
Install ufw, which provides a simple firewall configuration.
apt-get install -y ufw
Allow incoming connections to SSH.
ufw allow ssh
ufw --force enable
Install a local recursive DNS server --- i.e. for DNS queries made by local services running on this machine.
(This is unrelated to the box's public, non-recursive DNS server that answers remote queries about domain names hosted on this box. For that see dns.sh.)
The default systemd-resolved service provides local DNS name resolution. By default it is a recursive stub nameserver, which means it simply relays requests to an external nameserver, usually provided by your ISP or configured in /etc/systemd/resolved.conf.
This won't work for us for three reasons.
1) We have higher security goals --- we want DNSSEC to be enforced on all DNS queries (some upstream DNS servers do, some don't).

2) We will configure postfix to use DANE, which uses DNSSEC to find TLS certificates for remote servers. DNSSEC validation must be performed locally because we can't trust an unencrypted connection to an external DNS server.

3) DNS-based mail server blacklists (RBLs) typically block large ISP DNS servers because they only provide free data to small users. Since we use RBLs to block incoming mail from blacklisted IP addresses, we have to run our own DNS server. See #1424.
systemd-resolved has a setting to perform local DNSSEC validation on all requests (in /etc/systemd/resolved.conf, set DNSSEC=yes), but because it's a stub server the main part of a request still goes through an upstream DNS server, which won't work for RBLs. So we really need a local recursive nameserver.
We'll install bind9, which, as packaged for Ubuntu, has DNSSEC enabled by default via "dnssec-validation auto". We'll bind it to 127.0.0.1 so that it does not interfere with the public, non-recursive nameserver nsd bound to the public ethernet interfaces.
About the settings:

- Adding -4 to OPTIONS makes bind9 not listen on IPv6 addresses, so that we're sure there's no conflict with nsd, our public domain name server, on IPv6.
- The listen-on directive restricts bind9 to binding to the loopback interface instead of all interfaces.

apt-get install -y bind9

OPTIONS="-u bind -4"
Add a listen-on directive if it doesn't exist inside the options block.
sed -i "s/^}/\n\tlisten-on { 127.0.0.1; };\n}/" /etc/bind/named.conf.options
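To see what that substitution does, here it is applied to a scratch file containing an illustrative options block (GNU sed assumed, as on Ubuntu; the block's contents are made up):

```shell
conf=$(mktemp)
printf 'options {\n\tdirectory "/var/cache/bind";\n};\n' > "$conf"
# Same substitution as above: insert a listen-on directive
# just before the closing brace of the options block.
sed -i "s/^}/\n\tlisten-on { 127.0.0.1; };\n}/" "$conf"
cat "$conf"
```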
First we'll disable systemd-resolved's management of resolv.conf and its stub server. Breaking the symlink to /run/systemd/resolve/stub-resolv.conf means systemd-resolved will read it for DNS servers to use. Put in 127.0.0.1, which is where bind9 will be running. Obviously don't do this before installing bind9 or else apt won't be able to resolve a server to download bind9 from.
rm -f /etc/resolv.conf
DNSStubListener=no
echo "nameserver 127.0.0.1" > /etc/resolv.conf
Restart the DNS services.
service bind9 restart
systemctl restart systemd-resolved
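Once bind9 is answering on 127.0.0.1, DNSSEC validation shows up as the AD (authenticated data) flag in responses. The helper below and its usage are a hypothetical check, not part of the setup script:

```shell
# Does a dig-style header line carry the AD flag
# (i.e. the answer was DNSSEC-validated by the resolver)?
has_ad_flag() {
	grep -q 'flags:[^;]* ad'
}

# Usage once the resolver is up (requires network):
#   dig @127.0.0.1 debian.org | has_ad_flag && echo "DNSSEC validated"
printf ';; flags: qr rd ra ad; QUERY: 1\n' | has_ad_flag && echo "DNSSEC validated"
```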
Configure the Fail2Ban installation to prevent dumb brute-force attacks against dovecot, postfix, ssh, etc.
rm -f /etc/fail2ban/jail.local  # we used to use this file but don't anymore
rm -f /etc/fail2ban/jail.d/defaults-debian.conf  # remove the default config so we can manage all of the fail2ban rules in one config
cat conf/fail2ban/jails.conf \
	| sed "s/PUBLIC_IP/$PUBLIC_IP/g" \
	| sed "s#STORAGE_ROOT#$STORE#" \
	> /etc/fail2ban/jail.d/mailinabox.conf
cp -f conf/fail2ban/filter.d/* /etc/fail2ban/filter.d/
On first installation, the log files that the jails look at don't all exist. e.g., The roundcube error log isn't normally created until someone logs into Roundcube for the first time. This causes fail2ban to fail to start. Later scripts will ensure the files exist and then fail2ban is given another restart at the very end of setup.
service fail2ban restart
Create an RSA private key, a self-signed SSL certificate, and some Diffie-Hellman cipher bits, if they have not yet been created.
The RSA private key and certificate are used for:
The certificate is created with its CN set to box.yourdomain.com. It is also used for other domains served over HTTPS until the user installs a better certificate for those domains.
The Diffie-Hellman cipher bits are used for SMTP and HTTPS, when a Diffie-Hellman cipher is selected during TLS negotiation. Diffie-Hellman provides Perfect Forward Secrecy.
Show a status line if we are going to take any action in this file.
if [ ! -f /usr/bin/openssl ] ||
   [ ! -f $STORE/ssl/ssl_private_key.pem ] ||
   [ ! -f $STORE/ssl/ssl_certificate.pem ] ||
   [ ! -f $STORE/ssl/dh2048.pem ]
then
Install openssl.
apt-get install -y openssl
Create a directory to store TLS-related things like "SSL" certificates.
mkdir -p $STORE/ssl
Generate a new private key.
The key is only as good as the entropy available to openssl so that it can generate a random key. "OpenSSL’s built-in RSA key generator .... is seeded on first use with (on Linux) 32 bytes read from /dev/urandom, the process ID, user ID, and the current time in seconds. [During key generation OpenSSL] mixes into the entropy pool the current time in seconds, the process ID, and the possibly uninitialized contents of a ... buffer ... dozens to hundreds of times."
A perfect storm of issues can cause the generated key to be not very random.
Since we properly seed /dev/urandom in system.sh we should be fine, but I leave in the rest of the notes in case that ever changes. Set the umask so the key file is never world-readable.
(umask 077; openssl genrsa -out $STORE/ssl/ssl_private_key.pem 2048)
Generate a self-signed SSL certificate because things like nginx, dovecot, etc. won't even start without some certificate in place, and we need nginx so we can offer the user a control panel to install a better certificate. Generate a certificate signing request.
CSR=/tmp/ssl_cert_sign_req-$$.csr
openssl req -new -key $STORE/ssl/ssl_private_key.pem -out $CSR -sha256 -subj /CN=box.yourdomain.com
Generate the self-signed certificate.
CERT=$STORE/ssl/box.yourdomain.com-selfsigned-$(date --rfc-3339=date | sed s/-//g).pem
openssl x509 -req -days 365 -in $CSR -signkey $STORE/ssl/ssl_private_key.pem -out $CERT
Delete the certificate signing request because it has no other purpose.
rm -f $CSR
Symlink the certificate into the system certificate path, so system services can find it.
ln -s $CERT $STORE/ssl/ssl_certificate.pem
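The same key/CSR/self-signed sequence can be rehearsed end to end in a throwaway directory (scratch paths below, not the real $STORE layout; openssl assumed installed):

```shell
tmp=$(mktemp -d)
# Key, CSR, then self-signed certificate, mirroring the steps above.
(umask 077; openssl genrsa -out "$tmp/key.pem" 2048) 2>/dev/null
openssl req -new -key "$tmp/key.pem" -out "$tmp/req.csr" -sha256 -subj /CN=box.yourdomain.com
openssl x509 -req -days 365 -in "$tmp/req.csr" -signkey "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
# The subject of the result carries the CN we asked for.
openssl x509 -in "$tmp/cert.pem" -noout -subject
```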
Generate some Diffie-Hellman cipher bits. openssl's default bit length for this is 1024 bits, but we'll create 2048 bits per the latest recommendations.
openssl dhparam -out $STORE/ssl/dh2048.pem 2048
This script installs packages, but the DNS zone files are only created by the /dns/update API in the management server because the set of zones (domains) hosted by the server depends on the mail users & aliases created by the user later.
Install the packages.
apt-get install -y nsd ldnsutils openssh-client
Prepare nsd's configuration.
mkdir -p /var/run/nsd
# Do not edit. Overwritten by Mail-in-a-Box setup.
server:
  hide-version: yes
  logfile: "/var/log/nsd.log"

  # identify the server (CH TXT ID.SERVER entry).
  identity: ""

  # The directory for zonefile: files.
  zonesdir: "/etc/nsd/zones"

  # Allows NSD to bind to IP addresses that are not (yet) added to the
  # network interface. This allows nsd to start even if the network stack
  # isn't fully ready, which apparently happens in some cases.
  # See https://www.nlnetlabs.nl/projects/nsd/nsd.conf.5.html.
  ip-transparent: yes
Add log rotation
/var/log/nsd.log {
  weekly
  missingok
  rotate 12
  compress
  delaycompress
  notifempty
}
Since we have bind9 listening on localhost for locally-generated DNS queries that require a recursive nameserver, and the system might have other network interfaces for e.g. tunnelling, we have to be specific about the network interfaces that nsd binds to.
for ip in $PRIVATE_IP $PRIVATE_IPV6
do
	echo "  ip-address: $ip" >> /etc/nsd/nsd.conf
done

echo "include: /etc/nsd/zones.conf" >> /etc/nsd/nsd.conf
Create DNSSEC signing keys.
mkdir -p $STORE/dns/dnssec
TLDs don't all support the same algorithms, so we'll generate keys using a few different algorithms. RSASHA1-NSEC3-SHA1 was possibly the first widely used algorithm that supported NSEC3, which is a security best practice. However TLDs will probably be moving away from it to a SHA256-based algorithm.
- Supports RSASHA1-NSEC3-SHA1 (didn't test with RSASHA256)
- Requires RSASHA256
- Supports RSASHA256 (and defaulting to this)
for algo in RSASHA1-NSEC3-SHA1 RSASHA256
do
Create the Key-Signing Key (KSK) (with -k), which is the so-called Secure Entry Point. The domain name we provide ("domain") doesn't matter -- we'll use the same keys for all our domains. ldns-keygen outputs the new key's filename to stdout, which we're capturing into the KSK variable.
ldns-keygen uses /dev/random for generating random numbers by default. This is slow and unnecessary if we ensure /dev/urandom is seeded properly, so we use /dev/urandom. See system.sh for an explanation. See #596, #115.
KSK=$(umask 077; cd $STORE/dns/dnssec; ldns-keygen -r /dev/urandom -a $algo -b 2048 -k _domain_)
Now create a Zone-Signing Key (ZSK), which is expected to be rotated more often than a KSK, although we have no plans to rotate it (and doing so would be difficult to do without disturbing DNS availability). Omit -k and use a shorter key length.
ZSK=$(umask 077; cd $STORE/dns/dnssec; ldns-keygen -r /dev/urandom -a $algo -b 1024 _domain_)
These generate two sets of files like:

- K_domain_.+007+08882.ds: DS record normally provided to the domain name registrar (but it's actually invalid with _domain_)
- K_domain_.+007+08882.key: public key
- K_domain_.+007+08882.private: private key (secret!)

The filenames are unpredictable and encode the key generation options. So we'll store the names of the files we just generated. We might have multiple keys down the road. This will identify what keys are the current keys.
cat > $STORE/dns/dnssec/$algo.conf << EOF
KSK=$KSK
ZSK=$ZSK
EOF
And loop to do the next algorithm...
done
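Because each $algo.conf file is just plain KSK=/ZSK= shell assignments, other scripts can recover the current key names by sourcing it. A sketch with made-up key names:

```shell
conf=$(mktemp)
cat > "$conf" << 'EOF'
KSK=K_domain_.+008+12345
ZSK=K_domain_.+008+54321
EOF
. "$conf"   # defines $KSK and $ZSK in this shell
echo "$KSK" # prints K_domain_.+008+12345
```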
Force the dns_update script to be run every day to re-sign zones for DNSSEC before they expire. When we sign zones (in dns_update.py) we specify a 30-day validation window, so we had better re-sign before then.
#!/bin/bash
# Mail-in-a-Box
# Re-sign any DNS zones with DNSSEC because the signatures expire periodically.
/path/to/mailinabox/tools/dns_update
chmod +x /etc/cron.daily/mailinabox-dnssec
Permit DNS queries on TCP/UDP in the firewall.
ufw allow domain
Postfix handles the transmission of email between servers using the SMTP protocol. It is a Mail Transfer Agent (MTA).
Postfix listens on port 25 (SMTP) for incoming mail from other servers on the Internet. It is responsible for very basic email filtering such as by IP address and greylisting, it checks that the destination address is valid, rewrites destinations according to aliases, and passes email on to another service for local mail delivery.
The first hop in local mail delivery is to Spamassassin via LMTP. Spamassassin then passes mail over to Dovecot for storage in the user's mailbox.
Postfix also listens on port 587 (SMTP+STARTTLS) for connections from users who can authenticate and then sends their email out to the outside world. Postfix queries Dovecot to authenticate users.
Address validation, alias rewriting, and user authentication are configured in a separate setup script, mail-users.sh, because of the overlap of this part with the Dovecot configuration.
Install postfix's packages.
- postfix: The SMTP server.
- postfix-pcre: Enables header filtering.
- postgrey: A mail policy service that soft-rejects mail the first time it is received. Spammers don't usually try again. Legitimate mail always will.
- ca-certificates: A trust store used to squelch postfix warnings about untrusted opportunistically-encrypted connections.

apt-get install -y postfix postfix-sqlite postfix-pcre postgrey ca-certificates
Set some basic settings...
inet_interfaces=all
smtp_bind_address=$PRIVATE_IP
smtp_bind_address6=$PRIVATE_IPV6
myhostname=box.yourdomain.com
smtpd_banner=$myhostname ESMTP Hi, I'm a Mail-in-a-Box (Ubuntu/Postfix; see https://mailinabox.email/)
mydestination=localhost
Tweak some queue settings:

- Inform users when their e-mail delivery is delayed more than 3 hours (default is not to warn).
- Stop trying to send an undeliverable e-mail after 2 days (instead of 5), and for bounce messages just try for 1 day.

delay_warning_time=3h
maximal_queue_lifetime=2d
bounce_queue_lifetime=1d
Enable the 'submission' port 587 smtpd server and tweak its settings.
submission inet n - - - - smtpd
	-o smtpd_sasl_auth_enable=yes
	-o syslog_name=postfix/submission
	-o smtpd_milters=inet:127.0.0.1:8891
	-o smtpd_tls_security_level=encrypt
	-o smtpd_tls_ciphers=high
	-o smtpd_tls_exclude_ciphers=aNULL,DES,3DES,MD5,DES+MD5,RC4
	-o smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3
	-o cleanup_service_name=authclean
authclean unix n - - - 0 cleanup
	-o header_checks=pcre:/etc/postfix/outgoing_mail_header_filters
	-o nested_header_checks=
Install the outgoing_mail_header_filters file required by the new 'authclean' service.
cp conf/postfix_outgoing_mail_header_filters /etc/postfix/outgoing_mail_header_filters
Modify the outgoing_mail_header_filters file to use the local machine name and IP on the first Received header line. This may help reduce the spam score of email by removing the 127.0.0.1 reference.

sed -i s/box.yourdomain.com/box.yourdomain.com/ /etc/postfix/outgoing_mail_header_filters
sed -i s/PUBLIC_IP/$PUBLIC_IP/ /etc/postfix/outgoing_mail_header_filters
Enable TLS on these and all other connections (i.e. ports 25 and 587) and require TLS before a user is allowed to authenticate. This also makes opportunistic TLS available on incoming mail. Set stronger DH parameters, which via openssl tend to default to 1024 bits (see ssl.sh).
smtpd_tls_security_level=may
smtpd_tls_auth_only=yes
smtpd_tls_cert_file=$STORE/ssl/ssl_certificate.pem
smtpd_tls_key_file=$STORE/ssl/ssl_private_key.pem
smtpd_tls_dh1024_param_file=$STORE/ssl/dh2048.pem
smtpd_tls_protocols=!SSLv2,!SSLv3
smtpd_tls_ciphers=medium
smtpd_tls_exclude_ciphers=aNULL,RC4
smtpd_tls_received_header=yes
Prevent non-authenticated users from sending mail that requires being relayed elsewhere. We don't want to be an "open relay". On outbound mail, require one of:
- permit_sasl_authenticated: Authenticated users (i.e. on port 587).
- permit_mynetworks: Mail that originates locally.
- reject_unauth_destination: No one else. (Permits mail whose destination is local and rejects other mail.)

smtpd_relay_restrictions=permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
When connecting to remote SMTP servers, prefer TLS and use DANE if available.
Preferring ("opportunistic") TLS means Postfix will use TLS if the remote end offers it, otherwise it will transmit the message in the clear. Postfix will accept whatever SSL certificate the remote end provides. Opportunistic TLS protects against passive eavesdropping (but not man-in-the-middle attacks). DANE takes this a step further:
Postfix queries DNS for the TLSA record on the destination MX host. If no TLSA records are found, then opportunistic TLS is used. Otherwise the server certificate must match the TLSA records or else the mail bounces. TLSA also requires DNSSEC on the MX host. Postfix doesn't do DNSSEC itself but assumes the system's nameserver does and reports DNSSEC status. Thus this also relies on our local DNS server (see system.sh) and smtp_dns_support_level=dnssec.
The smtp_tls_CAfile is superfluous, but it eliminates warnings in the logs about untrusted certs, which we don't care about seeing because Postfix is doing opportunistic TLS anyway. Better to encrypt, even if we don't know if it's to the right party, than to not encrypt at all. Instead we'll now see notices about trusted certs. The CA file is provided by the package ca-certificates.
smtp_tls_protocols=!SSLv2,!SSLv3
smtp_tls_mandatory_protocols=!SSLv2,!SSLv3
smtp_tls_ciphers=medium
smtp_tls_exclude_ciphers=aNULL,RC4
smtp_tls_security_level=dane
smtp_dns_support_level=dnssec
smtp_tls_CAfile=/etc/ssl/certs/ca-certificates.crt
smtp_tls_loglevel=2
Pass any incoming mail over to a local delivery agent. Spamassassin will act as the LDA agent at first. It is listening on port 10025 with LMTP. Spamassassin will pass the mail over to Dovecot after.
In a basic setup we would pass mail directly to Dovecot by setting virtual_transport to lmtp:unix:private/dovecot-lmtp.
virtual_transport=lmtp:[127.0.0.1]:10025
Because of a spampd bug, limit the number of recipients in each connection. See https://github.com/mail-in-a-box/mailinabox/issues/1523.
lmtp_destination_recipient_limit=1
Who can send mail to us? Some basic filters.
- reject_non_fqdn_sender: Reject not-nice-looking return paths.
- reject_unknown_sender_domain: Reject return paths with invalid domains.
- reject_authenticated_sender_login_mismatch: Reject if the MAIL FROM address does not match the client's SASL login.
- reject_rhsbl_sender: Reject return paths that use blacklisted domains.
- permit_sasl_authenticated: Authenticated users (i.e. on port 587) can skip further checks.
- permit_mynetworks: Mail that originates locally can skip further checks.
- reject_rbl_client: Reject connections from IP addresses blacklisted in zen.spamhaus.org.
- reject_unlisted_recipient: Although Postfix will reject mail to unknown recipients anyway, it's nicer to reject such mail ahead of greylisting rather than after.
- check_policy_service: Apply greylisting using postgrey.

smtpd_sender_restrictions=reject_non_fqdn_sender,reject_unknown_sender_domain,reject_authenticated_sender_login_mismatch,reject_rhsbl_sender dbl.spamhaus.org
smtpd_recipient_restrictions=permit_sasl_authenticated,permit_mynetworks,reject_rbl_client zen.spamhaus.org,reject_unlisted_recipient,check_policy_service inet:127.0.0.1:10023
Postfix connects to Postgrey on the 127.0.0.1 interface specifically. Ensure that Postgrey listens on the same interface (and not IPv6, for instance). A lot of legitimate mail servers try to resend before 300 seconds. In fact, the RFC is not strict about the retry timer, so Postfix and other MTAs have their own intervals. To avoid receiving e-mails much later than they were sent, the greylisting delay has been set to 180 seconds (the default is 300 seconds).
POSTGREY_OPTS="--inet=127.0.0.1:10023 --delay=180 --whitelist-recipients=/etc/postgrey/whitelist_clients"
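The decision Postgrey makes with --delay=180 can be sketched as a pure function (hypothetical, for illustration only): defer delivery until the (client, sender, recipient) triplet was first seen at least 180 seconds ago.

```shell
# first_seen and now are Unix timestamps; delay defaults to 180 seconds.
greylist_decision() {
	local first_seen=$1 now=$2 delay=${3:-180}
	if [ $((now - first_seen)) -lt "$delay" ]; then
		echo DEFER
	else
		echo ACCEPT
	fi
}

greylist_decision 1000 1100   # prints DEFER (only 100s elapsed)
greylist_decision 1000 1200   # prints ACCEPT (200s >= 180s)
```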
We are going to set up a newer whitelist for postgrey; the version included in the distribution is old.
#!/bin/bash
# Mail-in-a-Box

# check we have a postgrey_whitelist_clients file and that it is not older than 28 days

# ok, we need to update the file, so let's try to fetch it

# if fetching hasn't failed yet then check it is a plain text file
# curl manual states that --fail sometimes still produces output
# this final check will at least check the output is not html
# before moving it into place
mv /tmp/postgrey_whitelist_clients /etc/postgrey/whitelist_clients
service postgrey restart

rm /tmp/postgrey_whitelist_clients
chmod +x /etc/cron.daily/mailinabox-postgrey-whitelist
/etc/cron.daily/mailinabox-postgrey-whitelist
Increase the message size limit from 10MB to 128MB. The same limit is specified in nginx.conf for mail submitted via webmail and Z-Push.
message_size_limit=134217728
Allow the two SMTP ports in the firewall.
ufw allow smtp
ufw allow submission
Restart services
service postfix restart
service postgrey restart
Dovecot is both the IMAP/POP server (the protocol that email applications use to query a mailbox) as well as the local delivery agent (LDA), meaning it is responsible for writing emails to mailbox storage on disk. You could imagine why these things would be bundled together.
As part of local mail delivery, Dovecot executes actions on incoming mail as defined in a "sieve" script.
Dovecot's LDA role comes after spam filtering. Postfix hands mail off to Spamassassin which in turn hands it off to Dovecot. This all happens using the LMTP protocol.
Install packages for dovecot. These are all core dovecot plugins, but dovecot-lucene is packaged by us in the Mail-in-a-Box PPA, not by Ubuntu.
apt-get install -y dovecot-core dovecot-imapd dovecot-pop3d dovecot-lmtpd dovecot-sqlite sqlite3 dovecot-sieve dovecot-managesieved
The dovecot-imapd, dovecot-pop3d, and dovecot-lmtpd packages automatically enable the IMAP, POP and LMTP protocols.
Set basic daemon options.
The default_process_limit is 100, which constrains the total number of active IMAP connections (at, say, 5 open connections per user that would be 20 users). Set it to 250 times the number of cores this machine has, so on a two-core machine that's 500 processes (100 users).

The default_vsz_limit is the maximum amount of virtual memory that can be allocated. It should be set reasonably high to avoid allocation issues with larger mailboxes. We're setting it to 1/3 of the total available memory (physical mem + swap) to be sure.

See here for discussion:

- https://www.dovecot.org/list/dovecot/2012-August/137569.html
- https://www.dovecot.org/list/dovecot/2011-December/132455.html
default_process_limit=$(echo "$(nproc) * 250" | bc)
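The sizing arithmetic above can be factored out like this (function names are illustrative, not from the setup script):

```shell
# default_process_limit = 250 per core; default_vsz_limit = (mem + swap) / 3.
process_limit() { echo $(( $1 * 250 )); }        # $1 = number of cores
vsz_limit_kb() { echo $(( ($1 + $2) / 3 )); }    # $1 = mem KB, $2 = swap KB

process_limit 2                # two cores -> 500
vsz_limit_kb 3145728 1048576   # 3GB RAM + 1GB swap, in KB
```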
The inotify max_user_instances default is 128, which constrains the total number of watched (IMAP IDLE push) folders by open connections. See http://www.dovecot.org/pipermail/dovecot/2013-March/088834.html. A reboot is required for this to take effect (which we don't do as part of setup). Test with cat /proc/sys/fs/inotify/max_user_instances.
fs.inotify.max_user_instances=1024
Set the location where we'll store user mailboxes. '%d' is the domain name and '%n' is the username part of the user's email address. We'll ensure that no bad domains or email addresses are created within the management daemon.
mail_location=maildir:$STORE/mail/mailboxes/%d/%n
mail_privileged_group=mail
first_valid_uid=0
Create, subscribe, and mark as special folders: INBOX, Drafts, Sent, Trash, Spam and Archive.
cp conf/dovecot-mailboxes.conf /etc/dovecot/conf.d/15-mailboxes.conf
Require that passwords are sent over SSL only, and allow the usual IMAP authentication mechanisms. The LOGIN mechanism is supposedly for Microsoft products like Outlook to do SMTP login (I guess since we're using Dovecot to handle SMTP authentication?).
disable_plaintext_auth=yes
auth_mechanisms=plain login
Enable SSL and specify the location of the SSL certificate and private key files. Disable obsolete SSL protocols and allow only good ciphers per http://baldric.net/2013/12/07/tls-ciphers-in-postfix-and-dovecot/. Enable strong SSL DH parameters.
ssl=required
ssl_cert=<$STORE/ssl/ssl_certificate.pem
ssl_key=<$STORE/ssl/ssl_private_key.pem
ssl_protocols=!SSLv3
ssl_cipher_list=ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
ssl_prefer_server_ciphers=yes
ssl_dh_parameters_length=2048
Disable in-the-clear IMAP/POP because there is no reason for a user to transmit login credentials outside of an encrypted connection. Only the over-TLS versions are made available (IMAPS on port 993; POP3S on port 995).
sed -i "s/#port = 143/port = 0/" /etc/dovecot/conf.d/10-master.conf
sed -i "s/#port = 110/port = 0/" /etc/dovecot/conf.d/10-master.conf
Make IMAP IDLE slightly more efficient. By default, Dovecot says "still here" every two minutes. With K-9 mail, the bandwidth and battery usage due to this are minimal. But for good measure, let's go to 4 minutes to halve the bandwidth and number of times the device's networking might be woken up. The risk is that if the connection is silent for too long it might be reset by a peer. See #129 and How bad is IMAP IDLE.
imap_idle_notify_interval=4 mins
Set POP3 UIDL. UIDLs are used by POP3 clients to keep track of what messages they've downloaded. For new POP3 servers, the easiest way to set up UIDLs is to use IMAP's UIDVALIDITY and UID values, the default in Dovecot.
pop3_uidl_format=%08Xu%08Xv
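As an aside, the 08X specifiers mean 8-digit zero-padded uppercase hex, so a UIDL is just two hex fields glued together. printf can show the shape (the values below are made up):

```shell
# e.g. UID 255 in a mailbox whose UIDVALIDITY is 4096:
printf '%08X%08X\n' 255 4096   # prints 000000FF00001000
```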
Enable Dovecot's LDA service with the LMTP protocol. It will listen on port 10026, and Spamassassin will be configured to pass mail there.
The disabled unix socket listener is normally how Postfix and Dovecot would communicate (see the Postfix setup script for the corresponding setting also commented out).
Also increase the number of allowed IMAP connections per mailbox because we all have so many devices lately.
service lmtp {
  #unix_listener /var/spool/postfix/private/dovecot-lmtp {
  #  user = postfix
  #  group = postfix
  #}
  inet_listener lmtp {
    address = 127.0.0.1
    port = 10026
  }
}

# Enable imap-login on localhost to allow the user_external plugin
# for Nextcloud to do imap authentication. (See #1577)
service imap-login {
  inet_listener imap {
    address = 127.0.0.1
    port = 143
  }
}

protocol imap {
  mail_max_userip_connections = 20
}
Setting a postmaster_address is required or LMTP won't start. An alias will be created automatically by our management daemon.
postmaster_address=postmaster@box.yourdomain.com
Enable the Dovecot sieve plugin, which lets users run scripts that process mail as it comes in.
sed -i "s/#mail_plugins = .*/mail_plugins = \$mail_plugins sieve/" /etc/dovecot/conf.d/20-lmtp.conf
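To check the substitution does what we expect, here's a quick dry run of the same sed pattern on a scratch file (hypothetical path; the real target is /etc/dovecot/conf.d/20-lmtp.conf):

```shell
# Create a scratch copy of the commented-out default line.
conf=$(mktemp)
echo '#mail_plugins = $mail_plugins' > "$conf"
# Same substitution as above: uncomment and append the sieve plugin.
sed -i "s/#mail_plugins = .*/mail_plugins = \$mail_plugins sieve/" "$conf"
result=$(cat "$conf")
rm -f "$conf"
echo "$result"   # mail_plugins = $mail_plugins sieve
```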
Configure sieve. We'll create a global script that moves mail marked as spam by Spamassassin into the user's Spam folder.
sieve_before: The path to our global sieve which handles moving spam to the Spam folder.

sieve_before2: The path to our global sieve directory, which can contain .sieve files to run globally for every user before their own sieve files run.

sieve_after: The path to our global sieve directory, which can contain .sieve files to run globally for every user after their own sieve files run.

sieve: The path to the user's main active script. ManageSieve will create a symbolic link here to the actual sieve script. It should not be in the mailbox directory (because then it might appear as a folder) and it should not be in the sieve_dir (because then I suppose it might appear to the user as one of their scripts).

sieve_dir: Directory for :personal include scripts for the include extension. This is also where the ManageSieve service stores the user's scripts.

plugin {
  sieve_before = /etc/dovecot/sieve-spam.sieve
  sieve_before2 = $STORE/mail/sieve/global_before
  sieve_after = $STORE/mail/sieve/global_after
  sieve = $STORE/mail/sieve/%d/%n.sieve
  sieve_dir = $STORE/mail/sieve/%d/%n
}
Copy the global sieve script into where we've told Dovecot to look for it. Then compile it. Global scripts must be compiled now because Dovecot won't have permission later.
cp conf/sieve-spam.txt /etc/dovecot/sieve-spam.sieve
sievec /etc/dovecot/sieve-spam.sieve
PERMISSIONS
Ensure configuration files are owned by dovecot and not world readable.
chown -R mail:dovecot /etc/dovecot
chmod -R o-rwx /etc/dovecot
Ensure the mailbox directory exists and is owned by the mail user.
mkdir -p $STORE/mail/mailboxes
chown -R mail:mail $STORE/mail/mailboxes
Same for the sieve scripts.
mkdir -p $STORE/mail/sieve
mkdir -p $STORE/mail/sieve/global_before
mkdir -p $STORE/mail/sieve/global_after
chown -R mail:mail $STORE/mail/sieve
Allow the IMAP/POP ports in the firewall.
ufw allow imaps
ufw allow pop3s
Allow the Sieve port in the firewall.
ufw allow sieve
Restart services.
service dovecot restart
This script configures user authentication for Dovecot and Postfix (which relies on Dovecot), and destination validation, by querying an SQLite database of mail users.
The database of mail users (i.e. authenticated users, who have mailboxes) and aliases (forwarders).
db_path=$STORE/mail/users.sqlite
Create an empty database if it doesn't yet exist.
echo "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, email TEXT NOT NULL UNIQUE, password TEXT NOT NULL, extra, privileges TEXT NOT NULL DEFAULT '');" \
	| sqlite3 $db_path
echo "CREATE TABLE aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE, destination TEXT NOT NULL, permitted_senders TEXT);" \
	| sqlite3 $db_path
Have Dovecot query our database, and not system users, for authentication.
sed -i "s/#*\(\!include auth-system.conf.ext\)/#\1/" /etc/dovecot/conf.d/10-auth.conf
sed -i "s/#\(\!include auth-sql.conf.ext\)/\1/" /etc/dovecot/conf.d/10-auth.conf
Specify how the database is to be queried for user authentication (passdb) and where user mailboxes are stored (userdb).
passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}
userdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}
Configure the SQL to query for a user's metadata and password.
driver = sqlite
connect = $db_path
default_pass_scheme = SHA512-CRYPT
password_query = SELECT email as user, password FROM users WHERE email='%u';
user_query = SELECT email AS user, "mail" as uid, "mail" as gid, "$STORE/mail/mailboxes/%d/%n" as home FROM users WHERE email='%u';
iterate_query = SELECT email AS user FROM users;
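As an aside, the password column holds crypt-style SHA512-CRYPT hashes. The tool used elsewhere in this setup is `doveadm pw -s SHA512-CRYPT`; on systems with OpenSSL 1.1.1 or later, `openssl passwd -6` produces a compatible `$6$` hash, which makes for an easy stand-alone demo:

```shell
# 'saltsalt' and 'hunter2' are made-up demo values.
hash=$(openssl passwd -6 -salt saltsalt 'hunter2')
echo "$hash"   # $6$saltsalt$... (86 hash characters follow)
```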
chmod 0600 /etc/dovecot/dovecot-sql.conf.ext # per Dovecot instructions
Have Dovecot provide an authorization service that Postfix can access & use.
service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }
}
And have Postfix use that service. We disable it here so that authentication is not permitted on port 25 (which does not run DKIM on relayed mail, so outbound mail isn't correct, see #830), but we enable it specifically for the submission port.
smtpd_sasl_type=dovecot
smtpd_sasl_path=private/auth
smtpd_sasl_auth_enable=no
We use Postfix's reject_authenticated_sender_login_mismatch filter to prevent intra-domain spoofing by logged in but untrusted users in outbound email. In all outbound mail (the sender has authenticated), the MAIL FROM address (aka envelope or return path address) must be "owned" by the user who authenticated. An SQL query will find who are the owners of any given address.
smtpd_sender_login_maps=sqlite:/etc/postfix/sender-login-maps.cf
Postfix will query the exact address first, where the priority will be alias records first, then user records. If there are no matches for the exact address, then Postfix will query just the domain part, which we call catch-alls and domain aliases. A NULL permitted_senders column means to take the value from the destination column.
dbpath=$db_path
query = SELECT permitted_senders FROM (SELECT permitted_senders, 0 AS priority FROM aliases WHERE source='%s' AND permitted_senders IS NOT NULL UNION SELECT destination AS permitted_senders, 1 AS priority FROM aliases WHERE source='%s' AND permitted_senders IS NULL UNION SELECT email as permitted_senders, 2 AS priority FROM users WHERE email='%s') ORDER BY priority LIMIT 1;
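The priority logic can be tried out against a throwaway in-memory database, assuming the sqlite3 CLI is installed (the alice@/bob@ rows are made up): the alias row outranks the user row, so the alias's destination is returned as the permitted sender.

```shell
# Demo data: alice@ is a user, and an alias alice@ -> bob@ also exists with
# permitted_senders left NULL, so its destination doubles as the permitted sender.
out=$(sqlite3 :memory: "
CREATE TABLE users (email TEXT, password TEXT);
CREATE TABLE aliases (source TEXT, destination TEXT, permitted_senders TEXT);
INSERT INTO users VALUES ('alice@example.com', 'x');
INSERT INTO aliases VALUES ('alice@example.com', 'bob@example.com', NULL);
SELECT permitted_senders FROM (
  SELECT permitted_senders, 0 AS priority FROM aliases
    WHERE source='alice@example.com' AND permitted_senders IS NOT NULL
  UNION
  SELECT destination AS permitted_senders, 1 AS priority FROM aliases
    WHERE source='alice@example.com' AND permitted_senders IS NULL
  UNION
  SELECT email AS permitted_senders, 2 AS priority FROM users
    WHERE email='alice@example.com'
) ORDER BY priority LIMIT 1;")
echo "$out"   # bob@example.com
```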
Use a Sqlite3 database to check whether a destination email address exists, and to perform any email alias rewrites in Postfix.
virtual_mailbox_domains=sqlite:/etc/postfix/virtual-mailbox-domains.cf
virtual_mailbox_maps=sqlite:/etc/postfix/virtual-mailbox-maps.cf
virtual_alias_maps=sqlite:/etc/postfix/virtual-alias-maps.cf
local_recipient_maps=$virtual_mailbox_maps
SQL statement to check if we handle incoming mail for a domain, either for users or aliases.
dbpath=$db_path
query = SELECT 1 FROM users WHERE email LIKE '%%@%s' UNION SELECT 1 FROM aliases WHERE source LIKE '%%@%s'
SQL statement to check if we handle incoming mail for a user.
dbpath=$db_path
query = SELECT 1 FROM users WHERE email='%s'
SQL statement to rewrite an email address if an alias is present. Postfix makes multiple queries for each incoming mail. It first queries the whole email address, then just the user part in certain locally-directed cases (but we don't use this), then just the @domain part. The first query that returns something wins. See http://www.postfix.org/virtual.5.html.
virtual-alias-maps has precedence over virtual-mailbox-maps, but we don't want catch-alls and domain aliases to catch mail for users that have been defined on those domains. To fix this, we not only query the aliases table but also the users table when resolving aliases, i.e. we turn users into aliases from themselves to themselves. That means users will match in Postfix's first query before Postfix gets to the third query for catch-alls/domain aliases.
If there is both an alias and a user for the same address either might be returned by the UNION, so the whole query is wrapped in another select that prioritizes the alias definition to preserve postfix's preference for aliases for whole email addresses.
Since an alias record might have an empty destination (it might exist only for its permitted_senders), skip any records with an empty destination here so that other, lower-priority rules can match.
dbpath=$db_path
query = SELECT destination from (SELECT destination, 0 as priority FROM aliases WHERE source='%s' AND destination<>'' UNION SELECT email as destination, 1 as priority FROM users WHERE email='%s') ORDER BY priority LIMIT 1;
Restart services.
service postfix restart
service dovecot restart
OpenDKIM provides a service that puts a DKIM signature on outbound mail.
The DNS configuration for DKIM is done in the management daemon.
Install DKIM...
apt-get install -y opendkim opendkim-tools opendmarc
Make sure configuration directories exist.
mkdir -p /etc/opendkim
mkdir -p $STORE/mail/dkim
Used in InternalHosts and ExternalIgnoreList configuration directives. Not quite sure why.
echo 127.0.0.1 > /etc/opendkim/TrustedHosts
We need to at least create these files, since we reference them later. Otherwise, opendkim startup will fail
touch /etc/opendkim/KeyTable
touch /etc/opendkim/SigningTable
Add various configuration options to the end of opendkim.conf.
MinimumKeyBits 1024
ExternalIgnoreList refile:/etc/opendkim/TrustedHosts
InternalHosts refile:/etc/opendkim/TrustedHosts
KeyTable refile:/etc/opendkim/KeyTable
SigningTable refile:/etc/opendkim/SigningTable
Socket inet:8891@127.0.0.1
RequireSafeKeys false
Create a new DKIM key. This creates mail.private and mail.txt in $STORE/mail/dkim. The former is the private key and the latter is the suggested DNS TXT entry which we'll include in our DNS setup. Note that the files are named after the 'selector' of the key, which we can change later on to support key rotation.
A 1024-bit key is seen as a minimum standard by several providers such as Google. But they and others use a 2048 bit key, so we'll do the same. Keys beyond 2048 bits may exceed DNS record limits.
opendkim-genkey -b 2048 -r -s mail -D $STORE/mail/dkim
Ensure files are owned by the opendkim user and are private otherwise.
chown -R opendkim:opendkim $STORE/mail/dkim
chmod go-rwx $STORE/mail/dkim
Syslog true
Socket inet:8893@[127.0.0.1]
Add OpenDKIM and OpenDMARC as milters to postfix, which is how OpenDKIM intercepts outgoing mail to perform the signing (by adding a mail header) and how they both intercept incoming mail to add Authentication-Results headers. The order possibly/probably matters: OpenDMARC relies on the OpenDKIM Authentication-Results header already being present.
Be careful. If we add other milters later, this needs to be concatenated on the smtpd_milters line.
The OpenDMARC milter is skipped in the SMTP submission listener by configuring smtpd_milters there to only list the OpenDKIM milter (see mail-postfix.sh).
smtpd_milters=inet:127.0.0.1:8891 inet:127.0.0.1:8893
non_smtpd_milters=$smtpd_milters
milter_default_action=accept
We need to explicitly enable the opendmarc service, or it will not start
systemctl enable opendmarc
Restart services.
service opendkim restart
service opendmarc restart
service postfix restart
spampd sits between postfix and dovecot. It takes mail from postfix over the LMTP protocol, runs spamassassin on it, and then passes the message over LMTP to dovecot for local delivery.
In order to move spam automatically into the Spam folder we use the dovecot sieve plugin.
Install packages. libmail-dkim-perl is needed to make the spamassassin DKIM module work. For more information see Debian Bug #689414: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=689414
apt-get install -y spampd razor pyzor dovecot-antispam libmail-dkim-perl
Allow spamassassin to download new rules.
CRON=1
Configure pyzor, which is a client to a live database of hashes of spam emails. Set the pyzor configuration directory to something sane. The default is ~/.pyzor. We used to use that, so we'll kill that old directory. Then write the public pyzor server to its servers file. That will prevent an automatic download on first use, and also means we can skip 'pyzor discover', both of which are currently broken by something happening on Sourceforge (#496).
rm -rf ~/.pyzor
pyzor_options --homedir /etc/spamassassin/pyzor
mkdir -p /etc/spamassassin/pyzor
echo public.pyzor.org:24441 > /etc/spamassassin/pyzor/servers
Check with: pyzor --homedir /etc/spamassassin/pyzor ping
Configure spampd:
- Pass messages on to dovecot on port 10026. This is actually the default setting but we don't want to lose track of it. (We've configured Dovecot to listen on this port elsewhere.)
- Increase the maximum message size of scanned messages from the default of 64KB to 2000KB, well above Spamassassin (spamc)'s own default of 500KB. Specified in KBytes.
- Disable localmode so Pyzor, DKIM and DNS checks can be used.
DESTPORT=10026
ADDOPTS="--maxsize=2000"
LOCALONLY=0
Spamassassin normally wraps spam as an attachment inside a fresh email with a report about the message. This also protects the user from accidentally opening a message with embedded malware.
It's nice to see which rules caused the message to be marked as spam, but it's annoying to have to dig the original message out of an attachment; modern mail clients are safer now and don't load remote content or execute scripts, and the wrapping is probably confusing to most users.
Tell Spamassassin not to modify the original message except for adding the X-Spam-Status & X-Spam-Score mail headers and related headers.
report_safe 0
add_header all Report _REPORT_
add_header all Score _SCORE_
Spamassassin can learn from mail marked as spam or ham, but it needs to be configured. We'll store the learning data in our storage area.
These files must be accessible to several processes. We'll have them owned by spampd and grant group access to the other processes that need them. Spamassassin will change the access rights back to the defaults, so we must also configure the filemode in the config file.
bayes_path $STORE/mail/spamassassin/bayes bayes_file_mode 0666
mkdir -p $STORE/mail/spamassassin
chown -R spampd:spampd $STORE/mail/spamassassin
To mark mail as spam or ham, just drag it in or out of the Spam folder. We'll use the Dovecot antispam plugin to detect the message move operation and execute a shell script that invokes learning.
Enable the Dovecot antispam plugin.
sed -i "s/#mail_plugins = .*/mail_plugins = \$mail_plugins antispam/" /etc/dovecot/conf.d/20-imap.conf
sed -i "s/#mail_plugins = .*/mail_plugins = \$mail_plugins antispam/" /etc/dovecot/conf.d/20-pop3.conf
Configure the antispam plugin to call sa-learn-pipe.sh.
plugin {
  antispam_backend = pipe
  antispam_spam_pattern_ignorecase = SPAM
  antispam_trash_pattern_ignorecase = trash;Deleted *
  antispam_allow_append_to_spam = yes
  antispam_pipe_program_spam_args = /usr/local/bin/sa-learn-pipe.sh;--spam
  antispam_pipe_program_notspam_args = /usr/local/bin/sa-learn-pipe.sh;--ham
  antispam_pipe_program = /bin/bash
}
Have Dovecot run its mail process with a supplementary group (the spampd group) so that it can access the learning files.
mail_access_groups=spampd
Here's the script that the antispam plugin executes. It spools the message into a temporary file and then runs sa-learn on it. From http://wiki2.dovecot.org/Plugins/Antispam.
cat <&0 >> /tmp/sendmail-msg-$$.txt
/usr/bin/sa-learn $* /tmp/sendmail-msg-$$.txt > /dev/null
rm -f /tmp/sendmail-msg-$$.txt
exit 0
chmod a+x /usr/local/bin/sa-learn-pipe.sh
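The same spool-then-process pattern can be sketched with mktemp, which avoids the predictable /tmp/sendmail-msg-$$.txt filename; `wc -c` stands in here for the real sa-learn invocation:

```shell
# Hypothetical stand-in for sa-learn-pipe.sh: spool stdin to a temp file,
# process it, then clean up.
spool_and_process() {
    msg=$(mktemp)
    cat > "$msg"            # spool the message from stdin
    wc -c < "$msg"          # stand-in for: /usr/bin/sa-learn "$@" "$msg"
    rm -f "$msg"
}

printf 'hello' | spool_and_process   # prints 5 (byte count of the spooled message)
```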
Create empty bayes training data (if it doesn't exist). Once the files exist, ensure they are group-writable so that the Dovecot process has access.
sudo -u spampd /usr/bin/sa-learn --sync 2>/dev/null
chmod -R 660 $STORE/mail/spamassassin
chmod 770 $STORE/mail/spamassassin
Initial training?
sa-learn --ham storage/mail/mailboxes///cur/
sa-learn --spam storage/mail/mailboxes///.Spam/cur/
Kick services.
service spampd restart
service dovecot restart
HTTP: Turn on a web server serving static files
Some Ubuntu images start off with Apache. Remove it since we will use nginx. Use autoremove to remove any Apache dependencies.
apt-get -y purge apache2 apache2-*
apt-get -y --purge autoremove
Install nginx and a PHP FastCGI daemon.
Turn off nginx's default website.
apt-get install -y nginx php-cli php-fpm
rm -f /etc/nginx/sites-enabled/default
Copy in a nginx configuration file for common and best-practices SSL settings from @konklone. Replace STORAGE_ROOT so it can find the DH params.
rm -f /etc/nginx/nginx-ssl.conf # we used to put it here
sed s#STORAGE_ROOT#$STORE# conf/nginx-ssl.conf > /etc/nginx/conf.d/ssl.conf
Fix some nginx defaults. The server_names_hash_bucket_size seems to prevent long domain names! The default, according to nginx's docs, depends on "the size of the processor’s cache line." It could be as low as 32. We fixed it at 64 in 2014 to accommodate a long domain name (20 characters?). But even at 64, a 58-character domain name won't work (#93), so now we're going up to 128.
server_names_hash_bucket_size 128;
Tell PHP not to expose its version number in the X-Powered-By header.
expose_php=Off
Set PHPs default charset to UTF-8, since we use it. See #367.
default_charset=UTF-8
Switch from the dynamic process manager to the ondemand manager; see #1216.
pm=ondemand
Bump up PHP's max_children to support more concurrent connections
pm.max_children=8
Other nginx settings will be configured by the management service since it depends on what domains we're serving, which we don't know until mail accounts have been created.
Create the iOS/OS X Mobile Configuration file which is exposed via the nginx configuration at /mailinabox-mobileconfig.
mkdir -p /var/lib/mailinabox
chmod a+rx /var/lib/mailinabox
cat conf/ios-profile.xml \
	| sed s/box.yourdomain.com/box.yourdomain.com/ \
	| sed "s/UUID1/$(cat /proc/sys/kernel/random/uuid)/" \
	| sed "s/UUID2/$(cat /proc/sys/kernel/random/uuid)/" \
	| sed "s/UUID3/$(cat /proc/sys/kernel/random/uuid)/" \
	| sed "s/UUID4/$(cat /proc/sys/kernel/random/uuid)/" \
	> /var/lib/mailinabox/mobileconfig.xml
chmod a+r /var/lib/mailinabox/mobileconfig.xml
Create the Mozilla Auto-configuration file which is exposed via the nginx configuration at /.well-known/autoconfig/mail/config-v1.1.xml. The format of the file is documented at: https://wiki.mozilla.org/Thunderbird:Autoconfiguration:ConfigFileFormat and https://developer.mozilla.org/en-US/docs/Mozilla/Thunderbird/Autoconfiguration/FileFormat/HowTo.
cat conf/mozilla-autoconfig.xml \
	| sed s/box.yourdomain.com/box.yourdomain.com/ \
	> /var/lib/mailinabox/mozilla-autoconfig.xml
chmod a+r /var/lib/mailinabox/mozilla-autoconfig.xml
Make a default homepage.
mkdir -p $STORE/www/default
cp conf/www_default.html $STORE/www/default/index.html
chown -R $STORAGE_USER $STORE/www
Start services.
service nginx restart
service php7.2-fpm restart
Open ports.
ufw allow http
ufw allow https
We install Roundcube from source, rather than from Ubuntu, because:
- Ubuntu's roundcube-core package has dependencies on Apache & MySQL, which we don't want.
- The Roundcube shipped with Ubuntu is consistently out of date.
- It's packaged incorrectly --- it seems to be missing a directory of files.

So we'll use apt-get to manually install the dependencies of Roundcube that we know we need, and then we'll manually install Roundcube from source. These dependencies are from apt-cache showpkg roundcube-core.
apt-get install -y dbconfig-common php-cli php-sqlite3 php-intl php-json php-common php-curl php-gd php-pspell tinymce libjs-jquery libjs-jquery-mousewheel libmagic1 php-mbstring
Install Roundcube from source if it is not already present or if it is out of date. Combine the Roundcube version number with the commit hash of plugins to track whether we have the latest version of everything.
VERSION=1.3.10
HASH=431625fc737e301f9b7e502cccc61e50a24786b8
PERSISTENT_LOGIN_VERSION=dc5ca3d3f4415cc41edb2fde533c8a8628a94c76
HTML5_NOTIFIER_VERSION=4b370e3cd60dabd2f428a26f45b677ad1b7118d5
CARDDAV_VERSION=3.0.3
CARDDAV_HASH=d1e3b0d851ffa2c6bd42bf0c04f70d0e1d0d78f8
UPDATE_KEY=$VERSION:$PERSISTENT_LOGIN_VERSION:$HTML5_NOTIFIER_VERSION:$CARDDAV_VERSION
Paths that are often reused.

RCM_DIR=/usr/local/lib/roundcubemail
RCM_PLUGIN_DIR=${RCM_DIR}/plugins
RCM_CONFIG=${RCM_DIR}/config/config.inc.php
Check if the installed version is what we want; if not, install Roundcube:
wget_verify https://github.com/roundcube/roundcubemail/releases/download/$VERSION/roundcubemail-$VERSION-complete.tar.gz $HASH /tmp/roundcube.tgz
tar -C /usr/local/lib --no-same-owner -zxf /tmp/roundcube.tgz
rm -rf /usr/local/lib/roundcubemail
mv /usr/local/lib/roundcubemail-$VERSION/ $RCM_DIR
rm -f /tmp/roundcube.tgz
install roundcube persistent_login plugin
git_clone https://github.com/mfreiholz/Roundcube-Persistent-Login-Plugin.git $PERSISTENT_LOGIN_VERSION ${RCM_PLUGIN_DIR}/persistent_login
install roundcube html5_notifier plugin
git_clone https://github.com/kitist/html5_notifier.git $HTML5_NOTIFIER_VERSION ${RCM_PLUGIN_DIR}/html5_notifier
download and verify the full release of the carddav plugin
wget_verify https://github.com/blind-coder/rcmcarddav/releases/download/v${CARDDAV_VERSION}/carddav-${CARDDAV_VERSION}.zip $CARDDAV_HASH /tmp/carddav.zip
unzip and cleanup
unzip -q /tmp/carddav.zip -d ${RCM_PLUGIN_DIR}
rm -f /tmp/carddav.zip
record the version we've installed
echo $UPDATE_KEY > ${RCM_DIR}/version
Generate a safe 24-character secret key of safe characters.
SECRET_KEY=$(dd if=/dev/urandom bs=1 count=18 2>/dev/null | base64 | fold -w 24 | head -n 1)
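A note on the arithmetic: base64 emits 4 output characters for every 3 input bytes, so 18 random bytes encode to exactly 24 characters with no '=' padding, which is why the fold/head pipeline never truncates mid-key.

```shell
# Demo only; uses a fresh variable so the real SECRET_KEY is untouched.
key=$(dd if=/dev/urandom bs=1 count=18 2>/dev/null | base64)
echo ${#key}   # 24
```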
Create a configuration file.
For security, temp and log files are not stored in the default locations which are inside the roundcube sources directory. We put them instead in normal places.
cat > $RCM_CONFIG <<EOF
<?php
/*
 * Do not edit. Written by Mail-in-a-Box. Regenerated on updates.
 */
\$config = array();
\$config['log_dir'] = '/var/log/roundcubemail/';
\$config['temp_dir'] = '/var/tmp/roundcubemail/';
\$config['db_dsnw'] = 'sqlite:///$STORE/mail/roundcube/roundcube.sqlite?mode=0640';
\$config['default_host'] = 'ssl://localhost';
\$config['default_port'] = 993;
\$config['imap_conn_options'] = array(
  'ssl' => array(
    'verify_peer' => false,
    'verify_peer_name' => false,
  ),
);
\$config['imap_timeout'] = 15;
\$config['smtp_server'] = 'tls://127.0.0.1';
\$config['smtp_port'] = 587;
\$config['smtp_user'] = '%u';
\$config['smtp_pass'] = '%p';
\$config['smtp_conn_options'] = array(
  'ssl' => array(
    'verify_peer' => false,
    'verify_peer_name' => false,
  ),
);
\$config['support_url'] = 'https://mailinabox.email/';
\$config['product_name'] = 'box.yourdomain.com Webmail';
\$config['des_key'] = '$SECRET_KEY';
\$config['plugins'] = array('html5_notifier', 'archive', 'zipdownload', 'password', 'managesieve', 'jqueryui', 'persistent_login', 'carddav');
\$config['skin'] = 'larry';
\$config['login_autocomplete'] = 2;
\$config['password_charset'] = 'UTF-8';
\$config['junk_mbox'] = 'Spam';
?>
EOF
Configure CardDav
cat > ${RCM_PLUGIN_DIR}/carddav/config.inc.php <<EOF
<?php
/* Do not edit. Written by Mail-in-a-Box. Regenerated on updates. */
\$prefs['_GLOBAL']['hide_preferences'] = true;
\$prefs['_GLOBAL']['suppress_version_warning'] = true;
\$prefs['ownCloud'] = array(
  'name' => 'ownCloud',
  'username' => '%u', // login username
  'password' => '%p', // login password
  'url' => 'https://box.yourdomain.com/cloud/remote.php/carddav/addressbooks/%u/contacts',
  'active' => true,
  'readonly' => false,
  'refresh_time' => '02:00:00',
  'fixed' => array('username','password'),
  'preemptive_auth' => '1',
  'hide' => false,
);
?>
EOF
Create writable directories.
mkdir -p /var/log/roundcubemail /var/tmp/roundcubemail $STORE/mail/roundcube
chown -R www-data:www-data /var/log/roundcubemail /var/tmp/roundcubemail $STORE/mail/roundcube
Ensure the log file monitored by fail2ban exists, or else fail2ban can't start.
sudo -u www-data touch /var/log/roundcubemail/errors
Password changing plugin settings. The config comes empty by default, so we need to bring in the settings we're not planning to change from config.inc.php.dist...
cp ${RCM_PLUGIN_DIR}/password/config.inc.php.dist ${RCM_PLUGIN_DIR}/password/config.inc.php
tools/editconf.py ${RCM_PLUGIN_DIR}/password/config.inc.php \
	"\$config['password_minimum_length']=8;" \
	"\$config['password_db_dsn']='sqlite:///$STORE/mail/users.sqlite';" \
	"\$config['password_query']='UPDATE users SET password=%D WHERE email=%u';" \
	"\$config['password_dovecotpw']='/usr/bin/doveadm pw';" \
	"\$config['password_dovecotpw_method']='SHA512-CRYPT';" \
	"\$config['password_dovecotpw_with_method']=true;"
so PHP can use doveadm, for the password changing plugin
usermod -a -G dovecot www-data
Set permissions so that PHP can use users.sqlite. (We could use the dovecot user instead of www-data, but it's not clear it matters.)
chown root:www-data $STORE/mail
chmod 775 $STORE/mail
chown root:www-data $STORE/mail/users.sqlite
chmod 664 $STORE/mail/users.sqlite
Fix Carddav permissions:
chown -f -R root.www-data ${RCM_PLUGIN_DIR}/carddav
root and www-data need all permissions, others only read:
chmod -R 774 ${RCM_PLUGIN_DIR}/carddav
Run Roundcube database migration script (database is created if it does not exist)
${RCM_DIR}/bin/updatedb.sh --dir ${RCM_DIR}/SQL --package roundcube
chown www-data:www-data $STORE/mail/roundcube/roundcube.sqlite
chmod 664 $STORE/mail/roundcube/roundcube.sqlite
Enable PHP modules.
phpenmod -v php mcrypt imap
service php7.2-fpm restart
Nextcloud
apt-get purge -qq -y owncloud* # we used to use the package manager
apt-get install -y php php-fpm php-cli php-sqlite3 php-gd php-imap php-curl php-pear curl php-dev php-gd php-xml php-mbstring php-zip php-apcu php-json php-intl php-imagick

InstallNextcloud() {
	version=$1
	hash=$2
	echo
	echo
Download and verify
wget_verify https://download.nextcloud.com/server/releases/nextcloud-$version.zip $hash /tmp/nextcloud.zip
Remove the current owncloud/Nextcloud
rm -rf /usr/local/lib/owncloud
Extract ownCloud/Nextcloud
unzip -q /tmp/nextcloud.zip -d /usr/local/lib
mv /usr/local/lib/nextcloud /usr/local/lib/owncloud
rm -f /tmp/nextcloud.zip
The two apps we actually want are not in Nextcloud core. Download the releases from their github repositories.
mkdir -p /usr/local/lib/owncloud/apps
wget_verify https://github.com/nextcloud/contacts/releases/download/v3.1.1/contacts.tar.gz a06bd967197dcb03c94ec1dbd698c037018669e5 /tmp/contacts.tgz
tar xf /tmp/contacts.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/contacts.tgz
wget_verify https://github.com/nextcloud/calendar/releases/download/v1.6.5/calendar.tar.gz 79941255521a5172f7e4ce42dc7773838b5ede2f /tmp/calendar.tgz
tar xf /tmp/calendar.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/calendar.tgz
Starting with Nextcloud 15, the app user_external is no longer included in Nextcloud core, so we will install it from its GitHub repository.
wget_verify https://github.com/nextcloud/user_external/releases/download/v0.6.3/user_external-0.6.3.tar.gz 0f756d35fef6b64a177d6a16020486b76ea5799c /tmp/user_external.tgz
tar -xf /tmp/user_external.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/user_external.tgz
Fix weird permissions.
chmod 750 /usr/local/lib/owncloud/{apps,config}
Create a symlink to the config.php in STORAGE_ROOT (for upgrades we're restoring the symlink we previously put in, and in new installs we're creating a symlink and will create the actual config later).
ln -sf $STORE/owncloud/config.php /usr/local/lib/owncloud/config/config.php
Make sure permissions are correct or the upgrade step won't run. $STORE/owncloud may not yet exist, so use -f to suppress that error.
chown -f -R www-data.www-data $STORE/owncloud /usr/local/lib/owncloud || /bin/true
If this isn't a new installation, immediately run the upgrade script. Then check for success (0=ok and 3=no upgrade needed, both are success). ownCloud 8.1.1 broke upgrades. It may fail on the first attempt, but that can be OK.
sudo -u www-data php /usr/local/lib/owncloud/occ upgrade
sudo -u www-data php /usr/local/lib/owncloud/occ upgrade
sudo -u www-data php /usr/local/lib/owncloud/occ maintenance:mode --off
Add missing indices. NextCloud didn't include this in the normal upgrade because it might take some time.
sudo -u www-data php /usr/local/lib/owncloud/occ db:add-missing-indices
Run conversion to BigInt identifiers, this process may take some time on large tables.
sudo -u www-data php /usr/local/lib/owncloud/occ db:convert-filecache-bigint --no-interaction
}
Nextcloud Version to install. Checks are done down below to step through intermediate versions.
nextcloud_ver=15.0.8
nextcloud_hash=4129d8d4021c435f2e86876225fb7f15adf764a3
Current Nextcloud version (#1623): checking /usr/local/lib/owncloud/version.php shows the version of the Nextcloud application, not the database. $STORE/owncloud is kept together even during a backup, so it is better to rely on config.php than version.php, since the restore procedure can leave the system in a state where the Nextcloud application version is newer than the database.
If config.php exists, get version number, otherwise CURRENT_NEXTCLOUD_VER is empty.
if [ -f $STORE/owncloud/config.php ]; then
	CURRENT_NEXTCLOUD_VER=$(php -r "include(\"$STORE/owncloud/config.php\"); echo(\$CONFIG['version']);")
else
	CURRENT_NEXTCLOUD_VER=""
fi
If the Nextcloud directory is missing (it has never been installed before), or the Nextcloud version to be installed is different from the version currently installed, do the install/upgrade.
Stop php-fpm if running. If it's not running (which happens on a previously failed install), don't bail.
service php7.2-fpm stop &> /dev/null || /bin/true
Back up the existing ownCloud/Nextcloud: create a backup directory to store the current installation and database in.
BACKUP_DIRECTORY=$STORE/owncloud-backup/$(date +%Y-%m-%d-%T)
mkdir -p $BACKUP_DIRECTORY
cp -r /usr/local/lib/owncloud $BACKUP_DIRECTORY/owncloud-install
cp $STORE/owncloud/owncloud.db $BACKUP_DIRECTORY
cp $STORE/owncloud/config.php $BACKUP_DIRECTORY
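For reference, `date +%Y-%m-%d-%T` produces a stamp like 2019-07-01-14:03:27 (%T is HH:MM:SS); the colons are legal in directory names on ext4, just awkward to type at a shell.

```shell
# Show the timestamp format the backup directory names use.
stamp=$(date +%Y-%m-%d-%T)
echo "$stamp"
```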
If ownCloud or Nextcloud was previously installed.... Database migrations from ownCloud are no longer possible because ownCloud cannot be run under PHP 7.
exit 1
If we are running Nextcloud 13, upgrade to Nextcloud 14
InstallNextcloud 14.0.6 4e43a57340f04c2da306c8eea98e30040399ae5a
During the upgrade from Nextcloud 14 to 15, user_external may cause the upgrade to fail. We will disable it here before the upgrade and install it again after the upgrade.
sudo -u www-data php /usr/local/lib/owncloud/console.php app:disable user_external
InstallNextcloud $nextcloud_ver $nextcloud_hash
Set up Nextcloud if the Nextcloud database does not yet exist. Running setup when the database does exist wipes the database and user data. First create the user data directory:
mkdir -p $STORE/owncloud
Create an initial configuration file.
instanceid=oc$(echo box.yourdomain.com | sha1sum | fold -w 10 | head -n 1)
cat > $STORE/owncloud/config.php <<EOF
<?php
\$CONFIG = array (
  'datadirectory' => '$STORE/owncloud',
  'instanceid' => '$instanceid',
  'forcessl' => true, # if unset/false, Nextcloud sends a HSTS=0 header, which conflicts with nginx config
  'overwritewebroot' => '/cloud',
  'overwrite.cli.url' => '/cloud',
  'user_backends' => array(
    array(
      'class' => 'OC_User_IMAP',
      'arguments' => array('127.0.0.1', 143, null),
    ),
  ),
  'memcache.local' => '\OC\Memcache\APCu',
  'mail_smtpmode' => 'sendmail',
  'mail_smtpsecure' => '',
  'mail_smtpauthtype' => 'LOGIN',
  'mail_smtpauth' => false,
  'mail_smtphost' => '',
  'mail_smtpport' => '',
  'mail_smtpname' => '',
  'mail_smtppassword' => '',
  'mail_from_address' => 'owncloud',
);
?>
EOF
Create an auto-configuration file to fill in database settings when the install script is run. Make an administrator account here or else the install can't finish.
adminpassword=$(dd if=/dev/urandom bs=1 count=40 2>/dev/null | sha1sum | fold -w 30 | head -n 1)
<?php
$AUTOCONFIG = array (
  # storage/database
  'directory' => '$STORE/owncloud',
  'dbtype' => 'sqlite3',
  # create an administrator account with a random password so that
  # the user does not have to enter anything on first load of Nextcloud
  'adminlogin' => 'root',
  'adminpass' => '$adminpassword',
);
?>
Set permissions
chown -R www-data:www-data $STORE/owncloud /usr/local/lib/owncloud
Execute Nextcloud's setup step, which creates the Nextcloud sqlite database. It also wipes it if it exists. And it updates config.php with database settings and deletes the autoconfig.php file.
(cd /usr/local/lib/owncloud; sudo -u www-data php /usr/local/lib/owncloud/index.php;)
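As noted above, running this setup step against an existing database wipes user data. A hypothetical guard (not the upstream script) showing how the step can be made safe to re-run:

```shell
# Hypothetical guard (not the upstream script): only run Nextcloud's
# destructive setup step while the sqlite database does not exist yet.
nextcloud_setup_needed() {
    store_dir=$1
    # setup is needed (and safe) only when there is no database file
    [ ! -f "$store_dir/owncloud/owncloud.db" ]
}

# Demonstration against a scratch directory:
tmp=$(mktemp -d)
nextcloud_setup_needed "$tmp" && echo "run setup"
mkdir -p "$tmp/owncloud" && touch "$tmp/owncloud/owncloud.db"
nextcloud_setup_needed "$tmp" || echo "skip setup - database exists"
rm -rf "$tmp"
```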
Update config.php:
- trusted_domains is reset to localhost by autoconfig starting with ownCloud 8.1.1, so set it here. It can also change if the box's hostname (box.yourdomain.com) changes, so this makes sure it has the right value.
- Some settings weren't included in previous versions of Mail-in-a-Box.
- Set the timezone to the system timezone so that fail2ban bans users within the proper timeframe.
- Set logdateformat to something that works correctly with fail2ban.
- mail_domain needs to be set every time we run the setup, to make sure the correct domain name is used if the domain has changed since the previous setup.
Use PHP to read the settings file, modify it, and write out the new settings array.
TIMEZONE=$(cat /etc/timezone)
CONFIG_TEMP=$(/bin/mktemp)
php <<EOF > $CONFIG_TEMP && mv $CONFIG_TEMP $STORE/owncloud/config.php
<?php
include("$STORE/owncloud/config.php");

\$CONFIG['trusted_domains'] = array('box.yourdomain.com');

\$CONFIG['memcache.local'] = '\OC\Memcache\APCu';
\$CONFIG['overwrite.cli.url'] = '/cloud';
\$CONFIG['mail_from_address'] = 'administrator'; # just the local part, matches our master administrator address

\$CONFIG['logtimezone'] = '$TIMEZONE';
\$CONFIG['logdateformat'] = 'Y-m-d H:i:s';

\$CONFIG['mail_domain'] = 'box.yourdomain.com';

\$CONFIG['user_backends'] = array(array('class' => 'OC_User_IMAP', 'arguments' => array('127.0.0.1', 143, null)));

echo "<?php\n\\\$CONFIG = ";
var_export(\$CONFIG);
echo ";";
?>
EOF
chown www-data:www-data $STORE/owncloud/config.php
Enable/disable apps. Note that this must be done after the Nextcloud setup. The firstrunwizard gave Josh all sorts of problems, so disabling that. user_external is what allows Nextcloud to use IMAP for login. The contacts and calendar apps are the extensions we really care about here.
sudo -u www-data php /usr/local/lib/owncloud/console.php app:disable firstrunwizard
sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable user_external
sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable contacts
sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable calendar
When upgrading, run the upgrade script again now that apps are enabled. It seems like the first upgrade at the top won't work because apps may be disabled during upgrade? Check for success (0=ok, 3=no upgrade needed).
sudo -u www-data php /usr/local/lib/owncloud/occ upgrade
E=$?
if [ $E -ne 0 ] && [ $E -ne 3 ]; then exit 1; fi
Set PHP FPM values to support large file uploads (semicolon is the comment character in this file, hashes produce deprecation warnings)
upload_max_filesize=16G
post_max_size=16G
output_buffering=16384
memory_limit=512M
max_execution_time=600
short_open_tag=On
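In the real setup these values are applied with Mail-in-a-Box's tools/editconf.py. A minimal shell equivalent, assuming simple key=value (or "key = value") lines, might look like this; set_ini_value is a hypothetical helper, not part of the upstream tooling:

```shell
# Hypothetical helper: set key=value in an ini-style file, replacing an
# existing assignment or appending the line if the key is absent.
set_ini_value() {
    file=$1 key=$2 value=$3
    if grep -q "^$key[ =]" "$file"; then
        sed -i "s|^$key[ =].*|$key=$value|" "$file"
    else
        echo "$key=$value" >> "$file"
    fi
}

# Example against a scratch copy instead of the real php.ini:
f=$(mktemp)
printf 'memory_limit = 128M\n' > "$f"
set_ini_value "$f" memory_limit 512M        # replaces the existing line
set_ini_value "$f" upload_max_filesize 16G  # appends a new line
cat "$f"
rm -f "$f"
```

Applying edits this way keeps the operation idempotent: re-running setup never duplicates settings.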
Set Nextcloud recommended opcache settings
opcache.enable=1
opcache.enable_cli=1
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1
Configure the path environment for php-fpm
env[PATH]=/usr/local/bin:/usr/bin:/bin
If apc is explicitly disabled we need to enable it
tools/editconf.py /etc/php/7.2/mods-available/apcu.ini -c ';' apc.enabled=1
Set up a cron job for Nextcloud.
#!/bin/bash
# Mail-in-a-Box
sudo -u www-data php -f /usr/local/lib/owncloud/cron.php
chmod +x /etc/cron.hourly/mailinabox-owncloud
There's nothing much of interest that a user could do as an admin for Nextcloud,
and there's a lot they could mess up, so we don't make any users admins of Nextcloud.
But if we wanted to, we would do this:
for user in $(tools/mail.py user admins); do
sqlite3 $STORE/owncloud/owncloud.db "INSERT OR IGNORE INTO oc_group_user VALUES ('admin', '$user')"
done
Enable PHP modules and restart PHP.
service php7.2-fpm restart
Mostly for use on iOS which doesn't support IMAP IDLE.
Although Ubuntu ships Z-Push (as d-push) it has a dependency on Apache so we won't install it that way.
Thanks to http://frontender.ch/publikationen/push-mail-server-using-nginx-and-z-push.html.
Prereqs.
apt-get install -y php-soap php-imap libawl-php php-xsl
phpenmod -v php imap
Copy Z-Push into place.
VERSION=2.5.0
TARGETHASH=30ce5c1af3f10939036361b6032d1187651b621e
Download the release archive and verify its hash.
wget_verify "https://stash.z-hub.io/rest/api/latest/projects/ZP/repos/z-push/archive?at=refs%2Ftags%2F$VERSION&format=zip" $TARGETHASH /tmp/z-push.zip
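wget_verify is a Mail-in-a-Box helper; the core idea is to pin the download to a known hash. A hypothetical standalone sketch of just the verification step (verify_sha1 is illustrative, not the upstream function):

```shell
# Hypothetical sketch of the check wget_verify performs: keep a
# downloaded file only if its SHA-1 matches the pinned hash.
verify_sha1() {
    file=$1 expected=$2
    actual=$(sha1sum "$file" | cut -d' ' -f1)
    [ "$actual" = "$expected" ]
}

# Demonstration with a local file whose hash we compute first:
f=$(mktemp)
printf 'hello\n' > "$f"
good=$(sha1sum "$f" | cut -d' ' -f1)
verify_sha1 "$f" "$good" && echo "hash ok"
verify_sha1 "$f" 0000000000000000000000000000000000000000 || echo "hash mismatch: discard file"
rm -f "$f"
```

Pinning the hash means a compromised or changed upstream archive fails loudly instead of being installed silently.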
Extract into place.
rm -rf /usr/local/lib/z-push /tmp/z-push
unzip -q /tmp/z-push.zip -d /tmp/z-push
mv /tmp/z-push/src /usr/local/lib/z-push
rm -rf /tmp/z-push.zip /tmp/z-push
rm -f /usr/sbin/z-push-{admin,top}
ln -s /usr/local/lib/z-push/z-push-admin.php /usr/sbin/z-push-admin
ln -s /usr/local/lib/z-push/z-push-top.php /usr/sbin/z-push-top
echo $VERSION > /usr/local/lib/z-push/version
Configure default config.
sed -i "s^define('TIMEZONE', .*^define('TIMEZONE', '$(cat /etc/timezone)');^" /usr/local/lib/z-push/config.php
sed -i "s/define('BACKEND_PROVIDER', .*/define('BACKEND_PROVIDER', 'BackendCombined');/" /usr/local/lib/z-push/config.php
sed -i "s/define('USE_FULLEMAIL_FOR_LOGIN', .*/define('USE_FULLEMAIL_FOR_LOGIN', true);/" /usr/local/lib/z-push/config.php
sed -i "s/define('LOG_MEMORY_PROFILER', .*/define('LOG_MEMORY_PROFILER', false);/" /usr/local/lib/z-push/config.php
sed -i "s/define('BUG68532FIXED', .*/define('BUG68532FIXED', false);/" /usr/local/lib/z-push/config.php
sed -i "s/define('LOGLEVEL', .*/define('LOGLEVEL', LOGLEVEL_ERROR);/" /usr/local/lib/z-push/config.php
Configure BACKEND
rm -f /usr/local/lib/z-push/backend/combined/config.php
cp conf/zpush/backend_combined.php /usr/local/lib/z-push/backend/combined/config.php
Configure IMAP
rm -f /usr/local/lib/z-push/backend/imap/config.php
cp conf/zpush/backend_imap.php /usr/local/lib/z-push/backend/imap/config.php
sed -i "s%STORAGE_ROOT%$STORE%" /usr/local/lib/z-push/backend/imap/config.php
Configure CardDav
rm -f /usr/local/lib/z-push/backend/carddav/config.php
cp conf/zpush/backend_carddav.php /usr/local/lib/z-push/backend/carddav/config.php
Configure CalDav
rm -f /usr/local/lib/z-push/backend/caldav/config.php
cp conf/zpush/backend_caldav.php /usr/local/lib/z-push/backend/caldav/config.php
Configure Autodiscover
rm -f /usr/local/lib/z-push/autodiscover/config.php
cp conf/zpush/autodiscover_config.php /usr/local/lib/z-push/autodiscover/config.php
sed -i s/box.yourdomain.com/box.yourdomain.com/ /usr/local/lib/z-push/autodiscover/config.php
sed -i "s^define('TIMEZONE', .*^define('TIMEZONE', '$(cat /etc/timezone)');^" /usr/local/lib/z-push/autodiscover/config.php
Some directories it will use.
mkdir -p /var/log/z-push
mkdir -p /var/lib/z-push
chmod 750 /var/log/z-push
chmod 750 /var/lib/z-push
chown www-data:www-data /var/log/z-push
chown www-data:www-data /var/lib/z-push
Add log rotation
/var/log/z-push/*.log {
	weekly
	missingok
	rotate 52
	compress
	delaycompress
	notifempty
}
Restart service.
service php7.2-fpm restart
Fix states after upgrade
z-push-admin -a fixstates
Munin: resource monitoring tool
install Munin
apt-get install -y munin munin-node libcgi-fast-perl
libcgi-fast-perl is needed by /usr/lib/munin/cgi/munin-cgi-graph
edit config
dbdir /var/lib/munin
htmldir /var/cache/munin/www
logdir /var/log/munin
rundir /var/run/munin
tmpldir /etc/munin/templates
includedir /etc/munin/munin-conf.d

# path dynazoom uses for requests
cgiurl_graph /admin/munin/cgi-graph

# a simple host tree
[box.yourdomain.com]
address 127.0.0.1

# send alerts to the following address
contacts admin
contact.admin.command mail -s "Munin notification ${var:host}" administrator@box.yourdomain.com
contact.admin.always_send warning critical
The Debian installer touches these files and chowns them to www-data:adm for use with spawn-fcgi
chown munin: /var/log/munin/munin-cgi-html.log
chown munin: /var/log/munin/munin-cgi-graph.log
ensure munin-node knows the name of this machine and reduce logging level to warning
host_name box.yourdomain.com
log_level 1
Update the activated plugins through munin's autoconfiguration.
munin-node-configure --shell --remove-also 2>/dev/null | sh || /bin/true
Deactivate monitoring of NTP peers. Not sure why anyone would want to monitor an NTP peer. The addresses seem to change (which is taken care of by munin-node-configure, but only when we re-run it).
find /etc/munin/plugins/ -lname /usr/share/munin/plugins/ntp_ -print0 | xargs -0 /bin/rm -f
Deactivate monitoring of network interfaces that are not up. Otherwise we can get a lot of empty charts.
for f in $(find /etc/munin/plugins/ \( -lname /usr/share/munin/plugins/if_ -o -lname /usr/share/munin/plugins/if_err_ -o -lname /usr/share/munin/plugins/bonding_err_ \))
do
	IF=$(echo $f | sed s/.*_//)
	# remove the plugin only when the interface does not report "up"
	if ! grep -qFx up /sys/class/net/$IF/operstate 2>/dev/null; then
		rm $f
	fi
done
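The loop above keys off /sys/class/net/<IF>/operstate, which holds a single word such as "up" or "down". A self-contained sketch of that test, pointed at a scratch directory instead of the real /sys:

```shell
# Standalone sketch of the interface-state test, using a scratch
# directory in place of /sys/class/net.
SYSNET=$(mktemp -d)
mkdir -p "$SYSNET/eth0" "$SYSNET/eth1"
echo up   > "$SYSNET/eth0/operstate"
echo down > "$SYSNET/eth1/operstate"

# same grep invocation as the real loop: fixed string, whole line
is_up() { grep -qFx up "$SYSNET/$1/operstate" 2>/dev/null; }

for IF in eth0 eth1; do
    if is_up "$IF"; then echo "$IF: keep munin plugin"; else echo "$IF: remove munin plugin"; fi
done
rm -rf "$SYSNET"
```

grep -qFx makes the match exact ("up" as the whole line, no regex), and the 2>/dev/null keeps missing interfaces from producing noise.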
Create a 'state' directory. Not sure why we need to do this manually.
mkdir -p /var/lib/munin-node/plugin-state/
Create a systemd service for munin.
ln -sf $(pwd)/management/munin_start.sh /usr/local/lib/mailinabox/munin_start.sh
chmod 0744 /usr/local/lib/mailinabox/munin_start.sh
systemctl link -f conf/munin.service
systemctl daemon-reload
systemctl unmask munin.service
systemctl enable munin.service
Restart services.
service munin restart
service munin-node restart
Generate initial statistics so the directory isn't empty. (We get "Pango-WARNING **: error opening config file '/root/.config/pango/pangorc': Permission denied" if we don't explicitly set the HOME directory when sudo'ing.) We first check whether munin-cron is already running; if it is, there is no need to run it again simultaneously and generate an error.
if ! pgrep munin-cron > /dev/null; then
	sudo -H -u munin munin-cron
fi