Setting up DMS (docker-mailserver) with PostgreSQL, PostfixAdmin and Roundcubemail

Some notes in the beginning:

DMS, docker-mailserver, is, as of now (2023-08-08), not intended to be used with a database and PostfixAdmin. It offers its user and domain management through a script and a text file, in which you write your accounts with hashed passwords, or your aliases.
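
For illustration, the stock management boils down to calls like these (a minimal sketch; the container name mailserver is an assumption, and the exact subcommands are what I remember from the DMS docs, so double-check there):

# add a mailbox account; the password is prompted for and stored
# hashed in DMS's postfix-accounts.cf
docker exec -ti mailserver setup email add user@example.com

# add an alias pointing at that account
docker exec -ti mailserver setup alias add alias@example.com user@example.com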

This is not a tutorial, it is my personal story with the topic. I might rewrite this as a tutorial at a later point in time.

Motivation

What I want to do is replace my current native setup with a Docker solution that I can set up easily and reproducibly in case of a server change or malfunction, or the need to restore a backup.

The native setup consists of

  • postfix – how obvious
  • PostgreSQL
  • PostfixAdmin
  • Dovecot
  • RoundcubeMail
  • Rspamd
  • imapproxy for Roundcube, for keeping connections low
    • (I don’t think I will carry that one over…)
  • OpenDKIM

And of course, the new setup should be capable of the same.

Why?

My problem is of course a bit self-inflicted. I am an Arch Linux user – every-fuckin-where. And with that I sometimes get incompatibilities: since Arch ships almost bleeding-edge packages, applications like Roundcubemail or PostfixAdmin aren’t ready for the newest PHP most of the time.

My motivation is to move the email service with all its components into a Docker deployment, so that upgrades of my base system don’t interfere with the service at all. The goal is to be able to upgrade the machine on whatever schedule I see fit, and to have a different upgrade schedule for the services.

I already took this approach with a couple of services, e.g. Nextcloud (AIO) and quassel-core.

Now someone could of course question my use of Arch Linux on server systems…
Yes, I could move to an Ubuntu server with more “stability”, only to face the big production of upgrading to every new LTS version.
And for a move to another Linux distribution, I would have to migrate the services anyway…

Whatever.

Having the deployment in Docker makes it even easier, should I ever do that.

Get started…

Before continuing, I expect that you have at least basic knowledge of all the involved services (Postfix, Dovecot, PostgreSQL, DNS entries, Rspamd, PostfixAdmin, Roundcubemail, Let’s Encrypt, etc.). I won’t go into details, but may provide links for further reading, if you’re lucky. 🙂

Now let’s get started…


Server crash on 25th February

On the Saturday morning of the 25th of February I was quite surprised that an email my fiancée had sent me was not delivered to my phone. Wondering about that, I searched for the cause and realized after some research that my storage server had crashed.

The crash was a provider issue: the host kernel at netcup had a bug and threw all its virtualized machines into ‘death’. To be precise, the system still reported the VMs as running, but they were completely offline. That subsequently also killed my production system, as its storage was no longer available.

Some hours of restoring things, starting with a phone call to the provider, solved all the problems resulting from the downtime, and everything is running again.

Still I made myself some tasks to work on to make recovery easier and faster.

Everything on latest version again

It looks like all I have been posting for quite a while now are server updates. Well, what else should I say?

Of course there’s a lot to talk about in my private life, but I don’t lay my private life open to the public on a blog, like some other people do…

Err, yeah, so regarding the updates: it’s all updated. Except Wekan – that sucker got blown up and I don’t want to fix it now; I am working on different programming topics for the rest of the day.

Another server update

Quite some time passed between the last server update and this one.

And of course there are some update quirks this time. It’s seldom, but it happens.

  • Collabora, accessed from the Nextcloud instance, doesn’t work.
  • The webmail client is down, because it doesn’t support PHP 8 yet. 🙄

With Collabora I’m lost, no idea what went wrong there…
But webmail will be fixed this evening or tomorrow, once I’ve installed the latest RC.

Pushing Jenkins/Pipeline/Groovy/JVM to the limits [update]

At work we have enhanced conan.io with our own Python build-system wrapper, which handles full dependency chains, forward and backward, and is capable of generating Jenkins pipelines for building in the Jenkins CI system.

Such a build is quite simple: the Jenkinsfile needs just two lines to bootstrap out of a Jenkins pipeline library, and the CI build itself in its standard incarnation has four build phases:

  1. Generation phase: the build is bootstrapped, the pipeline is generated, loaded and executed
  2. Prebuild phase: things to be done before the real build (reporting, or other stuff)
  3. The build itself: parallel execution for all existing targets and variants, plus test execution, static code analysis etc.
  4. Postbuild phase: reporting, metrics, Jira connectors, etc.

Such a generated pipeline can reach up to 1500 lines, depending on the project’s configuration – typically not a big deal.

Now we added the feature to build a complete dependency chain. Yes, you read that right. If you have projects A, B and C and the dependency chain is A -> B -> C (-> means “depends on”), then building C feeds its result into the build of B, whose result is in turn fed into the build of A. Still a rather simple thing, but the pipeline to run it can get quite complex and huge.

Method size

And there we hit our first problem, still a simple one. Executing that threw an exception with the message “Method code too large!” coming from the underlying JVM, which limits a method’s bytecode to 64 KiB.

Weeell, no big deal: after some small changes to the generation every target build had its own method, problem solved.

Class size

Okay, fine. Then we got this one huge new project with something like 30 components. A normal build is fine, works well so far. And then someone tried to run a dependency build. BOOM!

Now we got an exception saying “Class size too large!” – seriously, what the fuck?

Now, it’s fair to say that a pipeline/Groovy script with 52.5k lines of code (1.6 MB) may be a bit oversized, but why the fuck is there a limit on the class size? (We already wondered about the method code limit…)

Okay, the first step was to use generated reusable methods in the pipeline, which reduced that pipeline to 22.7k lines of code – still too large for a class. (Yes, I know that LOC is not the same as the byte size of a class or method, but it is at least some indicator of size.)

What now? Splitting into multiple loadable Groovy scripts, of course. Said and done: every build step is now its own little Groovy file, 438 files to be exact. In the main pipeline script we now generate a map with an entry for each file and load the scripts dynamically into that map.

Now guess what…

General error when generating a class: ArrayIndexOutOfBoundsException

That’s a show stopper for now. We are trying to find out where this comes from, and guess what the internet spits out about that error?

Nothing, or just shit/babble/rubbish.

[Update:]

A solution is of course to make the generation more intelligent, and the generated result as well, which also brings much higher complexity. But after all it is the best solution right now, as long as you generate the whole of a huge Jenkins pipeline.

Let’s Encrypt wildcard certs

I searched for half an hour for a how-to for Let’s Encrypt wildcard certificates with automatic renewal.

All the sites I found just promoted the manual method, where I would have to add DNS entries by hand every three months – neeeeeever!

Then I stumbled upon acme.sh. This ACME client for Let’s Encrypt even has plugins for most providers that offer DNS configuration and expose an API. And there is a plugin for my provider, netcup.de.

Couldn’t be better. Just set the environment variables as mentioned in the plugin’s quite small how-to, and run the command to get a new cert.
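
For netcup that boils down to something like the following (a sketch from memory – treat the variable names and placeholder values as assumptions and verify them against the plugin’s how-to):

# credentials for the netcup DNS API
export NC_CID="<customer-number>"
export NC_Apikey="<api-key>"
export NC_Apipw="<api-password>"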

It might be a good idea to request a bigger key size too, because the default is just 2048 bits.

acme.sh --issue --dns dns_netcup -d example.com -d '*.example.com' -k 4096

And you’re done. 

Now all that’s left is pointing your services at the new certificates.
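
acme.sh can take over that part too with --install-cert, which copies the files to stable paths and reloads the services on every renewal (the renewals themselves run from the cron job acme.sh sets up for itself). A sketch – the target paths and service names are assumptions for my setup:

acme.sh --install-cert -d example.com \
    --key-file /etc/ssl/private/example.com.key \
    --fullchain-file /etc/ssl/certs/example.com.fullchain.pem \
    --reloadcmd "systemctl reload httpd postfix dovecot prosody"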

For me those were:

  • apache 
  • quasselcore
  • postfix
  • dovecot
  • prosody (xmpp/jabber)

Once again Qualys SSL Labs and MxToolbox were a great help in checking that everything works as expected – thanks for that, guys!

Editing office documents directly inside nextcloud

It bothered me for a long time that I couldn’t edit office documents directly online on my own Nextcloud. Then I found the Collabora plugin in the Nextcloud apps and checked the Nextcloud website about it.

It’s easier than you think.

First Step: Get yourself the docker container running

The simplest solution would be a docker-compose.yml file like this one:

version: '2'
services:
  collabora:
    image: collabora/code
    environment:
      - domain=cloud.mmo.to
      - username=<username>
      - password=<password>
    restart: always
    ports:
      - 127.0.0.1:9980:9980
    networks:
      - collabora
networks:
  collabora:
    driver: bridge

collabora/code is the latest development version, so for private use it’s okay. 🙂

As a sidenote: I have no idea what the username and password are for in the docker container, but I’ve set them just to be sure.
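
Bringing the container up and checking that it answers locally can look like this (a sketch; /hosting/discovery is the usual WOPI discovery endpoint, and -k is needed because of the container’s self-signed certificate):

docker-compose up -d
curl -k https://127.0.0.1:9980/hosting/discovery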

Don’t forget to configure your webserver with a subdomain vhost and all the proxy configuration parts that are mentioned in the Nextcloud tutorial.

Second Step: Configure your Let’s Encrypt cert for the subdomain

Well, that’s kinda obvious and god damn simple, so I’ll skip to the next and last step.

Last Step: Configure your collabora app in nextcloud

… with the subdomain of your collabora docker instance behind the webproxy. 

And it magically works. I was surprised too! 

If anything doesn’t work as expected, check back with the Nextcloud site mentioned above, or maybe the website of Collabora itself.

IPv6

Well, it’s about time that I handle that topic too.

So here’s a list of what I had to do to get it working for all the services I have running:

  • Checking the IPv6 subnet I got from my provider
    • Setting one of those IPs on the network device
  • Checking DNS entries
    • Adding AAAA records for “*”, “@” and the server name
    • Adding an IPv6 reverse DNS name
    • For email I had to correct my SPF entry
  • Service configurations I had to change or check
    • Apache: just had to verify that the Listen configuration listens on all interfaces
    • Postfix: here I had to add the IPv6 protocol (see the snippet after this list)
  • Gladly, the Docker-internal network is completely hidden, so I don’t have to care about anything running behind my Apache proxy. The SSH server is also listening on all devices, and I currently don’t care about external SSH access to my GitLab instance – that may stay on IPv4 for a while.
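
The Postfix change plus a quick functional check, as a sketch (example.com is a placeholder for your own domain):

# let Postfix speak IPv4 and IPv6 (writes inet_protocols to main.cf)
postconf -e 'inet_protocols = all'
systemctl reload postfix

# check the AAAA record and reachability over v6
dig AAAA example.com +short
curl -6 -I https://example.com/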

What might help when you’re testing IPv6 is the following test website: https://www.mythic-beasts.com/ipv6/health-check

So far everything is working. What’s still bugging me is that you can’t force your browser to use IPv6 when visiting a site that supports it – you don’t even know which protocol was used…

Zuul, Jenkins, Gerrit and Git submodules

So, we have a Git -> Gerrit -> Zuul (w/ Gearman) -> Jenkins setup at work, and lately we started to use Git submodules with one repository.

Setting up the quality gate with Zuul and Gerrit for a normal Git repository is quite straightforward, and I won’t mention it any further. Our problem was that we wanted to build the parent repository of our submodule repository whenever a change was committed for review or merge.

Zuul doesn’t give you any options here: it just has a single project configuration and doesn’t support project dependencies.

BUT it supports build job dependencies!

So the solution is to build your submodule standalone in the first job, which can be the standard review job based on a Jenkinsfile inside the submodule repository, and then to start a build job for the parent repository which depends on the result of the standalone submodule build. This second job can’t be a standard review build job, because it has to do a few things differently; the standard Jenkinsfile for the review of the parent repository can be used with minor modifications.

For your parent repository, you’ll already be using a checkout method that also retrieves the submodule repository; it may look like this:

def zuul_fetch_repo() {
    // Fetch the change Zuul prepared ($ZUUL_REF on $ZUUL_URL/$ZUUL_PROJECT)
    // and check out all submodules along with it.
    checkout changelog: true, poll: false, scm: [$class: 'GitSCM',
        branches: [[name: 'refs/heads/zuul']],
        doGenerateSubmoduleConfigurations: false, submoduleCfg: [],
        userRemoteConfigs: [[refspec: '+$ZUUL_REF:refs/heads/zuul', url: '$ZUUL_URL/$ZUUL_PROJECT']],
        extensions: [[$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: true,
                      recursiveSubmodules: true, reference: '', trackingSubmodules: true],
                     [$class: 'CleanBeforeCheckout']]]
}

Because you have to use a special job for this task, you also have to change the fetch function away from the generic $ZUUL_URL/$ZUUL_PROJECT to a hardcoded checkout URL.

The Zuul variables are used instead to update the submodule repository to the change Zuul provided; the resulting fetch function could look like this:

def zuul_fetch_repo() {
    // Check out the parent repository from a hardcoded URL, including submodules
    checkout changelog: true, poll: false, scm: [$class: 'GitSCM',
        branches: [[name: 'master']],
        doGenerateSubmoduleConfigurations: false, submoduleCfg: [],
        userRemoteConfigs: [[url: 'ssh://<user>@your-gerrit.url:29418/parent-repo']],
        extensions: [[$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: true,
                      recursiveSubmodules: true, reference: '', trackingSubmodules: true],
                     [$class: 'CleanBeforeCheckout']]]

    // Pull the Zuul-prepared change into the submodule's working copy
    sh '''
    cd path/to/your/submodule/repository
    git pull $ZUUL_URL/$ZUUL_PROJECT +$ZUUL_REF:refs/heads/zuul
    '''
}

And that’s it! You just have to somehow get the change from the project you configured in Zuul into the submodule, and you have a build of the parent project with the change commit from the submodule integrated. Of course you can do this a bit more elegantly, but that’s left as an exercise for the reader.

Finally, here’s a little snippet of the Zuul config part reflecting that.

projects:
  - name: submodule-repo
    review:    # the zuul pipeline
      - review:    # standard review job, submodule standalone
        - review-parent-with-submodule    # parent project with submodule checkout