I am a big fan of self-hosting software, but my setup as it currently stands needs to be reworked entirely. Right now my hosted software is split between two servers. One is a Hetzner server I have been using for a few years at this point, which hosts Nextcloud (set up manually), Bitwarden, and several static sites. The other is an older computer running Arch Linux on the living room floor of my house. It serves a Vaultwarden instance that is only accessible on the home network, a git server that has broken, and the evidence of my attempt to figure out OpenVPN.
The split between two servers is not ideal. I set up the home server hoping to eventually move all of my hosted software onto my own network, both to tighten up security on the networking side and to spare the expense of paying for a server. That's why I have duplicate password managers: I had hoped to replace Bitwarden on Hetzner with Vaultwarden on my server at home. Having faced several challenges, however, I've decided that the home server simply isn't tenable. Configuring a VPN to reach a LAN is technically challenging on its own, but adding the extra checks and processes needed to routinely reconfigure that VPN because of dynamic DNS is asking a bit too much of me. Additionally, issuing HTTPS certificates for sites that are not port-forwarded is difficult and involves trusting closed-source certificate authorities, which is not necessarily bad, but I would personally rather use the open source Let's Encrypt authority (run by the ISRG, with the EFF maintaining the Certbot client) for ethical reasons.
On top of the home server being a mess of its own, I've been working on another project that requires hosting content on the open internet, which is not something I would be comfortable doing from my home address anyway, so I have decided to move back to Hetzner for my hosting needs. The home network hosting wasn't for naught, though: I learned the basics of Docker Compose when setting up Vaultwarden, and I believe that transitioning my current infrastructure to Docker could clean things up considerably.
The plan, then, was to set up Nextcloud, Vaultwarden, and nginx as Docker containers, so that nginx could reverse proxy to Nextcloud and Vaultwarden while serving static content on the same port. My current Nextcloud instance would have its data directory mounted into the Docker container and would then be deleted, and Bitwarden would be replaced with Vaultwarden entirely. I would not be attempting to replace my broken git server, as I had found a solution for my needs using Nextcloud.
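For reference, the sort of Compose file I had in mind looked roughly like this (a sketch of the plan, not something I ever got working; the image names are the official ones, but the paths and mounts are purely illustrative):

services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # nginx config with the reverse proxy rules, plus the static sites
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./static:/usr/share/nginx/html:ro
  nextcloud:
    image: nextcloud:latest
    volumes:
      # the existing data directory from the manual install, mounted in
      - ./nextcloud-data:/var/www/html/data
  vaultwarden:
    image: vaultwarden/server:latest
    volumes:
      - ./vw-data:/data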
As it turned out, attempting to use Docker Compose was a mistake. I learned a lot about Compose from my efforts and am glad I tried that method, but an afternoon of headache showed me I wasn't quite up to par. In the process of troubleshooting, however, I slowly figured out the function of an nginx proxy sample found here (which had eluded me when I initially set up Vaultwarden on my home server). I decided that with my Nextcloud and nginx functional as they were, I could use Docker for Vaultwarden alone, point my existing nginx config at it as a proxy, and be done with it.
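The relevant part of that proxy sample boils down to something like this (the upstream port is from my setup; the server name and certificate handling are placeholders, since certbot fills those in later):

server {
    listen 443 ssl;
    server_name vault.example.com;

    location / {
        # forward requests to the Vaultwarden container on the loopback
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}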
Getting the Docker Vaultwarden instance running was easy enough, but making port 8080 (the HTTP port it uses) accessible to the loopback network exclusively was quite challenging. I wrestled with iptables for quite a while to create a rule that would block 8080 from every address but the loopback, only to discover that Docker overrides that firewall with its own rules, so I had to bind the published port to a specific address (similar to whitelisting) to achieve what I wanted. Once that was done, I had a one-and-done command to create the container:

docker run -d --name vaultwarden-solo -v ./vw-data/:/data/ -e ROCKET_PORT=8080 -e ADMIN_TOKEN=<secret> --network vaultwarden -p 127.0.0.1:8080:8080 vaultwarden/server:latest

Now that it's been created, I can use docker stop vaultwarden-solo and docker start vaultwarden-solo to turn it on and off.
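For the curious, the kind of rule I was fighting with looked roughly like this (the port is from my setup):

iptables -I INPUT -p tcp --dport 8080 ! -i lo -j DROP

The catch is that Docker publishes container ports with its own NAT rules, so traffic to a published port is forwarded to the container and never hits the host's INPUT chain; binding the published port to 127.0.0.1 in the docker run command sidesteps the whole problem.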
Once the Docker container was running and only accessible to the loopback address, I used the nginx proxy config and certbot to make the Vaultwarden instance available over HTTPS, and then set up the instance fully. Midway through that process, I discovered that Vaultwarden couldn't send email over SMTP, which is part of the process of creating users. It took a lot of time to fix this issue, as I had to first investigate the Docker networking, then check the firewall on the server, then the firewall outside the Hetzner server in the admin panel, before I finally discovered that you have to file a special request form with Hetzner to have ports 25 and 465 opened. Fun! Other than that, though, the server went up successfully.
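For anyone following along, the certbot step is just the standard nginx plugin invocation (the domain is a placeholder):

sudo certbot --nginx -d vault.example.com

and the SMTP settings go to the container as extra environment variables on the docker run command, along these lines (variable names as in the current Vaultwarden documentation; all values are placeholders):

-e SMTP_HOST=smtp.example.com -e SMTP_FROM=vault@example.com -e SMTP_PORT=465 -e SMTP_SECURITY=force_tls -e SMTP_USERNAME=vault@example.com -e SMTP_PASSWORD=<secret>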
Having decided not to replace everything with one Docker Compose file, however, I now had to diagnose the issues with my existing Nextcloud instance, which I had been happy to ignore when the plan was to replace it entirely. Installing PHP 7.4 and removing PHP 7.3 was much harder than it should have been due to repository issues, but I got that solved and slowly patched up and updated my Nextcloud instance.
Now, with Nextcloud and Vaultwarden running, I finally needed to connect Logseq to Nextcloud to replace the broken git server. I first tried connecting to the davs:// file system through Thunar, my graphical file manager, and it took an incredible amount of trial and error to discover that the authentication password is a special application password that has to be generated in the Nextcloud settings. I made so many failed attempts to authenticate that I actually got myself banned from my server's HTTPS port by fail2ban. Speaking of which, I am very frustrated that the IP shown in the iptables ban list isn't in the same format as the one you need to pass to fail2ban to unban yourself.
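In case anyone else bans themselves: the unban goes through fail2ban-client rather than iptables, and it wants the plain address, not whatever format shows up in the iptables chain. Something like this (the jail name here is a guess; list yours with sudo fail2ban-client status):

sudo fail2ban-client set nginx-http-auth unbanip 203.0.113.5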
Once I had finally added the Nextcloud WebDAV share as a network drive in Thunar, I discovered that using Logseq in that directory caused long freezes as it tried to access the files, so I decided to try manually mounting the WebDAV drive to my filesystem from the terminal.
The instructions for doing this on the WebDAV page of the Nextcloud documentation aren't exactly compatible with Arch Linux, so here are the commands I used:
- yay -S davfs2
- Edit /etc/davfs2/davfs2.conf and uncomment the dav_group line. (This step may not be necessary, but I did it because of the note in section 2.2 here: https://wiki.archlinux.org/title/Davfs2 )
- sudo groupadd davfs2 (This step might not be needed either. I did it because running usermod -aG davfs2 <username> returned "usermod: group 'davfs2' does not exist". It might be that the group had been created but not loaded yet? Perhaps trying systemctl daemon-reload or logging out and in again would make this step unnecessary. I wouldn't know.)
- sudo usermod -aG davfs2 <username>
- mkdir ~/nextcloud
- mkdir ~/.davfs2
- sudo cp /etc/davfs2/secrets ~/.davfs2/secrets
- sudo chown <username>:<username> ~/.davfs2/secrets
- chmod 600 ~/.davfs2/secrets
- Add your Nextcloud login credentials to the end of the secrets file, using your Nextcloud server URL and your Nextcloud username and password:
https://example.com/nextcloud/remote.php/dav/files/USERNAME/ <username> <password>
or, alternatively, using the path to the mount point:
$PathToMountPoint $USERNAME $PASSWORD
for example:
/home/user/nextcloud john 1234
- Add the mount information to /etc/fstab:
https://example.com/nextcloud/remote.php/dav/files/USERNAME/ /home/<linux_username>/nextcloud davfs user,rw,auto 0 0
- (I had to log out and log back in to apply the group changes here)
- Then test that it mounts and authenticates by running the following command. If you set it up correctly you won’t need root permissions:
mount ~/nextcloud
All of that done, I had a WebDAV network drive mounted on my filesystem (yippie!), but Logseq still wasn’t working right. That’s when I discovered that the version of the Logseq graph on Nextcloud was incomplete and had presumably been cut off as it uploaded. I copied an intact version of the graph into the directory, and after some loading time, Logseq was up and running! This made me wonder if the issue with Thunar had been the incomplete graph copy rather than the way the network drive was mounted, so I gave Thunar a second shot on my laptop. Surprisingly, it worked, but unlike the terminal method, it doesn’t mount automatically, so I ended up setting up the terminal method there anyway.
The result isn’t perfect. Both instances of Logseq take a while to load, WebDAV doesn’t really allow for synchronous file editing, and I have to manually open the directory in Thunar to initiate the sync after a restart, but it works and I’m happy with it.
Also, if you noticed that my page has been unthemed recently: I've been working on a project that involves serving my CSS with PHP, and the process of updating my PHP broke that for a bit.
Addendum: The WebDAV drive started crashing and acting really strange, and I nailed it down to reading from lost+found/, as described here: https://savannah.nongnu.org/bugs/?func=detailitem&item_id=63771 . Downgrading davfs2 from 1.7.0 to 1.6.1 fixed the issue. WebDAV is still finicky, but not so finicky that it prevents access to my home directory with a graphical file manager.
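On Arch, the downgrade itself is just reinstalling the old package from the pacman cache, roughly like this (assuming the old version is still cached; the exact filename will vary):

sudo pacman -U /var/cache/pacman/pkg/davfs2-1.6.1-1-x86_64.pkg.tar.zst

Adding davfs2 to the IgnorePkg line in /etc/pacman.conf then keeps it from being upgraded again on the next sync.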
Comments
Rainy
I totally get the cancer that is self hosting. I got forgejo writefreely matrix 4get and jellyfin up on my homelab. jellyfin is super straight forward ofc cause its just local but all the other ones are under my domain and thats where the fun of nginx reverse proxies and figuring out how to run the thing actually begins. Matrix and Writefreely are systemd services, forgejo and 4get are docker, all behind an nginx reverse proxy setup. Its really fun and a great learning experience but jesus formulating all of it is such a clusterfuck.
All hail nginx.
by Fawkes
Pebby
understood absolutely Nothing, double kudos
2 of them !
nerd
by Fawkes
☝️
by Pebby
lem.iso
nerd
ㅇㅅㅇ
Nerd!