Hi all,

Since I’m running a lot of Docker containers in my “self-hosted cloud”, I’m a little worried about ending up with a malicious Docker container at some point. And I’m not a dev, so I have very limited ability to inspect the source code myself.

Not every Docker container is a “nextcloud” image with hundreds of active contributors and many eyes on the source code. Many self-hosted projects are quite small, GitHub accounts can be hacked, and so on.

What I’m doing at the moment:

Project selection:
- only select Docker projects with high community activity on GitHub and a good track record

Docker networks:
- run each container in its own isolated network without internet access
- if a container needs internet access for certain APIs (e.g. geolocation data), I use an NGINX proxy that forwards only that one domain (a self-made outgoing application firewall); see the compose sketch below
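
Roughly, that pattern looks like this in Compose (the service names, image, and proxy port are placeholders; `backend` is marked `internal`, so Docker gives it no route to the outside):

```yaml
services:
  app:
    image: some/app:latest            # placeholder image
    networks:
      - backend                       # isolated: no direct internet access
    environment:
      API_URL: "http://egress-proxy:8080"   # the app calls the proxy, not the API directly

  egress-proxy:
    image: nginx:alpine
    volumes:
      - ./egress.conf:/etc/nginx/conf.d/default.conf:ro
    networks:
      - backend                       # reachable by the app
      - outbound                      # and has a way out

networks:
  backend:
    internal: true    # Docker blocks external traffic on this network
  outbound: {}        # ordinary bridge network with internet access
```

`egress.conf` then holds a single `server` block whose `proxy_pass` points at the one allowed API domain, so everything else stays unreachable.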

Multiple LXC containers:
- I split my Docker containers across multiple LXC instances via Proxmox; sensitive containers like Bitwarden run on their own LXC instance (example command below)
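
For reference, creating such a dedicated container from the Proxmox shell looks roughly like this (the VMID, template file name, and resources are examples; `--unprivileged 1` adds another isolation layer):

```sh
# unprivileged LXC container just for the password manager
pct create 120 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname bitwarden-host \
  --unprivileged 1 \
  --memory 1024 --cores 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 120
```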

Watchtower:
- no automatic updates, but manual updates once per month with testing afterwards (routine sketched below)
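
With Compose, one manual update round per stack can be as simple as this (the path is a placeholder):

```sh
cd /opt/stacks/nextcloud    # example stack directory
docker compose pull         # fetch the new images
docker compose up -d        # recreate containers on the new images
docker image prune -f       # remove the now-unused old images
```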

Any other tips? Or am I worrying too much? ;)

  • jesuisoz@alien.top · 1 year ago

    You’re not worrying too much. Project selection and general awareness are definitely the crucial points.

    Isolated networks and separation of concerns are also important.

    To avoid data leaks, take time to review your firewall rules. Do not “allow to any” from the LAN interface; allow only the ports you actually need. It takes time, and everyone at home will scream when they try a new app, but it’s worth the price.
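
    On a plain Linux gateway, that default-deny idea could look like this nftables sketch (the interface name and allowed ports are assumptions; firewall distros like OPNsense/pfSense express the same thing in their GUIs):

    ```
    table inet lan_egress {
        chain forward {
            type filter hook forward priority 0; policy drop;
            ct state established,related accept          # replies to allowed traffic
            iifname "lan0" udp dport 53 accept           # DNS
            iifname "lan0" tcp dport { 80, 443 } accept  # HTTP/HTTPS
            iifname "lan0" log prefix "lan-drop " drop   # redundant with the policy, but logs what gets blocked
        }
    }
    ```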

    You can also add an IDS/IPS on the LAN side to prevent malicious apps from establishing outside connections. Have a look at ZenArmor or CrowdSec.
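
    A minimal CrowdSec sketch in Compose (the collection name and log path are examples; detection alone doesn’t block anything, so you’d still pair it with a bouncer on your firewall or reverse proxy):

    ```yaml
    services:
      crowdsec:
        image: crowdsecurity/crowdsec:latest
        environment:
          COLLECTIONS: "crowdsecurity/nginx"            # parsers/scenarios for nginx logs
        volumes:
          - ./acquis.yaml:/etc/crowdsec/acquis.yaml:ro  # tells CrowdSec which logs to read
          - ./nginx-logs:/var/log/nginx:ro              # assumed path to your proxy's logs
          - crowdsec-data:/var/lib/crowdsec/data

    volumes:
      crowdsec-data:
    ```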

    You can also have a look at Proxmox’s built-in firewall to isolate VMs and limit their accessibility scope.
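
    Per guest, that lives in a small rule file (VMID 100 and the addresses here are examples; the same rules can be set in the web GUI):

    ```
    # /etc/pve/firewall/100.fw
    [OPTIONS]
    enable: 1
    policy_in: DROP          # drop all inbound by default

    [RULES]
    IN ACCEPT -p tcp -dport 443                       # the one service this guest exposes
    IN ACCEPT -p tcp -dport 22 -source 192.168.1.10   # SSH only from the admin machine
    ```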