EDIT
I found the issue, it was me!! LOL I thought it had to be a setting I had forgotten, and it was: I forgot to enable NFS & Nesting under Features in the Options of the container. See this image - https://imgur.com/bSiozKS
Thank you to everyone that took the time to reply and offer their suggestions.
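For anyone who prefers the CLI, those same Features checkboxes can be set from the Proxmox host shell. A hedged sketch (the CT ID 101 is just an example; substitute your own, and note the CT still needs to be privileged for an in-container NFS mount):

```shell
# On the Proxmox host - equivalent of ticking NFS and Nesting in the GUI
pct set 101 --features mount=nfs,nesting=1

# This writes a line like the following into /etc/pve/lxc/101.conf:
#   features: mount=nfs,nesting=1

# Restart the container for the change to take effect
pct stop 101 && pct start 101
```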
Hi All,
Let me start with some basic background on my setup: I have a server running Proxmox with some Ubuntu containers, and a separate server running TrueNAS with a share that has both NFS & SMB set up. I can see this share in Windows 11 and read and write to it.
One of the Ubuntu containers is able to see this share via NFS and read and write to it too. I am testing Sonarr, Prowlarr & Qbittorrent Docker containers and have the basics set up: Sonarr can find episodes of a TV show via Prowlarr, hand them off to Qbittorrent to download, and then move them from the download folder to the TV folder. Both the download folder and the TV folder are on the TrueNAS server.
I then set up a Docker container for AudioBookShelf in the same Ubuntu CT and that can also read and write to the NFS share.
My issue is that I tried to set up another Ubuntu CT on the Proxmox server but cannot seem to access the NFS share on the TrueNas server.
This is what I did (which I think was the same process as for the working CT):
- 1/ Create a privileged container
- 2/ Update and upgrade the CT
- 3/ Install nfs-common
- 4/ Create a directory under /mnt in the CT for the NFS share
- 5/ Add this line to /etc/fstab:
192.168.0.188:/mnt/store/test-share /mnt/test-share nfs defaults 0 0
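Run inside the CT, the steps above amount to something like this (a sketch only, using the IPs and paths from this thread):

```shell
# Inside the new privileged CT
apt update && apt upgrade -y    # step 2: update and upgrade
apt install -y nfs-common       # step 3: NFS client tools
mkdir -p /mnt/test-share        # step 4: mount point under /mnt

# Step 5: append the mount to /etc/fstab
echo '192.168.0.188:/mnt/store/test-share /mnt/test-share nfs defaults 0 0' >> /etc/fstab

# Mount everything in fstab and check the result
mount -a
df -h /mnt/test-share
```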
However, when I run mount -a I get this error message:
mount.nfs: access denied by server while mounting 192.168.0.188:/mnt/store/test-share
Running df -h obviously does not show the mount, but it does show in the working CT.
As a further test, I cloned the working CT, deleted all the Docker containers and I can still see the NFS just fine.
I have probably missed a step while setting up the new CT but I’m not sure what.
Can anyone offer some help?
These comments all make this seem super complicated. I have a ZFS array on Proxmox that exports itself over NFS, and also an unraid server that has a share exported over NFS. I mount both into Docker containers as needed. I'm on my phone so I'll just copy and paste my install notes; hope that helps:
Proxmox host setup for NFS:
-> proxmox install NFS server: apt install nfs-kernel-server
-> create a filesystem for dockerData (makes snapshots easier and limits permissions)
-> zfs create zfspool1/dockerData
-> zfs set sharenfs='rw=@192.168.37.0/24,no_root_squash' zfspool1/dockerData
-> zfs get sharenfs (to make sure only specific file systems are shared)
unraid:
-> export the NFS share with 'private' security and set the rule to: 192.168.37.0/24(rw)
Docker host setup (in my case I run an Alpine server VM on Proxmox):
apk add nfs-utils -> rc-update add nfsmount -> rc-service nfsmount start
In docker compose you will need a volumes section at the top:

volumes:
  sonarr-config:
    name: sonarr-config
    driver_opts:
      type: nfs
      o: addr=192.168.37.25,nolock,soft,rw
      device: :/zfspool1/dockerData/arr-stack/

Followed by something like this in the compose:

arr-stack-sonarr:
  image: ghcr.io/linuxserver/sonarr
  container_name: arr-stack-sonarr
  volumes:
    - sonarr-config:/config
    - media-tv:/tv
    - media-downloads:/downloads:z
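As a sanity check on a setup like this (assuming the volume name above), you can validate the compose file and then confirm Docker actually created the NFS-backed volume with its options:

```shell
# Validate and print the resolved compose file before starting anything
docker compose config

# After `docker compose up -d`, inspect the NFS-backed volume
# (shows the driver_opts: type, addr, and device)
docker volume inspect sonarr-config
```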
Personally had a ton of issues with nfs and CIFS mounts in proxmox containers. I ended up mounting on the host and passing the directory through to the container.
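A sketch of that passthrough approach, using the paths from this thread (CT ID and host mount point are examples): mount the share on the Proxmox host, then bind-mount the directory into the container.

```shell
# On the Proxmox host: mount the NFS share on the host itself
mkdir -p /mnt/pve-nfs/test-share
mount -t nfs 192.168.0.188:/mnt/store/test-share /mnt/pve-nfs/test-share

# Pass the directory through to the container as a mount point
pct set 101 -mp0 /mnt/pve-nfs/test-share,mp=/mnt/test-share
```

This avoids needing a privileged CT or the NFS feature flag, since the mount happens on the host.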
Check the /etc/exports on the host. Is the .116 device listed there?
Add “-vvv” to your mount command and see what else it tells you.
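Alongside the verbose mount, it can also help to ask the server what it is exporting and to whom. From the client (server IP taken from this thread):

```shell
# List the exports the NFS server advertises and the allowed networks/hosts
showmount -e 192.168.0.188

# On a Linux-based NFS server (e.g. TrueNAS SCALE), the live export table:
exportfs -v
```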
Hi, thanks for your reply.
When I run
mount -a -vvv
I get the following:

mount.nfs: timeout set for Tue Aug 8 16:14:10 2023
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.0.188,clientaddr=192.168.0.116'
mount.nfs: mount(2): Permission denied
mount.nfs: trying text-based options 'vers=4,minorversion=1,addr=192.168.0.188,clientaddr=192.168.0.116'
mount.nfs: mount(2): Permission denied
mount.nfs: trying text-based options 'vers=4,addr=192.168.0.188,clientaddr=192.168.0.116'
mount.nfs: mount(2): Permission denied
mount.nfs: trying text-based options 'addr=192.168.0.188'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.0.188 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.0.188 prog 100005 vers 3 prot UDP port 661
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.0.188:/mnt/store/test-share
The two docker containers can access the share, but the new proxmox container can’t?
The new proxmox container will have a different IP. My guess would be that the IP of the docker host is permitted to access the nfs share but the ip of the new proxmox container is not.
To test, you can allow access from your entire LAN subnet (192.168.0.0/24 in your case)
Edit: For reference see: https://www.truenas.com/docs/scale/scaletutorials/shares/addingnfsshares/#adding-nfs-share-network-and-hosts
In particular: "If you want to enter allowed systems, click Add to the right of Add hosts. Enter a host name or IP address to allow that system access to the NFS share. Click Add for each allowed system you want to define. Defining authorized systems restricts access to all other systems. Press the X to delete the field and allow all systems access to the share."
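For context, a host/network restriction like the one described in those docs ends up as an ordinary NFS exports entry on the server. An illustrative sketch of what restricting this share to one subnet would look like in /etc/exports form (TrueNAS generates its own exports file from the UI settings; this is not something you would edit by hand there):

```shell
# Illustrative only - restrict the share to the local /24 subnet
/mnt/store/test-share  192.168.0.0/24(rw)
```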
Hi, thanks for your reply.
Let's call the original Proxmox container CT1; this has the *arr Docker containers that can access and interact with the NFS share on TrueNAS.
Let's call the new Proxmox container CT2; this is the one giving me the "access denied" error.
Let's call the cloned Proxmox container CT1Clone; this one can access the NFS share.
I think the NFS share is not restricted to any IP address; this is a screenshot of the NFS permissions: https://i.imgur.com/9k5jnw4.png I can also access it from my Windows machine, which also has a different IP address.
CT1 & CT1Clone work fine, CT2 doesn’t work.
You can ignore the Windows machine unless it's using NFS; it's not relevant.
Your screenshot suggests my guess was incorrect because you do not have any authorised Networks or Hosts defined.
Even so, if it were me I would explicitly configure authorised hosts or authorised networks just to rule it out, since an IP restriction would neatly explain why it works on one container but not another. Does the clone have the same IP by any chance?
The only other thing I can think for you to try is to set maproot user/group to root/wheel and see if that helps but it’s just a shot in the dark.
Hi,
The clone has a different IP address