Volumes not persisted between restarts #5
Comments
It should already be persistent; otherwise it's a bug. Let me try to recreate it.
You don't have the rpcbind service started up; I am guessing you don't need that for NFSv4?
A temporary workaround I can think of is to define the volume externally, like the following sample:
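Something along these lines (a sketch only: the volume name, NFS server, export path, and the device option name are placeholders based on the plugin's usual usage, not copied from this thread):

```yaml
# compose.yml fragment: the volume is created once outside of compose, e.g.
#   docker volume create -d trajano/nfs-volume-plugin \
#     --opt device=nfs.example.com:/export/logs webapp-logs
# so compose never tries to create (or remove) it itself.
volumes:
  webapp-logs:
    external: true
```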
Hmm... I'm looking further into it. There may be something off when using compose; it works when doing a stack deploy.
One question I have, though: are you using compose because you're running on Windows/Mac to connect to a shared mount? I had issues with shared drives on Windows before due to a race condition between the volume and the container: docker/for-win#584
#3 is implemented now; I'm pushing the image up right this moment, and https://hub.docker.com/r/trajano/nfs-volume-plugin/ should have it shortly. You may want to try that, since it's a simpler interface and should load faster.
You were correct: rpcbind was not running. It took a while to figure out what was going on; there was some error in the startup process, perhaps because nothing "wanted" rpcbind on startup, so I created a workaround by having the nfs-common.target want it. I also ran into some other issues with my scripts, which are not all fixed yet, but after testing with rpcbind starting, the error is still occurring: on shutdown of docker the volumes are removed. I am testing this in an AWS VM running CentOS 7.4.
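For reference, one way to express that dependency (a sketch; the unit names follow the comment above and may differ by distro):

```sh
# Make rpcbind a wanted dependency of nfs-common.target so it is pulled in
# at boot, then enable and start it immediately.
systemctl add-wants nfs-common.target rpcbind.service
systemctl enable --now rpcbind.service
```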
Will try the new build too. |
The issue still occurs in the latest version. It also occurs if the volume is created manually. I would think that if docker is shut down it should be removing the volumes, but then the plugin would need to persist information about which volumes to restore on startup?
Yeah, I would think so too, but I don't see anything in the API spec that says that, or where it is preserved. However, in my current setup for cifs and gluster (there's nothing special with regards to storage) I don't have anything that would retain it, yet it restores when the swarm comes back up. I wonder if managed plugins only restore volumes when they are deployed as part of a stack/service.
I wonder if you'd have better luck with https://github.com/ContainX/docker-volume-netshare, though that would require you to install the NFS binaries on the host and run it as a service.
Apparently I'm not the only one with the issue: sapk/docker-volume-gluster#6
I'll make one minor change and push it up.
Hmm... so far so good...
By the way, I tried netshare but was having similar startup issues, due to it being a full-fledged service and, I believe, not reliably starting before docker did. I ran across your plugin in one of their issues and thought the plugin approach was better and easier to maintain; if there were similar issues, the timing would be easier to resolve since it would be managed by docker.
I'm starting to think that what you're looking for is not allowed by Docker. The timing issue was one of the reasons why I set up the managed plugin approach too, but primarily for the CIFS side.
The global-cap seems like it might be the route to take. |
It does not appear to work when I do a full restart. I think I need a way of persisting data in the plugin, and somehow restoring it.
What I am thinking is storing the volume info map in a BoltDB file and keeping that as part of the plugin, roughly as sketched below.
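A rough sketch of that idea using bbolt; the package name, bucket name, struct fields, and paths are illustrative, not the plugin's actual code:

```go
package driver

import (
	"encoding/json"

	bolt "go.etcd.io/bbolt"
)

// volumeInfo stands in for whatever the plugin keeps per volume
// (creation options, mount point, etc.); the fields are illustrative.
type volumeInfo struct {
	Options map[string]string `json:"options"`
}

// saveVolume persists one entry of the volume info map so it can be
// restored after the plugin (or the host) restarts.
func saveVolume(db *bolt.DB, name string, info volumeInfo) error {
	return db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("volumes"))
		if err != nil {
			return err
		}
		data, err := json.Marshal(info)
		if err != nil {
			return err
		}
		return b.Put([]byte(name), data)
	})
}

// loadVolumes rebuilds the in-memory map on plugin startup.
func loadVolumes(db *bolt.DB) (map[string]volumeInfo, error) {
	out := map[string]volumeInfo{}
	err := db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("volumes"))
		if b == nil {
			return nil // nothing persisted yet
		}
		return b.ForEach(func(k, v []byte) error {
			var info volumeInfo
			if err := json.Unmarshal(v, &info); err != nil {
				return err
			}
			out[string(k)] = info
			return nil
		})
	})
	return out, err
}
```

The database would be opened once at plugin start, e.g. `bolt.Open("/state/volumes.db", 0600, nil)`, and the file has to live on a path that survives plugin restarts, which ties into the next comment.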
Sounds like a plan. I'm not 100% sure how plugins work, so just out of curiosity: what about storing the data in the Mounts portion of the plugin config.json? Would that be possible? It would seem that might be what it is designed for.
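For context, a mounts entry in a managed plugin's config.json binds a host path into the plugin's rootfs and looks roughly like this (the paths are made up; whether this alone carries state across restarts is exactly the open question here):

```json
{
  "mounts": [
    {
      "source": "/var/lib/docker-nfs-plugin-state",
      "destination": "/state",
      "type": "bind",
      "options": ["rbind"]
    }
  ]
}
```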
@jonl-percsolutions-com that seems to have worked. Only nfs-volume-plugin:latest has this right now. My test is to create the volume, restart all the nodes, and see if I can still mount it. Let me know if it works for you and I will apply the changes to the others.
Closing, as the changes were applied a while back.
Finally about to do another deployment after a month of dev. I updated this yesterday and it appears to be working fantastically.
I am using this to deploy some applications via docker compose. However, containers marked as restart: always fail to start because the volumes error out as non-existent.
I installed with the following commands:
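Presumably something like the standard managed-plugin install (the exact alias and flags here are assumed, not copied from the original report):

```sh
docker plugin install trajano/nfs-volume-plugin --grant-all-permissions
docker plugin ls    # confirm the plugin shows as enabled
```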
My volume declaration looks like this:
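Roughly along these lines (a sketch: the server, export path, and option names are placeholders following the plugin's usual usage):

```yaml
volumes:
  webapp-logs:
    driver: trajano/nfs-volume-plugin
    driver_opts:
      device: "nfs.example.com:/export/webapp-logs"
      nfsopts: "hard,proto=tcp,nfsvers=4"
```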
My docker compose command is like so:
sudo /usr/local/bin/docker-compose -f compose.yml up -d --build --force-recreate --remove-orphans
The resultant mount from docker inspect looks like so:
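The relevant Mounts entry would look something like this (the Source and Destination paths are illustrative; the volume name matches the error below):

```json
{
  "Type": "volume",
  "Name": "jistdeploy_webapp-logs",
  "Source": "/var/lib/docker/plugins/<plugin-id>/rootfs/<mount-path>",
  "Destination": "/var/log/webapp",
  "Driver": "trajano/nfs-volume-plugin:latest",
  "Mode": "rw",
  "RW": true
}
```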
On restart of docker or the host I get the following error:
dockerd[1762]: time="2018-04-24T17:51:43.106196883Z" level=error msg="Failed to start container 3de3cad008484136b1c690c26fd46d17a7db80c01fc1187f4e9c2a9fac80b09d: get jistdeploy_webapp-logs: VolumeDriver.Get: volume jistdeploy_webapp-logs does not exist"
When stopping docker, the "Source" location is removed.
Is there a way to make this persistent or have the plugin reconnect to the share on startup?