Volume missing after reboot #6
I created a volume on both Docker hosts that connects to a gluster cluster hosted on two other servers, using the following command:

    docker volume create --driver sapk/plugin-gluster --opt voluri="gluster.mydomain.com:volume-app1" --name volume-app1

I had a Docker host lock up yesterday and had to perform a hard reboot on the VM. When it came back up, the volume was mounted with the local driver instead of sapk/plugin-gluster:latest, as the other host shows with docker volume ls, so I removed the volume from the troubled host and recreated it. This connected to gluster again, shows the correct data, and all containers that rely on the volume work correctly. How do I make the docker volume persist across reboots?

Comments
The persistence is maintained via the file /etc/docker-volumes/gluster/gluster-persistence.json. In the case of the managed docker plugin, this file is inside the plugin container, so if the plugin is destroyed the persistence is lost. I should mount this file on the host.
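A sketch of how such a host mount could be declared in the plugin's config.json (field names follow Docker's managed-plugin config format; whether this plugin declares it exactly this way is an assumption):

    "mounts": [
      {
        "description": "persist volume state on the host (assumed path)",
        "source": "/etc/docker-volumes/gluster",
        "destination": "/etc/docker-volumes/gluster",
        "type": "bind",
        "options": ["rbind"]
      }
    ]
|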
Should be fixed in https://github.com/sapk/docker-volume-gluster/releases/tag/v1.0.5 |
I guess this needs to be built with … |
Simply upgrade the plugin: https://hub.docker.com/r/sapk/plugin-gluster/ (see https://docs.docker.com/engine/reference/commandline/plugin_upgrade/#examples)
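For reference, the upgrade sequence would be along these lines (plugin name as used throughout this thread; a managed plugin must be disabled before it can be upgraded):

    docker plugin disable sapk/plugin-gluster:latest
    docker plugin upgrade sapk/plugin-gluster:latest
    docker plugin enable sapk/plugin-gluster:latest
|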
That is fine. The data exists elsewhere, so recreating the volume only adds back the ability for docker containers to mount that volume from the gluster server. However, after following the upgrade process, it failed.
|
I performed a forced remove of the plugin with
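The exact command was lost in extraction; force-removing a managed plugin is typically done like this:

    docker plugin rm -f sapk/plugin-gluster:latest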
|
Can you try to create the folder /etc/docker-volumes/gluster on the host?
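That is:

    sudo mkdir -p /etc/docker-volumes/gluster
|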
Oh sorry, I already did that after the first fail. I just tried again after changing the owner of the folder from root to a normal user with the …
|
Ok, I updated the dockerfile to also create the folder in the container. I was too optimistic and made the change without testing on my computer. I will try to test this further soon. |
The volume works again, but it is still missing after a reboot and I have to run …
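The re-run command was elided in extraction; presumably it is the volume create from the original report:

    docker volume create --driver sapk/plugin-gluster --opt voluri="gluster.mydomain.com:volume-app1" --name volume-app1
|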
I have restored the old version without the local mount of the config file. I will test it soon to list all the needed changes, and maybe build one plugin with a mountpoint (writing the persistence file on the host) and one with the file inside the container (like now), to let the user choose. |
After reviewing the code, I found that the reading and saving parts of the persistence weren't using the same file.
|
The problem still occurs. You need to remove and re-add the plugin.

    "Id": "5fbb180f3e90158bbc74f459383131890d2a02d0c90e4944e60445fde3712675",
    "Name": "sapk/plugin-gluster:latest",
    "PluginReference": "docker.io/sapk/plugin-gluster:latest",
|
I have not been able to get it working.
|
My issue was with the host. The plugin created a volume that couldn't be removed. After much purging and removing of volume-related things I was able to reinstall the plugin. I'll have to test the reboot persistence when I have time.
|
Still broken. After a host reboot, the volume is completely borked. It shows in …
The plugin and volume are gone at this point. I can then install the plugin, create the volume, and pull/run the container again, and everything is functional. I then wait 5 minutes and reboot the host. The host comes back up, the volume is empty, and the container is continually restarting. I remove the container and image and cannot remove the volume.
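The listing itself was elided in extraction; per the original report, after a reboot the volume reappears under the local driver, so the output would look something like this (illustrative only, volume name from the issue body):

    $ docker volume ls
    DRIVER              VOLUME NAME
    local               volume-app1
|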
It seems that the plugin doesn't take into account that the attached container was removed; that is why it blocks the removal of the volume. For information, a volume plugin doesn't know about running containers and can only keep a count of mount requests. I will look into whether it is now possible to pass an arg to the plugin to force removal. As for the original problem, I think that docker (re-)starts the container before the plugin is ready. Do you start the container from systemd/init.d, or let docker restart the containers that were running before the reboot?
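If the container is driven from a systemd unit instead of docker's own restart policy, the unit can wait for docker (and the plugin) before starting the container. A minimal sketch, assuming the plugin is installed as sapk/plugin-gluster:latest and the container is named app1 (both placeholders):

    [Unit]
    Description=app1 container (example)
    Requires=docker.service
    After=docker.service

    [Service]
    # the leading "-" makes systemd ignore the error if the plugin is already enabled
    ExecStartPre=-/usr/bin/docker plugin enable sapk/plugin-gluster:latest
    ExecStart=/usr/bin/docker start -a app1
    ExecStop=/usr/bin/docker stop app1

    [Install]
    WantedBy=multi-user.target
|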
I use the --restart unless-stopped argument with my containers. Docker is doing all the work. The volume cannot be removed when the container is stopped or removed. The volume is stuck until I forcefully remove the plugin.
|
Seems related to #12 |
#12 is on a host that is remote to the gluster/docker servers in #6. #12 is a RancherOS host with no local gluster node.
|
@cron410 I have reworked the plugin to have a common base for the multiple docker volume plugins I have developed. I hope it fixes your issue. |
I'll have to make some time to test it out. This is exciting news.
|
Hi, I think something else seems off. I am putting it here in case what I write can serve as a test case for this issue. I am not getting any errors from the plugin (even without the subdir). I see it in docker volume ls on the swarm node. However, when I actually try to mount the volume using … Perhaps we should have a unit test that includes doing a … |
@trajano there is a test doing that: https://github.com/sapk/docker-volume-gluster/blob/master/gluster/integration/integration_test.go. It creates a cluster of gluster nodes (containers) and tries to mount them via both the managed and the legacy plugin. It creates a folder, writes to it, and compares the data inside. |
Thanks, let me debug some more. There are no errors in the logs that I can tell, but there was some switch I can enable.
|
I was looking through the test code, but I can't seem to find any mention of doing a mount outside the context of docker with the mount command and confirming that the files are there. Mind you, I am on a mobile device, so I may have missed it.
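Such a check, outside docker, would look something like this (hostname and volume name taken from the issue body; assumes the glusterfs client package is installed on the host):

    sudo mkdir -p /mnt/gluster-check
    sudo mount -t glusterfs gluster.mydomain.com:/volume-app1 /mnt/gluster-check
    ls /mnt/gluster-check    # the expected files should be present here
    sudo umount /mnt/gluster-check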
|
I saw your recent commits added the explicit mount. Did it work for you? |
Tried a few more things. The examples with … |
@trajano some things don't work, at least on travis, but fuse is kind of tricky in this env (it failed to mount from the cli as well, for example). I need to test it more locally. |
Okay I thought it worked since the travis build on the branch passed. I'm doing a few more tests on my side too. |
Tried to do a reboot again, this time with the legacy plugin (rather than managed). It seems to have the same behavior: if the remote store is not ready (which may take a minute or so), it creates a local volume rather than going into a retry loop. I can verify that it is still using the …
|
I took a crack at making my own plugin; so far so good, and it seems to survive reboots, but only on docker stack deployed nodes. https://hub.docker.com/r/trajano/glusterfs-volume-plugin/ It does not sustain itself when the volume is created using … |
I think I solved it in mine by storing the volume map data in a bolt db that is inside the rootfs, and I also used "global" scope. |
You guys are making major progress. Thanks for all the help; I've not had much time lately to test, with all the projects going on at work and big events at home.
|