Is Kubernetes support going away? #175
Hi @blurpy, I don't see many users here. At present, the only project I know of with a GlusterFS CSI driver is http://github.com/kadalu/kadalu. Check if that fits your requirements. Alternatively, there is rook.io, which would give you multiple storage options.
GlusterD2, which this project depends on, has had zero commits since March (same as this project) and is no longer actively maintained. Would you agree that committing to GD2 and this driver for production workloads would be too risky and not recommended?
Considering there are currently no 'products' built on this project, and the maintainers themselves haven't released a v1.0 version, I would say it is surely not a good idea to depend on GD2 for a production environment. At the same time, if you (or others depending on it) find that things work well with some patches to the project, I'm happy to accept those patches and see how the project can be kept alive.
Excuse my newbness to this project, but why does this CSI driver need 'products'? The only other components needed to deliver Kubernetes storage from Gluster are PVC and PV resources, unless I'm missing something.
By "products", he's referring to companies shipping a supported storage product based on this code. |
It's sad to see GlusterFS become a deprecated storage solution. We've been using GlusterFS in our production environment for the past 5 years (starting back with OKD 3.xx). The only reason we'd be exploring other options (e.g., Ceph, Longhorn, etc.) is that the community decided to deprecate and remove the drivers from modern Kubernetes platforms (i.e., RKE2). Pour out a 40 oz for GlusterFS.
We are using GlusterFS for all our storage in Kubernetes today. The Kubernetes developers are working on replacing the in-tree drivers with CSI drivers, but I can't see any progress here since the pre-1.0 version was released at the start of the year.
So, do we need to plan a migration away from GlusterFS? At some point in the future, when the in-tree driver is removed, we will be stuck if the CSI driver is not ready.
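For context, the migration in question is from the in-tree `kubernetes.io/glusterfs` provisioner to a StorageClass backed by a CSI driver. A minimal sketch of the two StorageClass shapes is below; the Heketi endpoint is hypothetical, and the CSI driver name should be verified against the deploy manifests of whichever driver you actually install:

```yaml
# In-tree GlusterFS StorageClass (provisioner compiled into Kubernetes).
# "resturl" points at a Heketi REST endpoint; the value here is illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-intree
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"  # hypothetical endpoint
---
# CSI-based equivalent: the provisioner field names the CSI driver instead.
# "org.gluster.glusterfs" is an assumption; check the driver's own manifests.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-csi
provisioner: org.gluster.glusterfs
```

Existing PersistentVolumes created by the in-tree provisioner are not automatically converted; a migration plan generally means re-provisioning PVCs against the CSI StorageClass (or relying on the upstream CSI migration shim, where one exists for the driver).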