
✨ Use LoadBalancer IPv6 address - take 2 #1334

Draft · wants to merge 5 commits into base: main
Conversation

@rbjorklin commented Jun 8, 2024

What this PR does / why we need it:
I'm trying to take over #1227 to push it to completion. Please see the original PR for a full description. If @JochemTSR returns I would be happy to move my commits back to the original PR.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #1218

Special notes for your reviewer:
Even after making the changes to the .envrc file suggested in the "Running local e2e test" section, I am unable to run the existing e2e tests. Before I put more time into troubleshooting on my side, I would appreciate it if someone at Syself could review the instructions to ensure they are still up to date.

EDIT: Is a robot user required for e2e tests even if I'm not running the robot tests?

TODOs:

  • squash commits
  • include documentation
  • add unit tests

@janiskemper (Contributor)

Thanks a lot @rbjorklin! @guettli I think you can have a look here

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: HCloudMachineTemplate
metadata:
  name: "${CLUSTER_NAME}-md-0"
Collaborator

I think we should give the machine deployment a different name so that it does not clash with the other MD. I suggest "${CLUSTER_NAME}-md-ipv6only-hcloud".
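For clarity, a minimal sketch of the template header with the suggested name applied (the remaining fields of the template are unchanged and omitted here):

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: HCloudMachineTemplate
metadata:
  name: "${CLUSTER_NAME}-md-ipv6only-hcloud"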


processControlPlaneEndpoint(hetznerCluster)

if hetznerCluster.Spec.ControlPlaneEndpoint.Host != "abc" {
Collaborator

I would suggest using require.Equal(t, a, b) instead of the manual comparison.
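For illustration, a minimal sketch of what that assertion could look like with testify's require package. The package name, test name, and the newTestHetznerCluster fixture helper are placeholders, not code from this PR:

package controllers

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func TestProcessControlPlaneEndpoint(t *testing.T) {
	// newTestHetznerCluster is a hypothetical fixture helper standing in for
	// however the PR builds its HetznerCluster test object.
	hetznerCluster := newTestHetznerCluster()

	processControlPlaneEndpoint(hetznerCluster)

	// require.Equal fails the test immediately and prints a readable diff,
	// replacing the manual if-comparison against "abc".
	require.Equal(t, "abc", hetznerCluster.Spec.ControlPlaneEndpoint.Host)
}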

@guettli (Collaborator) commented Jun 20, 2024

@rbjorklin @JochemTSR What about having a meeting where we talk about the goals and strategies to achieve them? Unfortunately, I will be on vacation from Friday afternoon. I will be back on July 8th. We could meet today, tomorrow, or in two weeks. Maybe you two could sync up first? It would be great if you could create an overview, maybe not in GitHub, but in a Google Doc.

Here are some questions that come to mind:

  • Do you plan to use DNS entries for the LoadBalancer?
  • Will you use an IPv6-only LB or dual-stack?
  • What about the Kubernetes network: IPv6-only, dual-stack, or IPv4-only?
  • Why do you want that: Do you want to save costs? Do you want to provide IPv6 services?
  • Which locations do you use?
  • What about Hetzner bare-metal?
  • Do you use Kubernetes with IPv6 in other providers?
  • Are you using AMD only, or do you plan to use ARM as well?

I know some of these questions are not related to this PR, but I would like to have a better understanding of how community members use our CAPI provider (aside from our commercial customers).

@JochemTSR


@guettli @rbjorklin I made a quick write-up here: (https://cloud.jochemram.net/s/gkcfLMbkpExcCfr). I am available for a meeting tomorrow anytime after 5PM UTC. Please reach out at [email protected] for contact details or for editing access to the document.

Regards,
Jochem

@guettli (Collaborator) commented Jun 21, 2024


Hi Jochem, I don't use email much, so I propose another way: we could use the Kubernetes Slack. I am "Thomas Güttler" there. I searched for your name but did not find it.

Friday 5PM UTC is too late for me. I can answer via Slack, but I can't dive into the source code. I will be back on July 8th. But please contact me today via the Kubernetes Slack and send me the password for the above document. Thank you.

@guettli (Collaborator) commented Aug 15, 2024

@rbjorklin @JochemTSR sorry for the long delay. Are you still interested in getting this PR merged?

@rbjorklin (Author)

I'm still interested in being able to spin up IPv6-only clusters. Realistically, though, I won't be spending any time moving this forward as long as the weather is nice. I might find my way back here around the end of October.
