✨Fleet example changes #1
Conversation
Signed-off-by: Iván Álvarez <[email protected]>
```go
		return nil
	}
	cm.engagedClustersLock.RUnlock()
```
Great find! But don't we need both: RUnlock on return, and RUnlock for the else case?
Yes! I missed that. I have also updated the fleet example, which had the same error, here.
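For reference, the fixed pattern looks roughly like this. It is a minimal sketch, not the actual PR code: the type, map, and method names are hypothetical; only `engagedClustersLock` and the `RLock`/`RUnlock` calls come from the diff above.

```go
package manager

import "sync"

// Hypothetical types, for illustration only.
type clusterManager struct {
	engagedClustersLock sync.RWMutex
	engagedClusters     map[string]struct{}
}

// isEngaged sketches the pattern under review: once RLock is taken,
// RUnlock must run on the early-return path *and* on the else path,
// otherwise a later Lock() call will block forever.
func (cm *clusterManager) isEngaged(name string) bool {
	cm.engagedClustersLock.RLock()
	if _, ok := cm.engagedClusters[name]; ok {
		cm.engagedClustersLock.RUnlock() // release on return
		return true
	}
	cm.engagedClustersLock.RUnlock() // release for the else case
	return false
}
```

A `defer cm.engagedClustersLock.RUnlock()` immediately after the `RLock()` would cover both paths at once and make this class of bug impossible.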
examples/fleet/main.go
```go
	// Retrieve the pod from the cluster.
	pod := &corev1.Pod{}
	if err := client.Get(ctx, req.NamespacedName, pod); err != nil {
		return reconcile.Result{}, err
	}
	log.Info("Reconciling pod", "name", pod.Name, "uuid", pod.UID)
	log.Info(fmt.Sprintf("Retrieved pod %s:>%s/%s", cl.Name(), pod.Namespace, pod.Name))
```
This is an anti-pattern for structured logging. Just add `"ns", pod.Namespace`.
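For illustration, the structured form of the log call above would look like this (a sketch against the diff's variables; `log` is assumed to be a logr-style logger, as used by controller-runtime):

```go
// Before: formatting fields into the message string hides them from
// structured-logging backends.
log.Info(fmt.Sprintf("Retrieved pod %s:>%s/%s", cl.Name(), pod.Namespace, pod.Name))

// After: fields stay as key/value pairs and remain machine-queryable.
log.Info("Retrieved pod", "cluster", cl.Name(), "ns", pod.Namespace, "name", pod.Name)
```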
Fixed :)
Co-authored-by: Dr. Stefan Schimanski <[email protected]>
Signed-off-by: Iván Álvarez <[email protected]>
…ntroller-runtime into sttts-cluster-support
Signed-off-by: Iván Álvarez <[email protected]>
Perfect. Merging. Btw, I want to come back to the PR and the design proposal soonish. Thanks for trying it out!
Hi @sttts,

First of all, let me give some context about this PR.

Starting from the (amazing!!!) changes of your PR, my goal was to create an operator that watches changes across many clusters. I was able to create the POC, but I had to add some small changes, reflected in this PR.

During my tests I was not able to `engage` a new cluster, as the code always hung at line 627, `cm.engagedClustersLock.Lock()`, because the lock was never `RUnlock(ed)` (a sketch of this failure mode follows below). The same error was present in the `fleet` test. If these changes make sense to you, feel free to merge this PR or add them directly in your branch.
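The hang described above is the usual `sync.RWMutex` failure mode: `Lock()` waits for every outstanding read lock to be released, so a single missing `RUnlock` stalls the next writer indefinitely. A minimal, self-contained reproduction (hypothetical, not code from the PR):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var mu sync.RWMutex

	mu.RLock() // a read lock that is never released...

	done := make(chan struct{})
	go func() {
		mu.Lock() // ...makes the next write Lock() block forever
		defer mu.Unlock()
		close(done)
	}()

	select {
	case <-done:
		fmt.Println("writer acquired the lock")
	case <-time.After(2 * time.Second):
		fmt.Println("writer is still blocked: the read lock was never RUnlock(ed)")
	}
}
```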
Note: I also tried to run `make test` to validate these code changes, but I got some compiler errors. I have not found any reference in the code to `cluster.WatchEvent`. If you could give me some context about this error, I will be happy to review it.

Regards,
Iván