Reimplement client proxy in go #276
Conversation
This all seems good: I have some weird nit-picks and a few notes for some incremental improvements.
It seems sort of complicated/fragile that we have this proxy communicating with the API server over DNS. I might be wrong, but only the proxies are going to be making DNS requests, is that correct?
In the future when there are multiple API servers and the state of the host machines (instances?) is fully managed by the orchestrators, it seems like the API servers and orchestrators could do the proxying directly for a different port/set of routes?
Because the orchestrator owns the sandboxes, it seems reasonable to delay combining things into one process until some later point when the orchestrators are restartable without impacting sandboxes (e.g. by separating the process-management and data-management parts, with data persisted (locally) to disk).
(My instinct says that DNS is maybe the wrong protocol for doing this level of IPC between our components. Moving proxying into the other applications means that we can just have a local API to get data to the proxies, or we could use a different transport (e.g. gRPC) which might be easier to manage/understand.) But I'm not sure if my instinct is actually correct: removing the extra communication could be a useful simplification, but I'm less convinced that changing protocols would be important.
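To make the DNS concern concrete, here is a minimal sketch (not the PR's actual code) of what a DNS-backed lookup in the Go proxy could look like, assuming the API server answers A-record queries for sandbox IDs. The resolver address, Host-header convention, and upstream port are all placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
	"time"
)

// resolver sends queries straight to the API server's DNS endpoint instead
// of the system resolver. The 127.0.0.1:5353 address is a placeholder.
var resolver = &net.Resolver{
	PreferGo: true,
	Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
		d := net.Dialer{Timeout: 2 * time.Second}
		return d.DialContext(ctx, "udp", "127.0.0.1:5353")
	},
}

// lookupSandbox resolves a sandbox ID to the IP of the node hosting it.
func lookupSandbox(ctx context.Context, sandboxID string) (string, error) {
	ips, err := resolver.LookupIPAddr(ctx, sandboxID+".")
	if err != nil {
		return "", fmt.Errorf("resolve %s: %w", sandboxID, err)
	}
	if len(ips) == 0 {
		return "", fmt.Errorf("no records for %s", sandboxID)
	}
	return ips[0].IP.String(), nil
}

func main() {
	proxy := &httputil.ReverseProxy{
		Rewrite: func(r *httputil.ProxyRequest) {
			// Hypothetical convention: the sandbox ID is the first
			// label of the incoming Host header.
			sandboxID, _, _ := strings.Cut(r.In.Host, ".")
			ip, err := lookupSandbox(r.In.Context(), sandboxID)
			if err != nil {
				return // real code would respond with a 502 here
			}
			// :3003 stands in for whatever port the sandbox listens on.
			r.SetURL(&url.URL{Scheme: "http", Host: ip + ":3003"})
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```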
We didn't want to have all connections going through the API servers: if the API server is proxying, you have to wait until all connections are closed before redeploying, which can take up to 24 hours. That matters especially if we want to be able to deploy the API server often. Also, there are probably different requirements for the API server and the proxy.
That's a good point; we probably could do that. I was maybe stuck on the solution we already had in place and just moved it to the new proxy.
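For contrast with the DNS version above, a minimal sketch of what the same lookup could look like over a plain local API, along the lines suggested in the review; the endpoint, address, and response shape are all hypothetical:

```go
package proxy

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

// nodeInfo is a hypothetical response shape for the routing lookup.
type nodeInfo struct {
	IP   string `json:"ip"`
	Port int    `json:"port"`
}

// lookupSandboxHTTP asks a co-located API (placeholder address) which node
// currently hosts the sandbox, replacing the DNS round-trip entirely.
func lookupSandboxHTTP(ctx context.Context, sandboxID string) (nodeInfo, error) {
	var info nodeInfo
	url := "http://127.0.0.1:7000/v1/sandboxes/" + sandboxID + "/node"
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return info, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return info, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return info, fmt.Errorf("lookup %s: %s", sandboxID, resp.Status)
	}
	return info, json.NewDecoder(resp.Body).Decode(&info)
}
```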
Depends on how resources are allocated, but if we're just talking about a Docker container running a Go process, it maybe doesn't matter if the old process hangs around for a day without doing very much. Having said that, and correct me if I'm wrong, it's very unlikely that a connection would last for 24 hours. The sandbox might, but a single (HTTP) connection is very unlikely to. The proxy only needs to live as long as the longest request, not the sandbox.
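To illustrate the draining point: with Go's standard-library server, the old proxy process only has to outlive in-flight requests, not sandboxes. A minimal sketch, where the signal handling and the grace period are placeholders:

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080" /* Handler: the proxy */}

	go func() {
		if err := srv.ListenAndServe(); err != http.ErrServerClosed {
			log.Fatalf("listen: %v", err)
		}
	}()

	// Wait for the deploy tooling to signal shutdown.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Shutdown stops accepting new connections and waits for in-flight
	// requests to finish, so the process lives only as long as the
	// longest request. The 30s grace period is a placeholder.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("forced shutdown: %v", err)
	}
}
```

One caveat: `Shutdown` does not wait for hijacked connections such as WebSockets, so truly long-lived upgraded connections would still need to be tracked separately.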
Seems totally correct.
Yes, and isn't it harder to manage if you basically have more stateful resources?
Force-pushed from 333baee to cf4e64c
Implement proxy in go