Unable to Set S3 Bucket URL for Team Workspace #2826
Comments
Hey @zhihuizhang17! Just to make sure we are on the same page: when developing locally you use a Teams workspace and have an S3 file storage that is accessible to you locally. Then, on the pod instance running the Livebook app server, you want to use a different URL for that file storage, which is only accessible from the pod. I assume your use case is reading files from the storage in a notebook.
Hey @jonatanklosko! Thanks for your reply. I think there might be a misunderstanding, so let me clarify our use case.
@zhihuizhang17 you are getting that error because the Livebook instance you are using is an "agent" instance. It is only meant to run deployed notebooks and is therefore read-only (in fact, we should not be allowing you to set the file system for the personal workspace). You should deploy a regular Livebook instance, as listed here, and then they should all work. We should probably make this clearer.
@josevalim Sorry, I'm a bit confused now. I followed the guide you mentioned to deploy my Livebook Teams service, with only one change.
I believe I have already set it up correctly. How can I determine whether it's an agent instance or a regular Livebook instance?
@zhihuizhang17 when you deployed it, did you have to manually add your Livebook Teams workspace to that instance, or was it already there? Have you set the Teams-related environment variables?
Yes, it was already there.
Yes, I set them.
Yes, that's precisely the issue. Once you set those two env vars, it becomes an "agent". You should remove them and then you will be able to add your org as usual, as you did on your local machine, with full access. I will improve the docs.
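As a sketch, assuming the deployment sets the agent-related variables in the container `env` of the Deployment yaml, switching back to a regular instance amounts to dropping them. The exact variable names are not spelled out in this thread; `LIVEBOOK_TEAMS_KEY` and `LIVEBOOK_TEAMS_AUTH` below are my assumption, and the secret name is a placeholder:

```yaml
env:
  # Removing the agent-related variables turns this back into a regular
  # Livebook instance (the names below are assumed, not confirmed in this thread):
  # - name: LIVEBOOK_TEAMS_KEY
  #   value: "..."
  # - name: LIVEBOOK_TEAMS_AUTH
  #   value: "..."
  - name: LIVEBOOK_PASSWORD          # unrelated variable, kept only as an example
    valueFrom:
      secretKeyRef:
        name: livebook-secrets       # placeholder secret name
        key: password
```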
@josevalim It works!!! However, there are still some unusual behaviors.
Update
This ensures that the team workspace is always present when I refresh the UI after the next deployment.
This is happening because you have multiple instances and the workspace connection is per instance. We currently don't sync them across the cluster. If you run a single instance (as you would run a single instance on your machine), then you should be fine. Thank you for the feedback, we will discuss internally how to improve the workflow. Meanwhile, I will update the docs to mention it should be a single instance.
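In Kubernetes terms, a minimal sketch of what "a single instance" means for a Deployment like the one described in the issue body below:

```yaml
spec:
  replicas: 1   # one Livebook instance, so the workspace connection is not split across pods
```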
Closing this in favor of #2827.
Background
We use AWS IAM to grant Kubernetes pods access to an S3 bucket for our services by adding the `iam.amazonaws.com/role` annotation in the Deployment yaml. The bucket is accessible only by the Livebook team's service account, which means that access is restricted to pods within the Kubernetes cluster. As a result, I am unable to access the S3 bucket from a local Livebook instance, and I can only set the bucket URL for my personal workspace using the Livebook team's instance.
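For illustration, a minimal sketch of the kind of Deployment annotation described above, assuming a kube2iam/kiam-style setup; the names, role ARN, image, and port are placeholders rather than values taken from this issue:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: livebook-teams                  # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: livebook-teams
  template:
    metadata:
      labels:
        app: livebook-teams
      annotations:
        # Grants the pod an IAM role that can read the S3 bucket
        iam.amazonaws.com/role: arn:aws:iam::123456789012:role/livebook-s3-access  # placeholder ARN
    spec:
      containers:
        - name: livebook
          image: ghcr.io/livebook-dev/livebook   # assumed image location
          ports:
            - containerPort: 8080               # Livebook's default port
```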
However, I am facing an issue when trying to set the `bucket_url` for my team's workspace from the Livebook team's pod. I encounter the following error message:
Proposed Solution
To resolve this issue, I propose adding a new environment variable that allows us to set the `bucket_url` when `LIVEBOOK_AWS_CREDENTIALS` is set to true. This would help configure access appropriately when running in environments with restricted access, such as within a Kubernetes pod.

BTW, there is a typo in the above snapshot; see #2824.
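A rough sketch of how the proposal could look in the container spec; `LIVEBOOK_TEAMS_BUCKET_URL` and the bucket URL are hypothetical values invented here for illustration, not an existing variable:

```yaml
env:
  - name: LIVEBOOK_AWS_CREDENTIALS          # existing flag referenced above
    value: "true"
  # Hypothetical variable proposed in this issue; the name is made up for illustration:
  - name: LIVEBOOK_TEAMS_BUCKET_URL
    value: "https://my-team-bucket.s3.us-east-1.amazonaws.com"
```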