
Clarify safety on untrusted code #249

Open
afq984 opened this issue Apr 4, 2023 · 2 comments

afq984 commented Apr 4, 2023

starlark/README.md

Lines 36 to 37 in ce1fdb0

* **Hermetic execution**. Execution cannot access the file system, network,
system clock. It is safe to execute untrusted code.

The readme says:

It is safe to execute untrusted code.

google/starlark-go#241 (comment) says:

we've never claimed that it is secure for running untrusted code. Scripts can easily cause denial of service by exhausting all memory, or by hash flooding.

It seems like executing arbitrary Starlark code could crash a system, but other than that, there should be no way to escape the execution environment. Is this expectation correct?

The safety expectations also sound different from what https://github.com/google/cel-go offers, so it would be great if this could be elaborated on in the README.
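For concreteness, here is a minimal sketch of the denial-of-service point (my own illustration, not code from the thread; the syntax is valid in both Starlark and Python). Even with hermetic execution, a script needs no file-system, network, or clock access to exhaust host memory:

```python
# Hypothetical illustration: repeated list doubling grows memory exponentially.
def explode(iterations):
    x = ["payload"]
    for _ in range(iterations):
        x = x + x  # list length doubles each pass: 2**iterations elements
    return len(x)

# With a small count this is harmless; with e.g. 40+ iterations it would
# attempt to allocate trillions of elements and take down the host process.
print(explode(10))
```

So "hermetic" rules out escaping the sandbox to reach external state, but it does not by itself bound resource consumption.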

@stepancheg (Contributor)

(Deleted wrong comment)

@brandjon (Member)

Thanks for pointing this language out.

I'm worried that "It is safe to execute untrusted code" conveys unrealistic expectations about how hardened any given Starlark implementation is likely to be against truly malicious users. Imagine an implementation in a non-memory-safe language: Isn't "safe to execute untrusted code" practically equivalent to "There are no crash bugs"?

In Bazel, the design constraint that led to Starlark wasn't that we're dealing with untrusted code so much as that we want to preclude, by construction, code that is hostile to determinism and parallelism. Yes, to some extent you expect a user not to be able to compromise a Bazel host environment at the Starlark level (though you certainly could compromise it at the Bazel action-execution level if not sandboxed). But we can soften the wording to avoid giving the impression that it's a primary use case that all implementations must satisfy.
