
(WIP) feat: add support for processing handshake packets async via vacation #99

Draft: wants to merge 3 commits into main
Conversation


@jlizen jlizen commented Dec 30, 2024

WIP, I plan to add:

  • async handling for client/server instantiation as well, under the same feature flag, since that was also blocking in my profiling
  • probably some cleanup of the error modeling

I also have yet to publish the new dependency crate, so all CI will fail :)

Staging this now to demonstrate what it would be like to use jlizen/vacation#9 in practice, and to gather early feedback from my colleagues (or from the tokio-rustls maintainers, if you have time; no rush, I'll cut an official PR once this is polished).


closes #94
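
To make the motivation concrete, here is a rough, hypothetical sketch of the underlying idea: moving the CPU-heavy part of handshake packet processing off the async reactor. tokio's spawn_blocking stands in here for whatever strategy the executor crate is configured with; the function name and error handling are invented for illustration and are not the API in this PR.

// Hypothetical illustration only; not the PR's actual API.
use tokio::task;

async fn process_packets_off_reactor(
    mut conn: rustls::ServerConnection,
    incoming: Vec<u8>,
) -> std::io::Result<rustls::ServerConnection> {
    task::spawn_blocking(move || {
        // read_tls + process_new_packets do the crypto-heavy handshake work,
        // so run them on a blocking thread instead of inside poll().
        let mut cursor = std::io::Cursor::new(incoming);
        conn.read_tls(&mut cursor)?;
        conn.process_new_packets()
            .map_err(|err| std::io::Error::new(std::io::ErrorKind::InvalidData, err))?;
        Ok(conn)
    })
    .await
    .expect("blocking task panicked")
}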

@jlizen jlizen changed the title feat: add support for processing handshake packets async via compute-heavy-future-executor (WIP) feat: add support for processing handshake packets async via compute-heavy-future-executor Dec 30, 2024
Review threads (outdated, resolved): Cargo.toml, src/common/mod.rs
jlizen commented Dec 30, 2024

Pushed a new change that adds an enum to determine whether to process packets async. Also updated to reflect some changes in the dependency crate (now called vacation).
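
As a rough illustration of the shape of that opt-in (the names below are invented for illustration, not the actual enum in this change), it could look something like:

// Hypothetical sketch only; does not reflect the enum actually added here.
#[derive(Clone, Copy, Debug, Default)]
pub enum HandshakeProcessing {
    /// Process handshake packets inline on the async task (existing behavior).
    #[default]
    Inline,
    /// Offload handshake packet processing to the configured executor.
    Offloaded,
}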

@jlizen jlizen changed the title (WIP) feat: add support for processing handshake packets async via compute-heavy-future-executor (WIP) feat: add support for processing handshake packets async via vacation Dec 30, 2024
@quininer (Member) commented

This has become too big and too intrusive for me. I believe there is a non-intrusive way to handle it.

Something like this:

use std::{
    future::Future,
    pin::Pin,
    task::{ready, Context, Poll},
};

struct ComputeHeavy<F: Future, P> {
    source: F,
    prepare: P,
    execute: Option<Pin<Box<dyn Future<Output = F::Output>>>>,
}

impl<F, P> Future for ComputeHeavy<F, P>
where
    F: Future + Unpin,
    P: Fn(&mut F, &mut Context<'_>) -> Poll<()> + Unpin,
{
    type Output = F::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = self.get_mut();

        // If the heavy work has already been handed off, just poll that future.
        if let Some(exec) = this.execute.as_mut() {
            return exec.as_mut().poll(cx);
        }

        // Drive the cheap I/O part first (e.g. filling a read buffer).
        ready!((this.prepare)(&mut this.source, cx));

        // Offload the compute-heavy poll of the source future via `execute`
        // (whatever executor strategy is configured), then poll the result.
        this.execute
            .insert(execute(|| Pin::new(&mut this.source).poll(cx)))
            .as_mut()
            .poll(cx)
    }
}

let fut = acceptor.accept(io);
let tls_stream = ComputeHeavy::new(
    // source: the accept future driving the handshake
    fut,
    // prepare: fill the underlying IO's read buffer so the heavy poll has data
    |fut, cx| fut.get_mut().unwrap().poll_fill_buf(cx),
).await;

Using poll_fill_buf means an additional buffer is required, but it does not require any intrusive adaptation.

If we want to expose a fill_to_rustls_buffer (of course it should have a better name) to avoid the additional buffer, I think that would also be fine.


jlizen commented Dec 31, 2024

Thanks for the read-through and advice, @quininer! I'll try out the suggested approach.

Successfully merging this pull request may close: #94 ([Feature request] Support alternative future executor handling for compute-bounded futures)