
Implement graceful shutdown #236

Open: wants to merge 11 commits into master
Conversation

@nhoad (Contributor) commented Aug 28, 2022

Hi! This PR attempts to implement graceful shutdown per the recommendation in this comment: #91 (comment). The commit message provides more detail on exactly what happens and when.

Note one distinct difference from the suggested implementation: the listener socket is closed after the backlog is drained. For my use case (running many instances of my HTTP server with SO_REUSEPORT) this works better, as it ensures that any new connections go to other instances ASAP and that this instance can shut down relatively quickly. Based on further discussion on the issue, I think this would be the most beneficial behavior for others too. If you'd like that behavior to be configurable as well, just let me know and I can add another config flag for it.

(hopefully) fixes #91.

nhoad added 2 commits August 27, 2022 20:05
When WANT_GRACEFUL_SHUTDOWN is defined, the following events will happen
on SIGTERM:

1. Drain the backlog on the listening socket by accepting as many
   connections as it can. Previously they were likely to get a
   connection reset error.
2. Close the listening socket so no more clients can connect.
3. Check every 100ms if there are any active connections. An active
   connection is defined as a connection that is part-way through
   handling a request. By not counting keep-alive connections, this
   prevents bjoern from living longer than necessary.
4. If after 30 seconds there are still active connections, shut down
   anyway.

Also, after receiving SIGTERM, the server will stop keeping connections
alive.
- graceful-shutdown-closes-socket tests that the socket is closed upon
  receiving SIGTERM.
- graceful-shutdown-accept-backlog tests that any pending connections
  are accepted.
- graceful-shutdown-timeout tests that active connections are dropped
  after 30 seconds.
- graceful-shutdown-keep-alive tests that inactive-but-open connections
  do not block shutdown.
@jonashaag (Owner) left a comment

Terrific work! Thanks so much for working on this!

bjoern/server.c Outdated
ev_signal_stop(mainloop, watcher);
#ifdef WANT_SIGNAL_HANDLING
ev_timer_stop(mainloop, &timeout_watcher);
#endif
#ifdef WANT_GRACEFUL_SHUTDOWN
ev_signal_stop(mainloop, &thread_info->sigterm_watcher);
ev_timer_stop(mainloop, &timeout_watcher);
@jonashaag (Owner):
Is it safe to stop the same timer twice?

@jonashaag (Owner):

Btw, do you know if it is possible to receive a timeout_watcher callback call during ev_signal_on_sigxxx? If so, is it a problem? Shall we move stopping the timeout watcher to the very top?

@nhoad (Contributor, Author):

Good point! I'll move the timeout watcher up to the top and shuffle the code around so that it only gets stopped once.

From the documentation for ev_TYPE_stop at http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod: "Stops the given watcher if active, and clears the pending status (whether the watcher was active or not)." So it looks like it's safe to stop it more than once. Because it also clears the pending status, we shouldn't receive a timeout_watcher callback during this handler either.

@@ -11,6 +11,10 @@ static PyObject*
run(PyObject* self, PyObject* args)
{
ServerInfo info;
#ifdef WANT_GRACEFUL_SHUTDOWN
info.active_connections = 0;
@jonashaag (Owner):
Is there a built in way in libev to handle connection draining so that we don't have to keep a counter? I'm worried about counting bugs.

@nhoad (Contributor, Author):

Unfortunately no; going through the documentation, I can't see any way to do that. In case it helps, I went through a couple of iterations of the counting code and this was by far the simplest one I came up with: counting a connection as "active" once http-parser tells us a message has begun, and no longer counting it once we're done sending the response.

I contemplated storing all of the requests in an array/linked list/hash map on the ThreadInfo struct, and then watching for that to either drop to zero entries or contain only entries that aren't handling requests. That would eliminate counting bugs, but it would also be significantly more complex, as we'd need to handle growing the array occasionally if there's an influx of requests. I think we'd also need to store some additional state on the Request struct for whether or not it's currently handling a request. Given the complexity, that's why I decided to go for the simpler approach :)

That said though, I think it's worth doing some testing on dropping connections halfway through requests at various points in the lifecycle to see how the counting holds up. I'll go back and do that, and report on my findings :)

@nhoad (Contributor, Author):

My testing found something - aborted connections weren't being accounted for. I've fixed that up and added some tests for it as well.

@jonashaag (Owner):

Why not implement counting as follows:

  • A successful accept increments the counter
  • A close decrements the counter

So, move to ev_io_on_requests and close_connection?

@nhoad (Contributor, Author):

We could do that! That would certainly be simpler. The downside to doing that is that any persistent connections that aren't being actively used will hold the event loop open until the timeout is reached. Is that acceptable?

ev_signal_stop(mainloop, watcher);
ev_timer_stop(mainloop, &timeout_watcher);

// don't shutdown immediately, but start a timer to check if we can shutdown in 100ms.
@jonashaag (Owner):
Do we still need this if we change the counting to what I suggested above? It feels like this timeout stuff is unnecessarily complicated

@nhoad (Contributor, Author):

We would still need it, yes. Without this there's nothing checking whether there are any active connections, so shutdown would always take 30 seconds (i.e. the other timeout would be reached).

@nhoad (Contributor, Author):

Thinking a little bit more on this, I could remove the ev_shutdown_timeout_ontick function and associated timer if I did the following:

  • When SIGTERM is received, store ev_now(loop) + 30 in an ev_tstamp on the server
  • Make ev_shutdown_ontick check if shutdown_time <= ev_now(loop), and use that to stop the loop.

That feels a bit simpler to me than the timeout juggling it currently does.

@jonashaag (Owner):
@nhoad are you planning to work on this some more?

@nhoad (Contributor, Author) commented Jan 12, 2023

I'm happy with the branch as it is, but if you'd like me to make any changes just let me know 😄

Successfully merging this pull request may close these issues.

Gracefully exiting a Python process running Bjoern