feat(storage): lazily remove instance data in uploader to avoid data loss #19931
Conversation
@@ -1414,28 +1435,7 @@ impl HummockUploader {
        let UploaderState::Working(data) = &mut self.state else {
            return;
        };
        if let Some(removed_table_data) = data.unsync_data.may_destroy_instance(instance_id) {
            data.task_manager.remove_table_spill_tasks(
For "drop streaming job", I think it is still preferable to cancel the task instead of waiting the task to finsih because the data will be deleted anyway. Can you elabrate in which cases we should preserve the upload task instead of cancelling it on instance destruction?
Just updated the PR description.
In short, this PR is not for drop streaming job, but for the single-barrier configuration change in #18312. In a configuration change that may concurrently stop some existing actors and create new actors, the actors to be dropped, say in epoch1, should write their epoch1 data to hummock, and that data should be included in the SSTs of epoch1. However, in the current main branch, the data of these to-be-dropped actors is unlikely to be included in the SSTs of epoch1, because their data may be dropped immediately when the instance is destroyed, before sync(epoch1) is called. For the same reason, we should also preserve the spilling task.
The current main branch works because in a configuration change we have a pause barrier, say epoch0, and we can ensure that no data is written between epoch0 and epoch1. Therefore, even if the instances are dropped, we won't lose data.
For a normal drop streaming job, it's fine to drop the instance, cancel the related spilling tasks, and ignore the data loss. But given that we cannot distinguish whether dropping an instance is caused by drop streaming job or by configuration change, we may let it follow the same handling logic as configuration change. I think the benefit of uploading less data is not worth the increased complexity. Besides, a spilling task usually mixes data from different tables, which may include normal tables and the table to be dropped. Therefore, the spilling task may not really be cancelled when we call the previous remove_table_spill_tasks.
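To make the point about mixed spill tasks concrete, here is a minimal, purely illustrative Rust sketch (the `SpillTask` type and the helper below are hypothetical, not the actual uploader API): a task that bundles data from several tables could only be cancelled safely if it contains nothing but the dropped table, which is rarely the case.

```rust
use std::collections::{HashMap, HashSet};

type TableId = u32;
type SpillTaskId = u64;

// Hypothetical stand-in for a spill/upload task: it bundles in-memory data
// from possibly many tables into a single upload.
struct SpillTask {
    table_ids: HashSet<TableId>,
}

// A spill task could only be cancelled on table removal if *every* table it
// carries is the dropped one; mixed tasks must still run to completion,
// since cancelling them would also discard data of unrelated tables.
fn safely_cancellable(
    tasks: &HashMap<SpillTaskId, SpillTask>,
    dropped_table: TableId,
) -> Vec<SpillTaskId> {
    tasks
        .iter()
        .filter(|(_, task)| {
            task.table_ids.len() == 1 && task.table_ids.contains(&dropped_table)
        })
        .map(|(id, _)| *id)
        .collect()
}
```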
a spilling task usually mixes data from different tables, which may include normal tables and the table to be dropped.
This is a good point. Thanks for the explanation.
This stack of pull requests is managed by Graphite. Learn more about stacking.
LGTM
I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.
What's changed and what's your intention?
Currently in the uploader, when a local instance is destroyed, we remove the instance from the uploader immediately, even if there is still some unsynced data in the instance. This logic works in the current code because an instance can be destroyed in 3 scenarios: drop streaming job, configuration change, and actor exit on error or recovery. When dropping a streaming job, the table will be removed soon, so the loss of data is invisible. In a configuration change, before the barrier that drops the actors, we have a pause barrier to ensure that no data is written in between, so it works for configuration change as well.
However, when we support #18312, there will not be a pause barrier, and then we may lose data if sync is called after the actor exits. So we need to fix this before implementing #18312.
In this PR, we change to lazily remove the instance data. When an instance is destroyed, the uploader only marks the instance as destroyed. The instance data will be removed later, once all of its sealed epochs have started syncing.
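To illustrate the lazy-removal idea, here is a minimal sketch under assumed, hypothetical types and method names (`InstanceData`, `destroy_instance`, and `start_sync` are illustrative, not the real HummockUploader API): destroying an instance only sets a flag, and the instance's data is dropped only after every epoch it still holds has started syncing.

```rust
use std::collections::HashMap;

type InstanceId = u64;
type Epoch = u64;

struct InstanceData {
    // Epochs whose data has been written by this instance but not yet
    // handed to a sync.
    unsync_epochs: Vec<Epoch>,
    // Set when the local instance is dropped; data is kept until synced.
    is_destroyed: bool,
}

struct Uploader {
    instances: HashMap<InstanceId, InstanceData>,
}

impl Uploader {
    // Called when a local instance is dropped: instead of removing the
    // instance (and losing its unsynced data), only mark it as destroyed.
    fn destroy_instance(&mut self, instance_id: InstanceId) {
        if let Some(instance) = self.instances.get_mut(&instance_id) {
            instance.is_destroyed = true;
        }
    }

    // Called when sync(epoch) starts: data up to `epoch` has been handed to
    // upload tasks, so the instances no longer need to hold it.
    fn start_sync(&mut self, epoch: Epoch) {
        for instance in self.instances.values_mut() {
            instance.unsync_epochs.retain(|e| *e > epoch);
        }
        // A destroyed instance is removed only once all of its sealed epochs
        // have started syncing, so no unsynced data is lost.
        self.instances
            .retain(|_, inst| !(inst.is_destroyed && inst.unsync_epochs.is_empty()));
    }
}
```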
Checklist
Documentation
Release note