
feat(storage): lazily remove instance data in uploader to avoid data loss #19931

Merged
merged 4 commits into main from yiming/lazily-remove-uploader-instance-data on Dec 25, 2024

Conversation

@wenym1 (Contributor) commented on Dec 25, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

Currently in the uploader, when a local instance is destroyed, we remove the instance from the uploader immediately, even if there is still unsynced data in the instance. This works in the current code because an instance can be destroyed in three scenarios: dropping a streaming job, configuration change, and actor exit on error or recovery. When dropping a streaming job, the table will be removed soon, so the loss of data is invisible. In a configuration change, before the barrier that drops the actors, we issue a pause barrier to ensure that no data is written in between, so it works for configuration change as well.

However, once we support #18312, there will no longer be a pause barrier, and we may lose data if sync is called after the actor exits. So we need to fix this before implementing #18312.

In this PR, we change the uploader to remove instance data lazily. When an instance is destroyed, the uploader only marks it as destroyed; its data is actually removed later, once all of its sealed epochs have started syncing.
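To make the intended flow concrete, below is a minimal sketch of the lazy-removal idea. The types and method bodies are hypothetical simplifications for illustration; only `may_destroy_instance` corresponds to a name visible in the actual diff, and the real `HummockUploader` internals differ.

```rust
use std::collections::{BTreeMap, HashMap};

type InstanceId = u64;
type Epoch = u64;

/// Hypothetical, simplified per-instance unsync data.
struct InstanceData {
    /// Sealed epochs whose data has not yet started syncing.
    sealed_epochs: BTreeMap<Epoch, Vec<u8>>,
    /// Set when the instance is destroyed; the entry is kept until all
    /// sealed epochs have started syncing.
    is_destroyed: bool,
}

struct UnsyncData {
    instances: HashMap<InstanceId, InstanceData>,
}

impl UnsyncData {
    /// Instead of eagerly removing the instance (and losing its unsynced
    /// data), only mark it as destroyed.
    fn may_destroy_instance(&mut self, instance_id: InstanceId) {
        if let Some(instance) = self.instances.get_mut(&instance_id) {
            instance.is_destroyed = true;
        }
        self.try_remove_if_drained(instance_id);
    }

    /// Called when sync starts for `epoch`: hand the epoch's data to the
    /// sync task, then drop the instance entry if it is destroyed and has
    /// no sealed epochs left.
    fn start_sync_epoch(&mut self, instance_id: InstanceId, epoch: Epoch) -> Option<Vec<u8>> {
        let payload = self
            .instances
            .get_mut(&instance_id)?
            .sealed_epochs
            .remove(&epoch);
        self.try_remove_if_drained(instance_id);
        payload
    }

    fn try_remove_if_drained(&mut self, instance_id: InstanceId) {
        let drained = self
            .instances
            .get(&instance_id)
            .is_some_and(|i| i.is_destroyed && i.sealed_epochs.is_empty());
        if drained {
            self.instances.remove(&instance_id);
        }
    }
}
```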

Checklist

  • I have written necessary rustdoc comments.
  • I have added necessary unit tests and integration tests.
  • I have added test labels as necessary.
  • I have added fuzzing tests or opened an issue to track them.
  • My PR contains breaking changes.
  • My PR changes performance-critical code, so I will run (micro) benchmarks and present the results.
  • My PR contains critical fixes that are necessary to be merged into the latest release.

Documentation

  • My PR needs documentation updates.
Release note

@@ -1414,28 +1435,7 @@ impl HummockUploader {
let UploaderState::Working(data) = &mut self.state else {
return;
};
if let Some(removed_table_data) = data.unsync_data.may_destroy_instance(instance_id) {
data.task_manager.remove_table_spill_tasks(
Collaborator

For "drop streaming job", I think it is still preferable to cancel the task instead of waiting the task to finsih because the data will be deleted anyway. Can you elabrate in which cases we should preserve the upload task instead of cancelling it on instance destruction?

Contributor Author

Just updated the PR description.

In short, this PR is not for dropping streaming jobs, but for the single-barrier configuration change in #18312. In a configuration change that may concurrently stop some existing actors and create new ones, the actors to be dropped, say in epoch1, should write their epoch1 data to hummock, and that data should be included in the SSTs of epoch1. But in the current main branch, the data of these to-be-dropped actors is unlikely to be included in the SSTs of epoch1, because it may be dropped as soon as the instance is destroyed, before sync(epoch1) is called. For the same reason, we should also preserve the spilling task.

The current main branch works because in a configuration change we have a pause barrier, say epoch0, and we can ensure that no data is written between epoch0 and epoch1; therefore, even when the instances are dropped, we won't lose data.

For a normal drop of a streaming job, it would be fine to drop the instance, cancel the related spilling tasks, and ignore the data loss. But given that we cannot distinguish whether dropping an instance is caused by dropping a streaming job or by a configuration change, we let it follow the same handling logic as configuration change. I think the benefit of uploading less data is not worth the added complexity. Besides, a spilling task usually mixes data from different tables, which may include a normal table and the table to be dropped; therefore, the spilling task may not really be cancelled when we call the previous remove_table_spill_tasks.
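To illustrate the last point, here is a hypothetical sketch (the `SpillTask` type and the retain-based logic are simplified assumptions, not the actual task manager code) of why "cancelling" spill tasks for one dropped table is not a real cancellation: a task that also carries other tables' data must keep running.

```rust
use std::collections::HashSet;

type TableId = u32;

/// Hypothetical simplified spill task: one upload covering several tables.
struct SpillTask {
    table_ids: HashSet<TableId>,
}

/// Only a task whose payload belongs exclusively to the dropped table can
/// actually be cancelled; mixed tasks are kept so that live tables' data
/// still gets uploaded.
fn remove_table_spill_tasks(tasks: &mut Vec<SpillTask>, dropped_table: TableId) {
    tasks.retain(|task| {
        let only_dropped_table =
            task.table_ids.len() == 1 && task.table_ids.contains(&dropped_table);
        // Keep (i.e. do not cancel) every task that mixes in other tables.
        !only_dropped_table
    });
}
```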

Collaborator

a spilling task usually mixes data from different tables, which may include a normal table and the table to be dropped

This is a good point. Thanks for the explanation.

@wenym1 force-pushed the yiming/lazily-remove-uploader-instance-data branch from bd50878 to 28ad9d0 on December 25, 2024 05:50
@hzxa21 (Collaborator) left a comment

LGTM

@wenym1 added this pull request to the merge queue on Dec 25, 2024
Merged via the queue into main with commit c40eb04 Dec 25, 2024
29 of 30 checks passed
@wenym1 deleted the yiming/lazily-remove-uploader-instance-data branch on December 25, 2024 09:15