
Fix chunk_seq_no wrap issue. #2952

Open · wants to merge 4 commits into main from chunk-wrap
Conversation

Li-Aaron (Contributor) commented Jan 13, 2025

Fix #2875

@Li-Aaron Li-Aaron closed this Jan 13, 2025
@Li-Aaron Li-Aaron deleted the chunk-wrap branch January 13, 2025 02:16
@Li-Aaron Li-Aaron restored the chunk-wrap branch January 13, 2025 02:16
@Li-Aaron Li-Aaron reopened this Jan 13, 2025
@Li-Aaron Li-Aaron marked this pull request as draft January 13, 2025 02:59
@Li-Aaron Li-Aaron force-pushed the chunk-wrap branch 3 times, most recently from 3c503f8 to 2f8dd0c, January 13, 2025 06:07
@Li-Aaron Li-Aaron marked this pull request as ready for review January 14, 2025 00:44
Review comment (Member) on:

```c
        spdm_context->connection_info.capability.data_transfer_size,
        spdm_context->local_context.capability.sender_data_transfer_size);

    max_chunk_data_transfer_size =
```

> we should also consult max_spdm_msg_size.

Review comment (Member) on:

```c
    min_data_transfer_size = LIBSPDM_MIN(
        spdm_context->connection_info.capability.data_transfer_size,
        spdm_context->local_context.capability.sender_data_transfer_size);

    /* Fail if exceed max chunks */
    max_chunk_data_transfer_size =
```

> we should also consult max_spdm_msg_size.

Li-Aaron: (this comment was marked as spam)

@Li-Aaron Li-Aaron marked this pull request as draft January 14, 2025 01:58
Li-Aaron (Contributor, Author) commented:

The newly added test case for chunk get on the requester side takes ~90 s when output is printed to the console, or ~10 s when printed to a file.

@Li-Aaron Li-Aaron marked this pull request as ready for review January 16, 2025 09:47
steven-bellock (Contributor) commented:

When the Requester receives the first CHUNK_RESPONSE it can use LargeMessageSize to determine whether ChunkSeqNo will wrap, assuming the size of each chunk is equal to the Requester's DataTransferSize. A similar calculation can take place on the Responder side when it receives the first CHUNK_SEND.

Li-Aaron (Contributor, Author) commented Jan 20, 2025

> When the Requester receives the first CHUNK_RESPONSE it can use LargeMessageSize to determine whether ChunkSeqNo will wrap, assuming the size of each chunk is equal to the Requester's DataTransferSize. A similar calculation can take place on the Responder side when it receives the first CHUNK_SEND.

In spec line 812:

> ChunkSize shall be a value where the total size of CHUNK_SEND or CHUNK_RESPONSE shall be from MinDataTransferSize to the DataTransferSize of the receiving SPDM endpoint.

This means the Responder may send chunk messages with a ChunkSize smaller than the Requester's DataTransferSize.

steven-bellock (Contributor) commented:

Are you saying that even if the check passes, ChunkSeqNo may still wrap? That is correct. However, to detect that at runtime the Requester would have to retrieve up to 65536 messages, whereas if the up-front check fails the Requester knows from the beginning that ChunkSeqNo would wrap and does not spend the time retrieving all those messages only for it to wrap. It's a performance improvement. The original check is still needed.

Li-Aaron (Contributor, Author) commented:

Got it, it makes sense to add an additional check to improve performance.

Labels: none yet
Projects: none yet
Successfully merging this pull request may close these issues: ChunkSeqNo should not wrap
3 participants