deploy: ab93681
rzadp committed Feb 8, 2025
1 parent 231c4f7 commit 6723f1d
Showing 4 changed files with 32 additions and 8 deletions.
18 changes: 15 additions & 3 deletions print.html
Expand Up @@ -228,7 +228,7 @@ <h2 id="summary"><a class="header" href="#summary">Summary</a></h2>
<p>An off-chain approximation protocol should assign rewards based upon the approvals and availability work done by validators.</p>
<p>All validators track which approval votes they actually use, reporting the aggregate, after which an on-chain median computation gives a good approximation under byzantine assumptions. Approval checkers report aggregate information about which availability chunks they use too, but in availability we need a tit-for-tat game to enforce honesty, because approval committees could often bias results thanks to their small size.</p>
<h2 id="motivation"><a class="header" href="#motivation">Motivation</a></h2>
<p>We want all polkadot subsystems to be profitable for validators, because otherwise operators might profit from running modified code. In particular, almost all rewards in Kusama/Polkadot should come from work done securing parachains, primarily approval checking, but also backing, availability, and support of XCMP.</p>
<p>We want all or most polkadot subsystems to be profitable for validators, because otherwise operators might profit from running modified code. In particular, almost all rewards in Kusama/Polkadot should come from work done securing parachains, primarily approval checking, but also backing, availability, and support of XCMP.</p>
<p>Among these tasks, our highest priorities must be approval checks, which ensure soundness, and sending availability chunks to approval checkers. We prove backers must be paid strictly less than approval checkers.</p>
<p>At present though, <a href="https://wiki.polkadot.network/docs/maintain-guides-validator-payout">validators' rewards</a> have relatively little relationship to validators' operating costs, in terms of bandwidth and CPU time. Worse, polkadot's scaling makes us particularly vulnerable to &quot;no-shows&quot; caused by validators skipping their approval checks.</p>
<p>We're particularly concerned about hardware specs' impact upon the number of parachain cores. We've requested relatively low-spec machines so far, only four physical CPU cores, although some run even lower specs like only two physical CPU cores. Alone, rewards cannot fix our low-spec validator problem, but rewards and outreach together should have far more impact than either alone.</p>
Expand Down Expand Up @@ -256,6 +256,8 @@ <h3 id="collection"><a class="header" href="#collection">Collection</a></h3>
CandidateRewards {
/// Anyone who backed this parablock
backers: [AuthorityId; NumBackers],
/// Anyone whom we think no-showed, even if only briefly.
noshows: HashSet&lt;AuthorityId&gt;,
/// Anyone who sent us chunks for this candidate
downloaded_from: HashMap&lt;AuthorityId,u16&gt;,
/// Anyone to whom we sent chunks for this candidate
Expand All @@ -272,6 +274,8 @@ <h3 id="collection"><a class="header" href="#collection">Collection</a></h3>
pub struct ApprovalTallyLine {
/// Approvals by this validator which our approvals gadget used in marking candidates approved.
approval_usages: u32,
/// How many times we think this validator no-showed, even if only briefly.
noshows: u32,
/// Availability chunks we downloaded from this validator for our approval checks we used.
used_downloads: u32,
/// Availability chunks we uploaded to this validator whose approval checks we used.
Expand All @@ -285,6 +289,8 @@ <h3 id="messages"><a class="header" href="#messages">Messages</a></h3>
pub struct ApprovalTallyMessageLine {
/// Approvals by this validator which our approvals gadget used in marking candidates approved.
approval_usages: u32,
/// How many times we think this validator no-showed, even if only briefly.
noshows: u32,
/// Availability chunks we downloaded from this validator for our approval checks we used.
used_downloads: u32,
}
Expand All @@ -293,15 +299,19 @@ <h3 id="messages"><a class="header" href="#messages">Messages</a></h3>
pub struct ApprovalsTallyMessage(Vec&lt;ApprovalTallyMessageLine&gt;);
</code></pre>
<h3 id="rewards-compoutation"><a class="header" href="#rewards-compoutation">Rewards compoutation</a></h3>
<p>We compute the approvals rewards by taking the median of the <code>approval_usages</code> fields for each validator across all validators' <code>ApprovalsTallyMessage</code>s.</p>
<p>We compute the approvals rewards for each validator by taking the median of the <code>approval_usages</code> fields for each validator across all validators' <code>ApprovalsTallyMessage</code>s. We compute some <code>noshows_percentiles</code> for each validator similarly, but using a 2/3 percentile instead of the median.</p>
<pre><code>let mut approval_usages_medians = Vec::new();
let mut noshows_percentiles = Vec::new();
for i in 0..num_validators {
let mut v: Vec&lt;u32&gt; = approvals_tally_messages.iter().map(|atm| atm.0[i].approval_usages).collect();
v.sort();
approval_usages_medians.push(v[num_validators/2]);
let mut v: Vec&lt;u32&gt; = approvals_tally_messages.iter().map(|atm| atm.0[i].noshows).collect();
v.sort();
noshows_percentiles.push(v[num_validators/3]);
}
</code></pre>
<p>Assuming more than 50% honesty, these medians tell us how many approval votes came from each validator.</p>
<p>Assuming more than 50% honesty, these medians tell us how many approval votes came from each validator.</p>
<p>We re-weight the <code>used_downloads</code> from the <code>i</code>th validator by their median times the expected <code>f+1</code> chunks, divided by how many chunk downloads they claimed, and sum them.</p>
<pre><code>#[cfg(offchain)]
let mut my_missing_uploads: Vec&lt;u32&gt; = my_approvals_tally.iter().map(|l| l.used_uploads).collect();
Expand All @@ -317,6 +327,7 @@ <h3 id="rewards-compoutation"><a class="header" href="#rewards-compoutation">Rew
}
</code></pre>
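<p>As a minimal sketch (hypothetical names; assumes the medians computed above and the usual byzantine bound <code>f = (num_validators - 1) / 3</code>), the re-weighting described above might look like:</p>
<pre><code>// Hypothetical sketch of the re-weighting, not the final implementation.
let f = (num_validators - 1) / 3;
let mut reweighted_total_used_downloads = vec![0u64; num_validators];
for (i, atm) in approvals_tally_messages.iter().enumerate() {
    // Total chunk downloads the ith validator claimed across all providers.
    let claimed: u64 = atm.0.iter().map(|l| l.used_downloads as u64).sum();
    if claimed == 0 { continue; }
    // Scale claims so they sum to the median approvals times f+1 expected chunks.
    let expected = approval_usages_medians[i] as u64 * (f as u64 + 1);
    for (j, line) in atm.0.iter().enumerate() {
        reweighted_total_used_downloads[j] += line.used_downloads as u64 * expected / claimed;
    }
}
</code></pre>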
<p>We distribute rewards on-chain using <code>approval_usages_medians</code> and <code>reweighted_total_used_downloads</code>. Approval checkers could later change from whom they download chunks using <code>my_missing_uploads</code>.</p>
<p>We deduct a small amount of rewards using <code>noshows_percentiles</code> too, likely 1% of the rewards for an approval, but excuse some small number of no-shows, à la <code>noshows_percentiles[i].saturating_sub(MAX_NO_PENALTY_NOSHOWS)</code>.</p>
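<p>A minimal sketch of this deduction, with the grace threshold and reward figures purely illustrative assumptions:</p>
<pre><code>// Illustrative sketch only; the constants here are assumptions, not final values.
const MAX_NO_PENALTY_NOSHOWS: u32 = 2;
let excess_noshows = noshows_percentiles[i].saturating_sub(MAX_NO_PENALTY_NOSHOWS) as u64;
// Deduct roughly 1% of one approval's reward per excess no-show.
let deduction = (approval_reward / 100) * excess_noshows;
let final_reward = base_reward.saturating_sub(deduction);
</code></pre>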
<h3 id="strategies"><a class="header" href="#strategies">Strategies</a></h3>
<p>In theory, validators could adopt whatever strategy they like to penalize validators who stiff them on availability redistribution rewards, except they should not stiff back, only choose other availability providers. We discuss one good strategy below, but initially this could go unimplemented. </p>
<h2 id="explanation"><a class="header" href="#explanation">Explanation</a></h2>
Expand Down Expand Up @@ -344,6 +355,7 @@ <h3 id="approvals"><a class="header" href="#approvals">Approvals</a></h3>
<p>As discussed in <a href="https://hackmd.io/@rgbPIkIdTwSICPuAq67Jbw/S1fHcvXSF">this note</a>, we could compute these medians using the <a href="https://www.quora.com/Is-there-an-online-algorithm-to-calculate-the-median-of-a-stream-of-numbers-if-stream-elements-can-be-added-or-removed-at-any-point?share=1">on-line algorithm</a> if substrate had a nice priority queue.</p>
<p>We never achieve true consensus on approval checkers and their approval votes. Yet, our approval assignment loop gives a rough consensus, under our Byzantine assumption and some synchrony assumption. It then follows that mis-reporting by malicious validators should not appreciably alter the median $\alpha_v$ and hence rewards.</p>
<p>We never tally used approval assignments to candidate equivocations or other forks. Any validator should always conclude whatever approval checks it begins, even on other forks, but we expect relay chain equivocations should be vanishingly rare, and sassafras should make forks uncommon.</p>
<p>We account for no-shows similarly, and deduct a much smaller amount of rewards, but require a 2/3 percentile level, not just a median.</p>
<h3 id="availability-redistribution"><a class="header" href="#availability-redistribution">Availability redistribution</a></h3>
<p>As approval checkers could easily perform useless checks, we shall reward availability providers for the availability chunks they provide that resulted in useful approval checks. We enforce honesty using a tit-for-tat mechanism because chunk transfers are inherently subjective.</p>
<p>An approval checker reconstructs the full parachain block by downloading distinct $f+1$ chunks from other validators, where at most $f$ validators are byzantine, out of the $n \ge 3 f + 1$ total validators. In downloading chunks, validators prefer the $f+1$ systemic chunks over the non-systemic chunks, and prefer fetching from validators who already voted valid, like backing checkers. It follows some validators should receive credit for more than one chunk per candidate.</p>
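<p>As a worked example with purely illustrative numbers: if $n = 1000$ then the byzantine bound gives $f = 333$, since $3 \cdot 333 + 1 = 1000$, so an approval checker needs $f + 1 = 334$ distinct chunks to reconstruct, preferring the $334$ systemic chunks and validators who already voted valid.</p>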
Expand Down
18 changes: 15 additions & 3 deletions proposed/0000-rewards.html
2 changes: 1 addition & 1 deletion searchindex.js


2 changes: 1 addition & 1 deletion searchindex.json

