
chore: integrate peerdas-kzg #6923

Draft · wants to merge 33 commits into base: peerDAS
33 commits (showing changes from all commits)
08884f8
feat: placeholder PR for electra
g11tech Jan 24, 2024
b1481f1
feat: implement peerDAS on electra
g11tech Jan 24, 2024
7dab01e
fix: docker build issue for c-kzg
matthewkeil Jun 21, 2024
ae1a03e
rebase fixes
g11tech Jun 26, 2024
38fc1b4
fix: update c-zkg install workflow
matthewkeil Jun 26, 2024
2b26a4e
feat: add trustedSetupPrecompute cli flag
matthewkeil Jun 26, 2024
33e8ba2
fix: update trusted-setup for testing
matthewkeil Jun 26, 2024
c210176
fix: update c-zkg install workflow to remove sudo
matthewkeil Jun 26, 2024
952b65f
add dep
kevaundray Jun 29, 2024
ca67a07
update peerdas lib
kevaundray Jun 30, 2024
1cb31a4
add rough idea
kevaundray Jun 30, 2024
f2d2917
modify code to call the exported computeCellsAndProofs method
kevaundray Jun 30, 2024
83a137c
add false to use ckzg
kevaundray Jun 30, 2024
ae0c46c
add todo re lib switching
kevaundray Jun 30, 2024
50dfe26
revert yarn.lock
kevaundray Jun 30, 2024
89ca708
yarn add "@crate-crypto/[email protected]" --exact
kevaundray Jun 30, 2024
02f7c67
use tsc --define to set the library to use
kevaundray Jun 30, 2024
4805a2e
feat: placeholder PR for electra
g11tech Jan 24, 2024
499d93c
feat: implement peerDAS on electra
g11tech Jan 24, 2024
156ef53
fix: docker build issue for c-kzg
matthewkeil Jun 21, 2024
47eedae
feat: get various sync mechanisms working with/without sharded data
g11tech Jul 14, 2024
d423004
feat: add the modifications to work with devnet2
g11tech Jul 14, 2024
a0c5d27
fix: refactor to add and use nodeid computation and clear out nodeid …
g11tech Jul 16, 2024
c7f6341
fix the types/test
g11tech Aug 9, 2024
81aaeb5
feat: add and use metadatav3 for peer custody subnet
g11tech Aug 12, 2024
039f170
Merge branch 'peerDAS' into kw/peerdas-rust-lib-integration
kevaundray Aug 16, 2024
313af86
use v0.4.1 plus name-change
kevaundray Aug 16, 2024
5e9c747
refactor: package name has changed
kevaundray Aug 16, 2024
f8f5bf8
yarn lint
kevaundray Aug 16, 2024
5557728
Fix: github has problems merging branch due to force merge
kevaundray Aug 16, 2024
c03f297
revert lockfile and regenerate by running `yarn`
kevaundray Aug 16, 2024
3b69703
change import to node-eth-kzg
kevaundray Aug 16, 2024
0f04d36
update API and add notes on how to upgrade
kevaundray Aug 16, 2024
4 changes: 2 additions & 2 deletions Dockerfile
Original file line number Diff line number Diff line change
@@ -4,7 +4,7 @@
FROM --platform=${BUILDPLATFORM:-amd64} node:22.4-slim as build_src
ARG COMMIT
WORKDIR /usr/app
RUN apt-get update && apt-get install -y g++ make python3 python3-setuptools && apt-get clean && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y git rsync g++ make python3 python3-setuptools && apt-get clean && rm -rf /var/lib/apt/lists/*

COPY . .

@@ -23,7 +23,7 @@ RUN cd packages/cli && GIT_COMMIT=${COMMIT} yarn write-git-data
# Note: This step is redundant for the host arch
FROM node:22.4-slim as build_deps
WORKDIR /usr/app
RUN apt-get update && apt-get install -y g++ make python3 python3-setuptools && apt-get clean && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y git rsync g++ make python3 python3-setuptools && apt-get clean && rm -rf /var/lib/apt/lists/*

COPY --from=build_src /usr/app .

5 changes: 3 additions & 2 deletions packages/beacon-node/package.json
@@ -105,6 +105,7 @@
"@chainsafe/prometheus-gc-stats": "^1.0.0",
"@chainsafe/ssz": "^0.17.0",
"@chainsafe/threads": "^1.11.1",
"@crate-crypto/node-eth-kzg": "0.4.1",
"@ethersproject/abi": "^5.7.0",
"@fastify/bearer-auth": "^9.0.0",
"@fastify/cors": "^8.2.1",
@@ -132,7 +133,7 @@
"@lodestar/utils": "^1.21.0",
"@lodestar/validator": "^1.21.0",
"@multiformats/multiaddr": "^12.1.3",
"c-kzg": "^2.1.2",
"c-kzg": "matthewkeil/c-kzg-4844#13aa01464479aa7c1ccafa64d52cbc17699ffa07",
"datastore-core": "^9.1.1",
"datastore-level": "^10.1.1",
"deepmerge": "^4.3.1",
@@ -167,4 +168,4 @@
"beacon",
"blockchain"
]
}
}
49 changes: 39 additions & 10 deletions packages/beacon-node/src/api/impl/beacon/blocks/index.ts
@@ -6,9 +6,10 @@ import {
reconstructFullBlockOrContents,
signedBeaconBlockToBlinded,
} from "@lodestar/state-transition";
import {ForkExecution, SLOTS_PER_HISTORICAL_ROOT, isForkExecution} from "@lodestar/params";
import {ForkExecution, SLOTS_PER_HISTORICAL_ROOT, isForkExecution, ForkName} from "@lodestar/params";
import {sleep, fromHex, toHex} from "@lodestar/utils";
import {
electra,
deneb,
isSignedBlockContents,
ProducedBlockSource,
@@ -23,10 +24,13 @@ import {
BlockInput,
BlobsSource,
BlockInputDataBlobs,
BlockInputDataDataColumns,
DataColumnsSource,
BlockInputData,
} from "../../../../chain/blocks/types.js";
import {promiseAllMaybeAsync} from "../../../../util/promises.js";
import {isOptimisticBlock} from "../../../../util/forkChoice.js";
import {computeBlobSidecars} from "../../../../util/blobs.js";
import {computeBlobSidecars, computeDataColumnSidecars} from "../../../../util/blobs.js";
import {BlockError, BlockErrorCode, BlockGossipError} from "../../../../chain/errors/index.js";
import {OpSource} from "../../../../metrics/validatorMonitor.js";
import {NetworkEvent} from "../../../../network/index.js";
@@ -65,17 +69,40 @@ export function getBeaconBlockApi({
opts: PublishBlockOpts = {}
) => {
const seenTimestampSec = Date.now() / 1000;
let blockForImport: BlockInput, signedBlock: SignedBeaconBlock, blobSidecars: deneb.BlobSidecars;
let blockForImport: BlockInput,
signedBlock: SignedBeaconBlock,
blobSidecars: deneb.BlobSidecars,
dataColumnSidecars: electra.DataColumnSidecars;

if (isSignedBlockContents(signedBlockOrContents)) {
({signedBlock} = signedBlockOrContents);
blobSidecars = computeBlobSidecars(config, signedBlock, signedBlockOrContents);
const blockData = {
fork: config.getForkName(signedBlock.message.slot),
blobs: blobSidecars,
blobsSource: BlobsSource.api,
blobsBytes: blobSidecars.map(() => null),
} as BlockInputDataBlobs;
const fork = config.getForkName(signedBlock.message.slot);
let blockData: BlockInputData;
if (fork === ForkName.electra) {
dataColumnSidecars = computeDataColumnSidecars(config, signedBlock, signedBlockOrContents);
blockData = {
fork,
dataColumnsLen: dataColumnSidecars.length,
// dataColumnsIndex is 1-based: the ith custody column is present at dataColumns[dataColumnsIndex[i-1]]
dataColumnsIndex: new Uint8Array(Array.from({length: dataColumnSidecars.length}, (_, j) => 1 + j)),
dataColumns: dataColumnSidecars,
dataColumnsBytes: dataColumnSidecars.map(() => null),
dataColumnsSource: DataColumnsSource.api,
} as BlockInputDataDataColumns;
blobSidecars = [];
} else if (fork === ForkName.deneb) {
blobSidecars = computeBlobSidecars(config, signedBlock, signedBlockOrContents);
blockData = {
fork,
blobs: blobSidecars,
blobsSource: BlobsSource.api,
blobsBytes: blobSidecars.map(() => null),
} as BlockInputDataBlobs;
dataColumnSidecars = [];
} else {
throw Error(`Invalid data fork=${fork} for publish`);
}

blockForImport = getBlockInput.availableData(
config,
signedBlock,
@@ -87,6 +114,7 @@
} else {
signedBlock = signedBlockOrContents;
blobSidecars = [];
dataColumnSidecars = [];
blockForImport = getBlockInput.preData(config, signedBlock, BlockSource.api, context?.sszBytes ?? null);
}

@@ -221,6 +249,7 @@
// b) they might require more hops to reach recipients in peerDAS kind of setup where
// blobs might need to hop between nodes because of partial subnet subscription
...blobSidecars.map((blobSidecar) => () => network.publishBlobSidecar(blobSidecar)),
...dataColumnSidecars.map((dataColumnSidecar) => () => network.publishDataColumnSidecar(dataColumnSidecar)),
() => network.publishBeaconBlock(signedBlock) as Promise<unknown>,
() =>
// there is no rush to persist block since we published it to gossip anyway
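In the electra branch above, dataColumnsIndex is filled with a 1-based identity mapping over the produced sidecars, since the publishing node custodies every column it just computed. A standalone sketch of that construction (buildDataColumnsIndex is a hypothetical helper name, not part of the Lodestar API):

```typescript
// Sketch of the 1-based dataColumnsIndex construction used in the electra
// branch above, assuming every produced column is custodied (identity mapping).
// `buildDataColumnsIndex` is a hypothetical helper, not a Lodestar export.
function buildDataColumnsIndex(dataColumnsLen: number): Uint8Array {
  // dataColumnsIndex[i - 1] gives the position of the ith custody column
  // inside the dataColumns array, so the identity mapping is 1..dataColumnsLen
  return new Uint8Array(Array.from({length: dataColumnsLen}, (_, j) => 1 + j));
}

const index = buildDataColumnsIndex(4);
console.log(Array.from(index)); // [1, 2, 3, 4]
```

A node that custodies only a subset of columns would instead fill this array with the positions of its custody columns, which is why the indirection exists at all.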
74 changes: 61 additions & 13 deletions packages/beacon-node/src/chain/blocks/importBlock.ts
@@ -1,6 +1,13 @@
import {toHexString} from "@chainsafe/ssz";
import {capella, ssz, altair, BeaconBlock} from "@lodestar/types";
import {ForkLightClient, ForkSeq, INTERVALS_PER_SLOT, MAX_SEED_LOOKAHEAD, SLOTS_PER_EPOCH} from "@lodestar/params";
import {
ForkName,
ForkLightClient,
ForkSeq,
INTERVALS_PER_SLOT,
MAX_SEED_LOOKAHEAD,
SLOTS_PER_EPOCH,
} from "@lodestar/params";
import {
CachedBeaconStateAltair,
computeEpochAtSlot,
@@ -101,6 +108,39 @@ export async function importBlock(
this.metrics?.importBlock.bySource.inc({source});
this.logger.verbose("Added block to forkchoice and state cache", {slot: blockSlot, root: blockRootHex});

// We want to import the block asap so call all event handlers in the next event loop
callInNextEventLoop(async () => {
this.emitter.emit(routes.events.EventType.block, {
block: blockRootHex,
slot: blockSlot,
executionOptimistic: blockSummary != null && isOptimisticBlock(blockSummary),
});

// dataPromise will not end up here, but preDeneb could. In future we might also allow syncing
// out-of-data-range blocks and import them into forkchoice, although one would not be able to
// attest and propose with such a head, similar to optimistic sync
if (blockInput.type === BlockInputType.availableData) {
const {blockData} = blockInput;
if (blockData.fork === ForkName.deneb) {
const {blobsSource, blobs} = blockData;

this.metrics?.importBlock.blobsBySource.inc({blobsSource});
for (const blobSidecar of blobs) {
const {index, kzgCommitment} = blobSidecar;
this.emitter.emit(routes.events.EventType.blobSidecar, {
blockRoot: blockRootHex,
slot: blockSlot,
index,
kzgCommitment: toHexString(kzgCommitment),
versionedHash: toHexString(kzgCommitmentToVersionedHash(kzgCommitment)),
});
}
} else if (blockData.fork === ForkName.electra) {
// TODO peerDAS build and emit the event for the datacolumns
}
}
});

// 3. Import attestations to fork choice
//
// - For each attestation
@@ -424,16 +464,20 @@
blockInput.type === BlockInputType.availableData &&
this.emitter.listenerCount(routes.events.EventType.blobSidecar)
) {
const {blobs} = blockInput.blockData;
for (const blobSidecar of blobs) {
const {index, kzgCommitment} = blobSidecar;
this.emitter.emit(routes.events.EventType.blobSidecar, {
blockRoot: blockRootHex,
slot: blockSlot,
index,
kzgCommitment: toHexString(kzgCommitment),
versionedHash: toHexString(kzgCommitmentToVersionedHash(kzgCommitment)),
});
if (blockInput.blockData.fork === ForkName.deneb) {
const {blobs} = blockInput.blockData;
for (const blobSidecar of blobs) {
const {index, kzgCommitment} = blobSidecar;
this.emitter.emit(routes.events.EventType.blobSidecar, {
blockRoot: blockRootHex,
slot: blockSlot,
index,
kzgCommitment: toHexString(kzgCommitment),
versionedHash: toHexString(kzgCommitmentToVersionedHash(kzgCommitment)),
});
}
} else {
// TODO add event for datacolumns
}
}
});
@@ -454,8 +498,12 @@
// out-of-data-range blocks and import them into forkchoice, although one would not be able to
// attest and propose with such a head, similar to optimistic sync
if (blockInput.type === BlockInputType.availableData) {
const {blobsSource} = blockInput.blockData;
this.metrics?.importBlock.blobsBySource.inc({blobsSource});
if (blockInput.blockData.fork === ForkName.deneb) {
const {blobsSource} = blockInput.blockData;
this.metrics?.importBlock.blobsBySource.inc({blobsSource});
} else {
// TODO add data columns metrics
}
}

const advancedSlot = this.clock.slotWithFutureTolerance(REPROCESS_MIN_TIME_TO_NEXT_SLOT_SEC);
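The blobSidecar events emitted above derive a versioned hash from each KZG commitment. A minimal re-derivation of kzgCommitmentToVersionedHash as specified by EIP-4844 (a sketch for orientation, not the Lodestar implementation itself):

```typescript
import {createHash} from "node:crypto";

// Per EIP-4844, the versioned hash of a KZG commitment is sha256(commitment)
// with the first byte replaced by the version tag 0x01. The version byte lets
// a future fork swap the commitment scheme without changing the hash width.
const VERSIONED_HASH_VERSION_KZG = 0x01;

function kzgCommitmentToVersionedHash(commitment: Uint8Array): Uint8Array {
  const versioned = new Uint8Array(createHash("sha256").update(commitment).digest());
  versioned[0] = VERSIONED_HASH_VERSION_KZG; // overwrite byte 0 with the version tag
  return versioned; // 32 bytes
}
```

Commitments here are 48-byte BLS12-381 G1 points; the resulting 32-byte versioned hash is what the execution layer sees in blob transactions.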
54 changes: 48 additions & 6 deletions packages/beacon-node/src/chain/blocks/types.ts
@@ -1,6 +1,6 @@
import {CachedBeaconStateAllForks, computeEpochAtSlot} from "@lodestar/state-transition";
import {MaybeValidExecutionStatus, DataAvailabilityStatus} from "@lodestar/fork-choice";
import {deneb, Slot, RootHex, SignedBeaconBlock} from "@lodestar/types";
import {deneb, Slot, RootHex, SignedBeaconBlock, electra, ColumnIndex} from "@lodestar/types";
import {ForkSeq, ForkName} from "@lodestar/params";
import {ChainForkConfig} from "@lodestar/config";

@@ -29,23 +29,45 @@ export enum BlobsSource {
byRoot = "req_resp_by_root",
}

export enum DataColumnsSource {
gossip = "gossip",
api = "api",
byRange = "req_resp_by_range",
byRoot = "req_resp_by_root",
}

export enum GossipedInputType {
block = "block",
blob = "blob",
dataColumn = "dataColumn",
}

type BlobsCacheMap = Map<number, {blobSidecar: deneb.BlobSidecar; blobBytes: Uint8Array | null}>;
export type BlobsCacheMap = Map<number, {blobSidecar: deneb.BlobSidecar; blobBytes: Uint8Array | null}>;
export type DataColumnsCacheMap = Map<
number,
{dataColumnSidecar: electra.DataColumnSidecar; dataColumnBytes: Uint8Array | null}
>;

type ForkBlobsInfo = {fork: ForkName.deneb};
type BlobsData = {blobs: deneb.BlobSidecars; blobsBytes: (Uint8Array | null)[]; blobsSource: BlobsSource};
export type BlockInputDataBlobs = ForkBlobsInfo & BlobsData;
export type BlockInputData = BlockInputDataBlobs;

export type BlockInputBlobs = {blobs: deneb.BlobSidecars; blobsBytes: (Uint8Array | null)[]; blobsSource: BlobsSource};
type Availability<T> = {availabilityPromise: Promise<T>; resolveAvailability: (data: T) => void};
type ForkDataColumnsInfo = {fork: ForkName.electra};
type DataColumnsData = {
// marks which columns are to be custodied
dataColumnsLen: number;
dataColumnsIndex: Uint8Array;
dataColumns: electra.DataColumnSidecars;
dataColumnsBytes: (Uint8Array | null)[];
dataColumnsSource: DataColumnsSource;
};
export type BlockInputDataDataColumns = ForkDataColumnsInfo & DataColumnsData;
export type BlockInputData = BlockInputDataBlobs | BlockInputDataDataColumns;

type Availability<T> = {availabilityPromise: Promise<T>; resolveAvailability: (data: T) => void};
type CachedBlobs = {blobsCache: BlobsCacheMap} & Availability<BlockInputDataBlobs>;
export type CachedData = ForkBlobsInfo & CachedBlobs;
type CachedDataColumns = {dataColumnsCache: DataColumnsCacheMap} & Availability<BlockInputDataDataColumns>;
export type CachedData = (ForkBlobsInfo & CachedBlobs) | (ForkDataColumnsInfo & CachedDataColumns);

export type BlockInput = {block: SignedBeaconBlock; source: BlockSource; blockBytes: Uint8Array | null} & (
| {type: BlockInputType.preData | BlockInputType.outOfRangeData}
@@ -161,6 +183,26 @@ export function getBlockInputBlobs(blobsCache: BlobsCacheMap): Omit<BlobsData, "
return {blobs, blobsBytes};
}

export function getBlockInputDataColumns(
dataColumnsCache: DataColumnsCacheMap,
columnIndexes: ColumnIndex[]
): Omit<DataColumnsData, "dataColumnsLen" | "dataColumnsIndex" | "dataColumnsSource"> {
const dataColumns = [];
const dataColumnsBytes = [];

for (const index of columnIndexes) {
const dataColumnCache = dataColumnsCache.get(index);
if (dataColumnCache === undefined) {
// check if the index is correct as per the custody columns
throw Error(`Missing dataColumnCache at index=${index}`);
}
const {dataColumnSidecar, dataColumnBytes} = dataColumnCache;
dataColumns.push(dataColumnSidecar);
dataColumnsBytes.push(dataColumnBytes);
}
return {dataColumns, dataColumnsBytes};
}

export enum AttestationImportOpt {
Skip,
Force,
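BlockInputData above is a discriminated union tagged by fork, which is what lets the call sites in this PR branch on blobs vs. data columns without casts. A simplified sketch of how that tag narrows the variant (the types here are illustrative stand-ins, not the real Lodestar definitions):

```typescript
// Simplified stand-ins for the ForkBlobsInfo/ForkDataColumnsInfo unions above.
// The `fork` field is the discriminant that TypeScript narrows on.
type BlobsData = {fork: "deneb"; blobs: string[]};
type ColumnsData = {fork: "electra"; dataColumns: string[]};
type BlockInputData = BlobsData | ColumnsData;

function countSidecars(data: BlockInputData): number {
  // In each branch the compiler knows which variant `data` is, so accessing
  // `blobs` or `dataColumns` is type-safe without any cast
  return data.fork === "deneb" ? data.blobs.length : data.dataColumns.length;
}
```

This is the same pattern the PR applies to CachedData: a shared tag plus per-fork payloads keeps pre-PeerDAS blob handling and the new column handling in one pipeline.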
@@ -3,10 +3,19 @@ import {DataAvailabilityStatus} from "@lodestar/fork-choice";
import {ChainForkConfig} from "@lodestar/config";
import {deneb, UintNum64} from "@lodestar/types";
import {Logger} from "@lodestar/utils";
import {ForkName} from "@lodestar/params";
import {BlockError, BlockErrorCode} from "../errors/index.js";
import {validateBlobSidecars} from "../validation/blobSidecar.js";
import {validateDataColumnsSidecars} from "../validation/dataColumnSidecar.js";
import {Metrics} from "../../metrics/metrics.js";
import {BlockInput, BlockInputType, ImportBlockOpts, BlobSidecarValidation, getBlockInput} from "./types.js";
import {
BlockInput,
BlockInputType,
ImportBlockOpts,
BlobSidecarValidation,
getBlockInput,
BlockInputData,
} from "./types.js";

// we can now wait for full 12 seconds because unavailable block sync will try pulling
// the blobs from the network anyway after 500ms of seeing the block
@@ -88,27 +97,37 @@ async function maybeValidateBlobs(
// run full validation
const {block} = blockInput;
const blockSlot = block.message.slot;

const blobsData =
blockInput.type === BlockInputType.availableData
? blockInput.blockData
: await raceWithCutoff(chain, blockInput, blockInput.cachedData.availabilityPromise);
const {blobs} = blobsData;

const {blobKzgCommitments} = (block as deneb.SignedBeaconBlock).message.body;
const beaconBlockRoot = chain.config.getForkTypes(blockSlot).BeaconBlock.hashTreeRoot(block.message);

// if the blob sidecars have been individually verified then we can skip the kzg proof check
// but other checks to match blobs with block data still need to be performed
const skipProofsCheck = opts.validBlobSidecars === BlobSidecarValidation.Individual;
validateBlobSidecars(blockSlot, beaconBlockRoot, blobKzgCommitments, blobs, {skipProofsCheck});
const blockData =
blockInput.type === BlockInputType.availableData
? blockInput.blockData
: await raceWithCutoff(
chain,
blockInput,
blockInput.cachedData.availabilityPromise as Promise<BlockInputData>
);

if (blockData.fork === ForkName.deneb) {
const {blobs} = blockData;

// if the blob sidecars have been individually verified then we can skip the kzg proof check
// but other checks to match blobs with block data still need to be performed
const skipProofsCheck = opts.validBlobSidecars === BlobSidecarValidation.Individual;
validateBlobSidecars(blockSlot, beaconBlockRoot, blobKzgCommitments, blobs, {skipProofsCheck});
} else if (blockData.fork === ForkName.electra) {
const {dataColumns} = blockData;
const skipProofsCheck = opts.validBlobSidecars === BlobSidecarValidation.Individual;
// might require numColumns, custodyColumns from blockData as input to below
validateDataColumnsSidecars(blockSlot, beaconBlockRoot, blobKzgCommitments, dataColumns, {skipProofsCheck});
}

const availableBlockInput = getBlockInput.availableData(
chain.config,
blockInput.block,
blockInput.source,
blockInput.blockBytes,
blobsData
blockData
);
return {dataAvailabilityStatus: DataAvailabilityStatus.Available, availableBlockInput: availableBlockInput};
}
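The electra branch above hands the column sidecars to validateDataColumnsSidecars for matching against the block. A hedged sketch of the basic slot/index consistency checks such a validator performs before any KZG work (NUMBER_OF_COLUMNS and the sidecar shape here are assumptions for illustration, not the spec types):

```typescript
// Assumed column count; the PeerDAS spec parameterizes this per network.
const NUMBER_OF_COLUMNS = 128;

// Illustrative subset of a DataColumnSidecar: just the fields these checks need.
type DataColumnSidecarLike = {index: number; slot: number};

function checkDataColumnIndexes(blockSlot: number, sidecars: DataColumnSidecarLike[]): void {
  for (const {index, slot} of sidecars) {
    // every sidecar must belong to the block being imported
    if (slot !== blockSlot) throw Error(`slot mismatch: sidecar=${slot} block=${blockSlot}`);
    // column indexes live in [0, NUMBER_OF_COLUMNS)
    if (index < 0 || index >= NUMBER_OF_COLUMNS) throw Error(`invalid column index=${index}`);
  }
}
```

The real validator additionally verifies the KZG cell proofs against blobKzgCommitments, which is exactly the step skipProofsCheck elides when sidecars were already verified individually on gossip.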