feat: ZK Stack framework MVP (matter-labs#880)

⚠️ Please read the description before reviewing. ⚠️ 

This is the ZK Stack framework MVP, representing the core functionality
of the new framework.
A lot of things are not polished yet: it needs better logging/reporting,
the interfaces can be improved, the naming is off in some places (and I
haven't decided on the usage of ZK Stack / zkSync there), we need more
tests and examples, there is no README.md yet, etc.

Additionally, only a few resources and tasks are implemented right now.
The reasons for that:
- Adapting the existing code to a single interface takes time, as most
of the tasks/resources were created in an ad-hoc manner.
- I wanted to focus on the framework functionality here; the
implementations can come later as separate PRs.
- I didn't want the diff to be too big.

In its current form, the framework contains a showcase of what
interacting with it looks like, and generally it seems to work fine.
For more details on the motivation and goals, please read the spec.

With that in mind, *any* feedback is appreciated, but I don't guarantee
that I'll address it in this PR. Here I'll focus on making sure that the
framework works and that there is no major disagreement w.r.t. its
design; after that there will be N follow-up PRs bringing the missing
functionality and polishing the code.
popzxc authored Jan 23, 2024
1 parent 8e1009f commit 3e5c528
Showing 27 changed files with 1,267 additions and 13 deletions.
22 changes: 22 additions & 0 deletions Cargo.lock

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions Cargo.toml
@@ -26,6 +26,7 @@ members = [
"core/lib/mempool",
"core/lib/merkle_tree",
"core/lib/mini_merkle_tree",
"core/lib/node",
"core/lib/object_store",
"core/lib/prometheus_exporter",
"core/lib/queued_job_processor",
8 changes: 8 additions & 0 deletions checks-config/era.dic
@@ -881,3 +881,11 @@ plookup
shivini
EIP4844
KZG
healthcheck
healthchecks
after_node_shutdown
runnable
downcasting
parameterized
reimplementation
composability
10 changes: 10 additions & 0 deletions core/lib/dal/src/connection/mod.rs
@@ -11,6 +11,7 @@ use crate::{metrics::CONNECTION_METRICS, StorageProcessor};
pub mod holder;

/// Builder for [`ConnectionPool`]s.
#[derive(Clone)]
pub struct ConnectionPoolBuilder {
database_url: String,
max_size: u32,
@@ -65,6 +66,15 @@ impl ConnectionPoolBuilder {
max_size: self.max_size,
})
}

/// Builds a connection pool that has a single connection.
pub async fn build_singleton(&self) -> anyhow::Result<ConnectionPool> {
let singleton_builder = Self {
max_size: 1,
..self.clone()
};
singleton_builder.build().await
}
}

#[derive(Clone)]
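Not part of the diff: a minimal sketch of how the new build_singleton helper is meant to be used, mirroring the builder construction from the main_node.rs example below; the example_pools function name is illustrative.

use zksync_config::PostgresConfig;
use zksync_dal::ConnectionPool;
use zksync_env_config::FromEnv;

async fn example_pools() -> anyhow::Result<()> {
    // Builder construction, as in the example node below.
    let config = PostgresConfig::from_env()?;
    let builder = ConnectionPool::builder(config.master_url()?, config.max_connections()?);
    // Regular pool, sized by `max_connections`.
    let _pool = builder.build().await?;
    // Single-connection pool for components that only need occasional DB access.
    let _singleton = builder.build_singleton().await?;
    Ok(())
}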
9 changes: 6 additions & 3 deletions core/lib/health_check/src/lib.rs
@@ -80,10 +80,13 @@ pub struct AppHealth {

impl AppHealth {
/// Aggregates health info from the provided checks.
pub async fn new(health_checks: &[Box<dyn CheckHealth>]) -> Self {
pub async fn new<T: AsRef<dyn CheckHealth>>(health_checks: &[T]) -> Self {
let check_futures = health_checks.iter().map(|check| {
let check_name = check.name();
check.check_health().map(move |health| (check_name, health))
let check_name = check.as_ref().name();
check
.as_ref()
.check_health()
.map(move |health| (check_name, health))
});
let components: HashMap<_, _> = future::join_all(check_futures).await.into_iter().collect();

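Not part of the diff: a sketch of what the relaxed bound enables. AppHealth::new now accepts any slice of values viewable as dyn CheckHealth, e.g. the HealthCheckResource wrapper introduced later in this commit (module path inferred from the file layout); Box<dyn CheckHealth> keeps working, since Box<T> implements AsRef<T>.

use zksync_health_check::AppHealth;
use zksync_node::implementations::resource::healthcheck::HealthCheckResource;

async fn aggregate(checks: &[HealthCheckResource]) -> AppHealth {
    // `HealthCheckResource` implements `AsRef<dyn CheckHealth>`, so no boxing is needed.
    AppHealth::new(checks).await
}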
31 changes: 31 additions & 0 deletions core/lib/node/Cargo.toml
@@ -0,0 +1,31 @@
[package]
name = "zksync_node"
version = "0.1.0"
edition = "2018"
authors = ["The Matter Labs Team <[email protected]>"]
homepage = "https://zksync.io/"
repository = "https://github.com/matter-labs/zksync-era"
license = "MIT OR Apache-2.0"
keywords = ["blockchain", "zksync"]
categories = ["cryptography"]

[dependencies]
prometheus_exporter = { path = "../prometheus_exporter" }
zksync_types = { path = "../types" }
zksync_health_check = { path = "../health_check" }
zksync_dal = { path = "../dal" }
zksync_config = { path = "../config" }
zksync_object_store = { path = "../object_store" }
zksync_core = { path = "../zksync_core" }
zksync_storage = { path = "../storage" }

tracing = "0.1"
thiserror = "1"
async-trait = "0.1"
futures = "0.3"
anyhow = "1"
tokio = { version = "1", features = ["rt"] }

[dev-dependencies]
zksync_env_config = { path = "../env_config" }
vlog = { path = "../vlog" }
84 changes: 84 additions & 0 deletions core/lib/node/examples/main_node.rs
@@ -0,0 +1,84 @@
//! An incomplete example of what node initialization looks like.
//! This example defines a `ResourceProvider` that works using the main node env config, and
//! initializes a single task with a health check server.
use zksync_config::{configs::chain::OperationsManagerConfig, DBConfig, PostgresConfig};
use zksync_core::metadata_calculator::MetadataCalculatorConfig;
use zksync_dal::ConnectionPool;
use zksync_env_config::FromEnv;
use zksync_node::{
implementations::{
resource::pools::MasterPoolResource,
task::{
healtcheck_server::HealthCheckTaskBuilder,
metadata_calculator::MetadataCalculatorTaskBuilder,
},
},
node::ZkSyncNode,
resource::{Resource, ResourceId, ResourceProvider, StoredResource},
};

/// Resource provider for the main node.
/// It defines which resources the tasks will receive. This particular provider is stateless, i.e. it always uses
/// the main node env config and always knows which resources to provide.
/// The resource provider can be dynamic, however. For example, we can define a resource provider which chooses a
/// config load scheme (e.g. env variables / protobuf / yaml / toml) and decides which resources to provide
/// (e.g. whether we need MempoolIO or ExternalIO depending on some config).
#[derive(Debug)]
struct MainNodeResourceProvider;

impl MainNodeResourceProvider {
fn master_pool_resource() -> anyhow::Result<MasterPoolResource> {
let config = PostgresConfig::from_env()?;
let mut master_pool =
ConnectionPool::builder(config.master_url()?, config.max_connections()?);
master_pool.set_statement_timeout(config.statement_timeout());

Ok(MasterPoolResource::new(master_pool))
}
}

#[async_trait::async_trait]
impl ResourceProvider for MainNodeResourceProvider {
async fn get_resource(&self, name: &ResourceId) -> Option<Box<dyn StoredResource>> {
match name {
name if name == &MasterPoolResource::resource_id() => {
let resource =
Self::master_pool_resource().expect("Failed to create pools resource");
Some(Box::new(resource))
}
_ => None,
}
}
}

fn main() -> anyhow::Result<()> {
#[allow(deprecated)] // TODO (QIT-21): Use centralized configuration approach.
let log_format = vlog::log_format_from_env();
let _guard = vlog::ObservabilityBuilder::new()
.with_log_format(log_format)
.build();

// Create the node with the specified resource provider. We don't need to add any resources explicitly:
// the tasks will request what they actually need. The benefit is that we won't instantiate resources
// that are not used, which would be hard to ensure otherwise, since the task set is often dynamic.
let mut node = ZkSyncNode::new(MainNodeResourceProvider)?;

// Add the metadata calculator task.
let merkle_tree_env_config = DBConfig::from_env()?.merkle_tree;
let operations_manager_env_config = OperationsManagerConfig::from_env()?;
let metadata_calculator_config = MetadataCalculatorConfig::for_main_node(
&merkle_tree_env_config,
&operations_manager_env_config,
);
node.add_task(MetadataCalculatorTaskBuilder(metadata_calculator_config));

// Add the healthcheck server.
let healthcheck_config = zksync_config::ApiConfig::from_env()?.healthcheck;
node.add_task(HealthCheckTaskBuilder(healthcheck_config));

// Run the node until completion.
node.run()?;

Ok(())
}
6 changes: 6 additions & 0 deletions core/lib/node/src/implementations/mod.rs
@@ -0,0 +1,6 @@
//! Implementations of resources and tasks.
//! These are temporarily provided by the framework crate itself, but will be moved to separate crates
//! in the future.
pub mod resource;
pub mod task;
36 changes: 36 additions & 0 deletions core/lib/node/src/implementations/resource/healthcheck.rs
@@ -0,0 +1,36 @@
use std::{fmt, sync::Arc};

// Public re-exports from an external crate to minimize the required dependencies.
pub use zksync_health_check::{CheckHealth, ReactiveHealthCheck};

use crate::resource::Resource;

/// Wrapper for a generic health check.
#[derive(Clone)]
pub struct HealthCheckResource(Arc<dyn CheckHealth>);

impl HealthCheckResource {
pub fn new(check: impl CheckHealth + 'static) -> Self {
Self(Arc::new(check))
}
}

impl fmt::Debug for HealthCheckResource {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("HealthCheckResource")
.field("name", &self.0.name())
.finish_non_exhaustive()
}
}

impl Resource for HealthCheckResource {
fn resource_id() -> crate::resource::ResourceId {
"common/health_check".into()
}
}

impl AsRef<dyn CheckHealth> for HealthCheckResource {
fn as_ref(&self) -> &dyn CheckHealth {
self.0.as_ref()
}
}
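Not part of the diff: a hedged sketch of how a task could wrap its health check into this resource. It assumes ReactiveHealthCheck::new returns the check paired with an updater handle, as in the zksync_health_check crate; the component name is illustrative.

use zksync_node::implementations::resource::healthcheck::{HealthCheckResource, ReactiveHealthCheck};

fn health_check_resource() -> HealthCheckResource {
    // Assumption: the reactive check is created together with an updater that the owning
    // task would keep to report status changes; it is discarded here for brevity.
    let (check, _updater) = ReactiveHealthCheck::new("example_component");
    HealthCheckResource::new(check)
}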
3 changes: 3 additions & 0 deletions core/lib/node/src/implementations/resource/mod.rs
@@ -0,0 +1,3 @@
pub mod healthcheck;
pub mod object_store;
pub mod pools;
15 changes: 15 additions & 0 deletions core/lib/node/src/implementations/resource/object_store.rs
@@ -0,0 +1,15 @@
use std::sync::Arc;

use zksync_object_store::ObjectStore;

use crate::resource::Resource;

/// Wrapper for the object store.
#[derive(Debug, Clone)]
pub struct ObjectStoreResource(pub Arc<dyn ObjectStore>);

impl Resource for ObjectStoreResource {
fn resource_id() -> crate::resource::ResourceId {
"common/object_store".into()
}
}
75 changes: 75 additions & 0 deletions core/lib/node/src/implementations/resource/pools.rs
@@ -0,0 +1,75 @@
use zksync_dal::{connection::ConnectionPoolBuilder, ConnectionPool};

use crate::resource::Resource;

/// Represents a connection pool to the master database.
#[derive(Debug, Clone)]
pub struct MasterPoolResource(ConnectionPoolBuilder);

impl Resource for MasterPoolResource {
fn resource_id() -> crate::resource::ResourceId {
"common/master_pool".into()
}
}

impl MasterPoolResource {
pub fn new(builder: ConnectionPoolBuilder) -> Self {
Self(builder)
}

pub async fn get(&self) -> anyhow::Result<ConnectionPool> {
self.0.build().await
}

pub async fn get_singleton(&self) -> anyhow::Result<ConnectionPool> {
self.0.build_singleton().await
}
}

/// Represents a connection pool to the replica database.
#[derive(Debug, Clone)]
pub struct ReplicaPoolResource(ConnectionPoolBuilder);

impl Resource for ReplicaPoolResource {
fn resource_id() -> crate::resource::ResourceId {
"common/replica_pool".into()
}
}

impl ReplicaPoolResource {
pub fn new(builder: ConnectionPoolBuilder) -> Self {
Self(builder)
}

pub async fn get(&self) -> anyhow::Result<ConnectionPool> {
self.0.build().await
}

pub async fn get_singleton(&self) -> anyhow::Result<ConnectionPool> {
self.0.build_singleton().await
}
}

/// Represents a connection pool to the prover database.
#[derive(Debug, Clone)]
pub struct ProverPoolResource(ConnectionPoolBuilder);

impl Resource for ProverPoolResource {
fn resource_id() -> crate::resource::ResourceId {
"common/prover_pool".into()
}
}

impl ProverPoolResource {
pub fn new(builder: ConnectionPoolBuilder) -> Self {
Self(builder)
}

pub async fn get(&self) -> anyhow::Result<ConnectionPool> {
self.0.build().await
}

pub async fn get_singleton(&self) -> anyhow::Result<ConnectionPool> {
self.0.build_singleton().await
}
}
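Not part of the diff: a minimal sketch of how a task holding a MasterPoolResource could obtain connection pools from it; the obtain_pools function name is illustrative.

use zksync_dal::ConnectionPool;
use zksync_node::implementations::resource::pools::MasterPoolResource;

async fn obtain_pools(master_pool: &MasterPoolResource) -> anyhow::Result<(ConnectionPool, ConnectionPool)> {
    // Full-size pool built from the stored builder.
    let pool = master_pool.get().await?;
    // Dedicated single-connection pool, e.g. for an auxiliary job.
    let singleton = master_pool.get_singleton().await?;
    Ok((pool, singleton))
}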