add pruning vs archive node sharding #42
Conversation
looking good so far, will review again once the last to-dos are completed
- `PROXY_HEIGHT_BASED_ROUTING_ENABLED` enables the feature
- `PROXY_PRUNING_BACKEND_HOST_URL_MAP` is the map to pruning cluster hosts
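as a sketch, the two variables might look like this in the service environment. the exact map syntax shown (host`>`url pairs, comma-separated) is an assumption mirroring the existing default host -> backend url map convention, and the hostnames/urls are made up for illustration:

```shell
# assumed boolean flag: turns height-based routing on/off
PROXY_HEIGHT_BASED_ROUTING_ENABLED=true
# hypothetical example values: maps request hosts to pruning-node backend urls,
# in the same shape as the existing host -> backend url map
PROXY_PRUNING_BACKEND_HOST_URL_MAP=localhost:7777>http://pruning-node-one:8545,localhost:7778>http://pruning-node-two:8545
```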
the `Proxies` instance parses the block height out of the request. if the request is made for the latest block height, the request is routed to the defined pruning cluster; otherwise the default cluster is used.
some methods will always work if routed to a pruning cluster. those should always be routed there!
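the routing decision described above can be sketched roughly like this. the function name, the method set, and the param-inspection logic are hypothetical illustrations of the idea, not the service's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonRPCRequest is a minimal JSON-RPC 2.0 request envelope.
type jsonRPCRequest struct {
	Method string        `json:"method"`
	Params []interface{} `json:"params"`
}

// methodsAlwaysSafeForPruning lists methods that never need historical
// state, so they can always go to a pruning node (illustrative subset).
var methodsAlwaysSafeForPruning = map[string]bool{
	"eth_chainId":     true,
	"eth_blockNumber": true,
}

// shouldRoutePruning returns true when the request can be served by a
// pruning-cluster backend: either the method is always safe, or the
// block-tag parameter asks for the latest height.
func shouldRoutePruning(body []byte) bool {
	var req jsonRPCRequest
	if err := json.Unmarshal(body, &req); err != nil {
		return false // unparseable: fall back to the default cluster
	}
	if methodsAlwaysSafeForPruning[req.Method] {
		return true
	}
	// for methods like eth_getBalance the block tag is the last param
	if n := len(req.Params); n > 0 {
		if tag, ok := req.Params[n-1].(string); ok && tag == "latest" {
			return true
		}
	}
	return false
}

func main() {
	latest := []byte(`{"method":"eth_getBalance","params":["0xabc","latest"]}`)
	historic := []byte(`{"method":"eth_getBalance","params":["0xabc","0x15"]}`)
	fmt.Println(shouldRoutePruning(latest), shouldRoutePruning(historic))
}
```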
force-pushed from f04687a to 36799d7
```diff
@@ -248,13 +249,13 @@ func lookupBlockNumberFromHashParam(ctx context.Context, evmClient *ethclient.Cl
 		return 0, fmt.Errorf(fmt.Sprintf("error decoding block hash param from params %+v at index %d", params, paramIndex))
 	}

-	block, err := evmClient.BlockByHash(ctx, common.HexToHash(blockHash))
+	header, err := evmClient.HeaderByHash(ctx, common.HexToHash(blockHash))
```
as mentioned in slack, the call to `BlockByHash` was failing when querying a hash that did not exist in the chain and causing the service to panic. i confirmed the problem on this branch, on master, and on the commit of the currently deployed public proxy (547e9ef). it doesn't appear to be a real problem in production, as the service properly responds when you query a non-existent block hash.

however, i propose changing the `BlockByHash` call to `HeaderByHash`. `BlockByHash` actually makes multiple requests to get the header and the txs. we only need the block number, so the header is sufficient.
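the cost difference can be sketched with a stub client. everything here (the stub, the request counters) is illustrative only, not go-ethereum's real `ethclient`; the counters just echo the comment above that fetching the full block costs extra round trips for the tx bodies while the header alone costs one:

```go
package main

import (
	"fmt"
	"math/big"
)

// header is a pared-down stand-in for go-ethereum's types.Header;
// only the block number matters for this lookup.
type header struct {
	Number *big.Int
}

// stubClient mimics the two client calls and counts the RPC round
// trips each would cost (illustrative numbers only).
type stubClient struct{ requests int }

func (c *stubClient) HeaderByHash(hash string) (*header, error) {
	c.requests++ // one request: header only
	return &header{Number: big.NewInt(248)}, nil
}

func (c *stubClient) BlockByHash(hash string) (*header, error) {
	c.requests += 2 // header plus follow-up request(s) for tx bodies
	return &header{Number: big.NewInt(248)}, nil
}

// lookupBlockNumber mirrors the fixed code path: only the header is
// needed to learn the block number.
func lookupBlockNumber(c *stubClient, hash string) (uint64, error) {
	h, err := c.HeaderByHash(hash)
	if err != nil {
		return 0, err
	}
	return h.Number.Uint64(), nil
}

func main() {
	c := &stubClient{}
	n, _ := lookupBlockNumber(c, "0xabc")
	fmt.Println(n, c.requests)
}
```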
CC @evgeniy-scherbina heads up, this will conflict with your evm client changes in #39
make ci-setup
```

At that point, running `make e2e-test` will run the end-to-end tests with requests routing to public testnet.
👊🏽
```diff
@@ -27,6 +27,7 @@ type ProxiedRequestMetric struct {
 	UserAgent *string
 	Referer *string
 	Origin *string
+	ResponseBackend string
```
prefer a pointer to string to allow differentiating between "the value is empty because it was unset / nil" vs "the empty value is what was set"
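a minimal illustration of that distinction, using a hypothetical struct rather than the service's actual model:

```go
package main

import "fmt"

// metric is a hypothetical record: with a *string field, nil means
// "never set", while a pointer to "" means "explicitly set to empty".
type metric struct {
	ResponseBackend *string
}

// describe reports which of the three states the field is in.
func describe(m metric) string {
	if m.ResponseBackend == nil {
		return "unset"
	}
	if *m.ResponseBackend == "" {
		return "set to empty"
	}
	return "set to " + *m.ResponseBackend
}

func main() {
	empty := ""
	backend := "PRUNING"
	fmt.Println(describe(metric{}))                          // unset
	fmt.Println(describe(metric{ResponseBackend: &empty}))   // set to empty
	fmt.Println(describe(metric{ResponseBackend: &backend})) // set to PRUNING
}
```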
the database has a default value of "DEFAULT" for the column.

realizing this doesn't work for requests that fail (i.e. if they get 502'd because their host isn't in the map). will update the column to allow null and make this a pointer.

i think i'll also abandon backfilling the column to "DEFAULT" unless you think that's still worthwhile?
digging deeper, it looks like we aren't creating metrics for failing requests.

the only case in which the `response_backend` wouldn't be correct if it defaulted to "DEFAULT" is the case where the host is not in the host -> backend url map. in that case we short-circuit and respond 502:
kava-proxy-service/service/middleware.go, lines 172 to 180 at ad18471:

```go
	if !ok {
		serviceLogger.Error().Msg(fmt.Sprintf("no matching proxy for host %s for request %+v\n configured proxies %+v", r.Host, r, proxies))
		w.WriteHeader(http.StatusBadGateway)
		w.Write([]byte("no proxy backend configured for request host"))
		return
	}
```
when we `return` there, we forego adding details to the context. this means that for all the context values, casting to string will fail (`ok == false`). then we return early, skipping metric creation.
some examples:
kava-proxy-service/service/middleware.go, lines 331 to 356 at ad18471:

```go
	rawRequestHostname := r.Context().Value(RequestHostnameContextKey)
	requestHostname, ok := rawRequestHostname.(string)
	if !ok {
		service.ServiceLogger.Trace().Msg(fmt.Sprintf("invalid context value %+v for value %s", rawRequestHostname, RequestHostnameContextKey))
		return
	}

	rawRequestIP := r.Context().Value(RequestIPContextKey)
	requestIP, ok := rawRequestIP.(string)
	if !ok {
		service.ServiceLogger.Trace().Msg(fmt.Sprintf("invalid context value %+v for value %s", rawRequestIP, RequestIPContextKey))
		return
	}

	rawUserAgent := r.Context().Value(RequestUserAgentContextKey)
	userAgent, ok := rawUserAgent.(string)
	if !ok {
		service.ServiceLogger.Trace().Msg(fmt.Sprintf("invalid context value %+v for value %s", rawUserAgent, RequestUserAgentContextKey))
		return
	}
```
all those values are set after the failing request would short-circuit the middleware with the 502.
so the takeaway:
- we aren't collecting failing request metrics. we probably should? though i think they'd just be indicative of a configuration problem (either an unintended host is pointing to the proxy that we didn't set up, or we didn't set up an intended host)
- the request metrics table is currently only for requests that were properly routed. the response_backend should always be set, so a string is more appropriate than a pointer. additionally, the backfill of existing metrics (and the default column value) is valid.
Adds a new `Proxies` implementation that parses the method & block number out of incoming requests. If the request is for the latest height, or uses a method that can always be fielded without historical data, it is routed to an optionally configured pruning cluster.

The pruning cluster is configured via a new environment variable `PROXY_PRUNING_BACKEND_HOST_URL_MAP` (similar to the existing host map, but for pruning node host urls). The entire functionality can be enabled / disabled via the `PROXY_HEIGHT_BASED_ROUTING_ENABLED` env variable.

Includes e2e test setup that routes traffic to two different nodes! Documentation on proxy configuration & topology can be found here.

Opening this PR as a draft for early feedback. It will not be ready for review / merge until: