
Commit

Ready for version 0.1.0
fitomad committed Jun 25, 2024
1 parent 90707da commit 3eaa9fb
Showing 16 changed files with 428 additions and 110 deletions.
262 changes: 260 additions & 2 deletions README.md
A Rate-Limit middleware implementation for Vapor servers using a Redis database.

```swift
...

let tokenBucket = TokenBucket {
	TokenBucketConfiguration(bucketSize: 25,
	                         refillRate: 5,
	                         ...

let checkpoint = Checkpoint(using: tokenBucket)

// 🚨 Modify the response HTTP headers and body when the rate limit is exceeded
checkpoint.didFailWithTooManyRequest = { (request, response, metadata) in
	metadata.headers = [
		"X-RateLimit" : "Failure for request \(request.id)."
	]

	metadata.reason = "Rate limit for your api key exceeded"
}

app.middleware.use(checkpoint)
```

## Supported algorithms

Currently **Checkpoint** supports 4 rate-limit algorithms.

### Token Bucket

The Token Bucket rate-limiting algorithm is a widely used, flexible approach that controls the rate of requests to a service while still allowing some bursts of traffic. Here's how it works:

The Token Bucket configuration is set using the `TokenBucketConfiguration` type:

```swift
let tokenbucketAlgorithm = TokenBucket {
	TokenBucketConfiguration(bucketSize: 10,
	                         refillRate: 0,
	                         refillTimeInterval: .seconds(count: 20),
	                         appliedTo: .header(key: "X-ApiKey"),
	                         inside: .endpoint)
} storage: {
	// Rate limit database in Redis
	app.redis("rate").configuration = try? RedisConfiguration(hostname: "localhost",
	                                                          port: 9090,
	                                                          database: 0)

	return app.redis("rate")
} logging: {
	app.logger
}
```

How the Token Bucket Algorithm Works:

1. Initialize the Bucket:

- The bucket has a fixed capacity, which represents the maximum number of tokens it can hold.
- Tokens are added to the bucket at a fixed rate, up to the bucket's capacity.

2. Handle Incoming Requests:

- When a new request arrives, check if there are enough tokens in the bucket.
- If there is at least one token, allow the request and remove a token from the bucket.
- If there are no tokens available, deny the request (rate limit exceeded).

3. Add Tokens:

- Tokens are added to the bucket at a steady rate, which determines the average rate of allowed requests.
- The bucket never holds more than its fixed capacity of tokens.
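The three steps above can be sketched as a minimal in-memory token bucket. This is an illustration only — the type and member names are hypothetical, not part of the Checkpoint API, and Checkpoint itself keeps this state in Redis rather than in process memory:

```swift
// Hypothetical in-memory Token Bucket, for illustration only.
struct SimpleTokenBucket {
	let capacity: Int    // maximum tokens the bucket can hold
	let refillRate: Int  // tokens added on each refill tick
	private(set) var tokens: Int

	init(capacity: Int, refillRate: Int) {
		self.capacity = capacity
		self.refillRate = refillRate
		self.tokens = capacity // the bucket starts full
	}

	// Called at a fixed interval; never exceeds the bucket's capacity.
	mutating func refill() {
		tokens = min(capacity, tokens + refillRate)
	}

	// Returns true if the request is allowed (a token was consumed).
	mutating func allowRequest() -> Bool {
		guard tokens > 0 else { return false } // no tokens: rate limit exceeded
		tokens -= 1
		return true
	}
}
```

In a real deployment `refill()` would be driven by a timer firing every `refillTimeInterval`, and the token count would live in Redis so that all server instances share it.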

### Leaking Bucket

The Leaking Bucket rate-limit algorithm is an effective approach to rate limiting that ensures a smooth, steady flow of requests. It works similarly to a physical bucket with a hole in it, where water (requests) drips out at a constant rate. Here’s a detailed explanation of how it works:

The Leaking Bucket configuration uses the `LeakingBucketConfiguration` object:

```swift
let leakingBucketAlgorithm = LeakingBucket {
	LeakingBucketConfiguration(bucketSize: 10,
	                           removingRate: 5,
	                           removingTimeInterval: .minutes(count: 1),
	                           appliedTo: .header(key: "X-ApiKey"),
	                           inside: .endpoint)
} storage: {
	// Rate limit database in Redis
	app.redis("rate").configuration = try? RedisConfiguration(hostname: "localhost",
	                                                          port: 9090,
	                                                          database: 0)

	return app.redis("rate")
} logging: {
	app.logger
}
```

How the Leaking Bucket Algorithm Works:

1. Initialize the Bucket:

- The bucket has a fixed capacity, representing the maximum number of requests that can be stored in the bucket at any given time.
- The bucket leaks at a fixed rate, representing the maximum rate at which requests are processed.

2. Handle Incoming Requests:

- When a new request arrives, check the current level of the bucket.
- If the bucket is not full (i.e., the number of stored requests is less than the bucket's capacity), add the request to the bucket.
- If the bucket is full, deny the request (rate limit exceeded).

3. Process Requests:

- Requests in the bucket are processed (leaked) at a constant rate.
- This ensures a steady flow of requests, preventing sudden bursts.
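The same three steps in a minimal in-memory sketch (hypothetical names; Checkpoint stores the bucket level in Redis and drains it on a timer):

```swift
// Hypothetical in-memory Leaking Bucket, for illustration only.
struct SimpleLeakingBucket {
	let capacity: Int  // maximum queued requests
	let leakRate: Int  // requests drained per leak tick
	private(set) var level = 0

	init(capacity: Int, leakRate: Int) {
		self.capacity = capacity
		self.leakRate = leakRate
	}

	// Incoming request: queue it only if the bucket is not full.
	mutating func allowRequest() -> Bool {
		guard level < capacity else { return false } // bucket full: reject
		level += 1
		return true
	}

	// Called at a fixed interval: drain up to `leakRate` requests.
	mutating func leak() {
		level = max(0, level - leakRate)
	}
}
```

Because requests drain at a constant rate, bursts are absorbed by the queue instead of hitting the backend all at once.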

### Fixed Window Counter

The Fixed Window Counter rate-limit algorithm is a straightforward and easy-to-implement approach for rate limiting, used to control the number of requests a client can make to a service within a specified time period. Here’s an explanation of how it works:

To set the configuration you must use the `FixedWindowCounterConfiguration` type:

```swift
let fixedWindowAlgorithm = FixedWindowCounter {
	FixedWindowCounterConfiguration(requestPerWindow: 10,
	                                timeWindowDuration: .minutes(count: 2),
	                                appliedTo: .header(key: "X-ApiKey"),
	                                inside: .endpoint)
} storage: {
	// Rate limit database in Redis
	app.redis("rate").configuration = try? RedisConfiguration(hostname: "localhost",
	                                                          port: 9090,
	                                                          database: 0)

	return app.redis("rate")
} logging: {
	app.logger
}
```

How the Fixed Window Counter Algorithm Works:

1. Define a Time Window:
Choose a fixed duration (e.g., 1 minute, 1 hour) that serves as the time window for counting requests.

2. Initialize a Counter:
Maintain a counter for each client (or each resource being accessed) that tracks the number of requests made within the current time window.

3. Increment the Counter:
Each time a request is made, check the current timestamp to determine which time window it falls into.

- If the request falls within the current window, increment the counter.
- If the request falls outside the current window, reset the counter and start a new window.

4. Enforce Limits:

- If the counter exceeds the predefined limit within the current window, the request is denied (or throttled).
- If the counter is within the limit, the request is allowed.
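The steps above can be sketched as a minimal in-memory counter (hypothetical names; Checkpoint keeps the per-key counters in Redis and resets the window on a timer):

```swift
import Foundation

// Hypothetical in-memory Fixed Window Counter, for illustration only.
struct SimpleFixedWindowCounter {
	let limit: Int                    // max requests per window
	let windowDuration: TimeInterval  // e.g. 60 seconds
	private var windowStart: Date
	private var count = 0

	init(limit: Int, windowDuration: TimeInterval, startingAt now: Date = Date()) {
		self.limit = limit
		self.windowDuration = windowDuration
		self.windowStart = now
	}

	mutating func allowRequest(at now: Date = Date()) -> Bool {
		// Request falls outside the current window: start a new one.
		if now.timeIntervalSince(windowStart) >= windowDuration {
			windowStart = now
			count = 0
		}
		count += 1
		return count <= limit // over the limit: deny (HTTP 429)
	}
}
```

Passing `now` explicitly makes the window rollover easy to test; in production you would simply call `allowRequest()`.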



### Sliding Window Log

The Sliding Window Log rate-limit algorithm is a more refined approach to rate limiting compared to the Fixed Window Counter. It offers smoother control over request rates by maintaining a log of individual request timestamps, allowing for a more granular and accurate rate-limiting mechanism. Here’s a detailed explanation of how it works:

To set the configuration for this rate-limit algorithm use the `SlidingWindowLogConfiguration` type:

```swift
let slidingWindowLogAlgorithm = SlidingWindowLog {
	SlidingWindowLogConfiguration(requestPerWindow: 10,
	                              windowDuration: .minutes(count: 2),
	                              appliedTo: .header(key: "X-ApiKey"),
	                              inside: .endpoint)
} storage: {
	// Rate limit database in Redis
	app.redis("rate").configuration = try? RedisConfiguration(hostname: "localhost",
	                                                          port: 9090,
	                                                          database: 0)

	return app.redis("rate")
} logging: {
	app.logger
}
```

How the Sliding Window Log Algorithm Works:

1. Define a Time Window:
Choose a time window duration (e.g., 1 minute) within which you want to limit the number of requests.

2. Log Requests:
Maintain a log (typically a list or queue) for each client that stores the timestamps of each request.

3. Handle Incoming Requests:
When a new request arrives, do the following:

- Remove timestamps from the log that fall outside the current time window.
- Check the number of timestamps remaining in the log.
- If the number of requests (timestamps) within the window is below the limit, add the new request’s timestamp to the log and allow the request.
- If the number of requests meets or exceeds the limit, deny the request.
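The request-handling steps above can be sketched as a minimal in-memory log (hypothetical names; Checkpoint logs the timestamps in Redis):

```swift
import Foundation

// Hypothetical in-memory Sliding Window Log, for illustration only.
struct SimpleSlidingWindowLog {
	let limit: Int
	let windowDuration: TimeInterval
	private var timestamps: [Date] = []

	init(limit: Int, windowDuration: TimeInterval) {
		self.limit = limit
		self.windowDuration = windowDuration
	}

	mutating func allowRequest(at now: Date = Date()) -> Bool {
		// 1. Drop timestamps that slid out of the window.
		timestamps.removeAll { now.timeIntervalSince($0) >= windowDuration }
		// 2. Deny if the window is already at the limit.
		guard timestamps.count < limit else { return false }
		// 3. Log this request and allow it.
		timestamps.append(now)
		return true
	}
}
```

Unlike the Fixed Window Counter, the limit here applies to any rolling interval of `windowDuration`, so a burst straddling a window boundary cannot sneak through.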


## Modify server response

Sometimes we need to modify the response sent to the client by adding a custom HTTP header or setting a failure reason text in the JSON payload.

In that case, you can use one of the closures defined in the `Checkpoint` class, one per Rate-Limit processing stage.

### Before performing Rate-Limit checking

This closure is invoked just before Checkpoint performs the rate-limit check for a given request, and it receives the `Request` object as a parameter.

```swift
public var willCheck: CheckpointAction?
```

### After performing Rate-Limit checking

If the rate-limit check passes, this closure is invoked and you know that the request continues to be processed by the Vapor server.

```swift
public var didCheck: CheckpointAction?
```

### Rate-Limit reached

You will surely want to know when a request reaches the rate limit you set when initializing Checkpoint.

In this case, Checkpoint notifies you that the rate limit was reached through the `didFailWithTooManyRequest` closure.

```swift
public var didFailWithTooManyRequest: CheckpointErrorAction?
```

This closure receives 3 parameters:

- `request`. A [`Request`](https://api.vapor.codes/vapor/documentation/vapor/request) object representing the user request that reached the limit.
- `response`. The server response ([`Response`](https://api.vapor.codes/vapor/documentation/vapor/response) type) returned by Vapor.
- `metadata`. An object designed to set custom HTTP headers and a reason text that will be attached to the payload returned by the response.

For example, if you want to add a custom HTTP header and a reason text to inform users that they have reached the limit, you can do something like this:

```swift
// 👮‍♀️ Modify the response HTTP headers and body when the rate limit is exceeded
checkpoint.didFailWithTooManyRequest = { (request, response, metadata) in
metadata.headers = [
"X-RateLimit" : "Failure for request \(request.id)."
]

metadata.reason = "Rate limit for your api key exceeded"
}
```

### Error thrown while processing a request

If an error other than HTTP 429 (rate limit) is thrown by Checkpoint, you will be notified through the following closure:

```swift
// 🚨 Modify the response HTTP headers and body when an error occurs
checkpoint.didFail = { (request, response, abort, metadata) in
metadata.headers = [
"X-ApiError" : "Error for request \(request.id)."
]

metadata.reason = "Error code \(abort.status) for your api key exceeded"
}
```

The parameters are the same as those received in the `didFailWithTooManyRequest` closure, plus the `Abort` error that was thrown; as before, you can add a custom HTTP header and/or a reason message.

## Redis

To work with Checkpoint you must install and configure a Redis database on your system. Thanks to Docker, it's really easy to deploy a Redis installation.

We recommend installing the [**redis-stack-server**](https://hub.docker.com/r/redis/redis-stack-server) image from Docker Hub.
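For example, a minimal local setup could look like this (the container name is arbitrary, and 6379 is Redis's default port — adjust the mapping to match whatever your `RedisConfiguration` expects):

```shell
# Pull and run the redis-stack-server image in the background,
# exposing Redis's default port 6379 on the host.
docker run -d --name checkpoint-redis -p 6379:6379 redis/redis-stack-server:latest

# Quick check that the server answers.
docker exec checkpoint-redis redis-cli PING
```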

## History

### 0.1.0

Alpha version, a *Friends & Family* release 😜

- Support for Redis Database
- Logging system based on the Vapor `Logger` type
- Four rate-limit algorithms supported
- Fixed Window Counter
- Leaking Bucket
- Sliding Window Log
- Token Bucket

8 changes: 8 additions & 0 deletions Sources/Checkpoint/Algorithms/Algorithm.swift
import Vapor
public typealias StorageAction = () -> Application.Redis
public typealias LoggerAction = () -> Logger

/// Definition of the different Rate-Limit algorithms
public protocol Algorithm: Sendable {
	/// The configuration type used by a specific algorithm
	associatedtype ConfigurationType

	/// The Redis database used to store the request data
	var storage: Application.Redis { get }
	/// A `Logger` object created by Vapor
	var logging: Logger? { get }

	/// Creates a new Rate-Limit algorithm with a given configuration,
	/// storage and logging
	init(configuration: () -> ConfigurationType, storage: StorageAction, logging: LoggerAction?)

	/// Performs the algorithm logic to check whether a request is valid
	/// or reaches the rate limit specified in the algorithm's configuration
	func checkRequest(_ request: Request) async throws
}

26 changes: 19 additions & 7 deletions Sources/Checkpoint/Algorithms/FixedWindowCounter.swift
import Redis
import Vapor

/**
 The Fixed Window Counter algorithm works as follows:

 1. The timeline is divided into fixed time windows, each with a counter that stores the number of requests made in that window.
 2. When a user makes a request, the counter for the current time window is incremented by 1.
 3. If the counter is greater than the rate limit, the request is rejected and an HTTP 429 status code is sent.
 4. If the counter is less than the rate limit, the request is accepted.
*/
public final class FixedWindowCounter {
	// Configuration for this rate-limit algorithm
	private let configuration: FixedWindowCounterConfiguration
	// The Redis database where we store the request information
	public let storage: Application.Redis
	// A logger set during Vapor initialization
	public let logging: Logger?

	// Subscription to the Combine timer publisher
	private var cancellable: AnyCancellable?
	// Keys stored in a given time window
	private var keys = Set<String>()

	/**
	 Creates a new `FixedWindowCounter` with the given configuration,
	 storage and logging actions.
	 */
	public init(configuration: () -> FixedWindowCounterConfiguration, storage: StorageAction, logging: LoggerAction? = nil) {
		self.configuration = configuration()
		self.storage = storage()
		...
		            performing: resetWindow)
	}

	/**
	 Cancels the timer subscription when the instance is deallocated.
	 */
	deinit {
		cancellable?.cancel()
	}
}

extension FixedWindowCounter: WindowBasedAlgorithm {
	/// Checks whether the request exceeds the configured rate limit
	public func checkRequest(_ request: Request) async throws {
		guard let requestKey = try? valueFor(field: configuration.appliedField, in: request, inside: configuration.scope) else {
			return
5 changes: 0 additions & 5 deletions Sources/Checkpoint/Algorithms/LeakingBucket.swift
}

extension LeakingBucket: WindowBasedAlgorithm {
var isValidRequest: Bool {
return true
}

public func checkRequest(_ request: Request) async throws {
guard let requestKey = try? valueFor(field: configuration.appliedField, in: request, inside: configuration.scope) else {
return
...

// 1. A new request arrives: add one item to the bucket
let bucketItemsCount = try await storage.increment(redisKey).get()
logging?.info("⌚️ \(requestKey) = \(bucketItemsCount)")
// 2. If the bucket is full, throw an error
if bucketItemsCount > configuration.bucketSize {
throw Abort(.tooManyRequests)
