Commit

Merge pull request #211 from LukoJy3D/master
feat: allow hourly partitioning
joker1007 authored Oct 26, 2024
2 parents 616c348 + 718fbca commit c075102
Showing 3 changed files with 9 additions and 9 deletions.
14 changes: 7 additions & 7 deletions README.md
@@ -59,7 +59,7 @@ Because embedded gem dependency sometimes restricts ruby environment.
| schema_cache_expire | integer | no | no | 600 | Value is second. If current time is after expiration interval, re-fetch table schema definition. |
| request_timeout_sec | integer | no | no | nil | Bigquery API response timeout |
| request_open_timeout_sec | integer | no | no | 60 | Bigquery API connection, and request timeout. If you send big data to Bigquery, set large value. |
- | time_partitioning_type | enum | no (either day) | no | nil | Type of bigquery time partitioning feature. |
+ | time_partitioning_type | enum | no (either day or hour) | no | nil | Type of bigquery time partitioning feature. |
| time_partitioning_field | string | no | no | nil | Field used to determine how to create a time-based partition. |
| time_partitioning_expiration | time | no | no | nil | Expiration milliseconds for bigquery time partitioning. |
| clustering_fields | array(string) | no | no | nil | One or more fields on which data should be clustered. The order of the specified columns determines the sort order of the data. |
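With this change, hourly partitioning can be requested the same way day partitioning already is in this README. A minimal sketch in the plugin's config style (the field name `created_at` is a hypothetical example; `time_partitioning_field` must name a timestamp-compatible column in your schema):

```apache
<match dummy>
  @type bigquery_insert
  ...
  time_partitioning_type hour
  time_partitioning_field created_at
</match>
```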
@@ -194,15 +194,15 @@ For high rate inserts over streaming inserts, you should specify flush intervals
```apache
<match dummy>
@type bigquery_insert
<buffer>
flush_interval 0.1 # flush as frequent as possible
total_limit_size 10g
flush_thread_count 16
</buffer>
auth_method private_key # default
email xxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxx@developer.gserviceaccount.com
private_key_path /home/username/.keys/00000000000000000000000000000000-privatekey.p12
@@ -543,7 +543,7 @@ The second method is to specify a path to a BigQuery schema file instead of list
@type bigquery_insert
...
schema_path /path/to/httpd.schema
</match>
```
@@ -556,7 +556,7 @@ The third method is to set `fetch_schema` to `true` to enable fetch a schema usi
@type bigquery_insert
...
fetch_schema true
# fetch_schema_table other_table # if you want to fetch schema from other table
</match>
2 changes: 1 addition & 1 deletion lib/fluent/plugin/bigquery/version.rb
@@ -1,5 +1,5 @@
module Fluent
module BigQueryPlugin
-    VERSION = "3.1.0".freeze
+    VERSION = "3.1.1".freeze
end
end
2 changes: 1 addition & 1 deletion lib/fluent/plugin/out_bigquery_base.rb
@@ -69,7 +69,7 @@ class BigQueryBaseOutput < Output
config_param :request_open_timeout_sec, :time, default: 60

## Partitioning
-  config_param :time_partitioning_type, :enum, list: [:day], default: nil
+  config_param :time_partitioning_type, :enum, list: [:day, :hour], default: nil
config_param :time_partitioning_field, :string, default: nil
config_param :time_partitioning_expiration, :time, default: nil
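The one-line change above widens the `:enum` list, so fluentd's config parser now accepts `hour` and rejects anything else at startup. A minimal standalone Ruby sketch of that validation behavior (this mimics, and is not, fluentd's actual `config_param :enum` implementation; the method name is hypothetical):

```ruby
# Allowed values for time_partitioning_type after this commit.
ALLOWED_PARTITIONING_TYPES = [:day, :hour].freeze

# Sketch of enum-style validation: a raw config string is mapped to a
# symbol, and any value outside the allowed list raises at parse time.
def parse_time_partitioning_type(value)
  return nil if value.nil?  # param defaults to nil (partitioning disabled)

  sym = value.to_s.downcase.to_sym
  unless ALLOWED_PARTITIONING_TYPES.include?(sym)
    raise ArgumentError,
          "time_partitioning_type must be one of " \
          "#{ALLOWED_PARTITIONING_TYPES.join(', ')}, got #{value.inspect}"
  end
  sym
end
```

Before this commit the same check would have rejected `hour`, which is why the README table and the enum list change together.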

