diff --git a/docs/hadoop-catalog-with-s3.md b/docs/hadoop-catalog-with-s3.md
index 2c8f8131b50..e5bd4c41f7c 100644
--- a/docs/hadoop-catalog-with-s3.md
+++ b/docs/hadoop-catalog-with-s3.md
@@ -446,7 +446,7 @@ In order to access fileset with S3 using the GVFS Python client, apart from [bas
 | `s3_secret_access_key` | The secret key of the AWS S3. | (none) | Yes | 0.7.0-incubating |
 
 :::note
-- `s3_endpoint` is an optional configuration for GVFS **Python** client but a required configuration for GVFS **Java** client to access Hadop with AWS S3, and it is required for other S3-compatible storage services like MinIO.
+- `s3_endpoint` is an optional configuration for GVFS **Python** client but a required configuration for GVFS **Java** client to access Hadoop with AWS S3, and it is required for other S3-compatible storage services like MinIO.
 - If the catalog has enabled [credential vending](security/credential-vending.md), the properties above can be omitted.
 :::
 
diff --git a/docs/hadoop-catalog.md b/docs/hadoop-catalog.md
index abd8dfefb50..978efad90df 100644
--- a/docs/hadoop-catalog.md
+++ b/docs/hadoop-catalog.md
@@ -9,7 +9,7 @@ license: "This software is licensed under the Apache License version 2."
 
 ## Introduction
 
 Hadoop catalog is a fileset catalog that using Hadoop Compatible File System (HCFS) to manage
-the storage location of the fileset. Currently, it supports the local filesystem and HDFS. Since 0.7.0-incubating, Gravitino supports [S3](hadoop-catalog-with-S3.md), [GCS](hadoop-catalog-with-gcs.md), [OSS](hadoop-catalog-with-oss.md) and [Azure Blob Storage](hadoop-catalog-with-adls.md) through Hadoop catalog.
+the storage location of the fileset. Currently, it supports the local filesystem and HDFS. Since 0.7.0-incubating, Gravitino supports [S3](hadoop-catalog-with-s3.md), [GCS](hadoop-catalog-with-gcs.md), [OSS](hadoop-catalog-with-oss.md) and [Azure Blob Storage](hadoop-catalog-with-adls.md) through Hadoop catalog.
 
 The rest of this document will use HDFS or local file as an example to illustrate how to use the Hadoop catalog. For S3, GCS, OSS and Azure Blob Storage, the configuration is similar to HDFS, please refer to the corresponding document for more details.
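Context for reviewers: the note corrected in the first hunk concerns how `s3_endpoint` and the credential properties are passed to the GVFS Python client. A minimal sketch of that configuration, assuming the `gvfs.GravitinoVirtualFileSystem` entry point and the `s3_access_key_id` key documented elsewhere in `hadoop-catalog-with-s3.md`; the endpoint, credentials, server URI, metalake, and fileset names are placeholders:

```python
# Sketch: GVFS Python client options for an S3-compatible store such as MinIO.
# Per the corrected note, `s3_endpoint` may be omitted for plain AWS S3 with the
# Python client, but it is required for MinIO-like services (and always required
# by the Java client). All concrete values below are illustrative placeholders.
from gravitino import gvfs

options = {
    "s3_endpoint": "http://localhost:9000",      # required for MinIO / S3-compatible stores
    "s3_access_key_id": "example_access_key",    # access key (key name assumed from the same doc)
    "s3_secret_access_key": "example_secret",    # secret key, as listed in the table above
}

fs = gvfs.GravitinoVirtualFileSystem(
    server_uri="http://localhost:8090",          # hypothetical Gravitino server
    metalake_name="example_metalake",
    options=options,
)
fs.ls("gvfs://fileset/example_catalog/example_schema/example_fileset/")
```

If the catalog has credential vending enabled, the note's second bullet applies and the three `s3_*` entries above can be dropped from `options`.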