---
subcategory: "Compute"
---
-> **Note** If you have a fully automated setup with workspaces created by `databricks_mws_workspaces` or `azurerm_databricks_workspace`, please make sure to add a `depends_on` attribute in order to prevent `default auth: cannot configure default credentials` errors.
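For instance, a minimal sketch of that pattern, assuming the workspace is created by a `databricks_mws_workspaces` resource named `this` (the resource name is hypothetical):

```hcl
data "databricks_clusters" "all" {
  # Ensure the workspace exists before this data source tries to
  # authenticate against it; "this" is an illustrative resource name.
  depends_on = [databricks_mws_workspaces.this]
}
```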
Retrieves information about a `databricks_cluster` using its id. The id can be retrieved programmatically using the `databricks_clusters` data source.
Retrieve attributes of each cluster in a workspace:
data "databricks_clusters" "all" {
}
data "databricks_cluster" "all" {
for_each = data.databricks_clusters.all.ids
cluster_id = each.value
}
- `cluster_id` - (Required if `cluster_name` isn't specified) The id of the cluster.
- `cluster_name` - (Required if `cluster_id` isn't specified) The exact name of the cluster to search.
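For example, a minimal lookup by name might look like this (the cluster name `Shared Autoscaling` is illustrative):

```hcl
data "databricks_cluster" "by_name" {
  cluster_name = "Shared Autoscaling"
}
```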
This data source exports the following attributes:
- `id` - cluster ID.
- `cluster_info` block, consisting of the following fields:
  - `cluster_name` - Cluster name, which doesn't have to be unique.
  - `spark_version` - Runtime version of the cluster.
  - `runtime_engine` - The type of runtime of the cluster.
  - `driver_node_type_id` - The node type of the Spark driver.
  - `node_type_id` - Any supported `databricks_node_type` id.
  - `instance_pool_id` - The pool of idle instances the cluster is attached to.
  - `driver_instance_pool_id` - Similar to `instance_pool_id`, but for the driver node.
  - `policy_id` - Identifier of a cluster policy used to validate the cluster and preset certain defaults.
  - `autotermination_minutes` - Automatically terminate the cluster after being inactive for this time in minutes. If specified, the threshold must be between 10 and 10000 minutes. You can also set this value to 0 to explicitly disable automatic termination.
  - `enable_elastic_disk` - Use autoscaling local storage.
  - `enable_local_disk_encryption` - Enable local disk encryption.
  - `data_security_mode` - Security features of the cluster. Unity Catalog requires `SINGLE_USER` or `USER_ISOLATION` mode. `LEGACY_PASSTHROUGH` for passthrough clusters and `LEGACY_TABLE_ACL` for Table ACL clusters. Defaults to `NONE`, i.e. no security feature is enabled.
  - `single_user_name` - The optional user name of the user to assign to an interactive cluster. This field is required when using standard AAD Passthrough for Azure Data Lake Storage (ADLS) with a single-user cluster (i.e., not high-concurrency clusters).
  - `idempotency_token` - An optional token to guarantee the idempotency of cluster creation requests.
  - `ssh_public_keys` - SSH public key contents that will be added to each Spark node in this cluster.
  - `spark_env_vars` - Map with environment variable key-value pairs to fine-tune Spark clusters. Key-value pairs of the form (X,Y) are exported (i.e., X='Y') while launching the driver and workers.
  - `custom_tags` - Additional tags for cluster resources.
  - `spark_conf` - Map with key-value pairs to fine-tune Spark clusters.
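As a sketch of consuming these attributes, the output below builds on the `for_each` example above, assuming `cluster_info` is exposed as a single nested object (in some provider versions it may instead be a one-element list, requiring an index):

```hcl
# Hypothetical output mapping each cluster id to its runtime version,
# using the data.databricks_cluster.all lookups defined earlier.
output "cluster_runtimes" {
  value = {
    for id, c in data.databricks_cluster.all :
    id => c.cluster_info.spark_version
  }
}
```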
The following resources are often used in the same context:
- End to end workspace management guide.
- databricks_cluster to create Databricks Clusters.
- databricks_cluster_policy to create a databricks_cluster policy, which limits the ability to create clusters based on a set of rules.
- databricks_instance_pool to manage instance pools to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.
- databricks_job to manage Databricks Jobs to run non-interactive code in a databricks_cluster.
- databricks_library to install a library on databricks_cluster.
- databricks_pipeline to deploy Delta Live Tables.