Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataproc/v1.Batch
Creates a batch workload that executes asynchronously. Auto-naming is currently not supported for this resource.
Create Batch Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Batch(name: string, args?: BatchArgs, opts?: CustomResourceOptions);
@overload
def Batch(resource_name: str,
          args: Optional[BatchArgs] = None,
          opts: Optional[ResourceOptions] = None)
@overload
def Batch(resource_name: str,
          opts: Optional[ResourceOptions] = None,
          batch_id: Optional[str] = None,
          environment_config: Optional[EnvironmentConfigArgs] = None,
          labels: Optional[Mapping[str, str]] = None,
          location: Optional[str] = None,
          project: Optional[str] = None,
          pyspark_batch: Optional[PySparkBatchArgs] = None,
          request_id: Optional[str] = None,
          runtime_config: Optional[RuntimeConfigArgs] = None,
          spark_batch: Optional[SparkBatchArgs] = None,
          spark_r_batch: Optional[SparkRBatchArgs] = None,
          spark_sql_batch: Optional[SparkSqlBatchArgs] = None)
func NewBatch(ctx *Context, name string, args *BatchArgs, opts ...ResourceOption) (*Batch, error)
public Batch(string name, BatchArgs? args = null, CustomResourceOptions? opts = null)
type: google-native:dataproc/v1:Batch
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args BatchArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args BatchArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args BatchArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args BatchArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args BatchArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var batchResource = new GoogleNative.Dataproc.V1.Batch("batchResource", new()
{
    BatchId = "string",
    EnvironmentConfig = new GoogleNative.Dataproc.V1.Inputs.EnvironmentConfigArgs
    {
        ExecutionConfig = new GoogleNative.Dataproc.V1.Inputs.ExecutionConfigArgs
        {
            IdleTtl = "string",
            KmsKey = "string",
            NetworkTags = new[]
            {
                "string",
            },
            NetworkUri = "string",
            ServiceAccount = "string",
            StagingBucket = "string",
            SubnetworkUri = "string",
            Ttl = "string",
        },
        PeripheralsConfig = new GoogleNative.Dataproc.V1.Inputs.PeripheralsConfigArgs
        {
            MetastoreService = "string",
            SparkHistoryServerConfig = new GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigArgs
            {
                DataprocCluster = "string",
            },
        },
    },
    Labels = 
    {
        { "string", "string" },
    },
    Location = "string",
    Project = "string",
    PysparkBatch = new GoogleNative.Dataproc.V1.Inputs.PySparkBatchArgs
    {
        MainPythonFileUri = "string",
        ArchiveUris = new[]
        {
            "string",
        },
        Args = new[]
        {
            "string",
        },
        FileUris = new[]
        {
            "string",
        },
        JarFileUris = new[]
        {
            "string",
        },
        PythonFileUris = new[]
        {
            "string",
        },
    },
    RequestId = "string",
    RuntimeConfig = new GoogleNative.Dataproc.V1.Inputs.RuntimeConfigArgs
    {
        ContainerImage = "string",
        Properties = 
        {
            { "string", "string" },
        },
        RepositoryConfig = new GoogleNative.Dataproc.V1.Inputs.RepositoryConfigArgs
        {
            PypiRepositoryConfig = new GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfigArgs
            {
                PypiRepository = "string",
            },
        },
        Version = "string",
    },
    SparkBatch = new GoogleNative.Dataproc.V1.Inputs.SparkBatchArgs
    {
        ArchiveUris = new[]
        {
            "string",
        },
        Args = new[]
        {
            "string",
        },
        FileUris = new[]
        {
            "string",
        },
        JarFileUris = new[]
        {
            "string",
        },
        MainClass = "string",
        MainJarFileUri = "string",
    },
    SparkRBatch = new GoogleNative.Dataproc.V1.Inputs.SparkRBatchArgs
    {
        MainRFileUri = "string",
        ArchiveUris = new[]
        {
            "string",
        },
        Args = new[]
        {
            "string",
        },
        FileUris = new[]
        {
            "string",
        },
    },
    SparkSqlBatch = new GoogleNative.Dataproc.V1.Inputs.SparkSqlBatchArgs
    {
        QueryFileUri = "string",
        JarFileUris = new[]
        {
            "string",
        },
        QueryVariables = 
        {
            { "string", "string" },
        },
    },
});
example, err := dataproc.NewBatch(ctx, "batchResource", &dataproc.BatchArgs{
	BatchId: pulumi.String("string"),
	EnvironmentConfig: &dataproc.EnvironmentConfigArgs{
		ExecutionConfig: &dataproc.ExecutionConfigArgs{
			IdleTtl: pulumi.String("string"),
			KmsKey:  pulumi.String("string"),
			NetworkTags: pulumi.StringArray{
				pulumi.String("string"),
			},
			NetworkUri:     pulumi.String("string"),
			ServiceAccount: pulumi.String("string"),
			StagingBucket:  pulumi.String("string"),
			SubnetworkUri:  pulumi.String("string"),
			Ttl:            pulumi.String("string"),
		},
		PeripheralsConfig: &dataproc.PeripheralsConfigArgs{
			MetastoreService: pulumi.String("string"),
			SparkHistoryServerConfig: &dataproc.SparkHistoryServerConfigArgs{
				DataprocCluster: pulumi.String("string"),
			},
		},
	},
	Labels: pulumi.StringMap{
		"string": pulumi.String("string"),
	},
	Location: pulumi.String("string"),
	Project:  pulumi.String("string"),
	PysparkBatch: &dataproc.PySparkBatchArgs{
		MainPythonFileUri: pulumi.String("string"),
		ArchiveUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		Args: pulumi.StringArray{
			pulumi.String("string"),
		},
		FileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		JarFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		PythonFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
	},
	RequestId: pulumi.String("string"),
	RuntimeConfig: &dataproc.RuntimeConfigArgs{
		ContainerImage: pulumi.String("string"),
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		RepositoryConfig: &dataproc.RepositoryConfigArgs{
			PypiRepositoryConfig: &dataproc.PyPiRepositoryConfigArgs{
				PypiRepository: pulumi.String("string"),
			},
		},
		Version: pulumi.String("string"),
	},
	SparkBatch: &dataproc.SparkBatchArgs{
		ArchiveUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		Args: pulumi.StringArray{
			pulumi.String("string"),
		},
		FileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		JarFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		MainClass:      pulumi.String("string"),
		MainJarFileUri: pulumi.String("string"),
	},
	SparkRBatch: &dataproc.SparkRBatchArgs{
		MainRFileUri: pulumi.String("string"),
		ArchiveUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		Args: pulumi.StringArray{
			pulumi.String("string"),
		},
		FileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
	},
	SparkSqlBatch: &dataproc.SparkSqlBatchArgs{
		QueryFileUri: pulumi.String("string"),
		JarFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		QueryVariables: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
	},
})
var batchResource = new Batch("batchResource", BatchArgs.builder()
    .batchId("string")
    .environmentConfig(EnvironmentConfigArgs.builder()
        .executionConfig(ExecutionConfigArgs.builder()
            .idleTtl("string")
            .kmsKey("string")
            .networkTags("string")
            .networkUri("string")
            .serviceAccount("string")
            .stagingBucket("string")
            .subnetworkUri("string")
            .ttl("string")
            .build())
        .peripheralsConfig(PeripheralsConfigArgs.builder()
            .metastoreService("string")
            .sparkHistoryServerConfig(SparkHistoryServerConfigArgs.builder()
                .dataprocCluster("string")
                .build())
            .build())
        .build())
    .labels(Map.of("string", "string"))
    .location("string")
    .project("string")
    .pysparkBatch(PySparkBatchArgs.builder()
        .mainPythonFileUri("string")
        .archiveUris("string")
        .args("string")
        .fileUris("string")
        .jarFileUris("string")
        .pythonFileUris("string")
        .build())
    .requestId("string")
    .runtimeConfig(RuntimeConfigArgs.builder()
        .containerImage("string")
        .properties(Map.of("string", "string"))
        .repositoryConfig(RepositoryConfigArgs.builder()
            .pypiRepositoryConfig(PyPiRepositoryConfigArgs.builder()
                .pypiRepository("string")
                .build())
            .build())
        .version("string")
        .build())
    .sparkBatch(SparkBatchArgs.builder()
        .archiveUris("string")
        .args("string")
        .fileUris("string")
        .jarFileUris("string")
        .mainClass("string")
        .mainJarFileUri("string")
        .build())
    .sparkRBatch(SparkRBatchArgs.builder()
        .mainRFileUri("string")
        .archiveUris("string")
        .args("string")
        .fileUris("string")
        .build())
    .sparkSqlBatch(SparkSqlBatchArgs.builder()
        .queryFileUri("string")
        .jarFileUris("string")
        .queryVariables(Map.of("string", "string"))
        .build())
    .build());
batch_resource = google_native.dataproc.v1.Batch("batchResource",
    batch_id="string",
    environment_config={
        "execution_config": {
            "idle_ttl": "string",
            "kms_key": "string",
            "network_tags": ["string"],
            "network_uri": "string",
            "service_account": "string",
            "staging_bucket": "string",
            "subnetwork_uri": "string",
            "ttl": "string",
        },
        "peripherals_config": {
            "metastore_service": "string",
            "spark_history_server_config": {
                "dataproc_cluster": "string",
            },
        },
    },
    labels={
        "string": "string",
    },
    location="string",
    project="string",
    pyspark_batch={
        "main_python_file_uri": "string",
        "archive_uris": ["string"],
        "args": ["string"],
        "file_uris": ["string"],
        "jar_file_uris": ["string"],
        "python_file_uris": ["string"],
    },
    request_id="string",
    runtime_config={
        "container_image": "string",
        "properties": {
            "string": "string",
        },
        "repository_config": {
            "pypi_repository_config": {
                "pypi_repository": "string",
            },
        },
        "version": "string",
    },
    spark_batch={
        "archive_uris": ["string"],
        "args": ["string"],
        "file_uris": ["string"],
        "jar_file_uris": ["string"],
        "main_class": "string",
        "main_jar_file_uri": "string",
    },
    spark_r_batch={
        "main_r_file_uri": "string",
        "archive_uris": ["string"],
        "args": ["string"],
        "file_uris": ["string"],
    },
    spark_sql_batch={
        "query_file_uri": "string",
        "jar_file_uris": ["string"],
        "query_variables": {
            "string": "string",
        },
    })
const batchResource = new google_native.dataproc.v1.Batch("batchResource", {
    batchId: "string",
    environmentConfig: {
        executionConfig: {
            idleTtl: "string",
            kmsKey: "string",
            networkTags: ["string"],
            networkUri: "string",
            serviceAccount: "string",
            stagingBucket: "string",
            subnetworkUri: "string",
            ttl: "string",
        },
        peripheralsConfig: {
            metastoreService: "string",
            sparkHistoryServerConfig: {
                dataprocCluster: "string",
            },
        },
    },
    labels: {
        string: "string",
    },
    location: "string",
    project: "string",
    pysparkBatch: {
        mainPythonFileUri: "string",
        archiveUris: ["string"],
        args: ["string"],
        fileUris: ["string"],
        jarFileUris: ["string"],
        pythonFileUris: ["string"],
    },
    requestId: "string",
    runtimeConfig: {
        containerImage: "string",
        properties: {
            string: "string",
        },
        repositoryConfig: {
            pypiRepositoryConfig: {
                pypiRepository: "string",
            },
        },
        version: "string",
    },
    sparkBatch: {
        archiveUris: ["string"],
        args: ["string"],
        fileUris: ["string"],
        jarFileUris: ["string"],
        mainClass: "string",
        mainJarFileUri: "string",
    },
    sparkRBatch: {
        mainRFileUri: "string",
        archiveUris: ["string"],
        args: ["string"],
        fileUris: ["string"],
    },
    sparkSqlBatch: {
        queryFileUri: "string",
        jarFileUris: ["string"],
        queryVariables: {
            string: "string",
        },
    },
});
type: google-native:dataproc/v1:Batch
properties:
    batchId: string
    environmentConfig:
        executionConfig:
            idleTtl: string
            kmsKey: string
            networkTags:
                - string
            networkUri: string
            serviceAccount: string
            stagingBucket: string
            subnetworkUri: string
            ttl: string
        peripheralsConfig:
            metastoreService: string
            sparkHistoryServerConfig:
                dataprocCluster: string
    labels:
        string: string
    location: string
    project: string
    pysparkBatch:
        archiveUris:
            - string
        args:
            - string
        fileUris:
            - string
        jarFileUris:
            - string
        mainPythonFileUri: string
        pythonFileUris:
            - string
    requestId: string
    runtimeConfig:
        containerImage: string
        properties:
            string: string
        repositoryConfig:
            pypiRepositoryConfig:
                pypiRepository: string
        version: string
    sparkBatch:
        archiveUris:
            - string
        args:
            - string
        fileUris:
            - string
        jarFileUris:
            - string
        mainClass: string
        mainJarFileUri: string
    sparkRBatch:
        archiveUris:
            - string
        args:
            - string
        fileUris:
            - string
        mainRFileUri: string
    sparkSqlBatch:
        jarFileUris:
            - string
        queryFileUri: string
        queryVariables:
            string: string
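The reference example above uses placeholder strings for every field. As a more concrete sketch (illustrative only, not generated reference output), the Python program below submits a serverless PySpark batch; the project ID, region, bucket URIs, and service account are hypothetical placeholders.

import pulumi
import pulumi_google_native as google_native

# Sketch: submit a serverless PySpark batch. All identifiers below are placeholders.
pyspark_batch = google_native.dataproc.v1.Batch(
    "example-pyspark-batch",
    batch_id="example-pyspark-batch",  # becomes the final component of the resource name
    project="my-project",              # placeholder project ID
    location="us-central1",            # placeholder region
    pyspark_batch={
        "main_python_file_uri": "gs://my-bucket/jobs/word_count.py",  # placeholder script
        "args": ["gs://my-bucket/input/", "gs://my-bucket/output/"],
    },
    runtime_config={
        "version": "2.1",              # serverless runtime version
    },
    environment_config={
        "execution_config": {
            "service_account": "batch-runner@my-project.iam.gserviceaccount.com",  # placeholder
        },
    },
)

# Export a couple of the output properties documented below.
pulumi.export("batchName", pyspark_batch.name)
pulumi.export("batchState", pyspark_batch.state)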
Batch Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
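For example, the environment_config input below is passed once as typed argument classes and once as an equivalent dictionary literal. This is an illustrative sketch with placeholder values; EnvironmentConfigArgs and ExecutionConfigArgs are the input types documented under Supporting Types.

import pulumi_google_native as google_native

# Option 1: typed argument classes (placeholder values).
env_as_args = google_native.dataproc.v1.EnvironmentConfigArgs(
    execution_config=google_native.dataproc.v1.ExecutionConfigArgs(
        staging_bucket="my-staging-bucket",  # bucket name, not a gs:// URI
        ttl="3600s",
    ),
)

# Option 2: equivalent dictionary literal with snake_case keys.
env_as_dict = {
    "execution_config": {
        "staging_bucket": "my-staging-bucket",
        "ttl": "3600s",
    },
}

batch = google_native.dataproc.v1.Batch(
    "dict-vs-args-example",
    batch_id="dict-vs-args-example",
    location="us-central1",
    environment_config=env_as_args,  # env_as_dict works the same way
)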
The Batch resource accepts the following input properties:
- BatchId string
- Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.
- EnvironmentConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EnvironmentConfig
- Optional. Environment configuration for the batch execution.
- Labels Dictionary<string, string>
- Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- Location string
- Project string
- PysparkBatch Pulumi.GoogleNative.Dataproc.V1.Inputs.PySparkBatch
- Optional. PySpark batch config.
- RequestId string
- Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest)s with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- RuntimeConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RuntimeConfig
- Optional. Runtime configuration for the batch execution.
- SparkBatch Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkBatch
- Optional. Spark batch config.
- SparkRBatch Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkRBatch
- Optional. SparkR batch config.
- SparkSqlBatch Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkSqlBatch
- Optional. SparkSql batch config.
- BatchId string
- Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.
- EnvironmentConfig EnvironmentConfigArgs
- Optional. Environment configuration for the batch execution.
- Labels map[string]string
- Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- Location string
- Project string
- PysparkBatch PySparkBatchArgs
- Optional. PySpark batch config.
- RequestId string
- Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest)s with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- RuntimeConfig RuntimeConfigArgs
- Optional. Runtime configuration for the batch execution.
- SparkBatch SparkBatchArgs
- Optional. Spark batch config.
- SparkRBatch SparkRBatchArgs
- Optional. SparkR batch config.
- SparkSqlBatch SparkSqlBatchArgs
- Optional. SparkSql batch config.
- batchId String
- Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.
- environmentConfig EnvironmentConfig 
- Optional. Environment configuration for the batch execution.
- labels Map<String,String>
- Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- location String
- project String
- pysparkBatch PySparkBatch
- Optional. PySpark batch config.
- requestId String
- Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest)s with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- runtimeConfig RuntimeConfig 
- Optional. Runtime configuration for the batch execution.
- sparkBatch SparkBatch 
- Optional. Spark batch config.
- sparkRBatch SparkRBatch 
- Optional. SparkR batch config.
- sparkSqlBatch SparkSqlBatch
- Optional. SparkSql batch config.
- batchId string
- Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.
- environmentConfig EnvironmentConfig 
- Optional. Environment configuration for the batch execution.
- labels {[key: string]: string}
- Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- location string
- project string
- pysparkBatch PySparkBatch
- Optional. PySpark batch config.
- requestId string
- Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest)s with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- runtimeConfig RuntimeConfig 
- Optional. Runtime configuration for the batch execution.
- sparkBatch SparkBatch 
- Optional. Spark batch config.
- sparkRBatch SparkRBatch 
- Optional. SparkR batch config.
- sparkSqlBatch SparkSqlBatch
- Optional. SparkSql batch config.
- batch_id str
- Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.
- environment_config EnvironmentConfigArgs
- Optional. Environment configuration for the batch execution.
- labels Mapping[str, str]
- Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- location str
- project str
- pyspark_batch PySparkBatchArgs
- Optional. PySpark batch config.
- request_id str
- Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest)s with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- runtime_config RuntimeConfigArgs
- Optional. Runtime configuration for the batch execution.
- spark_batch SparkBatchArgs
- Optional. Spark batch config.
- spark_r_batch SparkRBatchArgs
- Optional. SparkR batch config.
- spark_sql_batch SparkSqlBatchArgs
- Optional. SparkSql batch config.
- batchId String
- Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.
- environmentConfig Property Map
- Optional. Environment configuration for the batch execution.
- labels Map<String>
- Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- location String
- project String
- pysparkBatch Property Map
- Optional. PySpark batch config.
- requestId String
- Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateBatchRequest)s with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- runtimeConfig Property Map
- Optional. Runtime configuration for the batch execution.
- sparkBatch Property Map
- Optional. Spark batch config.
- sparkRBatch Property Map
- Optional. SparkR batch config.
- sparkSqlBatch Property Map
- Optional. SparkSql batch config.
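Because the request_id input above is recommended to be a UUID, a hedged sketch of supplying one from Python follows; note that a value regenerated on every update would register as a diff, so a real program would typically pin a stable ID.

import uuid
import pulumi_google_native as google_native

# Sketch: pass a UUID request_id so a retried create request is deduplicated.
# In practice you would likely pin a stable value rather than regenerate it each run.
request_id = str(uuid.uuid4())  # letters, digits, and hyphens; well under the 40-character limit

batch = google_native.dataproc.v1.Batch(
    "idempotent-batch",
    batch_id="idempotent-batch",
    location="us-central1",
    request_id=request_id,
    spark_batch={
        "main_class": "org.example.WordCount",            # placeholder main class
        "jar_file_uris": ["gs://my-bucket/jars/app.jar"],  # placeholder jar
    },
)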
Outputs
All input properties are implicitly available as output properties. Additionally, the Batch resource produces the following output properties:
- CreateTime string
- The time when the batch was created.
- Creator string
- The email address of the user who created the batch.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The resource name of the batch.
- Operation string
- The resource name of the operation associated with this batch.
- RuntimeInfo Pulumi.GoogleNative.Dataproc.V1.Outputs.RuntimeInfoResponse
- Runtime information about batch execution.
- State string
- The state of the batch.
- StateHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.StateHistoryResponse>
- Historical state information for the batch.
- StateMessage string
- Batch state details, such as a failure description if the state is FAILED.
- StateTime string
- The time when the batch entered a current state.
- Uuid string
- A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
- CreateTime string
- The time when the batch was created.
- Creator string
- The email address of the user who created the batch.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The resource name of the batch.
- Operation string
- The resource name of the operation associated with this batch.
- RuntimeInfo RuntimeInfoResponse
- Runtime information about batch execution.
- State string
- The state of the batch.
- StateHistory []StateHistoryResponse
- Historical state information for the batch.
- StateMessage string
- Batch state details, such as a failure description if the state is FAILED.
- StateTime string
- The time when the batch entered a current state.
- Uuid string
- A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
- createTime String
- The time when the batch was created.
- creator String
- The email address of the user who created the batch.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The resource name of the batch.
- operation String
- The resource name of the operation associated with this batch.
- runtimeInfo RuntimeInfoResponse
- Runtime information about batch execution.
- state String
- The state of the batch.
- stateHistory List<StateHistoryResponse>
- Historical state information for the batch.
- stateMessage String
- Batch state details, such as a failure description if the state is FAILED.
- stateTime String
- The time when the batch entered a current state.
- uuid String
- A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
- createTime string
- The time when the batch was created.
- creator string
- The email address of the user who created the batch.
- id string
- The provider-assigned unique ID for this managed resource.
- name string
- The resource name of the batch.
- operation string
- The resource name of the operation associated with this batch.
- runtimeInfo RuntimeInfoResponse
- Runtime information about batch execution.
- state string
- The state of the batch.
- stateHistory StateHistoryResponse[]
- Historical state information for the batch.
- stateMessage string
- Batch state details, such as a failure description if the state is FAILED.
- stateTime string
- The time when the batch entered a current state.
- uuid string
- A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
- create_time str
- The time when the batch was created.
- creator str
- The email address of the user who created the batch.
- id str
- The provider-assigned unique ID for this managed resource.
- name str
- The resource name of the batch.
- operation str
- The resource name of the operation associated with this batch.
- runtime_info RuntimeInfoResponse
- Runtime information about batch execution.
- state str
- The state of the batch.
- state_history Sequence[StateHistoryResponse]
- Historical state information for the batch.
- state_message str
- Batch state details, such as a failure description if the state is FAILED.
- state_time str
- The time when the batch entered a current state.
- uuid str
- A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
- createTime String
- The time when the batch was created.
- creator String
- The email address of the user who created the batch.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The resource name of the batch.
- operation String
- The resource name of the operation associated with this batch.
- runtimeInfo Property Map
- Runtime information about batch execution.
- state String
- The state of the batch.
- stateHistory List<Property Map>
- Historical state information for the batch.
- stateMessage String
- Batch state details, such as a failure description if the state is FAILED.
- stateTime String
- The time when the batch entered a current state.
- uuid String
- A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
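As a small hedged sketch, the outputs above can be read off a Batch resource object in Python (here `batch` stands for a resource constructed as in the examples earlier) and exported from the stack.

import pulumi

# `batch` is assumed to be a google_native.dataproc.v1.Batch resource created earlier.
pulumi.export("batchUuid", batch.uuid)            # service-generated UUID
pulumi.export("batchState", batch.state)          # current batch state
pulumi.export("batchOperation", batch.operation)  # associated long-running operation name
pulumi.export("batchCreateTime", batch.create_time)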
Supporting Types
EnvironmentConfig, EnvironmentConfigArgs    
- ExecutionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfig
- Optional. Execution configuration for a workload.
- PeripheralsConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfig
- Optional. Peripherals configuration that the workload has access to.
- ExecutionConfig ExecutionConfig
- Optional. Execution configuration for a workload.
- PeripheralsConfig PeripheralsConfig
- Optional. Peripherals configuration that the workload has access to.
- executionConfig ExecutionConfig
- Optional. Execution configuration for a workload.
- peripheralsConfig PeripheralsConfig
- Optional. Peripherals configuration that the workload has access to.
- executionConfig ExecutionConfig
- Optional. Execution configuration for a workload.
- peripheralsConfig PeripheralsConfig
- Optional. Peripherals configuration that the workload has access to.
- execution_config ExecutionConfig
- Optional. Execution configuration for a workload.
- peripherals_config PeripheralsConfig
- Optional. Peripherals configuration that the workload has access to.
- executionConfig Property Map
- Optional. Execution configuration for a workload.
- peripheralsConfig Property Map
- Optional. Peripherals configuration that the workload has access to.
EnvironmentConfigResponse, EnvironmentConfigResponseArgs      
- ExecutionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfigResponse
- Optional. Execution configuration for a workload.
- PeripheralsConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfigResponse
- Optional. Peripherals configuration that the workload has access to.
- ExecutionConfig ExecutionConfigResponse
- Optional. Execution configuration for a workload.
- PeripheralsConfig PeripheralsConfigResponse
- Optional. Peripherals configuration that the workload has access to.
- executionConfig ExecutionConfigResponse
- Optional. Execution configuration for a workload.
- peripheralsConfig PeripheralsConfigResponse
- Optional. Peripherals configuration that the workload has access to.
- executionConfig ExecutionConfigResponse
- Optional. Execution configuration for a workload.
- peripheralsConfig PeripheralsConfigResponse
- Optional. Peripherals configuration that the workload has access to.
- execution_config ExecutionConfigResponse
- Optional. Execution configuration for a workload.
- peripherals_config PeripheralsConfigResponse
- Optional. Peripherals configuration that the workload has access to.
- executionConfig Property Map
- Optional. Execution configuration for a workload.
- peripheralsConfig Property Map
- Optional. Peripherals configuration that the workload has access to.
ExecutionConfig, ExecutionConfigArgs    
- IdleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption.
- NetworkTags List<string>
- Optional. Tags used for network traffic control.
- NetworkUri string
- Optional. Network URI to connect workload to.
- ServiceAccount string
- Optional. Service account used to execute the workload.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- SubnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- Ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- IdleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption.
- NetworkTags []string
- Optional. Tags used for network traffic control.
- NetworkUri string
- Optional. Network URI to connect workload to.
- ServiceAccount string
- Optional. Service account used to execute the workload.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- SubnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- Ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl String
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption.
- networkTags List<String>
- Optional. Tags used for network traffic control.
- networkUri String
- Optional. Network URI to connect workload to.
- serviceAccount String
- Optional. Service account used to execute the workload.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri String
- Optional. Subnetwork URI to connect workload to.
- ttl String
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey string
- Optional. The Cloud KMS key to use for encryption.
- networkTags string[]
- Optional. Tags used for network traffic control.
- networkUri string
- Optional. Network URI to connect workload to.
- serviceAccount string
- Optional. Service account used to execute the workload.
- stagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idle_ttl str
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kms_key str
- Optional. The Cloud KMS key to use for encryption.
- network_tags Sequence[str]
- Optional. Tags used for network traffic control.
- network_uri str
- Optional. Network URI to connect workload to.
- service_account str
- Optional. Service account used to execute the workload.
- staging_bucket str
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetwork_uri str
- Optional. Subnetwork URI to connect workload to.
- ttl str
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl String
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption.
- networkTags List<String>
- Optional. Tags used for network traffic control.
- networkUri String
- Optional. Network URI to connect workload to.
- serviceAccount String
- Optional. Service account used to execute the workload.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri String
- Optional. Subnetwork URI to connect workload to.
- ttl String
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
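Tying the ExecutionConfig fields above together, the following Python sketch builds a configuration for a batch workload; the bucket, service account, and subnetwork values are placeholders, and idle_ttl is intentionally omitted because it cannot be set on a batch workload.

import pulumi_google_native as google_native

# Sketch of an ExecutionConfig for a batch workload (placeholder values).
execution_config = google_native.dataproc.v1.ExecutionConfigArgs(
    ttl="14400s",                        # JSON Duration string: terminate after 4 hours
    staging_bucket="my-staging-bucket",  # bucket name only, not "gs://my-staging-bucket"
    service_account="batch-runner@my-project.iam.gserviceaccount.com",
    network_tags=["dataproc-serverless"],  # tags used for network traffic control
    subnetwork_uri="projects/my-project/regions/us-central1/subnetworks/default",
)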
ExecutionConfigResponse, ExecutionConfigResponseArgs      
- IdleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption.
- NetworkTags List<string>
- Optional. Tags used for network traffic control.
- NetworkUri string
- Optional. Network URI to connect workload to.
- ServiceAccount string
- Optional. Service account used to execute the workload.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- SubnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- Ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- IdleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption.
- NetworkTags []string
- Optional. Tags used for network traffic control.
- NetworkUri string
- Optional. Network URI to connect workload to.
- ServiceAccount string
- Optional. Service account used to execute the workload.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- SubnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- Ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl String
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption.
- networkTags List<String>
- Optional. Tags used for network traffic control.
- networkUri String
- Optional. Network URI to connect workload to.
- serviceAccount String
- Optional. Service account used to execute the workload.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri String
- Optional. Subnetwork URI to connect workload to.
- ttl String
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey string
- Optional. The Cloud KMS key to use for encryption.
- networkTags string[]
- Optional. Tags used for network traffic control.
- networkUri string
- Optional. Network URI to connect workload to.
- serviceAccount string
- Optional. Service account used to execute the workload.
- stagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idle_ttl str
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kms_key str
- Optional. The Cloud KMS key to use for encryption.
- network_tags Sequence[str]
- Optional. Tags used for network traffic control.
- network_uri str
- Optional. Network URI to connect workload to.
- service_account str
- Optional. Service account used to execute the workload.
- staging_bucket str
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetwork_uri str
- Optional. Subnetwork URI to connect workload to.
- ttl str
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl String
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption.
- networkTags List<String>
- Optional. Tags used for network traffic control.
- networkUri String
- Optional. Network URI to connect workload to.
- serviceAccount String
- Optional. Service account used to execute the workload.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri String
- Optional. Subnetwork URI to connect workload to.
- ttl String
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
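As a quick illustration, the TypeScript sketch below sets several of these execution options on a batch with the @pulumi/google-native SDK. It is a minimal sketch: the project, bucket, subnetwork, and service account names are placeholders, not values taken from this reference.

import * as google_native from "@pulumi/google-native";

// Hypothetical Spark batch whose execution config caps the workload at 4 hours,
// stages dependencies in an existing bucket (bucket name, not a gs:// URI),
// and runs as a dedicated service account.
const executionConfigExample = new google_native.dataproc.v1.Batch("execution-config-example", {
    location: "us-central1",
    sparkBatch: {
        mainClass: "org.example.SparkApp",              // placeholder driver class
        jarFileUris: ["gs://example-bucket/app.jar"],   // placeholder jar
    },
    environmentConfig: {
        executionConfig: {
            ttl: "14400s",                              // Duration JSON form: 4 hours
            stagingBucket: "example-staging-bucket",
            serviceAccount: "batch-runner@example-project.iam.gserviceaccount.com",
            subnetworkUri: "example-subnet",
            networkTags: ["dataproc-serverless"],
        },
    },
});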
PeripheralsConfig, PeripheralsConfigArgs    
- MetastoreService string
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- MetastoreService string
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- SparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreService String
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreService string
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastore_service str
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- spark_history_server_config SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreService String
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig Property Map
- Optional. The Spark History Server configuration for the workload.
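To show how these two fields fit together, here is a hedged TypeScript sketch that attaches an existing Dataproc Metastore service and a Spark History Server cluster to a batch; both resource names and the jar URI are placeholders.

import * as google_native from "@pulumi/google-native";

// Hypothetical batch wired to pre-existing peripherals: a Dataproc Metastore service
// and a Dataproc cluster acting as a persistent Spark History Server.
const peripheralsExample = new google_native.dataproc.v1.Batch("peripherals-example", {
    location: "us-central1",
    sparkBatch: {
        mainJarFileUri: "gs://example-bucket/app.jar",
    },
    environmentConfig: {
        peripheralsConfig: {
            metastoreService: "projects/example-project/locations/us-central1/services/example-metastore",
            sparkHistoryServerConfig: {
                dataprocCluster: "projects/example-project/regions/us-central1/clusters/example-history-cluster",
            },
        },
    },
});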
PeripheralsConfigResponse, PeripheralsConfigResponseArgs      
- MetastoreService string
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- MetastoreService string
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- SparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreService String
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreService string
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastore_service str
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- spark_history_server_config SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreService String
- Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig Property Map
- Optional. The Spark History Server configuration for the workload.
PyPiRepositoryConfig, PyPiRepositoryConfigArgs        
- PypiRepository string
- Optional. PyPi repository address
- PypiRepository string
- Optional. PyPi repository address
- pypiRepository String
- Optional. PyPi repository address
- pypiRepository string
- Optional. PyPi repository address
- pypi_repository str
- Optional. PyPi repository address
- pypiRepository String
- Optional. PyPi repository address
PyPiRepositoryConfigResponse, PyPiRepositoryConfigResponseArgs          
- PypiRepository string
- Optional. PyPi repository address
- PypiRepository string
- Optional. PyPi repository address
- pypiRepository String
- Optional. PyPi repository address
- pypiRepository string
- Optional. PyPi repository address
- pypi_repository str
- Optional. PyPi repository address
- pypiRepository String
- Optional. PyPi repository address
PySparkBatch, PySparkBatchArgs      
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- PythonFileUris List<string>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- PythonFileUris []string
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- mainPythonFileUri string
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- pythonFileUris string[]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- main_python_file_uri str
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- python_file_uris Sequence[str]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
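A minimal TypeScript sketch of a PySpark batch follows; every URI is a placeholder, and only mainPythonFileUri is required.

import * as google_native from "@pulumi/google-native";

// Hypothetical PySpark batch: the main .py file drives the job; helper code and jars are optional.
const pysparkExample = new google_native.dataproc.v1.Batch("pyspark-example", {
    location: "us-central1",
    pysparkBatch: {
        mainPythonFileUri: "gs://example-bucket/jobs/main.py",        // must be a .py file
        pythonFileUris: ["gs://example-bucket/jobs/helpers.zip"],
        jarFileUris: ["gs://example-bucket/libs/spark-bigquery.jar"],
        args: ["--date", "2024-01-01"],   // avoid --conf here; use runtimeConfig.properties instead
    },
});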
PySparkBatchResponse, PySparkBatchResponseArgs        
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- PythonFileUris List<string>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- PythonFileUris []string
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainPythonFileUri string
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- pythonFileUris string[]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- main_python_file_uri str
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- python_file_uris Sequence[str]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
RepositoryConfig, RepositoryConfigArgs    
- PypiRepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfig
- Optional. Configuration for PyPi repository.
- PypiRepositoryConfig PyPiRepositoryConfig
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig PyPiRepositoryConfig
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig PyPiRepositoryConfig
- Optional. Configuration for PyPi repository.
- pypi_repository_config PyPiRepositoryConfig
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig Property Map
- Optional. Configuration for PyPi repository.
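Combining RepositoryConfig with the PyPiRepositoryConfig type above, here is a hedged TypeScript sketch that points the runtime at a private PyPI mirror; the repository URL and file URIs are placeholders.

import * as google_native from "@pulumi/google-native";

// Hypothetical batch whose Python dependencies are resolved from a private PyPI repository.
const repositoryConfigExample = new google_native.dataproc.v1.Batch("repository-config-example", {
    location: "us-central1",
    pysparkBatch: {
        mainPythonFileUri: "gs://example-bucket/jobs/main.py",
    },
    runtimeConfig: {
        repositoryConfig: {
            pypiRepositoryConfig: {
                pypiRepository: "https://pypi.example.com/simple/",
            },
        },
    },
});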
RepositoryConfigResponse, RepositoryConfigResponseArgs      
- PypiRepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfigResponse
- Optional. Configuration for PyPi repository.
- PypiRepositoryConfig PyPiRepositoryConfigResponse
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig PyPiRepositoryConfigResponse
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig PyPiRepositoryConfigResponse
- Optional. Configuration for PyPi repository.
- pypi_repository_config PyPiRepositoryConfigResponse
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig Property Map
- Optional. Configuration for PyPi repository.
RuntimeConfig, RuntimeConfigArgs    
- ContainerImage string
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- RepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RepositoryConfig
- Optional. Dependency repository configuration.
- Version string
- Optional. Version of the batch runtime.
- ContainerImage string
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- Properties map[string]string
- Optional. A mapping of property names to values, which are used to configure workload execution.
- RepositoryConfig RepositoryConfig 
- Optional. Dependency repository configuration.
- Version string
- Optional. Version of the batch runtime.
- containerImage String
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Map<String,String>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig RepositoryConfig 
- Optional. Dependency repository configuration.
- version String
- Optional. Version of the batch runtime.
- containerImage string
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig RepositoryConfig 
- Optional. Dependency repository configuration.
- version string
- Optional. Version of the batch runtime.
- container_image str
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repository_config RepositoryConfig 
- Optional. Dependency repository configuration.
- version str
- Optional. Version of the batch runtime.
- containerImage String
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Map<String>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig Property Map
- Optional. Dependency repository configuration.
- version String
- Optional. Version of the batch runtime.
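For context, the TypeScript sketch below pins the runtime version, supplies Spark properties, and points at a custom container image; the image path and property values are illustrative placeholders.

import * as google_native from "@pulumi/google-native";

// Hypothetical batch with an explicit Serverless runtime version, tuned Spark properties,
// and a custom container image for the job runtime environment.
const runtimeConfigExample = new google_native.dataproc.v1.Batch("runtime-config-example", {
    location: "us-central1",
    sparkBatch: {
        mainClass: "org.example.SparkApp",
        jarFileUris: ["gs://example-bucket/app.jar"],
    },
    runtimeConfig: {
        version: "2.1",
        containerImage: "us-central1-docker.pkg.dev/example-project/images/spark-custom:latest",
        properties: {
            "spark.executor.instances": "4",
            "spark.executor.memory": "8g",
        },
    },
});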
RuntimeConfigResponse, RuntimeConfigResponseArgs      
- ContainerImage string
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- RepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RepositoryConfigResponse
- Optional. Dependency repository configuration.
- Version string
- Optional. Version of the batch runtime.
- ContainerImage string
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- Properties map[string]string
- Optional. A mapping of property names to values, which are used to configure workload execution.
- RepositoryConfig RepositoryConfigResponse
- Optional. Dependency repository configuration.
- Version string
- Optional. Version of the batch runtime.
- containerImage String
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Map<String,String>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig RepositoryConfigResponse
- Optional. Dependency repository configuration.
- version String
- Optional. Version of the batch runtime.
- containerImage string
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig RepositoryConfigResponse
- Optional. Dependency repository configuration.
- version string
- Optional. Version of the batch runtime.
- container_image str
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repository_config RepositoryConfigResponse
- Optional. Dependency repository configuration.
- version str
- Optional. Version of the batch runtime.
- containerImage String
- Optional. Optional custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Map<String>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig Property Map
- Optional. Dependency repository configuration.
- version String
- Optional. Version of the batch runtime.
RuntimeInfoResponse, RuntimeInfoResponseArgs      
- ApproximateUsage Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageMetricsResponse
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- CurrentUsage Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageSnapshotResponse
- Snapshot of current workload resource usage.
- DiagnosticOutputUri string
- A URI pointing to the location of the diagnostics tarball.
- Endpoints Dictionary<string, string>
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- OutputUri string
- A URI pointing to the location of the stdout and stderr of the workload.
- ApproximateUsage UsageMetricsResponse
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- CurrentUsage UsageSnapshotResponse
- Snapshot of current workload resource usage.
- DiagnosticOutputUri string
- A URI pointing to the location of the diagnostics tarball.
- Endpoints map[string]string
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- OutputUri string
- A URI pointing to the location of the stdout and stderr of the workload.
- approximateUsage UsageMetricsResponse
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- currentUsage UsageSnapshotResponse
- Snapshot of current workload resource usage.
- diagnosticOutputUri String
- A URI pointing to the location of the diagnostics tarball.
- endpoints Map<String,String>
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- outputUri String
- A URI pointing to the location of the stdout and stderr of the workload.
- approximateUsage UsageMetricsResponse
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- currentUsage UsageSnapshotResponse
- Snapshot of current workload resource usage.
- diagnosticOutputUri string
- A URI pointing to the location of the diagnostics tarball.
- endpoints {[key: string]: string}
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- outputUri string
- A URI pointing to the location of the stdout and stderr of the workload.
- approximate_usage UsageMetricsResponse
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- current_usage UsageSnapshotResponse
- Snapshot of current workload resource usage.
- diagnostic_output_uri str
- A URI pointing to the location of the diagnostics tarball.
- endpoints Mapping[str, str]
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- output_uri str
- A URI pointing to the location of the stdout and stderr of the workload.
- approximateUsage Property Map
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- currentUsage Property Map
- Snapshot of current workload resource usage.
- diagnosticOutputUri String
- A URI pointing to the location of the diagnostics tarball.
- endpoints Map<String>
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- outputUri String
- A URI pointing to the location of the stdout and stderr of the workload.
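RuntimeInfoResponse is output-only. Assuming the Batch resource surfaces it as its runtimeInfo output property, a program can read it back once the batch exists, as in this TypeScript sketch (the jar URI is a placeholder):

import * as google_native from "@pulumi/google-native";

const runtimeInfoExample = new google_native.dataproc.v1.Batch("runtime-info-example", {
    location: "us-central1",
    sparkBatch: { mainJarFileUri: "gs://example-bucket/app.jar" },   // placeholder jar
});

// Export where driver stdout/stderr lands and any remote access endpoints,
// assuming runtimeInfo is exposed as an output of the resource.
export const outputUri = runtimeInfoExample.runtimeInfo.apply(info => info.outputUri);
export const endpoints = runtimeInfoExample.runtimeInfo.apply(info => info.endpoints);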
SparkBatch, SparkBatchArgs    
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- MainClass string
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- MainJarFileUri string
- Optional. The HCFS URI of the jar file that contains the main class.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- MainClass string
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- MainJarFileUri string
- Optional. The HCFS URI of the jar file that contains the main class.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainClass String
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- mainJarFileUri String
- Optional. The HCFS URI of the jar file that contains the main class.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainClass string
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- mainJarFileUri string
- Optional. The HCFS URI of the jar file that contains the main class.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- main_class str
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- main_jar_file_uri str
- Optional. The HCFS URI of the jar file that contains the main class.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainClass String
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- mainJarFileUri String
- Optional. The HCFS URI of the jar file that contains the main class.
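A short TypeScript sketch of a Spark (JVM) batch: here the driver is named via mainClass, so the jar containing that class is listed in jarFileUris (mainJarFileUri would be the alternative). All URIs and class names are placeholders.

import * as google_native from "@pulumi/google-native";

// Hypothetical Spark batch driven by a main class packaged in an assembly jar.
const sparkExample = new google_native.dataproc.v1.Batch("spark-example", {
    location: "us-central1",
    sparkBatch: {
        mainClass: "org.example.etl.Main",
        jarFileUris: ["gs://example-bucket/libs/etl-assembly.jar"],
        args: ["--input", "gs://example-bucket/raw/"],
    },
});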
SparkBatchResponse, SparkBatchResponseArgs      
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- MainClass string
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- MainJarFileUri string
- Optional. The HCFS URI of the jar file that contains the main class.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- MainClass string
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- MainJarFileUri string
- Optional. The HCFS URI of the jar file that contains the main class.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainClass String
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- mainJarFileUri String
- Optional. The HCFS URI of the jar file that contains the main class.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainClass string
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- mainJarFileUri string
- Optional. The HCFS URI of the jar file that contains the main class.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- main_class str
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- main_jar_file_uri str
- Optional. The HCFS URI of the jar file that contains the main class.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainClass String
- Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- mainJarFileUri String
- Optional. The HCFS URI of the jar file that contains the main class.
SparkHistoryServerConfig, SparkHistoryServerConfigArgs        
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataproc_cluster str
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
SparkHistoryServerConfigResponse, SparkHistoryServerConfigResponseArgs          
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataproc_cluster str
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
SparkRBatch, SparkRBatchArgs    
- MainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- MainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- mainRFileUri String
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- mainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- main_r_file_uri str
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- mainRFileUri String
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
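A corresponding TypeScript sketch for a SparkR batch; only mainRFileUri is required, and all URIs are placeholders.

import * as google_native from "@pulumi/google-native";

// Hypothetical SparkR batch: the main .R script plus an optional data file and arguments.
const sparkRExample = new google_native.dataproc.v1.Batch("sparkr-example", {
    location: "us-central1",
    sparkRBatch: {
        mainRFileUri: "gs://example-bucket/jobs/analysis.R",   // must be a .R or .r file
        fileUris: ["gs://example-bucket/data/lookup.csv"],
        args: ["--iterations", "100"],
    },
});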
SparkRBatchResponse, SparkRBatchResponseArgs      
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- MainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- MainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- mainRFileUri String
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- mainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- main_r_file_uri str
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor.
- mainRFileUri String
- The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
SparkSqlBatch, SparkSqlBatchArgs      
- QueryFileUri string
- The HCFS URI of the script that contains Spark SQL queries to execute.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- QueryVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- QueryFileUri string
- The HCFS URI of the script that contains Spark SQL queries to execute.
- JarFileUris []string
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- QueryVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- queryFileUri String
- The HCFS URI of the script that contains Spark SQL queries to execute.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- queryVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- queryFileUri string
- The HCFS URI of the script that contains Spark SQL queries to execute.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- queryVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- query_file_uri str
- The HCFS URI of the script that contains Spark SQL queries to execute.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- query_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- queryFileUri String
- The HCFS URI of the script that contains Spark SQL queries to execute.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- queryVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
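The SparkSqlBatch properties map onto the sparkSqlBatch input in the same way. A hedged TypeScript sketch follows; the script URI, jar, and variable values are placeholders.
import * as google_native from "@pulumi/google-native";

// Minimal Spark SQL batch sketch; URIs and variable values are hypothetical.
const sqlBatch = new google_native.dataproc.v1.Batch("sql-batch", {
    location: "us-central1",
    sparkSqlBatch: {
        // Script containing the Spark SQL queries to execute.
        queryFileUri: "gs://my-bucket/sql/daily_report.sql",
        // Equivalent to SET run_date="2024-05-01"; inside the script.
        queryVariables: {
            run_date: "2024-05-01",
        },
        // Extra jars (for example, UDF libraries) added to the Spark CLASSPATH.
        jarFileUris: ["gs://my-bucket/jars/custom-udfs.jar"],
    },
});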
SparkSqlBatchResponse, SparkSqlBatchResponseArgs        
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- QueryFileUri string
- The HCFS URI of the script that contains Spark SQL queries to execute.
- QueryVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- JarFileUris []string
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- QueryFileUri string
- The HCFS URI of the script that contains Spark SQL queries to execute.
- QueryVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- queryFileUri String
- The HCFS URI of the script that contains Spark SQL queries to execute.
- queryVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris string[]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- queryFileUri string
- The HCFS URI of the script that contains Spark SQL queries to execute.
- queryVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- query_file_uri str
- The HCFS URI of the script that contains Spark SQL queries to execute.
- query_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- queryFileUri String
- The HCFS URI of the script that contains Spark SQL queries to execute.
- queryVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
StateHistoryResponse, StateHistoryResponseArgs      
- State string
- The state of the batch at this point in history.
- StateMessage string
- Details about the state at this point in history.
- StateStartTime string
- The time when the batch entered the historical state.
- State string
- The state of the batch at this point in history.
- StateMessage string
- Details about the state at this point in history.
- StateStartTime string
- The time when the batch entered the historical state.
- state String
- The state of the batch at this point in history.
- stateMessage String
- Details about the state at this point in history.
- stateStartTime String
- The time when the batch entered the historical state.
- state string
- The state of the batch at this point in history.
- stateMessage string
- Details about the state at this point in history.
- stateStartTime string
- The time when the batch entered the historical state.
- state str
- The state of the batch at this point in history.
- state_message str
- Details about the state at this point in history.
- state_start_time str
- The time when the batch entered the historical state.
- state String
- The state of the batch at this point in history.
- stateMessage String
- Details about the state at this point in history.
- stateStartTime String
- The time when the batch entered the historical state.
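StateHistoryResponse is output-only. Assuming the provider surfaces it as the batch's stateHistory output (mirroring the underlying API field), it could be read as in the sketch below; sqlBatch is the resource from the Spark SQL sketch above.
// Sketch only: assumes a stateHistory output of StateHistoryResponse[] on the Batch resource.
export const batchStateTimeline = sqlBatch.stateHistory.apply(history =>
    history.map(entry => `${entry.stateStartTime}: ${entry.state} - ${entry.stateMessage}`),
);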
UsageMetricsResponse, UsageMetricsResponseArgs      
- AcceleratorType string
- Optional. Accelerator type being used, if any
- MilliAcceleratorSeconds string
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuSeconds string
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGbSeconds string
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- AcceleratorType string
- Optional. Accelerator type being used, if any
- MilliAcceleratorSeconds string
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuSeconds string
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGbSeconds string
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- acceleratorType String
- Optional. Accelerator type being used, if any
- milliAcceleratorSeconds String
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuSeconds String
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbSeconds String
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- acceleratorType string
- Optional. Accelerator type being used, if any
- milliAcceleratorSeconds string
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuSeconds string
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbSeconds string
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- accelerator_type str
- Optional. Accelerator type being used, if any
- milli_accelerator_seconds str
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milli_dcu_seconds str
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffle_storage_gb_seconds str
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- acceleratorType String
- Optional. Accelerator type being used, if any
- milliAcceleratorSeconds String
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuSeconds String
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbSeconds String
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
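UsageMetricsResponse describes cumulative usage and, in the underlying API, appears as the approximateUsage field of the batch's runtime info. Assuming the provider exposes the same shape under the runtimeInfo output, a sketch:
// Sketch only: assumes runtimeInfo.approximateUsage is populated once the batch has run.
export const approximateDcuSeconds = sqlBatch.runtimeInfo.apply(
    info => info.approximateUsage?.milliDcuSeconds,
);
export const approximateShuffleGbSeconds = sqlBatch.runtimeInfo.apply(
    info => info.approximateUsage?.shuffleStorageGbSeconds,
);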
UsageSnapshotResponse, UsageSnapshotResponseArgs      
- AcceleratorType string
- Optional. Accelerator type being used, if any
- MilliAccelerator string
- Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- MilliDcu string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuPremium string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGb string
- Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- ShuffleStorageGbPremium string
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- SnapshotTime string
- Optional. The timestamp of the usage snapshot.
- AcceleratorType string
- Optional. Accelerator type being used, if any
- MilliAccelerator string
- Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- MilliDcu string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuPremium string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGb string
- Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- ShuffleStorageGbPremium string
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- SnapshotTime string
- Optional. The timestamp of the usage snapshot.
- acceleratorType String
- Optional. Accelerator type being used, if any
- milliAccelerator String
- Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- milliDcu String
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuPremium String
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGb String
- Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- shuffleStorageGbPremium String
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- snapshotTime String
- Optional. The timestamp of the usage snapshot.
- acceleratorType string
- Optional. Accelerator type being used, if any
- milliAccelerator string
- Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- milliDcu string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuPremium string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGb string
- Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- shuffleStorageGbPremium string
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- snapshotTime string
- Optional. The timestamp of the usage snapshot.
- accelerator_type str
- Optional. Accelerator type being used, if any
- milli_accelerator str
- Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- milli_dcu str
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milli_dcu_premium str
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffle_storage_gb str
- Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- shuffle_storage_gb_premium str
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- snapshot_time str
- Optional. The timestamp of the usage snapshot.
- acceleratorType String
- Optional. Accelerator type being used, if any
- milliAccelerator String
- Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- milliDcu String
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuPremium String
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGb String
- Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- shuffleStorageGbPremium String
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing))
- snapshotTime String
- Optional. The timestamp of the usage snapshot.
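UsageSnapshotResponse is the point-in-time counterpart of the cumulative metrics above; in the underlying API it appears as currentUsage on the batch's runtime info. Assuming the same shape on the runtimeInfo output, a sketch:
// Sketch only: assumes runtimeInfo.currentUsage mirrors the API's UsageSnapshot.
export const currentUsageSummary = sqlBatch.runtimeInfo.apply(info => ({
    snapshotTime: info.currentUsage?.snapshotTime,
    milliDcu: info.currentUsage?.milliDcu,
    shuffleStorageGb: info.currentUsage?.shuffleStorageGb,
}));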
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0