Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.datapipelines/v1.getPipeline
Looks up a single pipeline. Returns a “NOT_FOUND” error if no such pipeline exists. Returns a “FORBIDDEN” error if the caller doesn’t have permission to access it.
Using getPipeline
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getPipeline(args: GetPipelineArgs, opts?: InvokeOptions): Promise<GetPipelineResult>
function getPipelineOutput(args: GetPipelineOutputArgs, opts?: InvokeOptions): Output<GetPipelineResult>
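For example, a minimal TypeScript sketch of the output form looks like the following; the project, location, and pipeline ID values are placeholders, and the exported fields mirror the result properties documented below.

import * as google_native from "@pulumi/google-native";

// Look up an existing pipeline (placeholder IDs; substitute your own values).
const pipeline = google_native.datapipelines.v1.getPipelineOutput({
    project: "my-project",
    location: "us-central1",
    pipelineId: "my-pipeline",
});

// The result is Output-wrapped, so its properties can be exported or passed to other resources.
export const pipelineState = pipeline.state;
export const pipelineDisplayName = pipeline.displayName;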
def get_pipeline(location: Optional[str] = None,
pipeline_id: Optional[str] = None,
project: Optional[str] = None,
opts: Optional[InvokeOptions] = None) -> GetPipelineResult
def get_pipeline_output(location: Optional[pulumi.Input[str]] = None,
pipeline_id: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
opts: Optional[InvokeOptions] = None) -> Output[GetPipelineResult]
func LookupPipeline(ctx *Context, args *LookupPipelineArgs, opts ...InvokeOption) (*LookupPipelineResult, error)
func LookupPipelineOutput(ctx *Context, args *LookupPipelineOutputArgs, opts ...InvokeOption) LookupPipelineResultOutput
> Note: This function is named LookupPipeline in the Go SDK.
public static class GetPipeline
{
public static Task<GetPipelineResult> InvokeAsync(GetPipelineArgs args, InvokeOptions? opts = null)
public static Output<GetPipelineResult> Invoke(GetPipelineInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetPipelineResult> getPipeline(GetPipelineArgs args, InvokeOptions options)
public static Output<GetPipelineResult> getPipeline(GetPipelineArgs args, InvokeOptions options)
fn::invoke:
function: google-native:datapipelines/v1:getPipeline
arguments:
# arguments dictionary
The following arguments are supported. Names are shown in camelCase; the .NET and Go SDKs use PascalCase (for example, PipelineId) and the Python SDK uses snake_case (for example, pipeline_id).
- location (string) - This property is required.
- pipelineId (string) - This property is required.
- project (string) - Optional.
getPipeline Result
The following output properties are available. Names here and in the supporting types below follow the same per-SDK casing conventions noted above.
- createTime (string) - Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
- displayName (string) - The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
- jobCount (integer) - Number of jobs.
- lastUpdateTime (string) - Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
- name (string) - The pipeline name. For example: projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID. PROJECT_ID can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see Identifying projects. LOCATION_ID is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling google.cloud.location.Locations.ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in App Engine regions. PIPELINE_ID is the ID of the pipeline. Must be unique for the selected project and location.
- pipelineSources (map of string to string) - Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
- scheduleInfo (GoogleCloudDatapipelinesV1ScheduleSpecResponse) - Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
- schedulerServiceAccountEmail (string) - Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default Compute Engine service account will be used.
- state (string) - The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
- type (string) - The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
- workload (GoogleCloudDatapipelinesV1WorkloadResponse) - Workload information for creating new jobs.
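As a sketch of how the direct (Promise-returning) form might be consumed in TypeScript, again with placeholder IDs:

import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments in, Promise-wrapped result out.
async function describePipeline(): Promise<void> {
    const result = await google_native.datapipelines.v1.getPipeline({
        project: "my-project",
        location: "us-central1",
        pipelineId: "my-pipeline",
    });
    // Scalar properties come back as plain values once the promise resolves.
    console.log(`${result.displayName}: ${result.state}, ${result.jobCount} job(s)`);
    console.log(`Scheduler service account: ${result.schedulerServiceAccountEmail}`);
}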
Supporting Types
GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse
All properties of this type are required in the response.
- additionalExperiments (list of string) - Additional experiment flags for the job.
- additionalUserLabels (map of string to string) - Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- enableStreamingEngine (boolean) - Whether to enable Streaming Engine for the job.
- flexrsGoal (string) - Set FlexRS goal for the job. See https://cloud.google.com/dataflow/docs/guides/flexrs.
- ipConfiguration (string) - Configuration for VM IPs.
- kmsKeyName (string) - Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
- machineType (string) - The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers (integer) - The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network (string) - Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers (integer) - The initial number of Compute Engine instances for the job.
- serviceAccountEmail (string) - The email address of the service account to run the job as.
- subnetwork (string) - Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation (string) - The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion (string) - The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
- workerZone (string) - The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone (string) - The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse
All properties of this type are required in the response.
- containerSpecGcsPath (string) - Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
- environment (GoogleCloudDatapipelinesV1FlexTemplateRuntimeEnvironmentResponse) - The runtime environment for the Flex Template job.
- jobName (string) - The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- launchOptions (map of string to string) - Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- parameters (map of string to string) - The parameters for the Flex Template. Example: {"num_workers":"5"}
- transformNameMappings (map of string to string) - Use this to pass transform name mappings for streaming update jobs. Example: {"oldTransformName":"newTransformName",...}
- update (boolean) - Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse
All properties of this type are required in the response.
- launchParameter (GoogleCloudDatapipelinesV1LaunchFlexTemplateParameterResponse) - Parameter to launch a job from a Flex Template.
- location (string) - The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, us-central1, us-west1.
- project (string) - The ID of the Cloud Platform project that the job belongs to.
- validateOnly (boolean) - If true, the request is validated but not actually executed. Defaults to false.
GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse
All properties of this type are required in the response.
- environment (GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse) - The runtime environment for the job.
- jobName (string) - The job name to use for the created job.
- parameters (map of string to string) - The runtime parameters to pass to the job.
- transformNameMapping (map of string to string) - Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
- update (boolean) - If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse
All properties of this type are required in the response.
- gcsPath (string) - A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
- launchParameters (GoogleCloudDatapipelinesV1LaunchTemplateParametersResponse) - The parameters of the template to launch. This should be part of the body of the POST request.
- location (string) - The regional endpoint (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- project (string) - The ID of the Cloud Platform project that the job belongs to.
- validateOnly (boolean) - If true, the request is validated but not actually executed. Defaults to false.
GoogleCloudDatapipelinesV1RuntimeEnvironmentResponse
- AdditionalExperiments This property is required. List<string> - Additional experiment flags for the job.
- AdditionalUserLabels This property is required. Dictionary<string, string> - Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- BypassTempDirValidation This property is required. bool - Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- EnableStreamingEngine This property is required. bool - Whether to enable Streaming Engine for the job.
- IpConfiguration This property is required. string - Configuration for VM IPs.
- KmsKeyName This property is required. string - Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- MachineType This property is required. string - The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers This property is required. int - The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network This property is required. string - Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers This property is required. int - The initial number of Compute Engine instances for the job.
- ServiceAccountEmail This property is required. string - The email address of the service account to run the job as.
- Subnetwork This property is required. string - Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation This property is required. string - The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion This property is required. string - The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- WorkerZone This property is required. string - The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone This property is required. string - The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- AdditionalExperiments This property is required. []string - Additional experiment flags for the job.
- AdditionalUserLabels This property is required. map[string]string - Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- BypassTempDirValidation This property is required. bool - Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- EnableStreamingEngine This property is required. bool - Whether to enable Streaming Engine for the job.
- IpConfiguration This property is required. string - Configuration for VM IPs.
- KmsKeyName This property is required. string - Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- MachineType This property is required. string - The machine type to use for the job. Defaults to the value from the template if not specified.
- MaxWorkers This property is required. int - The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Network This property is required. string - Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumWorkers This property is required. int - The initial number of Compute Engine instances for the job.
- ServiceAccountEmail This property is required. string - The email address of the service account to run the job as.
- Subnetwork This property is required. string - Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- TempLocation This property is required. string - The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- WorkerRegion This property is required. string - The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- WorkerZone This property is required. string - The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Zone This property is required. string - The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments This property is required. List<String> - Additional experiment flags for the job.
- additionalUserLabels This property is required. Map<String,String> - Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation This property is required. Boolean - Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine This property is required. Boolean - Whether to enable Streaming Engine for the job.
- ipConfiguration This property is required. String - Configuration for VM IPs.
- kmsKeyName This property is required. String - Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machineType This property is required. String - The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers This property is required. Integer - The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network This property is required. String - Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers This property is required. Integer - The initial number of Compute Engine instances for the job.
- serviceAccountEmail This property is required. String - The email address of the service account to run the job as.
- subnetwork This property is required. String - Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation This property is required. String - The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion This property is required. String - The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- workerZone This property is required. String - The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone This property is required. String - The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments This property is required. string[] - Additional experiment flags for the job.
- additionalUserLabels This property is required. {[key: string]: string} - Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation This property is required. boolean - Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine This property is required. boolean - Whether to enable Streaming Engine for the job.
- ipConfiguration This property is required. string - Configuration for VM IPs.
- kmsKeyName This property is required. string - Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machineType This property is required. string - The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers This property is required. number - The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network This property is required. string - Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers This property is required. number - The initial number of Compute Engine instances for the job.
- serviceAccountEmail This property is required. string - The email address of the service account to run the job as.
- subnetwork This property is required. string - Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation This property is required. string - The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion This property is required. string - The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- workerZone This property is required. string - The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone This property is required. string - The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additional_experiments This property is required. Sequence[str] - Additional experiment flags for the job.
- additional_user_labels This property is required. Mapping[str, str] - Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypass_temp_dir_validation This property is required. bool - Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enable_streaming_engine This property is required. bool - Whether to enable Streaming Engine for the job.
- ip_configuration This property is required. str - Configuration for VM IPs.
- kms_key_name This property is required. str - Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machine_type This property is required. str - The machine type to use for the job. Defaults to the value from the template if not specified.
- max_workers This property is required. int - The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network This property is required. str - Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_workers This property is required. int - The initial number of Compute Engine instances for the job.
- service_account_email This property is required. str - The email address of the service account to run the job as.
- subnetwork This property is required. str - Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- temp_location This property is required. str - The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- worker_region This property is required. str - The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- worker_zone This property is required. str - The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone This property is required. str - The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- additionalExperiments This property is required. List<String> - Additional experiment flags for the job.
- additionalUserLabels This property is required. Map<String> - Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
- bypassTempDirValidation This property is required. Boolean - Whether to bypass the safety checks for the job's temporary directory. Use with caution.
- enableStreamingEngine This property is required. Boolean - Whether to enable Streaming Engine for the job.
- ipConfiguration This property is required. String - Configuration for VM IPs.
- kmsKeyName This property is required. String - Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
- machineType This property is required. String - The machine type to use for the job. Defaults to the value from the template if not specified.
- maxWorkers This property is required. Number - The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- network This property is required. String - Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numWorkers This property is required. Number - The initial number of Compute Engine instances for the job.
- serviceAccountEmail This property is required. String - The email address of the service account to run the job as.
- subnetwork This property is required. String - Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- tempLocation This property is required. String - The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- workerRegion This property is required. String - The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- workerZone This property is required. String - The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- zone This property is required. String - The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
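A brief, non-authoritative TypeScript sketch of reading a few of these runtime-environment values from a looked-up pipeline. It assumes the environment hangs off the standard launch request's launchParameters field (the launch template parameters type referenced earlier on this page); all lookup identifiers are placeholders.

import * as google_native from "@pulumi/google-native";

const pipeline = google_native.datapipelines.v1.getPipelineOutput({
    project: "my-project",      // placeholder
    location: "us-central1",    // placeholder
    pipelineId: "my-pipeline",  // placeholder
});

// Runtime environment of the standard template launch, if one is configured.
const env = pipeline.apply(
    p => p.workload?.dataflowLaunchTemplateRequest?.launchParameters?.environment);

export const tempLocation = env.apply(e => e?.tempLocation); // e.g. "gs://my-bucket/tmp"
export const maxWorkers = env.apply(e => e?.maxWorkers);
export const workerRegion = env.apply(e => e?.workerRegion);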
GoogleCloudDatapipelinesV1ScheduleSpecResponse
- NextJobTime This property is required. string - When the next Scheduler job is going to run.
- Schedule This property is required. string - Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- TimeZone This property is required. string - Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- NextJobTime This property is required. string - When the next Scheduler job is going to run.
- Schedule This property is required. string - Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- TimeZone This property is required. string - Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- nextJobTime This property is required. String - When the next Scheduler job is going to run.
- schedule This property is required. String - Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone This property is required. String - Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- nextJobTime This property is required. string - When the next Scheduler job is going to run.
- schedule This property is required. string - Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone This property is required. string - Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- next_job_time This property is required. str - When the next Scheduler job is going to run.
- schedule This property is required. str - Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- time_zone This property is required. str - Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
- nextJobTime This property is required. String - When the next Scheduler job is going to run.
- schedule This property is required. String - Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
- timeZone This property is required. String - Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
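For context, a small TypeScript sketch that surfaces the schedule fields above. It assumes the pipeline result exposes a scheduleInfo property of this type, mirroring the underlying Data Pipelines API; the lookup arguments are placeholders.

import * as google_native from "@pulumi/google-native";

const pipeline = google_native.datapipelines.v1.getPipelineOutput({
    project: "my-project",      // placeholder
    location: "us-central1",    // placeholder
    pipelineId: "my-pipeline",  // placeholder
});

// Cron schedule, its time zone, and the next planned run of the linked
// Cloud Scheduler job.
export const schedule = pipeline.apply(p => p.scheduleInfo?.schedule);       // e.g. "0 */6 * * *"
export const timeZone = pipeline.apply(p => p.scheduleInfo?.timeZone);       // UTC is assumed if empty
export const nextJobTime = pipeline.apply(p => p.scheduleInfo?.nextJobTime);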
GoogleCloudDatapipelinesV1WorkloadResponse
- DataflowFlexTemplateRequest This property is required. Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse - Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- DataflowLaunchTemplateRequest This property is required. Pulumi.GoogleNative.Datapipelines.V1.Inputs.GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse - Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- DataflowFlexTemplateRequest This property is required. GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse - Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- DataflowLaunchTemplateRequest This property is required. GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse - Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest This property is required. GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse - Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest This property is required. GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse - Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest This property is required. GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse - Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest This property is required. GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse - Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflow_flex_template_request This property is required. GoogleCloudDatapipelinesV1LaunchFlexTemplateRequestResponse - Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflow_launch_template_request This property is required. GoogleCloudDatapipelinesV1LaunchTemplateRequestResponse - Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
- dataflowFlexTemplateRequest This property is required. Property Map - Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
- dataflowLaunchTemplateRequest This property is required. Property Map - Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
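To round this out, a hedged TypeScript sketch that treats the workload as a union of the two request types above: whichever of the flex or standard request is populated determines which Dataflow launch API the pipeline uses. The lookup arguments are placeholders, and the exported string values are illustrative labels rather than anything defined by this API.

import * as google_native from "@pulumi/google-native";

const pipeline = google_native.datapipelines.v1.getPipelineOutput({
    project: "my-project",      // placeholder
    location: "us-central1",    // placeholder
    pipelineId: "my-pipeline",  // placeholder
});

// A workload carries either a flex template request or a standard launch
// template request; report which one this pipeline is configured with.
export const launchApi = pipeline.apply(p =>
    p.workload?.dataflowFlexTemplateRequest ? "flex launch API"
    : p.workload?.dataflowLaunchTemplateRequest ? "standard launch API"
    : "no workload configured");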
Package Details
- Repository: Google Cloud Native pulumi/pulumi-google-native
- License: Apache-2.0