Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.storagetransfer/v1.getTransferJob
Gets a transfer job.
Using getTransferJob
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getTransferJob(args: GetTransferJobArgs, opts?: InvokeOptions): Promise<GetTransferJobResult>
function getTransferJobOutput(args: GetTransferJobOutputArgs, opts?: InvokeOptions): Output<GetTransferJobResult>
def get_transfer_job(project_id: Optional[str] = None,
                     transfer_job_id: Optional[str] = None,
                     opts: Optional[InvokeOptions] = None) -> GetTransferJobResult
def get_transfer_job_output(project_id: Optional[pulumi.Input[str]] = None,
                     transfer_job_id: Optional[pulumi.Input[str]] = None,
                     opts: Optional[InvokeOptions] = None) -> Output[GetTransferJobResult]
func LookupTransferJob(ctx *Context, args *LookupTransferJobArgs, opts ...InvokeOption) (*LookupTransferJobResult, error)
func LookupTransferJobOutput(ctx *Context, args *LookupTransferJobOutputArgs, opts ...InvokeOption) LookupTransferJobResultOutput
> Note: This function is named LookupTransferJob in the Go SDK.
public static class GetTransferJob 
{
    public static Task<GetTransferJobResult> InvokeAsync(GetTransferJobArgs args, InvokeOptions? opts = null)
    public static Output<GetTransferJobResult> Invoke(GetTransferJobInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetTransferJobResult> getTransferJob(GetTransferJobArgs args, InvokeOptions options)
public static Output<GetTransferJobResult> getTransferJob(GetTransferJobArgs args, InvokeOptions options)
fn::invoke:
  function: google-native:storagetransfer/v1:getTransferJob
  arguments:
    # arguments dictionary
The following arguments are supported:
- ProjectId string
- TransferJobId string
- ProjectId string
- TransferJobId string
- projectId String
- transferJobId String
- projectId string
- transferJobId string
- project_id str
- transfer_job_id str
- projectId String
- transferJobId String
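For example, the direct form can be awaited from a Node.js program. The snippet below is a minimal sketch, assuming the standard @pulumi/google-native module layout (storagetransfer.v1); the project and job identifiers are placeholders you would replace with your own values.

import * as google_native from "@pulumi/google-native";

// Look up an existing transfer job (placeholder identifiers -- substitute your own).
const job = google_native.storagetransfer.v1.getTransferJob({
    projectId: "my-project",
    transferJobId: "transferJobs/123456789",
});

// The direct form returns a Promise<GetTransferJobResult>.
job.then(result => console.log(result.status, result.description));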
getTransferJob Result
The following output properties are available:
- CreationTime string
- The time that the transfer job was created.
- DeletionTime string
- The time that the transfer job was deleted.
- Description string
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- EventStream Pulumi.GoogleNative.StorageTransfer.V1.Outputs.EventStreamResponse
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- LastModificationTime string
- The time that the transfer job was last modified.
- LatestOperationName string
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- LoggingConfig Pulumi.GoogleNative.StorageTransfer.V1.Outputs.LoggingConfigResponse
- Logging configuration.
- Name string
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$" PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$" Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- NotificationConfig Pulumi.GoogleNative.StorageTransfer.V1.Outputs.NotificationConfigResponse
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- Project string
- The ID of the Google Cloud project that owns the job.
- Schedule Pulumi.GoogleNative.StorageTransfer.V1.Outputs.ScheduleResponse
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- Status string
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- TransferSpec Pulumi.GoogleNative.StorageTransfer.V1.Outputs.TransferSpecResponse
- Transfer specification.
- CreationTime string
- The time that the transfer job was created.
- DeletionTime string
- The time that the transfer job was deleted.
- Description string
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- EventStream EventStreamResponse
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- LastModificationTime string
- The time that the transfer job was last modified.
- LatestOperationName string
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- LoggingConfig LoggingConfigResponse
- Logging configuration.
- Name string
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$" PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$" Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- NotificationConfig NotificationConfigResponse
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- Project string
- The ID of the Google Cloud project that owns the job.
- Schedule ScheduleResponse
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- Status string
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- TransferSpec TransferSpecResponse
- Transfer specification.
- creationTime String
- The time that the transfer job was created.
- deletionTime String
- The time that the transfer job was deleted.
- description String
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- eventStream EventStreamResponse
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- lastModificationTime String
- The time that the transfer job was last modified.
- latestOperationName String
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- loggingConfig LoggingConfigResponse
- Logging configuration.
- name String
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$" PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$" Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- notificationConfig NotificationConfigResponse
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- project String
- The ID of the Google Cloud project that owns the job.
- schedule ScheduleResponse
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- status String
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- transferSpec TransferSpecResponse
- Transfer specification.
- creationTime string
- The time that the transfer job was created.
- deletionTime string
- The time that the transfer job was deleted.
- description string
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- eventStream EventStreamResponse
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- lastModificationTime string
- The time that the transfer job was last modified.
- latestOperationName string
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- loggingConfig LoggingConfigResponse
- Logging configuration.
- name string
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$" PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$" Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- notificationConfig NotificationConfigResponse
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- project string
- The ID of the Google Cloud project that owns the job.
- schedule ScheduleResponse
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- status string
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- transferSpec TransferSpecResponse
- Transfer specification.
- creation_time str
- The time that the transfer job was created.
- deletion_time str
- The time that the transfer job was deleted.
- description str
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- event_stream EventStreamResponse
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- last_modification_time str
- The time that the transfer job was last modified.
- latest_operation_name str
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- logging_config LoggingConfigResponse
- Logging configuration.
- name str
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$" PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$" Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- notification_config NotificationConfigResponse
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- project str
- The ID of the Google Cloud project that owns the job.
- schedule ScheduleResponse
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- status str
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- transfer_spec TransferSpecResponse
- Transfer specification.
- creationTime String
- The time that the transfer job was created.
- deletionTime String
- The time that the transfer job was deleted.
- description String
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- eventStream Property Map
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- lastModificationTime String
- The time that the transfer job was last modified.
- latestOperationName String
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- loggingConfig Property Map
- Logging configuration.
- name String
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$" PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$" Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- notificationConfig Property Map
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- project String
- The ID of the Google Cloud project that owns the job.
- schedule Property Map
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- status String
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- transferSpec Property Map
- Transfer specification.
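When a lookup needs to feed other resources or stack outputs, the output form keeps the result Output-wrapped. A minimal sketch, again assuming the @pulumi/google-native Node.js SDK layout and placeholder identifiers:

import * as google_native from "@pulumi/google-native";

// Output-form lookup; the arguments may themselves be Outputs from other resources.
const jobOutput = google_native.storagetransfer.v1.getTransferJobOutput({
    projectId: "my-project",
    transferJobId: "transferJobs/123456789",
});

// Output-wrapped result properties can be exported or passed along to other resources.
export const jobStatus = jobOutput.status;
export const latestOperation = jobOutput.latestOperationName;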
Supporting Types
AwsAccessKeyResponse   
- AccessKeyId string
- AWS access key ID.
- SecretAccessKey string
- AWS secret access key. This field is not returned in RPC responses.
- AccessKeyId string
- AWS access key ID.
- SecretAccessKey string
- AWS secret access key. This field is not returned in RPC responses.
- accessKeyId String
- AWS access key ID.
- secretAccessKey String
- AWS secret access key. This field is not returned in RPC responses.
- accessKeyId string
- AWS access key ID.
- secretAccessKey string
- AWS secret access key. This field is not returned in RPC responses.
- access_key_id str
- AWS access key ID.
- secret_access_key str
- AWS secret access key. This field is not returned in RPC responses.
- accessKeyId String
- AWS access key ID.
- secretAccessKey String
- AWS secret access key. This field is not returned in RPC responses.
AwsS3CompatibleDataResponse   
- BucketName string
- Specifies the name of the bucket.
- Endpoint string
- Specifies the endpoint of the storage service.
- Path string
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- Region string
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- S3Metadata Pulumi.GoogleNative.StorageTransfer.V1.Inputs.S3CompatibleMetadataResponse
- A S3 compatible metadata.
- BucketName string
- Specifies the name of the bucket.
- Endpoint string
- Specifies the endpoint of the storage service.
- Path string
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- Region string
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- S3Metadata S3CompatibleMetadataResponse
- A S3 compatible metadata.
- bucketName String
- Specifies the name of the bucket.
- endpoint String
- Specifies the endpoint of the storage service.
- path String
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- region String
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- s3Metadata S3CompatibleMetadataResponse
- A S3 compatible metadata.
- bucketName string
- Specifies the name of the bucket.
- endpoint string
- Specifies the endpoint of the storage service.
- path string
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- region string
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- s3Metadata S3CompatibleMetadataResponse
- A S3 compatible metadata.
- bucket_name str
- Specifies the name of the bucket.
- endpoint str
- Specifies the endpoint of the storage service.
- path str
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- region str
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- s3_metadata S3CompatibleMetadataResponse
- A S3 compatible metadata.
- bucketName String
- Specifies the name of the bucket.
- endpoint String
- Specifies the endpoint of the storage service.
- path String
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- region String
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- s3Metadata Property Map
- A S3 compatible metadata.
AwsS3DataResponse  
- AwsAccessKey Pulumi.GoogleNative.StorageTransfer.V1.Inputs.AwsAccessKeyResponse
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- BucketName string
- S3 Bucket name (see Creating a bucket).
- CloudfrontDomain string
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- CredentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- RoleArn string
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using an AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
- AwsAccessKey AwsAccessKeyResponse
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- BucketName string
- S3 Bucket name (see Creating a bucket).
- CloudfrontDomain string
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- CredentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- RoleArn string
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using an AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
- awsAccessKey AwsAccessKeyResponse
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- bucketName String
- S3 Bucket name (see Creating a bucket).
- cloudfrontDomain String
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- credentialsSecret String
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- roleArn String
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using an AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
- awsAccessKey AwsAccessKeyResponse
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- bucketName string
- S3 Bucket name (see Creating a bucket).
- cloudfrontDomain string
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- credentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- roleArn string
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using an AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
- aws_access_key AwsAccessKeyResponse
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- bucket_name str
- S3 Bucket name (see Creating a bucket).
- cloudfront_domain str
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- credentials_secret str
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path str
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- role_arn str
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using an AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
- awsAccessKey Property Map
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- bucketName String
- S3 Bucket name (see Creating a bucket).
- cloudfrontDomain String
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- credentialsSecret String
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- roleArn String
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using an AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
AzureBlobStorageDataResponse    
- AzureCredentials Pulumi.GoogleNative.StorageTransfer.V1.Inputs.AzureCredentialsResponse
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- Container string
- The container to transfer from the Azure Storage account.
- CredentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- StorageAccount string
- The name of the Azure Storage account.
- AzureCredentials AzureCredentialsResponse
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- Container string
- The container to transfer from the Azure Storage account.
- CredentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- StorageAccount string
- The name of the Azure Storage account.
- azureCredentials AzureCredentialsResponse
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- container String
- The container to transfer from the Azure Storage account.
- credentialsSecret String
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- storageAccount String
- The name of the Azure Storage account.
- azureCredentials AzureCredentialsResponse
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- container string
- The container to transfer from the Azure Storage account.
- credentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- storageAccount string
- The name of the Azure Storage account.
- azure_credentials AzureCredentialsResponse
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- container str
- The container to transfer from the Azure Storage account.
- credentials_secret str
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path str
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- storage_account str
- The name of the Azure Storage account.
- azureCredentials Property Map
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- container String
- The container to transfer from the Azure Storage account.
- credentialsSecret String
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- storageAccount String
- The name of the Azure Storage account.
AzureCredentialsResponse  
- SasToken string
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
- SasToken string
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
- sasToken String
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
- sasToken string
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
- sas_token str
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
- sasToken String
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
DateResponse 
- Day int
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- Month int
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- Year int
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
- Day int
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- Month int
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- Year int
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
- day Integer
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- month Integer
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- year Integer
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
- day number
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- month number
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- year number
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
- day int
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- month int
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- year int
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
- day Number
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- month Number
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- year Number
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
EventStreamResponse  
- EventStreamExpirationTime string
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- EventStreamStartTime string
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- Name string
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
- EventStreamExpirationTime string
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- EventStreamStartTime string
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- Name string
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
- eventStreamExpirationTime String
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- eventStreamStartTime String
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- name String
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
- eventStreamExpirationTime string
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- eventStreamStartTime string
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- name string
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
- event_stream_expiration_time str
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- event_stream_start_time str
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- name str
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
- eventStreamExpirationTime String
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- eventStreamStartTime String
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- name String
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
GcsDataResponse  
- BucketName string
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
- BucketName string
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
- bucketName String
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
- bucketName string
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
- bucket_name str
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- path str
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
- bucketName String
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
HttpDataResponse  
- ListUrl string
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
- ListUrl string
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
- listUrl String
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
- listUrl string
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
- list_url str
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
- listUrl String
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
LoggingConfigResponse  
- EnableOnpremGcsTransferLogs bool
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- LogActionStates List<string>
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- LogActions List<string>
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- EnableOnpremGcsTransferLogs bool
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- LogActionStates []string
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- LogActions []string
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- enableOnpremGcsTransferLogs Boolean
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- logActionStates List<String>
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- logActions List<String>
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- enableOnpremGcsTransferLogs boolean
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- logActionStates string[]
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- logActions string[]
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- enable_onprem_gcs_transfer_logs bool
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- log_action_states Sequence[str]
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- log_actions Sequence[str]
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- enableOnpremGcsTransferLogs Boolean
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- logActionStates List<String>
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- logActions List<String>
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
MetadataOptionsResponse  
- Acl string
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- Gid string
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- KmsKey string
- Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- Mode string
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- StorageClass string
- Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- Symlink string
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- TemporaryHold string
- Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- TimeCreated string
- Specifies how each object's timeCreated metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP.
- Uid string
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- Acl string
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- Gid string
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- KmsKey string
- Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- Mode string
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- StorageClass string
- Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- Symlink string
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- TemporaryHold string
- Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- TimeCreated string
- Specifies how each object's timeCreated metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP.
- Uid string
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- acl String
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- gid String
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- kmsKey String
- Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- mode String
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- storageClass String
- Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- symlink String
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- temporaryHold String
- Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- timeCreated String
- Specifies how each object's timeCreated metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP.
- uid String
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- acl string
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- gid string
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- kmsKey string
- Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- mode string
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- storageClass string
- Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- symlink string
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- temporaryHold string
- Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- timeCreated string
- Specifies how each object's timeCreated metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP.
- uid string
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- acl str
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- gid str
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- kms_key str
- Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- mode str
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- storage_class str
- Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- symlink str
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- temporary_hold str
- Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- time_created str
- Specifies how each object's timeCreated metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP.
- uid str
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- acl String
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- gid String
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- kmsKey String
- Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- mode String
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- storageClass String
- Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- symlink String
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- temporaryHold String
- Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- timeCreated String
- Specifies how each object's timeCreated metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP.
- uid String
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
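As a rough illustration of working with these fields, the TypeScript sketch below uses a hand-written MetadataOptionsLike interface mirroring the shape above and reports which metadata behaviors were explicitly configured; the *_UNSPECIFIED sentinel check is an assumption based on the usual proto enum convention:
// Hand-written approximation of MetadataOptionsResponse; values are enum names as strings.
interface MetadataOptionsLike {
    acl?: string;
    gid?: string;
    kmsKey?: string;
    mode?: string;
    storageClass?: string;
    symlink?: string;
    temporaryHold?: string;
    timeCreated?: string;
    uid?: string;
}

function explicitlyConfigured(opts: MetadataOptionsLike): string[] {
    const configured: string[] = [];
    for (const [field, value] of Object.entries(opts)) {
        // Treat unset fields and *_UNSPECIFIED sentinels as "left to defaults".
        if (typeof value === "string" && !value.endsWith("_UNSPECIFIED")) {
            configured.push(field);
        }
    }
    return configured;
}

// Only storageClass and symlink were set here, so only they are reported.
console.log(explicitlyConfigured({
    storageClass: "STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT",
    symlink: "SYMLINK_SKIP",
}));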
NotificationConfigResponse  
- EventTypes List<string>
- Event types for which a notification is desired. If empty, send notifications for all event types.
- PayloadFormat string
- The desired format of the notification message payloads.
- PubsubTopic string
- The Topic.name of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}. Not matching this format results in an INVALID_ARGUMENT error.
- EventTypes []string
- Event types for which a notification is desired. If empty, send notifications for all event types.
- PayloadFormat string
- The desired format of the notification message payloads.
- PubsubTopic string
- The Topic.name of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}. Not matching this format results in an INVALID_ARGUMENT error.
- eventTypes List<String>
- Event types for which a notification is desired. If empty, send notifications for all event types.
- payloadFormat String
- The desired format of the notification message payloads.
- pubsubTopic String
- The Topic.name of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}. Not matching this format results in an INVALID_ARGUMENT error.
- eventTypes string[]
- Event types for which a notification is desired. If empty, send notifications for all event types.
- payloadFormat string
- The desired format of the notification message payloads.
- pubsubTopic string
- The Topic.name of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}. Not matching this format results in an INVALID_ARGUMENT error.
- event_types Sequence[str]
- Event types for which a notification is desired. If empty, send notifications for all event types.
- payload_format str
- The desired format of the notification message payloads.
- pubsub_topic str
- The Topic.name of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}. Not matching this format results in an INVALID_ARGUMENT error.
- eventTypes List<String>
- Event types for which a notification is desired. If empty, send notifications for all event types.
- payloadFormat String
- The desired format of the notification message payloads.
- pubsubTopic String
- The Topic.name of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}. Not matching this format results in an INVALID_ARGUMENT error.
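A small TypeScript helper that checks the projects/{project}/topics/{topic} format required for pubsub_topic before creating or updating a job (client-side convenience only; the service performs its own validation):
// Returns true when the topic name matches projects/{project}/topics/{topic}.
// Names that do not match cause an INVALID_ARGUMENT error server-side.
function isValidPubsubTopic(topic: string): boolean {
    return /^projects\/[^/]+\/topics\/[^/]+$/.test(topic);
}

console.log(isValidPubsubTopic("projects/my-project/topics/transfer-events")); // true
console.log(isValidPubsubTopic("transfer-events"));                            // false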
ObjectConditionsResponse  
- ExcludePrefixes List<string>
- If you specify exclude_prefixes, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes is 1000. For more information, see Filtering objects from transfers.
- IncludePrefixes List<string>
- If you specify include_prefixes, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes is 1000. For more information, see Filtering objects from transfers.
- LastModified stringBefore 
- If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- LastModified stringSince 
- If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The last_modified_sinceandlast_modified_beforefields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: *last_modified_sinceto the start of the day *last_modified_beforeto the end of the day
- MaxTime stringElapsed Since Last Modification 
- Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
- MinTime stringElapsed Since Last Modification 
- Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
- ExcludePrefixes []string
- If you specify exclude_prefixes, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes is 1000. For more information, see Filtering objects from transfers.
- IncludePrefixes []string
- If you specify include_prefixes, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes is 1000. For more information, see Filtering objects from transfers.
- LastModified stringBefore 
- If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- LastModified stringSince 
- If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The last_modified_sinceandlast_modified_beforefields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: *last_modified_sinceto the start of the day *last_modified_beforeto the end of the day
- MaxTime stringElapsed Since Last Modification 
- Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
- MinTime stringElapsed Since Last Modification 
- Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
- excludePrefixes List<String>
- If you specify exclude_prefixes, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes is 1000. For more information, see Filtering objects from transfers.
- includePrefixes List<String>
- If you specify include_prefixes, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes is 1000. For more information, see Filtering objects from transfers.
- lastModified StringBefore 
- If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- lastModified StringSince 
- If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The last_modified_sinceandlast_modified_beforefields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: *last_modified_sinceto the start of the day *last_modified_beforeto the end of the day
- maxTime StringElapsed Since Last Modification 
- Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
- minTime StringElapsed Since Last Modification 
- Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
- excludePrefixes string[]
- If you specify exclude_prefixes, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes is 1000. For more information, see Filtering objects from transfers.
- includePrefixes string[]
- If you specify include_prefixes, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes is 1000. For more information, see Filtering objects from transfers.
- lastModified stringBefore 
- If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- lastModified stringSince 
- If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The last_modified_sinceandlast_modified_beforefields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: *last_modified_sinceto the start of the day *last_modified_beforeto the end of the day
- maxTime stringElapsed Since Last Modification 
- Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
- minTime stringElapsed Since Last Modification 
- Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
- exclude_prefixes Sequence[str]
- If you specify exclude_prefixes, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes is 1000. For more information, see Filtering objects from transfers.
- include_prefixes Sequence[str]
- If you specify include_prefixes, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes is 1000. For more information, see Filtering objects from transfers.
- last_modified_ strbefore 
- If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- last_modified_ strsince 
- If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The last_modified_sinceandlast_modified_beforefields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: *last_modified_sinceto the start of the day *last_modified_beforeto the end of the day
- max_time_ strelapsed_ since_ last_ modification 
- Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
- min_time_ strelapsed_ since_ last_ modification 
- Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
- excludePrefixes List<String>
- If you specify exclude_prefixes, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes is 1000. For more information, see Filtering objects from transfers.
- includePrefixes List<String>
- If you specify include_prefixes, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes is 1000. For more information, see Filtering objects from transfers.
- lastModified StringBefore 
- If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- lastModified StringSince 
- If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The last_modified_sinceandlast_modified_beforefields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: *last_modified_sinceto the start of the day *last_modified_beforeto the end of the day
- maxTime StringElapsed Since Last Modification 
- Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
- minTime StringElapsed Since Last Modification 
- Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
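The prefix rules above can be checked client-side; this TypeScript sketch is an approximation of the documented constraints (not the service's own validation) and flags empty values, leading slashes, lists longer than 1000 entries, and prefixes that shadow one another:
// Returns a list of human-readable problems for an include_prefixes or
// exclude_prefixes list, based on the requirements documented above.
function validatePrefixList(prefixes: string[]): string[] {
    const problems: string[] = [];
    if (prefixes.length > 1000) problems.push("more than 1000 prefixes");
    for (const p of prefixes) {
        if (p === "") problems.push("empty prefix");
        if (p.startsWith("/")) problems.push(`leading slash: ${p}`);
    }
    // No prefix may be a prefix of another prefix in the same list.
    for (const a of prefixes) {
        for (const b of prefixes) {
            if (a !== b && b.startsWith(a)) {
                problems.push(`"${a}" is a prefix of "${b}"`);
            }
        }
    }
    return problems;
}

// "logs/y=2015/" shadows "logs/y=2015/requests.gz", so this reports a problem.
console.log(validatePrefixList(["logs/y=2015/", "logs/y=2015/requests.gz"]));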
PosixFilesystemResponse  
- RootDirectory string
- Root directory path to the filesystem.
- RootDirectory string
- Root directory path to the filesystem.
- rootDirectory String
- Root directory path to the filesystem.
- rootDirectory string
- Root directory path to the filesystem.
- root_directory str
- Root directory path to the filesystem.
- rootDirectory String
- Root directory path to the filesystem.
S3CompatibleMetadataResponse  
- AuthMethod string
- Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- ListApi string
- The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- Protocol string
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- RequestModel string
- Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
- AuthMethod string
- Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- ListApi string
- The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- Protocol string
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- RequestModel string
- Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
- authMethod String
- Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- listApi String
- The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- protocol String
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- requestModel String
- Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
- authMethod string
- Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- listApi string
- The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- protocol string
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- requestModel string
- Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
- auth_method str
- Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- list_api str
- The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- protocol str
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- request_model str
- Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
- authMethod String
- Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- listApi String
- The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- protocol String
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- requestModel String
- Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
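A hedged TypeScript sketch that resolves the effective S3-compatible settings, falling back to the defaults named in the descriptions above when a field is unset; the LIST_OBJECTS_V2 value in the usage line is illustrative only:
// Hand-written approximation of S3CompatibleMetadataResponse.
interface S3CompatibleMetadataLike {
    authMethod?: string;
    listApi?: string;
    protocol?: string;
    requestModel?: string;
}

function effectiveS3Settings(meta: S3CompatibleMetadataLike) {
    return {
        // Auth method and listing API are auto-detected by Transfer Service when unset.
        authMethod: meta.authMethod ?? "auto-detected by Transfer Service",
        listApi: meta.listApi ?? "auto-detected by Transfer Service",
        // Documented defaults when the fields are not specified.
        protocol: meta.protocol ?? "NETWORK_PROTOCOL_HTTPS",
        requestModel: meta.requestModel ?? "REQUEST_MODEL_VIRTUAL_HOSTED_STYLE",
    };
}

console.log(effectiveS3Settings({ listApi: "LIST_OBJECTS_V2" }));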
ScheduleResponse 
- EndTime Pulumi.Of Day Google Native. Storage Transfer. V1. Inputs. Time Of Day Response 
- The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date, end_time_of_day specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date is not set, then INVALID_ARGUMENT is returned.
- RepeatInterval string
- Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- ScheduleEnd Pulumi.Date Google Native. Storage Transfer. V1. Inputs. Date Response 
- The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If schedule_end_date and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date.
- ScheduleStart Pulumi.Date Google Native. Storage Transfer. V1. Inputs. Date Response 
- The start date of a transfer. Date boundaries are determined relative to UTC time. If schedule_start_date and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC.
- StartTime Pulumi.Of Day Google Native. Storage Transfer. V1. Inputs. Time Of Day Response 
- The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If start_time_of_day is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date.
- EndTime TimeOf Day Of Day Response 
- The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date, end_time_of_day specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date is not set, then INVALID_ARGUMENT is returned.
- RepeatInterval string
- Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- ScheduleEnd DateDate Response 
- The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If schedule_end_date and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date.
- ScheduleStart DateDate Response 
- The start date of a transfer. Date boundaries are determined relative to UTC time. If schedule_start_date and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC.
- StartTime TimeOf Day Of Day Response 
- The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If start_time_of_day is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date.
- endTime TimeOf Day Of Day Response 
- The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date, end_time_of_day specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date is not set, then INVALID_ARGUMENT is returned.
- repeatInterval String
- Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- scheduleEnd DateDate Response 
- The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If schedule_end_date and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date.
- scheduleStart DateDate Response 
- The start date of a transfer. Date boundaries are determined relative to UTC time. If schedule_start_date and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC.
- startTime TimeOf Day Of Day Response 
- The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If start_time_of_day is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date.
- endTime TimeOf Day Of Day Response 
- The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date, end_time_of_day specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date is not set, then INVALID_ARGUMENT is returned.
- repeatInterval string
- Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- scheduleEnd DateDate Response 
- The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If schedule_end_date and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date.
- scheduleStart DateDate Response 
- The start date of a transfer. Date boundaries are determined relative to UTC time. If schedule_start_date and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC.
- startTime TimeOf Day Of Day Response 
- The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If start_time_of_day is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date.
- end_time_ Timeof_ day Of Day Response 
- The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date, end_time_of_day specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date is not set, then INVALID_ARGUMENT is returned.
- repeat_interval str
- Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- schedule_end_ Datedate Response 
- The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If schedule_end_date and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date.
- schedule_start_ Datedate Response 
- The start date of a transfer. Date boundaries are determined relative to UTC time. If schedule_start_date and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC.
- start_time_ Timeof_ day Of Day Response 
- The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If start_time_of_day is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date.
- endTime Property MapOf Day 
- The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date, end_time_of_day specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date is not set, then INVALID_ARGUMENT is returned.
- repeatInterval String
- Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- scheduleEnd Property MapDate 
- The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If schedule_end_date and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date.
- scheduleStart Property MapDate 
- The start date of a transfer. Date boundaries are determined relative to UTC time. If schedule_start_date and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC.
- startTime Property MapOf Day 
- The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If start_time_of_day is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date.
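To make the date/time combination concrete, this TypeScript sketch combines a DateResponse-shaped date with a TimeOfDayResponse-shaped time into a UTC timestamp, applying the documented 23:59:59 default for end_time_of_day when only schedule_end_date is set; the DateLike and TimeOfDayLike shapes are hand-written approximations:
// Hand-written approximations of the DateResponse and TimeOfDayResponse shapes.
interface DateLike { year: number; month: number; day: number; }
interface TimeOfDayLike { hours?: number; minutes?: number; seconds?: number; nanos?: number; }

function toUtc(date: DateLike, time?: TimeOfDayLike): Date {
    // When no time of day is given, fall back to the documented 23:59:59 default.
    const t = time ?? { hours: 23, minutes: 59, seconds: 59 };
    // JavaScript months are zero-based; the API's date months are 1-based.
    return new Date(Date.UTC(date.year, date.month - 1, date.day,
        t.hours ?? 0, t.minutes ?? 0, t.seconds ?? 0, Math.floor((t.nanos ?? 0) / 1e6)));
}

// A job ending on 2024-01-31 with no end_time_of_day stops scheduling new
// operations at 23:59:59 UTC on that day.
console.log(toUtc({ year: 2024, month: 1, day: 31 }).toISOString());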
TimeOfDayResponse   
- Hours int
- Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- Minutes int
- Minutes of hour of day. Must be from 0 to 59.
- Nanos int
- Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- Seconds int
- Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
- Hours int
- Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- Minutes int
- Minutes of hour of day. Must be from 0 to 59.
- Nanos int
- Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- Seconds int
- Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
- hours Integer
- Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- minutes Integer
- Minutes of hour of day. Must be from 0 to 59.
- nanos Integer
- Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- seconds Integer
- Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
- hours number
- Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- minutes number
- Minutes of hour of day. Must be from 0 to 59.
- nanos number
- Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- seconds number
- Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
- hours int
- Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- minutes int
- Minutes of hour of day. Must be from 0 to 59.
- nanos int
- Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- seconds int
- Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
- hours Number
- Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- minutes Number
- Minutes of hour of day. Must be from 0 to 59.
- nanos Number
- Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- seconds Number
- Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
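The four integer fields above describe a wall-clock time of day in UTC. As a minimal sketch, the TypeScript helper below formats them as an HH:MM:SS string; the TimeOfDay interface is declared locally for illustration rather than imported from the SDK.

// Locally declared shape mirroring the documented TimeOfDayResponse fields.
interface TimeOfDay {
    hours: number;    // 0-23
    minutes: number;  // 0-59
    seconds: number;  // 0-59 (60 only if the API allows leap seconds)
    nanos: number;    // 0-999,999,999
}

// Render the time as HH:MM:SS, zero-padding each component.
function formatTimeOfDay(t: TimeOfDay): string {
    const pad = (n: number) => n.toString().padStart(2, "0");
    return `${pad(t.hours)}:${pad(t.minutes)}:${pad(t.seconds)}`;
}

// Example: midnight UTC, the default start time for recurring transfers.
console.log(formatTimeOfDay({ hours: 0, minutes: 0, seconds: 0, nanos: 0 })); // "00:00:00"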
TransferManifestResponse  
- Location string
- Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
- Location string
- Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
- location String
- Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
- location string
- Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
- location str
- Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
- location String
- Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
TransferOptionsResponse  
- DeleteObjectsFromSourceAfterTransfer bool
- Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- DeleteObjectsUniqueInSink bool
- Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
- MetadataOptions Pulumi.GoogleNative.StorageTransfer.V1.Inputs.MetadataOptionsResponse
- Represents the selected metadata options for a transfer job.
- OverwriteObjectsAlreadyExistingInSink bool
- When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- OverwriteWhen string
- When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
- DeleteObjectsFromSourceAfterTransfer bool
- Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- DeleteObjectsUniqueInSink bool
- Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
- MetadataOptions MetadataOptionsResponse
- Represents the selected metadata options for a transfer job.
- OverwriteObjectsAlreadyExistingInSink bool
- When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- OverwriteWhen string
- When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
- deleteObjectsFromSourceAfterTransfer Boolean
- Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- deleteObjectsUniqueInSink Boolean
- Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
- metadataOptions MetadataOptionsResponse
- Represents the selected metadata options for a transfer job.
- overwriteObjectsAlreadyExistingInSink Boolean
- When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- overwriteWhen String
- When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
- deleteObjectsFromSourceAfterTransfer boolean
- Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- deleteObjectsUniqueInSink boolean
- Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
- metadataOptions MetadataOptionsResponse
- Represents the selected metadata options for a transfer job.
- overwriteObjectsAlreadyExistingInSink boolean
- When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- overwriteWhen string
- When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
- delete_objects_from_source_after_transfer bool
- Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- delete_objects_unique_in_sink bool
- Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
- metadata_options MetadataOptionsResponse
- Represents the selected metadata options for a transfer job.
- overwrite_objects_already_existing_in_sink bool
- When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- overwrite_when str
- When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
- deleteObjectsFromSourceAfterTransfer Boolean
- Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- deleteObjectsUniqueInSink Boolean
- Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
- metadataOptions Property Map
- Represents the selected metadata options for a transfer job.
- overwriteObjectsAlreadyExistingInSink Boolean
- When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- overwriteWhen String
- When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
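As noted above, delete_objects_from_source_after_transfer and delete_objects_unique_in_sink are mutually exclusive. A small, hedged TypeScript sketch of that client-side check follows; the TransferOptionsLike interface is declared locally for illustration and is not the SDK type.

// Local shape following the TypeScript property names documented in this section.
interface TransferOptionsLike {
    deleteObjectsFromSourceAfterTransfer?: boolean;
    deleteObjectsUniqueInSink?: boolean;
    overwriteObjectsAlreadyExistingInSink?: boolean;
    overwriteWhen?: string;
}

// Reject the combination the service itself would refuse with INVALID_ARGUMENT.
function validateTransferOptions(opts: TransferOptionsLike): void {
    if (opts.deleteObjectsFromSourceAfterTransfer && opts.deleteObjectsUniqueInSink) {
        throw new Error(
            "deleteObjectsFromSourceAfterTransfer and deleteObjectsUniqueInSink are mutually exclusive");
    }
}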
TransferSpecResponse  
- AwsS3CompatibleDataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.AwsS3CompatibleDataResponse
- An AWS S3 compatible data source.
- AwsS3DataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.AwsS3DataResponse
- An AWS S3 data source.
- AzureBlobStorageDataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.AzureBlobStorageDataResponse
- An Azure Blob Storage data source.
- GcsDataSink Pulumi.GoogleNative.StorageTransfer.V1.Inputs.GcsDataResponse
- A Cloud Storage data sink.
- GcsDataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.GcsDataResponse
- A Cloud Storage data source.
- GcsIntermediateDataLocation Pulumi.GoogleNative.StorageTransfer.V1.Inputs.GcsDataResponse
- For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- HttpDataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.HttpDataResponse
- An HTTP URL data source.
- ObjectConditions Pulumi.GoogleNative.StorageTransfer.V1.Inputs.ObjectConditionsResponse
- Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- PosixDataSink Pulumi.GoogleNative.StorageTransfer.V1.Inputs.PosixFilesystemResponse
- A POSIX Filesystem data sink.
- PosixDataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.PosixFilesystemResponse
- A POSIX Filesystem data source.
- SinkAgentPoolName string
- Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- SourceAgentPoolName string
- Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- TransferManifest Pulumi.GoogleNative.StorageTransfer.V1.Inputs.TransferManifestResponse
- A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- TransferOptions Pulumi.GoogleNative.StorageTransfer.V1.Inputs.TransferOptionsResponse
- If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
- AwsS3CompatibleDataSource AwsS3CompatibleDataResponse
- An AWS S3 compatible data source.
- AwsS3DataSource AwsS3DataResponse
- An AWS S3 data source.
- AzureBlobStorageDataSource AzureBlobStorageDataResponse
- An Azure Blob Storage data source.
- GcsDataSink GcsDataResponse
- A Cloud Storage data sink.
- GcsDataSource GcsDataResponse
- A Cloud Storage data source.
- GcsIntermediateDataLocation GcsDataResponse
- For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- HttpDataSource HttpDataResponse
- An HTTP URL data source.
- ObjectConditions ObjectConditionsResponse
- Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- PosixDataSink PosixFilesystemResponse
- A POSIX Filesystem data sink.
- PosixDataSource PosixFilesystemResponse
- A POSIX Filesystem data source.
- SinkAgentPoolName string
- Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- SourceAgentPoolName string
- Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- TransferManifest TransferManifestResponse
- A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- TransferOptions TransferOptionsResponse
- If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
- awsS3CompatibleDataSource AwsS3CompatibleDataResponse
- An AWS S3 compatible data source.
- awsS3DataSource AwsS3DataResponse
- An AWS S3 data source.
- azureBlobStorageDataSource AzureBlobStorageDataResponse
- An Azure Blob Storage data source.
- gcsDataSink GcsDataResponse
- A Cloud Storage data sink.
- gcsDataSource GcsDataResponse
- A Cloud Storage data source.
- gcsIntermediateDataLocation GcsDataResponse
- For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- httpDataSource HttpDataResponse
- An HTTP URL data source.
- objectConditions ObjectConditionsResponse
- Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- posixDataSink PosixFilesystemResponse
- A POSIX Filesystem data sink.
- posixDataSource PosixFilesystemResponse
- A POSIX Filesystem data source.
- sinkAgentPoolName String
- Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- sourceAgentPoolName String
- Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- transferManifest TransferManifestResponse
- A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- transferOptions TransferOptionsResponse
- If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
- awsS3CompatibleDataSource AwsS3CompatibleDataResponse
- An AWS S3 compatible data source.
- awsS3DataSource AwsS3DataResponse
- An AWS S3 data source.
- azureBlobStorageDataSource AzureBlobStorageDataResponse
- An Azure Blob Storage data source.
- gcsDataSink GcsDataResponse
- A Cloud Storage data sink.
- gcsDataSource GcsDataResponse
- A Cloud Storage data source.
- gcsIntermediateDataLocation GcsDataResponse
- For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- httpDataSource HttpDataResponse
- An HTTP URL data source.
- objectConditions ObjectConditionsResponse
- Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- posixDataSink PosixFilesystemResponse
- A POSIX Filesystem data sink.
- posixDataSource PosixFilesystemResponse
- A POSIX Filesystem data source.
- sinkAgentPoolName string
- Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- sourceAgentPoolName string
- Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- transferManifest TransferManifestResponse
- A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- transferOptions TransferOptionsResponse
- If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
- aws_s3_compatible_data_source AwsS3CompatibleDataResponse
- An AWS S3 compatible data source.
- aws_s3_data_source AwsS3DataResponse
- An AWS S3 data source.
- azure_blob_storage_data_source AzureBlobStorageDataResponse
- An Azure Blob Storage data source.
- gcs_data_sink GcsDataResponse
- A Cloud Storage data sink.
- gcs_data_source GcsDataResponse
- A Cloud Storage data source.
- gcs_intermediate_data_location GcsDataResponse
- For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- http_data_source HttpDataResponse
- An HTTP URL data source.
- object_conditions ObjectConditionsResponse
- Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- posix_data_sink PosixFilesystemResponse
- A POSIX Filesystem data sink.
- posix_data_source PosixFilesystemResponse
- A POSIX Filesystem data source.
- sink_agent_pool_name str
- Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- source_agent_pool_name str
- Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- transfer_manifest TransferManifestResponse
- A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- transfer_options TransferOptionsResponse
- If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
- awsS3CompatibleDataSource Property Map
- An AWS S3 compatible data source.
- awsS3DataSource Property Map
- An AWS S3 data source.
- azureBlobStorageDataSource Property Map
- An Azure Blob Storage data source.
- gcsDataSink Property Map
- A Cloud Storage data sink.
- gcsDataSource Property Map
- A Cloud Storage data source.
- gcsIntermediateDataLocation Property Map
- For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- httpDataSource Property Map
- An HTTP URL data source.
- objectConditions Property Map
- Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- posixDataSink Property Map
- A POSIX Filesystem data sink.
- posixDataSource Property Map
- A POSIX Filesystem data source.
- sinkAgentPoolName String
- Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- sourceAgentPoolName String
- Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- transferManifest Property Map
- A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- transferOptions Property Map
- If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
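A transfer spec generally pairs one data source with one data sink. The TypeScript sketch below inspects a fetched job's transferSpec and reports which source kind is configured; it assumes the @pulumi/google-native SDK, the project and job IDs are placeholders, and the nested field names (bucketName, container, listUrl, rootDirectory) are drawn from the Storage Transfer API rather than from this page.

import * as google_native from "@pulumi/google-native";

// Placeholder identifiers; substitute your own project and transfer job name.
const job = google_native.storagetransfer.v1.getTransferJobOutput({
    projectId: "my-project",
    transferJobId: "transferJobs/123456789",
});

// Only one of these sources is populated on a given job; check each in turn.
export const sourceKind = job.transferSpec.apply(spec => {
    if (spec.gcsDataSource?.bucketName) { return "Cloud Storage"; }
    if (spec.awsS3DataSource?.bucketName) { return "AWS S3"; }
    if (spec.awsS3CompatibleDataSource?.bucketName) { return "S3-compatible"; }
    if (spec.azureBlobStorageDataSource?.container) { return "Azure Blob Storage"; }
    if (spec.httpDataSource?.listUrl) { return "HTTP URL list"; }
    if (spec.posixDataSource?.rootDirectory) { return "POSIX filesystem"; }
    return "unknown";
});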
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0