airbyte.DestinationRedshift
DestinationRedshift Resource
Example Usage
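Examples for the other SDK languages have not been published for this connector yet. As a stop-gap, here is a minimal Python sketch that mirrors the YAML example below; the pulumi_airbyte module name is assumed, and all values are placeholders.

import pulumi_airbyte as airbyte  # package name assumed; adjust to the SDK you installed

my_destination_redshift = airbyte.DestinationRedshift(
    "myDestinationRedshift",
    configuration={
        "database": "...my_database...",
        "host": "...my_host...",
        "username": "...my_username...",
        "password": "...my_password...",
        "port": 5439,
        "schema": "public",
        # SSH key authentication shown; set exactly one tunnel option.
        "tunnel_method": {
            "ssh_key_authentication": {
                "ssh_key": "...my_ssh_key...",
                "tunnel_host": "...my_tunnel_host...",
                "tunnel_port": 22,
                "tunnel_user": "...my_tunnel_user...",
            },
        },
        # S3 staging (COPY) is the recommended uploading method.
        "uploading_method": {
            "awss3_staging": {
                "access_key_id": "...my_access_key_id...",
                "secret_access_key": "...my_secret_access_key...",
                "s3_bucket_name": "airbyte.staging",
                "s3_bucket_path": "data_sync/test",
                "s3_bucket_region": "eu-west-2",
            },
        },
    },
    definition_id="50bfb2e7-1ca1-4132-b623-8606f328175d",
    workspace_id="e25c2049-8986-4945-a3f6-604de181966d",
)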
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.airbyte.DestinationRedshift;
import com.pulumi.airbyte.DestinationRedshiftArgs;
import com.pulumi.airbyte.inputs.DestinationRedshiftConfigurationArgs;
import com.pulumi.airbyte.inputs.DestinationRedshiftConfigurationTunnelMethodArgs;
import com.pulumi.airbyte.inputs.DestinationRedshiftConfigurationTunnelMethodSshKeyAuthenticationArgs;
import com.pulumi.airbyte.inputs.DestinationRedshiftConfigurationUploadingMethodArgs;
import com.pulumi.airbyte.inputs.DestinationRedshiftConfigurationUploadingMethodAwss3StagingArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var myDestinationRedshift = new DestinationRedshift("myDestinationRedshift", DestinationRedshiftArgs.builder()
            .configuration(DestinationRedshiftConfigurationArgs.builder()
                .database("...my_database...")
                .disableTypeDedupe(false)
                .dropCascade(false)
                .host("...my_host...")
                .jdbcUrlParams("...my_jdbc_url_params...")
                .password("...my_password...")
                .port(5439)
                .rawDataSchema("...my_raw_data_schema...")
                .schema("public")
                .tunnelMethod(DestinationRedshiftConfigurationTunnelMethodArgs.builder()
                    .sshKeyAuthentication(DestinationRedshiftConfigurationTunnelMethodSshKeyAuthenticationArgs.builder()
                        .sshKey("...my_ssh_key...")
                        .tunnelHost("...my_tunnel_host...")
                        .tunnelPort(22)
                        .tunnelUser("...my_tunnel_user...")
                        .build())
                    .build())
                .uploadingMethod(DestinationRedshiftConfigurationUploadingMethodArgs.builder()
                    .awss3Staging(DestinationRedshiftConfigurationUploadingMethodAwss3StagingArgs.builder()
                        .accessKeyId("...my_access_key_id...")
                        .fileNamePattern("{date}")
                        .purgeStagingData(false)
                        .s3BucketName("airbyte.staging")
                        .s3BucketPath("data_sync/test")
                        .s3BucketRegion("eu-west-2")
                        .secretAccessKey("...my_secret_access_key...")
                        .build())
                    .build())
                .username("...my_username...")
                .build())
            .definitionId("50bfb2e7-1ca1-4132-b623-8606f328175d")
            .workspaceId("e25c2049-8986-4945-a3f6-604de181966d")
            .build());
    }
}
resources:
  myDestinationRedshift:
    type: airbyte:DestinationRedshift
    properties:
      configuration:
        database: '...my_database...'
        disableTypeDedupe: false
        dropCascade: false
        host: '...my_host...'
        jdbcUrlParams: '...my_jdbc_url_params...'
        password: '...my_password...'
        port: 5439
        rawDataSchema: '...my_raw_data_schema...'
        schema: public
        tunnelMethod:
          sshKeyAuthentication:
            sshKey: '...my_ssh_key...'
            tunnelHost: '...my_tunnel_host...'
            tunnelPort: 22
            tunnelUser: '...my_tunnel_user...'
        uploadingMethod:
          awss3Staging:
            accessKeyId: '...my_access_key_id...'
            fileNamePattern: '{date}'
            purgeStagingData: false
            s3BucketName: airbyte.staging
            s3BucketPath: data_sync/test
            s3BucketRegion: eu-west-2
            secretAccessKey: '...my_secret_access_key...'
        username: '...my_username...'
      definitionId: 50bfb2e7-1ca1-4132-b623-8606f328175d
      workspaceId: e25c2049-8986-4945-a3f6-604de181966d
Create DestinationRedshift Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new DestinationRedshift(name: string, args: DestinationRedshiftArgs, opts?: CustomResourceOptions);
@overload
def DestinationRedshift(resource_name: str,
args: DestinationRedshiftArgs,
opts: Optional[ResourceOptions] = None)
@overload
def DestinationRedshift(resource_name: str,
opts: Optional[ResourceOptions] = None,
configuration: Optional[DestinationRedshiftConfigurationArgs] = None,
workspace_id: Optional[str] = None,
definition_id: Optional[str] = None,
name: Optional[str] = None)
func NewDestinationRedshift(ctx *Context, name string, args DestinationRedshiftArgs, opts ...ResourceOption) (*DestinationRedshift, error)
public DestinationRedshift(string name, DestinationRedshiftArgs args, CustomResourceOptions? opts = null)
public DestinationRedshift(String name, DestinationRedshiftArgs args)
public DestinationRedshift(String name, DestinationRedshiftArgs args, CustomResourceOptions options)
type: airbyte:DestinationRedshift
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args DestinationRedshiftArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args DestinationRedshiftArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args DestinationRedshiftArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args DestinationRedshiftArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args DestinationRedshiftArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var destinationRedshiftResource = new Airbyte.DestinationRedshift("destinationRedshiftResource", new()
{
Configuration = new Airbyte.Inputs.DestinationRedshiftConfigurationArgs
{
Database = "string",
Host = "string",
Password = "string",
Username = "string",
DisableTypeDedupe = false,
DropCascade = false,
JdbcUrlParams = "string",
Port = 0,
RawDataSchema = "string",
Schema = "string",
TunnelMethod = new Airbyte.Inputs.DestinationRedshiftConfigurationTunnelMethodArgs
{
NoTunnel = null,
PasswordAuthentication = new Airbyte.Inputs.DestinationRedshiftConfigurationTunnelMethodPasswordAuthenticationArgs
{
TunnelHost = "string",
TunnelUser = "string",
TunnelUserPassword = "string",
TunnelPort = 0,
},
SshKeyAuthentication = new Airbyte.Inputs.DestinationRedshiftConfigurationTunnelMethodSshKeyAuthenticationArgs
{
SshKey = "string",
TunnelHost = "string",
TunnelUser = "string",
TunnelPort = 0,
},
},
UploadingMethod = new Airbyte.Inputs.DestinationRedshiftConfigurationUploadingMethodArgs
{
Awss3Staging = new Airbyte.Inputs.DestinationRedshiftConfigurationUploadingMethodAwss3StagingArgs
{
AccessKeyId = "string",
S3BucketName = "string",
SecretAccessKey = "string",
FileNamePattern = "string",
PurgeStagingData = false,
S3BucketPath = "string",
S3BucketRegion = "string",
},
},
},
WorkspaceId = "string",
DefinitionId = "string",
Name = "string",
});
example, err := airbyte.NewDestinationRedshift(ctx, "destinationRedshiftResource", &airbyte.DestinationRedshiftArgs{
Configuration: &airbyte.DestinationRedshiftConfigurationArgs{
Database: pulumi.String("string"),
Host: pulumi.String("string"),
Password: pulumi.String("string"),
Username: pulumi.String("string"),
DisableTypeDedupe: pulumi.Bool(false),
DropCascade: pulumi.Bool(false),
JdbcUrlParams: pulumi.String("string"),
Port: pulumi.Float64(0),
RawDataSchema: pulumi.String("string"),
Schema: pulumi.String("string"),
TunnelMethod: &airbyte.DestinationRedshiftConfigurationTunnelMethodArgs{
NoTunnel: &airbyte.DestinationRedshiftConfigurationTunnelMethodNoTunnelArgs{
},
PasswordAuthentication: &airbyte.DestinationRedshiftConfigurationTunnelMethodPasswordAuthenticationArgs{
TunnelHost: pulumi.String("string"),
TunnelUser: pulumi.String("string"),
TunnelUserPassword: pulumi.String("string"),
TunnelPort: pulumi.Float64(0),
},
SshKeyAuthentication: &airbyte.DestinationRedshiftConfigurationTunnelMethodSshKeyAuthenticationArgs{
SshKey: pulumi.String("string"),
TunnelHost: pulumi.String("string"),
TunnelUser: pulumi.String("string"),
TunnelPort: pulumi.Float64(0),
},
},
UploadingMethod: &airbyte.DestinationRedshiftConfigurationUploadingMethodArgs{
Awss3Staging: &airbyte.DestinationRedshiftConfigurationUploadingMethodAwss3StagingArgs{
AccessKeyId: pulumi.String("string"),
S3BucketName: pulumi.String("string"),
SecretAccessKey: pulumi.String("string"),
FileNamePattern: pulumi.String("string"),
PurgeStagingData: pulumi.Bool(false),
S3BucketPath: pulumi.String("string"),
S3BucketRegion: pulumi.String("string"),
},
},
},
WorkspaceId: pulumi.String("string"),
DefinitionId: pulumi.String("string"),
Name: pulumi.String("string"),
})
var destinationRedshiftResource = new DestinationRedshift("destinationRedshiftResource", DestinationRedshiftArgs.builder()
.configuration(DestinationRedshiftConfigurationArgs.builder()
.database("string")
.host("string")
.password("string")
.username("string")
.disableTypeDedupe(false)
.dropCascade(false)
.jdbcUrlParams("string")
.port(0)
.rawDataSchema("string")
.schema("string")
.tunnelMethod(DestinationRedshiftConfigurationTunnelMethodArgs.builder()
.noTunnel()
.passwordAuthentication(DestinationRedshiftConfigurationTunnelMethodPasswordAuthenticationArgs.builder()
.tunnelHost("string")
.tunnelUser("string")
.tunnelUserPassword("string")
.tunnelPort(0)
.build())
.sshKeyAuthentication(DestinationRedshiftConfigurationTunnelMethodSshKeyAuthenticationArgs.builder()
.sshKey("string")
.tunnelHost("string")
.tunnelUser("string")
.tunnelPort(0)
.build())
.build())
.uploadingMethod(DestinationRedshiftConfigurationUploadingMethodArgs.builder()
.awss3Staging(DestinationRedshiftConfigurationUploadingMethodAwss3StagingArgs.builder()
.accessKeyId("string")
.s3BucketName("string")
.secretAccessKey("string")
.fileNamePattern("string")
.purgeStagingData(false)
.s3BucketPath("string")
.s3BucketRegion("string")
.build())
.build())
.build())
.workspaceId("string")
.definitionId("string")
.name("string")
.build());
destination_redshift_resource = airbyte.DestinationRedshift("destinationRedshiftResource",
configuration={
"database": "string",
"host": "string",
"password": "string",
"username": "string",
"disable_type_dedupe": False,
"drop_cascade": False,
"jdbc_url_params": "string",
"port": 0,
"raw_data_schema": "string",
"schema": "string",
"tunnel_method": {
"no_tunnel": {},
"password_authentication": {
"tunnel_host": "string",
"tunnel_user": "string",
"tunnel_user_password": "string",
"tunnel_port": 0,
},
"ssh_key_authentication": {
"ssh_key": "string",
"tunnel_host": "string",
"tunnel_user": "string",
"tunnel_port": 0,
},
},
"uploading_method": {
"awss3_staging": {
"access_key_id": "string",
"s3_bucket_name": "string",
"secret_access_key": "string",
"file_name_pattern": "string",
"purge_staging_data": False,
"s3_bucket_path": "string",
"s3_bucket_region": "string",
},
},
},
workspace_id="string",
definition_id="string",
name="string")
const destinationRedshiftResource = new airbyte.DestinationRedshift("destinationRedshiftResource", {
configuration: {
database: "string",
host: "string",
password: "string",
username: "string",
disableTypeDedupe: false,
dropCascade: false,
jdbcUrlParams: "string",
port: 0,
rawDataSchema: "string",
schema: "string",
tunnelMethod: {
noTunnel: {},
passwordAuthentication: {
tunnelHost: "string",
tunnelUser: "string",
tunnelUserPassword: "string",
tunnelPort: 0,
},
sshKeyAuthentication: {
sshKey: "string",
tunnelHost: "string",
tunnelUser: "string",
tunnelPort: 0,
},
},
uploadingMethod: {
awss3Staging: {
accessKeyId: "string",
s3BucketName: "string",
secretAccessKey: "string",
fileNamePattern: "string",
purgeStagingData: false,
s3BucketPath: "string",
s3BucketRegion: "string",
},
},
},
workspaceId: "string",
definitionId: "string",
name: "string",
});
type: airbyte:DestinationRedshift
properties:
  configuration:
    database: string
    disableTypeDedupe: false
    dropCascade: false
    host: string
    jdbcUrlParams: string
    password: string
    port: 0
    rawDataSchema: string
    schema: string
    tunnelMethod:
      noTunnel: {}
      passwordAuthentication:
        tunnelHost: string
        tunnelPort: 0
        tunnelUser: string
        tunnelUserPassword: string
      sshKeyAuthentication:
        sshKey: string
        tunnelHost: string
        tunnelPort: 0
        tunnelUser: string
    uploadingMethod:
      awss3Staging:
        accessKeyId: string
        fileNamePattern: string
        purgeStagingData: false
        s3BucketName: string
        s3BucketPath: string
        s3BucketRegion: string
        secretAccessKey: string
    username: string
  definitionId: string
  name: string
  workspaceId: string
DestinationRedshift Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
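For example, the two declarations below are equivalent (a sketch with placeholder values; the pulumi_airbyte module name and the top-level DestinationRedshiftConfigurationArgs class are assumed from the constructor signatures above).

import pulumi_airbyte as airbyte  # package name assumed

# Dictionary-literal form
dest_from_dict = airbyte.DestinationRedshift("destFromDict",
    configuration={
        "database": "...my_database...",
        "host": "...my_host...",
        "username": "...my_username...",
        "password": "...my_password...",
    },
    workspace_id="...my_workspace_id...")

# Argument-class form
dest_from_args = airbyte.DestinationRedshift("destFromArgs",
    configuration=airbyte.DestinationRedshiftConfigurationArgs(
        database="...my_database...",
        host="...my_host...",
        username="...my_username...",
        password="...my_password...",
    ),
    workspace_id="...my_workspace_id...")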
The DestinationRedshift resource accepts the following input properties:
- Configuration DestinationRedshiftConfiguration
- WorkspaceId string
- DefinitionId string
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- Name string
- Name of the destination e.g. dev-mysql-instance.
- Configuration DestinationRedshiftConfigurationArgs
- WorkspaceId string
- DefinitionId string
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- Name string
- Name of the destination e.g. dev-mysql-instance.
- configuration DestinationRedshiftConfiguration
- workspaceId String
- definitionId String
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- name String
- Name of the destination e.g. dev-mysql-instance.
- configuration DestinationRedshiftConfiguration
- workspaceId string
- definitionId string
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- name string
- Name of the destination e.g. dev-mysql-instance.
- configuration DestinationRedshiftConfigurationArgs
- workspace_id str
- definition_id str
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- name str
- Name of the destination e.g. dev-mysql-instance.
- configuration Property Map
- workspaceId String
- definitionId String
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- name String
- Name of the destination e.g. dev-mysql-instance.
Outputs
All input properties are implicitly available as output properties. Additionally, the DestinationRedshift resource produces the following output properties:
- CreatedAt double
- DestinationId string
- DestinationType string
- Id string
- The provider-assigned unique ID for this managed resource.
- ResourceAllocation DestinationRedshiftResourceAllocation
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
- CreatedAt float64
- DestinationId string
- DestinationType string
- Id string
- The provider-assigned unique ID for this managed resource.
- ResourceAllocation DestinationRedshiftResourceAllocation
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
- createdAt Double
- destinationId String
- destinationType String
- id String
- The provider-assigned unique ID for this managed resource.
- resourceAllocation DestinationRedshiftResourceAllocation
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
- createdAt number
- destinationId string
- destinationType string
- id string
- The provider-assigned unique ID for this managed resource.
- resourceAllocation DestinationRedshiftResourceAllocation
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
- created_at float
- destination_id str
- destination_type str
- id str
- The provider-assigned unique ID for this managed resource.
- resource_allocation DestinationRedshiftResourceAllocation
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
- createdAt Number
- destinationId String
- destinationType String
- id String
- The provider-assigned unique ID for this managed resource.
- resourceAllocation Property Map
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
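For example, you can read these outputs from a resource declared earlier and export them from your Pulumi program (a Python sketch; my_destination_redshift refers to the resource from the example usage above).

import pulumi

pulumi.export("destinationId", my_destination_redshift.destination_id)
pulumi.export("destinationCreatedAt", my_destination_redshift.created_at)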
Look up Existing DestinationRedshift Resource
Get an existing DestinationRedshift resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.
public static get(name: string, id: Input<ID>, state?: DestinationRedshiftState, opts?: CustomResourceOptions): DestinationRedshift
@staticmethod
def get(resource_name: str,
id: str,
opts: Optional[ResourceOptions] = None,
configuration: Optional[DestinationRedshiftConfigurationArgs] = None,
created_at: Optional[float] = None,
definition_id: Optional[str] = None,
destination_id: Optional[str] = None,
destination_type: Optional[str] = None,
name: Optional[str] = None,
resource_allocation: Optional[DestinationRedshiftResourceAllocationArgs] = None,
workspace_id: Optional[str] = None) -> DestinationRedshift
func GetDestinationRedshift(ctx *Context, name string, id IDInput, state *DestinationRedshiftState, opts ...ResourceOption) (*DestinationRedshift, error)
public static DestinationRedshift Get(string name, Input<string> id, DestinationRedshiftState? state, CustomResourceOptions? opts = null)
public static DestinationRedshift get(String name, Output<String> id, DestinationRedshiftState state, CustomResourceOptions options)
resources:
  _:
    type: airbyte:DestinationRedshift
    get:
      id: ${id}
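For example, in Python (a sketch; the pulumi_airbyte module name is assumed and the ID value is a placeholder for an existing destination ID):

import pulumi
import pulumi_airbyte as airbyte  # package name assumed

existing = airbyte.DestinationRedshift.get(
    "existingDestinationRedshift",
    id="...my_destination_id...",
)
pulumi.export("existingDestinationName", existing.name)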
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- resource_name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- Configuration DestinationRedshiftConfiguration
- CreatedAt double
- DefinitionId string
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- DestinationId string
- DestinationType string
- Name string
- Name of the destination e.g. dev-mysql-instance.
- ResourceAllocation DestinationRedshiftResourceAllocation
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
- WorkspaceId string
- Configuration DestinationRedshiftConfigurationArgs
- CreatedAt float64
- DefinitionId string
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- DestinationId string
- DestinationType string
- Name string
- Name of the destination e.g. dev-mysql-instance.
- ResourceAllocation DestinationRedshiftResourceAllocationArgs
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
- WorkspaceId string
- configuration DestinationRedshiftConfiguration
- createdAt Double
- definitionId String
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- destinationId String
- destinationType String
- name String
- Name of the destination e.g. dev-mysql-instance.
- resourceAllocation DestinationRedshiftResourceAllocation
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
- workspaceId String
- configuration DestinationRedshiftConfiguration
- createdAt number
- definitionId string
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- destinationId string
- destinationType string
- name string
- Name of the destination e.g. dev-mysql-instance.
- resourceAllocation DestinationRedshiftResourceAllocation
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
- workspaceId string
- configuration DestinationRedshiftConfigurationArgs
- created_at float
- definition_id str
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- destination_id str
- destination_type str
- name str
- Name of the destination e.g. dev-mysql-instance.
- resource_allocation DestinationRedshiftResourceAllocationArgs
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
- workspace_id str
- configuration Property Map
- createdAt Number
- definitionId String
- The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- destinationId String
- destinationType String
- name String
- Name of the destination e.g. dev-mysql-instance.
- resourceAllocation Property Map
- actor or actor definition specific resource requirements. if default is set, these are the requirements that should be set for ALL jobs run for this actor definition. it is overridden by the job type specific configurations. if not set, the platform will use defaults. these values will be overridden by configuration at the connection level.
- workspaceId String
Supporting Types
DestinationRedshiftConfiguration, DestinationRedshiftConfigurationArgs
- Database string
- Name of the database.
- Host string
- Host Endpoint of the Redshift Cluster (must include the cluster-id, region and end with .redshift.amazonaws.com)
- Password string
- Password associated with the username.
- Username string
- Username to use to access the database.
- DisableTypeDedupe bool
- Disable Writing Final Tables. WARNING! The data format in _airbyte_data is likely stable but there are no guarantees that other metadata columns will remain the same in future versions. Default: false
- DropCascade bool
- Drop tables with CASCADE. WARNING! This will delete all data in all dependent objects (views, etc.). Use with caution. This option is intended for usecases which can easily rebuild the dependent objects. Default: false
- JdbcUrlParams string
- Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
- Port double
- Port of the database. Default: 5439
- RawDataSchema string
- The schema to write raw tables into (default: airbyte_internal).
- Schema string
- The default schema tables are written to if the source does not specify a namespace. Unless specifically configured, the usual value for this field is "public". Default: "public"
- TunnelMethod DestinationRedshiftConfigurationTunnelMethod
- Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use.
- UploadingMethod DestinationRedshiftConfigurationUploadingMethod
- The way data will be uploaded to Redshift.
- Database string
- Name of the database.
- Host string
- Host Endpoint of the Redshift Cluster (must include the cluster-id, region and end with .redshift.amazonaws.com)
- Password string
- Password associated with the username.
- Username string
- Username to use to access the database.
- DisableTypeDedupe bool
- Disable Writing Final Tables. WARNING! The data format in _airbyte_data is likely stable but there are no guarantees that other metadata columns will remain the same in future versions. Default: false
- DropCascade bool
- Drop tables with CASCADE. WARNING! This will delete all data in all dependent objects (views, etc.). Use with caution. This option is intended for usecases which can easily rebuild the dependent objects. Default: false
- JdbcUrlParams string
- Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
- Port float64
- Port of the database. Default: 5439
- RawDataSchema string
- The schema to write raw tables into (default: airbyte_internal).
- Schema string
- The default schema tables are written to if the source does not specify a namespace. Unless specifically configured, the usual value for this field is "public". Default: "public"
- TunnelMethod DestinationRedshiftConfigurationTunnelMethod
- Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use.
- UploadingMethod DestinationRedshiftConfigurationUploadingMethod
- The way data will be uploaded to Redshift.
- database String
- Name of the database.
- host String
- Host Endpoint of the Redshift Cluster (must include the cluster-id, region and end with .redshift.amazonaws.com)
- password String
- Password associated with the username.
- username String
- Username to use to access the database.
- disableTypeDedupe Boolean
- Disable Writing Final Tables. WARNING! The data format in _airbyte_data is likely stable but there are no guarantees that other metadata columns will remain the same in future versions. Default: false
- dropCascade Boolean
- Drop tables with CASCADE. WARNING! This will delete all data in all dependent objects (views, etc.). Use with caution. This option is intended for usecases which can easily rebuild the dependent objects. Default: false
- jdbcUrlParams String
- Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
- port Double
- Port of the database. Default: 5439
- rawDataSchema String
- The schema to write raw tables into (default: airbyte_internal).
- schema String
- The default schema tables are written to if the source does not specify a namespace. Unless specifically configured, the usual value for this field is "public". Default: "public"
- tunnelMethod DestinationRedshiftConfigurationTunnelMethod
- Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use.
- uploadingMethod DestinationRedshiftConfigurationUploadingMethod
- The way data will be uploaded to Redshift.
- database string
- Name of the database.
- host string
- Host Endpoint of the Redshift Cluster (must include the cluster-id, region and end with .redshift.amazonaws.com)
- password string
- Password associated with the username.
- username string
- Username to use to access the database.
- disableTypeDedupe boolean
- Disable Writing Final Tables. WARNING! The data format in _airbyte_data is likely stable but there are no guarantees that other metadata columns will remain the same in future versions. Default: false
- dropCascade boolean
- Drop tables with CASCADE. WARNING! This will delete all data in all dependent objects (views, etc.). Use with caution. This option is intended for usecases which can easily rebuild the dependent objects. Default: false
- jdbcUrlParams string
- Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
- port number
- Port of the database. Default: 5439
- rawDataSchema string
- The schema to write raw tables into (default: airbyte_internal).
- schema string
- The default schema tables are written to if the source does not specify a namespace. Unless specifically configured, the usual value for this field is "public". Default: "public"
- tunnelMethod DestinationRedshiftConfigurationTunnelMethod
- Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use.
- uploadingMethod DestinationRedshiftConfigurationUploadingMethod
- The way data will be uploaded to Redshift.
- database str
- Name of the database.
- host str
- Host Endpoint of the Redshift Cluster (must include the cluster-id, region and end with .redshift.amazonaws.com)
- password str
- Password associated with the username.
- username str
- Username to use to access the database.
- disable_type_dedupe bool
- Disable Writing Final Tables. WARNING! The data format in _airbyte_data is likely stable but there are no guarantees that other metadata columns will remain the same in future versions. Default: false
- drop_cascade bool
- Drop tables with CASCADE. WARNING! This will delete all data in all dependent objects (views, etc.). Use with caution. This option is intended for usecases which can easily rebuild the dependent objects. Default: false
- jdbc_url_params str
- Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
- port float
- Port of the database. Default: 5439
- raw_data_schema str
- The schema to write raw tables into (default: airbyte_internal).
- schema str
- The default schema tables are written to if the source does not specify a namespace. Unless specifically configured, the usual value for this field is "public". Default: "public"
- tunnel_method DestinationRedshiftConfigurationTunnelMethod
- Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use.
- uploading_method DestinationRedshiftConfigurationUploadingMethod
- The way data will be uploaded to Redshift.
- database String
- Name of the database.
- host String
- Host Endpoint of the Redshift Cluster (must include the cluster-id, region and end with .redshift.amazonaws.com)
- password String
- Password associated with the username.
- username String
- Username to use to access the database.
- disableTypeDedupe Boolean
- Disable Writing Final Tables. WARNING! The data format in _airbyte_data is likely stable but there are no guarantees that other metadata columns will remain the same in future versions. Default: false
- dropCascade Boolean
- Drop tables with CASCADE. WARNING! This will delete all data in all dependent objects (views, etc.). Use with caution. This option is intended for usecases which can easily rebuild the dependent objects. Default: false
- jdbcUrlParams String
- Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
- port Number
- Port of the database. Default: 5439
- rawDataSchema String
- The schema to write raw tables into (default: airbyte_internal).
- schema String
- The default schema tables are written to if the source does not specify a namespace. Unless specifically configured, the usual value for this field is "public". Default: "public"
- tunnelMethod Property Map
- Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use.
- uploadingMethod Property Map
- The way data will be uploaded to Redshift.
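Note that the TunnelMethod and UploadingMethod objects hold alternative options: in practice you configure exactly one nested option (the constructor reference above sets all of them only to illustrate placeholders). A minimal Python sketch of the tunnel choice, with placeholder values:

configuration = {
    # ... database, host, and credentials omitted for brevity ...
    "tunnel_method": {
        # choose one of: no_tunnel, ssh_key_authentication, password_authentication
        "password_authentication": {
            "tunnel_host": "...my_tunnel_host...",
            "tunnel_port": 22,
            "tunnel_user": "...my_tunnel_user...",
            "tunnel_user_password": "...my_tunnel_user_password...",
        },
    },
    # uploading_method works the same way: set the single awss3_staging option
}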
DestinationRedshiftConfigurationTunnelMethod, DestinationRedshiftConfigurationTunnelMethodArgs
DestinationRedshiftConfigurationTunnelMethodPasswordAuthentication, DestinationRedshiftConfigurationTunnelMethodPasswordAuthenticationArgs
- TunnelHost string
- Hostname of the jump server host that allows inbound ssh tunnel.
- TunnelUser string
- OS-level username for logging into the jump server host
- TunnelUserPassword string
- OS-level password for logging into the jump server host
- TunnelPort double
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
- TunnelHost string
- Hostname of the jump server host that allows inbound ssh tunnel.
- TunnelUser string
- OS-level username for logging into the jump server host
- TunnelUserPassword string
- OS-level password for logging into the jump server host
- TunnelPort float64
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
- tunnelHost String
- Hostname of the jump server host that allows inbound ssh tunnel.
- tunnelUser String
- OS-level username for logging into the jump server host
- tunnelUserPassword String
- OS-level password for logging into the jump server host
- tunnelPort Double
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
- tunnelHost string
- Hostname of the jump server host that allows inbound ssh tunnel.
- tunnelUser string
- OS-level username for logging into the jump server host
- tunnelUserPassword string
- OS-level password for logging into the jump server host
- tunnelPort number
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
- tunnel_host str
- Hostname of the jump server host that allows inbound ssh tunnel.
- tunnel_user str
- OS-level username for logging into the jump server host
- tunnel_user_password str
- OS-level password for logging into the jump server host
- tunnel_port float
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
- tunnelHost String
- Hostname of the jump server host that allows inbound ssh tunnel.
- tunnelUser String
- OS-level username for logging into the jump server host
- tunnelUserPassword String
- OS-level password for logging into the jump server host
- tunnelPort Number
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
DestinationRedshiftConfigurationTunnelMethodSshKeyAuthentication, DestinationRedshiftConfigurationTunnelMethodSshKeyAuthenticationArgs
- SshKey string
- OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
- TunnelHost string
- Hostname of the jump server host that allows inbound ssh tunnel.
- TunnelUser string
- OS-level username for logging into the jump server host.
- TunnelPort double
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
- SshKey string
- OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
- TunnelHost string
- Hostname of the jump server host that allows inbound ssh tunnel.
- TunnelUser string
- OS-level username for logging into the jump server host.
- TunnelPort float64
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
- sshKey String
- OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
- tunnelHost String
- Hostname of the jump server host that allows inbound ssh tunnel.
- tunnelUser String
- OS-level username for logging into the jump server host.
- tunnelPort Double
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
- sshKey string
- OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
- tunnelHost string
- Hostname of the jump server host that allows inbound ssh tunnel.
- tunnelUser string
- OS-level username for logging into the jump server host.
- tunnelPort number
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
- ssh_key str
- OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
- tunnel_host str
- Hostname of the jump server host that allows inbound ssh tunnel.
- tunnel_user str
- OS-level username for logging into the jump server host.
- tunnel_port float
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
- sshKey String
- OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
- tunnelHost String
- Hostname of the jump server host that allows inbound ssh tunnel.
- tunnelUser String
- OS-level username for logging into the jump server host.
- tunnelPort Number
- Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
DestinationRedshiftConfigurationUploadingMethod, DestinationRedshiftConfigurationUploadingMethodArgs
- Awss3Staging DestinationRedshiftConfigurationUploadingMethodAwss3Staging
- (recommended) Uploads data to S3 and then uses a COPY to insert the data into Redshift. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details.
- Awss3Staging DestinationRedshiftConfigurationUploadingMethodAwss3Staging
- (recommended) Uploads data to S3 and then uses a COPY to insert the data into Redshift. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details.
- awss3Staging DestinationRedshiftConfigurationUploadingMethodAwss3Staging
- (recommended) Uploads data to S3 and then uses a COPY to insert the data into Redshift. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details.
- awss3Staging DestinationRedshiftConfigurationUploadingMethodAwss3Staging
- (recommended) Uploads data to S3 and then uses a COPY to insert the data into Redshift. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details.
- awss3_staging DestinationRedshiftConfigurationUploadingMethodAwss3Staging
- (recommended) Uploads data to S3 and then uses a COPY to insert the data into Redshift. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details.
- awss3Staging Property Map
- (recommended) Uploads data to S3 and then uses a COPY to insert the data into Redshift. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details.
DestinationRedshiftConfigurationUploadingMethodAwss3Staging, DestinationRedshiftConfigurationUploadingMethodAwss3StagingArgs
- AccessKeyId string
- This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
- S3BucketName string
- The name of the staging S3 bucket.
- SecretAccessKey string
- The corresponding secret to the above access key id. See AWS docs on how to generate an access key ID and secret access key.
- FileNamePattern string
- The pattern allows you to set the file-name format for the S3 staging file(s)
- PurgeStagingData bool
- Whether to delete the staging files from S3 after completing the sync. See docs for details. Default: true
- S3BucketPath string
- The directory under the S3 bucket where data will be written. If not provided, then defaults to the root directory. See path's name recommendations for more details.
- S3BucketRegion string
- The region of the S3 staging bucket. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
- AccessKeyId string
- This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
- S3BucketName string
- The name of the staging S3 bucket.
- SecretAccessKey string
- The corresponding secret to the above access key id. See AWS docs on how to generate an access key ID and secret access key.
- FileNamePattern string
- The pattern allows you to set the file-name format for the S3 staging file(s)
- PurgeStagingData bool
- Whether to delete the staging files from S3 after completing the sync. See docs for details. Default: true
- S3BucketPath string
- The directory under the S3 bucket where data will be written. If not provided, then defaults to the root directory. See path's name recommendations for more details.
- S3BucketRegion string
- The region of the S3 staging bucket. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
- accessKeyId String
- This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
- s3BucketName String
- The name of the staging S3 bucket.
- secretAccessKey String
- The corresponding secret to the above access key id. See AWS docs on how to generate an access key ID and secret access key.
- fileNamePattern String
- The pattern allows you to set the file-name format for the S3 staging file(s)
- purgeStagingData Boolean
- Whether to delete the staging files from S3 after completing the sync. See docs for details. Default: true
- s3BucketPath String
- The directory under the S3 bucket where data will be written. If not provided, then defaults to the root directory. See path's name recommendations for more details.
- s3BucketRegion String
- The region of the S3 staging bucket. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
- accessKeyId string
- This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
- s3BucketName string
- The name of the staging S3 bucket.
- secretAccessKey string
- The corresponding secret to the above access key id. See AWS docs on how to generate an access key ID and secret access key.
- fileNamePattern string
- The pattern allows you to set the file-name format for the S3 staging file(s)
- purgeStagingData boolean
- Whether to delete the staging files from S3 after completing the sync. See docs for details. Default: true
- s3BucketPath string
- The directory under the S3 bucket where data will be written. If not provided, then defaults to the root directory. See path's name recommendations for more details.
- s3BucketRegion string
- The region of the S3 staging bucket. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
- access_key_id str
- This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
- s3_bucket_name str
- The name of the staging S3 bucket.
- secret_access_key str
- The corresponding secret to the above access key id. See AWS docs on how to generate an access key ID and secret access key.
- file_name_pattern str
- The pattern allows you to set the file-name format for the S3 staging file(s)
- purge_staging_data bool
- Whether to delete the staging files from S3 after completing the sync. See docs for details. Default: true
- s3_bucket_path str
- The directory under the S3 bucket where data will be written. If not provided, then defaults to the root directory. See path's name recommendations for more details.
- s3_bucket_region str
- The region of the S3 staging bucket. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
- accessKeyId String
- This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
- s3BucketName String
- The name of the staging S3 bucket.
- secretAccessKey String
- The corresponding secret to the above access key id. See AWS docs on how to generate an access key ID and secret access key.
- fileNamePattern String
- The pattern allows you to set the file-name format for the S3 staging file(s)
- purgeStagingData Boolean
- Whether to delete the staging files from S3 after completing the sync. See docs for details. Default: true
- s3BucketPath String
- The directory under the S3 bucket where data will be written. If not provided, then defaults to the root directory. See path's name recommendations for more details.
- s3BucketRegion String
- The region of the S3 staging bucket. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
DestinationRedshiftResourceAllocation, DestinationRedshiftResourceAllocationArgs
- Default DestinationRedshiftResourceAllocationDefault
- optional resource requirements to run workers (blank for unbounded allocations)
- JobSpecifics List<DestinationRedshiftResourceAllocationJobSpecific>
- Default DestinationRedshiftResourceAllocationDefault
- optional resource requirements to run workers (blank for unbounded allocations)
- JobSpecifics []DestinationRedshiftResourceAllocationJobSpecific
- default_ DestinationRedshiftResourceAllocationDefault
- optional resource requirements to run workers (blank for unbounded allocations)
- jobSpecifics List<DestinationRedshiftResourceAllocationJobSpecific>
- default DestinationRedshiftResourceAllocationDefault
- optional resource requirements to run workers (blank for unbounded allocations)
- jobSpecifics DestinationRedshiftResourceAllocationJobSpecific[]
- default DestinationRedshiftResourceAllocationDefault
- optional resource requirements to run workers (blank for unbounded allocations)
- job_specifics Sequence[DestinationRedshiftResourceAllocationJobSpecific]
- default Property Map
- optional resource requirements to run workers (blank for unbounded allocations)
- jobSpecifics List<Property Map>
DestinationRedshiftResourceAllocationDefault, DestinationRedshiftResourceAllocationDefaultArgs
- CpuLimit string
- CpuRequest string
- EphemeralStorageLimit string
- EphemeralStorageRequest string
- MemoryLimit string
- MemoryRequest string
- CpuLimit string
- CpuRequest string
- EphemeralStorageLimit string
- EphemeralStorageRequest string
- MemoryLimit string
- MemoryRequest string
- cpuLimit String
- cpuRequest String
- ephemeralStorageLimit String
- ephemeralStorageRequest String
- memoryLimit String
- memoryRequest String
- cpuLimit string
- cpuRequest string
- ephemeralStorageLimit string
- ephemeralStorageRequest string
- memoryLimit string
- memoryRequest string
- cpu_limit str
- cpu_request str
- ephemeral_storage_limit str
- ephemeral_storage_request str
- memory_limit str
- memory_request str
- cpuLimit String
- cpuRequest String
- ephemeralStorageLimit String
- ephemeralStorageRequest String
- memoryLimit String
- memoryRequest String
DestinationRedshiftResourceAllocationJobSpecific, DestinationRedshiftResourceAllocationJobSpecificArgs
- JobType string
- enum that describes the different types of jobs that the platform runs. must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
- ResourceRequirements DestinationRedshiftResourceAllocationJobSpecificResourceRequirements
- optional resource requirements to run workers (blank for unbounded allocations)
- JobType string
- enum that describes the different types of jobs that the platform runs. must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
- ResourceRequirements DestinationRedshiftResourceAllocationJobSpecificResourceRequirements
- optional resource requirements to run workers (blank for unbounded allocations)
- jobType String
- enum that describes the different types of jobs that the platform runs. must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
- resourceRequirements DestinationRedshiftResourceAllocationJobSpecificResourceRequirements
- optional resource requirements to run workers (blank for unbounded allocations)
- jobType string
- enum that describes the different types of jobs that the platform runs. must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
- resourceRequirements DestinationRedshiftResourceAllocationJobSpecificResourceRequirements
- optional resource requirements to run workers (blank for unbounded allocations)
- job_type str
- enum that describes the different types of jobs that the platform runs. must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
- resource_requirements DestinationRedshiftResourceAllocationJobSpecificResourceRequirements
- optional resource requirements to run workers (blank for unbounded allocations)
- jobType String
- enum that describes the different types of jobs that the platform runs. must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
- resourceRequirements Property Map
- optional resource requirements to run workers (blank for unbounded allocations)
DestinationRedshiftResourceAllocationJobSpecificResourceRequirements, DestinationRedshiftResourceAllocationJobSpecificResourceRequirementsArgs
- CpuLimit string
- CpuRequest string
- EphemeralStorageLimit string
- EphemeralStorageRequest string
- MemoryLimit string
- MemoryRequest string
- CpuLimit string
- CpuRequest string
- EphemeralStorageLimit string
- EphemeralStorageRequest string
- MemoryLimit string
- MemoryRequest string
- cpuLimit String
- cpuRequest String
- ephemeralStorageLimit String
- ephemeralStorageRequest String
- memoryLimit String
- memoryRequest String
- cpuLimit string
- cpuRequest string
- ephemeralStorageLimit string
- ephemeralStorageRequest string
- memoryLimit string
- memoryRequest string
- cpu_limit str
- cpu_request str
- ephemeral_storage_limit str
- ephemeral_storage_request str
- memory_limit str
- memory_request str
- cpuLimit String
- cpuRequest String
- ephemeralStorageLimit String
- ephemeralStorageRequest String
- memoryLimit String
- memoryRequest String
Import
$ pulumi import airbyte:index/destinationRedshift:DestinationRedshift my_airbyte_destination_redshift ""
To learn more about importing existing cloud resources, see Importing resources.
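As an alternative to the CLI command, an existing destination can be adopted in code via the import resource option. A Python sketch follows; the pulumi_airbyte module name is assumed and the destination ID shown is a hypothetical placeholder.

import pulumi
import pulumi_airbyte as airbyte  # package name assumed

adopted = airbyte.DestinationRedshift(
    "myAirbyteDestinationRedshift",
    # configuration must match the existing destination's settings
    configuration={
        "database": "...my_database...",
        "host": "...my_host...",
        "username": "...my_username...",
        "password": "...my_password...",
    },
    workspace_id="...my_workspace_id...",
    opts=pulumi.ResourceOptions(import_="...existing-destination-id..."),
)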
Package Details
- Repository
- airbyte airbytehq/terraform-provider-airbyte
- License
- Notes
- This Pulumi package is based on the airbyte Terraform Provider.