airbyte.DestinationIceberg
DestinationIceberg Resource
Example Usage
Coming soon!
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.airbyte.DestinationIceberg;
import com.pulumi.airbyte.DestinationIcebergArgs;
import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationArgs;
import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationCatalogConfigArgs;
import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs;
import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs;
import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationFormatConfigArgs;
import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationStorageConfigArgs;
import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationStorageConfigServerManagedArgs;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var myDestinationIceberg = new DestinationIceberg("myDestinationIceberg", DestinationIcebergArgs.builder()
            .configuration(DestinationIcebergConfigurationArgs.builder()
                .catalogConfig(DestinationIcebergConfigurationCatalogConfigArgs.builder()
                    .glueCatalog(DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs.builder()
                        .catalogType("Glue")
                        .database("public")
                        .build())
                    .hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig(DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs.builder()
                        .catalogType("Hadoop")
                        .database("default")
                        .build())
                    .build())
                .formatConfig(DestinationIcebergConfigurationFormatConfigArgs.builder()
                    .autoCompact(true)
                    .compactTargetFileSizeInMb(9)
                    .flushBatchSize(8)
                    .format("Parquet")
                    .build())
                .storageConfig(DestinationIcebergConfigurationStorageConfigArgs.builder()
                    .serverManaged(DestinationIcebergConfigurationStorageConfigServerManagedArgs.builder()
                        .managedWarehouseName("...my_managed_warehouse_name...")
                        .storageType("MANAGED")
                        .build())
                    .build())
                .build())
            .definitionId("263446c4-43e9-45cc-ac60-4398823f5d7f")
            .workspaceId("a348c0e2-12a2-4320-9af6-f59e32031847")
            .build());
    }
}
resources:
  myDestinationIceberg:
    type: airbyte:DestinationIceberg
    properties:
      configuration:
        catalog_config:
          glueCatalog:
            catalogType: Glue
            database: public
          hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig:
            catalogType: Hadoop
            database: default
        format_config:
          autoCompact: true
          compactTargetFileSizeInMb: 9
          flushBatchSize: 8
          format: Parquet
        storage_config:
          serverManaged:
            managedWarehouseName: '...my_managed_warehouse_name...'
            storageType: MANAGED
      definitionId: 263446c4-43e9-45cc-ac60-4398823f5d7f
      workspaceId: a348c0e2-12a2-4320-9af6-f59e32031847
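As a sketch, the same configuration as the YAML example above can be expressed as a Python program; this assumes the pulumi_airbyte package and uses the dictionary-literal input form with the snake_case keys shown later on this page:

```python
import pulumi_airbyte as airbyte

# Mirrors the YAML example: a Glue catalog (plus a Hadoop catalog entry)
# writing Parquet files to a server-managed warehouse.
my_destination_iceberg = airbyte.DestinationIceberg("myDestinationIceberg",
    configuration={
        "catalog_config": {
            "glue_catalog": {
                "catalog_type": "Glue",
                "database": "public",
            },
            "hadoop_catalog_use_hierarchical_file_systems_as_same_as_storage_config": {
                "catalog_type": "Hadoop",
                "database": "default",
            },
        },
        "format_config": {
            "auto_compact": True,
            "compact_target_file_size_in_mb": 9,
            "flush_batch_size": 8,
            "format": "Parquet",
        },
        "storage_config": {
            "server_managed": {
                "managed_warehouse_name": "...my_managed_warehouse_name...",
                "storage_type": "MANAGED",
            },
        },
    },
    definition_id="263446c4-43e9-45cc-ac60-4398823f5d7f",
    workspace_id="a348c0e2-12a2-4320-9af6-f59e32031847")
```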
Create DestinationIceberg Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new DestinationIceberg(name: string, args: DestinationIcebergArgs, opts?: CustomResourceOptions);
@overload
def DestinationIceberg(resource_name: str,
args: DestinationIcebergArgs,
opts: Optional[ResourceOptions] = None)
@overload
def DestinationIceberg(resource_name: str,
opts: Optional[ResourceOptions] = None,
configuration: Optional[DestinationIcebergConfigurationArgs] = None,
workspace_id: Optional[str] = None,
definition_id: Optional[str] = None,
name: Optional[str] = None)
func NewDestinationIceberg(ctx *Context, name string, args DestinationIcebergArgs, opts ...ResourceOption) (*DestinationIceberg, error)
public DestinationIceberg(string name, DestinationIcebergArgs args, CustomResourceOptions? opts = null)
public DestinationIceberg(String name, DestinationIcebergArgs args)
public DestinationIceberg(String name, DestinationIcebergArgs args, CustomResourceOptions options)
type: airbyte:DestinationIceberg
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args DestinationIcebergArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args DestinationIcebergArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args DestinationIcebergArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args DestinationIcebergArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args DestinationIcebergArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var destinationIcebergResource = new Airbyte.DestinationIceberg("destinationIcebergResource", new()
{
Configuration = new Airbyte.Inputs.DestinationIcebergConfigurationArgs
{
CatalogConfig = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigArgs
{
GlueCatalog = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs
{
CatalogType = "string",
Database = "string",
},
HadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs
{
CatalogType = "string",
Database = "string",
},
HiveCatalogUseApacheHiveMetaStore = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStoreArgs
{
HiveThriftUri = "string",
CatalogType = "string",
Database = "string",
},
JdbcCatalogUseRelationalDatabase = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabaseArgs
{
CatalogSchema = "string",
CatalogType = "string",
Database = "string",
JdbcUrl = "string",
Password = "string",
Ssl = false,
Username = "string",
},
RestCatalog = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigRestCatalogArgs
{
RestUri = "string",
CatalogType = "string",
RestCredential = "string",
RestToken = "string",
},
},
FormatConfig = new Airbyte.Inputs.DestinationIcebergConfigurationFormatConfigArgs
{
AutoCompact = false,
CompactTargetFileSizeInMb = 0,
FlushBatchSize = 0,
Format = "string",
},
StorageConfig = new Airbyte.Inputs.DestinationIcebergConfigurationStorageConfigArgs
{
S3 = new Airbyte.Inputs.DestinationIcebergConfigurationStorageConfigS3Args
{
AccessKeyId = "string",
S3WarehouseUri = "string",
SecretAccessKey = "string",
S3BucketRegion = "string",
S3Endpoint = "string",
S3PathStyleAccess = false,
StorageType = "string",
},
ServerManaged = new Airbyte.Inputs.DestinationIcebergConfigurationStorageConfigServerManagedArgs
{
ManagedWarehouseName = "string",
StorageType = "string",
},
},
},
WorkspaceId = "string",
DefinitionId = "string",
Name = "string",
});
example, err := airbyte.NewDestinationIceberg(ctx, "destinationIcebergResource", &airbyte.DestinationIcebergArgs{
	Configuration: &airbyte.DestinationIcebergConfigurationArgs{
		CatalogConfig: &airbyte.DestinationIcebergConfigurationCatalogConfigArgs{
			GlueCatalog: &airbyte.DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs{
				CatalogType: pulumi.String("string"),
				Database: pulumi.String("string"),
			},
			HadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig: &airbyte.DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs{
				CatalogType: pulumi.String("string"),
				Database: pulumi.String("string"),
			},
			HiveCatalogUseApacheHiveMetaStore: &airbyte.DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStoreArgs{
				HiveThriftUri: pulumi.String("string"),
				CatalogType: pulumi.String("string"),
				Database: pulumi.String("string"),
			},
			JdbcCatalogUseRelationalDatabase: &airbyte.DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabaseArgs{
				CatalogSchema: pulumi.String("string"),
				CatalogType: pulumi.String("string"),
				Database: pulumi.String("string"),
				JdbcUrl: pulumi.String("string"),
				Password: pulumi.String("string"),
				Ssl: pulumi.Bool(false),
				Username: pulumi.String("string"),
			},
			RestCatalog: &airbyte.DestinationIcebergConfigurationCatalogConfigRestCatalogArgs{
				RestUri: pulumi.String("string"),
				CatalogType: pulumi.String("string"),
				RestCredential: pulumi.String("string"),
				RestToken: pulumi.String("string"),
			},
		},
		FormatConfig: &airbyte.DestinationIcebergConfigurationFormatConfigArgs{
			AutoCompact: pulumi.Bool(false),
			CompactTargetFileSizeInMb: pulumi.Float64(0),
			FlushBatchSize: pulumi.Float64(0),
			Format: pulumi.String("string"),
		},
		StorageConfig: &airbyte.DestinationIcebergConfigurationStorageConfigArgs{
			S3: &airbyte.DestinationIcebergConfigurationStorageConfigS3Args{
				AccessKeyId: pulumi.String("string"),
				S3WarehouseUri: pulumi.String("string"),
				SecretAccessKey: pulumi.String("string"),
				S3BucketRegion: pulumi.String("string"),
				S3Endpoint: pulumi.String("string"),
				S3PathStyleAccess: pulumi.Bool(false),
				StorageType: pulumi.String("string"),
			},
			ServerManaged: &airbyte.DestinationIcebergConfigurationStorageConfigServerManagedArgs{
				ManagedWarehouseName: pulumi.String("string"),
				StorageType: pulumi.String("string"),
			},
		},
	},
	WorkspaceId: pulumi.String("string"),
	DefinitionId: pulumi.String("string"),
	Name: pulumi.String("string"),
})
var destinationIcebergResource = new DestinationIceberg("destinationIcebergResource", DestinationIcebergArgs.builder()
.configuration(DestinationIcebergConfigurationArgs.builder()
.catalogConfig(DestinationIcebergConfigurationCatalogConfigArgs.builder()
.glueCatalog(DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs.builder()
.catalogType("string")
.database("string")
.build())
.hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig(DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs.builder()
.catalogType("string")
.database("string")
.build())
.hiveCatalogUseApacheHiveMetaStore(DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStoreArgs.builder()
.hiveThriftUri("string")
.catalogType("string")
.database("string")
.build())
.jdbcCatalogUseRelationalDatabase(DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabaseArgs.builder()
.catalogSchema("string")
.catalogType("string")
.database("string")
.jdbcUrl("string")
.password("string")
.ssl(false)
.username("string")
.build())
.restCatalog(DestinationIcebergConfigurationCatalogConfigRestCatalogArgs.builder()
.restUri("string")
.catalogType("string")
.restCredential("string")
.restToken("string")
.build())
.build())
.formatConfig(DestinationIcebergConfigurationFormatConfigArgs.builder()
.autoCompact(false)
.compactTargetFileSizeInMb(0)
.flushBatchSize(0)
.format("string")
.build())
.storageConfig(DestinationIcebergConfigurationStorageConfigArgs.builder()
.s3(DestinationIcebergConfigurationStorageConfigS3Args.builder()
.accessKeyId("string")
.s3WarehouseUri("string")
.secretAccessKey("string")
.s3BucketRegion("string")
.s3Endpoint("string")
.s3PathStyleAccess(false)
.storageType("string")
.build())
.serverManaged(DestinationIcebergConfigurationStorageConfigServerManagedArgs.builder()
.managedWarehouseName("string")
.storageType("string")
.build())
.build())
.build())
.workspaceId("string")
.definitionId("string")
.name("string")
.build());
destination_iceberg_resource = airbyte.DestinationIceberg("destinationIcebergResource",
configuration={
"catalog_config": {
"glue_catalog": {
"catalog_type": "string",
"database": "string",
},
"hadoop_catalog_use_hierarchical_file_systems_as_same_as_storage_config": {
"catalog_type": "string",
"database": "string",
},
"hive_catalog_use_apache_hive_meta_store": {
"hive_thrift_uri": "string",
"catalog_type": "string",
"database": "string",
},
"jdbc_catalog_use_relational_database": {
"catalog_schema": "string",
"catalog_type": "string",
"database": "string",
"jdbc_url": "string",
"password": "string",
"ssl": False,
"username": "string",
},
"rest_catalog": {
"rest_uri": "string",
"catalog_type": "string",
"rest_credential": "string",
"rest_token": "string",
},
},
"format_config": {
"auto_compact": False,
"compact_target_file_size_in_mb": 0,
"flush_batch_size": 0,
"format": "string",
},
"storage_config": {
"s3": {
"access_key_id": "string",
"s3_warehouse_uri": "string",
"secret_access_key": "string",
"s3_bucket_region": "string",
"s3_endpoint": "string",
"s3_path_style_access": False,
"storage_type": "string",
},
"server_managed": {
"managed_warehouse_name": "string",
"storage_type": "string",
},
},
},
workspace_id="string",
definition_id="string",
name="string")
const destinationIcebergResource = new airbyte.DestinationIceberg("destinationIcebergResource", {
configuration: {
catalogConfig: {
glueCatalog: {
catalogType: "string",
database: "string",
},
hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig: {
catalogType: "string",
database: "string",
},
hiveCatalogUseApacheHiveMetaStore: {
hiveThriftUri: "string",
catalogType: "string",
database: "string",
},
jdbcCatalogUseRelationalDatabase: {
catalogSchema: "string",
catalogType: "string",
database: "string",
jdbcUrl: "string",
password: "string",
ssl: false,
username: "string",
},
restCatalog: {
restUri: "string",
catalogType: "string",
restCredential: "string",
restToken: "string",
},
},
formatConfig: {
autoCompact: false,
compactTargetFileSizeInMb: 0,
flushBatchSize: 0,
format: "string",
},
storageConfig: {
s3: {
accessKeyId: "string",
s3WarehouseUri: "string",
secretAccessKey: "string",
s3BucketRegion: "string",
s3Endpoint: "string",
s3PathStyleAccess: false,
storageType: "string",
},
serverManaged: {
managedWarehouseName: "string",
storageType: "string",
},
},
},
workspaceId: "string",
definitionId: "string",
name: "string",
});
type: airbyte:DestinationIceberg
properties:
configuration:
catalogConfig:
glueCatalog:
catalogType: string
database: string
hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig:
catalogType: string
database: string
hiveCatalogUseApacheHiveMetaStore:
catalogType: string
database: string
hiveThriftUri: string
jdbcCatalogUseRelationalDatabase:
catalogSchema: string
catalogType: string
database: string
jdbcUrl: string
password: string
ssl: false
username: string
restCatalog:
catalogType: string
restCredential: string
restToken: string
restUri: string
formatConfig:
autoCompact: false
compactTargetFileSizeInMb: 0
flushBatchSize: 0
format: string
storageConfig:
s3:
accessKeyId: string
s3BucketRegion: string
s3Endpoint: string
s3PathStyleAccess: false
s3WarehouseUri: string
secretAccessKey: string
storageType: string
serverManaged:
managedWarehouseName: string
storageType: string
definitionId: string
name: string
workspaceId: string
DestinationIceberg Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
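For instance, the nested configuration input can be passed either way; the sketch below assumes the pulumi_airbyte package, with Args class names following the types shown in the constructor example above, and uses placeholder values throughout:

```python
import pulumi_airbyte as airbyte

# Dictionary-literal form: nested inputs as plain dicts with snake_case keys.
airbyte.DestinationIceberg("dest-via-dicts",
    workspace_id="workspace-uuid",  # placeholder
    configuration={
        "catalog_config": {"glue_catalog": {"catalog_type": "Glue", "database": "public"}},
        "format_config": {"format": "Parquet"},
        "storage_config": {"server_managed": {
            "managed_warehouse_name": "wh",  # placeholder
            "storage_type": "MANAGED"}},
    })

# Argument-class form: the same inputs via typed Args classes.
airbyte.DestinationIceberg("dest-via-classes",
    workspace_id="workspace-uuid",  # placeholder
    configuration=airbyte.DestinationIcebergConfigurationArgs(
        catalog_config=airbyte.DestinationIcebergConfigurationCatalogConfigArgs(
            glue_catalog=airbyte.DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs(
                catalog_type="Glue", database="public")),
        format_config=airbyte.DestinationIcebergConfigurationFormatConfigArgs(format="Parquet"),
        storage_config=airbyte.DestinationIcebergConfigurationStorageConfigArgs(
            server_managed=airbyte.DestinationIcebergConfigurationStorageConfigServerManagedArgs(
                managed_warehouse_name="wh", storage_type="MANAGED"))))
```

Both forms produce the same resource inputs; the Args classes add type checking in editors, while dict literals are more compact.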
The DestinationIceberg resource accepts the following input properties:
- Configuration DestinationIcebergConfiguration
- WorkspaceId string
- DefinitionId string - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- Name string - Name of the destination, e.g. dev-mysql-instance.
- Configuration DestinationIcebergConfigurationArgs
- WorkspaceId string
- DefinitionId string - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- Name string - Name of the destination, e.g. dev-mysql-instance.
- configuration DestinationIcebergConfiguration
- workspaceId String
- definitionId String - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- name String - Name of the destination, e.g. dev-mysql-instance.
- configuration DestinationIcebergConfiguration
- workspaceId string
- definitionId string - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- name string - Name of the destination, e.g. dev-mysql-instance.
- configuration DestinationIcebergConfigurationArgs
- workspace_id str
- definition_id str - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- name str - Name of the destination, e.g. dev-mysql-instance.
- configuration Property Map
- workspaceId String
- definitionId String - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- name String - Name of the destination, e.g. dev-mysql-instance.
Outputs
All input properties are implicitly available as output properties. Additionally, the DestinationIceberg resource produces the following output properties:
- CreatedAt double
- DestinationId string
- DestinationType string
- Id string - The provider-assigned unique ID for this managed resource.
- ResourceAllocation DestinationIcebergResourceAllocation - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
- CreatedAt float64
- DestinationId string
- DestinationType string
- Id string - The provider-assigned unique ID for this managed resource.
- ResourceAllocation DestinationIcebergResourceAllocation - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
- createdAt Double
- destinationId String
- destinationType String
- id String - The provider-assigned unique ID for this managed resource.
- resourceAllocation DestinationIcebergResourceAllocation - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
- createdAt number
- destinationId string
- destinationType string
- id string - The provider-assigned unique ID for this managed resource.
- resourceAllocation DestinationIcebergResourceAllocation - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
- created_at float
- destination_id str
- destination_type str
- id str - The provider-assigned unique ID for this managed resource.
- resource_allocation DestinationIcebergResourceAllocation - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
- createdAt Number
- destinationId String
- destinationType String
- id String - The provider-assigned unique ID for this managed resource.
- resourceAllocation Property Map - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
Look up Existing DestinationIceberg Resource
Get an existing DestinationIceberg resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.
public static get(name: string, id: Input<ID>, state?: DestinationIcebergState, opts?: CustomResourceOptions): DestinationIceberg
@staticmethod
def get(resource_name: str,
id: str,
opts: Optional[ResourceOptions] = None,
configuration: Optional[DestinationIcebergConfigurationArgs] = None,
created_at: Optional[float] = None,
definition_id: Optional[str] = None,
destination_id: Optional[str] = None,
destination_type: Optional[str] = None,
name: Optional[str] = None,
resource_allocation: Optional[DestinationIcebergResourceAllocationArgs] = None,
workspace_id: Optional[str] = None) -> DestinationIceberg
func GetDestinationIceberg(ctx *Context, name string, id IDInput, state *DestinationIcebergState, opts ...ResourceOption) (*DestinationIceberg, error)
public static DestinationIceberg Get(string name, Input<string> id, DestinationIcebergState? state, CustomResourceOptions? opts = null)
public static DestinationIceberg get(String name, Output<String> id, DestinationIcebergState state, CustomResourceOptions options)
resources:
  _:
    type: airbyte:DestinationIceberg
    get:
      id: ${id}
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- resource_name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
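As a sketch, looking up an existing destination in Python (assuming the pulumi_airbyte package; the ID below is a hypothetical placeholder for a real provider-assigned ID):

```python
import pulumi_airbyte as airbyte

# Look up an already-provisioned destination by its provider-assigned ID.
# The resolved state is then available as outputs, e.g. existing.name.
existing = airbyte.DestinationIceberg.get(
    "existingDestinationIceberg",
    "00000000-0000-0000-0000-000000000000")  # placeholder ID
```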
- Configuration DestinationIcebergConfiguration
- CreatedAt double
- DefinitionId string - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- DestinationId string
- DestinationType string
- Name string - Name of the destination, e.g. dev-mysql-instance.
- ResourceAllocation DestinationIcebergResourceAllocation - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
- WorkspaceId string
- Configuration DestinationIcebergConfigurationArgs
- CreatedAt float64
- DefinitionId string - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- DestinationId string
- DestinationType string
- Name string - Name of the destination, e.g. dev-mysql-instance.
- ResourceAllocation DestinationIcebergResourceAllocationArgs - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
- WorkspaceId string
- configuration DestinationIcebergConfiguration
- createdAt Double
- definitionId String - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- destinationId String
- destinationType String
- name String - Name of the destination, e.g. dev-mysql-instance.
- resourceAllocation DestinationIcebergResourceAllocation - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
- workspaceId String
- configuration DestinationIcebergConfiguration
- createdAt number
- definitionId string - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- destinationId string
- destinationType string
- name string - Name of the destination, e.g. dev-mysql-instance.
- resourceAllocation DestinationIcebergResourceAllocation - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
- workspaceId string
- configuration DestinationIcebergConfigurationArgs
- created_at float
- definition_id str - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- destination_id str
- destination_type str
- name str - Name of the destination, e.g. dev-mysql-instance.
- resource_allocation DestinationIcebergResourceAllocationArgs - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
- workspace_id str
- configuration Property Map
- createdAt Number
- definitionId String - The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- destinationId String
- destinationType String
- name String - Name of the destination, e.g. dev-mysql-instance.
- resourceAllocation Property Map - Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
- workspaceId String
Supporting Types
DestinationIcebergConfiguration, DestinationIcebergConfigurationArgs
- CatalogConfig DestinationIcebergConfigurationCatalogConfig - Catalog config of Iceberg.
- FormatConfig DestinationIcebergConfigurationFormatConfig - File format of Iceberg storage.
- StorageConfig DestinationIcebergConfigurationStorageConfig - Storage config of Iceberg.
- CatalogConfig DestinationIcebergConfigurationCatalogConfig - Catalog config of Iceberg.
- FormatConfig DestinationIcebergConfigurationFormatConfig - File format of Iceberg storage.
- StorageConfig DestinationIcebergConfigurationStorageConfig - Storage config of Iceberg.
- catalogConfig DestinationIcebergConfigurationCatalogConfig - Catalog config of Iceberg.
- formatConfig DestinationIcebergConfigurationFormatConfig - File format of Iceberg storage.
- storageConfig DestinationIcebergConfigurationStorageConfig - Storage config of Iceberg.
- catalogConfig DestinationIcebergConfigurationCatalogConfig - Catalog config of Iceberg.
- formatConfig DestinationIcebergConfigurationFormatConfig - File format of Iceberg storage.
- storageConfig DestinationIcebergConfigurationStorageConfig - Storage config of Iceberg.
- catalog_config DestinationIcebergConfigurationCatalogConfig - Catalog config of Iceberg.
- format_config DestinationIcebergConfigurationFormatConfig - File format of Iceberg storage.
- storage_config DestinationIcebergConfigurationStorageConfig - Storage config of Iceberg.
- catalogConfig Property Map - Catalog config of Iceberg.
- formatConfig Property Map - File format of Iceberg storage.
- storageConfig Property Map - Storage config of Iceberg.
DestinationIcebergConfigurationCatalogConfig, DestinationIcebergConfigurationCatalogConfigArgs
- Glue
Catalog DestinationIceberg Configuration Catalog Config Glue Catalog - The GlueCatalog connects to a AWS Glue Catalog
- Hadoop
Catalog DestinationUse Hierarchical File Systems As Same As Storage Config Iceberg Configuration Catalog Config Hadoop Catalog Use Hierarchical File Systems As Same As Storage Config - A Hadoop catalog doesn’t need to connect to a Hive MetaStore, but can only be used with HDFS or similar file systems that support atomic rename.
- Hive
Catalog DestinationUse Apache Hive Meta Store Iceberg Configuration Catalog Config Hive Catalog Use Apache Hive Meta Store - Jdbc
Catalog DestinationUse Relational Database Iceberg Configuration Catalog Config Jdbc Catalog Use Relational Database - Using a table in a relational database to manage Iceberg tables through JDBC. Read more \n\nhere\n\n. Supporting: PostgreSQL
- Rest
Catalog DestinationIceberg Configuration Catalog Config Rest Catalog - The RESTCatalog connects to a REST server at the specified URI
- Glue
Catalog DestinationIceberg Configuration Catalog Config Glue Catalog - The GlueCatalog connects to a AWS Glue Catalog
- Hadoop
Catalog DestinationUse Hierarchical File Systems As Same As Storage Config Iceberg Configuration Catalog Config Hadoop Catalog Use Hierarchical File Systems As Same As Storage Config - A Hadoop catalog doesn’t need to connect to a Hive MetaStore, but can only be used with HDFS or similar file systems that support atomic rename.
- Hive
Catalog DestinationUse Apache Hive Meta Store Iceberg Configuration Catalog Config Hive Catalog Use Apache Hive Meta Store - Jdbc
The catalog config is a one-of: exactly one of the following variants is expected. (Property names below are given in camelCase; the Python SDK uses the snake_case equivalents, e.g. catalog_config, rest_catalog.)
- glueCatalog (DestinationIcebergConfigurationCatalogConfigGlueCatalog) - The GlueCatalog connects to an AWS Glue Catalog.
- hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig (DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig) - A Hadoop catalog doesn't need to connect to a Hive MetaStore, but can only be used with HDFS or similar file systems that support atomic rename.
- hiveCatalogUseApacheHiveMetaStore (DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStore)
- jdbcCatalogUseRelationalDatabase (DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabase) - Using a table in a relational database to manage Iceberg tables through JDBC. Read more here. Supporting: PostgreSQL.
- restCatalog (DestinationIcebergConfigurationCatalogConfigRestCatalog) - The RESTCatalog connects to a REST server at the specified URI.
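Since only one catalog variant should be populated at a time, a configuration can be sanity-checked before it is handed to the resource. The `CatalogConfig` type and `exactlyOneCatalog` helper below are illustrative sketches, not part of the provider SDK:

```typescript
// Sketch: the catalog config block is a one-of, so exactly one of the five
// catalog variants may be set. This type and helper are hypothetical.
type CatalogConfig = {
  glueCatalog?: Record<string, unknown>;
  hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig?: Record<string, unknown>;
  hiveCatalogUseApacheHiveMetaStore?: Record<string, unknown>;
  jdbcCatalogUseRelationalDatabase?: Record<string, unknown>;
  restCatalog?: Record<string, unknown>;
};

// Returns true when exactly one variant is populated.
function exactlyOneCatalog(cfg: CatalogConfig): boolean {
  return Object.values(cfg).filter((v) => v !== undefined).length === 1;
}

const catalogConfig: CatalogConfig = {
  restCatalog: { catalogType: "Rest", restUri: "http://localhost:8181" },
};
```

A configuration with zero or with two variants set would fail this check, which is a cheap way to catch copy-paste mistakes before a `pulumi up`.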
DestinationIcebergConfigurationCatalogConfigGlueCatalog, DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs
- catalogType (string) - Default: "Glue"; must be "Glue".
- database (string) - The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public".
DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig, DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs
- catalogType (string) - Default: "Hadoop"; must be "Hadoop".
- database (string) - The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default".
DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStore, DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStoreArgs
- hiveThriftUri (string) - Hive MetaStore thrift server URI of the Iceberg catalog.
- catalogType (string) - Default: "Hive"; must be "Hive".
- database (string) - The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default".
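The spec does not pin down the URI format, but Hive MetaStore endpoints conventionally take the form `thrift://host:port`, with 9083 as the common default port. A small hypothetical sanity check:

```typescript
// Hypothetical validator for a Hive MetaStore thrift URI of the conventional
// form thrift://host:port (9083 is the usual MetaStore port). This is an
// illustrative helper, not part of the provider.
function looksLikeThriftUri(uri: string): boolean {
  const m = /^thrift:\/\/([^:/]+):(\d+)$/.exec(uri);
  return m !== null && Number(m[2]) > 0 && Number(m[2]) < 65536;
}
```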
DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabase, DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabaseArgs
- catalogSchema (string) - Iceberg catalog metadata tables are written to the catalog schema. The usual value for this field is "public". Default: "public".
- catalogType (string) - Default: "Jdbc"; must be "Jdbc".
- database (string) - The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public".
- jdbcUrl (string)
- password (string) - Password associated with the username.
- ssl (bool) - Encrypt data using SSL. When activating SSL, please select one of the connection modes. Default: false.
- username (string) - Username to use to access the database.
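To make the documented defaults for the JDBC variant concrete, here is a sketch of a helper that fills them into a partially specified config. The interface, helper, and JDBC URL are illustrative assumptions, not provider code:

```typescript
// Illustrative shape of the JDBC catalog config; field comments restate the
// documented defaults. Not the provider's actual input type.
interface JdbcCatalogConfig {
  jdbcUrl: string;
  catalogType?: string;   // Default "Jdbc"; must be "Jdbc"
  catalogSchema?: string; // Default "public"
  database?: string;      // Default "public"
  ssl?: boolean;          // Default false
  username?: string;
  password?: string;
}

// Apply the documented defaults; explicitly set fields win over defaults.
function withJdbcDefaults(cfg: JdbcCatalogConfig): JdbcCatalogConfig {
  return {
    catalogType: "Jdbc",
    catalogSchema: "public",
    database: "public",
    ssl: false,
    ...cfg,
  };
}

// Hypothetical PostgreSQL endpoint, the only database the spec lists as supported.
const jdbcCfg = withJdbcDefaults({
  jdbcUrl: "jdbc:postgresql://db.example.com:5432/iceberg",
});
```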
DestinationIcebergConfigurationCatalogConfigRestCatalog, DestinationIcebergConfigurationCatalogConfigRestCatalogArgs
- restUri (string)
- catalogType (string) - Default: "Rest"; must be "Rest".
- restCredential (string)
- restToken (string)
DestinationIcebergConfigurationFormatConfig, DestinationIcebergConfigurationFormatConfigArgs
- autoCompact (bool) - Automatically compact data files when a stream closes. Default: false.
- compactTargetFileSizeInMb (double) - The target size of an Iceberg data file when performing a compaction action. Default: 100.
- flushBatchSize (double) - Iceberg data file flush batch size. Incoming rows are written to a cache first; when the cache reaches this batch size, it is flushed into a real Iceberg data file. Default: 10000.
- format (string) - Default: "Parquet"; must be one of ["Parquet", "Avro"].
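The flushBatchSize semantics described above (rows buffer in a cache, and a flush produces one data file each time the cache fills) can be sketched as follows. This `BufferedWriter` is an illustrative model of the behaviour, not the connector's implementation:

```typescript
// Illustrative model of flushBatchSize: rows accumulate in an in-memory
// cache, and each time the cache reaches `flushBatchSize` rows it is flushed
// as one data file. Not the connector's actual code.
class BufferedWriter {
  private cache: unknown[] = [];
  public flushes = 0; // number of data files "written"

  constructor(private readonly flushBatchSize: number) {}

  write(row: unknown): void {
    this.cache.push(row);
    if (this.cache.length >= this.flushBatchSize) {
      this.flushes += 1; // stand-in for writing an Iceberg data file
      this.cache = [];
    }
  }

  pendingRows(): number {
    return this.cache.length;
  }
}
```

With a batch size of 3, writing 7 rows produces 2 flushes and leaves 1 row cached; a larger flushBatchSize therefore means fewer, bigger data files per sync.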
DestinationIcebergConfigurationStorageConfig, DestinationIcebergConfigurationStorageConfigArgs
- s3 (DestinationIcebergConfigurationStorageConfigS3) - S3 object storage.
- serverManaged (DestinationIcebergConfigurationStorageConfigServerManaged) - Server-managed object storage.
DestinationIcebergConfigurationStorageConfigS3, DestinationIcebergConfigurationStorageConfigS3Args
- accessKeyId (string) - The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
- s3WarehouseUri (string) - The warehouse URI for Iceberg.
- secretAccessKey (string) - The corresponding secret to the access key ID. Read more here.
- s3BucketRegion (string) - The region of the S3 bucket. See here for all region codes. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"].
- s3Endpoint (string) - Your S3 endpoint URL. Read more here. Default: "".
- s3PathStyleAccess (bool) - Use path-style access. Default: true.
- storageType (string) - Default: "S3"; must be "S3".
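Because s3BucketRegion is constrained to the enumerated codes (with "" meaning unset), a membership check catches typos before deployment. The helper below is illustrative; the allowed values are taken verbatim from the spec above:

```typescript
// Allowed region codes from the spec above; "" means the region is not set.
const ALLOWED_REGIONS = new Set([
  "", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2",
  "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1",
  "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1",
  "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1",
  "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1",
  "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1",
  "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1",
  "us-west-1", "us-west-2",
]);

// Hypothetical pre-deployment validation helper.
function isValidRegion(region: string): boolean {
  return ALLOWED_REGIONS.has(region);
}
```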
DestinationIcebergConfigurationStorageConfigServerManaged, DestinationIcebergConfigurationStorageConfigServerManagedArgs
- managedWarehouseName (string) - The name of the managed warehouse.
- storageType (string) - Default: "MANAGED"; must be "MANAGED".
DestinationIcebergResourceAllocation, DestinationIcebergResourceAllocationArgs
- default (DestinationIcebergResourceAllocationDefault) - Optional resource requirements to run workers (blank for unbounded allocations).
- jobSpecifics (List<DestinationIcebergResourceAllocationJobSpecific>)
DestinationIcebergResourceAllocationDefault, DestinationIcebergResourceAllocationDefaultArgs
- cpuLimit (string)
- cpuRequest (string)
- ephemeralStorageLimit (string)
- ephemeralStorageRequest (string)
- memoryLimit (string)
- memoryRequest (string)
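The spec leaves the format of these string fields unstated; in practice, Airbyte workers run on Kubernetes, so the values are typically Kubernetes-style resource quantities (e.g. "500m" for half a CPU core, "1Gi" for memory). That interpretation is an assumption, as is the parser sketched here:

```typescript
// Assumption: cpu* fields use Kubernetes-style quantities, where a trailing
// "m" denotes millicores ("500m" = 0.5 cores). Purely illustrative helper.
function parseCpuCores(quantity: string): number {
  return quantity.endsWith("m")
    ? Number(quantity.slice(0, -1)) / 1000
    : Number(quantity);
}
```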
DestinationIcebergResourceAllocationJobSpecific, DestinationIcebergResourceAllocationJobSpecificArgs
- jobType (string) - Enum describing the different types of jobs that the platform runs. Must be one of ["getspec", "checkconnection", "discoverschema", "sync", "resetconnection", "connection_updater", "replicate"].
- resourceRequirements (DestinationIcebergResourceAllocationJobSpecificResourceRequirements) - Optional resource requirements to run workers (blank for unbounded allocations).
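The jobType values above can be captured as a string-literal union with a type guard, so an invalid job type is rejected at compile time or at runtime. This helper is illustrative, not part of the SDK:

```typescript
// Job types enumerated in the spec above, as a const tuple plus type guard.
const JOB_TYPES = [
  "getspec", "checkconnection", "discoverschema", "sync",
  "resetconnection", "connection_updater", "replicate",
] as const;
type JobType = (typeof JOB_TYPES)[number];

// Runtime check that narrows an arbitrary string to JobType.
function isJobType(s: string): s is JobType {
  return (JOB_TYPES as readonly string[]).includes(s);
}
```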
DestinationIcebergResourceAllocationJobSpecificResourceRequirements, DestinationIcebergResourceAllocationJobSpecificResourceRequirementsArgs
- Cpu
Limit string - Cpu
Request string - Ephemeral
Storage stringLimit - Ephemeral
Storage stringRequest - Memory
Limit string - Memory
Request string
- Cpu
Limit string - Cpu
Request string - Ephemeral
Storage stringLimit - Ephemeral
Storage stringRequest - Memory
Limit string - Memory
Request string
- cpu
Limit String - cpu
Request String - ephemeral
Storage StringLimit - ephemeral
Storage StringRequest - memory
Limit String - memory
Request String
- cpu
Limit string - cpu
Request string - ephemeral
Storage stringLimit - ephemeral
Storage stringRequest - memory
Limit string - memory
Request string
- cpu_
limit str - cpu_
request str - ephemeral_
storage_ strlimit - ephemeral_
storage_ strrequest - memory_
limit str - memory_
request str
- cpu
Limit String - cpu
Request String - ephemeral
Storage StringLimit - ephemeral
Storage StringRequest - memory
Limit String - memory
Request String
Import
$ pulumi import airbyte:index/destinationIceberg:DestinationIceberg my_airbyte_destination_iceberg ""
To learn more about importing existing cloud resources, see Importing resources.
Package Details
- Repository
- airbyte airbytehq/terraform-provider-airbyte
- License
- Notes
- This Pulumi package is based on the airbyte Terraform Provider.