
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.bigquery/v2.Routine


Creates a new routine in the dataset. Auto-naming is currently not supported for this resource.
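
Before the full reference below, here is a minimal, realistic TypeScript sketch: a SQL scalar UDF that adds one to an INT64 input. The project, dataset, and routine IDs are placeholders assumed to already exist, and the enum member names follow the enum listings later on this page.

import * as google_native from "@pulumi/google-native";

// A minimal SQL scalar UDF: add_one(x INT64) -> INT64.
// "my-project" and "my_dataset" are placeholder IDs assumed to already exist.
const addOne = new google_native.bigquery.v2.Routine("addOne", {
    datasetId: "my_dataset",
    project: "my-project",
    routineReference: {
        datasetId: "my_dataset",
        project: "my-project",
        routineId: "add_one",
    },
    routineType: google_native.bigquery.v2.RoutineRoutineType.ScalarFunction,
    language: google_native.bigquery.v2.RoutineLanguage.Sql,
    arguments: [{
        name: "x",
        dataType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Int64 },
    }],
    returnType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Int64 },
    definitionBody: "x + 1", // only the expression inside the AS clause, per definition_body
});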

Create Routine Resource

Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

Constructor syntax

new Routine(name: string, args: RoutineArgs, opts?: CustomResourceOptions);
@overload
def Routine(resource_name: str,
            args: RoutineArgs,
            opts: Optional[ResourceOptions] = None)

@overload
def Routine(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            routine_reference: Optional[RoutineReferenceArgs] = None,
            routine_type: Optional[RoutineRoutineType] = None,
            dataset_id: Optional[str] = None,
            definition_body: Optional[str] = None,
            description: Optional[str] = None,
            determinism_level: Optional[RoutineDeterminismLevel] = None,
            imported_libraries: Optional[Sequence[str]] = None,
            language: Optional[RoutineLanguage] = None,
            project: Optional[str] = None,
            remote_function_options: Optional[RemoteFunctionOptionsArgs] = None,
            return_table_type: Optional[StandardSqlTableTypeArgs] = None,
            return_type: Optional[StandardSqlDataTypeArgs] = None,
            arguments: Optional[Sequence[ArgumentArgs]] = None,
            data_governance_type: Optional[RoutineDataGovernanceType] = None,
            security_mode: Optional[RoutineSecurityMode] = None,
            spark_options: Optional[SparkOptionsArgs] = None,
            strict_mode: Optional[bool] = None)
func NewRoutine(ctx *Context, name string, args RoutineArgs, opts ...ResourceOption) (*Routine, error)
public Routine(string name, RoutineArgs args, CustomResourceOptions? opts = null)
public Routine(String name, RoutineArgs args)
public Routine(String name, RoutineArgs args, CustomResourceOptions options)
type: google-native:bigquery/v2:Routine
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.

Parameters

name This property is required. string
The unique name of the resource.
args This property is required. RoutineArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
resource_name This property is required. str
The unique name of the resource.
args This property is required. RoutineArgs
The arguments to resource properties.
opts ResourceOptions
Bag of options to control resource's behavior.
ctx Context
Context object for the current deployment.
name This property is required. string
The unique name of the resource.
args This property is required. RoutineArgs
The arguments to resource properties.
opts ResourceOption
Bag of options to control resource's behavior.
name This property is required. string
The unique name of the resource.
args This property is required. RoutineArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
name This property is required. String
The unique name of the resource.
args This property is required. RoutineArgs
The arguments to resource properties.
options CustomResourceOptions
Bag of options to control resource's behavior.

Constructor example

The following reference example uses placeholder values for all input properties.

var routineResource = new GoogleNative.BigQuery.V2.Routine("routineResource", new()
{
    RoutineReference = new GoogleNative.BigQuery.V2.Inputs.RoutineReferenceArgs
    {
        DatasetId = "string",
        Project = "string",
        RoutineId = "string",
    },
    RoutineType = GoogleNative.BigQuery.V2.RoutineRoutineType.RoutineTypeUnspecified,
    DatasetId = "string",
    DefinitionBody = "string",
    Description = "string",
    DeterminismLevel = GoogleNative.BigQuery.V2.RoutineDeterminismLevel.DeterminismLevelUnspecified,
    ImportedLibraries = new[]
    {
        "string",
    },
    Language = GoogleNative.BigQuery.V2.RoutineLanguage.LanguageUnspecified,
    Project = "string",
    RemoteFunctionOptions = new GoogleNative.BigQuery.V2.Inputs.RemoteFunctionOptionsArgs
    {
        Connection = "string",
        Endpoint = "string",
        MaxBatchingRows = "string",
        UserDefinedContext = 
        {
            { "string", "string" },
        },
    },
    ReturnTableType = new GoogleNative.BigQuery.V2.Inputs.StandardSqlTableTypeArgs
    {
        Columns = new[]
        {
            new GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldArgs
            {
                Name = "string",
                Type = new GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeArgs
                {
                    TypeKind = GoogleNative.BigQuery.V2.StandardSqlDataTypeTypeKind.TypeKindUnspecified,
                    ArrayElementType = standardSqlDataType,
                    RangeElementType = standardSqlDataType,
                    StructType = new GoogleNative.BigQuery.V2.Inputs.StandardSqlStructTypeArgs
                    {
                        Fields = new[]
                        {
                            standardSqlField,
                        },
                    },
                },
            },
        },
    },
    ReturnType = standardSqlDataType,
    Arguments = new[]
    {
        new GoogleNative.BigQuery.V2.Inputs.ArgumentArgs
        {
            ArgumentKind = GoogleNative.BigQuery.V2.ArgumentArgumentKind.ArgumentKindUnspecified,
            DataType = standardSqlDataType,
            IsAggregate = false,
            Mode = GoogleNative.BigQuery.V2.ArgumentMode.ModeUnspecified,
            Name = "string",
        },
    },
    DataGovernanceType = GoogleNative.BigQuery.V2.RoutineDataGovernanceType.DataGovernanceTypeUnspecified,
    SecurityMode = GoogleNative.BigQuery.V2.RoutineSecurityMode.SecurityModeUnspecified,
    SparkOptions = new GoogleNative.BigQuery.V2.Inputs.SparkOptionsArgs
    {
        ArchiveUris = new[]
        {
            "string",
        },
        Connection = "string",
        ContainerImage = "string",
        FileUris = new[]
        {
            "string",
        },
        JarUris = new[]
        {
            "string",
        },
        MainClass = "string",
        MainFileUri = "string",
        Properties = 
        {
            { "string", "string" },
        },
        PyFileUris = new[]
        {
            "string",
        },
        RuntimeVersion = "string",
    },
    StrictMode = false,
});
example, err := bigquery.NewRoutine(ctx, "routineResource", &bigquery.RoutineArgs{
	RoutineReference: &bigquery.RoutineReferenceArgs{
		DatasetId: pulumi.String("string"),
		Project:   pulumi.String("string"),
		RoutineId: pulumi.String("string"),
	},
	RoutineType:      bigquery.RoutineRoutineTypeRoutineTypeUnspecified,
	DatasetId:        pulumi.String("string"),
	DefinitionBody:   pulumi.String("string"),
	Description:      pulumi.String("string"),
	DeterminismLevel: bigquery.RoutineDeterminismLevelDeterminismLevelUnspecified,
	ImportedLibraries: pulumi.StringArray{
		pulumi.String("string"),
	},
	Language: bigquery.RoutineLanguageLanguageUnspecified,
	Project:  pulumi.String("string"),
	RemoteFunctionOptions: &bigquery.RemoteFunctionOptionsArgs{
		Connection:      pulumi.String("string"),
		Endpoint:        pulumi.String("string"),
		MaxBatchingRows: pulumi.String("string"),
		UserDefinedContext: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
	},
	ReturnTableType: &bigquery.StandardSqlTableTypeArgs{
		Columns: bigquery.StandardSqlFieldArray{
			&bigquery.StandardSqlFieldArgs{
				Name: pulumi.String("string"),
				Type: &bigquery.StandardSqlDataTypeArgs{
					TypeKind:         bigquery.StandardSqlDataTypeTypeKindTypeKindUnspecified,
					ArrayElementType: pulumi.Any(standardSqlDataType),
					RangeElementType: pulumi.Any(standardSqlDataType),
					StructType: &bigquery.StandardSqlStructTypeArgs{
						Fields: bigquery.StandardSqlFieldArray{
							standardSqlField,
						},
					},
				},
			},
		},
	},
	ReturnType: pulumi.Any(standardSqlDataType),
	Arguments: bigquery.ArgumentArray{
		&bigquery.ArgumentArgs{
			ArgumentKind: bigquery.ArgumentArgumentKindArgumentKindUnspecified,
			DataType:     pulumi.Any(standardSqlDataType),
			IsAggregate:  pulumi.Bool(false),
			Mode:         bigquery.ArgumentModeModeUnspecified,
			Name:         pulumi.String("string"),
		},
	},
	DataGovernanceType: bigquery.RoutineDataGovernanceTypeDataGovernanceTypeUnspecified,
	SecurityMode:       bigquery.RoutineSecurityModeSecurityModeUnspecified,
	SparkOptions: &bigquery.SparkOptionsArgs{
		ArchiveUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		Connection:     pulumi.String("string"),
		ContainerImage: pulumi.String("string"),
		FileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		JarUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		MainClass:   pulumi.String("string"),
		MainFileUri: pulumi.String("string"),
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		PyFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		RuntimeVersion: pulumi.String("string"),
	},
	StrictMode: pulumi.Bool(false),
})
var routineResource = new Routine("routineResource", RoutineArgs.builder()
    .routineReference(RoutineReferenceArgs.builder()
        .datasetId("string")
        .project("string")
        .routineId("string")
        .build())
    .routineType("ROUTINE_TYPE_UNSPECIFIED")
    .datasetId("string")
    .definitionBody("string")
    .description("string")
    .determinismLevel("DETERMINISM_LEVEL_UNSPECIFIED")
    .importedLibraries("string")
    .language("LANGUAGE_UNSPECIFIED")
    .project("string")
    .remoteFunctionOptions(RemoteFunctionOptionsArgs.builder()
        .connection("string")
        .endpoint("string")
        .maxBatchingRows("string")
        .userDefinedContext(Map.of("string", "string"))
        .build())
    .returnTableType(StandardSqlTableTypeArgs.builder()
        .columns(StandardSqlFieldArgs.builder()
            .name("string")
            .type(StandardSqlDataTypeArgs.builder()
                .typeKind("TYPE_KIND_UNSPECIFIED")
                .arrayElementType(standardSqlDataType)
                .rangeElementType(standardSqlDataType)
                .structType(StandardSqlStructTypeArgs.builder()
                    .fields(standardSqlField)
                    .build())
                .build())
            .build())
        .build())
    .returnType(standardSqlDataType)
    .arguments(ArgumentArgs.builder()
        .argumentKind("ARGUMENT_KIND_UNSPECIFIED")
        .dataType(standardSqlDataType)
        .isAggregate(false)
        .mode("MODE_UNSPECIFIED")
        .name("string")
        .build())
    .dataGovernanceType("DATA_GOVERNANCE_TYPE_UNSPECIFIED")
    .securityMode("SECURITY_MODE_UNSPECIFIED")
    .sparkOptions(SparkOptionsArgs.builder()
        .archiveUris("string")
        .connection("string")
        .containerImage("string")
        .fileUris("string")
        .jarUris("string")
        .mainClass("string")
        .mainFileUri("string")
        .properties(Map.of("string", "string"))
        .pyFileUris("string")
        .runtimeVersion("string")
        .build())
    .strictMode(false)
    .build());
routine_resource = google_native.bigquery.v2.Routine("routineResource",
    routine_reference={
        "dataset_id": "string",
        "project": "string",
        "routine_id": "string",
    },
    routine_type=google_native.bigquery.v2.RoutineRoutineType.ROUTINE_TYPE_UNSPECIFIED,
    dataset_id="string",
    definition_body="string",
    description="string",
    determinism_level=google_native.bigquery.v2.RoutineDeterminismLevel.DETERMINISM_LEVEL_UNSPECIFIED,
    imported_libraries=["string"],
    language=google_native.bigquery.v2.RoutineLanguage.LANGUAGE_UNSPECIFIED,
    project="string",
    remote_function_options={
        "connection": "string",
        "endpoint": "string",
        "max_batching_rows": "string",
        "user_defined_context": {
            "string": "string",
        },
    },
    return_table_type={
        "columns": [{
            "name": "string",
            "type": {
                "type_kind": google_native.bigquery.v2.StandardSqlDataTypeTypeKind.TYPE_KIND_UNSPECIFIED,
                "array_element_type": standard_sql_data_type,
                "range_element_type": standard_sql_data_type,
                "struct_type": {
                    "fields": [standard_sql_field],
                },
            },
        }],
    },
    return_type=standard_sql_data_type,
    arguments=[{
        "argument_kind": google_native.bigquery.v2.ArgumentArgumentKind.ARGUMENT_KIND_UNSPECIFIED,
        "data_type": standard_sql_data_type,
        "is_aggregate": False,
        "mode": google_native.bigquery.v2.ArgumentMode.MODE_UNSPECIFIED,
        "name": "string",
    }],
    data_governance_type=google_native.bigquery.v2.RoutineDataGovernanceType.DATA_GOVERNANCE_TYPE_UNSPECIFIED,
    security_mode=google_native.bigquery.v2.RoutineSecurityMode.SECURITY_MODE_UNSPECIFIED,
    spark_options={
        "archive_uris": ["string"],
        "connection": "string",
        "container_image": "string",
        "file_uris": ["string"],
        "jar_uris": ["string"],
        "main_class": "string",
        "main_file_uri": "string",
        "properties": {
            "string": "string",
        },
        "py_file_uris": ["string"],
        "runtime_version": "string",
    },
    strict_mode=False)
const routineResource = new google_native.bigquery.v2.Routine("routineResource", {
    routineReference: {
        datasetId: "string",
        project: "string",
        routineId: "string",
    },
    routineType: google_native.bigquery.v2.RoutineRoutineType.RoutineTypeUnspecified,
    datasetId: "string",
    definitionBody: "string",
    description: "string",
    determinismLevel: google_native.bigquery.v2.RoutineDeterminismLevel.DeterminismLevelUnspecified,
    importedLibraries: ["string"],
    language: google_native.bigquery.v2.RoutineLanguage.LanguageUnspecified,
    project: "string",
    remoteFunctionOptions: {
        connection: "string",
        endpoint: "string",
        maxBatchingRows: "string",
        userDefinedContext: {
            string: "string",
        },
    },
    returnTableType: {
        columns: [{
            name: "string",
            type: {
                typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.TypeKindUnspecified,
                arrayElementType: standardSqlDataType,
                rangeElementType: standardSqlDataType,
                structType: {
                    fields: [standardSqlField],
                },
            },
        }],
    },
    returnType: standardSqlDataType,
    arguments: [{
        argumentKind: google_native.bigquery.v2.ArgumentArgumentKind.ArgumentKindUnspecified,
        dataType: standardSqlDataType,
        isAggregate: false,
        mode: google_native.bigquery.v2.ArgumentMode.ModeUnspecified,
        name: "string",
    }],
    dataGovernanceType: google_native.bigquery.v2.RoutineDataGovernanceType.DataGovernanceTypeUnspecified,
    securityMode: google_native.bigquery.v2.RoutineSecurityMode.SecurityModeUnspecified,
    sparkOptions: {
        archiveUris: ["string"],
        connection: "string",
        containerImage: "string",
        fileUris: ["string"],
        jarUris: ["string"],
        mainClass: "string",
        mainFileUri: "string",
        properties: {
            string: "string",
        },
        pyFileUris: ["string"],
        runtimeVersion: "string",
    },
    strictMode: false,
});
type: google-native:bigquery/v2:Routine
properties:
    arguments:
        - argumentKind: ARGUMENT_KIND_UNSPECIFIED
          dataType: ${standardSqlDataType}
          isAggregate: false
          mode: MODE_UNSPECIFIED
          name: string
    dataGovernanceType: DATA_GOVERNANCE_TYPE_UNSPECIFIED
    datasetId: string
    definitionBody: string
    description: string
    determinismLevel: DETERMINISM_LEVEL_UNSPECIFIED
    importedLibraries:
        - string
    language: LANGUAGE_UNSPECIFIED
    project: string
    remoteFunctionOptions:
        connection: string
        endpoint: string
        maxBatchingRows: string
        userDefinedContext:
            string: string
    returnTableType:
        columns:
            - name: string
              type:
                arrayElementType: ${standardSqlDataType}
                rangeElementType: ${standardSqlDataType}
                structType:
                    fields:
                        - ${standardSqlField}
                typeKind: TYPE_KIND_UNSPECIFIED
    returnType: ${standardSqlDataType}
    routineReference:
        datasetId: string
        project: string
        routineId: string
    routineType: ROUTINE_TYPE_UNSPECIFIED
    securityMode: SECURITY_MODE_UNSPECIFIED
    sparkOptions:
        archiveUris:
            - string
        connection: string
        containerImage: string
        fileUris:
            - string
        jarUris:
            - string
        mainClass: string
        mainFileUri: string
        properties:
            string: string
        pyFileUris:
            - string
        runtimeVersion: string
    strictMode: false

Routine Resource Properties

To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

Inputs

In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.

The Routine resource accepts the following input properties:

DatasetId This property is required. Changes to this property will trigger replacement. string
DefinitionBody This property is required. string
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
RoutineReference This property is required. Pulumi.GoogleNative.BigQuery.V2.Inputs.RoutineReference
Reference describing the ID of this routine.
RoutineType This property is required. Pulumi.GoogleNative.BigQuery.V2.RoutineRoutineType
The type of routine.
Arguments List<Pulumi.GoogleNative.BigQuery.V2.Inputs.Argument>
Optional. The arguments of this routine, if any.
DataGovernanceType Pulumi.GoogleNative.BigQuery.V2.RoutineDataGovernanceType
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
Description string
Optional. The description of the routine, if defined.
DeterminismLevel Pulumi.GoogleNative.BigQuery.V2.RoutineDeterminismLevel
Optional. The determinism level of the JavaScript UDF, if defined.
ImportedLibraries List<string>
Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
Language Pulumi.GoogleNative.BigQuery.V2.RoutineLanguage
Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
Project Changes to this property will trigger replacement. string
RemoteFunctionOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.RemoteFunctionOptions
Optional. Remote function specific options.
ReturnTableType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlTableType
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
ReturnType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
SecurityMode Pulumi.GoogleNative.BigQuery.V2.RoutineSecurityMode
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
SparkOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.SparkOptions
Optional. Spark specific options.
StrictMode bool
Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created and updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
DatasetId This property is required. Changes to this property will trigger replacement. string
DefinitionBody This property is required. string
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
RoutineReference This property is required. RoutineReferenceArgs
Reference describing the ID of this routine.
RoutineType This property is required. RoutineRoutineType
The type of routine.
Arguments []ArgumentArgs
Optional. The arguments of this routine, if any.
DataGovernanceType RoutineDataGovernanceType
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
Description string
Optional. The description of the routine, if defined.
DeterminismLevel RoutineDeterminismLevel
Optional. The determinism level of the JavaScript UDF, if defined.
ImportedLibraries []string
Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
Language RoutineLanguage
Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
Project Changes to this property will trigger replacement. string
RemoteFunctionOptions RemoteFunctionOptionsArgs
Optional. Remote function specific options.
ReturnTableType StandardSqlTableTypeArgs
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
ReturnType StandardSqlDataTypeArgs
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
SecurityMode RoutineSecurityMode
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
SparkOptions SparkOptionsArgs
Optional. Spark specific options.
StrictMode bool
Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created and updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
datasetId This property is required. Changes to this property will trigger replacement. String
definitionBody This property is required. String
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
routineReference This property is required. RoutineReference
Reference describing the ID of this routine.
routineType This property is required. RoutineRoutineType
The type of routine.
arguments List<Argument>
Optional. The arguments of this routine, if any.
dataGovernanceType RoutineDataGovernanceType
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
description String
Optional. The description of the routine, if defined.
determinismLevel RoutineDeterminismLevel
Optional. The determinism level of the JavaScript UDF, if defined.
importedLibraries List<String>
Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
language RoutineLanguage
Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
project Changes to this property will trigger replacement. String
remoteFunctionOptions RemoteFunctionOptions
Optional. Remote function specific options.
returnTableType StandardSqlTableType
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
returnType StandardSqlDataType
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
securityMode RoutineSecurityMode
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
sparkOptions SparkOptions
Optional. Spark specific options.
strictMode Boolean
Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created and updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
datasetId This property is required. Changes to this property will trigger replacement. string
definitionBody This property is required. string
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
routineReference This property is required. RoutineReference
Reference describing the ID of this routine.
routineType This property is required. RoutineRoutineType
The type of routine.
arguments Argument[]
Optional. The arguments of this routine, if any.
dataGovernanceType RoutineDataGovernanceType
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
description string
Optional. The description of the routine, if defined.
determinismLevel RoutineDeterminismLevel
Optional. The determinism level of the JavaScript UDF, if defined.
importedLibraries string[]
Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
language RoutineLanguage
Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
project Changes to this property will trigger replacement. string
remoteFunctionOptions RemoteFunctionOptions
Optional. Remote function specific options.
returnTableType StandardSqlTableType
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
returnType StandardSqlDataType
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
securityMode RoutineSecurityMode
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
sparkOptions SparkOptions
Optional. Spark specific options.
strictMode boolean
Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created and updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
dataset_id This property is required. Changes to this property will trigger replacement. str
definition_body This property is required. str
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
routine_reference This property is required. RoutineReferenceArgs
Reference describing the ID of this routine.
routine_type This property is required. RoutineRoutineType
The type of routine.
arguments Sequence[ArgumentArgs]
Optional. The arguments of this routine, if any.
data_governance_type RoutineDataGovernanceType
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
description str
Optional. The description of the routine, if defined.
determinism_level RoutineDeterminismLevel
Optional. The determinism level of the JavaScript UDF, if defined.
imported_libraries Sequence[str]
Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
language RoutineLanguage
Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
project Changes to this property will trigger replacement. str
remote_function_options RemoteFunctionOptionsArgs
Optional. Remote function specific options.
return_table_type StandardSqlTableTypeArgs
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
return_type StandardSqlDataTypeArgs
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
security_mode RoutineSecurityMode
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
spark_options SparkOptionsArgs
Optional. Spark specific options.
strict_mode bool
Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created and updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
datasetId This property is required. Changes to this property will trigger replacement. String
definitionBody This property is required. String
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
routineReference This property is required. Property Map
Reference describing the ID of this routine.
routineType This property is required. "ROUTINE_TYPE_UNSPECIFIED" | "SCALAR_FUNCTION" | "PROCEDURE" | "TABLE_VALUED_FUNCTION" | "AGGREGATE_FUNCTION"
The type of routine.
arguments List<Property Map>
Optional. The arguments of this routine, if any.
dataGovernanceType "DATA_GOVERNANCE_TYPE_UNSPECIFIED" | "DATA_MASKING"
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
description String
Optional. The description of the routine, if defined.
determinismLevel "DETERMINISM_LEVEL_UNSPECIFIED" | "DETERMINISTIC" | "NOT_DETERMINISTIC"
Optional. The determinism level of the JavaScript UDF, if defined.
importedLibraries List<String>
Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
language "LANGUAGE_UNSPECIFIED" | "SQL" | "JAVASCRIPT" | "PYTHON" | "JAVA" | "SCALA"
Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
project Changes to this property will trigger replacement. String
remoteFunctionOptions Property Map
Optional. Remote function specific options.
returnTableType Property Map
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
returnType Property Map
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
securityMode "SECURITY_MODE_UNSPECIFIED" | "DEFINER" | "INVOKER"
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
sparkOptions Property Map
Optional. Spark specific options.
strictMode Boolean
Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created and updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
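
To make the definition_body rules above concrete, a hedged TypeScript sketch of a JAVASCRIPT UDF: the body is the raw JavaScript evaluated in the AS clause, return_type is required because the language is not SQL, and imported_libraries lists library files (the project, dataset, and GCS paths are placeholders).

import * as google_native from "@pulumi/google-native";

// Equivalent in spirit to:
//   CREATE FUNCTION multiply_inputs(x FLOAT64, y FLOAT64) RETURNS FLOAT64
//   LANGUAGE js AS 'return x * y;'
const jsUdf = new google_native.bigquery.v2.Routine("jsUdf", {
    datasetId: "my_dataset",                       // placeholder dataset
    project: "my-project",                         // placeholder project
    routineReference: {
        datasetId: "my_dataset",
        project: "my-project",
        routineId: "multiply_inputs",
    },
    routineType: google_native.bigquery.v2.RoutineRoutineType.ScalarFunction,
    language: google_native.bigquery.v2.RoutineLanguage.Javascript,
    determinismLevel: google_native.bigquery.v2.RoutineDeterminismLevel.Deterministic,
    importedLibraries: ["gs://my-bucket/lib.js"],  // placeholder library path
    arguments: [
        { name: "x", dataType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Float64 } },
        { name: "y", dataType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Float64 } },
    ],
    returnType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Float64 },
    definitionBody: "return x * y;",               // raw JS, per the definition_body notes above
});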

Outputs

All input properties are implicitly available as output properties. Additionally, the Routine resource produces the following output properties:

CreationTime string
The time when this routine was created, in milliseconds since the epoch.
Etag string
A hash of this resource.
Id string
The provider-assigned unique ID for this managed resource.
LastModifiedTime string
The time when this routine was last modified, in milliseconds since the epoch.
CreationTime string
The time when this routine was created, in milliseconds since the epoch.
Etag string
A hash of this resource.
Id string
The provider-assigned unique ID for this managed resource.
LastModifiedTime string
The time when this routine was last modified, in milliseconds since the epoch.
creationTime String
The time when this routine was created, in milliseconds since the epoch.
etag String
A hash of this resource.
id String
The provider-assigned unique ID for this managed resource.
lastModifiedTime String
The time when this routine was last modified, in milliseconds since the epoch.
creationTime string
The time when this routine was created, in milliseconds since the epoch.
etag string
A hash of this resource.
id string
The provider-assigned unique ID for this managed resource.
lastModifiedTime string
The time when this routine was last modified, in milliseconds since the epoch.
creation_time str
The time when this routine was created, in milliseconds since the epoch.
etag str
A hash of this resource.
id str
The provider-assigned unique ID for this managed resource.
last_modified_time str
The time when this routine was last modified, in milliseconds since the epoch.
creationTime String
The time when this routine was created, in milliseconds since the epoch.
etag String
A hash of this resource.
id String
The provider-assigned unique ID for this managed resource.
lastModifiedTime String
The time when this routine was last modified, in milliseconds since the epoch.
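
Output properties are read off the resource instance like any other Pulumi outputs; for example, reusing the routineResource from the TypeScript constructor example above:

// Export the server-generated metadata of the routine.
export const routineEtag = routineResource.etag;
export const routineCreationTime = routineResource.creationTime;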

Supporting Types

Argument, ArgumentArgs

ArgumentKind Pulumi.GoogleNative.BigQuery.V2.ArgumentArgumentKind
Optional. Defaults to FIXED_TYPE.
DataType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
Required unless argument_kind = ANY_TYPE.
IsAggregate bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
Mode Pulumi.GoogleNative.BigQuery.V2.ArgumentMode
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
Name string
Optional. The name of this argument. Can be absent for function return argument.
ArgumentKind ArgumentArgumentKind
Optional. Defaults to FIXED_TYPE.
DataType StandardSqlDataType
Required unless argument_kind = ANY_TYPE.
IsAggregate bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
Mode ArgumentMode
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
Name string
Optional. The name of this argument. Can be absent for function return argument.
argumentKind ArgumentArgumentKind
Optional. Defaults to FIXED_TYPE.
dataType StandardSqlDataType
Required unless argument_kind = ANY_TYPE.
isAggregate Boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
mode ArgumentMode
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
name String
Optional. The name of this argument. Can be absent for function return argument.
argumentKind ArgumentArgumentKind
Optional. Defaults to FIXED_TYPE.
dataType StandardSqlDataType
Required unless argument_kind = ANY_TYPE.
isAggregate boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
mode ArgumentMode
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
name string
Optional. The name of this argument. Can be absent for function return argument.
argument_kind ArgumentArgumentKind
Optional. Defaults to FIXED_TYPE.
data_type StandardSqlDataType
Required unless argument_kind = ANY_TYPE.
is_aggregate bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
mode ArgumentMode
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
name str
Optional. The name of this argument. Can be absent for function return argument.
argumentKind "ARGUMENT_KIND_UNSPECIFIED" | "FIXED_TYPE" | "ANY_TYPE"
Optional. Defaults to FIXED_TYPE.
dataType Property Map
Required unless argument_kind = ANY_TYPE.
isAggregate Boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
mode "MODE_UNSPECIFIED" | "IN" | "OUT" | "INOUT"
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
name String
Optional. The name of this argument. Can be absent for function return argument.
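
As the data_type notes above imply, dataType may be omitted only when argument_kind is ANY_TYPE. A hedged TypeScript sketch contrasting the two kinds (the names are illustrative):

import * as google_native from "@pulumi/google-native";

// FIXED_TYPE is the default kind, so dataType must be supplied.
const fixedArg = {
    name: "x",
    dataType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Int64 },
};

// ANY_TYPE accepts any non-table type, and dataType may be omitted.
const anyArg = {
    name: "y",
    argumentKind: google_native.bigquery.v2.ArgumentArgumentKind.AnyType,
};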

ArgumentArgumentKind, ArgumentArgumentKindArgs

ArgumentKindUnspecified
ARGUMENT_KIND_UNSPECIFIED. Default value.
FixedType
FIXED_TYPE. The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
AnyType
ANY_TYPE. The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
ArgumentArgumentKindArgumentKindUnspecified
ARGUMENT_KIND_UNSPECIFIED. Default value.
ArgumentArgumentKindFixedType
FIXED_TYPE. The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
ArgumentArgumentKindAnyType
ANY_TYPE. The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
ArgumentKindUnspecified
ARGUMENT_KIND_UNSPECIFIED. Default value.
FixedType
FIXED_TYPE. The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
AnyType
ANY_TYPE. The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
ArgumentKindUnspecified
ARGUMENT_KIND_UNSPECIFIED. Default value.
FixedType
FIXED_TYPE. The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
AnyType
ANY_TYPE. The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
ARGUMENT_KIND_UNSPECIFIED
ARGUMENT_KIND_UNSPECIFIED. Default value.
FIXED_TYPE
FIXED_TYPE. The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
ANY_TYPE
ANY_TYPE. The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
"ARGUMENT_KIND_UNSPECIFIED"
ARGUMENT_KIND_UNSPECIFIED. Default value.
"FIXED_TYPE"
FIXED_TYPE. The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
"ANY_TYPE"
ANY_TYPE. The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE

ArgumentMode, ArgumentModeArgs

ModeUnspecified
MODE_UNSPECIFIED. Default value.
In
IN. The argument is input-only.
Out
OUT. The argument is output-only.
Inout
INOUT. The argument is both an input and an output.
ArgumentModeModeUnspecified
MODE_UNSPECIFIED. Default value.
ArgumentModeIn
IN. The argument is input-only.
ArgumentModeOut
OUT. The argument is output-only.
ArgumentModeInout
INOUT. The argument is both an input and an output.
ModeUnspecified
MODE_UNSPECIFIED. Default value.
In
IN. The argument is input-only.
Out
OUT. The argument is output-only.
Inout
INOUT. The argument is both an input and an output.
ModeUnspecified
MODE_UNSPECIFIED. Default value.
In
IN. The argument is input-only.
Out
OUT. The argument is output-only.
Inout
INOUT. The argument is both an input and an output.
MODE_UNSPECIFIED
MODE_UNSPECIFIED. Default value.
IN_
IN. The argument is input-only.
OUT
OUT. The argument is output-only.
INOUT
INOUT. The argument is both an input and an output.
"MODE_UNSPECIFIED"
MODE_UNSPECIFIED. Default value.
"IN"
IN. The argument is input-only.
"OUT"
OUT. The argument is output-only.
"INOUT"
INOUT. The argument is both an input and an output.
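
Argument mode applies only to procedures; a short hedged sketch of an output parameter (the names are illustrative):

import * as google_native from "@pulumi/google-native";

// An OUT parameter for a PROCEDURE routine; mode may not be set on functions.
const outParam = {
    name: "result",
    mode: google_native.bigquery.v2.ArgumentMode.Out,
    dataType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Int64 },
};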

ArgumentResponse, ArgumentResponseArgs

ArgumentKind This property is required. string
Optional. Defaults to FIXED_TYPE.
DataType This property is required. Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
Required unless argument_kind = ANY_TYPE.
IsAggregate This property is required. bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
Mode This property is required. string
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
Name This property is required. string
Optional. The name of this argument. Can be absent for function return argument.
ArgumentKind This property is required. string
Optional. Defaults to FIXED_TYPE.
DataType This property is required. StandardSqlDataTypeResponse
Required unless argument_kind = ANY_TYPE.
IsAggregate This property is required. bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
Mode This property is required. string
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
Name This property is required. string
Optional. The name of this argument. Can be absent for function return argument.
argumentKind This property is required. String
Optional. Defaults to FIXED_TYPE.
dataType This property is required. StandardSqlDataTypeResponse
Required unless argument_kind = ANY_TYPE.
isAggregate This property is required. Boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
mode This property is required. String
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
name This property is required. String
Optional. The name of this argument. Can be absent for function return argument.
argumentKind This property is required. string
Optional. Defaults to FIXED_TYPE.
dataType This property is required. StandardSqlDataTypeResponse
Required unless argument_kind = ANY_TYPE.
isAggregate This property is required. boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
mode This property is required. string
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
name This property is required. string
Optional. The name of this argument. Can be absent for function return argument.
argument_kind This property is required. str
Optional. Defaults to FIXED_TYPE.
data_type This property is required. StandardSqlDataTypeResponse
Required unless argument_kind = ANY_TYPE.
is_aggregate This property is required. bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
mode This property is required. str
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
name This property is required. str
Optional. The name of this argument. Can be absent for function return argument.
argumentKind This property is required. String
Optional. Defaults to FIXED_TYPE.
dataType This property is required. Property Map
Required unless argument_kind = ANY_TYPE.
isAggregate This property is required. Boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting that clause.
mode This property is required. String
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
name This property is required. String
Optional. The name of this argument. Can be absent for function return argument.

RemoteFunctionOptions, RemoteFunctionOptionsArgs

Connection string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
Endpoint string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
MaxBatchingRows string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
UserDefinedContext Dictionary<string, string>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
Connection string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
Endpoint string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
MaxBatchingRows string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
UserDefinedContext map[string]string
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
connection String
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
endpoint String
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
maxBatchingRows String
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
userDefinedContext Map<String,String>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
connection string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
endpoint string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
maxBatchingRows string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
userDefinedContext {[key: string]: string}
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
connection str
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
endpoint str
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
max_batching_rows str
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
user_defined_context Mapping[str, str]
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
connection String
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
endpoint String
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
maxBatchingRows String
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
userDefinedContext Map<String>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
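To show where these options attach, here is a hedged sketch of a remote function, assuming the @pulumi/google-native TypeScript SDK and a pre-existing connection and Cloud Function endpoint (all names are placeholders). The SQL implementation lives behind the remote endpoint, so the sketch carries no definitionBody.

import * as google_native from "@pulumi/google-native";

const remoteAdd = new google_native.bigquery.v2.Routine("remoteAdd", {
    project: "my-project",
    datasetId: "analytics",
    routineReference: { project: "my-project", datasetId: "analytics", routineId: "remote_add" },
    routineType: "SCALAR_FUNCTION",
    language: "SQL",
    arguments: [
        { name: "x", dataType: { typeKind: "INT64" } },
        { name: "y", dataType: { typeKind: "INT64" } },
    ],
    returnType: { typeKind: "INT64" },
    remoteFunctionOptions: {
        connection: "projects/my-project/locations/us-east1/connections/my-conn",
        endpoint: "https://us-east1-my_gcf_project.cloudfunctions.net/remote_add",
        maxBatchingRows: "50",               // int64 encoded as a string
        userDefinedContext: { mode: "add" }, // sent with every batched request
    },
});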

RemoteFunctionOptionsResponse
, RemoteFunctionOptionsResponseArgs

Connection This property is required. string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
Endpoint This property is required. string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
MaxBatchingRows This property is required. string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
UserDefinedContext This property is required. Dictionary<string, string>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
Connection This property is required. string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
Endpoint This property is required. string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
MaxBatchingRows This property is required. string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
UserDefinedContext This property is required. map[string]string
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
connection This property is required. String
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
endpoint This property is required. String
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
maxBatchingRows This property is required. String
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
userDefinedContext This property is required. Map<String,String>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
connection This property is required. string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
endpoint This property is required. string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
maxBatchingRows This property is required. string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
userDefinedContext This property is required. {[key: string]: string}
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
connection This property is required. str
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
endpoint This property is required. str
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
max_batching_rows This property is required. str
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
user_defined_context This property is required. Mapping[str, str]
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
connection This property is required. String
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
endpoint This property is required. String
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
maxBatchingRows This property is required. String
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
userDefinedContext This property is required. Map<String>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
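The Response variant is what the service reports back; on the resource it surfaces as an output. Assuming the remoteAdd sketch above, the resolved endpoint could be exported like this:

// Assuming the `remoteAdd` resource from the previous sketch:
export const remoteEndpoint = remoteAdd.remoteFunctionOptions.apply(o => o?.endpoint);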

RoutineDataGovernanceType
, RoutineDataGovernanceTypeArgs

DataGovernanceTypeUnspecified
DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
DataMasking
DATA_MASKING: The data governance type is data masking.
RoutineDataGovernanceTypeDataGovernanceTypeUnspecified
DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
RoutineDataGovernanceTypeDataMasking
DATA_MASKING: The data governance type is data masking.
DataGovernanceTypeUnspecified
DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
DataMasking
DATA_MASKING: The data governance type is data masking.
DataGovernanceTypeUnspecified
DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
DataMasking
DATA_MASKING: The data governance type is data masking.
DATA_GOVERNANCE_TYPE_UNSPECIFIED
DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
DATA_MASKING
DATA_MASKING: The data governance type is data masking.
"DATA_GOVERNANCE_TYPE_UNSPECIFIED"
DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
"DATA_MASKING"
DATA_MASKING: The data governance type is data masking.
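For orientation, a sketch of a data-masking routine in TypeScript, assuming the @pulumi/google-native SDK; the names and the masking expression are illustrative. Such routines are attached to columns through data policies rather than invoked directly.

import * as google_native from "@pulumi/google-native";

const maskSsn = new google_native.bigquery.v2.Routine("maskSsn", {
    project: "my-project",
    datasetId: "governance",
    routineReference: { project: "my-project", datasetId: "governance", routineId: "mask_ssn" },
    routineType: "SCALAR_FUNCTION",
    language: "SQL",
    dataGovernanceType: "DATA_MASKING",
    arguments: [{ name: "ssn", dataType: { typeKind: "STRING" } }],
    definitionBody: "CONCAT('XXX-XX-', SUBSTR(ssn, -4))",  // keep only the last four digits
    returnType: { typeKind: "STRING" },
});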

RoutineDeterminismLevel
, RoutineDeterminismLevelArgs

DeterminismLevelUnspecified
DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
Deterministic
DETERMINISTIC: The UDF is deterministic, meaning that two function calls with the same inputs always produce the same result, even across two query runs.
NotDeterministic
NOT_DETERMINISTIC: The UDF is not deterministic.
RoutineDeterminismLevelDeterminismLevelUnspecified
DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
RoutineDeterminismLevelDeterministic
DETERMINISTIC: The UDF is deterministic, meaning that two function calls with the same inputs always produce the same result, even across two query runs.
RoutineDeterminismLevelNotDeterministic
NOT_DETERMINISTIC: The UDF is not deterministic.
DeterminismLevelUnspecified
DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
Deterministic
DETERMINISTIC: The UDF is deterministic, meaning that two function calls with the same inputs always produce the same result, even across two query runs.
NotDeterministic
NOT_DETERMINISTIC: The UDF is not deterministic.
DeterminismLevelUnspecified
DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
Deterministic
DETERMINISTIC: The UDF is deterministic, meaning that two function calls with the same inputs always produce the same result, even across two query runs.
NotDeterministic
NOT_DETERMINISTIC: The UDF is not deterministic.
DETERMINISM_LEVEL_UNSPECIFIED
DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
DETERMINISTIC
DETERMINISTIC: The UDF is deterministic, meaning that two function calls with the same inputs always produce the same result, even across two query runs.
NOT_DETERMINISTIC
NOT_DETERMINISTIC: The UDF is not deterministic.
"DETERMINISM_LEVEL_UNSPECIFIED"
DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
"DETERMINISTIC"
DETERMINISTIC: The UDF is deterministic, meaning that two function calls with the same inputs always produce the same result, even across two query runs.
"NOT_DETERMINISTIC"
NOT_DETERMINISTIC: The UDF is not deterministic.
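Determinism mainly matters for result caching. A minimal sketch, assuming the @pulumi/google-native TypeScript SDK and illustrative names, that marks a JavaScript UDF drawing random numbers as NOT_DETERMINISTIC:

import * as google_native from "@pulumi/google-native";

const jitter = new google_native.bigquery.v2.Routine("jitter", {
    project: "my-project",
    datasetId: "analytics",
    routineReference: { project: "my-project", datasetId: "analytics", routineId: "jitter" },
    routineType: "SCALAR_FUNCTION",
    language: "JAVASCRIPT",
    determinismLevel: "NOT_DETERMINISTIC",  // results must not be cached
    arguments: [{ name: "x", dataType: { typeKind: "FLOAT64" } }],
    definitionBody: "return x + Math.random();",
    returnType: { typeKind: "FLOAT64" },
});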

RoutineLanguage
, RoutineLanguageArgs

LanguageUnspecified
LANGUAGE_UNSPECIFIED: Default value.
Sql
SQL: SQL language.
Javascript
JAVASCRIPT: JavaScript language.
Python
PYTHON: Python language.
Java
JAVA: Java language.
Scala
SCALA: Scala language.
RoutineLanguageLanguageUnspecified
LANGUAGE_UNSPECIFIED: Default value.
RoutineLanguageSql
SQL: SQL language.
RoutineLanguageJavascript
JAVASCRIPT: JavaScript language.
RoutineLanguagePython
PYTHON: Python language.
RoutineLanguageJava
JAVA: Java language.
RoutineLanguageScala
SCALA: Scala language.
LanguageUnspecified
LANGUAGE_UNSPECIFIED: Default value.
Sql
SQL: SQL language.
Javascript
JAVASCRIPT: JavaScript language.
Python
PYTHON: Python language.
Java
JAVA: Java language.
Scala
SCALA: Scala language.
LanguageUnspecified
LANGUAGE_UNSPECIFIED: Default value.
Sql
SQL: SQL language.
Javascript
JAVASCRIPT: JavaScript language.
Python
PYTHON: Python language.
Java
JAVA: Java language.
Scala
SCALA: Scala language.
LANGUAGE_UNSPECIFIED
LANGUAGE_UNSPECIFIED: Default value.
SQL
SQL: SQL language.
JAVASCRIPT
JAVASCRIPT: JavaScript language.
PYTHON
PYTHON: Python language.
JAVA
JAVA: Java language.
SCALA
SCALA: Scala language.
"LANGUAGE_UNSPECIFIED"
LANGUAGE_UNSPECIFIED: Default value.
"SQL"
SQL: SQL language.
"JAVASCRIPT"
JAVASCRIPT: JavaScript language.
"PYTHON"
PYTHON: Python language.
"JAVA"
JAVA: Java language.
"SCALA"
SCALA: Scala language.
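Non-SQL languages pair with other properties: importedLibraries for JAVASCRIPT, sparkOptions for PYTHON, JAVA, and SCALA. A sketch of a JavaScript routine pulling a helper from Cloud Storage, with made-up bucket and function names:

import * as google_native from "@pulumi/google-native";

const slugify = new google_native.bigquery.v2.Routine("slugify", {
    project: "my-project",
    datasetId: "analytics",
    routineReference: { project: "my-project", datasetId: "analytics", routineId: "slugify" },
    routineType: "SCALAR_FUNCTION",
    language: "JAVASCRIPT",
    importedLibraries: ["gs://my-bucket/lib/slug.js"],  // assumed helper library
    arguments: [{ name: "s", dataType: { typeKind: "STRING" } }],
    definitionBody: "return slug(s);",  // slug() is assumed to come from the imported library
    returnType: { typeKind: "STRING" },
});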

RoutineReference
, RoutineReferenceArgs

DatasetId This property is required. string
The ID of the dataset containing this routine.
Project This property is required. string
The ID of the project containing this routine.
RoutineId This property is required. string
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
DatasetId This property is required. string
The ID of the dataset containing this routine.
Project This property is required. string
The ID of the project containing this routine.
RoutineId This property is required. string
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
datasetId This property is required. String
The ID of the dataset containing this routine.
project This property is required. String
The ID of the project containing this routine.
routineId This property is required. String
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
datasetId This property is required. string
The ID of the dataset containing this routine.
project This property is required. string
The ID of the project containing this routine.
routineId This property is required. string
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
dataset_id This property is required. str
The ID of the dataset containing this routine.
project This property is required. str
The ID of the project containing this routine.
routine_id This property is required. str
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
datasetId This property is required. String
The ID of the dataset containing this routine.
project This property is required. String
The ID of the project containing this routine.
routineId This property is required. String
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
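The three parts together form the routine's fully qualified name, which is how the routine is invoked from SQL. A short sketch, assuming the slugify resource from the previous example:

import * as pulumi from "@pulumi/pulumi";

// Compose `project.dataset.routine` from the reference outputs.
const ref = slugify.routineReference;
export const routineFqn = pulumi.interpolate`${ref.project}.${ref.datasetId}.${ref.routineId}`;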

RoutineReferenceResponse
, RoutineReferenceResponseArgs

DatasetId This property is required. string
The ID of the dataset containing this routine.
Project This property is required. string
The ID of the project containing this routine.
RoutineId This property is required. string
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
DatasetId This property is required. string
The ID of the dataset containing this routine.
Project This property is required. string
The ID of the project containing this routine.
RoutineId This property is required. string
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
datasetId This property is required. String
The ID of the dataset containing this routine.
project This property is required. String
The ID of the project containing this routine.
routineId This property is required. String
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
datasetId This property is required. string
The ID of the dataset containing this routine.
project This property is required. string
The ID of the project containing this routine.
routineId This property is required. string
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
dataset_id This property is required. str
The ID of the dataset containing this routine.
project This property is required. str
The ID of the project containing this routine.
routine_id This property is required. str
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
datasetId This property is required. String
The ID of the dataset containing this routine.
project This property is required. String
The ID of the project containing this routine.
routineId This property is required. String
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.

RoutineRoutineType
, RoutineRoutineTypeArgs

RoutineTypeUnspecified
ROUTINE_TYPE_UNSPECIFIED: Default value.
ScalarFunction
SCALAR_FUNCTION: Non-built-in persistent scalar function.
Procedure
PROCEDURE: Stored procedure.
TableValuedFunction
TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
AggregateFunction
AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
RoutineRoutineTypeRoutineTypeUnspecified
ROUTINE_TYPE_UNSPECIFIED: Default value.
RoutineRoutineTypeScalarFunction
SCALAR_FUNCTION: Non-built-in persistent scalar function.
RoutineRoutineTypeProcedure
PROCEDURE: Stored procedure.
RoutineRoutineTypeTableValuedFunction
TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
RoutineRoutineTypeAggregateFunction
AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
RoutineTypeUnspecified
ROUTINE_TYPE_UNSPECIFIED: Default value.
ScalarFunction
SCALAR_FUNCTION: Non-built-in persistent scalar function.
Procedure
PROCEDURE: Stored procedure.
TableValuedFunction
TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
AggregateFunction
AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
RoutineTypeUnspecified
ROUTINE_TYPE_UNSPECIFIED: Default value.
ScalarFunction
SCALAR_FUNCTION: Non-built-in persistent scalar function.
Procedure
PROCEDURE: Stored procedure.
TableValuedFunction
TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
AggregateFunction
AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
ROUTINE_TYPE_UNSPECIFIED
ROUTINE_TYPE_UNSPECIFIED: Default value.
SCALAR_FUNCTION
SCALAR_FUNCTION: Non-built-in persistent scalar function.
PROCEDURE
PROCEDURE: Stored procedure.
TABLE_VALUED_FUNCTION
TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
AGGREGATE_FUNCTION
AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
"ROUTINE_TYPE_UNSPECIFIED"
ROUTINE_TYPE_UNSPECIFIED: Default value.
"SCALAR_FUNCTION"
SCALAR_FUNCTION: Non-built-in persistent scalar function.
"PROCEDURE"
PROCEDURE: Stored procedure.
"TABLE_VALUED_FUNCTION"
TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
"AGGREGATE_FUNCTION"
AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
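For TABLE_VALUED_FUNCTION the return shape moves from returnType to returnTableType. A hedged TypeScript sketch with illustrative table and column names:

import * as google_native from "@pulumi/google-native";

const topUsers = new google_native.bigquery.v2.Routine("topUsers", {
    project: "my-project",
    datasetId: "analytics",
    routineReference: { project: "my-project", datasetId: "analytics", routineId: "top_users" },
    routineType: "TABLE_VALUED_FUNCTION",
    language: "SQL",
    arguments: [{ name: "n", dataType: { typeKind: "INT64" } }],
    definitionBody: "SELECT user_id, score FROM analytics.scores ORDER BY score DESC LIMIT n",
    returnTableType: {
        columns: [
            { name: "user_id", type: { typeKind: "STRING" } },
            { name: "score", type: { typeKind: "FLOAT64" } },
        ],
    },
});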

RoutineSecurityMode
, RoutineSecurityModeArgs

SecurityModeUnspecified
SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
Definer
DEFINER: The routine is to be executed with the privileges of the user who defines it.
Invoker
INVOKER: The routine is to be executed with the privileges of the user who invokes it.
RoutineSecurityModeSecurityModeUnspecified
SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
RoutineSecurityModeDefiner
DEFINER: The routine is to be executed with the privileges of the user who defines it.
RoutineSecurityModeInvoker
INVOKER: The routine is to be executed with the privileges of the user who invokes it.
SecurityModeUnspecified
SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
Definer
DEFINER: The routine is to be executed with the privileges of the user who defines it.
Invoker
INVOKER: The routine is to be executed with the privileges of the user who invokes it.
SecurityModeUnspecified
SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
Definer
DEFINER: The routine is to be executed with the privileges of the user who defines it.
Invoker
INVOKER: The routine is to be executed with the privileges of the user who invokes it.
SECURITY_MODE_UNSPECIFIED
SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
DEFINER
DEFINER: The routine is to be executed with the privileges of the user who defines it.
INVOKER
INVOKER: The routine is to be executed with the privileges of the user who invokes it.
"SECURITY_MODE_UNSPECIFIED"
SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
"DEFINER"
DEFINER: The routine is to be executed with the privileges of the user who defines it.
"INVOKER"
INVOKER: The routine is to be executed with the privileges of the user who invokes it.

SparkOptions
, SparkOptionsArgs

ArchiveUris List<string>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
Connection string
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
ContainerImage string
Custom container image for the runtime environment.
FileUris List<string>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
JarUris List<string>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
MainClass string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
MainFileUri string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
Properties Dictionary<string, string>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
PyFileUris List<string>
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
RuntimeVersion string
Runtime version. If not specified, the default runtime version is used.
ArchiveUris []string
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
Connection string
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
ContainerImage string
Custom container image for the runtime environment.
FileUris []string
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
JarUris []string
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
MainClass string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
MainFileUri string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
Properties map[string]string
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
PyFileUris []string
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
RuntimeVersion string
Runtime version. If not specified, the default runtime version is used.
archiveUris List<String>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
connection String
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
containerImage String
Custom container image for the runtime environment.
fileUris List<String>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
jarUris List<String>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
mainClass String
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
mainFileUri String
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
properties Map<String,String>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
pyFileUris List<String>
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
runtimeVersion String
Runtime version. If not specified, the default runtime version is used.
archiveUris string[]
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
connection string
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
containerImage string
Custom container image for the runtime environment.
fileUris string[]
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
jarUris string[]
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
mainClass string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
mainFileUri string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
properties {[key: string]: string}
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
pyFileUris string[]
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
runtimeVersion string
Runtime version. If not specified, the default runtime version is used.
archive_uris Sequence[str]
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
connection str
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
container_image str
Custom container image for the runtime environment.
file_uris Sequence[str]
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
jar_uris Sequence[str]
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
main_class str
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
main_file_uri str
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
properties Mapping[str, str]
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
py_file_uris Sequence[str]
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
runtime_version str
Runtime version. If not specified, the default runtime version is used.
archiveUris List<String>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
connection String
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
containerImage String
Custom container image for the runtime environment.
fileUris List<String>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
jarUris List<String>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
mainClass String
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
mainFileUri String
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
properties Map<String>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
pyFileUris List<String>
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
runtimeVersion String
Runtime version. If not specified, the default runtime version is used.
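Putting SparkOptions to use: a sketch of a PySpark stored procedure (PROCEDURE plus PYTHON), assuming the @pulumi/google-native TypeScript SDK and a pre-created Spark connection; the connection name and runtime version are assumptions. For Python the body goes in definitionBody, since exactly one of definitionBody and mainFileUri may be set.

import * as google_native from "@pulumi/google-native";

const sparkProc = new google_native.bigquery.v2.Routine("sparkProc", {
    project: "my-project",
    datasetId: "analytics",
    routineReference: { project: "my-project", datasetId: "analytics", routineId: "spark_proc" },
    routineType: "PROCEDURE",
    language: "PYTHON",
    definitionBody: [
        "from pyspark.sql import SparkSession",
        "spark = SparkSession.builder.appName(\"spark-proc\").getOrCreate()",
        "print(spark.range(10).count())",
    ].join("\n"),
    sparkOptions: {
        connection: "projects/my-project/locations/us/connections/spark-conn",  // assumed connection
        runtimeVersion: "1.1",                                                  // assumed runtime version
        properties: { "spark.executor.instances": "2" },
    },
});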

SparkOptionsResponse
, SparkOptionsResponseArgs

ArchiveUris This property is required. List<string>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
Connection This property is required. string
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
ContainerImage This property is required. string
Custom container image for the runtime environment.
FileUris This property is required. List<string>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
JarUris This property is required. List<string>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
MainClass This property is required. string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
MainFileUri This property is required. string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
Properties This property is required. Dictionary<string, string>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
PyFileUris This property is required. List<string>
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
RuntimeVersion This property is required. string
Runtime version. If not specified, the default runtime version is used.
ArchiveUris This property is required. []string
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
Connection This property is required. string
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
ContainerImage This property is required. string
Custom container image for the runtime environment.
FileUris This property is required. []string
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
JarUris This property is required. []string
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
MainClass This property is required. string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
MainFileUri This property is required. string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
Properties This property is required. map[string]string
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
PyFileUris This property is required. []string
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
RuntimeVersion This property is required. string
Runtime version. If not specified, the default runtime version is used.
archiveUris This property is required. List<String>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
connection This property is required. String
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
containerImage This property is required. String
Custom container image for the runtime environment.
fileUris This property is required. List<String>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
jarUris This property is required. List<String>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
mainClass This property is required. String
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
mainFileUri This property is required. String
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
properties This property is required. Map<String,String>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
pyFileUris This property is required. List<String>
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
runtimeVersion This property is required. String
Runtime version. If not specified, the default runtime version is used.
archiveUris This property is required. string[]
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
connection This property is required. string
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
containerImage This property is required. string
Custom container image for the runtime environment.
fileUris This property is required. string[]
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
jarUris This property is required. string[]
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
mainClass This property is required. string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
mainFileUri This property is required. string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
properties This property is required. {[key: string]: string}
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
pyFileUris This property is required. string[]
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
runtimeVersion This property is required. string
Runtime version. If not specified, the default runtime version is used.
archive_uris This property is required. Sequence[str]
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
connection This property is required. str
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
container_image This property is required. str
Custom container image for the runtime environment.
file_uris This property is required. Sequence[str]
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
jar_uris This property is required. Sequence[str]
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
main_class This property is required. str
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
main_file_uri This property is required. str
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
properties This property is required. Mapping[str, str]
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
py_file_uris This property is required. Sequence[str]
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
runtime_version This property is required. str
Runtime version. If not specified, the default runtime version is used.
archiveUris This property is required. List<String>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
connection This property is required. String
Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
containerImage This property is required. String
Custom container image for the runtime environment.
fileUris This property is required. List<String>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
jarUris This property is required. List<String>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
mainClass This property is required. String
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for Java and Scala language types.
mainFileUri This property is required. String
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for Java and Scala language types.
properties This property is required. Map<String>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
pyFileUris This property is required. List<String>
Python files to be placed on the PYTHONPATH for a PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
runtimeVersion This property is required. String
Runtime version. If not specified, the default runtime version is used.

StandardSqlDataType
, StandardSqlDataTypeArgs

TypeKind This property is required. Pulumi.GoogleNative.BigQuery.V2.StandardSqlDataTypeTypeKind
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
ArrayElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
The type of the array's elements, if type_kind = "ARRAY".
RangeElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
The type of the range's elements, if type_kind = "RANGE".
StructType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlStructType
The fields of this struct, in order, if type_kind = "STRUCT".
TypeKind This property is required. StandardSqlDataTypeTypeKind
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
ArrayElementType StandardSqlDataType
The type of the array's elements, if type_kind = "ARRAY".
RangeElementType StandardSqlDataType
The type of the range's elements, if type_kind = "RANGE".
StructType StandardSqlStructType
The fields of this struct, in order, if type_kind = "STRUCT".
typeKind This property is required. StandardSqlDataTypeTypeKind
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
arrayElementType StandardSqlDataType
The type of the array's elements, if type_kind = "ARRAY".
rangeElementType StandardSqlDataType
The type of the range's elements, if type_kind = "RANGE".
structType StandardSqlStructType
The fields of this struct, in order, if type_kind = "STRUCT".
typeKind This property is required. StandardSqlDataTypeTypeKind
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
arrayElementType StandardSqlDataType
The type of the array's elements, if type_kind = "ARRAY".
rangeElementType StandardSqlDataType
The type of the range's elements, if type_kind = "RANGE".
structType StandardSqlStructType
The fields of this struct, in order, if type_kind = "STRUCT".
type_kind This property is required. StandardSqlDataTypeTypeKind
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
array_element_type StandardSqlDataType
The type of the array's elements, if type_kind = "ARRAY".
range_element_type StandardSqlDataType
The type of the range's elements, if type_kind = "RANGE".
struct_type StandardSqlStructType
The fields of this struct, in order, if type_kind = "STRUCT".
typeKind This property is required. "TYPE_KIND_UNSPECIFIED" | "INT64" | "BOOL" | "FLOAT64" | "STRING" | "BYTES" | "TIMESTAMP" | "DATE" | "TIME" | "DATETIME" | "INTERVAL" | "GEOGRAPHY" | "NUMERIC" | "BIGNUMERIC" | "JSON" | "ARRAY" | "STRUCT" | "RANGE"
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
arrayElementType Property Map
The type of the array's elements, if type_kind = "ARRAY".
rangeElementType Property Map
The type of the range's elements, if type_kind = "RANGE".
structType Property Map
The fields of this struct, in order, if type_kind = "STRUCT".
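The type is recursive: arrayElementType, rangeElementType, and structType nest further StandardSqlDataType values under the top-level typeKind. As a sketch, ARRAY<STRUCT<name STRING, score FLOAT64>> could be spelled like this (the structType fields shape is an assumption based on StandardSqlStructType; usable wherever a StandardSqlDataType input is expected, e.g. returnType):

const scoresType = {
    typeKind: "ARRAY",
    arrayElementType: {
        typeKind: "STRUCT",
        structType: {
            fields: [
                { name: "name", type: { typeKind: "STRING" } },
                { name: "score", type: { typeKind: "FLOAT64" } },
            ],
        },
    },
};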

StandardSqlDataTypeResponse
, StandardSqlDataTypeResponseArgs

StructType This property is required. Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlStructTypeResponse
The fields of this struct, in order, if type_kind = "STRUCT".
TypeKind This property is required. string
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
ArrayElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
The type of the array's elements, if type_kind = "ARRAY".
RangeElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
The type of the range's elements, if type_kind = "RANGE".
StructType This property is required. StandardSqlStructTypeResponse
The fields of this struct, in order, if type_kind = "STRUCT".
TypeKind This property is required. string
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
ArrayElementType StandardSqlDataTypeResponse
The type of the array's elements, if type_kind = "ARRAY".
RangeElementType StandardSqlDataTypeResponse
The type of the range's elements, if type_kind = "RANGE".
structType This property is required. StandardSqlStructTypeResponse
The fields of this struct, in order, if type_kind = "STRUCT".
typeKind This property is required. String
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
arrayElementType StandardSqlDataTypeResponse
The type of the array's elements, if type_kind = "ARRAY".
rangeElementType StandardSqlDataTypeResponse
The type of the range's elements, if type_kind = "RANGE".
structType This property is required. StandardSqlStructTypeResponse
The fields of this struct, in order, if type_kind = "STRUCT".
typeKind This property is required. string
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
arrayElementType StandardSqlDataTypeResponse
The type of the array's elements, if type_kind = "ARRAY".
rangeElementType StandardSqlDataTypeResponse
The type of the range's elements, if type_kind = "RANGE".
struct_type This property is required. StandardSqlStructTypeResponse
The fields of this struct, in order, if type_kind = "STRUCT".
type_kind This property is required. str
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
array_element_type StandardSqlDataTypeResponse
The type of the array's elements, if type_kind = "ARRAY".
range_element_type StandardSqlDataTypeResponse
The type of the range's elements, if type_kind = "RANGE".
structType This property is required. Property Map
The fields of this struct, in order, if type_kind = "STRUCT".
typeKind This property is required. String
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
arrayElementType Property Map
The type of the array's elements, if type_kind = "ARRAY".
rangeElementType Property Map
The type of the range's elements, if type_kind = "RANGE".

StandardSqlDataTypeTypeKind
, StandardSqlDataTypeTypeKindArgs

TypeKindUnspecified
TYPE_KIND_UNSPECIFIEDInvalid type.
Int64
INT64Encoded as a string in decimal format.
Bool
BOOLEncoded as a boolean "false" or "true".
Float64
FLOAT64Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
String
STRINGEncoded as a string value.
Bytes
BYTESEncoded as a base64 string per RFC 4648, section 4.
Timestamp
TIMESTAMPEncoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
Date
DATEEncoded as RFC 3339 full-date format string: 1985-04-12
Time
TIMEEncoded as RFC 3339 partial-time format string: 23:20:50.52
Datetime
DATETIMEEncoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
Interval
INTERVALEncoded as fully qualified 3 part: 0-5 15 2:30:45.6
Geography
GEOGRAPHYEncoded as WKT
Numeric
NUMERICEncoded as a decimal string.
Bignumeric
BIGNUMERICEncoded as a decimal string.
Json
JSONEncoded as a string.
Array
ARRAYEncoded as a list with types matching Type.array_type.
Struct
STRUCTEncoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
Range
RANGEEncoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
StandardSqlDataTypeTypeKindTypeKindUnspecified
TYPE_KIND_UNSPECIFIEDInvalid type.
StandardSqlDataTypeTypeKindInt64
INT64Encoded as a string in decimal format.
StandardSqlDataTypeTypeKindBool
BOOLEncoded as a boolean "false" or "true".
StandardSqlDataTypeTypeKindFloat64
FLOAT64Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
StandardSqlDataTypeTypeKindString
STRINGEncoded as a string value.
StandardSqlDataTypeTypeKindBytes
BYTESEncoded as a base64 string per RFC 4648, section 4.
StandardSqlDataTypeTypeKindTimestamp
TIMESTAMPEncoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
StandardSqlDataTypeTypeKindDate
DATEEncoded as RFC 3339 full-date format string: 1985-04-12
StandardSqlDataTypeTypeKindTime
TIMEEncoded as RFC 3339 partial-time format string: 23:20:50.52
StandardSqlDataTypeTypeKindDatetime
DATETIMEEncoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
StandardSqlDataTypeTypeKindInterval
INTERVALEncoded as fully qualified 3 part: 0-5 15 2:30:45.6
StandardSqlDataTypeTypeKindGeography
GEOGRAPHYEncoded as WKT
StandardSqlDataTypeTypeKindNumeric
NUMERICEncoded as a decimal string.
StandardSqlDataTypeTypeKindBignumeric
BIGNUMERICEncoded as a decimal string.
StandardSqlDataTypeTypeKindJson
JSONEncoded as a string.
StandardSqlDataTypeTypeKindArray
ARRAYEncoded as a list with types matching Type.array_type.
StandardSqlDataTypeTypeKindStruct
STRUCTEncoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
StandardSqlDataTypeTypeKindRange
RANGEEncoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
TypeKindUnspecified
TYPE_KIND_UNSPECIFIEDInvalid type.
Int64
INT64Encoded as a string in decimal format.
Bool
BOOLEncoded as a boolean "false" or "true".
Float64
FLOAT64Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
String
STRINGEncoded as a string value.
Bytes
BYTESEncoded as a base64 string per RFC 4648, section 4.
Timestamp
TIMESTAMPEncoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
Date
DATEEncoded as RFC 3339 full-date format string: 1985-04-12
Time
TIMEEncoded as RFC 3339 partial-time format string: 23:20:50.52
Datetime
DATETIMEEncoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
Interval
INTERVALEncoded as fully qualified 3 part: 0-5 15 2:30:45.6
Geography
GEOGRAPHYEncoded as WKT
Numeric
NUMERICEncoded as a decimal string.
Bignumeric
BIGNUMERICEncoded as a decimal string.
Json
JSONEncoded as a string.
Array
ARRAYEncoded as a list with types matching Type.array_type.
Struct
STRUCTEncoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
Range
RANGEEncoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
TypeKindUnspecified
TYPE_KIND_UNSPECIFIEDInvalid type.
Int64
INT64Encoded as a string in decimal format.
Bool
BOOLEncoded as a boolean "false" or "true".
Float64
FLOAT64Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
String
STRINGEncoded as a string value.
Bytes
BYTESEncoded as a base64 string per RFC 4648, section 4.
Timestamp
TIMESTAMPEncoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
Date
DATEEncoded as RFC 3339 full-date format string: 1985-04-12
Time
TIMEEncoded as RFC 3339 partial-time format string: 23:20:50.52
Datetime
DATETIMEEncoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
Interval
INTERVALEncoded as fully qualified 3 part: 0-5 15 2:30:45.6
Geography
GEOGRAPHYEncoded as WKT
Numeric
NUMERICEncoded as a decimal string.
Bignumeric
BIGNUMERICEncoded as a decimal string.
Json
JSONEncoded as a string.
Array
ARRAYEncoded as a list with types matching Type.array_type.
Struct
STRUCTEncoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
Range
RANGEEncoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
TYPE_KIND_UNSPECIFIED
TYPE_KIND_UNSPECIFIEDInvalid type.
INT64
INT64Encoded as a string in decimal format.
BOOL
BOOLEncoded as a boolean "false" or "true".
FLOAT64
FLOAT64Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
STRING
STRINGEncoded as a string value.
BYTES
BYTESEncoded as a base64 string per RFC 4648, section 4.
TIMESTAMP
TIMESTAMPEncoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
DATE
DATEEncoded as RFC 3339 full-date format string: 1985-04-12
TIME
TIMEEncoded as RFC 3339 partial-time format string: 23:20:50.52
DATETIME
DATETIMEEncoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
INTERVAL
INTERVALEncoded as fully qualified 3 part: 0-5 15 2:30:45.6
GEOGRAPHY
GEOGRAPHYEncoded as WKT
NUMERIC
NUMERICEncoded as a decimal string.
BIGNUMERIC
BIGNUMERICEncoded as a decimal string.
JSON
JSONEncoded as a string.
ARRAY
ARRAYEncoded as a list with types matching Type.array_type.
STRUCT
STRUCTEncoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
RANGE
RANGEEncoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
"TYPE_KIND_UNSPECIFIED"
TYPE_KIND_UNSPECIFIEDInvalid type.
"INT64"
INT64Encoded as a string in decimal format.
"BOOL"
BOOLEncoded as a boolean "false" or "true".
"FLOAT64"
FLOAT64Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
"STRING"
STRINGEncoded as a string value.
"BYTES"
BYTESEncoded as a base64 string per RFC 4648, section 4.
"TIMESTAMP"
TIMESTAMPEncoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
"DATE"
DATEEncoded as RFC 3339 full-date format string: 1985-04-12
"TIME"
TIMEEncoded as RFC 3339 partial-time format string: 23:20:50.52
"DATETIME"
DATETIMEEncoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
"INTERVAL"
INTERVALEncoded as fully qualified 3 part: 0-5 15 2:30:45.6
"GEOGRAPHY"
GEOGRAPHYEncoded as WKT
"NUMERIC"
NUMERICEncoded as a decimal string.
"BIGNUMERIC"
BIGNUMERICEncoded as a decimal string.
"JSON"
JSONEncoded as a string.
"ARRAY"
ARRAYEncoded as a list with types matching Type.array_type.
"STRUCT"
STRUCTEncoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
"RANGE"
RANGEEncoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
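The tables above pair each enum value with the JSON encoding used for values of that kind. As a minimal TypeScript sketch of how the kinds compose into a StandardSqlDataType, assuming the @pulumi/google-native SDK with its types.input.bigquery.v2 input namespace and a rangeElementType property mirroring range_element_type (all variable names are illustrative):

import * as google_native from "@pulumi/google-native";

// ARRAY<INT64>: the element kind is carried in arrayElementType; per the
// table above, INT64 values are encoded as strings in decimal format.
const arrayOfInt64: google_native.types.input.bigquery.v2.StandardSqlDataTypeArgs = {
    typeKind: "ARRAY",
    arrayElementType: { typeKind: "INT64" },
};

// RANGE<DATE>: the element kind is carried in rangeElementType; values are
// encoded as pairs such as ["2024-01-01", "2024-12-31").
const rangeOfDate: google_native.types.input.bigquery.v2.StandardSqlDataTypeArgs = {
    typeKind: "RANGE",
    rangeElementType: { typeKind: "DATE" },
};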

StandardSqlField, StandardSqlFieldArgs

Name string
Optional. The name of this field. Can be absent for struct fields.
Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
Name string
Optional. The name of this field. Can be absent for struct fields.
Type StandardSqlDataType
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
name String
Optional. The name of this field. Can be absent for struct fields.
type StandardSqlDataType
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
name string
Optional. The name of this field. Can be absent for struct fields.
type StandardSqlDataType
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
name str
Optional. The name of this field. Can be absent for struct fields.
type StandardSqlDataType
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
name String
Optional. The name of this field. Can be absent for struct fields.
type Property Map
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
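As a small sketch under the same assumptions as above, a StandardSqlField pairs an optional name with an optional type; the name may be omitted for struct fields:

import * as google_native from "@pulumi/google-native";

// A named field of kind TIMESTAMP.
const namedField: google_native.types.input.bigquery.v2.StandardSqlFieldArgs = {
    name: "created_at",
    type: { typeKind: "TIMESTAMP" },
};

// A struct field with the name absent, as the description above allows.
const anonymousField: google_native.types.input.bigquery.v2.StandardSqlFieldArgs = {
    type: { typeKind: "STRING" },
};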

StandardSqlFieldResponse, StandardSqlFieldResponseArgs

Name This property is required. string
Optional. The name of this field. Can be absent for struct fields.
Type This property is required. Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
Name This property is required. string
Optional. The name of this field. Can be absent for struct fields.
Type This property is required. StandardSqlDataTypeResponse
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
name This property is required. String
Optional. The name of this field. Can be absent for struct fields.
type This property is required. StandardSqlDataTypeResponse
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
name This property is required. string
Optional. The name of this field. Can be absent for struct fields.
type This property is required. StandardSqlDataTypeResponse
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
name This property is required. str
Optional. The name of this field. Can be absent for struct fields.
type This property is required. StandardSqlDataTypeResponse
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
name This property is required. String
Optional. The name of this field. Can be absent for struct fields.
type This property is required. Property Map
Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).

StandardSqlStructType, StandardSqlStructTypeArgs

Fields []StandardSqlField
Fields within the struct.
fields List<StandardSqlField>
Fields within the struct.
fields StandardSqlField[]
Fields within the struct.
fields Sequence[StandardSqlField]
Fields within the struct.
fields List<Property Map>
Fields within the struct.
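A struct type is only a list of such fields. As a sketch for STRUCT<x FLOAT64, y FLOAT64>, under the same SDK assumptions as above:

import * as google_native from "@pulumi/google-native";

// Two named FLOAT64 fields; in JSON, FLOAT64 values are encoded as numbers
// (or the strings "NaN", "Infinity" and "-Infinity").
const point: google_native.types.input.bigquery.v2.StandardSqlStructTypeArgs = {
    fields: [
        { name: "x", type: { typeKind: "FLOAT64" } },
        { name: "y", type: { typeKind: "FLOAT64" } },
    ],
};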

StandardSqlStructTypeResponse, StandardSqlStructTypeResponseArgs

Fields This property is required. List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldResponse>
Fields within the struct.
Fields This property is required. []StandardSqlFieldResponse
Fields within the struct.
fields This property is required. List<StandardSqlFieldResponse>
Fields within the struct.
fields This property is required. StandardSqlFieldResponse[]
Fields within the struct.
fields This property is required. Sequence[StandardSqlFieldResponse]
Fields within the struct.
fields This property is required. List<Property Map>
Fields within the struct.

StandardSqlTableType, StandardSqlTableTypeArgs

Columns []StandardSqlField
The columns in this table type.
columns List<StandardSqlField>
The columns in this table type.
columns StandardSqlField[]
The columns in this table type.
columns Sequence[StandardSqlField]
The columns in this table type.
columns List<Property Map>
The columns in this table type.
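A table type supplies the output schema of a table-valued function. As a sketch of a complete Routine that uses it, assuming the argument names documented on this page (the project, dataset, and routine identifiers are placeholders):

import * as google_native from "@pulumi/google-native";

// A table-valued function whose return schema is declared via returnTableType.
const tvf = new google_native.bigquery.v2.Routine("topScores", {
    datasetId: "my_dataset",          // placeholder dataset
    routineReference: {
        projectId: "my-project",      // placeholder project
        datasetId: "my_dataset",
        routineId: "top_scores",
    },
    routineType: "TABLE_VALUED_FUNCTION",
    language: "SQL",
    definitionBody:
        "SELECT name, score FROM UNNEST([STRUCT('a' AS name, 1 AS score)])",
    returnTableType: {
        columns: [
            { name: "name", type: { typeKind: "STRING" } },
            { name: "score", type: { typeKind: "INT64" } },
        ],
    },
});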

StandardSqlTableTypeResponse, StandardSqlTableTypeResponseArgs

Columns This property is required. List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldResponse>
The columns in this table type.
Columns This property is required. []StandardSqlFieldResponse
The columns in this table type.
columns This property is required. List<StandardSqlFieldResponse>
The columns in this table type.
columns This property is required. StandardSqlFieldResponse[]
The columns in this table type.
columns This property is required. Sequence[StandardSqlFieldResponse]
The columns in this table type.
columns This property is required. List<Property Map>
The columns in this table type.

Package Details

Repository
Google Cloud Native pulumi/pulumi-google-native
License
Apache-2.0
