Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.notebooks/v1.getSchedule
Gets details of a schedule.
Using getSchedule
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getSchedule(args: GetScheduleArgs, opts?: InvokeOptions): Promise<GetScheduleResult>
function getScheduleOutput(args: GetScheduleOutputArgs, opts?: InvokeOptions): Output<GetScheduleResult>

def get_schedule(location: Optional[str] = None,
                 project: Optional[str] = None,
                 schedule_id: Optional[str] = None,
                 opts: Optional[InvokeOptions] = None) -> GetScheduleResult
def get_schedule_output(location: Optional[pulumi.Input[str]] = None,
                        project: Optional[pulumi.Input[str]] = None,
                        schedule_id: Optional[pulumi.Input[str]] = None,
                        opts: Optional[InvokeOptions] = None) -> Output[GetScheduleResult]

func LookupSchedule(ctx *Context, args *LookupScheduleArgs, opts ...InvokeOption) (*LookupScheduleResult, error)
func LookupScheduleOutput(ctx *Context, args *LookupScheduleOutputArgs, opts ...InvokeOption) LookupScheduleResultOutput

> Note: This function is named LookupSchedule in the Go SDK.

public static class GetSchedule
{
    public static Task<GetScheduleResult> InvokeAsync(GetScheduleArgs args, InvokeOptions? opts = null)
    public static Output<GetScheduleResult> Invoke(GetScheduleInvokeArgs args, InvokeOptions? opts = null)
}

public static CompletableFuture<GetScheduleResult> getSchedule(GetScheduleArgs args, InvokeOptions options)
public static Output<GetScheduleResult> getSchedule(GetScheduleArgs args, InvokeOptions options)

fn::invoke:
  function: google-native:notebooks/v1:getSchedule
  arguments:
    # arguments dictionary

The following arguments are supported:
- Location string
- ScheduleId string
- Project string
- Location string
- ScheduleId string
- Project string
- location String
- scheduleId String
- project String
- location string
- scheduleId string
- project string
- location str
- schedule_id str
- project str
- location String
- scheduleId String
- project String
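The three arguments above together identify a single schedule, and the service addresses it by the fully qualified resource name documented under the `name` output property. A minimal Python sketch of how the pieces compose (the `schedule_resource_name` helper is hypothetical, not part of the SDK):

```python
# Hypothetical helper illustrating the documented resource-name format:
# projects/{project_id}/locations/{location}/schedules/{schedule_id}
def schedule_resource_name(project: str, location: str, schedule_id: str) -> str:
    """Compose the fully qualified name of a notebook schedule."""
    return f"projects/{project}/locations/{location}/schedules/{schedule_id}"

# Example values ("my-project", "us-central1", "nightly-train") are placeholders.
name = schedule_resource_name("my-project", "us-central1", "nightly-train")
```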
getSchedule Result
The following output properties are available:
- CreateTime string
- Time the schedule was created.
- CronSchedule string
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. "0 0 * * WED" = every Wednesday. More examples: https://crontab.guru/examples.html
- Description string
- A brief description of this environment.
- DisplayName string
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionTemplateResponse
- Notebook Execution Template corresponding to this schedule.
- Name string
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- RecentExecutions List<Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionResponse>
- The most recent execution names triggered from this schedule and their corresponding states.
- State string
- TimeZone string
- Timezone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- UpdateTime string
- Time the schedule was last updated.
- CreateTime string
- Time the schedule was created.
- CronSchedule string
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. "0 0 * * WED" = every Wednesday. More examples: https://crontab.guru/examples.html
- Description string
- A brief description of this environment.
- DisplayName string
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- ExecutionTemplate ExecutionTemplateResponse
- Notebook Execution Template corresponding to this schedule.
- Name string
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- RecentExecutions []ExecutionResponse 
- The most recent execution names triggered from this schedule and their corresponding states.
- State string
- TimeZone string
- Timezone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- UpdateTime string
- Time the schedule was last updated.
- createTime String
- Time the schedule was created.
- cronSchedule String
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. "0 0 * * WED" = every Wednesday. More examples: https://crontab.guru/examples.html
- description String
- A brief description of this environment.
- displayName String
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- executionTemplate ExecutionTemplateResponse
- Notebook Execution Template corresponding to this schedule.
- name String
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions List<ExecutionResponse> 
- The most recent execution names triggered from this schedule and their corresponding states.
- state String
- timeZone String
- Timezone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- updateTime String
- Time the schedule was last updated.
- createTime string
- Time the schedule was created.
- cronSchedule string
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. "0 0 * * WED" = every Wednesday. More examples: https://crontab.guru/examples.html
- description string
- A brief description of this environment.
- displayName string
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- executionTemplate ExecutionTemplateResponse
- Notebook Execution Template corresponding to this schedule.
- name string
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions ExecutionResponse[] 
- The most recent execution names triggered from this schedule and their corresponding states.
- state string
- timeZone string
- Timezone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- updateTime string
- Time the schedule was last updated.
- create_time str
- Time the schedule was created.
- cron_schedule str
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. "0 0 * * WED" = every Wednesday. More examples: https://crontab.guru/examples.html
- description str
- A brief description of this environment.
- display_name str
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- execution_template ExecutionTemplateResponse
- Notebook Execution Template corresponding to this schedule.
- name str
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recent_executions Sequence[ExecutionResponse] 
- The most recent execution names triggered from this schedule and their corresponding states.
- state str
- time_zone str
- Timezone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- update_time str
- Time the schedule was last updated.
- createTime String
- Time the schedule was created.
- cronSchedule String
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. "0 0 * * WED" = every Wednesday. More examples: https://crontab.guru/examples.html
- description String
- A brief description of this environment.
- displayName String
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- executionTemplate Property Map
- Notebook Execution Template corresponding to this schedule.
- name String
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions List<Property Map>
- The most recent execution names triggered from this schedule and their corresponding states.
- state String
- timeZone String
- Timezone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- updateTime String
- Time the schedule was last updated.
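The cronSchedule and timeZone properties above follow standard conventions: a five-field cron expression and a tz database name. A small self-contained Python sketch of client-side sanity checks one might run before creating a schedule (illustrative only; the service performs its own validation, and both helper names are hypothetical):

```python
from zoneinfo import ZoneInfo  # Python 3.9+ standard library

def looks_like_cron(expr: str) -> bool:
    """Check that a cron expression has the five fields the API expects:
    minute, hour, day of month, month, day of week."""
    return len(expr.split()) == 5

def valid_time_zone(name: str) -> bool:
    """Check that a name resolves against the locally installed tz database."""
    try:
        ZoneInfo(name)
        return True
    except Exception:
        return False

assert looks_like_cron("0 0 * * WED")  # every Wednesday at midnight
assert not looks_like_cron("0 0 * *")  # only four fields

# Resolves where a tz database is installed:
ok = valid_time_zone("America/New_York")
```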
Supporting Types
DataprocParametersResponse  
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster str
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
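The cluster URI above has a fixed shape, so its components can be recovered with simple string handling. A hedged Python sketch (`parse_cluster_uri` is an illustrative helper, not part of the SDK):

```python
def parse_cluster_uri(uri: str) -> dict:
    """Split a Dataproc cluster URI of the documented form
    projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
    into its named components."""
    parts = uri.split("/")
    if len(parts) != 6 or parts[0] != "projects" or parts[2] != "regions" or parts[4] != "clusters":
        raise ValueError(f"unexpected cluster URI: {uri!r}")
    return {"project": parts[1], "region": parts[3], "cluster": parts[5]}

# Example values below are placeholders, not real resources.
info = parse_cluster_uri("projects/my-project/regions/us-central1/clusters/nb-exec")
```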
ExecutionResponse 
- CreateTime string
- Time the Execution was instantiated.
- Description string
- A brief description of this execution.
- DisplayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateResponse
- execute metadata including name, hardware spec, region, labels, etc.
- JobUri string
- The URI of the external job used to execute the notebook.
- Name string
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- OutputNotebookFile string
- Output notebook file generated by this execution
- State string
- State of the underlying AI Platform job.
- UpdateTime string
- Time the Execution was last updated.
- CreateTime string
- Time the Execution was instantiated.
- Description string
- A brief description of this execution.
- DisplayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- ExecutionTemplate ExecutionTemplateResponse
- execute metadata including name, hardware spec, region, labels, etc.
- JobUri string
- The URI of the external job used to execute the notebook.
- Name string
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- OutputNotebookFile string
- Output notebook file generated by this execution
- State string
- State of the underlying AI Platform job.
- UpdateTime string
- Time the Execution was last updated.
- createTime String
- Time the Execution was instantiated.
- description String
- A brief description of this execution.
- displayName String
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate ExecutionTemplateResponse
- execute metadata including name, hardware spec, region, labels, etc.
- jobUri String
- The URI of the external job used to execute the notebook.
- name String
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile String
- Output notebook file generated by this execution
- state String
- State of the underlying AI Platform job.
- updateTime String
- Time the Execution was last updated.
- createTime string
- Time the Execution was instantiated.
- description string
- A brief description of this execution.
- displayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate ExecutionTemplateResponse
- execute metadata including name, hardware spec, region, labels, etc.
- jobUri string
- The URI of the external job used to execute the notebook.
- name string
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile string
- Output notebook file generated by this execution
- state string
- State of the underlying AI Platform job.
- updateTime string
- Time the Execution was last updated.
- create_time str
- Time the Execution was instantiated.
- description str
- A brief description of this execution.
- display_name str
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- execution_template ExecutionTemplateResponse
- execute metadata including name, hardware spec, region, labels, etc.
- job_uri str
- The URI of the external job used to execute the notebook.
- name str
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- output_notebook_file str
- Output notebook file generated by this execution
- state str
- State of the underlying AI Platform job.
- update_time str
- Time the Execution was last updated.
- createTime String
- Time the Execution was instantiated.
- description String
- A brief description of this execution.
- displayName String
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate Property Map
- execute metadata including name, hardware spec, region, labels, etc.
- jobUri String
- The URI of the external job used to execute the notebook.
- name String
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile String
- Output notebook file generated by this execution
- state String
- State of the underlying AI Platform job.
- updateTime String
- Time the Execution was last updated.
ExecutionTemplateResponse  
- AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- InputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels Dictionary<string, string>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- AcceleratorConfig SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- InputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels map[string]string
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- inputNotebookFile String
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String,String>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- inputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType string
- The type of Job to be used on this execution.
- kernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string}
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters string
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.
- serviceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- accelerator_config SchedulerAccelerator Config Response 
- Configuration (count and accelerator type) for hardware running notebook execution.
- container_image_ struri 
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_parameters DataprocParameters Response 
- Parameters used in Dataproc JobType executions.
- input_notebook_ strfile 
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name}Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_type str
- The type of Job to be used on this execution.
- kernel_spec str
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Mapping[str, str]
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- master_type str
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- output_notebook_folder str
- Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters str
- Parameters used within the 'input_notebook_file' notebook.
- params_yaml_file str
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scale_tier str
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.
- service_account str
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard str
- The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_parameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
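The input_notebook_file and output_notebook_folder fields above must be Cloud Storage URIs in the formats shown. A minimal sketch of helpers that assemble and check those formats (the function names are illustrative, not part of the Pulumi SDK):

```python
# Sketch: build and validate the gs:// paths expected by
# input_notebook_file and output_notebook_folder.
# Helper names are illustrative, not part of the SDK.

def input_notebook_path(bucket: str, folder: str, notebook: str) -> str:
    """Format: gs://{bucket_name}/{folder}/{notebook_file_name}"""
    return f"gs://{bucket}/{folder}/{notebook}"

def output_notebook_folder(bucket: str, folder: str) -> str:
    """Format: gs://{bucket_name}/{folder}"""
    return f"gs://{bucket}/{folder}"

def is_gcs_uri(path: str) -> bool:
    """Both values must point into a Google Cloud Storage bucket."""
    return path.startswith("gs://") and len(path) > len("gs://")
```

For instance, input_notebook_path("notebook_user", "scheduled_notebooks", "sentiment_notebook.ipynb") reproduces the example path in the description above.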
- acceleratorConfig Property Map
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters Property Map
- Parameters used in Dataproc JobType executions.
- inputNotebookFile String
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String
- Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String
- The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters Property Map
- Parameters used in Vertex AI JobType executions.
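The masterType field above is required when scaleTier is CUSTOM and is restricted to the machine types enumerated in its description. A client-side pre-check might look like the following sketch; the allow-list is transcribed from that description, and the helper is illustrative (the API performs its own authoritative validation):

```python
# Sketch: pre-validate a masterType value against the machine types
# listed in the masterType description above. Illustrative only.

N1_TYPES = {
    f"n1-{family}-{size}"
    for family, sizes in {
        "standard": (4, 8, 16, 32, 64, 96),
        "highmem": (2, 4, 8, 16, 32, 64, 96),
        "highcpu": (16, 32, 64, 96),
    }.items()
    for size in sizes
}

LEGACY_TYPES = {
    "standard", "large_model", "complex_model_s", "complex_model_m",
    "complex_model_l", "standard_gpu", "complex_model_m_gpu",
    "complex_model_l_gpu", "standard_p100", "complex_model_m_p100",
    "standard_v100", "large_model_v100", "complex_model_m_v100",
    "complex_model_l_v100",
}

def is_valid_master_type(master_type: str) -> bool:
    # cloud_tpu selects a TPU for training.
    return master_type in N1_TYPES | LEGACY_TYPES | {"cloud_tpu"}
```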
SchedulerAcceleratorConfigResponse   
- core_count str
- Count of cores of this accelerator.
- type str
- Type of this accelerator.
VertexAIParametersResponse  
- Env Dictionary<string, string>
- Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env map[string]string
- Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String,String>
- Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env {[key: string]: string}
- Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Mapping[str, str]
- Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network str
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String>
- Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
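The env and network fields of VertexAIParameters carry two checkable constraints: at most 100 unique environment variables, and a network name of the form projects/{project}/global/networks/{network} with a numeric project. A hedged sketch of client-side checks (helper names are illustrative, not part of the SDK; the service is the authority on validation):

```python
import re

# Sketch: client-side checks for VertexAIParameters.env and .network.
# Illustrative only; the API performs its own validation.

# Per the description above, {project} is a project number (e.g. 12345).
_NETWORK_RE = re.compile(r"^projects/\d+/global/networks/[^/]+$")

def is_valid_network(network: str) -> bool:
    """Matches projects/{project}/global/networks/{network}."""
    return bool(_NETWORK_RE.match(network))

def is_valid_env(env: dict) -> bool:
    """At most 100 environment variables; dict keys are unique by construction."""
    return len(env) <= 100
```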
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0