bigquery.v1beta1.table

"Table is the Schema for the Tables API. Creates a table resource in a dataset for Google BigQuery."

Index

Fields

fn new

new(name)

new returns an instance of Table
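
A minimal sketch of constructing a Table with this library; the import path is a placeholder and depends on how the generated library is vendored in your project:

  // Hypothetical import path -- adjust to your vendoring setup.
  local gcp = import 'provider-gcp/main.libsonnet';
  local table = gcp.bigquery.v1beta1.table;

  table.new('my-table')
  + table.spec.forProvider.withDatasetId('my_dataset')
  + table.spec.forProvider.withDescription('Managed by Crossplane')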

obj metadata

"ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create."

fn metadata.withAnnotations

withAnnotations(annotations)

"Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations"

fn metadata.withAnnotationsMixin

withAnnotationsMixin(annotations)

"Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations"

Note: This function appends passed data to existing values
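
A sketch of the difference between the plain setter and its mixin variant, building on the table object above: withAnnotations replaces the whole map, while withAnnotationsMixin merges into whatever is already set.

  table.metadata.withAnnotations({ 'example.com/team': 'data' })        // overwrites any existing annotations
  + table.metadata.withAnnotationsMixin({ 'example.com/env': 'prod' })  // merges; both keys survive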

fn metadata.withClusterName

withClusterName(clusterName)

"The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request."

fn metadata.withCreationTimestamp

withCreationTimestamp(creationTimestamp)

"Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers."

fn metadata.withDeletionGracePeriodSeconds

withDeletionGracePeriodSeconds(deletionGracePeriodSeconds)

"Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only."

fn metadata.withDeletionTimestamp

withDeletionTimestamp(deletionTimestamp)

"Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers."

fn metadata.withFinalizers

withFinalizers(finalizers)

"Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list."

fn metadata.withFinalizersMixin

withFinalizersMixin(finalizers)

"Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list."

Note: This function appends passed data to existing values

fn metadata.withGenerateName

withGenerateName(generateName)

"GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n\nIf this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).\n\nApplied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency"

fn metadata.withGeneration

withGeneration(generation)

"A sequence number representing a specific generation of the desired state. Populated by the system. Read-only."

fn metadata.withLabels

withLabels(labels)

"Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels"

fn metadata.withLabelsMixin

withLabelsMixin(labels)

"Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels"

Note: This function appends passed data to existing values

fn metadata.withName

withName(name)

"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names"

fn metadata.withNamespace

withNamespace(namespace)

"Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.\n\nMust be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces"

fn metadata.withOwnerReferences

withOwnerReferences(ownerReferences)

"List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller."

fn metadata.withOwnerReferencesMixin

withOwnerReferencesMixin(ownerReferences)

"List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller."

Note: This function appends passed data to existing values

fn metadata.withResourceVersion

withResourceVersion(resourceVersion)

"An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n\nPopulated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency"

fn metadata.withSelfLink

withSelfLink(selfLink)

"SelfLink is a URL representing this object. Populated by the system. Read-only.\n\nDEPRECATED Kubernetes will stop propagating this field in 1.20 release and the field is planned to be removed in 1.21 release."

fn metadata.withUid

withUid(uid)

"UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.\n\nPopulated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids"

obj spec

"TableSpec defines the desired state of Table"

fn spec.withDeletionPolicy

withDeletionPolicy(deletionPolicy)

"DeletionPolicy specifies what will happen to the underlying external when this managed resource is deleted - either \"Delete\" or \"Orphan\" the external resource. This field is planned to be deprecated in favor of the ManagementPolicies field in a future release. Currently, both could be set independently and non-default values would be honored if the feature flag is enabled. See the design doc for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223"

fn spec.withManagementPolicies

withManagementPolicies(managementPolicies)

"THIS IS AN ALPHA FIELD. Do not use it in production. It is not honored unless the relevant Crossplane feature flag is enabled, and may be changed or removed without notice. ManagementPolicies specify the array of actions Crossplane is allowed to take on the managed and external resources. This field is planned to replace the DeletionPolicy field in a future release. Currently, both could be set independently and non-default values would be honored if the feature flag is enabled. If both are custom, the DeletionPolicy field will be ignored. See the design doc for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223 and this one: https://github.com/crossplane/crossplane/blob/444267e84783136daa93568b364a5f01228cacbe/design/one-pager-ignore-changes.md"

fn spec.withManagementPoliciesMixin

withManagementPoliciesMixin(managementPolicies)

"THIS IS AN ALPHA FIELD. Do not use it in production. It is not honored unless the relevant Crossplane feature flag is enabled, and may be changed or removed without notice. ManagementPolicies specify the array of actions Crossplane is allowed to take on the managed and external resources. This field is planned to replace the DeletionPolicy field in a future release. Currently, both could be set independently and non-default values would be honored if the feature flag is enabled. If both are custom, the DeletionPolicy field will be ignored. See the design doc for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223 and this one: https://github.com/crossplane/crossplane/blob/444267e84783136daa93568b364a5f01228cacbe/design/one-pager-ignore-changes.md"

Note: This function appends passed data to existing values
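
A sketch combining the spec-level policies; note that ManagementPolicies is alpha and is ignored unless the corresponding Crossplane feature flag is enabled.

  table.spec.withDeletionPolicy('Orphan')           // keep the BigQuery table when the managed resource is deleted
  + table.spec.withManagementPolicies(['Observe'])  // alpha: observe-only, never mutate the external table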

obj spec.forProvider

fn spec.forProvider.withClustering

withClustering(clustering)

"Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order."

fn spec.forProvider.withClusteringMixin

withClusteringMixin(clustering)

"Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order."

Note: This function appends passed data to existing values

fn spec.forProvider.withDatasetId

withDatasetId(datasetId)

"The dataset ID to create the table in. Changing this forces a new resource to be created."

fn spec.forProvider.withDeletionProtection

withDeletionProtection(deletionProtection)

fn spec.forProvider.withDescription

withDescription(description)

"The field description."

fn spec.forProvider.withEncryptionConfiguration

withEncryptionConfiguration(encryptionConfiguration)

"Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below."

fn spec.forProvider.withEncryptionConfigurationMixin

withEncryptionConfigurationMixin(encryptionConfiguration)

"Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.withExpirationTime

withExpirationTime(expirationTime)

"The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed."

fn spec.forProvider.withExternalDataConfiguration

withExternalDataConfiguration(externalDataConfiguration)

"Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below."

fn spec.forProvider.withExternalDataConfigurationMixin

withExternalDataConfigurationMixin(externalDataConfiguration)

"Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.withFriendlyName

withFriendlyName(friendlyName)

"A descriptive name for the table."

fn spec.forProvider.withLabels

withLabels(labels)

"A mapping of labels to assign to the resource."

fn spec.forProvider.withLabelsMixin

withLabelsMixin(labels)

"A mapping of labels to assign to the resource."

Note: This function appends passed data to existing values

fn spec.forProvider.withMaterializedView

withMaterializedView(materializedView)

"If specified, configures this table as a materialized view. Structure is documented below."

fn spec.forProvider.withMaterializedViewMixin

withMaterializedViewMixin(materializedView)

"If specified, configures this table as a materialized view. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.withProject

withProject(project)

"The ID of the project in which the resource belongs. If it is not provided, the provider project is used."

fn spec.forProvider.withRangePartitioning

withRangePartitioning(rangePartitioning)

"If specified, configures range-based partitioning for this table. Structure is documented below."

fn spec.forProvider.withRangePartitioningMixin

withRangePartitioningMixin(rangePartitioning)

"If specified, configures range-based partitioning for this table. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.withSchema

withSchema(schema)

"A JSON schema for the table."

fn spec.forProvider.withTimePartitioning

withTimePartitioning(timePartitioning)

"If specified, configures time-based partitioning for this table. Structure is documented below."

fn spec.forProvider.withTimePartitioningMixin

withTimePartitioningMixin(timePartitioning)

"If specified, configures time-based partitioning for this table. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.withView

withView(view)

"If specified, configures this table as a view. Structure is documented below."

fn spec.forProvider.withViewMixin

withViewMixin(view)

"If specified, configures this table as a view. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.forProvider.datasetIdRef

"Reference to a Dataset in bigquery to populate datasetId."

fn spec.forProvider.datasetIdRef.withName

withName(name)

"Name of the referenced object."

obj spec.forProvider.datasetIdRef.policy

"Policies for referencing."

fn spec.forProvider.datasetIdRef.policy.withResolution

withResolution(resolution)

"Resolution specifies whether resolution of this reference is required. The default is 'Required', which means the reconcile will fail if the reference cannot be resolved. 'Optional' means this reference will be a no-op if it cannot be resolved."

fn spec.forProvider.datasetIdRef.policy.withResolve

withResolve(resolve)

"Resolve specifies when this reference should be resolved. The default is 'IfNotPresent', which will attempt to resolve the reference only when the corresponding field is not present. Use 'Always' to resolve the reference on every reconcile."

obj spec.forProvider.datasetIdSelector

"Selector for a Dataset in bigquery to populate datasetId."

fn spec.forProvider.datasetIdSelector.withMatchControllerRef

withMatchControllerRef(matchControllerRef)

"MatchControllerRef ensures an object with the same controller reference as the selecting object is selected."

fn spec.forProvider.datasetIdSelector.withMatchLabels

withMatchLabels(matchLabels)

"MatchLabels ensures an object with matching labels is selected."

fn spec.forProvider.datasetIdSelector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"MatchLabels ensures an object with matching labels is selected."

Note: This function appends passed data to existing values

obj spec.forProvider.datasetIdSelector.policy

"Policies for selection."

fn spec.forProvider.datasetIdSelector.policy.withResolution

withResolution(resolution)

"Resolution specifies whether resolution of this reference is required. The default is 'Required', which means the reconcile will fail if the reference cannot be resolved. 'Optional' means this reference will be a no-op if it cannot be resolved."

fn spec.forProvider.datasetIdSelector.policy.withResolve

withResolve(resolve)

"Resolve specifies when this reference should be resolved. The default is 'IfNotPresent', which will attempt to resolve the reference only when the corresponding field is not present. Use 'Always' to resolve the reference on every reconcile."

obj spec.forProvider.encryptionConfiguration

"Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below."

fn spec.forProvider.encryptionConfiguration.withKmsKeyName

withKmsKeyName(kmsKeyName)

"The self link or full name of a key which should be used to encrypt this table. Note that the default bigquery service account will need to have encrypt/decrypt permissions on this key - you may want to see the google_bigquery_default_service_account datasource and the google_kms_crypto_key_iam_binding resource."

obj spec.forProvider.externalDataConfiguration

"Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below."

fn spec.forProvider.externalDataConfiguration.withAutodetect

withAutodetect(autodetect)

"- Let BigQuery try to autodetect the schema and format of the table."

fn spec.forProvider.externalDataConfiguration.withAvroOptions

withAvroOptions(avroOptions)

"Additional options if source_format is set to \"AVRO\". Structure is documented below."

fn spec.forProvider.externalDataConfiguration.withAvroOptionsMixin

withAvroOptionsMixin(avroOptions)

"Additional options if source_format is set to \"AVRO\". Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.externalDataConfiguration.withCompression

withCompression(compression)

"The compression type of the data source. Valid values are \"NONE\" or \"GZIP\"."

fn spec.forProvider.externalDataConfiguration.withConnectionId

withConnectionId(connectionId)

"The connection specifying the credentials to be used to read external storage, such as Azure Blob, Cloud Storage, or S3. The connection_id can have the form {{project}}.{{location}}.{{connection_id}} or projects/{{project}}/locations/{{location}}/connections/{{connection_id}}."

fn spec.forProvider.externalDataConfiguration.withCsvOptions

withCsvOptions(csvOptions)

"Additional properties to set if source_format is set to \"CSV\". Structure is documented below."

fn spec.forProvider.externalDataConfiguration.withCsvOptionsMixin

withCsvOptionsMixin(csvOptions)

"Additional properties to set if source_format is set to \"CSV\". Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.externalDataConfiguration.withGoogleSheetsOptions

withGoogleSheetsOptions(googleSheetsOptions)

"Additional options if source_format is set to \"GOOGLE_SHEETS\". Structure is documented below."

fn spec.forProvider.externalDataConfiguration.withGoogleSheetsOptionsMixin

withGoogleSheetsOptionsMixin(googleSheetsOptions)

"Additional options if source_format is set to \"GOOGLE_SHEETS\". Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.externalDataConfiguration.withHivePartitioningOptions

withHivePartitioningOptions(hivePartitioningOptions)

"When set, configures hive partitioning support. Not all storage formats support hive partitioning -- requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification. Structure is documented below."

fn spec.forProvider.externalDataConfiguration.withHivePartitioningOptionsMixin

withHivePartitioningOptionsMixin(hivePartitioningOptions)

"When set, configures hive partitioning support. Not all storage formats support hive partitioning -- requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.externalDataConfiguration.withIgnoreUnknownValues

withIgnoreUnknownValues(ignoreUnknownValues)

"Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false."

fn spec.forProvider.externalDataConfiguration.withMaxBadRecords

withMaxBadRecords(maxBadRecords)

"The maximum number of bad records that BigQuery can ignore when reading data."

fn spec.forProvider.externalDataConfiguration.withReferenceFileSchemaUri

withReferenceFileSchemaUri(referenceFileSchemaUri)

"When creating an external table, the user can provide a reference file with the table schema. This is enabled for the following formats: AVRO, PARQUET, ORC."

fn spec.forProvider.externalDataConfiguration.withSchema

withSchema(schema)

"A JSON schema for the external table. Schema is required for CSV and JSON formats if autodetect is not on. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, Avro, Iceberg, ORC and Parquet formats. ~>NOTE: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn't changed. Furthermore drift for this field cannot not be detected because BigQuery only uses this schema to compute the effective schema for the table, therefore any changes on the configured value will force the table to be recreated. This schema is effectively only applied when creating a table from an external datasource, after creation the computed schema will be stored in google_bigquery_table.schema"

fn spec.forProvider.externalDataConfiguration.withSourceFormat

withSourceFormat(sourceFormat)

"The data format. Please see sourceFormat under ExternalDataConfiguration in Bigquery's public API documentation for supported formats. To use \"GOOGLE_SHEETS\" the scopes must include \"https://www.googleapis.com/auth/drive.readonly\"."

fn spec.forProvider.externalDataConfiguration.withSourceUris

withSourceUris(sourceUris)

"A list of the fully-qualified URIs that point to your data in Google Cloud."

fn spec.forProvider.externalDataConfiguration.withSourceUrisMixin

withSourceUrisMixin(sourceUris)

"A list of the fully-qualified URIs that point to your data in Google Cloud."

Note: This function appends passed data to existing values
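
A sketch of an external CSV table backed by Cloud Storage, composed from the functions documented above; the bucket and object names are hypothetical.

  table.spec.forProvider.externalDataConfiguration.withSourceFormat('CSV')
  + table.spec.forProvider.externalDataConfiguration.withAutodetect(true)
  + table.spec.forProvider.externalDataConfiguration.withSourceUris(['gs://my-bucket/data/*.csv'])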

obj spec.forProvider.externalDataConfiguration.avroOptions

"Additional options if source_format is set to \"AVRO\". Structure is documented below."

fn spec.forProvider.externalDataConfiguration.avroOptions.withUseAvroLogicalTypes

withUseAvroLogicalTypes(useAvroLogicalTypes)

"If is set to true, indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER)."

obj spec.forProvider.externalDataConfiguration.csvOptions

"Additional properties to set if source_format is set to \"CSV\". Structure is documented below."

fn spec.forProvider.externalDataConfiguration.csvOptions.withAllowJaggedRows

withAllowJaggedRows(allowJaggedRows)

"Indicates if BigQuery should accept rows that are missing trailing optional columns."

fn spec.forProvider.externalDataConfiguration.csvOptions.withAllowQuotedNewlines

withAllowQuotedNewlines(allowQuotedNewlines)

"Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false."

fn spec.forProvider.externalDataConfiguration.csvOptions.withEncoding

withEncoding(encoding)

"The character encoding of the data. The supported values are UTF-8 or ISO-8859-1."

fn spec.forProvider.externalDataConfiguration.csvOptions.withFieldDelimiter

withFieldDelimiter(fieldDelimiter)

"The separator for fields in a CSV file."

fn spec.forProvider.externalDataConfiguration.csvOptions.withQuote

withQuote(quote)

"The value that is used to quote data sections in a CSV file. If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allow_quoted_newlines property to true."

fn spec.forProvider.externalDataConfiguration.csvOptions.withSkipLeadingRows

withSkipLeadingRows(skipLeadingRows)

"The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of range or skip_leading_rows must be set."

obj spec.forProvider.externalDataConfiguration.googleSheetsOptions

"Additional options if source_format is set to \"GOOGLE_SHEETS\". Structure is documented below."

fn spec.forProvider.externalDataConfiguration.googleSheetsOptions.withRange

withRange(range)

"Information required to partition based on ranges. Structure is documented below."

fn spec.forProvider.externalDataConfiguration.googleSheetsOptions.withSkipLeadingRows

withSkipLeadingRows(skipLeadingRows)

"The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of range or skip_leading_rows must be set."

obj spec.forProvider.externalDataConfiguration.hivePartitioningOptions

"When set, configures hive partitioning support. Not all storage formats support hive partitioning -- requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification. Structure is documented below."

fn spec.forProvider.externalDataConfiguration.hivePartitioningOptions.withMode

withMode(mode)

"When set, what mode of hive partitioning to use when reading data. The following modes are supported."

fn spec.forProvider.externalDataConfiguration.hivePartitioningOptions.withRequirePartitionFilter

withRequirePartitionFilter(requirePartitionFilter)

"If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified."

fn spec.forProvider.externalDataConfiguration.hivePartitioningOptions.withSourceUriPrefix

withSourceUriPrefix(sourceUriPrefix)

"When hive partition detection is requested, a common for all source uris must be required. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/. Note that when mode is set to CUSTOM, you must encode the partition key schema within the source_uri_prefix by setting source_uri_prefix to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}."

obj spec.forProvider.materializedView

"If specified, configures this table as a materialized view. Structure is documented below."

fn spec.forProvider.materializedView.withEnableRefresh

withEnableRefresh(enableRefresh)

"Specifies whether to use BigQuery's automatic refresh for this materialized view when the base table is updated. The default value is true."

fn spec.forProvider.materializedView.withQuery

withQuery(query)

"A query whose result is persisted."

fn spec.forProvider.materializedView.withRefreshIntervalMs

withRefreshIntervalMs(refreshIntervalMs)

"The maximum frequency at which this materialized view will be refreshed. The default value is 1800000"

obj spec.forProvider.rangePartitioning

"If specified, configures range-based partitioning for this table. Structure is documented below."

fn spec.forProvider.rangePartitioning.withField

withField(field)

"The field used to determine how to create a range-based partition."

fn spec.forProvider.rangePartitioning.withRange

withRange(range)

"Information required to partition based on ranges. Structure is documented below."

fn spec.forProvider.rangePartitioning.withRangeMixin

withRangeMixin(range)

"Information required to partition based on ranges. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.forProvider.rangePartitioning.range

"Information required to partition based on ranges. Structure is documented below."

fn spec.forProvider.rangePartitioning.range.withEnd

withEnd(end)

"End of the range partitioning, exclusive."

fn spec.forProvider.rangePartitioning.range.withInterval

withInterval(interval)

"The width of each range within the partition."

fn spec.forProvider.rangePartitioning.range.withStart

withStart(start)

"Start of the range partitioning, inclusive."

obj spec.forProvider.timePartitioning

"If specified, configures time-based partitioning for this table. Structure is documented below."

fn spec.forProvider.timePartitioning.withExpirationMs

withExpirationMs(expirationMs)

"Number of milliseconds for which to keep the storage for a partition."

fn spec.forProvider.timePartitioning.withField

withField(field)

"The field used to determine how to create a time-based partition. If time-based partitioning is enabled without this value, the table is partitioned based on the load time."

fn spec.forProvider.timePartitioning.withRequirePartitionFilter

withRequirePartitionFilter(requirePartitionFilter)

"If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified."

fn spec.forProvider.timePartitioning.withType

withType(type)

"The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively."

obj spec.forProvider.view

"If specified, configures this table as a view. Structure is documented below."

fn spec.forProvider.view.withQuery

withQuery(query)

"A query that BigQuery executes when the view is referenced."

fn spec.forProvider.view.withUseLegacySql

withUseLegacySql(useLegacySql)

"Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL."

obj spec.initProvider

"THIS IS AN ALPHA FIELD. Do not use it in production. It is not honored unless the relevant Crossplane feature flag is enabled, and may be changed or removed without notice. InitProvider holds the same fields as ForProvider, with the exception of Identifier and other resource reference fields. The fields that are in InitProvider are merged into ForProvider when the resource is created. The same fields are also added to the terraform ignore_changes hook, to avoid updating them after creation. This is useful for fields that are required on creation, but we do not desire to update them after creation, for example because of an external controller is managing them, like an autoscaler."

fn spec.initProvider.withClustering

withClustering(clustering)

"Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order."

fn spec.initProvider.withClusteringMixin

withClusteringMixin(clustering)

"Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order."

Note: This function appends passed data to existing values

fn spec.initProvider.withDeletionProtection

withDeletionProtection(deletionProtection)

fn spec.initProvider.withDescription

withDescription(description)

"The field description."

fn spec.initProvider.withEncryptionConfiguration

withEncryptionConfiguration(encryptionConfiguration)

"Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below."

fn spec.initProvider.withEncryptionConfigurationMixin

withEncryptionConfigurationMixin(encryptionConfiguration)

"Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.withExpirationTime

withExpirationTime(expirationTime)

"The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed."

fn spec.initProvider.withExternalDataConfiguration

withExternalDataConfiguration(externalDataConfiguration)

"Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below."

fn spec.initProvider.withExternalDataConfigurationMixin

withExternalDataConfigurationMixin(externalDataConfiguration)

"Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.withFriendlyName

withFriendlyName(friendlyName)

"A descriptive name for the table."

fn spec.initProvider.withLabels

withLabels(labels)

"A mapping of labels to assign to the resource."

fn spec.initProvider.withLabelsMixin

withLabelsMixin(labels)

"A mapping of labels to assign to the resource."

Note: This function appends passed data to existing values

fn spec.initProvider.withMaterializedView

withMaterializedView(materializedView)

"If specified, configures this table as a materialized view. Structure is documented below."

fn spec.initProvider.withMaterializedViewMixin

withMaterializedViewMixin(materializedView)

"If specified, configures this table as a materialized view. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.withRangePartitioning

withRangePartitioning(rangePartitioning)

"If specified, configures range-based partitioning for this table. Structure is documented below."

fn spec.initProvider.withRangePartitioningMixin

withRangePartitioningMixin(rangePartitioning)

"If specified, configures range-based partitioning for this table. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.withSchema

withSchema(schema)

"A JSON schema for the table."

fn spec.initProvider.withTimePartitioning

withTimePartitioning(timePartitioning)

"If specified, configures time-based partitioning for this table. Structure is documented below."

fn spec.initProvider.withTimePartitioningMixin

withTimePartitioningMixin(timePartitioning)

"If specified, configures time-based partitioning for this table. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.withView

withView(view)

"If specified, configures this table as a view. Structure is documented below."

fn spec.initProvider.withViewMixin

withViewMixin(view)

"If specified, configures this table as a view. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.initProvider.encryptionConfiguration

"Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below."

fn spec.initProvider.encryptionConfiguration.withKmsKeyName

withKmsKeyName(kmsKeyName)

"The self link or full name of a key which should be used to encrypt this table. Note that the default bigquery service account will need to have encrypt/decrypt permissions on this key - you may want to see the google_bigquery_default_service_account datasource and the google_kms_crypto_key_iam_binding resource."

obj spec.initProvider.externalDataConfiguration

"Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below."

fn spec.initProvider.externalDataConfiguration.withAutodetect

withAutodetect(autodetect)

"- Let BigQuery try to autodetect the schema and format of the table."

fn spec.initProvider.externalDataConfiguration.withAvroOptions

withAvroOptions(avroOptions)

"Additional options if source_format is set to \"AVRO\". Structure is documented below."

fn spec.initProvider.externalDataConfiguration.withAvroOptionsMixin

withAvroOptionsMixin(avroOptions)

"Additional options if source_format is set to \"AVRO\". Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.externalDataConfiguration.withCompression

withCompression(compression)

"The compression type of the data source. Valid values are \"NONE\" or \"GZIP\"."

fn spec.initProvider.externalDataConfiguration.withConnectionId

withConnectionId(connectionId)

"The connection specifying the credentials to be used to read external storage, such as Azure Blob, Cloud Storage, or S3. The connection_id can have the form {{project}}.{{location}}.{{connection_id}} or projects/{{project}}/locations/{{location}}/connections/{{connection_id}}."

fn spec.initProvider.externalDataConfiguration.withCsvOptions

withCsvOptions(csvOptions)

"Additional properties to set if source_format is set to \"CSV\". Structure is documented below."

fn spec.initProvider.externalDataConfiguration.withCsvOptionsMixin

withCsvOptionsMixin(csvOptions)

"Additional properties to set if source_format is set to \"CSV\". Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.externalDataConfiguration.withGoogleSheetsOptions

withGoogleSheetsOptions(googleSheetsOptions)

"Additional options if source_format is set to \"GOOGLE_SHEETS\". Structure is documented below."

fn spec.initProvider.externalDataConfiguration.withGoogleSheetsOptionsMixin

withGoogleSheetsOptionsMixin(googleSheetsOptions)

"Additional options if source_format is set to \"GOOGLE_SHEETS\". Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.externalDataConfiguration.withHivePartitioningOptions

withHivePartitioningOptions(hivePartitioningOptions)

"When set, configures hive partitioning support. Not all storage formats support hive partitioning -- requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification. Structure is documented below."

fn spec.initProvider.externalDataConfiguration.withHivePartitioningOptionsMixin

withHivePartitioningOptionsMixin(hivePartitioningOptions)

"When set, configures hive partitioning support. Not all storage formats support hive partitioning -- requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.externalDataConfiguration.withIgnoreUnknownValues

withIgnoreUnknownValues(ignoreUnknownValues)

"Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false."

fn spec.initProvider.externalDataConfiguration.withMaxBadRecords

withMaxBadRecords(maxBadRecords)

"The maximum number of bad records that BigQuery can ignore when reading data."

fn spec.initProvider.externalDataConfiguration.withReferenceFileSchemaUri

withReferenceFileSchemaUri(referenceFileSchemaUri)

"When creating an external table, the user can provide a reference file with the table schema. This is enabled for the following formats: AVRO, PARQUET, ORC."

fn spec.initProvider.externalDataConfiguration.withSchema

withSchema(schema)

"A JSON schema for the external table. Schema is required for CSV and JSON formats if autodetect is not on. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, Avro, Iceberg, ORC and Parquet formats. ~>NOTE: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn't changed. Furthermore drift for this field cannot not be detected because BigQuery only uses this schema to compute the effective schema for the table, therefore any changes on the configured value will force the table to be recreated. This schema is effectively only applied when creating a table from an external datasource, after creation the computed schema will be stored in google_bigquery_table.schema"

fn spec.initProvider.externalDataConfiguration.withSourceFormat

withSourceFormat(sourceFormat)

"The data format. Please see sourceFormat under ExternalDataConfiguration in Bigquery's public API documentation for supported formats. To use \"GOOGLE_SHEETS\" the scopes must include \"https://www.googleapis.com/auth/drive.readonly\"."

fn spec.initProvider.externalDataConfiguration.withSourceUris

withSourceUris(sourceUris)

"A list of the fully-qualified URIs that point to your data in Google Cloud."

fn spec.initProvider.externalDataConfiguration.withSourceUrisMixin

withSourceUrisMixin(sourceUris)

"A list of the fully-qualified URIs that point to your data in Google Cloud."

Note: This function appends passed data to existing values

obj spec.initProvider.externalDataConfiguration.avroOptions

"Additional options if source_format is set to \"AVRO\". Structure is documented below."

fn spec.initProvider.externalDataConfiguration.avroOptions.withUseAvroLogicalTypes

withUseAvroLogicalTypes(useAvroLogicalTypes)

"If is set to true, indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER)."

obj spec.initProvider.externalDataConfiguration.csvOptions

"Additional properties to set if source_format is set to \"CSV\". Structure is documented below."

fn spec.initProvider.externalDataConfiguration.csvOptions.withAllowJaggedRows

withAllowJaggedRows(allowJaggedRows)

"Indicates if BigQuery should accept rows that are missing trailing optional columns."

fn spec.initProvider.externalDataConfiguration.csvOptions.withAllowQuotedNewlines

withAllowQuotedNewlines(allowQuotedNewlines)

"Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false."

fn spec.initProvider.externalDataConfiguration.csvOptions.withEncoding

withEncoding(encoding)

"The character encoding of the data. The supported values are UTF-8 or ISO-8859-1."

fn spec.initProvider.externalDataConfiguration.csvOptions.withFieldDelimiter

withFieldDelimiter(fieldDelimiter)

"The separator for fields in a CSV file."

fn spec.initProvider.externalDataConfiguration.csvOptions.withQuote

withQuote(quote)

"The value that is used to quote data sections in a CSV file. If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allow_quoted_newlines property to true."

fn spec.initProvider.externalDataConfiguration.csvOptions.withSkipLeadingRows

withSkipLeadingRows(skipLeadingRows)

"The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of range or skip_leading_rows must be set."

obj spec.initProvider.externalDataConfiguration.googleSheetsOptions

"Additional options if source_format is set to \"GOOGLE_SHEETS\". Structure is documented below."

fn spec.initProvider.externalDataConfiguration.googleSheetsOptions.withRange

withRange(range)

"Information required to partition based on ranges. Structure is documented below."

fn spec.initProvider.externalDataConfiguration.googleSheetsOptions.withSkipLeadingRows

withSkipLeadingRows(skipLeadingRows)

"The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of range or skip_leading_rows must be set."

obj spec.initProvider.externalDataConfiguration.hivePartitioningOptions

"When set, configures hive partitioning support. Not all storage formats support hive partitioning -- requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification. Structure is documented below."

fn spec.initProvider.externalDataConfiguration.hivePartitioningOptions.withMode

withMode(mode)

"When set, what mode of hive partitioning to use when reading data. The following modes are supported."

fn spec.initProvider.externalDataConfiguration.hivePartitioningOptions.withRequirePartitionFilter

withRequirePartitionFilter(requirePartitionFilter)

"If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified."

fn spec.initProvider.externalDataConfiguration.hivePartitioningOptions.withSourceUriPrefix

withSourceUriPrefix(sourceUriPrefix)

"When hive partition detection is requested, a common for all source uris must be required. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/. Note that when mode is set to CUSTOM, you must encode the partition key schema within the source_uri_prefix by setting source_uri_prefix to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}."

obj spec.initProvider.materializedView

"If specified, configures this table as a materialized view. Structure is documented below."

fn spec.initProvider.materializedView.withEnableRefresh

withEnableRefresh(enableRefresh)

"Specifies whether to use BigQuery's automatic refresh for this materialized view when the base table is updated. The default value is true."

fn spec.initProvider.materializedView.withQuery

withQuery(query)

"A query whose result is persisted."

fn spec.initProvider.materializedView.withRefreshIntervalMs

withRefreshIntervalMs(refreshIntervalMs)

"The maximum frequency at which this materialized view will be refreshed. The default value is 1800000"

obj spec.initProvider.rangePartitioning

"If specified, configures range-based partitioning for this table. Structure is documented below."

fn spec.initProvider.rangePartitioning.withField

withField(field)

"The field used to determine how to create a range-based partition."

fn spec.initProvider.rangePartitioning.withRange

withRange(range)

"Information required to partition based on ranges. Structure is documented below."

fn spec.initProvider.rangePartitioning.withRangeMixin

withRangeMixin(range)

"Information required to partition based on ranges. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.initProvider.rangePartitioning.range

"Information required to partition based on ranges. Structure is documented below."

fn spec.initProvider.rangePartitioning.range.withEnd

withEnd(end)

"End of the range partitioning, exclusive."

fn spec.initProvider.rangePartitioning.range.withInterval

withInterval(interval)

"The width of each range within the partition."

fn spec.initProvider.rangePartitioning.range.withStart

withStart(start)

"Start of the range partitioning, inclusive."

obj spec.initProvider.timePartitioning

"If specified, configures time-based partitioning for this table. Structure is documented below."

fn spec.initProvider.timePartitioning.withExpirationMs

withExpirationMs(expirationMs)

"Number of milliseconds for which to keep the storage for a partition."

fn spec.initProvider.timePartitioning.withField

withField(field)

"The field used to determine how to create a time-based partition. If time-based partitioning is enabled without this value, the table is partitioned based on the load time."

fn spec.initProvider.timePartitioning.withRequirePartitionFilter

withRequirePartitionFilter(requirePartitionFilter)

"If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified."

fn spec.initProvider.timePartitioning.withType

withType(type)

"The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively."

obj spec.initProvider.view

"If specified, configures this table as a view. Structure is documented below."

fn spec.initProvider.view.withQuery

withQuery(query)

"A query that BigQuery executes when the view is referenced."

fn spec.initProvider.view.withUseLegacySql

withUseLegacySql(useLegacySql)

"Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL."

obj spec.providerConfigRef

"ProviderConfigReference specifies how the provider that will be used to create, observe, update, and delete this managed resource should be configured."

fn spec.providerConfigRef.withName

withName(name)

"Name of the referenced object."

obj spec.providerConfigRef.policy

"Policies for referencing."

fn spec.providerConfigRef.policy.withResolution

withResolution(resolution)

"Resolution specifies whether resolution of this reference is required. The default is 'Required', which means the reconcile will fail if the reference cannot be resolved. 'Optional' means this reference will be a no-op if it cannot be resolved."

fn spec.providerConfigRef.policy.withResolve

withResolve(resolve)

"Resolve specifies when this reference should be resolved. The default is 'IfNotPresent', which will attempt to resolve the reference only when the corresponding field is not present. Use 'Always' to resolve the reference on every reconcile."

obj spec.providerRef

"ProviderReference specifies the provider that will be used to create, observe, update, and delete this managed resource. Deprecated: Please use ProviderConfigReference, i.e. providerConfigRef"

fn spec.providerRef.withName

withName(name)

"Name of the referenced object."

obj spec.providerRef.policy

"Policies for referencing."

fn spec.providerRef.policy.withResolution

withResolution(resolution)

"Resolution specifies whether resolution of this reference is required. The default is 'Required', which means the reconcile will fail if the reference cannot be resolved. 'Optional' means this reference will be a no-op if it cannot be resolved."

fn spec.providerRef.policy.withResolve

withResolve(resolve)

"Resolve specifies when this reference should be resolved. The default is 'IfNotPresent', which will attempt to resolve the reference only when the corresponding field is not present. Use 'Always' to resolve the reference on every reconcile."

obj spec.publishConnectionDetailsTo

"PublishConnectionDetailsTo specifies the connection secret config which contains a name, metadata and a reference to secret store config to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource."

fn spec.publishConnectionDetailsTo.withName

withName(name)

"Name is the name of the connection secret."

obj spec.publishConnectionDetailsTo.configRef

"SecretStoreConfigRef specifies which secret store config should be used for this ConnectionSecret."

fn spec.publishConnectionDetailsTo.configRef.withName

withName(name)

"Name of the referenced object."

obj spec.publishConnectionDetailsTo.configRef.policy

"Policies for referencing."

fn spec.publishConnectionDetailsTo.configRef.policy.withResolution

withResolution(resolution)

"Resolution specifies whether resolution of this reference is required. The default is 'Required', which means the reconcile will fail if the reference cannot be resolved. 'Optional' means this reference will be a no-op if it cannot be resolved."

fn spec.publishConnectionDetailsTo.configRef.policy.withResolve

withResolve(resolve)

"Resolve specifies when this reference should be resolved. The default is 'IfNotPresent', which will attempt to resolve the reference only when the corresponding field is not present. Use 'Always' to resolve the reference on every reconcile."

obj spec.publishConnectionDetailsTo.metadata

"Metadata is the metadata for connection secret."

fn spec.publishConnectionDetailsTo.metadata.withAnnotations

withAnnotations(annotations)

"Annotations are the annotations to be added to connection secret. - For Kubernetes secrets, this will be used as \"metadata.annotations\". - It is up to Secret Store implementation for others store types."

fn spec.publishConnectionDetailsTo.metadata.withAnnotationsMixin

withAnnotationsMixin(annotations)

"Annotations are the annotations to be added to connection secret. - For Kubernetes secrets, this will be used as \"metadata.annotations\". - It is up to Secret Store implementation for others store types."

Note: This function appends passed data to existing values

fn spec.publishConnectionDetailsTo.metadata.withLabels

withLabels(labels)

"Labels are the labels/tags to be added to connection secret. - For Kubernetes secrets, this will be used as \"metadata.labels\". - It is up to Secret Store implementation for others store types."

fn spec.publishConnectionDetailsTo.metadata.withLabelsMixin

withLabelsMixin(labels)

"Labels are the labels/tags to be added to connection secret. - For Kubernetes secrets, this will be used as \"metadata.labels\". - It is up to Secret Store implementation for others store types."

Note: This function appends passed data to existing values

fn spec.publishConnectionDetailsTo.metadata.withType

withType(type)

"Type is the SecretType for the connection secret. - Only valid for Kubernetes Secret Stores."

obj spec.writeConnectionSecretToRef

"WriteConnectionSecretToReference specifies the namespace and name of a Secret to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource. This field is planned to be replaced in a future release in favor of PublishConnectionDetailsTo. Currently, both could be set independently and connection details would be published to both without affecting each other."

fn spec.writeConnectionSecretToRef.withName

withName(name)

"Name of the secret."

fn spec.writeConnectionSecretToRef.withNamespace

withNamespace(namespace)

"Namespace of the secret."