monitoring.v1beta1.alertPolicy

"AlertPolicy is the Schema for the AlertPolicys API. A description of the conditions under which some aspect of your system is considered to be \"unhealthy\" and the ways to notify people or services about this state."

Index

Fields

fn new

new(name)

new returns an instance of AlertPolicy
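A minimal sketch of how the constructor is typically used. The import path below is an assumption; adjust it to wherever this generated library is vendored in your project.

```jsonnet
// Assumed import path; adjust to wherever this generated library is vendored.
local alertPolicy = (import 'provider-gcp/main.libsonnet').monitoring.v1beta1.alertPolicy;

{
  // new() sets apiVersion, kind and metadata.name; everything else is layered
  // on with the with* functions documented below.
  policy: alertPolicy.new('example-alert-policy'),
}
```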

obj metadata

"ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create."

fn metadata.withAnnotations

withAnnotations(annotations)

"Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations"

fn metadata.withAnnotationsMixin

withAnnotationsMixin(annotations)

"Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations"

Note: This function appends passed data to existing values
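The plain setters and their *Mixin counterparts differ only in whether existing values are replaced or merged. A hedged sketch, using the same assumed import path as above:

```jsonnet
// Assumed import path, as above.
local alertPolicy = (import 'provider-gcp/main.libsonnet').monitoring.v1beta1.alertPolicy;

alertPolicy.new('example-alert-policy')
// withAnnotations replaces whatever annotations were set before this point ...
+ alertPolicy.metadata.withAnnotations({ 'example.com/owner': 'sre' })
// ... while withAnnotationsMixin merges additional keys into the existing map.
+ alertPolicy.metadata.withAnnotationsMixin({ 'example.com/runbook': 'https://runbooks.example.com/alerting' })
```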

fn metadata.withClusterName

withClusterName(clusterName)

"The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request."

fn metadata.withCreationTimestamp

withCreationTimestamp(creationTimestamp)

"Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers."

fn metadata.withDeletionGracePeriodSeconds

withDeletionGracePeriodSeconds(deletionGracePeriodSeconds)

"Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only."

fn metadata.withDeletionTimestamp

withDeletionTimestamp(deletionTimestamp)

"Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers."

fn metadata.withFinalizers

withFinalizers(finalizers)

"Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list."

fn metadata.withFinalizersMixin

withFinalizersMixin(finalizers)

"Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list."

Note: This function appends passed data to existing values

fn metadata.withGenerateName

withGenerateName(generateName)

"GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n\nIf this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).\n\nApplied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency"

fn metadata.withGeneration

withGeneration(generation)

"A sequence number representing a specific generation of the desired state. Populated by the system. Read-only."

fn metadata.withLabels

withLabels(labels)

"Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels"

fn metadata.withLabelsMixin

withLabelsMixin(labels)

"Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels"

Note: This function appends passed data to existing values

fn metadata.withName

withName(name)

"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names"

fn metadata.withNamespace

withNamespace(namespace)

"Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.\n\nMust be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces"

fn metadata.withOwnerReferences

withOwnerReferences(ownerReferences)

"List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller."

fn metadata.withOwnerReferencesMixin

withOwnerReferencesMixin(ownerReferences)

"List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller."

Note: This function appends passed data to existing values

fn metadata.withResourceVersion

withResourceVersion(resourceVersion)

"An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n\nPopulated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency"

fn metadata.withSelfLink

withSelfLink(selfLink)

"SelfLink is a URL representing this object. Populated by the system. Read-only.\n\nDEPRECATED Kubernetes will stop propagating this field in 1.20 release and the field is planned to be removed in 1.21 release."

fn metadata.withUid

withUid(uid)

"UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.\n\nPopulated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids"

obj spec

"AlertPolicySpec defines the desired state of AlertPolicy"

fn spec.withDeletionPolicy

withDeletionPolicy(deletionPolicy)

"DeletionPolicy specifies what will happen to the underlying external when this managed resource is deleted - either \"Delete\" or \"Orphan\" the external resource. This field is planned to be deprecated in favor of the ManagementPolicies field in a future release. Currently, both could be set independently and non-default values would be honored if the feature flag is enabled. See the design doc for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223"

fn spec.withManagementPolicies

withManagementPolicies(managementPolicies)

"THIS IS AN ALPHA FIELD. Do not use it in production. It is not honored unless the relevant Crossplane feature flag is enabled, and may be changed or removed without notice. ManagementPolicies specify the array of actions Crossplane is allowed to take on the managed and external resources. This field is planned to replace the DeletionPolicy field in a future release. Currently, both could be set independently and non-default values would be honored if the feature flag is enabled. If both are custom, the DeletionPolicy field will be ignored. See the design doc for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223 and this one: https://github.com/crossplane/crossplane/blob/444267e84783136daa93568b364a5f01228cacbe/design/one-pager-ignore-changes.md"

fn spec.withManagementPoliciesMixin

withManagementPoliciesMixin(managementPolicies)

"THIS IS AN ALPHA FIELD. Do not use it in production. It is not honored unless the relevant Crossplane feature flag is enabled, and may be changed or removed without notice. ManagementPolicies specify the array of actions Crossplane is allowed to take on the managed and external resources. This field is planned to replace the DeletionPolicy field in a future release. Currently, both could be set independently and non-default values would be honored if the feature flag is enabled. If both are custom, the DeletionPolicy field will be ignored. See the design doc for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223 and this one: https://github.com/crossplane/crossplane/blob/444267e84783136daa93568b364a5f01228cacbe/design/one-pager-ignore-changes.md"

Note: This function appends passed data to existing values
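A short sketch of the spec-level knobs above, assuming the same placeholder import path; 'Orphan' and 'Observe' are standard Crossplane values, but verify them against your Crossplane version.

```jsonnet
// Assumed import path, as above.
local alertPolicy = (import 'provider-gcp/main.libsonnet').monitoring.v1beta1.alertPolicy;

alertPolicy.new('example-alert-policy')
// Keep the external Cloud Monitoring policy when this managed resource is deleted.
+ alertPolicy.spec.withDeletionPolicy('Orphan')
// With the relevant feature flag enabled, an observe-only policy could instead use:
// + alertPolicy.spec.withManagementPolicies(['Observe'])
```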

obj spec.forProvider

fn spec.forProvider.withAlertStrategy

withAlertStrategy(alertStrategy)

"Control over how this alert policy's notification channels are notified. Structure is documented below."

fn spec.forProvider.withAlertStrategyMixin

withAlertStrategyMixin(alertStrategy)

"Control over how this alert policy's notification channels are notified. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.withCombiner

withCombiner(combiner)

"How to combine the results of multiple conditions to determine if an incident should be opened. Possible values are: AND, OR, AND_WITH_MATCHING_RESOURCE."

fn spec.forProvider.withConditions

withConditions(conditions)

"A list of conditions for the policy. The conditions are combined by AND or OR according to the combiner field. If the combined conditions evaluate to true, then an incident is created. A policy can have from one to six conditions. Structure is documented below."

fn spec.forProvider.withConditionsMixin

withConditionsMixin(conditions)

"A list of conditions for the policy. The conditions are combined by AND or OR according to the combiner field. If the combined conditions evaluate to true, then an incident is created. A policy can have from one to six conditions. Structure is documented below."

Note: This function appends passed data to existing values
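A sketch of wiring the combiner and conditions together, assuming the usual jsonnet-libs convention that the nested conditions.* builders (documented further down) return list-item fragments that are passed to withConditions:

```jsonnet
// Assumed import path, as above.
local alertPolicy = (import 'provider-gcp/main.libsonnet').monitoring.v1beta1.alertPolicy;
local conditions = alertPolicy.spec.forProvider.conditions;

alertPolicy.new('error-rate')
+ alertPolicy.spec.forProvider.withDisplayName('Error rate')
+ alertPolicy.spec.forProvider.withCombiner('OR')
+ alertPolicy.spec.forProvider.withConditions([
  // A real condition also needs one of the condition* blocks shown further down
  // (conditionThreshold, conditionAbsent, conditionMatchedLog, ...).
  conditions.withDisplayName('Error rate above threshold'),
])
```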

fn spec.forProvider.withDisplayName

withDisplayName(displayName)

"A short name or phrase used to identify the policy in dashboards, notifications, and incidents. To avoid confusion, don't use the same display name for multiple policies in the same project. The name is limited to 512 Unicode characters."

fn spec.forProvider.withDocumentation

withDocumentation(documentation)

"Documentation that is included with notifications and incidents related to this policy. Best practice is for the documentation to include information to help responders understand, mitigate, escalate, and correct the underlying problems detected by the alerting policy. Notification channels that have limited capacity might not show this documentation. Structure is documented below."

fn spec.forProvider.withDocumentationMixin

withDocumentationMixin(documentation)

"Documentation that is included with notifications and incidents related to this policy. Best practice is for the documentation to include information to help responders understand, mitigate, escalate, and correct the underlying problems detected by the alerting policy. Notification channels that have limited capacity might not show this documentation. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.withEnabled

withEnabled(enabled)

"Whether or not the policy is enabled. The default is true."

fn spec.forProvider.withNotificationChannels

withNotificationChannels(notificationChannels)

"Identifies the notification channels to which notifications should be sent when incidents are opened or closed or when new violations occur on an already opened incident. Each element of this array corresponds to the name field in each of the NotificationChannel objects that are returned from the notificationChannels.list method. The syntax of the entries in this field is projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]"

fn spec.forProvider.withNotificationChannelsMixin

withNotificationChannelsMixin(notificationChannels)

"Identifies the notification channels to which notifications should be sent when incidents are opened or closed or when new violations occur on an already opened incident. Each element of this array corresponds to the name field in each of the NotificationChannel objects that are returned from the notificationChannels.list method. The syntax of the entries in this field is projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]"

Note: This function appends passed data to existing values
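For illustration, setting and then appending channels; the project and channel IDs are placeholders.

```jsonnet
// Assumed import path, as above.
local alertPolicy = (import 'provider-gcp/main.libsonnet').monitoring.v1beta1.alertPolicy;

// Channel names follow projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID].
alertPolicy.spec.forProvider.withNotificationChannels([
  'projects/my-project/notificationChannels/1234567890',
])
// Append another channel without replacing the list above.
+ alertPolicy.spec.forProvider.withNotificationChannelsMixin([
  'projects/my-project/notificationChannels/0987654321',
])
```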

fn spec.forProvider.withProject

withProject(project)

"The ID of the project in which the resource belongs. If it is not provided, the provider project is used."

fn spec.forProvider.withUserLabels

withUserLabels(userLabels)

"This field is intended to be used for organizing and identifying the AlertPolicy objects.The field can contain up to 64 entries. Each key and value is limited to 63 Unicode characters or 128 bytes, whichever is smaller. Labels and values can contain only lowercase letters, numerals, underscores, and dashes. Keys must begin with a letter."

fn spec.forProvider.withUserLabelsMixin

withUserLabelsMixin(userLabels)

"This field is intended to be used for organizing and identifying the AlertPolicy objects.The field can contain up to 64 entries. Each key and value is limited to 63 Unicode characters or 128 bytes, whichever is smaller. Labels and values can contain only lowercase letters, numerals, underscores, and dashes. Keys must begin with a letter."

Note: This function appends passed data to existing values
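A small sketch of user labels (placeholder keys and values), using the same assumed import path:

```jsonnet
// Assumed import path, as above.
local alertPolicy = (import 'provider-gcp/main.libsonnet').monitoring.v1beta1.alertPolicy;

// Keys and values are restricted to lowercase letters, numerals, underscores and dashes.
alertPolicy.spec.forProvider.withUserLabels({ severity: 'page', team: 'platform' })
```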

obj spec.forProvider.alertStrategy

"Control over how this alert policy's notification channels are notified. Structure is documented below."

fn spec.forProvider.alertStrategy.withAutoClose

withAutoClose(autoClose)

"If an alert policy that was active has no data for this long, any open incidents will close."

fn spec.forProvider.alertStrategy.withNotificationChannelStrategy

withNotificationChannelStrategy(notificationChannelStrategy)

"Control over how the notification channels in notification_channels are notified when this alert fires, on a per-channel basis. Structure is documented below."

fn spec.forProvider.alertStrategy.withNotificationChannelStrategyMixin

withNotificationChannelStrategyMixin(notificationChannelStrategy)

"Control over how the notification channels in notification_channels are notified when this alert fires, on a per-channel basis. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.alertStrategy.withNotificationRateLimit

withNotificationRateLimit(notificationRateLimit)

"Required for alert policies with a LogMatch condition. This limit is not implemented for alert policies that are not log-based. Structure is documented below."

fn spec.forProvider.alertStrategy.withNotificationRateLimitMixin

withNotificationRateLimitMixin(notificationRateLimit)

"Required for alert policies with a LogMatch condition. This limit is not implemented for alert policies that are not log-based. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.forProvider.alertStrategy.notificationChannelStrategy

"Control over how the notification channels in notification_channels are notified when this alert fires, on a per-channel basis. Structure is documented below."

fn spec.forProvider.alertStrategy.notificationChannelStrategy.withNotificationChannelNames

withNotificationChannelNames(notificationChannelNames)

"The notification channels that these settings apply to. Each of these correspond to the name field in one of the NotificationChannel objects referenced in the notification_channels field of this AlertPolicy. The format is projects/[PROJECT_ID_OR_NUMBER]/notificationChannels/[CHANNEL_ID]"

fn spec.forProvider.alertStrategy.notificationChannelStrategy.withNotificationChannelNamesMixin

withNotificationChannelNamesMixin(notificationChannelNames)

"The notification channels that these settings apply to. Each of these correspond to the name field in one of the NotificationChannel objects referenced in the notification_channels field of this AlertPolicy. The format is projects/[PROJECT_ID_OR_NUMBER]/notificationChannels/[CHANNEL_ID]"

Note: This function appends passed data to existing values

fn spec.forProvider.alertStrategy.notificationChannelStrategy.withRenotifyInterval

withRenotifyInterval(renotifyInterval)

"The frequency at which to send reminder notifications for open incidents."

obj spec.forProvider.alertStrategy.notificationRateLimit

"Required for alert policies with a LogMatch condition. This limit is not implemented for alert policies that are not log-based. Structure is documented below."

fn spec.forProvider.alertStrategy.notificationRateLimit.withPeriod

withPeriod(period)

"Not more than one notification per period."

obj spec.forProvider.conditions

"A list of conditions for the policy. The conditions are combined by AND or OR according to the combiner field. If the combined conditions evaluate to true, then an incident is created. A policy can have from one to six conditions. Structure is documented below."

fn spec.forProvider.conditions.withConditionAbsent

withConditionAbsent(conditionAbsent)

"A condition that checks that a time series continues to receive new data points. Structure is documented below."

fn spec.forProvider.conditions.withConditionAbsentMixin

withConditionAbsentMixin(conditionAbsent)

"A condition that checks that a time series continues to receive new data points. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.conditions.withConditionMatchedLog

withConditionMatchedLog(conditionMatchedLog)

"A condition that checks for log messages matching given constraints. If set, no other conditions can be present. Structure is documented below."

fn spec.forProvider.conditions.withConditionMatchedLogMixin

withConditionMatchedLogMixin(conditionMatchedLog)

"A condition that checks for log messages matching given constraints. If set, no other conditions can be present. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.conditions.withConditionMonitoringQueryLanguage

withConditionMonitoringQueryLanguage(conditionMonitoringQueryLanguage)

"A Monitoring Query Language query that outputs a boolean stream Structure is documented below."

fn spec.forProvider.conditions.withConditionMonitoringQueryLanguageMixin

withConditionMonitoringQueryLanguageMixin(conditionMonitoringQueryLanguage)

"A Monitoring Query Language query that outputs a boolean stream Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.conditions.withConditionThreshold

withConditionThreshold(conditionThreshold)

"A condition that compares a time series against a threshold. Structure is documented below."

fn spec.forProvider.conditions.withConditionThresholdMixin

withConditionThresholdMixin(conditionThreshold)

"A condition that compares a time series against a threshold. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.conditions.withDisplayName

withDisplayName(displayName)

"A short name or phrase used to identify the condition in dashboards, notifications, and incidents. To avoid confusion, don't use the same display name for multiple conditions in the same policy."

obj spec.forProvider.conditions.conditionAbsent

"A condition that checks that a time series continues to receive new data points. Structure is documented below."

fn spec.forProvider.conditions.conditionAbsent.withAggregations

withAggregations(aggregations)

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.forProvider.conditions.conditionAbsent.withAggregationsMixin

withAggregationsMixin(aggregations)

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.conditions.conditionAbsent.withDuration

withDuration(duration)

"The amount of time that a time series must violate the threshold to be considered failing. Currently, only values that are a multiple of a minute--e.g., 0, 60, 120, or 300 seconds--are supported. If an invalid value is given, an error will be returned. When choosing a duration, it is useful to keep in mind the frequency of the underlying time series data (which may also be affected by any alignments specified in the aggregations field); a good duration is long enough so that a single outlier does not generate spurious alerts, but short enough that unhealthy states are detected and alerted on quickly."

fn spec.forProvider.conditions.conditionAbsent.withFilter

withFilter(filter)

"A filter that identifies which time series should be compared with the threshold.The filter is similar to the one that is specified in the MetricService.ListTimeSeries request (that call is useful to verify the time series that will be retrieved / processed) and must specify the metric type and optionally may contain restrictions on resource type, resource labels, and metric labels. This field may not exceed 2048 Unicode characters in length."

fn spec.forProvider.conditions.conditionAbsent.withTrigger

withTrigger(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.forProvider.conditions.conditionAbsent.withTriggerMixin

withTriggerMixin(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.forProvider.conditions.conditionAbsent.aggregations

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.forProvider.conditions.conditionAbsent.aggregations.withAlignmentPeriod

withAlignmentPeriod(alignmentPeriod)

"The alignment period for per-time series alignment. If present, alignmentPeriod must be at least 60 seconds. After per-time series alignment, each time series will contain data points only on the period boundaries. If perSeriesAligner is not specified or equals ALIGN_NONE, then this field is ignored. If perSeriesAligner is specified and does not equal ALIGN_NONE, then this field must be defined; otherwise an error is returned."

fn spec.forProvider.conditions.conditionAbsent.aggregations.withCrossSeriesReducer

withCrossSeriesReducer(crossSeriesReducer)

"The approach to be used to combine time series. Not all reducer functions may be applied to all time series, depending on the metric type and the value type of the original time series. Reduction may change the metric type of value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: REDUCE_NONE, REDUCE_MEAN, REDUCE_MIN, REDUCE_MAX, REDUCE_SUM, REDUCE_STDDEV, REDUCE_COUNT, REDUCE_COUNT_TRUE, REDUCE_COUNT_FALSE, REDUCE_FRACTION_TRUE, REDUCE_PERCENTILE_99, REDUCE_PERCENTILE_95, REDUCE_PERCENTILE_50, REDUCE_PERCENTILE_05."

fn spec.forProvider.conditions.conditionAbsent.aggregations.withGroupByFields

withGroupByFields(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

fn spec.forProvider.conditions.conditionAbsent.aggregations.withGroupByFieldsMixin

withGroupByFieldsMixin(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

Note: This function appends passed data to existing values

fn spec.forProvider.conditions.conditionAbsent.aggregations.withPerSeriesAligner

withPerSeriesAligner(perSeriesAligner)

"The approach to be used to align individual time series. Not all alignment functions may be applied to all time series, depending on the metric type and value type of the original time series. Alignment may change the metric type or the value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: ALIGN_NONE, ALIGN_DELTA, ALIGN_RATE, ALIGN_INTERPOLATE, ALIGN_NEXT_OLDER, ALIGN_MIN, ALIGN_MAX, ALIGN_MEAN, ALIGN_COUNT, ALIGN_SUM, ALIGN_STDDEV, ALIGN_COUNT_TRUE, ALIGN_COUNT_FALSE, ALIGN_FRACTION_TRUE, ALIGN_PERCENTILE_99, ALIGN_PERCENTILE_95, ALIGN_PERCENTILE_50, ALIGN_PERCENTILE_05, ALIGN_PERCENT_CHANGE."

obj spec.forProvider.conditions.conditionAbsent.trigger

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.forProvider.conditions.conditionAbsent.trigger.withCount

withCount(count)

"The absolute number of time series that must fail the predicate for the condition to be triggered."

fn spec.forProvider.conditions.conditionAbsent.trigger.withPercent

withPercent(percent)

"The percentage of time series that must fail the predicate for the condition to be triggered."

obj spec.forProvider.conditions.conditionMatchedLog

"A condition that checks for log messages matching given constraints. If set, no other conditions can be present. Structure is documented below."

fn spec.forProvider.conditions.conditionMatchedLog.withFilter

withFilter(filter)

"A filter that identifies which time series should be compared with the threshold.The filter is similar to the one that is specified in the MetricService.ListTimeSeries request (that call is useful to verify the time series that will be retrieved / processed) and must specify the metric type and optionally may contain restrictions on resource type, resource labels, and metric labels. This field may not exceed 2048 Unicode characters in length."

fn spec.forProvider.conditions.conditionMatchedLog.withLabelExtractors

withLabelExtractors(labelExtractors)

"A map from a label key to an extractor expression, which is used to extract the value for this label key. Each entry in this map is a specification for how data should be extracted from log entries that match filter. Each combination of extracted values is treated as a separate rule for the purposes of triggering notifications. Label keys and corresponding values can be used in notifications generated by this condition."

fn spec.forProvider.conditions.conditionMatchedLog.withLabelExtractorsMixin

withLabelExtractorsMixin(labelExtractors)

"A map from a label key to an extractor expression, which is used to extract the value for this label key. Each entry in this map is a specification for how data should be extracted from log entries that match filter. Each combination of extracted values is treated as a separate rule for the purposes of triggering notifications. Label keys and corresponding values can be used in notifications generated by this condition."

Note: This function appends passed data to existing values
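A sketch of a log-match condition item; the Cloud Logging filter and extractor expression are illustrative only. Remember that policies with a log-match condition also need alertStrategy.notificationRateLimit (see above).

```jsonnet
// Assumed import path, as above.
local alertPolicy = (import 'provider-gcp/main.libsonnet').monitoring.v1beta1.alertPolicy;
local conditions = alertPolicy.spec.forProvider.conditions;
local matchedLog = conditions.conditionMatchedLog;

conditions.withDisplayName('Permission denied in logs')
+ conditions.withConditionMatchedLog(
  matchedLog.withFilter('severity>=ERROR AND protoPayload.status.code=7')
  + matchedLog.withLabelExtractors({ method: 'EXTRACT(protoPayload.methodName)' })
)
```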

obj spec.forProvider.conditions.conditionMonitoringQueryLanguage

"A Monitoring Query Language query that outputs a boolean stream Structure is documented below."

fn spec.forProvider.conditions.conditionMonitoringQueryLanguage.withDuration

withDuration(duration)

"The amount of time that a time series must violate the threshold to be considered failing. Currently, only values that are a multiple of a minute--e.g., 0, 60, 120, or 300 seconds--are supported. If an invalid value is given, an error will be returned. When choosing a duration, it is useful to keep in mind the frequency of the underlying time series data (which may also be affected by any alignments specified in the aggregations field); a good duration is long enough so that a single outlier does not generate spurious alerts, but short enough that unhealthy states are detected and alerted on quickly."

fn spec.forProvider.conditions.conditionMonitoringQueryLanguage.withEvaluationMissingData

withEvaluationMissingData(evaluationMissingData)

"A condition control that determines how metric-threshold conditions are evaluated when data stops arriving. Possible values are: EVALUATION_MISSING_DATA_INACTIVE, EVALUATION_MISSING_DATA_ACTIVE, EVALUATION_MISSING_DATA_NO_OP."

fn spec.forProvider.conditions.conditionMonitoringQueryLanguage.withQuery

withQuery(query)

"Monitoring Query Language query that outputs a boolean stream."

fn spec.forProvider.conditions.conditionMonitoringQueryLanguage.withTrigger

withTrigger(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.forProvider.conditions.conditionMonitoringQueryLanguage.withTriggerMixin

withTriggerMixin(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.forProvider.conditions.conditionMonitoringQueryLanguage.trigger

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.forProvider.conditions.conditionMonitoringQueryLanguage.trigger.withCount

withCount(count)

"The absolute number of time series that must fail the predicate for the condition to be triggered."

fn spec.forProvider.conditions.conditionMonitoringQueryLanguage.trigger.withPercent

withPercent(percent)

"The percentage of time series that must fail the predicate for the condition to be triggered."

obj spec.forProvider.conditions.conditionThreshold

"A condition that compares a time series against a threshold. Structure is documented below."

fn spec.forProvider.conditions.conditionThreshold.withAggregations

withAggregations(aggregations)

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.forProvider.conditions.conditionThreshold.withAggregationsMixin

withAggregationsMixin(aggregations)

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.conditions.conditionThreshold.withComparison

withComparison(comparison)

"The comparison to apply between the time series (indicated by filter and aggregation) and the threshold (indicated by threshold_value). The comparison is applied on each time series, with the time series on the left-hand side and the threshold on the right-hand side. Only COMPARISON_LT and COMPARISON_GT are supported currently. Possible values are: COMPARISON_GT, COMPARISON_GE, COMPARISON_LT, COMPARISON_LE, COMPARISON_EQ, COMPARISON_NE."

fn spec.forProvider.conditions.conditionThreshold.withDenominatorAggregations

withDenominatorAggregations(denominatorAggregations)

"Specifies the alignment of data points in individual time series selected by denominatorFilter as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources).When computing ratios, the aggregations and denominator_aggregations fields must use the same alignment period and produce time series that have the same periodicity and labels.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.forProvider.conditions.conditionThreshold.withDenominatorAggregationsMixin

withDenominatorAggregationsMixin(denominatorAggregations)

"Specifies the alignment of data points in individual time series selected by denominatorFilter as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources).When computing ratios, the aggregations and denominator_aggregations fields must use the same alignment period and produce time series that have the same periodicity and labels.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.conditions.conditionThreshold.withDenominatorFilter

withDenominatorFilter(denominatorFilter)

"A filter that identifies a time series that should be used as the denominator of a ratio that will be compared with the threshold. If a denominator_filter is specified, the time series specified by the filter field will be used as the numerator.The filter is similar to the one that is specified in the MetricService.ListTimeSeries request (that call is useful to verify the time series that will be retrieved / processed) and must specify the metric type and optionally may contain restrictions on resource type, resource labels, and metric labels. This field may not exceed 2048 Unicode characters in length."

fn spec.forProvider.conditions.conditionThreshold.withDuration

withDuration(duration)

"The amount of time that a time series must violate the threshold to be considered failing. Currently, only values that are a multiple of a minute--e.g., 0, 60, 120, or 300 seconds--are supported. If an invalid value is given, an error will be returned. When choosing a duration, it is useful to keep in mind the frequency of the underlying time series data (which may also be affected by any alignments specified in the aggregations field); a good duration is long enough so that a single outlier does not generate spurious alerts, but short enough that unhealthy states are detected and alerted on quickly."

fn spec.forProvider.conditions.conditionThreshold.withEvaluationMissingData

withEvaluationMissingData(evaluationMissingData)

"A condition control that determines how metric-threshold conditions are evaluated when data stops arriving. Possible values are: EVALUATION_MISSING_DATA_INACTIVE, EVALUATION_MISSING_DATA_ACTIVE, EVALUATION_MISSING_DATA_NO_OP."

fn spec.forProvider.conditions.conditionThreshold.withFilter

withFilter(filter)

"A filter that identifies which time series should be compared with the threshold.The filter is similar to the one that is specified in the MetricService.ListTimeSeries request (that call is useful to verify the time series that will be retrieved / processed) and must specify the metric type and optionally may contain restrictions on resource type, resource labels, and metric labels. This field may not exceed 2048 Unicode characters in length."

fn spec.forProvider.conditions.conditionThreshold.withForecastOptions

withForecastOptions(forecastOptions)

"When this field is present, the MetricThreshold condition forecasts whether the time series is predicted to violate the threshold within the forecastHorizon. When this field is not set, the MetricThreshold tests the current value of the timeseries against the threshold. Structure is documented below."

fn spec.forProvider.conditions.conditionThreshold.withForecastOptionsMixin

withForecastOptionsMixin(forecastOptions)

"When this field is present, the MetricThreshold condition forecasts whether the time series is predicted to violate the threshold within the forecastHorizon. When this field is not set, the MetricThreshold tests the current value of the timeseries against the threshold. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.forProvider.conditions.conditionThreshold.withThresholdValue

withThresholdValue(thresholdValue)

"A value against which to compare the time series."

fn spec.forProvider.conditions.conditionThreshold.withTrigger

withTrigger(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.forProvider.conditions.conditionThreshold.withTriggerMixin

withTriggerMixin(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.forProvider.conditions.conditionThreshold.aggregations

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.forProvider.conditions.conditionThreshold.aggregations.withAlignmentPeriod

withAlignmentPeriod(alignmentPeriod)

"The alignment period for per-time series alignment. If present, alignmentPeriod must be at least 60 seconds. After per-time series alignment, each time series will contain data points only on the period boundaries. If perSeriesAligner is not specified or equals ALIGN_NONE, then this field is ignored. If perSeriesAligner is specified and does not equal ALIGN_NONE, then this field must be defined; otherwise an error is returned."

fn spec.forProvider.conditions.conditionThreshold.aggregations.withCrossSeriesReducer

withCrossSeriesReducer(crossSeriesReducer)

"The approach to be used to combine time series. Not all reducer functions may be applied to all time series, depending on the metric type and the value type of the original time series. Reduction may change the metric type of value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: REDUCE_NONE, REDUCE_MEAN, REDUCE_MIN, REDUCE_MAX, REDUCE_SUM, REDUCE_STDDEV, REDUCE_COUNT, REDUCE_COUNT_TRUE, REDUCE_COUNT_FALSE, REDUCE_FRACTION_TRUE, REDUCE_PERCENTILE_99, REDUCE_PERCENTILE_95, REDUCE_PERCENTILE_50, REDUCE_PERCENTILE_05."

fn spec.forProvider.conditions.conditionThreshold.aggregations.withGroupByFields

withGroupByFields(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

fn spec.forProvider.conditions.conditionThreshold.aggregations.withGroupByFieldsMixin

withGroupByFieldsMixin(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

Note: This function appends passed data to existing values

fn spec.forProvider.conditions.conditionThreshold.aggregations.withPerSeriesAligner

withPerSeriesAligner(perSeriesAligner)

"The approach to be used to align individual time series. Not all alignment functions may be applied to all time series, depending on the metric type and value type of the original time series. Alignment may change the metric type or the value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: ALIGN_NONE, ALIGN_DELTA, ALIGN_RATE, ALIGN_INTERPOLATE, ALIGN_NEXT_OLDER, ALIGN_MIN, ALIGN_MAX, ALIGN_MEAN, ALIGN_COUNT, ALIGN_SUM, ALIGN_STDDEV, ALIGN_COUNT_TRUE, ALIGN_COUNT_FALSE, ALIGN_FRACTION_TRUE, ALIGN_PERCENTILE_99, ALIGN_PERCENTILE_95, ALIGN_PERCENTILE_50, ALIGN_PERCENTILE_05, ALIGN_PERCENT_CHANGE."

obj spec.forProvider.conditions.conditionThreshold.denominatorAggregations

"Specifies the alignment of data points in individual time series selected by denominatorFilter as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources).When computing ratios, the aggregations and denominator_aggregations fields must use the same alignment period and produce time series that have the same periodicity and labels.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.forProvider.conditions.conditionThreshold.denominatorAggregations.withAlignmentPeriod

withAlignmentPeriod(alignmentPeriod)

"The alignment period for per-time series alignment. If present, alignmentPeriod must be at least 60 seconds. After per-time series alignment, each time series will contain data points only on the period boundaries. If perSeriesAligner is not specified or equals ALIGN_NONE, then this field is ignored. If perSeriesAligner is specified and does not equal ALIGN_NONE, then this field must be defined; otherwise an error is returned."

fn spec.forProvider.conditions.conditionThreshold.denominatorAggregations.withCrossSeriesReducer

withCrossSeriesReducer(crossSeriesReducer)

"The approach to be used to combine time series. Not all reducer functions may be applied to all time series, depending on the metric type and the value type of the original time series. Reduction may change the metric type of value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: REDUCE_NONE, REDUCE_MEAN, REDUCE_MIN, REDUCE_MAX, REDUCE_SUM, REDUCE_STDDEV, REDUCE_COUNT, REDUCE_COUNT_TRUE, REDUCE_COUNT_FALSE, REDUCE_FRACTION_TRUE, REDUCE_PERCENTILE_99, REDUCE_PERCENTILE_95, REDUCE_PERCENTILE_50, REDUCE_PERCENTILE_05."

fn spec.forProvider.conditions.conditionThreshold.denominatorAggregations.withGroupByFields

withGroupByFields(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

fn spec.forProvider.conditions.conditionThreshold.denominatorAggregations.withGroupByFieldsMixin

withGroupByFieldsMixin(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

Note: This function appends passed data to existing values

fn spec.forProvider.conditions.conditionThreshold.denominatorAggregations.withPerSeriesAligner

withPerSeriesAligner(perSeriesAligner)

"The approach to be used to align individual time series. Not all alignment functions may be applied to all time series, depending on the metric type and value type of the original time series. Alignment may change the metric type or the value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: ALIGN_NONE, ALIGN_DELTA, ALIGN_RATE, ALIGN_INTERPOLATE, ALIGN_NEXT_OLDER, ALIGN_MIN, ALIGN_MAX, ALIGN_MEAN, ALIGN_COUNT, ALIGN_SUM, ALIGN_STDDEV, ALIGN_COUNT_TRUE, ALIGN_COUNT_FALSE, ALIGN_FRACTION_TRUE, ALIGN_PERCENTILE_99, ALIGN_PERCENTILE_95, ALIGN_PERCENTILE_50, ALIGN_PERCENTILE_05, ALIGN_PERCENT_CHANGE."

obj spec.forProvider.conditions.conditionThreshold.forecastOptions

"When this field is present, the MetricThreshold condition forecasts whether the time series is predicted to violate the threshold within the forecastHorizon. When this field is not set, the MetricThreshold tests the current value of the timeseries against the threshold. Structure is documented below."

fn spec.forProvider.conditions.conditionThreshold.forecastOptions.withForecastHorizon

withForecastHorizon(forecastHorizon)

"The length of time into the future to forecast whether a timeseries will violate the threshold. If the predicted value is found to violate the threshold, and the violation is observed in all forecasts made for the Configured duration, then the timeseries is considered to be failing."

obj spec.forProvider.conditions.conditionThreshold.trigger

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.forProvider.conditions.conditionThreshold.trigger.withCount

withCount(count)

"The absolute number of time series that must fail the predicate for the condition to be triggered."

fn spec.forProvider.conditions.conditionThreshold.trigger.withPercent

withPercent(percent)

"The percentage of time series that must fail the predicate for the condition to be triggered."

obj spec.forProvider.documentation

"Documentation that is included with notifications and incidents related to this policy. Best practice is for the documentation to include information to help responders understand, mitigate, escalate, and correct the underlying problems detected by the alerting policy. Notification channels that have limited capacity might not show this documentation. Structure is documented below."

fn spec.forProvider.documentation.withContent

withContent(content)

"The text of the documentation, interpreted according to mimeType. The content may not exceed 8,192 Unicode characters and may not exceed more than 10,240 bytes when encoded in UTF-8 format, whichever is smaller."

fn spec.forProvider.documentation.withMimeType

withMimeType(mimeType)

"The format of the content field. Presently, only the value \"text/markdown\" is supported."

obj spec.initProvider

"THIS IS AN ALPHA FIELD. Do not use it in production. It is not honored unless the relevant Crossplane feature flag is enabled, and may be changed or removed without notice. InitProvider holds the same fields as ForProvider, with the exception of Identifier and other resource reference fields. The fields that are in InitProvider are merged into ForProvider when the resource is created. The same fields are also added to the terraform ignore_changes hook, to avoid updating them after creation. This is useful for fields that are required on creation, but we do not desire to update them after creation, for example because of an external controller is managing them, like an autoscaler."

fn spec.initProvider.withAlertStrategy

withAlertStrategy(alertStrategy)

"Control over how this alert policy's notification channels are notified. Structure is documented below."

fn spec.initProvider.withAlertStrategyMixin

withAlertStrategyMixin(alertStrategy)

"Control over how this alert policy's notification channels are notified. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.withCombiner

withCombiner(combiner)

"How to combine the results of multiple conditions to determine if an incident should be opened. Possible values are: AND, OR, AND_WITH_MATCHING_RESOURCE."

fn spec.initProvider.withConditions

withConditions(conditions)

"A list of conditions for the policy. The conditions are combined by AND or OR according to the combiner field. If the combined conditions evaluate to true, then an incident is created. A policy can have from one to six conditions. Structure is documented below."

fn spec.initProvider.withConditionsMixin

withConditionsMixin(conditions)

"A list of conditions for the policy. The conditions are combined by AND or OR according to the combiner field. If the combined conditions evaluate to true, then an incident is created. A policy can have from one to six conditions. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.withDisplayName

withDisplayName(displayName)

"A short name or phrase used to identify the policy in dashboards, notifications, and incidents. To avoid confusion, don't use the same display name for multiple policies in the same project. The name is limited to 512 Unicode characters."

fn spec.initProvider.withDocumentation

withDocumentation(documentation)

"Documentation that is included with notifications and incidents related to this policy. Best practice is for the documentation to include information to help responders understand, mitigate, escalate, and correct the underlying problems detected by the alerting policy. Notification channels that have limited capacity might not show this documentation. Structure is documented below."

fn spec.initProvider.withDocumentationMixin

withDocumentationMixin(documentation)

"Documentation that is included with notifications and incidents related to this policy. Best practice is for the documentation to include information to help responders understand, mitigate, escalate, and correct the underlying problems detected by the alerting policy. Notification channels that have limited capacity might not show this documentation. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.withEnabled

withEnabled(enabled)

"Whether or not the policy is enabled. The default is true."

fn spec.initProvider.withNotificationChannels

withNotificationChannels(notificationChannels)

"Identifies the notification channels to which notifications should be sent when incidents are opened or closed or when new violations occur on an already opened incident. Each element of this array corresponds to the name field in each of the NotificationChannel objects that are returned from the notificationChannels.list method. The syntax of the entries in this field is projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]"

fn spec.initProvider.withNotificationChannelsMixin

withNotificationChannelsMixin(notificationChannels)

"Identifies the notification channels to which notifications should be sent when incidents are opened or closed or when new violations occur on an already opened incident. Each element of this array corresponds to the name field in each of the NotificationChannel objects that are returned from the notificationChannels.list method. The syntax of the entries in this field is projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]"

Note: This function appends passed data to existing values

fn spec.initProvider.withProject

withProject(project)

"The ID of the project in which the resource belongs. If it is not provided, the provider project is used."

fn spec.initProvider.withUserLabels

withUserLabels(userLabels)

"This field is intended to be used for organizing and identifying the AlertPolicy objects.The field can contain up to 64 entries. Each key and value is limited to 63 Unicode characters or 128 bytes, whichever is smaller. Labels and values can contain only lowercase letters, numerals, underscores, and dashes. Keys must begin with a letter."

fn spec.initProvider.withUserLabelsMixin

withUserLabelsMixin(userLabels)

"This field is intended to be used for organizing and identifying the AlertPolicy objects.The field can contain up to 64 entries. Each key and value is limited to 63 Unicode characters or 128 bytes, whichever is smaller. Labels and values can contain only lowercase letters, numerals, underscores, and dashes. Keys must begin with a letter."

Note: This function appends passed data to existing values
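The top-level initProvider setters above compose into a single policy fragment. A hedged sketch, assuming a hypothetical import path, `+`-merging of the generated helpers, and illustrative project/channel values:

```jsonnet
// Hypothetical import path; bind to this library's alertPolicy object.
local alertPolicy = import 'monitoring/v1beta1/alertPolicy.libsonnet';

alertPolicy.spec.initProvider.withCombiner('OR')
+ alertPolicy.spec.initProvider.withDisplayName('High error ratio')
+ alertPolicy.spec.initProvider.withEnabled(true)
+ alertPolicy.spec.initProvider.withUserLabels({ team: 'sre', tier: 'backend' })
+ alertPolicy.spec.initProvider.withNotificationChannels([
    'projects/my-project/notificationChannels/1234567890',  // placeholder channel name
  ])
```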

obj spec.initProvider.alertStrategy

"Control over how this alert policy's notification channels are notified. Structure is documented below."

fn spec.initProvider.alertStrategy.withAutoClose

withAutoClose(autoClose)

"If an alert policy that was active has no data for this long, any open incidents will close."

fn spec.initProvider.alertStrategy.withNotificationChannelStrategy

withNotificationChannelStrategy(notificationChannelStrategy)

"Control over how the notification channels in notification_channels are notified when this alert fires, on a per-channel basis. Structure is documented below."

fn spec.initProvider.alertStrategy.withNotificationChannelStrategyMixin

withNotificationChannelStrategyMixin(notificationChannelStrategy)

"Control over how the notification channels in notification_channels are notified when this alert fires, on a per-channel basis. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.alertStrategy.withNotificationRateLimit

withNotificationRateLimit(notificationRateLimit)

"Required for alert policies with a LogMatch condition. This limit is not implemented for alert policies that are not log-based. Structure is documented below."

fn spec.initProvider.alertStrategy.withNotificationRateLimitMixin

withNotificationRateLimitMixin(notificationRateLimit)

"Required for alert policies with a LogMatch condition. This limit is not implemented for alert policies that are not log-based. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.initProvider.alertStrategy.notificationChannelStrategy

"Control over how the notification channels in notification_channels are notified when this alert fires, on a per-channel basis. Structure is documented below."

fn spec.initProvider.alertStrategy.notificationChannelStrategy.withNotificationChannelNames

withNotificationChannelNames(notificationChannelNames)

"The notification channels that these settings apply to. Each of these correspond to the name field in one of the NotificationChannel objects referenced in the notification_channels field of this AlertPolicy. The format is projects/[PROJECT_ID_OR_NUMBER]/notificationChannels/[CHANNEL_ID]"

fn spec.initProvider.alertStrategy.notificationChannelStrategy.withNotificationChannelNamesMixin

withNotificationChannelNamesMixin(notificationChannelNames)

"The notification channels that these settings apply to. Each of these correspond to the name field in one of the NotificationChannel objects referenced in the notification_channels field of this AlertPolicy. The format is projects/[PROJECT_ID_OR_NUMBER]/notificationChannels/[CHANNEL_ID]"

Note: This function appends passed data to existing values

fn spec.initProvider.alertStrategy.notificationChannelStrategy.withRenotifyInterval

withRenotifyInterval(renotifyInterval)

"The frequency at which to send reminder notifications for open incidents."

obj spec.initProvider.alertStrategy.notificationRateLimit

"Required for alert policies with a LogMatch condition. This limit is not implemented for alert policies that are not log-based. Structure is documented below."

fn spec.initProvider.alertStrategy.notificationRateLimit.withPeriod

withPeriod(period)

"Not more than one notification per period."

obj spec.initProvider.conditions

"A list of conditions for the policy. The conditions are combined by AND or OR according to the combiner field. If the combined conditions evaluate to true, then an incident is created. A policy can have from one to six conditions. Structure is documented below."

fn spec.initProvider.conditions.withConditionAbsent

withConditionAbsent(conditionAbsent)

"A condition that checks that a time series continues to receive new data points. Structure is documented below."

fn spec.initProvider.conditions.withConditionAbsentMixin

withConditionAbsentMixin(conditionAbsent)

"A condition that checks that a time series continues to receive new data points. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.conditions.withConditionMatchedLog

withConditionMatchedLog(conditionMatchedLog)

"A condition that checks for log messages matching given constraints. If set, no other conditions can be present. Structure is documented below."

fn spec.initProvider.conditions.withConditionMatchedLogMixin

withConditionMatchedLogMixin(conditionMatchedLog)

"A condition that checks for log messages matching given constraints. If set, no other conditions can be present. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.conditions.withConditionMonitoringQueryLanguage

withConditionMonitoringQueryLanguage(conditionMonitoringQueryLanguage)

"A Monitoring Query Language query that outputs a boolean stream Structure is documented below."

fn spec.initProvider.conditions.withConditionMonitoringQueryLanguageMixin

withConditionMonitoringQueryLanguageMixin(conditionMonitoringQueryLanguage)

"A Monitoring Query Language query that outputs a boolean stream Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.conditions.withConditionThreshold

withConditionThreshold(conditionThreshold)

"A condition that compares a time series against a threshold. Structure is documented below."

fn spec.initProvider.conditions.withConditionThresholdMixin

withConditionThresholdMixin(conditionThreshold)

"A condition that compares a time series against a threshold. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.conditions.withDisplayName

withDisplayName(displayName)

"A short name or phrase used to identify the condition in dashboards, notifications, and incidents. To avoid confusion, don't use the same display name for multiple conditions in the same policy."

obj spec.initProvider.conditions.conditionAbsent

"A condition that checks that a time series continues to receive new data points. Structure is documented below."

fn spec.initProvider.conditions.conditionAbsent.withAggregations

withAggregations(aggregations)

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.initProvider.conditions.conditionAbsent.withAggregationsMixin

withAggregationsMixin(aggregations)

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.conditions.conditionAbsent.withDuration

withDuration(duration)

"The amount of time that a time series must violate the threshold to be considered failing. Currently, only values that are a multiple of a minute--e.g., 0, 60, 120, or 300 seconds--are supported. If an invalid value is given, an error will be returned. When choosing a duration, it is useful to keep in mind the frequency of the underlying time series data (which may also be affected by any alignments specified in the aggregations field); a good duration is long enough so that a single outlier does not generate spurious alerts, but short enough that unhealthy states are detected and alerted on quickly."

fn spec.initProvider.conditions.conditionAbsent.withFilter

withFilter(filter)

"A filter that identifies which time series should be compared with the threshold.The filter is similar to the one that is specified in the MetricService.ListTimeSeries request (that call is useful to verify the time series that will be retrieved / processed) and must specify the metric type and optionally may contain restrictions on resource type, resource labels, and metric labels. This field may not exceed 2048 Unicode characters in length."

fn spec.initProvider.conditions.conditionAbsent.withTrigger

withTrigger(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.initProvider.conditions.conditionAbsent.withTriggerMixin

withTriggerMixin(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.initProvider.conditions.conditionAbsent.aggregations

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.initProvider.conditions.conditionAbsent.aggregations.withAlignmentPeriod

withAlignmentPeriod(alignmentPeriod)

"The alignment period for per-time series alignment. If present, alignmentPeriod must be at least 60 seconds. After per-time series alignment, each time series will contain data points only on the period boundaries. If perSeriesAligner is not specified or equals ALIGN_NONE, then this field is ignored. If perSeriesAligner is specified and does not equal ALIGN_NONE, then this field must be defined; otherwise an error is returned."

fn spec.initProvider.conditions.conditionAbsent.aggregations.withCrossSeriesReducer

withCrossSeriesReducer(crossSeriesReducer)

"The approach to be used to combine time series. Not all reducer functions may be applied to all time series, depending on the metric type and the value type of the original time series. Reduction may change the metric type of value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: REDUCE_NONE, REDUCE_MEAN, REDUCE_MIN, REDUCE_MAX, REDUCE_SUM, REDUCE_STDDEV, REDUCE_COUNT, REDUCE_COUNT_TRUE, REDUCE_COUNT_FALSE, REDUCE_FRACTION_TRUE, REDUCE_PERCENTILE_99, REDUCE_PERCENTILE_95, REDUCE_PERCENTILE_50, REDUCE_PERCENTILE_05."

fn spec.initProvider.conditions.conditionAbsent.aggregations.withGroupByFields

withGroupByFields(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

fn spec.initProvider.conditions.conditionAbsent.aggregations.withGroupByFieldsMixin

withGroupByFieldsMixin(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

Note: This function appends passed data to existing values

fn spec.initProvider.conditions.conditionAbsent.aggregations.withPerSeriesAligner

withPerSeriesAligner(perSeriesAligner)

"The approach to be used to align individual time series. Not all alignment functions may be applied to all time series, depending on the metric type and value type of the original time series. Alignment may change the metric type or the value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: ALIGN_NONE, ALIGN_DELTA, ALIGN_RATE, ALIGN_INTERPOLATE, ALIGN_NEXT_OLDER, ALIGN_MIN, ALIGN_MAX, ALIGN_MEAN, ALIGN_COUNT, ALIGN_SUM, ALIGN_STDDEV, ALIGN_COUNT_TRUE, ALIGN_COUNT_FALSE, ALIGN_FRACTION_TRUE, ALIGN_PERCENTILE_99, ALIGN_PERCENTILE_95, ALIGN_PERCENTILE_50, ALIGN_PERCENTILE_05, ALIGN_PERCENT_CHANGE."

obj spec.initProvider.conditions.conditionAbsent.trigger

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.initProvider.conditions.conditionAbsent.trigger.withCount

withCount(count)

"The absolute number of time series that must fail the predicate for the condition to be triggered."

fn spec.initProvider.conditions.conditionAbsent.trigger.withPercent

withPercent(percent)

"The percentage of time series that must fail the predicate for the condition to be triggered."

obj spec.initProvider.conditions.conditionMatchedLog

"A condition that checks for log messages matching given constraints. If set, no other conditions can be present. Structure is documented below."

fn spec.initProvider.conditions.conditionMatchedLog.withFilter

withFilter(filter)

"A filter that identifies which time series should be compared with the threshold.The filter is similar to the one that is specified in the MetricService.ListTimeSeries request (that call is useful to verify the time series that will be retrieved / processed) and must specify the metric type and optionally may contain restrictions on resource type, resource labels, and metric labels. This field may not exceed 2048 Unicode characters in length."

fn spec.initProvider.conditions.conditionMatchedLog.withLabelExtractors

withLabelExtractors(labelExtractors)

"A map from a label key to an extractor expression, which is used to extract the value for this label key. Each entry in this map is a specification for how data should be extracted from log entries that match filter. Each combination of extracted values is treated as a separate rule for the purposes of triggering notifications. Label keys and corresponding values can be used in notifications generated by this condition."

fn spec.initProvider.conditions.conditionMatchedLog.withLabelExtractorsMixin

withLabelExtractorsMixin(labelExtractors)

"A map from a label key to an extractor expression, which is used to extract the value for this label key. Each entry in this map is a specification for how data should be extracted from log entries that match filter. Each combination of extracted values is treated as a separate rule for the purposes of triggering notifications. Label keys and corresponding values can be used in notifications generated by this condition."

Note: This function appends passed data to existing values
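A hedged sketch of a log-match condition. The import path is assumed, and the filter and extractor expressions are illustrative only, not taken from this page:

```jsonnet
// Hypothetical import path; bind to this library's alertPolicy object.
local alertPolicy = import 'monitoring/v1beta1/alertPolicy.libsonnet';

// Match ERROR-level container logs and surface the message as a label.
alertPolicy.spec.initProvider.conditions.conditionMatchedLog.withFilter(
  'severity >= ERROR AND resource.type = "k8s_container"'
)
+ alertPolicy.spec.initProvider.conditions.conditionMatchedLog.withLabelExtractors({
    error_message: 'EXTRACT(jsonPayload.message)',  // illustrative extractor expression
  })
```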

obj spec.initProvider.conditions.conditionMonitoringQueryLanguage

"A Monitoring Query Language query that outputs a boolean stream Structure is documented below."

fn spec.initProvider.conditions.conditionMonitoringQueryLanguage.withDuration

withDuration(duration)

"The amount of time that a time series must violate the threshold to be considered failing. Currently, only values that are a multiple of a minute--e.g., 0, 60, 120, or 300 seconds--are supported. If an invalid value is given, an error will be returned. When choosing a duration, it is useful to keep in mind the frequency of the underlying time series data (which may also be affected by any alignments specified in the aggregations field); a good duration is long enough so that a single outlier does not generate spurious alerts, but short enough that unhealthy states are detected and alerted on quickly."

fn spec.initProvider.conditions.conditionMonitoringQueryLanguage.withEvaluationMissingData

withEvaluationMissingData(evaluationMissingData)

"A condition control that determines how metric-threshold conditions are evaluated when data stops arriving. Possible values are: EVALUATION_MISSING_DATA_INACTIVE, EVALUATION_MISSING_DATA_ACTIVE, EVALUATION_MISSING_DATA_NO_OP."

fn spec.initProvider.conditions.conditionMonitoringQueryLanguage.withQuery

withQuery(query)

"Monitoring Query Language query that outputs a boolean stream."

fn spec.initProvider.conditions.conditionMonitoringQueryLanguage.withTrigger

withTrigger(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.initProvider.conditions.conditionMonitoringQueryLanguage.withTriggerMixin

withTriggerMixin(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.initProvider.conditions.conditionMonitoringQueryLanguage.trigger

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.initProvider.conditions.conditionMonitoringQueryLanguage.trigger.withCount

withCount(count)

"The absolute number of time series that must fail the predicate for the condition to be triggered."

fn spec.initProvider.conditions.conditionMonitoringQueryLanguage.trigger.withPercent

withPercent(percent)

"The percentage of time series that must fail the predicate for the condition to be triggered."

obj spec.initProvider.conditions.conditionThreshold

"A condition that compares a time series against a threshold. Structure is documented below."

fn spec.initProvider.conditions.conditionThreshold.withAggregations

withAggregations(aggregations)

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.initProvider.conditions.conditionThreshold.withAggregationsMixin

withAggregationsMixin(aggregations)

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.conditions.conditionThreshold.withComparison

withComparison(comparison)

"The comparison to apply between the time series (indicated by filter and aggregation) and the threshold (indicated by threshold_value). The comparison is applied on each time series, with the time series on the left-hand side and the threshold on the right-hand side. Only COMPARISON_LT and COMPARISON_GT are supported currently. Possible values are: COMPARISON_GT, COMPARISON_GE, COMPARISON_LT, COMPARISON_LE, COMPARISON_EQ, COMPARISON_NE."

fn spec.initProvider.conditions.conditionThreshold.withDenominatorAggregations

withDenominatorAggregations(denominatorAggregations)

"Specifies the alignment of data points in individual time series selected by denominatorFilter as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources).When computing ratios, the aggregations and denominator_aggregations fields must use the same alignment period and produce time series that have the same periodicity and labels.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.initProvider.conditions.conditionThreshold.withDenominatorAggregationsMixin

withDenominatorAggregationsMixin(denominatorAggregations)

"Specifies the alignment of data points in individual time series selected by denominatorFilter as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources).When computing ratios, the aggregations and denominator_aggregations fields must use the same alignment period and produce time series that have the same periodicity and labels.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.conditions.conditionThreshold.withDenominatorFilter

withDenominatorFilter(denominatorFilter)

"A filter that identifies a time series that should be used as the denominator of a ratio that will be compared with the threshold. If a denominator_filter is specified, the time series specified by the filter field will be used as the numerator.The filter is similar to the one that is specified in the MetricService.ListTimeSeries request (that call is useful to verify the time series that will be retrieved / processed) and must specify the metric type and optionally may contain restrictions on resource type, resource labels, and metric labels. This field may not exceed 2048 Unicode characters in length."

fn spec.initProvider.conditions.conditionThreshold.withDuration

withDuration(duration)

"The amount of time that a time series must violate the threshold to be considered failing. Currently, only values that are a multiple of a minute--e.g., 0, 60, 120, or 300 seconds--are supported. If an invalid value is given, an error will be returned. When choosing a duration, it is useful to keep in mind the frequency of the underlying time series data (which may also be affected by any alignments specified in the aggregations field); a good duration is long enough so that a single outlier does not generate spurious alerts, but short enough that unhealthy states are detected and alerted on quickly."

fn spec.initProvider.conditions.conditionThreshold.withEvaluationMissingData

withEvaluationMissingData(evaluationMissingData)

"A condition control that determines how metric-threshold conditions are evaluated when data stops arriving. Possible values are: EVALUATION_MISSING_DATA_INACTIVE, EVALUATION_MISSING_DATA_ACTIVE, EVALUATION_MISSING_DATA_NO_OP."

fn spec.initProvider.conditions.conditionThreshold.withFilter

withFilter(filter)

"A filter that identifies which time series should be compared with the threshold.The filter is similar to the one that is specified in the MetricService.ListTimeSeries request (that call is useful to verify the time series that will be retrieved / processed) and must specify the metric type and optionally may contain restrictions on resource type, resource labels, and metric labels. This field may not exceed 2048 Unicode characters in length."

fn spec.initProvider.conditions.conditionThreshold.withForecastOptions

withForecastOptions(forecastOptions)

"When this field is present, the MetricThreshold condition forecasts whether the time series is predicted to violate the threshold within the forecastHorizon. When this field is not set, the MetricThreshold tests the current value of the timeseries against the threshold. Structure is documented below."

fn spec.initProvider.conditions.conditionThreshold.withForecastOptionsMixin

withForecastOptionsMixin(forecastOptions)

"When this field is present, the MetricThreshold condition forecasts whether the time series is predicted to violate the threshold within the forecastHorizon. When this field is not set, the MetricThreshold tests the current value of the timeseries against the threshold. Structure is documented below."

Note: This function appends passed data to existing values

fn spec.initProvider.conditions.conditionThreshold.withThresholdValue

withThresholdValue(thresholdValue)

"A value against which to compare the time series."

fn spec.initProvider.conditions.conditionThreshold.withTrigger

withTrigger(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.initProvider.conditions.conditionThreshold.withTriggerMixin

withTriggerMixin(trigger)

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

Note: This function appends passed data to existing values

obj spec.initProvider.conditions.conditionThreshold.aggregations

"Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources). Multiple aggregations are applied in the order specified.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.initProvider.conditions.conditionThreshold.aggregations.withAlignmentPeriod

withAlignmentPeriod(alignmentPeriod)

"The alignment period for per-time series alignment. If present, alignmentPeriod must be at least 60 seconds. After per-time series alignment, each time series will contain data points only on the period boundaries. If perSeriesAligner is not specified or equals ALIGN_NONE, then this field is ignored. If perSeriesAligner is specified and does not equal ALIGN_NONE, then this field must be defined; otherwise an error is returned."

fn spec.initProvider.conditions.conditionThreshold.aggregations.withCrossSeriesReducer

withCrossSeriesReducer(crossSeriesReducer)

"The approach to be used to combine time series. Not all reducer functions may be applied to all time series, depending on the metric type and the value type of the original time series. Reduction may change the metric type of value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: REDUCE_NONE, REDUCE_MEAN, REDUCE_MIN, REDUCE_MAX, REDUCE_SUM, REDUCE_STDDEV, REDUCE_COUNT, REDUCE_COUNT_TRUE, REDUCE_COUNT_FALSE, REDUCE_FRACTION_TRUE, REDUCE_PERCENTILE_99, REDUCE_PERCENTILE_95, REDUCE_PERCENTILE_50, REDUCE_PERCENTILE_05."

fn spec.initProvider.conditions.conditionThreshold.aggregations.withGroupByFields

withGroupByFields(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

fn spec.initProvider.conditions.conditionThreshold.aggregations.withGroupByFieldsMixin

withGroupByFieldsMixin(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

Note: This function appends passed data to existing values

fn spec.initProvider.conditions.conditionThreshold.aggregations.withPerSeriesAligner

withPerSeriesAligner(perSeriesAligner)

"The approach to be used to align individual time series. Not all alignment functions may be applied to all time series, depending on the metric type and value type of the original time series. Alignment may change the metric type or the value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: ALIGN_NONE, ALIGN_DELTA, ALIGN_RATE, ALIGN_INTERPOLATE, ALIGN_NEXT_OLDER, ALIGN_MIN, ALIGN_MAX, ALIGN_MEAN, ALIGN_COUNT, ALIGN_SUM, ALIGN_STDDEV, ALIGN_COUNT_TRUE, ALIGN_COUNT_FALSE, ALIGN_FRACTION_TRUE, ALIGN_PERCENTILE_99, ALIGN_PERCENTILE_95, ALIGN_PERCENTILE_50, ALIGN_PERCENTILE_05, ALIGN_PERCENT_CHANGE."

obj spec.initProvider.conditions.conditionThreshold.denominatorAggregations

"Specifies the alignment of data points in individual time series selected by denominatorFilter as well as how to combine the retrieved time series together (such as when aggregating multiple streams on each resource to a single stream for each resource or when aggregating streams across all members of a group of resources).When computing ratios, the aggregations and denominator_aggregations fields must use the same alignment period and produce time series that have the same periodicity and labels.This field is similar to the one in the MetricService.ListTimeSeries request. It is advisable to use the ListTimeSeries method when debugging this field. Structure is documented below."

fn spec.initProvider.conditions.conditionThreshold.denominatorAggregations.withAlignmentPeriod

withAlignmentPeriod(alignmentPeriod)

"The alignment period for per-time series alignment. If present, alignmentPeriod must be at least 60 seconds. After per-time series alignment, each time series will contain data points only on the period boundaries. If perSeriesAligner is not specified or equals ALIGN_NONE, then this field is ignored. If perSeriesAligner is specified and does not equal ALIGN_NONE, then this field must be defined; otherwise an error is returned."

fn spec.initProvider.conditions.conditionThreshold.denominatorAggregations.withCrossSeriesReducer

withCrossSeriesReducer(crossSeriesReducer)

"The approach to be used to combine time series. Not all reducer functions may be applied to all time series, depending on the metric type and the value type of the original time series. Reduction may change the metric type of value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: REDUCE_NONE, REDUCE_MEAN, REDUCE_MIN, REDUCE_MAX, REDUCE_SUM, REDUCE_STDDEV, REDUCE_COUNT, REDUCE_COUNT_TRUE, REDUCE_COUNT_FALSE, REDUCE_FRACTION_TRUE, REDUCE_PERCENTILE_99, REDUCE_PERCENTILE_95, REDUCE_PERCENTILE_50, REDUCE_PERCENTILE_05."

fn spec.initProvider.conditions.conditionThreshold.denominatorAggregations.withGroupByFields

withGroupByFields(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

fn spec.initProvider.conditions.conditionThreshold.denominatorAggregations.withGroupByFieldsMixin

withGroupByFieldsMixin(groupByFields)

"The set of fields to preserve when crossSeriesReducer is specified. The groupByFields determine how the time series are partitioned into subsets prior to applying the aggregation function. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The crossSeriesReducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in groupByFields are aggregated away. If groupByFields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If crossSeriesReducer is not defined, this field is ignored."

Note: This function appends passed data to existing values

fn spec.initProvider.conditions.conditionThreshold.denominatorAggregations.withPerSeriesAligner

withPerSeriesAligner(perSeriesAligner)

"The approach to be used to align individual time series. Not all alignment functions may be applied to all time series, depending on the metric type and value type of the original time series. Alignment may change the metric type or the value type of the time series.Time series data must be aligned in order to perform cross- time series reduction. If crossSeriesReducer is specified, then perSeriesAligner must be specified and not equal ALIGN_NONE and alignmentPeriod must be specified; otherwise, an error is returned. Possible values are: ALIGN_NONE, ALIGN_DELTA, ALIGN_RATE, ALIGN_INTERPOLATE, ALIGN_NEXT_OLDER, ALIGN_MIN, ALIGN_MAX, ALIGN_MEAN, ALIGN_COUNT, ALIGN_SUM, ALIGN_STDDEV, ALIGN_COUNT_TRUE, ALIGN_COUNT_FALSE, ALIGN_FRACTION_TRUE, ALIGN_PERCENTILE_99, ALIGN_PERCENTILE_95, ALIGN_PERCENTILE_50, ALIGN_PERCENTILE_05, ALIGN_PERCENT_CHANGE."

obj spec.initProvider.conditions.conditionThreshold.forecastOptions

"When this field is present, the MetricThreshold condition forecasts whether the time series is predicted to violate the threshold within the forecastHorizon. When this field is not set, the MetricThreshold tests the current value of the timeseries against the threshold. Structure is documented below."

fn spec.initProvider.conditions.conditionThreshold.forecastOptions.withForecastHorizon

withForecastHorizon(forecastHorizon)

"The length of time into the future to forecast whether a timeseries will violate the threshold. If the predicted value is found to violate the threshold, and the violation is observed in all forecasts made for the Configured duration, then the timeseries is considered to be failing."

obj spec.initProvider.conditions.conditionThreshold.trigger

"The number/percent of time series for which the comparison must hold in order for the condition to trigger. If unspecified, then the condition will trigger if the comparison is true for any of the time series that have been identified by filter and aggregations, or by the ratio, if denominator_filter and denominator_aggregations are specified. Structure is documented below."

fn spec.initProvider.conditions.conditionThreshold.trigger.withCount

withCount(count)

"The absolute number of time series that must fail the predicate for the condition to be triggered."

fn spec.initProvider.conditions.conditionThreshold.trigger.withPercent

withPercent(percent)

"The percentage of time series that must fail the predicate for the condition to be triggered."

obj spec.initProvider.documentation

"Documentation that is included with notifications and incidents related to this policy. Best practice is for the documentation to include information to help responders understand, mitigate, escalate, and correct the underlying problems detected by the alerting policy. Notification channels that have limited capacity might not show this documentation. Structure is documented below."

fn spec.initProvider.documentation.withContent

withContent(content)

"The text of the documentation, interpreted according to mimeType. The content may not exceed 8,192 Unicode characters and may not exceed more than 10,240 bytes when encoded in UTF-8 format, whichever is smaller."

fn spec.initProvider.documentation.withMimeType

withMimeType(mimeType)

"The format of the content field. Presently, only the value \"text/markdown\" is supported."

obj spec.providerConfigRef

"ProviderConfigReference specifies how the provider that will be used to create, observe, update, and delete this managed resource should be configured."

fn spec.providerConfigRef.withName

withName(name)

"Name of the referenced object."

obj spec.providerConfigRef.policy

"Policies for referencing."

fn spec.providerConfigRef.policy.withResolution

withResolution(resolution)

"Resolution specifies whether resolution of this reference is required. The default is 'Required', which means the reconcile will fail if the reference cannot be resolved. 'Optional' means this reference will be a no-op if it cannot be resolved."

fn spec.providerConfigRef.policy.withResolve

withResolve(resolve)

"Resolve specifies when this reference should be resolved. The default is 'IfNotPresent', which will attempt to resolve the reference only when the corresponding field is not present. Use 'Always' to resolve the reference on every reconcile."

obj spec.providerRef

"ProviderReference specifies the provider that will be used to create, observe, update, and delete this managed resource. Deprecated: Please use ProviderConfigReference, i.e. providerConfigRef"

fn spec.providerRef.withName

withName(name)

"Name of the referenced object."

obj spec.providerRef.policy

"Policies for referencing."

fn spec.providerRef.policy.withResolution

withResolution(resolution)

"Resolution specifies whether resolution of this reference is required. The default is 'Required', which means the reconcile will fail if the reference cannot be resolved. 'Optional' means this reference will be a no-op if it cannot be resolved."

fn spec.providerRef.policy.withResolve

withResolve(resolve)

"Resolve specifies when this reference should be resolved. The default is 'IfNotPresent', which will attempt to resolve the reference only when the corresponding field is not present. Use 'Always' to resolve the reference on every reconcile."

obj spec.publishConnectionDetailsTo

"PublishConnectionDetailsTo specifies the connection secret config which contains a name, metadata and a reference to secret store config to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource."

fn spec.publishConnectionDetailsTo.withName

withName(name)

"Name is the name of the connection secret."

obj spec.publishConnectionDetailsTo.configRef

"SecretStoreConfigRef specifies which secret store config should be used for this ConnectionSecret."

fn spec.publishConnectionDetailsTo.configRef.withName

withName(name)

"Name of the referenced object."

obj spec.publishConnectionDetailsTo.configRef.policy

"Policies for referencing."

fn spec.publishConnectionDetailsTo.configRef.policy.withResolution

withResolution(resolution)

"Resolution specifies whether resolution of this reference is required. The default is 'Required', which means the reconcile will fail if the reference cannot be resolved. 'Optional' means this reference will be a no-op if it cannot be resolved."

fn spec.publishConnectionDetailsTo.configRef.policy.withResolve

withResolve(resolve)

"Resolve specifies when this reference should be resolved. The default is 'IfNotPresent', which will attempt to resolve the reference only when the corresponding field is not present. Use 'Always' to resolve the reference on every reconcile."

obj spec.publishConnectionDetailsTo.metadata

"Metadata is the metadata for connection secret."

fn spec.publishConnectionDetailsTo.metadata.withAnnotations

withAnnotations(annotations)

"Annotations are the annotations to be added to connection secret. - For Kubernetes secrets, this will be used as \"metadata.annotations\". - It is up to Secret Store implementation for others store types."

fn spec.publishConnectionDetailsTo.metadata.withAnnotationsMixin

withAnnotationsMixin(annotations)

"Annotations are the annotations to be added to connection secret. - For Kubernetes secrets, this will be used as \"metadata.annotations\". - It is up to Secret Store implementation for others store types."

Note: This function appends passed data to existing values

fn spec.publishConnectionDetailsTo.metadata.withLabels

withLabels(labels)

"Labels are the labels/tags to be added to connection secret. - For Kubernetes secrets, this will be used as \"metadata.labels\". - It is up to Secret Store implementation for others store types."

fn spec.publishConnectionDetailsTo.metadata.withLabelsMixin

withLabelsMixin(labels)

"Labels are the labels/tags to be added to connection secret. - For Kubernetes secrets, this will be used as \"metadata.labels\". - It is up to Secret Store implementation for others store types."

Note: This function appends passed data to existing values

fn spec.publishConnectionDetailsTo.metadata.withType

withType(type)

"Type is the SecretType for the connection secret. - Only valid for Kubernetes Secret Stores."

obj spec.writeConnectionSecretToRef

"WriteConnectionSecretToReference specifies the namespace and name of a Secret to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource. This field is planned to be replaced in a future release in favor of PublishConnectionDetailsTo. Currently, both could be set independently and connection details would be published to both without affecting each other."

fn spec.writeConnectionSecretToRef.withName

withName(name)

"Name of the secret."

fn spec.writeConnectionSecretToRef.withNamespace

withNamespace(namespace)

"Namespace of the secret."