postgresql.v1.cluster

"Cluster is the Schema for the PostgreSQL API"

Fields

fn new

new(name)

new returns an instance of Cluster
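
A minimal usage sketch, assuming the library has been vendored and is imported as shown (the import path, the `postgresql.v1.cluster` access path, and all names/values are illustrative; the objects returned by the generator functions are composed with `+` in the usual docsonnet style):

```jsonnet
// Import path is illustrative; adjust it to your vendored copy of the library.
local cnpg = import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet';
local cluster = cnpg.postgresql.v1.cluster;

// Compose a Cluster object by merging the output of the generator functions.
cluster.new('example-db')
+ cluster.metadata.withNamespace('databases')
+ cluster.metadata.withLabels({ team: 'platform' })
+ cluster.spec.withInstances(3)
```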

obj metadata

"ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create."

fn metadata.withAnnotations

withAnnotations(annotations)

"Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations"

fn metadata.withAnnotationsMixin

withAnnotationsMixin(annotations)

"Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations"

Note: This function appends passed data to existing values
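
A short sketch of the difference between the two variants (import path and annotation keys are made up for illustration): `withAnnotations` replaces the whole map, while `withAnnotationsMixin` merges the passed keys into whatever was already set.

```jsonnet
// Import path is illustrative; adjust it to your vendored copy of the library.
local cluster = (import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet').postgresql.v1.cluster;

cluster.new('example-db')
// withAnnotations overwrites any previously set annotations...
+ cluster.metadata.withAnnotations({ 'example.com/owner': 'dba-team' })
// ...while withAnnotationsMixin merges additional keys into the existing map.
+ cluster.metadata.withAnnotationsMixin({ 'example.com/backup': 'enabled' })
```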

fn metadata.withClusterName

withClusterName(clusterName)

"The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request."

fn metadata.withCreationTimestamp

withCreationTimestamp(creationTimestamp)

"Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers."

fn metadata.withDeletionGracePeriodSeconds

withDeletionGracePeriodSeconds(deletionGracePeriodSeconds)

"Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only."

fn metadata.withDeletionTimestamp

withDeletionTimestamp(deletionTimestamp)

"Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers."

fn metadata.withFinalizers

withFinalizers(finalizers)

"Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list."

fn metadata.withFinalizersMixin

withFinalizersMixin(finalizers)

"Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list."

Note: This function appends passed data to existing values

fn metadata.withGenerateName

withGenerateName(generateName)

"GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n\nIf this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).\n\nApplied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency"

fn metadata.withGeneration

withGeneration(generation)

"A sequence number representing a specific generation of the desired state. Populated by the system. Read-only."

fn metadata.withLabels

withLabels(labels)

"Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels"

fn metadata.withLabelsMixin

withLabelsMixin(labels)

"Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels"

Note: This function appends passed data to existing values

fn metadata.withName

withName(name)

"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names"

fn metadata.withNamespace

withNamespace(namespace)

"Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.\n\nMust be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces"

fn metadata.withOwnerReferences

withOwnerReferences(ownerReferences)

"List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller."

fn metadata.withOwnerReferencesMixin

withOwnerReferencesMixin(ownerReferences)

"List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller."

Note: This function appends passed data to existing values

fn metadata.withResourceVersion

withResourceVersion(resourceVersion)

"An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n\nPopulated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency"

fn metadata.withSelfLink

withSelfLink(selfLink)

"SelfLink is a URL representing this object. Populated by the system. Read-only.\n\nDEPRECATED Kubernetes will stop propagating this field in 1.20 release and the field is planned to be removed in 1.21 release."

fn metadata.withUid

withUid(uid)

"UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.\n\nPopulated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids"

obj spec

"Specification of the desired behavior of the cluster.\nMore info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status"

fn spec.withDescription

withDescription(description)

"Description of this PostgreSQL cluster"

fn spec.withEnablePDB

withEnablePDB(enablePDB)

"Manage the PodDisruptionBudget resources within the cluster. When\nconfigured as true (default setting), the pod disruption budgets\nwill safeguard the primary node from being terminated. Conversely,\nsetting it to false will result in the absence of any\nPodDisruptionBudget resource, permitting the shutdown of all nodes\nhosting the PostgreSQL cluster. This latter configuration is\nadvisable for any PostgreSQL cluster employed for\ndevelopment/staging purposes."

fn spec.withEnableSuperuserAccess

withEnableSuperuserAccess(enableSuperuserAccess)

"When this option is enabled, the operator will use the SuperuserSecret\nto update the postgres user password (if the secret is\nnot present, the operator will automatically create one). When this\noption is disabled, the operator will ignore the SuperuserSecret content, delete\nit when automatically created, and then blank the password of the postgres\nuser by setting it to NULL. Disabled by default."

fn spec.withEnv

withEnv(env)

"Env follows the Env format to pass environment variables\nto the pods created in the cluster"

fn spec.withEnvFrom

withEnvFrom(envFrom)

"EnvFrom follows the EnvFrom format to pass environment variables\nsources to the pods to be used by Env"

fn spec.withEnvFromMixin

withEnvFromMixin(envFrom)

"EnvFrom follows the EnvFrom format to pass environment variables\nsources to the pods to be used by Env"

Note: This function appends passed data to existing values

fn spec.withEnvMixin

withEnvMixin(env)

"Env follows the Env format to pass environment variables\nto the pods created in the cluster"

Note: This function appends passed data to existing values
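
A sketch of passing environment variables to the instance pods (the variable names and the Secret reference are hypothetical; the entries follow the standard Kubernetes EnvVar and EnvFromSource formats):

```jsonnet
// Import path is illustrative; adjust it to your vendored copy of the library.
local cluster = (import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet').postgresql.v1.cluster;

cluster.new('example-db')
+ cluster.spec.withEnv([
  { name: 'TZ', value: 'UTC' },
])
// The Mixin variant appends to the list set above instead of replacing it.
+ cluster.spec.withEnvMixin([
  { name: 'LANG', value: 'en_US.utf8' },
])
+ cluster.spec.withEnvFrom([
  { secretRef: { name: 'extra-db-env' } },  // hypothetical Secret holding extra variables
])
```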

fn spec.withExternalClusters

withExternalClusters(externalClusters)

"The list of external clusters which are used in the configuration"

fn spec.withExternalClustersMixin

withExternalClustersMixin(externalClusters)

"The list of external clusters which are used in the configuration"

Note: This function appends passed data to existing values

fn spec.withFailoverDelay

withFailoverDelay(failoverDelay)

"The amount of time (in seconds) to wait before triggering a failover\nafter the primary PostgreSQL instance in the cluster was detected\nto be unhealthy"

fn spec.withImageName

withImageName(imageName)

"Name of the container image, supporting both tags (<image>:<tag>)\nand digests for deterministic and repeatable deployments\n(<image>:<tag>@sha256:<digestValue>)"

fn spec.withImagePullPolicy

withImagePullPolicy(imagePullPolicy)

"Image pull policy.\nOne of Always, Never or IfNotPresent.\nIf not defined, it defaults to IfNotPresent.\nCannot be updated.\nMore info: https://kubernetes.io/docs/concepts/containers/images#updating-images"

fn spec.withImagePullSecrets

withImagePullSecrets(imagePullSecrets)

"The list of pull secrets to be used to pull the images"

fn spec.withImagePullSecretsMixin

withImagePullSecretsMixin(imagePullSecrets)

"The list of pull secrets to be used to pull the images"

Note: This function appends passed data to existing values

fn spec.withInstances

withInstances(instances)

"Number of instances required in the cluster"

fn spec.withLivenessProbeTimeout

withLivenessProbeTimeout(livenessProbeTimeout)

"LivenessProbeTimeout is the time (in seconds) that is allowed for a PostgreSQL instance\nto successfully respond to the liveness probe (default 30).\nThe Liveness probe failure threshold is derived from this value using the formula:\nceiling(livenessProbe / 10)."

fn spec.withLogLevel

withLogLevel(logLevel)

"The instances' log level, one of the following values: error, warning, info (default), debug, trace"

fn spec.withMaxSyncReplicas

withMaxSyncReplicas(maxSyncReplicas)

"The target value for the synchronous replication quorum, that can be\ndecreased if the number of ready standbys is lower than this.\nUndefined or 0 disable synchronous replication."

fn spec.withMinSyncReplicas

withMinSyncReplicas(minSyncReplicas)

"Minimum number of instances required in synchronous replication with the\nprimary. Undefined or 0 allow writes to complete when no standby is\navailable."

fn spec.withPlugins

withPlugins(plugins)

"The plugins configuration, containing\nany plugin to be loaded with the corresponding configuration"

fn spec.withPluginsMixin

withPluginsMixin(plugins)

"The plugins configuration, containing\nany plugin to be loaded with the corresponding configuration"

Note: This function appends passed data to existing values

fn spec.withPostgresGID

withPostgresGID(postgresGID)

"The GID of the postgres user inside the image, defaults to 26"

fn spec.withPostgresUID

withPostgresUID(postgresUID)

"The UID of the postgres user inside the image, defaults to 26"

fn spec.withPrimaryUpdateMethod

withPrimaryUpdateMethod(primaryUpdateMethod)

"Method to follow to upgrade the primary server during a rolling\nupdate procedure, after all replicas have been successfully updated:\nit can be with a switchover (switchover) or in-place (restart - default)"

fn spec.withPrimaryUpdateStrategy

withPrimaryUpdateStrategy(primaryUpdateStrategy)

"Deployment strategy to follow to upgrade the primary server during a rolling\nupdate procedure, after all replicas have been successfully updated:\nit can be automated (unsupervised - default) or manual (supervised)"

fn spec.withPriorityClassName

withPriorityClassName(priorityClassName)

"Name of the priority class which will be used in every generated Pod, if the PriorityClass\nspecified does not exist, the pod will not be able to schedule. Please refer to\nhttps://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass\nfor more information"

fn spec.withSchedulerName

withSchedulerName(schedulerName)

"If specified, the pod will be dispatched by specified Kubernetes\nscheduler. If not specified, the pod will be dispatched by the default\nscheduler. More info:\nhttps://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/"

fn spec.withSmartShutdownTimeout

withSmartShutdownTimeout(smartShutdownTimeout)

"The time in seconds that controls the window of time reserved for the smart shutdown of Postgres to complete.\nMake sure you reserve enough time for the operator to request a fast shutdown of Postgres\n(that is: stopDelay - smartShutdownTimeout)."

fn spec.withStartDelay

withStartDelay(startDelay)

"The time in seconds that is allowed for a PostgreSQL instance to\nsuccessfully start up (default 3600).\nThe startup probe failure threshold is derived from this value using the formula:\nceiling(startDelay / 10)."

fn spec.withStopDelay

withStopDelay(stopDelay)

"The time in seconds that is allowed for a PostgreSQL instance to\ngracefully shutdown (default 1800)"

fn spec.withSwitchoverDelay

withSwitchoverDelay(switchoverDelay)

"The time in seconds that is allowed for a primary PostgreSQL instance\nto gracefully shutdown during a switchover.\nDefault value is 3600 seconds (1 hour)."

fn spec.withTablespaces

withTablespaces(tablespaces)

"The tablespaces configuration"

fn spec.withTablespacesMixin

withTablespacesMixin(tablespaces)

"The tablespaces configuration"

Note: This function appends passed data to existing values

fn spec.withTopologySpreadConstraints

withTopologySpreadConstraints(topologySpreadConstraints)

"TopologySpreadConstraints specifies how to spread matching pods among the given topology.\nMore info:\nhttps://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/"

fn spec.withTopologySpreadConstraintsMixin

withTopologySpreadConstraintsMixin(topologySpreadConstraints)

"TopologySpreadConstraints specifies how to spread matching pods among the given topology.\nMore info:\nhttps://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/"

Note: This function appends passed data to existing values
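
A sketch of spreading instances across zones with a topology spread constraint (the constraint is a plain Kubernetes TopologySpreadConstraint object; the label selector shown is an assumption and should match the labels actually applied to your pods):

```jsonnet
// Import path is illustrative; adjust it to your vendored copy of the library.
local cluster = (import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet').postgresql.v1.cluster;

cluster.new('example-db')
+ cluster.spec.withTopologySpreadConstraints([
  {
    maxSkew: 1,
    topologyKey: 'topology.kubernetes.io/zone',
    whenUnsatisfiable: 'ScheduleAnyway',
    // Illustrative selector; adjust to the labels present on your cluster's pods.
    labelSelector: { matchLabels: { 'cnpg.io/cluster': 'example-db' } },
  },
])
```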

obj spec.affinity

"Affinity/Anti-affinity rules for Pods"

fn spec.affinity.withEnablePodAntiAffinity

withEnablePodAntiAffinity(enablePodAntiAffinity)

"Activates anti-affinity for the pods. The operator will define pods\nanti-affinity unless this field is explicitly set to false"

fn spec.affinity.withNodeSelector

withNodeSelector(nodeSelector)

"NodeSelector is map of key-value pairs used to define the nodes on which\nthe pods can run.\nMore info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/"

fn spec.affinity.withNodeSelectorMixin

withNodeSelectorMixin(nodeSelector)

"NodeSelector is map of key-value pairs used to define the nodes on which\nthe pods can run.\nMore info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/"

Note: This function appends passed data to existing values

fn spec.affinity.withPodAntiAffinityType

withPodAntiAffinityType(podAntiAffinityType)

"PodAntiAffinityType allows the user to decide whether pod anti-affinity between cluster instance has to be\nconsidered a strong requirement during scheduling or not. Allowed values are: \"preferred\" (default if empty) or\n\"required\". Setting it to \"required\", could lead to instances remaining pending until new kubernetes nodes are\nadded if all the existing nodes don't match the required pod anti-affinity rule.\nMore info:\nhttps://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity"

fn spec.affinity.withTolerations

withTolerations(tolerations)

"Tolerations is a list of Tolerations that should be set for all the pods, in order to allow them to run\non tainted nodes.\nMore info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/"

fn spec.affinity.withTolerationsMixin

withTolerationsMixin(tolerations)

"Tolerations is a list of Tolerations that should be set for all the pods, in order to allow them to run\non tainted nodes.\nMore info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/"

Note: This function appends passed data to existing values

fn spec.affinity.withTopologyKey

withTopologyKey(topologyKey)

"TopologyKey to use for anti-affinity configuration. See k8s documentation\nfor more info on that"

obj spec.affinity.additionalPodAffinity

"AdditionalPodAffinity allows to specify pod affinity terms to be passed to all the cluster's pods."

fn spec.affinity.additionalPodAffinity.withPreferredDuringSchedulingIgnoredDuringExecution

withPreferredDuringSchedulingIgnoredDuringExecution(preferredDuringSchedulingIgnoredDuringExecution)

"The scheduler will prefer to schedule pods to nodes that satisfy\nthe affinity expressions specified by this field, but it may choose\na node that violates one or more of the expressions. The node that is\nmost preferred is the one with the greatest sum of weights, i.e.\nfor each node that meets all of the scheduling requirements (resource\nrequest, requiredDuringScheduling affinity expressions, etc.),\ncompute a sum by iterating through the elements of this field and adding\n\"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the\nnode(s) with the highest sum are the most preferred."

fn spec.affinity.additionalPodAffinity.withPreferredDuringSchedulingIgnoredDuringExecutionMixin

withPreferredDuringSchedulingIgnoredDuringExecutionMixin(preferredDuringSchedulingIgnoredDuringExecution)

"The scheduler will prefer to schedule pods to nodes that satisfy\nthe affinity expressions specified by this field, but it may choose\na node that violates one or more of the expressions. The node that is\nmost preferred is the one with the greatest sum of weights, i.e.\nfor each node that meets all of the scheduling requirements (resource\nrequest, requiredDuringScheduling affinity expressions, etc.),\ncompute a sum by iterating through the elements of this field and adding\n\"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the\nnode(s) with the highest sum are the most preferred."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAffinity.withRequiredDuringSchedulingIgnoredDuringExecution

withRequiredDuringSchedulingIgnoredDuringExecution(requiredDuringSchedulingIgnoredDuringExecution)

"If the affinity requirements specified by this field are not met at\nscheduling time, the pod will not be scheduled onto the node.\nIf the affinity requirements specified by this field cease to be met\nat some point during pod execution (e.g. due to a pod label update), the\nsystem may or may not try to eventually evict the pod from its node.\nWhen there are multiple elements, the lists of nodes corresponding to each\npodAffinityTerm are intersected, i.e. all terms must be satisfied."

fn spec.affinity.additionalPodAffinity.withRequiredDuringSchedulingIgnoredDuringExecutionMixin

withRequiredDuringSchedulingIgnoredDuringExecutionMixin(requiredDuringSchedulingIgnoredDuringExecution)

"If the affinity requirements specified by this field are not met at\nscheduling time, the pod will not be scheduled onto the node.\nIf the affinity requirements specified by this field cease to be met\nat some point during pod execution (e.g. due to a pod label update), the\nsystem may or may not try to eventually evict the pod from its node.\nWhen there are multiple elements, the lists of nodes corresponding to each\npodAffinityTerm are intersected, i.e. all terms must be satisfied."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution

"The scheduler will prefer to schedule pods to nodes that satisfy\nthe affinity expressions specified by this field, but it may choose\na node that violates one or more of the expressions. The node that is\nmost preferred is the one with the greatest sum of weights, i.e.\nfor each node that meets all of the scheduling requirements (resource\nrequest, requiredDuringScheduling affinity expressions, etc.),\ncompute a sum by iterating through the elements of this field and adding\n\"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the\nnode(s) with the highest sum are the most preferred."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.withWeight

withWeight(weight)

"weight associated with matching the corresponding podAffinityTerm,\nin the range 1-100."

obj spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm

"Required. A pod affinity term, associated with the corresponding weight."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withMatchLabelKeys

withMatchLabelKeys(matchLabelKeys)

"MatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key in (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both matchLabelKeys and labelSelector.\nAlso, matchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withMatchLabelKeysMixin

withMatchLabelKeysMixin(matchLabelKeys)

"MatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key in (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both matchLabelKeys and labelSelector.\nAlso, matchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withMismatchLabelKeys

withMismatchLabelKeys(mismatchLabelKeys)

"MismatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key notin (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both mismatchLabelKeys and labelSelector.\nAlso, mismatchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withMismatchLabelKeysMixin

withMismatchLabelKeysMixin(mismatchLabelKeys)

"MismatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key notin (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both mismatchLabelKeys and labelSelector.\nAlso, mismatchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withNamespaces

withNamespaces(namespaces)

"namespaces specifies a static list of namespace names that the term applies to.\nThe term is applied to the union of the namespaces listed in this field\nand the ones selected by namespaceSelector.\nnull or empty namespaces list and null namespaceSelector means \"this pod's namespace\"."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withNamespacesMixin

withNamespacesMixin(namespaces)

"namespaces specifies a static list of namespace names that the term applies to.\nThe term is applied to the union of the namespaces listed in this field\nand the ones selected by namespaceSelector.\nnull or empty namespaces list and null namespaceSelector means \"this pod's namespace\"."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withTopologyKey

withTopologyKey(topologyKey)

"This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching\nthe labelSelector in the specified namespaces, where co-located is defined as running on a node\nwhose value of the label with key topologyKey matches that of any node on which any of the\nselected pods is running.\nEmpty topologyKey is not allowed."

obj spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector

"A label query over a set of resources, in this case pods.\nIf it's null, this PodAffinityTerm matches with no Pods."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values
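
A sketch of building one weighted pod affinity term from the generators above and passing it to `withPreferredDuringSchedulingIgnoredDuringExecution` (the application label is hypothetical):

```jsonnet
// Import path is illustrative; adjust it to your vendored copy of the library.
local cluster = (import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet').postgresql.v1.cluster;
local pref = cluster.spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution;

cluster.new('example-db')
+ cluster.spec.affinity.additionalPodAffinity.withPreferredDuringSchedulingIgnoredDuringExecution([
  // Prefer to land on the same node as the (hypothetical) application pods.
  pref.withWeight(50)
  + pref.podAffinityTerm.withTopologyKey('kubernetes.io/hostname')
  + pref.podAffinityTerm.labelSelector.withMatchLabels({ app: 'orders-api' }),
])
```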

obj spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values
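
A sketch of the matchExpressions form of the label selector, for the case where matchLabels is not expressive enough (label key and values are hypothetical):

```jsonnet
// Import path is illustrative; adjust it to your vendored copy of the library.
local cluster = (import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet').postgresql.v1.cluster;
local pref = cluster.spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution;
local expr = pref.podAffinityTerm.labelSelector.matchExpressions;

// A single label-selector requirement: app in (orders-api, orders-worker).
local requirement =
  expr.withKey('app')
  + expr.withOperator('In')
  + expr.withValues(['orders-api', 'orders-worker']);

cluster.new('example-db')
+ cluster.spec.affinity.additionalPodAffinity.withPreferredDuringSchedulingIgnoredDuringExecution([
  pref.withWeight(10)
  + pref.podAffinityTerm.withTopologyKey('kubernetes.io/hostname')
  + pref.podAffinityTerm.labelSelector.withMatchExpressions([requirement]),
])
```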

obj spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector

"A label query over the set of namespaces that the term applies to.\nThe term is applied to the union of the namespaces selected by this field\nand the ones listed in the namespaces field.\nnull selector and null or empty namespaces list means \"this pod's namespace\".\nAn empty selector ({}) matches all namespaces."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.affinity.additionalPodAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution

"If the affinity requirements specified by this field are not met at\nscheduling time, the pod will not be scheduled onto the node.\nIf the affinity requirements specified by this field cease to be met\nat some point during pod execution (e.g. due to a pod label update), the\nsystem may or may not try to eventually evict the pod from its node.\nWhen there are multiple elements, the lists of nodes corresponding to each\npodAffinityTerm are intersected, i.e. all terms must be satisfied."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.withMatchLabelKeys

withMatchLabelKeys(matchLabelKeys)

"MatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key in (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both matchLabelKeys and labelSelector.\nAlso, matchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.withMatchLabelKeysMixin

withMatchLabelKeysMixin(matchLabelKeys)

"MatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key in (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both matchLabelKeys and labelSelector.\nAlso, matchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.withMismatchLabelKeys

withMismatchLabelKeys(mismatchLabelKeys)

"MismatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key notin (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both mismatchLabelKeys and labelSelector.\nAlso, mismatchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.withMismatchLabelKeysMixin

withMismatchLabelKeysMixin(mismatchLabelKeys)

"MismatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key notin (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both mismatchLabelKeys and labelSelector.\nAlso, mismatchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.withNamespaces

withNamespaces(namespaces)

"namespaces specifies a static list of namespace names that the term applies to.\nThe term is applied to the union of the namespaces listed in this field\nand the ones selected by namespaceSelector.\nnull or empty namespaces list and null namespaceSelector means \"this pod's namespace\"."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.withNamespacesMixin

withNamespacesMixin(namespaces)

"namespaces specifies a static list of namespace names that the term applies to.\nThe term is applied to the union of the namespaces listed in this field\nand the ones selected by namespaceSelector.\nnull or empty namespaces list and null namespaceSelector means \"this pod's namespace\"."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.withTopologyKey

withTopologyKey(topologyKey)

"This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching\nthe labelSelector in the specified namespaces, where co-located is defined as running on a node\nwhose value of the label with key topologyKey matches that of any node on which any of the\nselected pods is running.\nEmpty topologyKey is not allowed."

obj spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector

"A label query over a set of resources, in this case pods.\nIf it's null, this PodAffinityTerm matches with no Pods."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector

"A label query over the set of namespaces that the term applies to.\nThe term is applied to the union of the namespaces selected by this field\nand the ones listed in the namespaces field.\nnull selector and null or empty namespaces list means \"this pod's namespace\".\nAn empty selector ({}) matches all namespaces."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values
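
A sketch of a hard (required) pod affinity term built from the generators above; unlike the preferred form it has no weight and is passed directly as a list element (the label is hypothetical):

```jsonnet
// Import path is illustrative; adjust it to your vendored copy of the library.
local cluster = (import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet').postgresql.v1.cluster;
local req = cluster.spec.affinity.additionalPodAffinity.requiredDuringSchedulingIgnoredDuringExecution;

cluster.new('example-db')
+ cluster.spec.affinity.additionalPodAffinity.withRequiredDuringSchedulingIgnoredDuringExecution([
  // Hard requirement: run in the same zone as the (hypothetical) cache pods.
  req.withTopologyKey('topology.kubernetes.io/zone')
  + req.labelSelector.withMatchLabels({ app: 'orders-cache' }),
])
```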

obj spec.affinity.additionalPodAntiAffinity

"AdditionalPodAntiAffinity allows to specify pod anti-affinity terms to be added to the ones generated\nby the operator if EnablePodAntiAffinity is set to true (default) or to be used exclusively if set to false."

fn spec.affinity.additionalPodAntiAffinity.withPreferredDuringSchedulingIgnoredDuringExecution

withPreferredDuringSchedulingIgnoredDuringExecution(preferredDuringSchedulingIgnoredDuringExecution)

"The scheduler will prefer to schedule pods to nodes that satisfy\nthe anti-affinity expressions specified by this field, but it may choose\na node that violates one or more of the expressions. The node that is\nmost preferred is the one with the greatest sum of weights, i.e.\nfor each node that meets all of the scheduling requirements (resource\nrequest, requiredDuringScheduling anti-affinity expressions, etc.),\ncompute a sum by iterating through the elements of this field and adding\n\"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the\nnode(s) with the highest sum are the most preferred."

fn spec.affinity.additionalPodAntiAffinity.withPreferredDuringSchedulingIgnoredDuringExecutionMixin

withPreferredDuringSchedulingIgnoredDuringExecutionMixin(preferredDuringSchedulingIgnoredDuringExecution)

"The scheduler will prefer to schedule pods to nodes that satisfy\nthe anti-affinity expressions specified by this field, but it may choose\na node that violates one or more of the expressions. The node that is\nmost preferred is the one with the greatest sum of weights, i.e.\nfor each node that meets all of the scheduling requirements (resource\nrequest, requiredDuringScheduling anti-affinity expressions, etc.),\ncompute a sum by iterating through the elements of this field and adding\n\"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the\nnode(s) with the highest sum are the most preferred."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAntiAffinity.withRequiredDuringSchedulingIgnoredDuringExecution

withRequiredDuringSchedulingIgnoredDuringExecution(requiredDuringSchedulingIgnoredDuringExecution)

"If the anti-affinity requirements specified by this field are not met at\nscheduling time, the pod will not be scheduled onto the node.\nIf the anti-affinity requirements specified by this field cease to be met\nat some point during pod execution (e.g. due to a pod label update), the\nsystem may or may not try to eventually evict the pod from its node.\nWhen there are multiple elements, the lists of nodes corresponding to each\npodAffinityTerm are intersected, i.e. all terms must be satisfied."

fn spec.affinity.additionalPodAntiAffinity.withRequiredDuringSchedulingIgnoredDuringExecutionMixin

withRequiredDuringSchedulingIgnoredDuringExecutionMixin(requiredDuringSchedulingIgnoredDuringExecution)

"If the anti-affinity requirements specified by this field are not met at\nscheduling time, the pod will not be scheduled onto the node.\nIf the anti-affinity requirements specified by this field cease to be met\nat some point during pod execution (e.g. due to a pod label update), the\nsystem may or may not try to eventually evict the pod from its node.\nWhen there are multiple elements, the lists of nodes corresponding to each\npodAffinityTerm are intersected, i.e. all terms must be satisfied."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution

"The scheduler will prefer to schedule pods to nodes that satisfy\nthe anti-affinity expressions specified by this field, but it may choose\na node that violates one or more of the expressions. The node that is\nmost preferred is the one with the greatest sum of weights, i.e.\nfor each node that meets all of the scheduling requirements (resource\nrequest, requiredDuringScheduling anti-affinity expressions, etc.),\ncompute a sum by iterating through the elements of this field and adding\n\"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the\nnode(s) with the highest sum are the most preferred."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.withWeight

withWeight(weight)

"weight associated with matching the corresponding podAffinityTerm,\nin the range 1-100."

obj spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm

"Required. A pod affinity term, associated with the corresponding weight."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withMatchLabelKeys

withMatchLabelKeys(matchLabelKeys)

"MatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key in (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both matchLabelKeys and labelSelector.\nAlso, matchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withMatchLabelKeysMixin

withMatchLabelKeysMixin(matchLabelKeys)

"MatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key in (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both matchLabelKeys and labelSelector.\nAlso, matchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withMismatchLabelKeys

withMismatchLabelKeys(mismatchLabelKeys)

"MismatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key notin (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both mismatchLabelKeys and labelSelector.\nAlso, mismatchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withMismatchLabelKeysMixin

withMismatchLabelKeysMixin(mismatchLabelKeys)

"MismatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key notin (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both mismatchLabelKeys and labelSelector.\nAlso, mismatchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withNamespaces

withNamespaces(namespaces)

"namespaces specifies a static list of namespace names that the term applies to.\nThe term is applied to the union of the namespaces listed in this field\nand the ones selected by namespaceSelector.\nnull or empty namespaces list and null namespaceSelector means \"this pod's namespace\"."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withNamespacesMixin

withNamespacesMixin(namespaces)

"namespaces specifies a static list of namespace names that the term applies to.\nThe term is applied to the union of the namespaces listed in this field\nand the ones selected by namespaceSelector.\nnull or empty namespaces list and null namespaceSelector means \"this pod's namespace\"."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.withTopologyKey

withTopologyKey(topologyKey)

"This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching\nthe labelSelector in the specified namespaces, where co-located is defined as running on a node\nwhose value of the label with key topologyKey matches that of any node on which any of the\nselected pods is running.\nEmpty topologyKey is not allowed."

obj spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector

"A label query over a set of resources, in this case pods.\nIf it's null, this PodAffinityTerm matches with no Pods."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector

"A label query over the set of namespaces that the term applies to.\nThe term is applied to the union of the namespaces selected by this field\nand the ones listed in the namespaces field.\nnull selector and null or empty namespaces list means \"this pod's namespace\".\nAn empty selector ({}) matches all namespaces."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values
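
As a sketch of how these element builders compose, the example below assembles a single weighted anti-affinity term from the functions documented above. The import path, the list-level withPreferredDuringSchedulingIgnoredDuringExecution setter and the withWeight helper (both documented earlier for this object), and the label values are assumptions for illustration only.

```jsonnet
// Import path is a placeholder; point it at your vendored copy of this library.
local cnpg = import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet';
local cluster = cnpg.postgresql.v1.cluster;

// Shorthands for the element builders documented above.
local term = cluster.spec.affinity.additionalPodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution;
local expr = term.podAffinityTerm.labelSelector.matchExpressions;

cluster.new('pg-main')
+ cluster.spec.affinity.additionalPodAntiAffinity.withPreferredDuringSchedulingIgnoredDuringExecution([
    term.withWeight(100)  // weight helper assumed from the WeightedPodAffinityTerm schema
    + term.podAffinityTerm.withTopologyKey('kubernetes.io/hostname')
    + term.podAffinityTerm.labelSelector.withMatchExpressions([
        expr.withKey('app')
        + expr.withOperator('In')
        + expr.withValues(['analytics']),  // example label values
      ]),
  ])
```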

obj spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution

"If the anti-affinity requirements specified by this field are not met at\nscheduling time, the pod will not be scheduled onto the node.\nIf the anti-affinity requirements specified by this field cease to be met\nat some point during pod execution (e.g. due to a pod label update), the\nsystem may or may not try to eventually evict the pod from its node.\nWhen there are multiple elements, the lists of nodes corresponding to each\npodAffinityTerm are intersected, i.e. all terms must be satisfied."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.withMatchLabelKeys

withMatchLabelKeys(matchLabelKeys)

"MatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key in (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both matchLabelKeys and labelSelector.\nAlso, matchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.withMatchLabelKeysMixin

withMatchLabelKeysMixin(matchLabelKeys)

"MatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key in (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both matchLabelKeys and labelSelector.\nAlso, matchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.withMismatchLabelKeys

withMismatchLabelKeys(mismatchLabelKeys)

"MismatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key notin (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both mismatchLabelKeys and labelSelector.\nAlso, mismatchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.withMismatchLabelKeysMixin

withMismatchLabelKeysMixin(mismatchLabelKeys)

"MismatchLabelKeys is a set of pod label keys to select which pods will\nbe taken into consideration. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are merged with labelSelector as key notin (value)\nto select the group of existing pods which pods will be taken into consideration\nfor the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming\npod labels will be ignored. The default value is empty.\nThe same key is forbidden to exist in both mismatchLabelKeys and labelSelector.\nAlso, mismatchLabelKeys cannot be set when labelSelector isn't set.\nThis is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.withNamespaces

withNamespaces(namespaces)

"namespaces specifies a static list of namespace names that the term applies to.\nThe term is applied to the union of the namespaces listed in this field\nand the ones selected by namespaceSelector.\nnull or empty namespaces list and null namespaceSelector means \"this pod's namespace\"."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.withNamespacesMixin

withNamespacesMixin(namespaces)

"namespaces specifies a static list of namespace names that the term applies to.\nThe term is applied to the union of the namespaces listed in this field\nand the ones selected by namespaceSelector.\nnull or empty namespaces list and null namespaceSelector means \"this pod's namespace\"."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.withTopologyKey

withTopologyKey(topologyKey)

"This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching\nthe labelSelector in the specified namespaces, where co-located is defined as running on a node\nwhose value of the label with key topologyKey matches that of any node on which any of the\nselected pods is running.\nEmpty topologyKey is not allowed."

obj spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector

"A label query over a set of resources, in this case pods.\nIf it's null, this PodAffinityTerm matches with no Pods."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector

"A label query over the set of namespaces that the term applies to.\nThe term is applied to the union of the namespaces selected by this field\nand the ones listed in the namespaces field.\nnull selector and null or empty namespaces list means \"this pod's namespace\".\nAn empty selector ({}) matches all namespaces."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values
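
A minimal sketch of a hard anti-affinity rule built from the functions above; the import path, the list-level withRequiredDuringSchedulingIgnoredDuringExecution setter (documented earlier for this object), and the example label are assumptions.

```jsonnet
// Import path is a placeholder; point it at your vendored copy of this library.
local cnpg = import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet';
local cluster = cnpg.postgresql.v1.cluster;

local required = cluster.spec.affinity.additionalPodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution;

cluster.new('pg-main')
+ cluster.spec.affinity.additionalPodAntiAffinity.withRequiredDuringSchedulingIgnoredDuringExecution([
    // Hard rule: never schedule next to pods carrying this (example) label on the same node.
    required.withTopologyKey('kubernetes.io/hostname')
    + required.labelSelector.withMatchLabels({ 'app.kubernetes.io/name': 'noisy-neighbour' }),
  ])
```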

obj spec.affinity.nodeAffinity

"NodeAffinity describes node affinity scheduling rules for the pod.\nMore info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity"

fn spec.affinity.nodeAffinity.withPreferredDuringSchedulingIgnoredDuringExecution

withPreferredDuringSchedulingIgnoredDuringExecution(preferredDuringSchedulingIgnoredDuringExecution)

"The scheduler will prefer to schedule pods to nodes that satisfy\nthe affinity expressions specified by this field, but it may choose\na node that violates one or more of the expressions. The node that is\nmost preferred is the one with the greatest sum of weights, i.e.\nfor each node that meets all of the scheduling requirements (resource\nrequest, requiredDuringScheduling affinity expressions, etc.),\ncompute a sum by iterating through the elements of this field and adding\n\"weight\" to the sum if the node matches the corresponding matchExpressions; the\nnode(s) with the highest sum are the most preferred."

fn spec.affinity.nodeAffinity.withPreferredDuringSchedulingIgnoredDuringExecutionMixin

withPreferredDuringSchedulingIgnoredDuringExecutionMixin(preferredDuringSchedulingIgnoredDuringExecution)

"The scheduler will prefer to schedule pods to nodes that satisfy\nthe affinity expressions specified by this field, but it may choose\na node that violates one or more of the expressions. The node that is\nmost preferred is the one with the greatest sum of weights, i.e.\nfor each node that meets all of the scheduling requirements (resource\nrequest, requiredDuringScheduling affinity expressions, etc.),\ncompute a sum by iterating through the elements of this field and adding\n\"weight\" to the sum if the node matches the corresponding matchExpressions; the\nnode(s) with the highest sum are the most preferred."

Note: This function appends passed data to existing values

obj spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution

"The scheduler will prefer to schedule pods to nodes that satisfy\nthe affinity expressions specified by this field, but it may choose\na node that violates one or more of the expressions. The node that is\nmost preferred is the one with the greatest sum of weights, i.e.\nfor each node that meets all of the scheduling requirements (resource\nrequest, requiredDuringScheduling affinity expressions, etc.),\ncompute a sum by iterating through the elements of this field and adding\n\"weight\" to the sum if the node matches the corresponding matchExpressions; the\nnode(s) with the highest sum are the most preferred."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.withWeight

withWeight(weight)

"Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100."

obj spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference

"A node selector term, associated with the corresponding weight."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.withMatchExpressions

withMatchExpressions(matchExpressions)

"A list of node selector requirements by node's labels."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"A list of node selector requirements by node's labels."

Note: This function appends passed data to existing values

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.withMatchFields

withMatchFields(matchFields)

"A list of node selector requirements by node's fields."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.withMatchFieldsMixin

withMatchFieldsMixin(matchFields)

"A list of node selector requirements by node's fields."

Note: This function appends passed data to existing values

obj spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.matchExpressions

"A list of node selector requirements by node's labels."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.matchExpressions.withKey

withKey(key)

"The label key that the selector applies to."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.matchExpressions.withOperator

withOperator(operator)

"Represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.matchExpressions.withValues

withValues(values)

"An array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. If the operator is Gt or Lt, the values\narray must have a single element, which will be interpreted as an integer.\nThis array is replaced during a strategic merge patch."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.matchExpressions.withValuesMixin

withValuesMixin(values)

"An array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. If the operator is Gt or Lt, the values\narray must have a single element, which will be interpreted as an integer.\nThis array is replaced during a strategic merge patch."

Note: This function appends passed data to existing values

obj spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.matchFields

"A list of node selector requirements by node's fields."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.matchFields.withKey

withKey(key)

"The label key that the selector applies to."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.matchFields.withOperator

withOperator(operator)

"Represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.matchFields.withValues

withValues(values)

"An array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. If the operator is Gt or Lt, the values\narray must have a single element, which will be interpreted as an integer.\nThis array is replaced during a strategic merge patch."

fn spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution.preference.matchFields.withValuesMixin

withValuesMixin(values)

"An array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. If the operator is Gt or Lt, the values\narray must have a single element, which will be interpreted as an integer.\nThis array is replaced during a strategic merge patch."

Note: This function appends passed data to existing values
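
The following sketch wires the preferred node-affinity builders above together: a weight plus one preference expression. Only the import path and the node label key are assumed.

```jsonnet
// Import path is a placeholder; point it at your vendored copy of this library.
local cnpg = import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet';
local cluster = cnpg.postgresql.v1.cluster;

local preferred = cluster.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution;
local expr = preferred.preference.matchExpressions;

cluster.new('pg-main')
+ cluster.spec.affinity.nodeAffinity.withPreferredDuringSchedulingIgnoredDuringExecution([
    // Prefer (weight 80) nodes carrying a hypothetical database role label.
    preferred.withWeight(80)
    + preferred.preference.withMatchExpressions([
        expr.withKey('node-role.example.com/database')
        + expr.withOperator('Exists'),
      ]),
  ])
```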

obj spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution

"If the affinity requirements specified by this field are not met at\nscheduling time, the pod will not be scheduled onto the node.\nIf the affinity requirements specified by this field cease to be met\nat some point during pod execution (e.g. due to an update), the system\nmay or may not try to eventually evict the pod from its node."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.withNodeSelectorTerms

withNodeSelectorTerms(nodeSelectorTerms)

"Required. A list of node selector terms. The terms are ORed."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.withNodeSelectorTermsMixin

withNodeSelectorTermsMixin(nodeSelectorTerms)

"Required. A list of node selector terms. The terms are ORed."

Note: This function appends passed data to existing values

obj spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms

"Required. A list of node selector terms. The terms are ORed."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.withMatchExpressions

withMatchExpressions(matchExpressions)

"A list of node selector requirements by node's labels."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"A list of node selector requirements by node's labels."

Note: This function appends passed data to existing values

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.withMatchFields

withMatchFields(matchFields)

"A list of node selector requirements by node's fields."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.withMatchFieldsMixin

withMatchFieldsMixin(matchFields)

"A list of node selector requirements by node's fields."

Note: This function appends passed data to existing values

obj spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions

"A list of node selector requirements by node's labels."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.withKey

withKey(key)

"The label key that the selector applies to."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.withOperator

withOperator(operator)

"Represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.withValues

withValues(values)

"An array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. If the operator is Gt or Lt, the values\narray must have a single element, which will be interpreted as an integer.\nThis array is replaced during a strategic merge patch."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.withValuesMixin

withValuesMixin(values)

"An array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. If the operator is Gt or Lt, the values\narray must have a single element, which will be interpreted as an integer.\nThis array is replaced during a strategic merge patch."

Note: This function appends passed data to existing values

obj spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields

"A list of node selector requirements by node's fields."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields.withKey

withKey(key)

"The label key that the selector applies to."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields.withOperator

withOperator(operator)

"Represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields.withValues

withValues(values)

"An array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. If the operator is Gt or Lt, the values\narray must have a single element, which will be interpreted as an integer.\nThis array is replaced during a strategic merge patch."

fn spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields.withValuesMixin

withValuesMixin(values)

"An array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. If the operator is Gt or Lt, the values\narray must have a single element, which will be interpreted as an integer.\nThis array is replaced during a strategic merge patch."

Note: This function appends passed data to existing values
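
A short sketch of a required node-affinity rule using the nodeSelectorTerms builders above; the import path and the storage label are assumptions.

```jsonnet
// Import path is a placeholder; point it at your vendored copy of this library.
local cnpg = import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet';
local cluster = cnpg.postgresql.v1.cluster;

local selectorTerm = cluster.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms;
local expr = selectorTerm.matchExpressions;

cluster.new('pg-main')
+ cluster.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.withNodeSelectorTerms([
    // Terms are ORed; this single term requires a hypothetical storage-class label.
    selectorTerm.withMatchExpressions([
      expr.withKey('storage.example.com/class')
      + expr.withOperator('In')
      + expr.withValues(['ssd']),
    ]),
  ])
```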

obj spec.affinity.tolerations

"Tolerations is a list of Tolerations that should be set for all the pods, in order to allow them to run\non tainted nodes.\nMore info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/"

fn spec.affinity.tolerations.withEffect

withEffect(effect)

"Effect indicates the taint effect to match. Empty means match all taint effects.\nWhen specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute."

fn spec.affinity.tolerations.withKey

withKey(key)

"Key is the taint key that the toleration applies to. Empty means match all taint keys.\nIf the key is empty, operator must be Exists; this combination means to match all values and all keys."

fn spec.affinity.tolerations.withOperator

withOperator(operator)

"Operator represents a key's relationship to the value.\nValid operators are Exists and Equal. Defaults to Equal.\nExists is equivalent to wildcard for value, so that a pod can\ntolerate all taints of a particular category."

fn spec.affinity.tolerations.withTolerationSeconds

withTolerationSeconds(tolerationSeconds)

"TolerationSeconds represents the period of time the toleration (which must be\nof effect NoExecute, otherwise this field is ignored) tolerates the taint. By default,\nit is not set, which means tolerate the taint forever (do not evict). Zero and\nnegative values will be treated as 0 (evict immediately) by the system."

fn spec.affinity.tolerations.withValue

withValue(value)

"Value is the taint value the toleration matches to.\nIf the operator is Exists, the value should be empty, otherwise just a regular string."

obj spec.backup

"The configuration to be used for backups"

fn spec.backup.withRetentionPolicy

withRetentionPolicy(retentionPolicy)

"RetentionPolicy is the retention policy to be used for backups\nand WALs (i.e. '60d'). The retention policy is expressed in the form\nof XXu where XX is a positive integer and u is in [dwm] -\ndays, weeks, months.\nIt's currently only applicable when using the BarmanObjectStore method."

fn spec.backup.withTarget

withTarget(target)

"The policy to decide which instance should perform backups. Available\noptions are empty string, which will default to prefer-standby policy,\nprimary to have backups run always on primary instances, prefer-standby\nto have backups run preferably on the most updated standby, if available."

obj spec.backup.barmanObjectStore

"The configuration for the barman-cloud tool suite"

fn spec.backup.barmanObjectStore.withDestinationPath

withDestinationPath(destinationPath)

"The path where to store the backup (i.e. s3://bucket/path/to/folder)\nthis path, with different destination folders, will be used for WALs\nand for data"

fn spec.backup.barmanObjectStore.withEndpointURL

withEndpointURL(endpointURL)

"Endpoint to be used to upload data to the cloud,\noverriding the automatic endpoint discovery"

fn spec.backup.barmanObjectStore.withHistoryTags

withHistoryTags(historyTags)

"HistoryTags is a list of key value pairs that will be passed to the\nBarman --history-tags option."

fn spec.backup.barmanObjectStore.withHistoryTagsMixin

withHistoryTagsMixin(historyTags)

"HistoryTags is a list of key value pairs that will be passed to the\nBarman --history-tags option."

Note: This function appends passed data to existing values

fn spec.backup.barmanObjectStore.withServerName

withServerName(serverName)

"The server name on S3, the cluster name is used if this\nparameter is omitted"

fn spec.backup.barmanObjectStore.withTags

withTags(tags)

"Tags is a list of key value pairs that will be passed to the\nBarman --tags option."

fn spec.backup.barmanObjectStore.withTagsMixin

withTagsMixin(tags)

"Tags is a list of key value pairs that will be passed to the\nBarman --tags option."

Note: This function appends passed data to existing values
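
A sketch of a basic barmanObjectStore configuration using the functions above; the import path, bucket, endpoint and tag values are assumptions, and the tag map shape should be checked against your schema version.

```jsonnet
// Import path is a placeholder; point it at your vendored copy of this library.
local cnpg = import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet';
local cluster = cnpg.postgresql.v1.cluster;

local store = cluster.spec.backup.barmanObjectStore;

cluster.new('pg-main')
+ store.withDestinationPath('s3://my-backup-bucket/pg-main')  // hypothetical bucket
+ store.withEndpointURL('https://s3.example.com')             // only needed to override endpoint discovery
+ store.withServerName('pg-main')
+ store.withTags({ environment: 'production' })               // assumed key/value map shape
```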

obj spec.backup.barmanObjectStore.azureCredentials

"The credentials to use to upload data to Azure Blob Storage"

fn spec.backup.barmanObjectStore.azureCredentials.withInheritFromAzureAD

withInheritFromAzureAD(inheritFromAzureAD)

"Use the Azure AD based authentication without providing explicitly the keys."

obj spec.backup.barmanObjectStore.azureCredentials.connectionString

"The connection string to be used"

fn spec.backup.barmanObjectStore.azureCredentials.connectionString.withKey

withKey(key)

"The key to select"

fn spec.backup.barmanObjectStore.azureCredentials.connectionString.withName

withName(name)

"Name of the referent."

obj spec.backup.barmanObjectStore.azureCredentials.storageAccount

"The storage account where to upload data"

fn spec.backup.barmanObjectStore.azureCredentials.storageAccount.withKey

withKey(key)

"The key to select"

fn spec.backup.barmanObjectStore.azureCredentials.storageAccount.withName

withName(name)

"Name of the referent."

obj spec.backup.barmanObjectStore.azureCredentials.storageKey

"The storage account key to be used in conjunction\nwith the storage account name"

fn spec.backup.barmanObjectStore.azureCredentials.storageKey.withKey

withKey(key)

"The key to select"

fn spec.backup.barmanObjectStore.azureCredentials.storageKey.withName

withName(name)

"Name of the referent."

obj spec.backup.barmanObjectStore.azureCredentials.storageSasToken

"A shared-access-signature to be used in conjunction with\nthe storage account name"

fn spec.backup.barmanObjectStore.azureCredentials.storageSasToken.withKey

withKey(key)

"The key to select"

fn spec.backup.barmanObjectStore.azureCredentials.storageSasToken.withName

withName(name)

"Name of the referent."

obj spec.backup.barmanObjectStore.data

"The configuration to be used to backup the data files\nWhen not defined, base backups files will be stored uncompressed and may\nbe unencrypted in the object store, according to the bucket default\npolicy."

fn spec.backup.barmanObjectStore.data.withAdditionalCommandArgs

withAdditionalCommandArgs(additionalCommandArgs)

"AdditionalCommandArgs represents additional arguments that can be appended\nto the 'barman-cloud-backup' command-line invocation. These arguments\nprovide flexibility to customize the backup process further according to\nspecific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-backup' command, to avoid potential errors or unintended\nbehavior during execution."

fn spec.backup.barmanObjectStore.data.withAdditionalCommandArgsMixin

withAdditionalCommandArgsMixin(additionalCommandArgs)

"AdditionalCommandArgs represents additional arguments that can be appended\nto the 'barman-cloud-backup' command-line invocation. These arguments\nprovide flexibility to customize the backup process further according to\nspecific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-backup' command, to avoid potential errors or unintended\nbehavior during execution."

Note: This function appends passed data to existing values

fn spec.backup.barmanObjectStore.data.withCompression

withCompression(compression)

"Compress a backup file (a tar file per tablespace) while streaming it\nto the object store. Available options are empty string (no\ncompression, default), gzip, bzip2 or snappy."

fn spec.backup.barmanObjectStore.data.withEncryption

withEncryption(encryption)

"Whenever to force the encryption of files (if the bucket is\nnot already configured for that).\nAllowed options are empty string (use the bucket policy, default),\nAES256 and aws:kms"

fn spec.backup.barmanObjectStore.data.withImmediateCheckpoint

withImmediateCheckpoint(immediateCheckpoint)

"Control whether the I/O workload for the backup initial checkpoint will\nbe limited, according to the checkpoint_completion_target setting on\nthe PostgreSQL server. If set to true, an immediate checkpoint will be\nused, meaning PostgreSQL will complete the checkpoint as soon as\npossible. false by default."

fn spec.backup.barmanObjectStore.data.withJobs

withJobs(jobs)

"The number of parallel jobs to be used to upload the backup, defaults\nto 2"

obj spec.backup.barmanObjectStore.endpointCA

"EndpointCA store the CA bundle of the barman endpoint.\nUseful when using self-signed certificates to avoid\nerrors with certificate issuer and barman-cloud-wal-archive"

fn spec.backup.barmanObjectStore.endpointCA.withKey

withKey(key)

"The key to select"

fn spec.backup.barmanObjectStore.endpointCA.withName

withName(name)

"Name of the referent."

obj spec.backup.barmanObjectStore.googleCredentials

"The credentials to use to upload data to Google Cloud Storage"

fn spec.backup.barmanObjectStore.googleCredentials.withGkeEnvironment

withGkeEnvironment(gkeEnvironment)

"If set to true, will presume that it's running inside a GKE environment,\ndefault to false."

obj spec.backup.barmanObjectStore.googleCredentials.applicationCredentials

"The secret containing the Google Cloud Storage JSON file with the credentials"

fn spec.backup.barmanObjectStore.googleCredentials.applicationCredentials.withKey

withKey(key)

"The key to select"

fn spec.backup.barmanObjectStore.googleCredentials.applicationCredentials.withName

withName(name)

"Name of the referent."

obj spec.backup.barmanObjectStore.s3Credentials

"The credentials to use to upload data to S3"

fn spec.backup.barmanObjectStore.s3Credentials.withInheritFromIAMRole

withInheritFromIAMRole(inheritFromIAMRole)

"Use the role based authentication without providing explicitly the keys."

obj spec.backup.barmanObjectStore.s3Credentials.accessKeyId

"The reference to the access key id"

fn spec.backup.barmanObjectStore.s3Credentials.accessKeyId.withKey

withKey(key)

"The key to select"

fn spec.backup.barmanObjectStore.s3Credentials.accessKeyId.withName

withName(name)

"Name of the referent."

obj spec.backup.barmanObjectStore.s3Credentials.region

"The reference to the secret containing the region name"

fn spec.backup.barmanObjectStore.s3Credentials.region.withKey

withKey(key)

"The key to select"

fn spec.backup.barmanObjectStore.s3Credentials.region.withName

withName(name)

"Name of the referent."

obj spec.backup.barmanObjectStore.s3Credentials.secretAccessKey

"The reference to the secret access key"

fn spec.backup.barmanObjectStore.s3Credentials.secretAccessKey.withKey

withKey(key)

"The key to select"

fn spec.backup.barmanObjectStore.s3Credentials.secretAccessKey.withName

withName(name)

"Name of the referent."

obj spec.backup.barmanObjectStore.s3Credentials.sessionToken

"The references to the session key"

fn spec.backup.barmanObjectStore.s3Credentials.sessionToken.withKey

withKey(key)

"The key to select"

fn spec.backup.barmanObjectStore.s3Credentials.sessionToken.withName

withName(name)

"Name of the referent."

obj spec.backup.barmanObjectStore.wal

"The configuration for the backup of the WAL stream.\nWhen not defined, WAL files will be stored uncompressed and may be\nunencrypted in the object store, according to the bucket default policy."

fn spec.backup.barmanObjectStore.wal.withArchiveAdditionalCommandArgs

withArchiveAdditionalCommandArgs(archiveAdditionalCommandArgs)

"Additional arguments that can be appended to the 'barman-cloud-wal-archive'\ncommand-line invocation. These arguments provide flexibility to customize\nthe WAL archive process further, according to specific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-wal-archive' command, to avoid potential errors or unintended\nbehavior during execution."

fn spec.backup.barmanObjectStore.wal.withArchiveAdditionalCommandArgsMixin

withArchiveAdditionalCommandArgsMixin(archiveAdditionalCommandArgs)

"Additional arguments that can be appended to the 'barman-cloud-wal-archive'\ncommand-line invocation. These arguments provide flexibility to customize\nthe WAL archive process further, according to specific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-wal-archive' command, to avoid potential errors or unintended\nbehavior during execution."

Note: This function appends passed data to existing values

fn spec.backup.barmanObjectStore.wal.withCompression

withCompression(compression)

"Compress a WAL file before sending it to the object store. Available\noptions are empty string (no compression, default), gzip, bzip2 or snappy."

fn spec.backup.barmanObjectStore.wal.withEncryption

withEncryption(encryption)

"Whenever to force the encryption of files (if the bucket is\nnot already configured for that).\nAllowed options are empty string (use the bucket policy, default),\nAES256 and aws:kms"

fn spec.backup.barmanObjectStore.wal.withMaxParallel

withMaxParallel(maxParallel)

"Number of WAL files to be either archived in parallel (when the\nPostgreSQL instance is archiving to a backup object store) or\nrestored in parallel (when a PostgreSQL standby is fetching WAL\nfiles from a recovery object store). If not specified, WAL files\nwill be processed one at a time. It accepts a positive integer as a\nvalue - with 1 being the minimum accepted value."

fn spec.backup.barmanObjectStore.wal.withRestoreAdditionalCommandArgs

withRestoreAdditionalCommandArgs(restoreAdditionalCommandArgs)

"Additional arguments that can be appended to the 'barman-cloud-wal-restore'\ncommand-line invocation. These arguments provide flexibility to customize\nthe WAL restore process further, according to specific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-wal-restore' command, to avoid potential errors or unintended\nbehavior during execution."

fn spec.backup.barmanObjectStore.wal.withRestoreAdditionalCommandArgsMixin

withRestoreAdditionalCommandArgsMixin(restoreAdditionalCommandArgs)

"Additional arguments that can be appended to the 'barman-cloud-wal-restore'\ncommand-line invocation. These arguments provide flexibility to customize\nthe WAL restore process further, according to specific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-wal-restore' command, to avoid potential errors or unintended\nbehavior during execution."

Note: This function appends passed data to existing values
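
A compact sketch of the WAL archive settings above, with an assumed import path:

```jsonnet
// Import path is a placeholder; point it at your vendored copy of this library.
local cnpg = import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet';
local cluster = cnpg.postgresql.v1.cluster;

local wal = cluster.spec.backup.barmanObjectStore.wal;

cluster.new('pg-main')
// Compress and encrypt WAL segments, processing up to 8 of them in parallel.
+ wal.withCompression('gzip')
+ wal.withEncryption('AES256')
+ wal.withMaxParallel(8)
```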

obj spec.backup.volumeSnapshot

"VolumeSnapshot provides the configuration for the execution of volume snapshot backups."

fn spec.backup.volumeSnapshot.withAnnotations

withAnnotations(annotations)

"Annotations key-value pairs that will be added to .metadata.annotations snapshot resources."

fn spec.backup.volumeSnapshot.withAnnotationsMixin

withAnnotationsMixin(annotations)

"Annotations key-value pairs that will be added to .metadata.annotations snapshot resources."

Note: This function appends passed data to existing values

fn spec.backup.volumeSnapshot.withClassName

withClassName(className)

"ClassName specifies the Snapshot Class to be used for PG_DATA PersistentVolumeClaim.\nIt is the default class for the other types if no specific class is present"

fn spec.backup.volumeSnapshot.withLabels

withLabels(labels)

"Labels are key-value pairs that will be added to .metadata.labels snapshot resources."

fn spec.backup.volumeSnapshot.withLabelsMixin

withLabelsMixin(labels)

"Labels are key-value pairs that will be added to .metadata.labels snapshot resources."

Note: This function appends passed data to existing values

fn spec.backup.volumeSnapshot.withOnline

withOnline(online)

"Whether the default type of backup with volume snapshots is\nonline/hot (true, default) or offline/cold (false)"

fn spec.backup.volumeSnapshot.withSnapshotOwnerReference

withSnapshotOwnerReference(snapshotOwnerReference)

"SnapshotOwnerReference indicates the type of owner reference the snapshot should have"

fn spec.backup.volumeSnapshot.withTablespaceClassName

withTablespaceClassName(tablespaceClassName)

"TablespaceClassName specifies the Snapshot Class to be used for the tablespaces.\ndefaults to the PGDATA Snapshot Class, if set"

fn spec.backup.volumeSnapshot.withTablespaceClassNameMixin

withTablespaceClassNameMixin(tablespaceClassName)

"TablespaceClassName specifies the Snapshot Class to be used for the tablespaces.\ndefaults to the PGDATA Snapshot Class, if set"

Note: This function appends passed data to existing values

fn spec.backup.volumeSnapshot.withWalClassName

withWalClassName(walClassName)

"WalClassName specifies the Snapshot Class to be used for the PG_WAL PersistentVolumeClaim."

obj spec.backup.volumeSnapshot.onlineConfiguration

"Configuration parameters to control the online/hot backup with volume snapshots"

fn spec.backup.volumeSnapshot.onlineConfiguration.withImmediateCheckpoint

withImmediateCheckpoint(immediateCheckpoint)

"Control whether the I/O workload for the backup initial checkpoint will\nbe limited, according to the checkpoint_completion_target setting on\nthe PostgreSQL server. If set to true, an immediate checkpoint will be\nused, meaning PostgreSQL will complete the checkpoint as soon as\npossible. false by default."

fn spec.backup.volumeSnapshot.onlineConfiguration.withWaitForArchive

withWaitForArchive(waitForArchive)

"If false, the function will return immediately after the backup is completed,\nwithout waiting for WAL to be archived.\nThis behavior is only useful with backup software that independently monitors WAL archiving.\nOtherwise, WAL required to make the backup consistent might be missing and make the backup useless.\nBy default, or when this parameter is true, pg_backup_stop will wait for WAL to be archived when archiving is\nenabled.\nOn a standby, this means that it will wait only when archive_mode = always.\nIf write activity on the primary is low, it may be useful to run pg_switch_wal on the primary in order to trigger\nan immediate segment switch."

obj spec.bootstrap

"Instructions to bootstrap this cluster"

obj spec.bootstrap.initdb

"Bootstrap the cluster via initdb"

fn spec.bootstrap.initdb.withDataChecksums

withDataChecksums(dataChecksums)

"Whether the -k option should be passed to initdb,\nenabling checksums on data pages (default: false)"

fn spec.bootstrap.initdb.withDatabase

withDatabase(database)

"Name of the database used by the application. Default: app."

fn spec.bootstrap.initdb.withEncoding

withEncoding(encoding)

"The value to be passed as option --encoding for initdb (default:UTF8)"

fn spec.bootstrap.initdb.withLocaleCType

withLocaleCType(localeCType)

"The value to be passed as option --lc-ctype for initdb (default:C)"

fn spec.bootstrap.initdb.withLocaleCollate

withLocaleCollate(localeCollate)

"The value to be passed as option --lc-collate for initdb (default:C)"

fn spec.bootstrap.initdb.withOptions

withOptions(options)

"The list of options that must be passed to initdb when creating the cluster.\nDeprecated: This could lead to inconsistent configurations,\nplease use the explicit provided parameters instead.\nIf defined, explicit values will be ignored."

fn spec.bootstrap.initdb.withOptionsMixin

withOptionsMixin(options)

"The list of options that must be passed to initdb when creating the cluster.\nDeprecated: This could lead to inconsistent configurations,\nplease use the explicit provided parameters instead.\nIf defined, explicit values will be ignored."

Note: This function appends passed data to existing values

fn spec.bootstrap.initdb.withOwner

withOwner(owner)

"Name of the owner of the database in the instance to be used\nby applications. Defaults to the value of the database key."

fn spec.bootstrap.initdb.withPostInitApplicationSQL

withPostInitApplicationSQL(postInitApplicationSQL)

"List of SQL queries to be executed as a superuser in the application\ndatabase right after the cluster has been created - to be used with extreme care\n(by default empty)"

fn spec.bootstrap.initdb.withPostInitApplicationSQLMixin

withPostInitApplicationSQLMixin(postInitApplicationSQL)

"List of SQL queries to be executed as a superuser in the application\ndatabase right after the cluster has been created - to be used with extreme care\n(by default empty)"

Note: This function appends passed data to existing values

fn spec.bootstrap.initdb.withPostInitSQL

withPostInitSQL(postInitSQL)

"List of SQL queries to be executed as a superuser in the postgres\ndatabase right after the cluster has been created - to be used with extreme care\n(by default empty)"

fn spec.bootstrap.initdb.withPostInitSQLMixin

withPostInitSQLMixin(postInitSQL)

"List of SQL queries to be executed as a superuser in the postgres\ndatabase right after the cluster has been created - to be used with extreme care\n(by default empty)"

Note: This function appends passed data to existing values

fn spec.bootstrap.initdb.withPostInitTemplateSQL

withPostInitTemplateSQL(postInitTemplateSQL)

"List of SQL queries to be executed as a superuser in the template1\ndatabase right after the cluster has been created - to be used with extreme care\n(by default empty)"

fn spec.bootstrap.initdb.withPostInitTemplateSQLMixin

withPostInitTemplateSQLMixin(postInitTemplateSQL)

"List of SQL queries to be executed as a superuser in the template1\ndatabase right after the cluster has been created - to be used with extreme care\n(by default empty)"

Note: This function appends passed data to existing values

fn spec.bootstrap.initdb.withWalSegmentSize

withWalSegmentSize(walSegmentSize)

"The value in megabytes (1 to 1024) to be passed to the --wal-segsize\noption for initdb (default: empty, resulting in PostgreSQL default: 16MB)"

obj spec.bootstrap.initdb.import

"Bootstraps the new cluster by importing data from an existing PostgreSQL\ninstance using logical backup (pg_dump and pg_restore)"

fn spec.bootstrap.initdb.import.withDatabases

withDatabases(databases)

"The databases to import"

fn spec.bootstrap.initdb.import.withDatabasesMixin

withDatabasesMixin(databases)

"The databases to import"

Note: This function appends passed data to existing values

fn spec.bootstrap.initdb.import.withPostImportApplicationSQL

withPostImportApplicationSQL(postImportApplicationSQL)

"List of SQL queries to be executed as a superuser in the application\ndatabase right after is imported - to be used with extreme care\n(by default empty). Only available in microservice type."

fn spec.bootstrap.initdb.import.withPostImportApplicationSQLMixin

withPostImportApplicationSQLMixin(postImportApplicationSQL)

"List of SQL queries to be executed as a superuser in the application\ndatabase right after is imported - to be used with extreme care\n(by default empty). Only available in microservice type."

Note: This function appends passed data to existing values

fn spec.bootstrap.initdb.import.withRoles

withRoles(roles)

"The roles to import"

fn spec.bootstrap.initdb.import.withRolesMixin

withRolesMixin(roles)

"The roles to import"

Note: This function appends passed data to existing values

fn spec.bootstrap.initdb.import.withSchemaOnly

withSchemaOnly(schemaOnly)

"When set to true, only the pre-data and post-data sections of\npg_restore are invoked, avoiding data import. Default: false."

fn spec.bootstrap.initdb.import.withType

withType(type)

"The import type. Can be microservice or monolith."

obj spec.bootstrap.initdb.import.source

"The source of the import"

fn spec.bootstrap.initdb.import.source.withExternalCluster

withExternalCluster(externalCluster)

"The name of the externalCluster used for import"

obj spec.bootstrap.initdb.postInitApplicationSQLRefs

"List of references to ConfigMaps or Secrets containing SQL files\nto be executed as a superuser in the application database right after\nthe cluster has been created. The references are processed in a specific order:\nfirst, all Secrets are processed, followed by all ConfigMaps.\nWithin each group, the processing order follows the sequence specified\nin their respective arrays.\n(by default empty)"

fn spec.bootstrap.initdb.postInitApplicationSQLRefs.withConfigMapRefs

withConfigMapRefs(configMapRefs)

"ConfigMapRefs holds a list of references to ConfigMaps"

fn spec.bootstrap.initdb.postInitApplicationSQLRefs.withConfigMapRefsMixin

withConfigMapRefsMixin(configMapRefs)

"ConfigMapRefs holds a list of references to ConfigMaps"

Note: This function appends passed data to existing values

fn spec.bootstrap.initdb.postInitApplicationSQLRefs.withSecretRefs

withSecretRefs(secretRefs)

"SecretRefs holds a list of references to Secrets"

fn spec.bootstrap.initdb.postInitApplicationSQLRefs.withSecretRefsMixin

withSecretRefsMixin(secretRefs)

"SecretRefs holds a list of references to Secrets"

Note: This function appends passed data to existing values

obj spec.bootstrap.initdb.postInitApplicationSQLRefs.configMapRefs

"ConfigMapRefs holds a list of references to ConfigMaps"

fn spec.bootstrap.initdb.postInitApplicationSQLRefs.configMapRefs.withKey

withKey(key)

"The key to select"

fn spec.bootstrap.initdb.postInitApplicationSQLRefs.configMapRefs.withName

withName(name)

"Name of the referent."

obj spec.bootstrap.initdb.postInitApplicationSQLRefs.secretRefs

"SecretRefs holds a list of references to Secrets"

fn spec.bootstrap.initdb.postInitApplicationSQLRefs.secretRefs.withKey

withKey(key)

"The key to select"

fn spec.bootstrap.initdb.postInitApplicationSQLRefs.secretRefs.withName

withName(name)

"Name of the referent."

obj spec.bootstrap.initdb.postInitSQLRefs

"List of references to ConfigMaps or Secrets containing SQL files\nto be executed as a superuser in the postgres database right after\nthe cluster has been created. The references are processed in a specific order:\nfirst, all Secrets are processed, followed by all ConfigMaps.\nWithin each group, the processing order follows the sequence specified\nin their respective arrays.\n(by default empty)"

fn spec.bootstrap.initdb.postInitSQLRefs.withConfigMapRefs

withConfigMapRefs(configMapRefs)

"ConfigMapRefs holds a list of references to ConfigMaps"

fn spec.bootstrap.initdb.postInitSQLRefs.withConfigMapRefsMixin

withConfigMapRefsMixin(configMapRefs)

"ConfigMapRefs holds a list of references to ConfigMaps"

Note: This function appends passed data to existing values

fn spec.bootstrap.initdb.postInitSQLRefs.withSecretRefs

withSecretRefs(secretRefs)

"SecretRefs holds a list of references to Secrets"

fn spec.bootstrap.initdb.postInitSQLRefs.withSecretRefsMixin

withSecretRefsMixin(secretRefs)

"SecretRefs holds a list of references to Secrets"

Note: This function appends passed data to existing values

obj spec.bootstrap.initdb.postInitSQLRefs.configMapRefs

"ConfigMapRefs holds a list of references to ConfigMaps"

fn spec.bootstrap.initdb.postInitSQLRefs.configMapRefs.withKey

withKey(key)

"The key to select"

fn spec.bootstrap.initdb.postInitSQLRefs.configMapRefs.withName

withName(name)

"Name of the referent."

obj spec.bootstrap.initdb.postInitSQLRefs.secretRefs

"SecretRefs holds a list of references to Secrets"

fn spec.bootstrap.initdb.postInitSQLRefs.secretRefs.withKey

withKey(key)

"The key to select"

fn spec.bootstrap.initdb.postInitSQLRefs.secretRefs.withName

withName(name)

"Name of the referent."

obj spec.bootstrap.initdb.postInitTemplateSQLRefs

"List of references to ConfigMaps or Secrets containing SQL files\nto be executed as a superuser in the template1 database right after\nthe cluster has been created. The references are processed in a specific order:\nfirst, all Secrets are processed, followed by all ConfigMaps.\nWithin each group, the processing order follows the sequence specified\nin their respective arrays.\n(by default empty)"

fn spec.bootstrap.initdb.postInitTemplateSQLRefs.withConfigMapRefs

withConfigMapRefs(configMapRefs)

"ConfigMapRefs holds a list of references to ConfigMaps"

fn spec.bootstrap.initdb.postInitTemplateSQLRefs.withConfigMapRefsMixin

withConfigMapRefsMixin(configMapRefs)

"ConfigMapRefs holds a list of references to ConfigMaps"

Note: This function appends passed data to existing values

fn spec.bootstrap.initdb.postInitTemplateSQLRefs.withSecretRefs

withSecretRefs(secretRefs)

"SecretRefs holds a list of references to Secrets"

fn spec.bootstrap.initdb.postInitTemplateSQLRefs.withSecretRefsMixin

withSecretRefsMixin(secretRefs)

"SecretRefs holds a list of references to Secrets"

Note: This function appends passed data to existing values

obj spec.bootstrap.initdb.postInitTemplateSQLRefs.configMapRefs

"ConfigMapRefs holds a list of references to ConfigMaps"

fn spec.bootstrap.initdb.postInitTemplateSQLRefs.configMapRefs.withKey

withKey(key)

"The key to select"

fn spec.bootstrap.initdb.postInitTemplateSQLRefs.configMapRefs.withName

withName(name)

"Name of the referent."

obj spec.bootstrap.initdb.postInitTemplateSQLRefs.secretRefs

"SecretRefs holds a list of references to Secrets"

fn spec.bootstrap.initdb.postInitTemplateSQLRefs.secretRefs.withKey

withKey(key)

"The key to select"

fn spec.bootstrap.initdb.postInitTemplateSQLRefs.secretRefs.withName

withName(name)

"Name of the referent."

obj spec.bootstrap.initdb.secret

"Name of the secret containing the initial credentials for the\nowner of the user database. If empty a new secret will be\ncreated from scratch"

fn spec.bootstrap.initdb.secret.withName

withName(name)

"Name of the referent."

obj spec.bootstrap.pg_basebackup

"Bootstrap the cluster taking a physical backup of another compatible\nPostgreSQL instance"

fn spec.bootstrap.pg_basebackup.withDatabase

withDatabase(database)

"Name of the database used by the application. Default: app."

fn spec.bootstrap.pg_basebackup.withOwner

withOwner(owner)

"Name of the owner of the database in the instance to be used\nby applications. Defaults to the value of the database key."

fn spec.bootstrap.pg_basebackup.withSource

withSource(source)

"The name of the server of which we need to take a physical backup"

obj spec.bootstrap.pg_basebackup.secret

"Name of the secret containing the initial credentials for the\nowner of the user database. If empty a new secret will be\ncreated from scratch"

fn spec.bootstrap.pg_basebackup.secret.withName

withName(name)

"Name of the referent."

obj spec.bootstrap.recovery

"Bootstrap the cluster from a backup"

fn spec.bootstrap.recovery.withDatabase

withDatabase(database)

"Name of the database used by the application. Default: app."

fn spec.bootstrap.recovery.withOwner

withOwner(owner)

"Name of the owner of the database in the instance to be used\nby applications. Defaults to the value of the database key."

fn spec.bootstrap.recovery.withSource

withSource(source)

"The external cluster whose backup we will restore. This is also\nused as the name of the folder under which the backup is stored,\nso it must be set to the name of the source cluster\nMutually exclusive with backup."

obj spec.bootstrap.recovery.backup

"The backup object containing the physical base backup from which to\ninitiate the recovery procedure.\nMutually exclusive with source and volumeSnapshots."

fn spec.bootstrap.recovery.backup.withName

withName(name)

"Name of the referent."

obj spec.bootstrap.recovery.backup.endpointCA

"EndpointCA store the CA bundle of the barman endpoint.\nUseful when using self-signed certificates to avoid\nerrors with certificate issuer and barman-cloud-wal-archive."

fn spec.bootstrap.recovery.backup.endpointCA.withKey

withKey(key)

"The key to select"

fn spec.bootstrap.recovery.backup.endpointCA.withName

withName(name)

"Name of the referent."

obj spec.bootstrap.recovery.recoveryTarget

"By default, the recovery process applies all the available\nWAL files in the archive (full recovery). However, you can also\nend the recovery as soon as a consistent state is reached or\nrecover to a point-in-time (PITR) by specifying a RecoveryTarget object,\nas expected by PostgreSQL (i.e., timestamp, transaction Id, LSN, ...).\nMore info: https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET"

fn spec.bootstrap.recovery.recoveryTarget.withBackupID

withBackupID(backupID)

"The ID of the backup from which to start the recovery process.\nIf empty (default) the operator will automatically detect the backup\nbased on targetTime or targetLSN if specified. Otherwise use the\nlatest available backup in chronological order."

fn spec.bootstrap.recovery.recoveryTarget.withExclusive

withExclusive(exclusive)

"Set the target to be exclusive. If omitted, defaults to false, so that\nin Postgres, recovery_target_inclusive will be true"

fn spec.bootstrap.recovery.recoveryTarget.withTargetImmediate

withTargetImmediate(targetImmediate)

"End recovery as soon as a consistent state is reached"

fn spec.bootstrap.recovery.recoveryTarget.withTargetLSN

withTargetLSN(targetLSN)

"The target LSN (Log Sequence Number)"

fn spec.bootstrap.recovery.recoveryTarget.withTargetName

withTargetName(targetName)

"The target name (to be previously created\nwith pg_create_restore_point)"

fn spec.bootstrap.recovery.recoveryTarget.withTargetTLI

withTargetTLI(targetTLI)

"The target timeline (\"latest\" or a positive integer)"

fn spec.bootstrap.recovery.recoveryTarget.withTargetTime

withTargetTime(targetTime)

"The target time as a timestamp in the RFC3339 standard"

fn spec.bootstrap.recovery.recoveryTarget.withTargetXID

withTargetXID(targetXID)

"The target transaction ID"

obj spec.bootstrap.recovery.secret

"Name of the secret containing the initial credentials for the\nowner of the user database. If empty a new secret will be\ncreated from scratch"

fn spec.bootstrap.recovery.secret.withName

withName(name)

"Name of the referent."

obj spec.bootstrap.recovery.volumeSnapshots

"The static PVC data source(s) from which to initiate the\nrecovery procedure. Currently supporting VolumeSnapshot\nand PersistentVolumeClaim resources that map an existing\nPVC group, compatible with CloudNativePG, and taken with\na cold backup copy on a fenced Postgres instance (limitation\nwhich will be removed in the future when online backup\nwill be implemented).\nMutually exclusive with backup."

fn spec.bootstrap.recovery.volumeSnapshots.withTablespaceStorage

withTablespaceStorage(tablespaceStorage)

"Configuration of the storage for PostgreSQL tablespaces"

fn spec.bootstrap.recovery.volumeSnapshots.withTablespaceStorageMixin

withTablespaceStorageMixin(tablespaceStorage)

"Configuration of the storage for PostgreSQL tablespaces"

Note: This function appends passed data to existing values

obj spec.bootstrap.recovery.volumeSnapshots.storage

"Configuration of the storage of the instances"

fn spec.bootstrap.recovery.volumeSnapshots.storage.withApiGroup

withApiGroup(apiGroup)

"APIGroup is the group for the resource being referenced.\nIf APIGroup is not specified, the specified Kind must be in the core API group.\nFor any other third-party types, APIGroup is required."

fn spec.bootstrap.recovery.volumeSnapshots.storage.withKind

withKind(kind)

"Kind is the type of resource being referenced"

fn spec.bootstrap.recovery.volumeSnapshots.storage.withName

withName(name)

"Name is the name of resource being referenced"

obj spec.bootstrap.recovery.volumeSnapshots.walStorage

"Configuration of the storage for PostgreSQL WAL (Write-Ahead Log)"

fn spec.bootstrap.recovery.volumeSnapshots.walStorage.withApiGroup

withApiGroup(apiGroup)

"APIGroup is the group for the resource being referenced.\nIf APIGroup is not specified, the specified Kind must be in the core API group.\nFor any other third-party types, APIGroup is required."

fn spec.bootstrap.recovery.volumeSnapshots.walStorage.withKind

withKind(kind)

"Kind is the type of resource being referenced"

fn spec.bootstrap.recovery.volumeSnapshots.walStorage.withName

withName(name)

"Name is the name of resource being referenced"

obj spec.certificates

"The configuration for the CA and related certificates"

fn spec.certificates.withClientCASecret

withClientCASecret(clientCASecret)

"The secret containing the Client CA certificate. If not defined, a new secret will be created\nwith a self-signed CA and will be used to generate all the client certificates.
\n
\nContains:
\n
\n- ca.crt: CA that should be used to validate the client certificates,\nused as ssl_ca_file of all the instances.
\n- ca.key: key used to generate client certificates, if ReplicationTLSSecret is provided,\nthis can be omitted.
"

fn spec.certificates.withReplicationTLSSecret

withReplicationTLSSecret(replicationTLSSecret)

"The secret of type kubernetes.io/tls containing the client certificate to authenticate as\nthe streaming_replica user.\nIf not defined, ClientCASecret must provide also ca.key, and a new secret will be\ncreated using the provided CA."

fn spec.certificates.withServerAltDNSNames

withServerAltDNSNames(serverAltDNSNames)

"The list of the server alternative DNS names to be added to the generated server TLS certificates, when required."

fn spec.certificates.withServerAltDNSNamesMixin

withServerAltDNSNamesMixin(serverAltDNSNames)

"The list of the server alternative DNS names to be added to the generated server TLS certificates, when required."

Note: This function appends passed data to existing values

fn spec.certificates.withServerCASecret

withServerCASecret(serverCASecret)

"The secret containing the Server CA certificate. If not defined, a new secret will be created\nwith a self-signed CA and will be used to generate the TLS certificate ServerTLSSecret.
\n
\nContains:
\n
\n- ca.crt: CA that should be used to validate the server certificate,\nused as sslrootcert in client connection strings.
\n- ca.key: key used to generate Server SSL certs, if ServerTLSSecret is provided,\nthis can be omitted.
"

fn spec.certificates.withServerTLSSecret

withServerTLSSecret(serverTLSSecret)

"The secret of type kubernetes.io/tls containing the server TLS certificate and key that will be set as\nssl_cert_file and ssl_key_file so that clients can connect to postgres securely.\nIf not defined, ServerCASecret must provide also ca.key and a new secret will be\ncreated using the provided CA."

obj spec.env

"Env follows the Env format to pass environment variables\nto the pods created in the cluster"

fn spec.env.withName

withName(name)

"Name of the environment variable. Must be a C_IDENTIFIER."

fn spec.env.withValue

withValue(value)

"Variable references $(VAR_NAME) are expanded\nusing the previously defined environment variables in the container and\nany service environment variables. If a variable cannot be resolved,\nthe reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e.\n\"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\".\nEscaped references will never be expanded, regardless of whether the variable\nexists or not.\nDefaults to \"\"."

obj spec.env.valueFrom

"Source for the environment variable's value. Cannot be used if value is not empty."

obj spec.env.valueFrom.configMapKeyRef

"Selects a key of a ConfigMap."

fn spec.env.valueFrom.configMapKeyRef.withKey

withKey(key)

"The key to select."

fn spec.env.valueFrom.configMapKeyRef.withName

withName(name)

"Name of the referent.\nThis field is effectively required, but due to backwards compatibility is\nallowed to be empty. Instances of this type with an empty value here are\nalmost certainly wrong.\nTODO: Add other useful fields. apiVersion, kind, uid?\nMore info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\nTODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896."

fn spec.env.valueFrom.configMapKeyRef.withOptional

withOptional(optional)

"Specify whether the ConfigMap or its key must be defined"

obj spec.env.valueFrom.fieldRef

"Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'],\nspec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs."

fn spec.env.valueFrom.fieldRef.withApiVersion

withApiVersion(apiVersion)

"Version of the schema the FieldPath is written in terms of, defaults to \"v1\"."

fn spec.env.valueFrom.fieldRef.withFieldPath

withFieldPath(fieldPath)

"Path of the field to select in the specified API version."

obj spec.env.valueFrom.resourceFieldRef

"Selects a resource of the container: only resources limits and requests\n(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported."

fn spec.env.valueFrom.resourceFieldRef.withContainerName

withContainerName(containerName)

"Container name: required for volumes, optional for env vars"

fn spec.env.valueFrom.resourceFieldRef.withDivisor

withDivisor(divisor)

"Specifies the output format of the exposed resources, defaults to \"1\

fn spec.env.valueFrom.resourceFieldRef.withResource

withResource(resource)

"Required: resource to select"

obj spec.env.valueFrom.secretKeyRef

"Selects a key of a secret in the pod's namespace"

fn spec.env.valueFrom.secretKeyRef.withKey

withKey(key)

"The key of the secret to select from. Must be a valid secret key."

fn spec.env.valueFrom.secretKeyRef.withName

withName(name)

"Name of the referent.\nThis field is effectively required, but due to backwards compatibility is\nallowed to be empty. Instances of this type with an empty value here are\nalmost certainly wrong.\nTODO: Add other useful fields. apiVersion, kind, uid?\nMore info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\nTODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896."

fn spec.env.valueFrom.secretKeyRef.withOptional

withOptional(optional)

"Specify whether the Secret or its key must be defined"

obj spec.envFrom

"EnvFrom follows the EnvFrom format to pass environment variables\nsources to the pods to be used by Env"

fn spec.envFrom.withPrefix

withPrefix(prefix)

"An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER."

obj spec.envFrom.configMapRef

"The ConfigMap to select from"

fn spec.envFrom.configMapRef.withName

withName(name)

"Name of the referent.\nThis field is effectively required, but due to backwards compatibility is\nallowed to be empty. Instances of this type with an empty value here are\nalmost certainly wrong.\nTODO: Add other useful fields. apiVersion, kind, uid?\nMore info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\nTODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896."

fn spec.envFrom.configMapRef.withOptional

withOptional(optional)

"Specify whether the ConfigMap must be defined"

obj spec.envFrom.secretRef

"The Secret to select from"

fn spec.envFrom.secretRef.withName

withName(name)

"Name of the referent.\nThis field is effectively required, but due to backwards compatibility is\nallowed to be empty. Instances of this type with an empty value here are\nalmost certainly wrong.\nTODO: Add other useful fields. apiVersion, kind, uid?\nMore info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\nTODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896."

fn spec.envFrom.secretRef.withOptional

withOptional(optional)

"Specify whether the Secret must be defined"

obj spec.ephemeralVolumeSource

"EphemeralVolumeSource allows the user to configure the source of ephemeral volumes."

obj spec.ephemeralVolumeSource.volumeClaimTemplate

"Will be used to create a stand-alone PVC to provision the volume.\nThe pod in which this EphemeralVolumeSource is embedded will be the\nowner of the PVC, i.e. the PVC will be deleted together with the\npod. The name of the PVC will be <pod name>-<volume name> where\n<volume name> is the name from the PodSpec.Volumes array\nentry. Pod validation will reject the pod if the concatenated name\nis not valid for a PVC (for example, too long).\n\n\nAn existing PVC with that name that is not owned by the pod\nwill not be used for the pod to avoid using an unrelated\nvolume by mistake. Starting the pod is then blocked until\nthe unrelated PVC is removed. If such a pre-created PVC is\nmeant to be used by the pod, the PVC has to updated with an\nowner reference to the pod once the pod exists. Normally\nthis should not be necessary, but it may be useful when\nmanually reconstructing a broken cluster.\n\n\nThis field is read-only and no changes will be made by Kubernetes\nto the PVC after it has been created.\n\n\nRequired, must not be nil."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.withMetadata

withMetadata(metadata)

"May contain labels and annotations that will be copied into the PVC\nwhen creating it. No other fields are allowed and will be rejected during\nvalidation."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.withMetadataMixin

withMetadataMixin(metadata)

"May contain labels and annotations that will be copied into the PVC\nwhen creating it. No other fields are allowed and will be rejected during\nvalidation."

Note: This function appends passed data to existing values

obj spec.ephemeralVolumeSource.volumeClaimTemplate.spec

"The specification for the PersistentVolumeClaim. The entire content is\ncopied unchanged into the PVC that gets created from this\ntemplate. The same fields as in a PersistentVolumeClaim\nare also valid here."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.withAccessModes

withAccessModes(accessModes)

"accessModes contains the desired access modes the volume should have.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1"

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.withAccessModesMixin

withAccessModesMixin(accessModes)

"accessModes contains the desired access modes the volume should have.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1"

Note: This function appends passed data to existing values

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.withStorageClassName

withStorageClassName(storageClassName)

"storageClassName is the name of the StorageClass required by the claim.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1"

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.withVolumeAttributesClassName

withVolumeAttributesClassName(volumeAttributesClassName)

"volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim.\nIf specified, the CSI driver will create or update the volume with the attributes defined\nin the corresponding VolumeAttributesClass. This has a different purpose than storageClassName,\nit can be changed after the claim is created. An empty string value means that no VolumeAttributesClass\nwill be applied to the claim but it's not allowed to reset this field to empty string once it is set.\nIf unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass\nwill be set by the persistentvolume controller if it exists.\nIf the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be\nset to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource\nexists.\nMore info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/\n(Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.withVolumeMode

withVolumeMode(volumeMode)

"volumeMode defines what type of volume is required by the claim.\nValue of Filesystem is implied when not included in claim spec."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.withVolumeName

withVolumeName(volumeName)

"volumeName is the binding reference to the PersistentVolume backing this claim."

obj spec.ephemeralVolumeSource.volumeClaimTemplate.spec.dataSource

"dataSource field can be used to specify either:\n An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)\n An existing PVC (PersistentVolumeClaim)\nIf the provisioner or an external controller can support the specified data source,\nit will create a new volume based on the contents of the specified data source.\nWhen the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef,\nand dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified.\nIf the namespace is specified, then dataSourceRef will not be copied to dataSource."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.dataSource.withApiGroup

withApiGroup(apiGroup)

"APIGroup is the group for the resource being referenced.\nIf APIGroup is not specified, the specified Kind must be in the core API group.\nFor any other third-party types, APIGroup is required."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.dataSource.withKind

withKind(kind)

"Kind is the type of resource being referenced"

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.dataSource.withName

withName(name)

"Name is the name of resource being referenced"

obj spec.ephemeralVolumeSource.volumeClaimTemplate.spec.dataSourceRef

"dataSourceRef specifies the object from which to populate the volume with data, if a non-empty\nvolume is desired. This may be any object from a non-empty API group (non\ncore object) or a PersistentVolumeClaim object.\nWhen this field is specified, volume binding will only succeed if the type of\nthe specified object matches some installed volume populator or dynamic\nprovisioner.\nThis field will replace the functionality of the dataSource field and as such\nif both fields are non-empty, they must have the same value. For backwards\ncompatibility, when namespace isn't specified in dataSourceRef,\nboth fields (dataSource and dataSourceRef) will be set to the same\nvalue automatically if one of them is empty and the other is non-empty.\nWhen namespace is specified in dataSourceRef,\ndataSource isn't set to the same value and must be empty.\nThere are three important differences between dataSource and dataSourceRef:\n While dataSource only allows two specific types of objects, dataSourceRef\n allows any non-core object, as well as PersistentVolumeClaim objects.\n While dataSource ignores disallowed values (dropping them), dataSourceRef\n preserves all values, and generates an error if a disallowed value is\n specified.\n* While dataSource only allows local objects, dataSourceRef allows objects\n in any namespaces.\n(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.\n(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.dataSourceRef.withApiGroup

withApiGroup(apiGroup)

"APIGroup is the group for the resource being referenced.\nIf APIGroup is not specified, the specified Kind must be in the core API group.\nFor any other third-party types, APIGroup is required."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.dataSourceRef.withKind

withKind(kind)

"Kind is the type of resource being referenced"

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.dataSourceRef.withName

withName(name)

"Name is the name of resource being referenced"

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.dataSourceRef.withNamespace

withNamespace(namespace)

"Namespace is the namespace of resource being referenced\nNote that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details.\n(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled."

obj spec.ephemeralVolumeSource.volumeClaimTemplate.spec.resources

"resources represents the minimum resources the volume should have.\nIf RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements\nthat are lower than previous value but must still be higher than capacity recorded in the\nstatus field of the claim.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources"

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.resources.withLimits

withLimits(limits)

"Limits describes the maximum amount of compute resources allowed.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.resources.withLimitsMixin

withLimitsMixin(limits)

"Limits describes the maximum amount of compute resources allowed.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

Note: This function appends passed data to existing values

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.resources.withRequests

withRequests(requests)

"Requests describes the minimum amount of compute resources required.\nIf Requests is omitted for a container, it defaults to Limits if that is explicitly specified,\notherwise to an implementation-defined value. Requests cannot exceed Limits.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.resources.withRequestsMixin

withRequestsMixin(requests)

"Requests describes the minimum amount of compute resources required.\nIf Requests is omitted for a container, it defaults to Limits if that is explicitly specified,\notherwise to an implementation-defined value. Requests cannot exceed Limits.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

Note: This function appends passed data to existing values

obj spec.ephemeralVolumeSource.volumeClaimTemplate.spec.selector

"selector is a label query over volumes to consider for binding."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.selector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.selector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.selector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.selector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.ephemeralVolumeSource.volumeClaimTemplate.spec.selector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.selector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.selector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.selector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.ephemeralVolumeSource.volumeClaimTemplate.spec.selector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values
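
A sketch of backing the ephemeral volume with a dedicated PVC template; the storage class and requested size are illustrative:

```jsonnet
// `cluster` aliases postgresql.v1.cluster; the import path is a placeholder.
local cluster = (import 'cloudnative-pg/main.libsonnet').postgresql.v1.cluster;
local tmpl = cluster.spec.ephemeralVolumeSource.volumeClaimTemplate;

cluster.new('with-ephemeral-pvc')
+ tmpl.spec.withAccessModes(['ReadWriteOnce'])
+ tmpl.spec.withStorageClassName('fast-local')
+ tmpl.spec.resources.withRequests({ storage: '5Gi' })
```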

obj spec.ephemeralVolumesSizeLimit

"EphemeralVolumesSizeLimit allows the user to set the limits for the ephemeral\nvolumes"

fn spec.ephemeralVolumesSizeLimit.withShm

withShm(shm)

"Shm is the size limit of the shared memory volume"

fn spec.ephemeralVolumesSizeLimit.withTemporaryData

withTemporaryData(temporaryData)

"TemporaryData is the size limit of the temporary data volume"

obj spec.externalClusters

"The list of external clusters which are used in the configuration"

fn spec.externalClusters.withConnectionParameters

withConnectionParameters(connectionParameters)

"The list of connection parameters, such as dbname, host, username, etc"

fn spec.externalClusters.withConnectionParametersMixin

withConnectionParametersMixin(connectionParameters)

"The list of connection parameters, such as dbname, host, username, etc"

Note: This function appends passed data to existing values

fn spec.externalClusters.withName

withName(name)

"The server name, required"

obj spec.externalClusters.barmanObjectStore

"The configuration for the barman-cloud tool suite"

fn spec.externalClusters.barmanObjectStore.withDestinationPath

withDestinationPath(destinationPath)

"The path where to store the backup (i.e. s3://bucket/path/to/folder)\nthis path, with different destination folders, will be used for WALs\nand for data"

fn spec.externalClusters.barmanObjectStore.withEndpointURL

withEndpointURL(endpointURL)

"Endpoint to be used to upload data to the cloud,\noverriding the automatic endpoint discovery"

fn spec.externalClusters.barmanObjectStore.withHistoryTags

withHistoryTags(historyTags)

"HistoryTags is a list of key value pairs that will be passed to the\nBarman --history-tags option."

fn spec.externalClusters.barmanObjectStore.withHistoryTagsMixin

withHistoryTagsMixin(historyTags)

"HistoryTags is a list of key value pairs that will be passed to the\nBarman --history-tags option."

Note: This function appends passed data to existing values

fn spec.externalClusters.barmanObjectStore.withServerName

withServerName(serverName)

"The server name on S3, the cluster name is used if this\nparameter is omitted"

fn spec.externalClusters.barmanObjectStore.withTags

withTags(tags)

"Tags is a list of key value pairs that will be passed to the\nBarman --tags option."

fn spec.externalClusters.barmanObjectStore.withTagsMixin

withTagsMixin(tags)

"Tags is a list of key value pairs that will be passed to the\nBarman --tags option."

Note: This function appends passed data to existing values

obj spec.externalClusters.barmanObjectStore.azureCredentials

"The credentials to use to upload data to Azure Blob Storage"

fn spec.externalClusters.barmanObjectStore.azureCredentials.withInheritFromAzureAD

withInheritFromAzureAD(inheritFromAzureAD)

"Use the Azure AD based authentication without providing explicitly the keys."

obj spec.externalClusters.barmanObjectStore.azureCredentials.connectionString

"The connection string to be used"

fn spec.externalClusters.barmanObjectStore.azureCredentials.connectionString.withKey

withKey(key)

"The key to select"

fn spec.externalClusters.barmanObjectStore.azureCredentials.connectionString.withName

withName(name)

"Name of the referent."

obj spec.externalClusters.barmanObjectStore.azureCredentials.storageAccount

"The storage account where to upload data"

fn spec.externalClusters.barmanObjectStore.azureCredentials.storageAccount.withKey

withKey(key)

"The key to select"

fn spec.externalClusters.barmanObjectStore.azureCredentials.storageAccount.withName

withName(name)

"Name of the referent."

obj spec.externalClusters.barmanObjectStore.azureCredentials.storageKey

"The storage account key to be used in conjunction\nwith the storage account name"

fn spec.externalClusters.barmanObjectStore.azureCredentials.storageKey.withKey

withKey(key)

"The key to select"

fn spec.externalClusters.barmanObjectStore.azureCredentials.storageKey.withName

withName(name)

"Name of the referent."

obj spec.externalClusters.barmanObjectStore.azureCredentials.storageSasToken

"A shared-access-signature to be used in conjunction with\nthe storage account name"

fn spec.externalClusters.barmanObjectStore.azureCredentials.storageSasToken.withKey

withKey(key)

"The key to select"

fn spec.externalClusters.barmanObjectStore.azureCredentials.storageSasToken.withName

withName(name)

"Name of the referent."

obj spec.externalClusters.barmanObjectStore.data

"The configuration to be used to backup the data files\nWhen not defined, base backups files will be stored uncompressed and may\nbe unencrypted in the object store, according to the bucket default\npolicy."

fn spec.externalClusters.barmanObjectStore.data.withAdditionalCommandArgs

withAdditionalCommandArgs(additionalCommandArgs)

"AdditionalCommandArgs represents additional arguments that can be appended\nto the 'barman-cloud-backup' command-line invocation. These arguments\nprovide flexibility to customize the backup process further according to\nspecific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-backup' command, to avoid potential errors or unintended\nbehavior during execution."

fn spec.externalClusters.barmanObjectStore.data.withAdditionalCommandArgsMixin

withAdditionalCommandArgsMixin(additionalCommandArgs)

"AdditionalCommandArgs represents additional arguments that can be appended\nto the 'barman-cloud-backup' command-line invocation. These arguments\nprovide flexibility to customize the backup process further according to\nspecific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-backup' command, to avoid potential errors or unintended\nbehavior during execution."

Note: This function appends passed data to existing values

fn spec.externalClusters.barmanObjectStore.data.withCompression

withCompression(compression)

"Compress a backup file (a tar file per tablespace) while streaming it\nto the object store. Available options are empty string (no\ncompression, default), gzip, bzip2 or snappy."

fn spec.externalClusters.barmanObjectStore.data.withEncryption

withEncryption(encryption)

"Whenever to force the encryption of files (if the bucket is\nnot already configured for that).\nAllowed options are empty string (use the bucket policy, default),\nAES256 and aws:kms"

fn spec.externalClusters.barmanObjectStore.data.withImmediateCheckpoint

withImmediateCheckpoint(immediateCheckpoint)

"Control whether the I/O workload for the backup initial checkpoint will\nbe limited, according to the checkpoint_completion_target setting on\nthe PostgreSQL server. If set to true, an immediate checkpoint will be\nused, meaning PostgreSQL will complete the checkpoint as soon as\npossible. false by default."

fn spec.externalClusters.barmanObjectStore.data.withJobs

withJobs(jobs)

"The number of parallel jobs to be used to upload the backup, defaults\nto 2"

obj spec.externalClusters.barmanObjectStore.endpointCA

"EndpointCA store the CA bundle of the barman endpoint.\nUseful when using self-signed certificates to avoid\nerrors with certificate issuer and barman-cloud-wal-archive"

fn spec.externalClusters.barmanObjectStore.endpointCA.withKey

withKey(key)

"The key to select"

fn spec.externalClusters.barmanObjectStore.endpointCA.withName

withName(name)

"Name of the referent."

obj spec.externalClusters.barmanObjectStore.googleCredentials

"The credentials to use to upload data to Google Cloud Storage"

fn spec.externalClusters.barmanObjectStore.googleCredentials.withGkeEnvironment

withGkeEnvironment(gkeEnvironment)

"If set to true, will presume that it's running inside a GKE environment,\ndefault to false."

obj spec.externalClusters.barmanObjectStore.googleCredentials.applicationCredentials

"The secret containing the Google Cloud Storage JSON file with the credentials"

fn spec.externalClusters.barmanObjectStore.googleCredentials.applicationCredentials.withKey

withKey(key)

"The key to select"

fn spec.externalClusters.barmanObjectStore.googleCredentials.applicationCredentials.withName

withName(name)

"Name of the referent."

obj spec.externalClusters.barmanObjectStore.s3Credentials

"The credentials to use to upload data to S3"

fn spec.externalClusters.barmanObjectStore.s3Credentials.withInheritFromIAMRole

withInheritFromIAMRole(inheritFromIAMRole)

"Use the role based authentication without providing explicitly the keys."

obj spec.externalClusters.barmanObjectStore.s3Credentials.accessKeyId

"The reference to the access key id"

fn spec.externalClusters.barmanObjectStore.s3Credentials.accessKeyId.withKey

withKey(key)

"The key to select"

fn spec.externalClusters.barmanObjectStore.s3Credentials.accessKeyId.withName

withName(name)

"Name of the referent."

obj spec.externalClusters.barmanObjectStore.s3Credentials.region

"The reference to the secret containing the region name"

fn spec.externalClusters.barmanObjectStore.s3Credentials.region.withKey

withKey(key)

"The key to select"

fn spec.externalClusters.barmanObjectStore.s3Credentials.region.withName

withName(name)

"Name of the referent."

obj spec.externalClusters.barmanObjectStore.s3Credentials.secretAccessKey

"The reference to the secret access key"

fn spec.externalClusters.barmanObjectStore.s3Credentials.secretAccessKey.withKey

withKey(key)

"The key to select"

fn spec.externalClusters.barmanObjectStore.s3Credentials.secretAccessKey.withName

withName(name)

"Name of the referent."

obj spec.externalClusters.barmanObjectStore.s3Credentials.sessionToken

"The references to the session key"

fn spec.externalClusters.barmanObjectStore.s3Credentials.sessionToken.withKey

withKey(key)

"The key to select"

fn spec.externalClusters.barmanObjectStore.s3Credentials.sessionToken.withName

withName(name)

"Name of the referent."

obj spec.externalClusters.barmanObjectStore.wal

"The configuration for the backup of the WAL stream.\nWhen not defined, WAL files will be stored uncompressed and may be\nunencrypted in the object store, according to the bucket default policy."

fn spec.externalClusters.barmanObjectStore.wal.withArchiveAdditionalCommandArgs

withArchiveAdditionalCommandArgs(archiveAdditionalCommandArgs)

"Additional arguments that can be appended to the 'barman-cloud-wal-archive'\ncommand-line invocation. These arguments provide flexibility to customize\nthe WAL archive process further, according to specific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-wal-archive' command, to avoid potential errors or unintended\nbehavior during execution."

fn spec.externalClusters.barmanObjectStore.wal.withArchiveAdditionalCommandArgsMixin

withArchiveAdditionalCommandArgsMixin(archiveAdditionalCommandArgs)

"Additional arguments that can be appended to the 'barman-cloud-wal-archive'\ncommand-line invocation. These arguments provide flexibility to customize\nthe WAL archive process further, according to specific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-wal-archive' command, to avoid potential errors or unintended\nbehavior during execution."

Note: This function appends passed data to existing values

fn spec.externalClusters.barmanObjectStore.wal.withCompression

withCompression(compression)

"Compress a WAL file before sending it to the object store. Available\noptions are empty string (no compression, default), gzip, bzip2 or snappy."

fn spec.externalClusters.barmanObjectStore.wal.withEncryption

withEncryption(encryption)

"Whenever to force the encryption of files (if the bucket is\nnot already configured for that).\nAllowed options are empty string (use the bucket policy, default),\nAES256 and aws:kms"

fn spec.externalClusters.barmanObjectStore.wal.withMaxParallel

withMaxParallel(maxParallel)

"Number of WAL files to be either archived in parallel (when the\nPostgreSQL instance is archiving to a backup object store) or\nrestored in parallel (when a PostgreSQL standby is fetching WAL\nfiles from a recovery object store). If not specified, WAL files\nwill be processed one at a time. It accepts a positive integer as a\nvalue - with 1 being the minimum accepted value."

fn spec.externalClusters.barmanObjectStore.wal.withRestoreAdditionalCommandArgs

withRestoreAdditionalCommandArgs(restoreAdditionalCommandArgs)

"Additional arguments that can be appended to the 'barman-cloud-wal-restore'\ncommand-line invocation. These arguments provide flexibility to customize\nthe WAL restore process further, according to specific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-wal-restore' command, to avoid potential errors or unintended\nbehavior during execution."

fn spec.externalClusters.barmanObjectStore.wal.withRestoreAdditionalCommandArgsMixin

withRestoreAdditionalCommandArgsMixin(restoreAdditionalCommandArgs)

"Additional arguments that can be appended to the 'barman-cloud-wal-restore'\ncommand-line invocation. These arguments provide flexibility to customize\nthe WAL restore process further, according to specific requirements or configurations.\n\n\nExample:\nIn a scenario where specialized backup options are required, such as setting\na specific timeout or defining custom behavior, users can use this field\nto specify additional command arguments.\n\n\nNote:\nIt's essential to ensure that the provided arguments are valid and supported\nby the 'barman-cloud-wal-restore' command, to avoid potential errors or unintended\nbehavior during execution."

Note: This function appends passed data to existing values

obj spec.externalClusters.password

"The reference to the password to be used to connect to the server.\nIf a password is provided, CloudNativePG creates a PostgreSQL\npassfile at /controller/external/NAME/pass (where \"NAME\" is the\ncluster's name). This passfile is automatically referenced in the\nconnection string when establishing a connection to the remote\nPostgreSQL server from the current PostgreSQL Cluster. This ensures\nsecure and efficient password management for external clusters."

fn spec.externalClusters.password.withKey

withKey(key)

"The key of the secret to select from. Must be a valid secret key."

fn spec.externalClusters.password.withName

withName(name)

"Name of the referent.\nThis field is effectively required, but due to backwards compatibility is\nallowed to be empty. Instances of this type with an empty value here are\nalmost certainly wrong.\nTODO: Add other useful fields. apiVersion, kind, uid?\nMore info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\nTODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896."

fn spec.externalClusters.password.withOptional

withOptional(optional)

"Specify whether the Secret or its key must be defined"

obj spec.externalClusters.sslCert

"The reference to an SSL certificate to be used to connect to this\ninstance"

fn spec.externalClusters.sslCert.withKey

withKey(key)

"The key of the secret to select from. Must be a valid secret key."

fn spec.externalClusters.sslCert.withName

withName(name)

"Name of the referent.\nThis field is effectively required, but due to backwards compatibility is\nallowed to be empty. Instances of this type with an empty value here are\nalmost certainly wrong.\nTODO: Add other useful fields. apiVersion, kind, uid?\nMore info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\nTODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896."

fn spec.externalClusters.sslCert.withOptional

withOptional(optional)

"Specify whether the Secret or its key must be defined"

obj spec.externalClusters.sslKey

"The reference to an SSL private key to be used to connect to this\ninstance"

fn spec.externalClusters.sslKey.withKey

withKey(key)

"The key of the secret to select from. Must be a valid secret key."

fn spec.externalClusters.sslKey.withName

withName(name)

"Name of the referent.\nThis field is effectively required, but due to backwards compatibility is\nallowed to be empty. Instances of this type with an empty value here are\nalmost certainly wrong.\nTODO: Add other useful fields. apiVersion, kind, uid?\nMore info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\nTODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896."

fn spec.externalClusters.sslKey.withOptional

withOptional(optional)

"Specify whether the Secret or its key must be defined"

obj spec.externalClusters.sslRootCert

"The reference to an SSL CA public key to be used to connect to this\ninstance"

fn spec.externalClusters.sslRootCert.withKey

withKey(key)

"The key of the secret to select from. Must be a valid secret key."

fn spec.externalClusters.sslRootCert.withName

withName(name)

"Name of the referent.\nThis field is effectively required, but due to backwards compatibility is\nallowed to be empty. Instances of this type with an empty value here are\nalmost certainly wrong.\nTODO: Add other useful fields. apiVersion, kind, uid?\nMore info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\nTODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896."

fn spec.externalClusters.sslRootCert.withOptional

withOptional(optional)

"Specify whether the Secret or its key must be defined"

obj spec.imageCatalogRef

"Defines the major PostgreSQL version we want to use within an ImageCatalog"

fn spec.imageCatalogRef.withApiGroup

withApiGroup(apiGroup)

"APIGroup is the group for the resource being referenced.\nIf APIGroup is not specified, the specified Kind must be in the core API group.\nFor any other third-party types, APIGroup is required."

fn spec.imageCatalogRef.withKind

withKind(kind)

"Kind is the type of resource being referenced"

fn spec.imageCatalogRef.withMajor

withMajor(major)

"The major version of PostgreSQL we want to use from the ImageCatalog"

fn spec.imageCatalogRef.withName

withName(name)

"Name is the name of resource being referenced"

obj spec.imagePullSecrets

"The list of pull secrets to be used to pull the images"

fn spec.imagePullSecrets.withName

withName(name)

"Name of the referent."

obj spec.inheritedMetadata

"Metadata that will be inherited by all objects related to the Cluster"

fn spec.inheritedMetadata.withAnnotations

withAnnotations(annotations)

fn spec.inheritedMetadata.withAnnotationsMixin

withAnnotationsMixin(annotations)

Note: This function appends passed data to existing values

fn spec.inheritedMetadata.withLabels

withLabels(labels)

fn spec.inheritedMetadata.withLabelsMixin

withLabelsMixin(labels)

Note: This function appends passed data to existing values
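
A sketch of labels and annotations propagated to every object the Cluster owns; the keys and values are illustrative:

```jsonnet
// `cluster` aliases postgresql.v1.cluster; the import path is a placeholder.
local cluster = (import 'cloudnative-pg/main.libsonnet').postgresql.v1.cluster;

cluster.new('with-inherited-metadata')
+ cluster.spec.inheritedMetadata.withLabels({ team: 'database' })
+ cluster.spec.inheritedMetadata.withAnnotations({ 'example.com/owner': 'dba-team' })
```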

obj spec.managed

"The configuration that is used by the portions of PostgreSQL that are managed by the instance manager"

fn spec.managed.withRoles

withRoles(roles)

"Database roles managed by the Cluster"

fn spec.managed.withRolesMixin

withRolesMixin(roles)

"Database roles managed by the Cluster"

Note: This function appends passed data to existing values

obj spec.managed.roles

"Database roles managed by the Cluster"

fn spec.managed.roles.withBypassrls

withBypassrls(bypassrls)

"Whether a role bypasses every row-level security (RLS) policy.\nDefault is false."

fn spec.managed.roles.withComment

withComment(comment)

"Description of the role"

fn spec.managed.roles.withConnectionLimit

withConnectionLimit(connectionLimit)

"If the role can log in, this specifies how many concurrent\nconnections the role can make. -1 (the default) means no limit."

fn spec.managed.roles.withCreatedb

withCreatedb(createdb)

"When set to true, the role being defined will be allowed to create\nnew databases. Specifying false (default) will deny a role the\nability to create databases."

fn spec.managed.roles.withCreaterole

withCreaterole(createrole)

"Whether the role will be permitted to create, alter, drop, comment\non, change the security label for, and grant or revoke membership in\nother roles. Default is false."

fn spec.managed.roles.withDisablePassword

withDisablePassword(disablePassword)

"DisablePassword indicates that a role's password should be set to NULL in Postgres"

fn spec.managed.roles.withEnsure

withEnsure(ensure)

"Ensure the role is present or absent - defaults to \"present\

fn spec.managed.roles.withInRoles

withInRoles(inRoles)

"List of one or more existing roles to which this role will be\nimmediately added as a new member. Default empty."

fn spec.managed.roles.withInRolesMixin

withInRolesMixin(inRoles)

"List of one or more existing roles to which this role will be\nimmediately added as a new member. Default empty."

Note: This function appends passed data to existing values

fn spec.managed.roles.withInherit

withInherit(inherit)

"Whether a role \"inherits\" the privileges of roles it is a member of.\nDefaults is true."

fn spec.managed.roles.withLogin

withLogin(login)

"Whether the role is allowed to log in. A role having the login\nattribute can be thought of as a user. Roles without this attribute\nare useful for managing database privileges, but are not users in\nthe usual sense of the word. Default is false."

fn spec.managed.roles.withName

withName(name)

"Name of the role"

fn spec.managed.roles.withReplication

withReplication(replication)

"Whether a role is a replication role. A role must have this\nattribute (or be a superuser) in order to be able to connect to the\nserver in replication mode (physical or logical replication) and in\norder to be able to create or drop replication slots. A role having\nthe replication attribute is a very highly privileged role, and\nshould only be used on roles actually used for replication. Default\nis false."

fn spec.managed.roles.withSuperuser

withSuperuser(superuser)

"Whether the role is a superuser who can override all access\nrestrictions within the database - superuser status is dangerous and\nshould be used only when really needed. You must yourself be a\nsuperuser to create a new superuser. Defaults is false."

fn spec.managed.roles.withValidUntil

withValidUntil(validUntil)

"Date and time after which the role's password is no longer valid.\nWhen omitted, the password will never expire (default)."

obj spec.managed.roles.passwordSecret

"Secret containing the password of the role (if present)\nIf null, the password will be ignored unless DisablePassword is set"

fn spec.managed.roles.passwordSecret.withName

withName(name)

"Name of the referent."

obj spec.managed.services

"Services roles managed by the Cluster"

fn spec.managed.services.withAdditional

withAdditional(additional)

"Additional is a list of additional managed services specified by the user."

fn spec.managed.services.withAdditionalMixin

withAdditionalMixin(additional)

"Additional is a list of additional managed services specified by the user."

Note: This function appends passed data to existing values

fn spec.managed.services.withDisabledDefaultServices

withDisabledDefaultServices(disabledDefaultServices)

"DisabledDefaultServices is a list of service types that are disabled by default.\nValid values are \"r\", and \"ro\", representing read, and read-only services."

fn spec.managed.services.withDisabledDefaultServicesMixin

withDisabledDefaultServicesMixin(disabledDefaultServices)

"DisabledDefaultServices is a list of service types that are disabled by default.\nValid values are \"r\", and \"ro\", representing read, and read-only services."

Note: This function appends passed data to existing values

obj spec.managed.services.additional

"Additional is a list of additional managed services specified by the user."

fn spec.managed.services.additional.withSelectorType

withSelectorType(selectorType)

"SelectorType specifies the type of selectors that the service will have.\nValid values are \"rw\", \"r\", and \"ro\", representing read-write, read, and read-only services."

fn spec.managed.services.additional.withUpdateStrategy

withUpdateStrategy(updateStrategy)

"UpdateStrategy describes how the service differences should be reconciled"

obj spec.managed.services.additional.serviceTemplate

"ServiceTemplate is the template specification for the service."

obj spec.managed.services.additional.serviceTemplate.metadata

"Standard object's metadata.\nMore info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"

fn spec.managed.services.additional.serviceTemplate.metadata.withAnnotations

withAnnotations(annotations)

"Annotations is an unstructured key value map stored with a resource that may be\nset by external tools to store and retrieve arbitrary metadata. They are not\nqueryable and should be preserved when modifying objects.\nMore info: http://kubernetes.io/docs/user-guide/annotations"

fn spec.managed.services.additional.serviceTemplate.metadata.withAnnotationsMixin

withAnnotationsMixin(annotations)

"Annotations is an unstructured key value map stored with a resource that may be\nset by external tools to store and retrieve arbitrary metadata. They are not\nqueryable and should be preserved when modifying objects.\nMore info: http://kubernetes.io/docs/user-guide/annotations"

Note: This function appends passed data to existing values

fn spec.managed.services.additional.serviceTemplate.metadata.withLabels

withLabels(labels)

"Map of string keys and values that can be used to organize and categorize\n(scope and select) objects. May match selectors of replication controllers\nand services.\nMore info: http://kubernetes.io/docs/user-guide/labels"

fn spec.managed.services.additional.serviceTemplate.metadata.withLabelsMixin

withLabelsMixin(labels)

"Map of string keys and values that can be used to organize and categorize\n(scope and select) objects. May match selectors of replication controllers\nand services.\nMore info: http://kubernetes.io/docs/user-guide/labels"

Note: This function appends passed data to existing values

fn spec.managed.services.additional.serviceTemplate.metadata.withName

withName(name)

"The name of the resource. Only supported for certain types"

obj spec.managed.services.additional.serviceTemplate.spec

"Specification of the desired behavior of the service.\nMore info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status"

fn spec.managed.services.additional.serviceTemplate.spec.withAllocateLoadBalancerNodePorts

withAllocateLoadBalancerNodePorts(allocateLoadBalancerNodePorts)

"allocateLoadBalancerNodePorts defines if NodePorts will be automatically\nallocated for services with type LoadBalancer. Default is \"true\". It\nmay be set to \"false\" if the cluster load-balancer does not rely on\nNodePorts. If the caller requests specific NodePorts (by specifying a\nvalue), those requests will be respected, regardless of this field.\nThis field may only be set for services with type LoadBalancer and will\nbe cleared if the type is changed to any other type."

fn spec.managed.services.additional.serviceTemplate.spec.withClusterIP

withClusterIP(clusterIP)

"clusterIP is the IP address of the service and is usually assigned\nrandomly. If an address is specified manually, is in-range (as per\nsystem configuration), and is not in use, it will be allocated to the\nservice; otherwise creation of the service will fail. This field may not\nbe changed through updates unless the type field is also being changed\nto ExternalName (which requires this field to be blank) or the type\nfield is being changed from ExternalName (in which case this field may\noptionally be specified, as describe above). Valid values are \"None\",\nempty string (\"\"), or a valid IP address. Setting this to \"None\" makes a\n\"headless service\" (no virtual IP), which is useful when direct endpoint\nconnections are preferred and proxying is not required. Only applies to\ntypes ClusterIP, NodePort, and LoadBalancer. If this field is specified\nwhen creating a Service of type ExternalName, creation will fail. This\nfield will be wiped when updating a Service to type ExternalName.\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies"

fn spec.managed.services.additional.serviceTemplate.spec.withClusterIPs

withClusterIPs(clusterIPs)

"ClusterIPs is a list of IP addresses assigned to this service, and are\nusually assigned randomly. If an address is specified manually, is\nin-range (as per system configuration), and is not in use, it will be\nallocated to the service; otherwise creation of the service will fail.\nThis field may not be changed through updates unless the type field is\nalso being changed to ExternalName (which requires this field to be\nempty) or the type field is being changed from ExternalName (in which\ncase this field may optionally be specified, as describe above). Valid\nvalues are \"None\", empty string (\"\"), or a valid IP address. Setting\nthis to \"None\" makes a \"headless service\" (no virtual IP), which is\nuseful when direct endpoint connections are preferred and proxying is\nnot required. Only applies to types ClusterIP, NodePort, and\nLoadBalancer. If this field is specified when creating a Service of type\nExternalName, creation will fail. This field will be wiped when updating\na Service to type ExternalName. If this field is not specified, it will\nbe initialized from the clusterIP field. If this field is specified,\nclients must ensure that clusterIPs[0] and clusterIP have the same\nvalue.\n\n\nThis field may hold a maximum of two entries (dual-stack IPs, in either order).\nThese IPs must correspond to the values of the ipFamilies field. Both\nclusterIPs and ipFamilies are governed by the ipFamilyPolicy field.\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies"

fn spec.managed.services.additional.serviceTemplate.spec.withClusterIPsMixin

withClusterIPsMixin(clusterIPs)

"ClusterIPs is a list of IP addresses assigned to this service, and are\nusually assigned randomly. If an address is specified manually, is\nin-range (as per system configuration), and is not in use, it will be\nallocated to the service; otherwise creation of the service will fail.\nThis field may not be changed through updates unless the type field is\nalso being changed to ExternalName (which requires this field to be\nempty) or the type field is being changed from ExternalName (in which\ncase this field may optionally be specified, as describe above). Valid\nvalues are \"None\", empty string (\"\"), or a valid IP address. Setting\nthis to \"None\" makes a \"headless service\" (no virtual IP), which is\nuseful when direct endpoint connections are preferred and proxying is\nnot required. Only applies to types ClusterIP, NodePort, and\nLoadBalancer. If this field is specified when creating a Service of type\nExternalName, creation will fail. This field will be wiped when updating\na Service to type ExternalName. If this field is not specified, it will\nbe initialized from the clusterIP field. If this field is specified,\nclients must ensure that clusterIPs[0] and clusterIP have the same\nvalue.\n\n\nThis field may hold a maximum of two entries (dual-stack IPs, in either order).\nThese IPs must correspond to the values of the ipFamilies field. Both\nclusterIPs and ipFamilies are governed by the ipFamilyPolicy field.\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies"

Note: This function appends passed data to existing values

fn spec.managed.services.additional.serviceTemplate.spec.withExternalIPs

withExternalIPs(externalIPs)

"externalIPs is a list of IP addresses for which nodes in the cluster\nwill also accept traffic for this service. These IPs are not managed by\nKubernetes. The user is responsible for ensuring that traffic arrives\nat a node with this IP. A common example is external load-balancers\nthat are not part of the Kubernetes system."

fn spec.managed.services.additional.serviceTemplate.spec.withExternalIPsMixin

withExternalIPsMixin(externalIPs)

"externalIPs is a list of IP addresses for which nodes in the cluster\nwill also accept traffic for this service. These IPs are not managed by\nKubernetes. The user is responsible for ensuring that traffic arrives\nat a node with this IP. A common example is external load-balancers\nthat are not part of the Kubernetes system."

Note: This function appends passed data to existing values

fn spec.managed.services.additional.serviceTemplate.spec.withExternalName

withExternalName(externalName)

"externalName is the external reference that discovery mechanisms will\nreturn as an alias for this service (e.g. a DNS CNAME record). No\nproxying will be involved. Must be a lowercase RFC-1123 hostname\n(https://tools.ietf.org/html/rfc1123) and requires type to be \"ExternalName\"."

fn spec.managed.services.additional.serviceTemplate.spec.withExternalTrafficPolicy

withExternalTrafficPolicy(externalTrafficPolicy)

"externalTrafficPolicy describes how nodes distribute service traffic they\nreceive on one of the Service's \"externally-facing\" addresses (NodePorts,\nExternalIPs, and LoadBalancer IPs). If set to \"Local\", the proxy will configure\nthe service in a way that assumes that external load balancers will take care\nof balancing the service traffic between nodes, and so each node will deliver\ntraffic only to the node-local endpoints of the service, without masquerading\nthe client source IP. (Traffic mistakenly sent to a node with no endpoints will\nbe dropped.) The default value, \"Cluster\", uses the standard behavior of\nrouting to all endpoints evenly (possibly modified by topology and other\nfeatures). Note that traffic sent to an External IP or LoadBalancer IP from\nwithin the cluster will always get \"Cluster\" semantics, but clients sending to\na NodePort from within the cluster may need to take traffic policy into account\nwhen picking a node."

fn spec.managed.services.additional.serviceTemplate.spec.withHealthCheckNodePort

withHealthCheckNodePort(healthCheckNodePort)

"healthCheckNodePort specifies the healthcheck nodePort for the service.\nThis only applies when type is set to LoadBalancer and\nexternalTrafficPolicy is set to Local. If a value is specified, is\nin-range, and is not in use, it will be used. If not specified, a value\nwill be automatically allocated. External systems (e.g. load-balancers)\ncan use this port to determine if a given node holds endpoints for this\nservice or not. If this field is specified when creating a Service\nwhich does not need it, creation will fail. This field will be wiped\nwhen updating a Service to no longer need it (e.g. changing type).\nThis field cannot be updated once set."

fn spec.managed.services.additional.serviceTemplate.spec.withInternalTrafficPolicy

withInternalTrafficPolicy(internalTrafficPolicy)

"InternalTrafficPolicy describes how nodes distribute service traffic they\nreceive on the ClusterIP. If set to \"Local\", the proxy will assume that pods\nonly want to talk to endpoints of the service on the same node as the pod,\ndropping the traffic if there are no local endpoints. The default value,\n\"Cluster\", uses the standard behavior of routing to all endpoints evenly\n(possibly modified by topology and other features)."

fn spec.managed.services.additional.serviceTemplate.spec.withIpFamilies

withIpFamilies(ipFamilies)

"IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this\nservice. This field is usually assigned automatically based on cluster\nconfiguration and the ipFamilyPolicy field. If this field is specified\nmanually, the requested family is available in the cluster,\nand ipFamilyPolicy allows it, it will be used; otherwise creation of\nthe service will fail. This field is conditionally mutable: it allows\nfor adding or removing a secondary IP family, but it does not allow\nchanging the primary IP family of the Service. Valid values are \"IPv4\"\nand \"IPv6\". This field only applies to Services of types ClusterIP,\nNodePort, and LoadBalancer, and does apply to \"headless\" services.\nThis field will be wiped when updating a Service to type ExternalName.\n\n\nThis field may hold a maximum of two entries (dual-stack families, in\neither order). These families must correspond to the values of the\nclusterIPs field, if specified. Both clusterIPs and ipFamilies are\ngoverned by the ipFamilyPolicy field."

fn spec.managed.services.additional.serviceTemplate.spec.withIpFamiliesMixin

withIpFamiliesMixin(ipFamilies)

"IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this\nservice. This field is usually assigned automatically based on cluster\nconfiguration and the ipFamilyPolicy field. If this field is specified\nmanually, the requested family is available in the cluster,\nand ipFamilyPolicy allows it, it will be used; otherwise creation of\nthe service will fail. This field is conditionally mutable: it allows\nfor adding or removing a secondary IP family, but it does not allow\nchanging the primary IP family of the Service. Valid values are \"IPv4\"\nand \"IPv6\". This field only applies to Services of types ClusterIP,\nNodePort, and LoadBalancer, and does apply to \"headless\" services.\nThis field will be wiped when updating a Service to type ExternalName.\n\n\nThis field may hold a maximum of two entries (dual-stack families, in\neither order). These families must correspond to the values of the\nclusterIPs field, if specified. Both clusterIPs and ipFamilies are\ngoverned by the ipFamilyPolicy field."

Note: This function appends passed data to existing values

fn spec.managed.services.additional.serviceTemplate.spec.withIpFamilyPolicy

withIpFamilyPolicy(ipFamilyPolicy)

"IPFamilyPolicy represents the dual-stack-ness requested or required by\nthis Service. If there is no value provided, then this field will be set\nto SingleStack. Services can be \"SingleStack\" (a single IP family),\n\"PreferDualStack\" (two IP families on dual-stack configured clusters or\na single IP family on single-stack clusters), or \"RequireDualStack\"\n(two IP families on dual-stack configured clusters, otherwise fail). The\nipFamilies and clusterIPs fields depend on the value of this field. This\nfield will be wiped when updating a service to type ExternalName."

fn spec.managed.services.additional.serviceTemplate.spec.withLoadBalancerClass

withLoadBalancerClass(loadBalancerClass)

"loadBalancerClass is the class of the load balancer implementation this Service belongs to.\nIf specified, the value of this field must be a label-style identifier, with an optional prefix,\ne.g. \"internal-vip\" or \"example.com/internal-vip\". Unprefixed names are reserved for end-users.\nThis field can only be set when the Service type is 'LoadBalancer'. If not set, the default load\nbalancer implementation is used, today this is typically done through the cloud provider integration,\nbut should apply for any default implementation. If set, it is assumed that a load balancer\nimplementation is watching for Services with a matching class. Any default load balancer\nimplementation (e.g. cloud providers) should ignore Services that set this field.\nThis field can only be set when creating or updating a Service to type 'LoadBalancer'.\nOnce set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type."

fn spec.managed.services.additional.serviceTemplate.spec.withLoadBalancerIP

withLoadBalancerIP(loadBalancerIP)

"Only applies to Service Type: LoadBalancer.\nThis feature depends on whether the underlying cloud-provider supports specifying\nthe loadBalancerIP when a load balancer is created.\nThis field will be ignored if the cloud-provider does not support the feature.\nDeprecated: This field was under-specified and its meaning varies across implementations.\nUsing it is non-portable and it may not support dual-stack.\nUsers are encouraged to use implementation-specific annotations when available."

fn spec.managed.services.additional.serviceTemplate.spec.withLoadBalancerSourceRanges

withLoadBalancerSourceRanges(loadBalancerSourceRanges)

"If specified and supported by the platform, this will restrict traffic through the cloud-provider\nload-balancer will be restricted to the specified client IPs. This field will be ignored if the\ncloud-provider does not support the feature.\"\nMore info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/"

fn spec.managed.services.additional.serviceTemplate.spec.withLoadBalancerSourceRangesMixin

withLoadBalancerSourceRangesMixin(loadBalancerSourceRanges)

"If specified and supported by the platform, this will restrict traffic through the cloud-provider\nload-balancer will be restricted to the specified client IPs. This field will be ignored if the\ncloud-provider does not support the feature.\"\nMore info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/"

Note: This function appends passed data to existing values

fn spec.managed.services.additional.serviceTemplate.spec.withPorts

withPorts(ports)

"The list of ports that are exposed by this service.\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies"

fn spec.managed.services.additional.serviceTemplate.spec.withPortsMixin

withPortsMixin(ports)

"The list of ports that are exposed by this service.\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies"

Note: This function appends passed data to existing values

fn spec.managed.services.additional.serviceTemplate.spec.withPublishNotReadyAddresses

withPublishNotReadyAddresses(publishNotReadyAddresses)

"publishNotReadyAddresses indicates that any agent which deals with endpoints for this\nService should disregard any indications of ready/not-ready.\nThe primary use case for setting this field is for a StatefulSet's Headless Service to\npropagate SRV DNS records for its Pods for the purpose of peer discovery.\nThe Kubernetes controllers that generate Endpoints and EndpointSlice resources for\nServices interpret this to mean that all endpoints are considered \"ready\" even if the\nPods themselves are not. Agents which consume only Kubernetes generated endpoints\nthrough the Endpoints or EndpointSlice resources can safely assume this behavior."

fn spec.managed.services.additional.serviceTemplate.spec.withSelector

withSelector(selector)

"Route service traffic to pods with label keys and values matching this\nselector. If empty or not present, the service is assumed to have an\nexternal process managing its endpoints, which Kubernetes will not\nmodify. Only applies to types ClusterIP, NodePort, and LoadBalancer.\nIgnored if type is ExternalName.\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/"

fn spec.managed.services.additional.serviceTemplate.spec.withSelectorMixin

withSelectorMixin(selector)

"Route service traffic to pods with label keys and values matching this\nselector. If empty or not present, the service is assumed to have an\nexternal process managing its endpoints, which Kubernetes will not\nmodify. Only applies to types ClusterIP, NodePort, and LoadBalancer.\nIgnored if type is ExternalName.\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/"

Note: This function appends passed data to existing values

fn spec.managed.services.additional.serviceTemplate.spec.withSessionAffinity

withSessionAffinity(sessionAffinity)

"Supports \"ClientIP\" and \"None\". Used to maintain session affinity.\nEnable client IP based session affinity.\nMust be ClientIP or None.\nDefaults to None.\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies"

fn spec.managed.services.additional.serviceTemplate.spec.withTrafficDistribution

withTrafficDistribution(trafficDistribution)

"TrafficDistribution offers a way to express preferences for how traffic is\ndistributed to Service endpoints. Implementations can use this field as a\nhint, but are not required to guarantee strict adherence. If the field is\nnot set, the implementation will apply its default routing strategy. If set\nto \"PreferClose\", implementations should prioritize endpoints that are\ntopologically close (e.g., same zone).\nThis is an alpha field and requires enabling ServiceTrafficDistribution feature."

fn spec.managed.services.additional.serviceTemplate.spec.withType

withType(type)

"type determines how the Service is exposed. Defaults to ClusterIP. Valid\noptions are ExternalName, ClusterIP, NodePort, and LoadBalancer.\n\"ClusterIP\" allocates a cluster-internal IP address for load-balancing\nto endpoints. Endpoints are determined by the selector or if that is not\nspecified, by manual construction of an Endpoints object or\nEndpointSlice objects. If clusterIP is \"None\", no virtual IP is\nallocated and the endpoints are published as a set of endpoints rather\nthan a virtual IP.\n\"NodePort\" builds on ClusterIP and allocates a port on every node which\nroutes to the same endpoints as the clusterIP.\n\"LoadBalancer\" builds on NodePort and creates an external load-balancer\n(if supported in the current cloud) which routes to the same endpoints\nas the clusterIP.\n\"ExternalName\" aliases this service to the specified externalName.\nSeveral other fields do not apply to ExternalName services.\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types"

obj spec.managed.services.additional.serviceTemplate.spec.ports

"The list of ports that are exposed by this service.\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies"

fn spec.managed.services.additional.serviceTemplate.spec.ports.withAppProtocol

withAppProtocol(appProtocol)

"The application protocol for this port.\nThis is used as a hint for implementations to offer richer behavior for protocols that they understand.\nThis field follows standard Kubernetes label syntax.\nValid values are either:\n\n\n Un-prefixed protocol names - reserved for IANA standard service names (as per\nRFC-6335 and https://www.iana.org/assignments/service-names).\n\n\n Kubernetes-defined prefixed names:\n * 'kubernetes.io/h2c' - HTTP/2 prior knowledge over cleartext as described in https://www.rfc-editor.org/rfc/rfc9113.html#name-starting-http-2-with-prior-\n * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455\n * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455\n\n\n* Other protocols should use implementation-defined prefixed names such as\nmycompany.com/my-custom-protocol."

fn spec.managed.services.additional.serviceTemplate.spec.ports.withName

withName(name)

"The name of this port within the service. This must be a DNS_LABEL.\nAll ports within a ServiceSpec must have unique names. When considering\nthe endpoints for a Service, this must match the 'name' field in the\nEndpointPort.\nOptional if only one ServicePort is defined on this service."

fn spec.managed.services.additional.serviceTemplate.spec.ports.withNodePort

withNodePort(nodePort)

"The port on each node on which this service is exposed when type is\nNodePort or LoadBalancer. Usually assigned by the system. If a value is\nspecified, in-range, and not in use it will be used, otherwise the\noperation will fail. If not specified, a port will be allocated if this\nService requires one. If this field is specified when creating a\nService which does not need it, creation will fail. This field will be\nwiped when updating a Service to no longer need it (e.g. changing type\nfrom NodePort to ClusterIP).\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport"

fn spec.managed.services.additional.serviceTemplate.spec.ports.withPort

withPort(port)

"The port that will be exposed by this service."

fn spec.managed.services.additional.serviceTemplate.spec.ports.withProtocol

withProtocol(protocol)

"The IP protocol for this port. Supports \"TCP\", \"UDP\", and \"SCTP\".\nDefault is TCP."

fn spec.managed.services.additional.serviceTemplate.spec.ports.withTargetPort

withTargetPort(targetPort)

"Number or name of the port to access on the pods targeted by the service.\nNumber must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.\nIf this is a string, it will be looked up as a named port in the\ntarget Pod's container ports. If this is not specified, the value\nof the 'port' field is used (an identity map).\nThis field is ignored for services with clusterIP=None, and should be\nomitted or set equal to the 'port' field.\nMore info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service"

obj spec.managed.services.additional.serviceTemplate.spec.sessionAffinityConfig

"sessionAffinityConfig contains the configurations of session affinity."

obj spec.managed.services.additional.serviceTemplate.spec.sessionAffinityConfig.clientIP

"clientIP contains the configurations of Client IP based session affinity."

fn spec.managed.services.additional.serviceTemplate.spec.sessionAffinityConfig.clientIP.withTimeoutSeconds

withTimeoutSeconds(timeoutSeconds)

"timeoutSeconds specifies the seconds of ClientIP type session sticky time.\nThe value must be >0 && <=86400(for 1 day) if ServiceAffinity == \"ClientIP\".\nDefault value is 10800(for 3 hours)."

obj spec.monitoring

"The configuration of the monitoring infrastructure of this cluster"

fn spec.monitoring.withCustomQueriesConfigMap

withCustomQueriesConfigMap(customQueriesConfigMap)

"The list of config maps containing the custom queries"

fn spec.monitoring.withCustomQueriesConfigMapMixin

withCustomQueriesConfigMapMixin(customQueriesConfigMap)

"The list of config maps containing the custom queries"

Note: This function appends passed data to existing values

fn spec.monitoring.withCustomQueriesSecret

withCustomQueriesSecret(customQueriesSecret)

"The list of secrets containing the custom queries"

fn spec.monitoring.withCustomQueriesSecretMixin

withCustomQueriesSecretMixin(customQueriesSecret)

"The list of secrets containing the custom queries"

Note: This function appends passed data to existing values

fn spec.monitoring.withDisableDefaultQueries

withDisableDefaultQueries(disableDefaultQueries)

"Whether the default queries should be injected.\nSet it to true if you don't want to inject default queries into the cluster.\nDefault: false."

fn spec.monitoring.withEnablePodMonitor

withEnablePodMonitor(enablePodMonitor)

"Enable or disable the PodMonitor"

fn spec.monitoring.withPodMonitorMetricRelabelings

withPodMonitorMetricRelabelings(podMonitorMetricRelabelings)

"The list of metric relabelings for the PodMonitor. Applied to samples before ingestion."

fn spec.monitoring.withPodMonitorMetricRelabelingsMixin

withPodMonitorMetricRelabelingsMixin(podMonitorMetricRelabelings)

"The list of metric relabelings for the PodMonitor. Applied to samples before ingestion."

Note: This function appends passed data to existing values

fn spec.monitoring.withPodMonitorRelabelings

withPodMonitorRelabelings(podMonitorRelabelings)

"The list of relabelings for the PodMonitor. Applied to samples before scraping."

fn spec.monitoring.withPodMonitorRelabelingsMixin

withPodMonitorRelabelingsMixin(podMonitorRelabelings)

"The list of relabelings for the PodMonitor. Applied to samples before scraping."

Note: This function appends passed data to existing values

obj spec.monitoring.customQueriesConfigMap

"The list of config maps containing the custom queries"

fn spec.monitoring.customQueriesConfigMap.withKey

withKey(key)

"The key to select"

fn spec.monitoring.customQueriesConfigMap.withName

withName(name)

"Name of the referent."

obj spec.monitoring.customQueriesSecret

"The list of secrets containing the custom queries"

fn spec.monitoring.customQueriesSecret.withKey

withKey(key)

"The key to select"

fn spec.monitoring.customQueriesSecret.withName

withName(name)

"Name of the referent."

obj spec.monitoring.podMonitorMetricRelabelings

"The list of metric relabelings for the PodMonitor. Applied to samples before ingestion."

fn spec.monitoring.podMonitorMetricRelabelings.withAction

withAction(action)

"Action to perform based on the regex matching.\n\n\nUppercase and Lowercase actions require Prometheus >= v2.36.0.\nDropEqual and KeepEqual actions require Prometheus >= v2.41.0.\n\n\nDefault: \"Replace\

fn spec.monitoring.podMonitorMetricRelabelings.withModulus

withModulus(modulus)

"Modulus to take of the hash of the source label values.\n\n\nOnly applicable when the action is HashMod."

fn spec.monitoring.podMonitorMetricRelabelings.withRegex

withRegex(regex)

"Regular expression against which the extracted value is matched."

fn spec.monitoring.podMonitorMetricRelabelings.withReplacement

withReplacement(replacement)

"Replacement value against which a Replace action is performed if the\nregular expression matches.\n\n\nRegex capture groups are available."

fn spec.monitoring.podMonitorMetricRelabelings.withSeparator

withSeparator(separator)

"Separator is the string between concatenated SourceLabels."

fn spec.monitoring.podMonitorMetricRelabelings.withSourceLabels

withSourceLabels(sourceLabels)

"The source labels select values from existing labels. Their content is\nconcatenated using the configured Separator and matched against the\nconfigured regular expression."

fn spec.monitoring.podMonitorMetricRelabelings.withSourceLabelsMixin

withSourceLabelsMixin(sourceLabels)

"The source labels select values from existing labels. Their content is\nconcatenated using the configured Separator and matched against the\nconfigured regular expression."

Note: This function appends passed data to existing values

fn spec.monitoring.podMonitorMetricRelabelings.withTargetLabel

withTargetLabel(targetLabel)

"Label to which the resulting string is written in a replacement.\n\n\nIt is mandatory for Replace, HashMod, Lowercase, Uppercase,\nKeepEqual and DropEqual actions.\n\n\nRegex capture groups are available."

obj spec.monitoring.podMonitorRelabelings

"The list of relabelings for the PodMonitor. Applied to samples before scraping."

fn spec.monitoring.podMonitorRelabelings.withAction

withAction(action)

"Action to perform based on the regex matching.\n\n\nUppercase and Lowercase actions require Prometheus >= v2.36.0.\nDropEqual and KeepEqual actions require Prometheus >= v2.41.0.\n\n\nDefault: \"Replace\

fn spec.monitoring.podMonitorRelabelings.withModulus

withModulus(modulus)

"Modulus to take of the hash of the source label values.\n\n\nOnly applicable when the action is HashMod."

fn spec.monitoring.podMonitorRelabelings.withRegex

withRegex(regex)

"Regular expression against which the extracted value is matched."

fn spec.monitoring.podMonitorRelabelings.withReplacement

withReplacement(replacement)

"Replacement value against which a Replace action is performed if the\nregular expression matches.\n\n\nRegex capture groups are available."

fn spec.monitoring.podMonitorRelabelings.withSeparator

withSeparator(separator)

"Separator is the string between concatenated SourceLabels."

fn spec.monitoring.podMonitorRelabelings.withSourceLabels

withSourceLabels(sourceLabels)

"The source labels select values from existing labels. Their content is\nconcatenated using the configured Separator and matched against the\nconfigured regular expression."

fn spec.monitoring.podMonitorRelabelings.withSourceLabelsMixin

withSourceLabelsMixin(sourceLabels)

"The source labels select values from existing labels. Their content is\nconcatenated using the configured Separator and matched against the\nconfigured regular expression."

Note: This function appends passed data to existing values

fn spec.monitoring.podMonitorRelabelings.withTargetLabel

withTargetLabel(targetLabel)

"Label to which the resulting string is written in a replacement.\n\n\nIt is mandatory for Replace, HashMod, Lowercase, Uppercase,\nKeepEqual and DropEqual actions.\n\n\nRegex capture groups are available."

obj spec.monitoring.tls

"Configure TLS communication for the metrics endpoint.\nChanging tls.enabled option will force a rollout of all instances."

fn spec.monitoring.tls.withEnabled

withEnabled(enabled)

"Enable TLS for the monitoring endpoint.\nChanging this option will force a rollout of all instances."

obj spec.nodeMaintenanceWindow

"Define a maintenance window for the Kubernetes nodes"

fn spec.nodeMaintenanceWindow.withInProgress

withInProgress(inProgress)

"Is there a node maintenance activity in progress?"

fn spec.nodeMaintenanceWindow.withReusePVC

withReusePVC(reusePVC)

"Reuse the existing PVC (wait for the node to come\nup again) or not (recreate it elsewhere - when instances >1)"

obj spec.plugins

"The plugins configuration, containing\nany plugin to be loaded with the corresponding configuration"

fn spec.plugins.withName

withName(name)

"Name is the plugin name"

fn spec.plugins.withParameters

withParameters(parameters)

"Parameters is the configuration of the plugin"

fn spec.plugins.withParametersMixin

withParametersMixin(parameters)

"Parameters is the configuration of the plugin"

Note: This function appends passed data to existing values
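
A plugin entry is just a name plus a parameters map, for example (a sketch; the import path, plugin name and parameters are purely illustrative, and the spec.withPlugins list setter is assumed to exist as it does for the other list fields in this library):

```jsonnet
// Assumed import path; adjust to the vendored location of this library.
local cnpg = import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet';
local cluster = cnpg.postgresql.v1.cluster;

cluster.new('example')
+ cluster.spec.withPlugins([
    cluster.spec.plugins.withName('example.com/backup-plugin')
    + cluster.spec.plugins.withParameters({ bucket: 'example-backups' }),
  ])
```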

obj spec.postgresql

"Configuration of the PostgreSQL server"

fn spec.postgresql.withEnableAlterSystem

withEnableAlterSystem(enableAlterSystem)

"If this parameter is true, the user will be able to invoke ALTER SYSTEM\non this CloudNativePG Cluster.\nThis should only be used for debugging and troubleshooting.\nDefaults to false."

fn spec.postgresql.withParameters

withParameters(parameters)

"PostgreSQL configuration options (postgresql.conf)"

fn spec.postgresql.withParametersMixin

withParametersMixin(parameters)

"PostgreSQL configuration options (postgresql.conf)"

Note: This function appends passed data to existing values

fn spec.postgresql.withPg_hba

withPg_hba(pg_hba)

"PostgreSQL Host Based Authentication rules (lines to be appended\nto the pg_hba.conf file)"

fn spec.postgresql.withPg_hbaMixin

withPg_hbaMixin(pg_hba)

"PostgreSQL Host Based Authentication rules (lines to be appended\nto the pg_hba.conf file)"

Note: This function appends passed data to existing values

fn spec.postgresql.withPg_ident

withPg_ident(pg_ident)

"PostgreSQL User Name Maps rules (lines to be appended\nto the pg_ident.conf file)"

fn spec.postgresql.withPg_identMixin

withPg_identMixin(pg_ident)

"PostgreSQL User Name Maps rules (lines to be appended\nto the pg_ident.conf file)"

Note: This function appends passed data to existing values

fn spec.postgresql.withPromotionTimeout

withPromotionTimeout(promotionTimeout)

"Specifies the maximum number of seconds to wait when promoting an instance to primary.\nDefault value is 40000000, greater than one year in seconds,\nbig enough to simulate an infinite timeout"

fn spec.postgresql.withShared_preload_libraries

withShared_preload_libraries(shared_preload_libraries)

"Lists of shared preload libraries to add to the default ones"

fn spec.postgresql.withShared_preload_librariesMixin

withShared_preload_librariesMixin(shared_preload_libraries)

"Lists of shared preload libraries to add to the default ones"

Note: This function appends passed data to existing values
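
For instance, PostgreSQL parameters, extra pg_hba rules and additional shared preload libraries could be configured as follows (a sketch; all values shown are illustrative):

```jsonnet
// Assumed import path; adjust to the vendored location of this library.
local cnpg = import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet';
local cluster = cnpg.postgresql.v1.cluster;
local pg = cluster.spec.postgresql;

cluster.new('example')
+ pg.withParameters({
    max_connections: '200',
    shared_buffers: '512MB',
  })
// Lines appended to pg_hba.conf; this rule is only an example.
+ pg.withPg_hba(['host all all 10.0.0.0/8 scram-sha-256'])
+ pg.withShared_preload_libraries(['pg_stat_statements'])
```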

obj spec.postgresql.ldap

"Options to specify LDAP configuration"

fn spec.postgresql.ldap.withPort

withPort(port)

"LDAP server port"

fn spec.postgresql.ldap.withScheme

withScheme(scheme)

"LDAP schema to be used, possible options are ldap and ldaps"

fn spec.postgresql.ldap.withServer

withServer(server)

"LDAP hostname or IP address"

fn spec.postgresql.ldap.withTls

withTls(tls)

"Set to 'true' to enable LDAP over TLS. 'false' is default"

obj spec.postgresql.ldap.bindAsAuth

"Bind as authentication configuration"

fn spec.postgresql.ldap.bindAsAuth.withPrefix

withPrefix(prefix)

"Prefix for the bind authentication option"

fn spec.postgresql.ldap.bindAsAuth.withSuffix

withSuffix(suffix)

"Suffix for the bind authentication option"

obj spec.postgresql.ldap.bindSearchAuth

"Bind+Search authentication configuration"

fn spec.postgresql.ldap.bindSearchAuth.withBaseDN

withBaseDN(baseDN)

"Root DN to begin the user search"

fn spec.postgresql.ldap.bindSearchAuth.withBindDN

withBindDN(bindDN)

"DN of the user to bind to the directory"

fn spec.postgresql.ldap.bindSearchAuth.withSearchAttribute

withSearchAttribute(searchAttribute)

"Attribute to match against the username"

fn spec.postgresql.ldap.bindSearchAuth.withSearchFilter

withSearchFilter(searchFilter)

"Search filter to use when doing the search+bind authentication"

obj spec.postgresql.ldap.bindSearchAuth.bindPassword

"Secret with the password for the user to bind to the directory"

fn spec.postgresql.ldap.bindSearchAuth.bindPassword.withKey

withKey(key)

"The key of the secret to select from. Must be a valid secret key."

fn spec.postgresql.ldap.bindSearchAuth.bindPassword.withName

withName(name)

"Name of the referent.\nThis field is effectively required, but due to backwards compatibility is\nallowed to be empty. Instances of this type with an empty value here are\nalmost certainly wrong.\nTODO: Add other useful fields. apiVersion, kind, uid?\nMore info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\nTODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896."

fn spec.postgresql.ldap.bindSearchAuth.bindPassword.withOptional

withOptional(optional)

"Specify whether the Secret or its key must be defined"

obj spec.postgresql.syncReplicaElectionConstraint

"Requirements to be met by sync replicas. This will affect how the \"synchronous_standby_names\" parameter will be\nset up."

fn spec.postgresql.syncReplicaElectionConstraint.withEnabled

withEnabled(enabled)

"This flag enables the constraints for sync replicas"

fn spec.postgresql.syncReplicaElectionConstraint.withNodeLabelsAntiAffinity

withNodeLabelsAntiAffinity(nodeLabelsAntiAffinity)

"A list of node labels values to extract and compare to evaluate if the pods reside in the same topology or not"

fn spec.postgresql.syncReplicaElectionConstraint.withNodeLabelsAntiAffinityMixin

withNodeLabelsAntiAffinityMixin(nodeLabelsAntiAffinity)

"A list of node labels values to extract and compare to evaluate if the pods reside in the same topology or not"

Note: This function appends passed data to existing values

obj spec.postgresql.synchronous

"Configuration of the PostgreSQL synchronous replication feature"

fn spec.postgresql.synchronous.withMaxStandbyNamesFromCluster

withMaxStandbyNamesFromCluster(maxStandbyNamesFromCluster)

"Specifies the maximum number of local cluster pods that can be\nautomatically included in the synchronous_standby_names option in\nPostgreSQL."

fn spec.postgresql.synchronous.withMethod

withMethod(method)

"Method to select synchronous replication standbys from the listed\nservers, accepting 'any' (quorum-based synchronous replication) or\n'first' (priority-based synchronous replication) as values."

fn spec.postgresql.synchronous.withNumber

withNumber(number)

"Specifies the number of synchronous standby servers that\ntransactions must wait for responses from."

fn spec.postgresql.synchronous.withStandbyNamesPost

withStandbyNamesPost(standbyNamesPost)

"A user-defined list of application names to be added to\nsynchronous_standby_names after local cluster pods (the order is\nonly useful for priority-based synchronous replication)."

fn spec.postgresql.synchronous.withStandbyNamesPostMixin

withStandbyNamesPostMixin(standbyNamesPost)

"A user-defined list of application names to be added to\nsynchronous_standby_names after local cluster pods (the order is\nonly useful for priority-based synchronous replication)."

Note: This function appends passed data to existing values

fn spec.postgresql.synchronous.withStandbyNamesPre

withStandbyNamesPre(standbyNamesPre)

"A user-defined list of application names to be added to\nsynchronous_standby_names before local cluster pods (the order is\nonly useful for priority-based synchronous replication)."

fn spec.postgresql.synchronous.withStandbyNamesPreMixin

withStandbyNamesPreMixin(standbyNamesPre)

"A user-defined list of application names to be added to\nsynchronous_standby_names before local cluster pods (the order is\nonly useful for priority-based synchronous replication)."

Note: This function appends passed data to existing values
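
Quorum-based synchronous replication that waits for one standby, drawn from at most two cluster pods, could be requested like this (a sketch; the import path is assumed):

```jsonnet
// Assumed import path; adjust to the vendored location of this library.
local cnpg = import 'github.com/jsonnet-libs/cloudnative-pg-libsonnet/main.libsonnet';
local cluster = cnpg.postgresql.v1.cluster;
local synchronous = cluster.spec.postgresql.synchronous;

cluster.new('example')
+ synchronous.withMethod('any')                  // 'any' = quorum-based, 'first' = priority-based
+ synchronous.withNumber(1)                      // transactions wait for one synchronous standby
+ synchronous.withMaxStandbyNamesFromCluster(2)  // at most two local pods in synchronous_standby_names
```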

obj spec.projectedVolumeTemplate

"Template to be used to define projected volumes, projected volumes will be mounted\nunder /projected base folder"

fn spec.projectedVolumeTemplate.withDefaultMode

withDefaultMode(defaultMode)

"defaultMode are the mode bits used to set permissions on created files by default.\nMust be an octal value between 0000 and 0777 or a decimal value between 0 and 511.\nYAML accepts both octal and decimal values, JSON requires decimal values for mode bits.\nDirectories within the path are not affected by this setting.\nThis might be in conflict with other options that affect the file\nmode, like fsGroup, and the result can be other mode bits set."

fn spec.projectedVolumeTemplate.withSources

withSources(sources)

"sources is the list of volume projections"

fn spec.projectedVolumeTemplate.withSourcesMixin

withSourcesMixin(sources)

"sources is the list of volume projections"

Note: This function appends passed data to existing values

obj spec.projectedVolumeTemplate.sources

"sources is the list of volume projections"

obj spec.projectedVolumeTemplate.sources.clusterTrustBundle

"ClusterTrustBundle allows a pod to access the .spec.trustBundle field\nof ClusterTrustBundle objects in an auto-updating file.\n\n\nAlpha, gated by the ClusterTrustBundleProjection feature gate.\n\n\nClusterTrustBundle objects can either be selected by name, or by the\ncombination of signer name and a label selector.\n\n\nKubelet performs aggressive normalization of the PEM contents written\ninto the pod filesystem. Esoteric PEM features such as inter-block\ncomments and block headers are stripped. Certificates are deduplicated.\nThe ordering of certificates within the file is arbitrary, and Kubelet\nmay change the order over time."

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.withName

withName(name)

"Select a single ClusterTrustBundle by object name. Mutually-exclusive\nwith signerName and labelSelector."

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.withOptional

withOptional(optional)

"If true, don't block pod startup if the referenced ClusterTrustBundle(s)\naren't available. If using name, then the named ClusterTrustBundle is\nallowed not to exist. If using signerName, then the combination of\nsignerName and labelSelector is allowed to match zero\nClusterTrustBundles."

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.withPath

withPath(path)

"Relative path from the volume root to write the bundle."

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.withSignerName

withSignerName(signerName)

"Select all ClusterTrustBundles that match this signer name.\nMutually-exclusive with name. The contents of all selected\nClusterTrustBundles will be unified and deduplicated."

obj spec.projectedVolumeTemplate.sources.clusterTrustBundle.labelSelector

"Select all ClusterTrustBundles that match this label selector. Only has\neffect if signerName is set. Mutually-exclusive with name. If unset,\ninterpreted as \"match nothing\". If set but empty, interpreted as \"match\neverything\"."

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.labelSelector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.labelSelector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.labelSelector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.labelSelector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.projectedVolumeTemplate.sources.clusterTrustBundle.labelSelector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.labelSelector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.labelSelector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.labelSelector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.projectedVolumeTemplate.sources.clusterTrustBundle.labelSelector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values

obj spec.projectedVolumeTemplate.sources.configMap

"configMap information about the configMap data to project"

fn spec.projectedVolumeTemplate.sources.configMap.withItems

withItems(items)

"items if unspecified, each key-value pair in the Data field of the referenced\nConfigMap will be projected into the volume as a file whose name is the\nkey and content is the value. If specified, the listed keys will be\nprojected into the specified paths, and unlisted keys will not be\npresent. If a key is specified which is not present in the ConfigMap,\nthe volume setup will error unless it is marked optional. Paths must be\nrelative and may not contain the '..' path or start with '..'."

fn spec.projectedVolumeTemplate.sources.configMap.withItemsMixin

withItemsMixin(items)

"items if unspecified, each key-value pair in the Data field of the referenced\nConfigMap will be projected into the volume as a file whose name is the\nkey and content is the value. If specified, the listed keys will be\nprojected into the specified paths, and unlisted keys will not be\npresent. If a key is specified which is not present in the ConfigMap,\nthe volume setup will error unless it is marked optional. Paths must be\nrelative and may not contain the '..' path or start with '..'."

Note: This function appends passed data to existing values

fn spec.projectedVolumeTemplate.sources.configMap.withName

withName(name)

"Name of the referent.\nThis field is effectively required, but due to backwards compatibility is\nallowed to be empty. Instances of this type with an empty value here are\nalmost certainly wrong.\nTODO: Add other useful fields. apiVersion, kind, uid?\nMore info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\nTODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896."

fn spec.projectedVolumeTemplate.sources.configMap.withOptional

withOptional(optional)

"optional specify whether the ConfigMap or its keys must be defined"

obj spec.projectedVolumeTemplate.sources.configMap.items

"items if unspecified, each key-value pair in the Data field of the referenced\nConfigMap will be projected into the volume as a file whose name is the\nkey and content is the value. If specified, the listed keys will be\nprojected into the specified paths, and unlisted keys will not be\npresent. If a key is specified which is not present in the ConfigMap,\nthe volume setup will error unless it is marked optional. Paths must be\nrelative and may not contain the '..' path or start with '..'."

fn spec.projectedVolumeTemplate.sources.configMap.items.withKey

withKey(key)

"key is the key to project."

fn spec.projectedVolumeTemplate.sources.configMap.items.withMode

withMode(mode)

"mode is Optional: mode bits used to set permissions on this file.\nMust be an octal value between 0000 and 0777 or a decimal value between 0 and 511.\nYAML accepts both octal and decimal values, JSON requires decimal values for mode bits.\nIf not specified, the volume defaultMode will be used.\nThis might be in conflict with other options that affect the file\nmode, like fsGroup, and the result can be other mode bits set."

fn spec.projectedVolumeTemplate.sources.configMap.items.withPath

withPath(path)

"path is the relative path of the file to map the key to.\nMay not be an absolute path.\nMay not contain the path element '..'.\nMay not start with the string '..'."

obj spec.projectedVolumeTemplate.sources.downwardAPI

"downwardAPI information about the downwardAPI data to project"

fn spec.projectedVolumeTemplate.sources.downwardAPI.withItems

withItems(items)

"Items is a list of DownwardAPIVolume file"

fn spec.projectedVolumeTemplate.sources.downwardAPI.withItemsMixin

withItemsMixin(items)

"Items is a list of DownwardAPIVolume file"

Note: This function appends passed data to existing values

obj spec.projectedVolumeTemplate.sources.downwardAPI.items

"Items is a list of DownwardAPIVolume file"

fn spec.projectedVolumeTemplate.sources.downwardAPI.items.withMode

withMode(mode)

"Optional: mode bits used to set permissions on this file, must be an octal value\nbetween 0000 and 0777 or a decimal value between 0 and 511.\nYAML accepts both octal and decimal values, JSON requires decimal values for mode bits.\nIf not specified, the volume defaultMode will be used.\nThis might be in conflict with other options that affect the file\nmode, like fsGroup, and the result can be other mode bits set."

fn spec.projectedVolumeTemplate.sources.downwardAPI.items.withPath

withPath(path)

"Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..'"

obj spec.projectedVolumeTemplate.sources.downwardAPI.items.fieldRef

"Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported."

fn spec.projectedVolumeTemplate.sources.downwardAPI.items.fieldRef.withApiVersion

withApiVersion(apiVersion)

"Version of the schema the FieldPath is written in terms of, defaults to \"v1\"."

fn spec.projectedVolumeTemplate.sources.downwardAPI.items.fieldRef.withFieldPath

withFieldPath(fieldPath)

"Path of the field to select in the specified API version."

obj spec.projectedVolumeTemplate.sources.downwardAPI.items.resourceFieldRef

"Selects a resource of the container: only resources limits and requests\n(limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported."

fn spec.projectedVolumeTemplate.sources.downwardAPI.items.resourceFieldRef.withContainerName

withContainerName(containerName)

"Container name: required for volumes, optional for env vars"

fn spec.projectedVolumeTemplate.sources.downwardAPI.items.resourceFieldRef.withDivisor

withDivisor(divisor)

"Specifies the output format of the exposed resources, defaults to \"1\

fn spec.projectedVolumeTemplate.sources.downwardAPI.items.resourceFieldRef.withResource

withResource(resource)

"Required: resource to select"

obj spec.projectedVolumeTemplate.sources.secret

"secret information about the secret data to project"

fn spec.projectedVolumeTemplate.sources.secret.withItems

withItems(items)

"items if unspecified, each key-value pair in the Data field of the referenced\nSecret will be projected into the volume as a file whose name is the\nkey and content is the value. If specified, the listed keys will be\nprojected into the specified paths, and unlisted keys will not be\npresent. If a key is specified which is not present in the Secret,\nthe volume setup will error unless it is marked optional. Paths must be\nrelative and may not contain the '..' path or start with '..'."

fn spec.projectedVolumeTemplate.sources.secret.withItemsMixin

withItemsMixin(items)

"items if unspecified, each key-value pair in the Data field of the referenced\nSecret will be projected into the volume as a file whose name is the\nkey and content is the value. If specified, the listed keys will be\nprojected into the specified paths, and unlisted keys will not be\npresent. If a key is specified which is not present in the Secret,\nthe volume setup will error unless it is marked optional. Paths must be\nrelative and may not contain the '..' path or start with '..'."

Note: This function appends passed data to existing values

fn spec.projectedVolumeTemplate.sources.secret.withName

withName(name)

"Name of the referent.\nThis field is effectively required, but due to backwards compatibility is\nallowed to be empty. Instances of this type with an empty value here are\nalmost certainly wrong.\nTODO: Add other useful fields. apiVersion, kind, uid?\nMore info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\nTODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896."

fn spec.projectedVolumeTemplate.sources.secret.withOptional

withOptional(optional)

"optional field specify whether the Secret or its key must be defined"

obj spec.projectedVolumeTemplate.sources.secret.items

"items if unspecified, each key-value pair in the Data field of the referenced\nSecret will be projected into the volume as a file whose name is the\nkey and content is the value. If specified, the listed keys will be\nprojected into the specified paths, and unlisted keys will not be\npresent. If a key is specified which is not present in the Secret,\nthe volume setup will error unless it is marked optional. Paths must be\nrelative and may not contain the '..' path or start with '..'."

fn spec.projectedVolumeTemplate.sources.secret.items.withKey

withKey(key)

"key is the key to project."

fn spec.projectedVolumeTemplate.sources.secret.items.withMode

withMode(mode)

"mode is Optional: mode bits used to set permissions on this file.\nMust be an octal value between 0000 and 0777 or a decimal value between 0 and 511.\nYAML accepts both octal and decimal values, JSON requires decimal values for mode bits.\nIf not specified, the volume defaultMode will be used.\nThis might be in conflict with other options that affect the file\nmode, like fsGroup, and the result can be other mode bits set."

fn spec.projectedVolumeTemplate.sources.secret.items.withPath

withPath(path)

"path is the relative path of the file to map the key to.\nMay not be an absolute path.\nMay not contain the path element '..'.\nMay not start with the string '..'."

obj spec.projectedVolumeTemplate.sources.serviceAccountToken

"serviceAccountToken is information about the serviceAccountToken data to project"

fn spec.projectedVolumeTemplate.sources.serviceAccountToken.withAudience

withAudience(audience)

"audience is the intended audience of the token. A recipient of a token\nmust identify itself with an identifier specified in the audience of the\ntoken, and otherwise should reject the token. The audience defaults to the\nidentifier of the apiserver."

fn spec.projectedVolumeTemplate.sources.serviceAccountToken.withExpirationSeconds

withExpirationSeconds(expirationSeconds)

"expirationSeconds is the requested duration of validity of the service\naccount token. As the token approaches expiration, the kubelet volume\nplugin will proactively rotate the service account token. The kubelet will\nstart trying to rotate the token if the token is older than 80 percent of\nits time to live or if the token is older than 24 hours.Defaults to 1 hour\nand must be at least 10 minutes."

fn spec.projectedVolumeTemplate.sources.serviceAccountToken.withPath

withPath(path)

"path is the path relative to the mount point of the file to project the\ntoken into."

obj spec.replica

"Replica cluster configuration"

fn spec.replica.withEnabled

withEnabled(enabled)

"If replica mode is enabled, this cluster will be a replica of an\nexisting cluster. Replica cluster can be created from a recovery\nobject store or via streaming through pg_basebackup.\nRefer to the Replica clusters page of the documentation for more information."

fn spec.replica.withMinApplyDelay

withMinApplyDelay(minApplyDelay)

"When replica mode is enabled, this parameter allows you to replay\ntransactions only when the system time is at least the configured\ntime past the commit time. This provides an opportunity to correct\ndata loss errors. Note that when this parameter is set, a promotion\ntoken cannot be used."

fn spec.replica.withPrimary

withPrimary(primary)

"Primary defines which Cluster is defined to be the primary in the distributed PostgreSQL cluster, based on the\ntopology specified in externalClusters"

fn spec.replica.withPromotionToken

withPromotionToken(promotionToken)

"A demotion token generated by an external cluster used to\ncheck if the promotion requirements are met."

fn spec.replica.withSelf

withSelf(Self)

"Self defines the name of this cluster. It is used to determine if this is a primary\nor a replica cluster, comparing it with primary"

fn spec.replica.withSource

withSource(source)

"The name of the external cluster which is the replication origin"

obj spec.replicationSlots

"Replication slots management configuration"

fn spec.replicationSlots.withUpdateInterval

withUpdateInterval(updateInterval)

"Standby will update the status of the local replication slots\nevery updateInterval seconds (default 30)."

obj spec.replicationSlots.highAvailability

"Replication slots for high availability configuration"

fn spec.replicationSlots.highAvailability.withEnabled

withEnabled(enabled)

"If enabled (default), the operator will automatically manage replication slots\non the primary instance and use them in streaming replication\nconnections with all the standby instances that are part of the HA\ncluster. If disabled, the operator will not take advantage\nof replication slots in streaming connections with the replicas.\nThis feature also controls replication slots in replica cluster,\nfrom the designated primary to its cascading replicas."

fn spec.replicationSlots.highAvailability.withSlotPrefix

withSlotPrefix(slotPrefix)

"Prefix for replication slots managed by the operator for HA.\nIt may only contain lower case letters, numbers, and the underscore character.\nThis can only be set at creation time. By default set to _cnpg_."

obj spec.replicationSlots.synchronizeReplicas

"Configures the synchronization of the user defined physical replication slots"

fn spec.replicationSlots.synchronizeReplicas.withEnabled

withEnabled(enabled)

"When set to true, every replication slot that is on the primary is synchronized on each standby"

fn spec.replicationSlots.synchronizeReplicas.withExcludePatterns

withExcludePatterns(excludePatterns)

"List of regular expression patterns to match the names of replication slots to be excluded (by default empty)"

fn spec.replicationSlots.synchronizeReplicas.withExcludePatternsMixin

withExcludePatternsMixin(excludePatterns)

"List of regular expression patterns to match the names of replication slots to be excluded (by default empty)"

Note: This function appends passed data to existing values
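
A hedged sketch combining these settings, with HA slot management left enabled and user-defined slots synchronized except for an illustrative exclusion pattern:

    cluster.spec.replicationSlots.withUpdateInterval(30)
    + cluster.spec.replicationSlots.highAvailability.withEnabled(true)
    + cluster.spec.replicationSlots.highAvailability.withSlotPrefix('_cnpg_')
    + cluster.spec.replicationSlots.synchronizeReplicas.withEnabled(true)
    + cluster.spec.replicationSlots.synchronizeReplicas.withExcludePatterns(['^temp_'])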

obj spec.resources

"Resources requirements of every generated Pod. Please refer to\nhttps://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\nfor more information."

fn spec.resources.withClaims

withClaims(claims)

"Claims lists the names of resources, defined in spec.resourceClaims,\nthat are used by this container.\n\n\nThis is an alpha field and requires enabling the\nDynamicResourceAllocation feature gate.\n\n\nThis field is immutable. It can only be set for containers."

fn spec.resources.withClaimsMixin

withClaimsMixin(claims)

"Claims lists the names of resources, defined in spec.resourceClaims,\nthat are used by this container.\n\n\nThis is an alpha field and requires enabling the\nDynamicResourceAllocation feature gate.\n\n\nThis field is immutable. It can only be set for containers."

Note: This function appends passed data to existing values

fn spec.resources.withLimits

withLimits(limits)

"Limits describes the maximum amount of compute resources allowed.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

fn spec.resources.withLimitsMixin

withLimitsMixin(limits)

"Limits describes the maximum amount of compute resources allowed.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

Note: This function appends passed data to existing values

fn spec.resources.withRequests

withRequests(requests)

"Requests describes the minimum amount of compute resources required.\nIf Requests is omitted for a container, it defaults to Limits if that is explicitly specified,\notherwise to an implementation-defined value. Requests cannot exceed Limits.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

fn spec.resources.withRequestsMixin

withRequestsMixin(requests)

"Requests describes the minimum amount of compute resources required.\nIf Requests is omitted for a container, it defaults to Limits if that is explicitly specified,\notherwise to an implementation-defined value. Requests cannot exceed Limits.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

Note: This function appends passed data to existing values

obj spec.resources.claims

"Claims lists the names of resources, defined in spec.resourceClaims,\nthat are used by this container.\n\n\nThis is an alpha field and requires enabling the\nDynamicResourceAllocation feature gate.\n\n\nThis field is immutable. It can only be set for containers."

fn spec.resources.claims.withName

withName(name)

"Name must match the name of one entry in pod.spec.resourceClaims of\nthe Pod where this field is used. It makes that resource available\ninside a container."

obj spec.seccompProfile

"The SeccompProfile applied to every Pod and Container.\nDefaults to: RuntimeDefault"

fn spec.seccompProfile.withLocalhostProfile

withLocalhostProfile(localhostProfile)

"localhostProfile indicates a profile defined in a file on the node should be used.\nThe profile must be preconfigured on the node to work.\nMust be a descending path, relative to the kubelet's configured seccomp profile location.\nMust be set if type is \"Localhost\". Must NOT be set for any other type."

fn spec.seccompProfile.withType

withType(type)

"type indicates which kind of seccomp profile will be applied.\nValid options are:\n\n\nLocalhost - a profile defined in a file on the node should be used.\nRuntimeDefault - the container runtime default profile should be used.\nUnconfined - no profile should be applied."

obj spec.serviceAccountTemplate

"Configure the generation of the service account"

obj spec.serviceAccountTemplate.metadata

"Metadata are the metadata to be used for the generated\nservice account"

fn spec.serviceAccountTemplate.metadata.withAnnotations

withAnnotations(annotations)

"Annotations is an unstructured key value map stored with a resource that may be\nset by external tools to store and retrieve arbitrary metadata. They are not\nqueryable and should be preserved when modifying objects.\nMore info: http://kubernetes.io/docs/user-guide/annotations"

fn spec.serviceAccountTemplate.metadata.withAnnotationsMixin

withAnnotationsMixin(annotations)

"Annotations is an unstructured key value map stored with a resource that may be\nset by external tools to store and retrieve arbitrary metadata. They are not\nqueryable and should be preserved when modifying objects.\nMore info: http://kubernetes.io/docs/user-guide/annotations"

Note: This function appends passed data to existing values

fn spec.serviceAccountTemplate.metadata.withLabels

withLabels(labels)

"Map of string keys and values that can be used to organize and categorize\n(scope and select) objects. May match selectors of replication controllers\nand services.\nMore info: http://kubernetes.io/docs/user-guide/labels"

fn spec.serviceAccountTemplate.metadata.withLabelsMixin

withLabelsMixin(labels)

"Map of string keys and values that can be used to organize and categorize\n(scope and select) objects. May match selectors of replication controllers\nand services.\nMore info: http://kubernetes.io/docs/user-guide/labels"

Note: This function appends passed data to existing values

fn spec.serviceAccountTemplate.metadata.withName

withName(name)

"The name of the resource. Only supported for certain types"

obj spec.storage

"Configuration of the storage of the instances"

fn spec.storage.withResizeInUseVolumes

withResizeInUseVolumes(resizeInUseVolumes)

"Resize existent PVCs, defaults to true"

fn spec.storage.withSize

withSize(size)

"Size of the storage. Required if not already specified in the PVC template.\nChanges to this field are automatically reapplied to the created PVCs.\nSize cannot be decreased."

fn spec.storage.withStorageClass

withStorageClass(storageClass)

"StorageClass to use for PVCs. Applied after\nevaluating the PVC template, if available.\nIf not specified, the generated PVCs will use the\ndefault storage class"

obj spec.storage.pvcTemplate

"Template to be used to generate the Persistent Volume Claim"

fn spec.storage.pvcTemplate.withAccessModes

withAccessModes(accessModes)

"accessModes contains the desired access modes the volume should have.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1"

fn spec.storage.pvcTemplate.withAccessModesMixin

withAccessModesMixin(accessModes)

"accessModes contains the desired access modes the volume should have.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1"

Note: This function appends passed data to existing values

fn spec.storage.pvcTemplate.withStorageClassName

withStorageClassName(storageClassName)

"storageClassName is the name of the StorageClass required by the claim.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1"

fn spec.storage.pvcTemplate.withVolumeAttributesClassName

withVolumeAttributesClassName(volumeAttributesClassName)

"volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim.\nIf specified, the CSI driver will create or update the volume with the attributes defined\nin the corresponding VolumeAttributesClass. This has a different purpose than storageClassName,\nit can be changed after the claim is created. An empty string value means that no VolumeAttributesClass\nwill be applied to the claim but it's not allowed to reset this field to empty string once it is set.\nIf unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass\nwill be set by the persistentvolume controller if it exists.\nIf the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be\nset to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource\nexists.\nMore info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/\n(Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled."

fn spec.storage.pvcTemplate.withVolumeMode

withVolumeMode(volumeMode)

"volumeMode defines what type of volume is required by the claim.\nValue of Filesystem is implied when not included in claim spec."

fn spec.storage.pvcTemplate.withVolumeName

withVolumeName(volumeName)

"volumeName is the binding reference to the PersistentVolume backing this claim."

obj spec.storage.pvcTemplate.dataSource

"dataSource field can be used to specify either:\n An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)\n An existing PVC (PersistentVolumeClaim)\nIf the provisioner or an external controller can support the specified data source,\nit will create a new volume based on the contents of the specified data source.\nWhen the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef,\nand dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified.\nIf the namespace is specified, then dataSourceRef will not be copied to dataSource."

fn spec.storage.pvcTemplate.dataSource.withApiGroup

withApiGroup(apiGroup)

"APIGroup is the group for the resource being referenced.\nIf APIGroup is not specified, the specified Kind must be in the core API group.\nFor any other third-party types, APIGroup is required."

fn spec.storage.pvcTemplate.dataSource.withKind

withKind(kind)

"Kind is the type of resource being referenced"

fn spec.storage.pvcTemplate.dataSource.withName

withName(name)

"Name is the name of resource being referenced"

obj spec.storage.pvcTemplate.dataSourceRef

"dataSourceRef specifies the object from which to populate the volume with data, if a non-empty\nvolume is desired. This may be any object from a non-empty API group (non\ncore object) or a PersistentVolumeClaim object.\nWhen this field is specified, volume binding will only succeed if the type of\nthe specified object matches some installed volume populator or dynamic\nprovisioner.\nThis field will replace the functionality of the dataSource field and as such\nif both fields are non-empty, they must have the same value. For backwards\ncompatibility, when namespace isn't specified in dataSourceRef,\nboth fields (dataSource and dataSourceRef) will be set to the same\nvalue automatically if one of them is empty and the other is non-empty.\nWhen namespace is specified in dataSourceRef,\ndataSource isn't set to the same value and must be empty.\nThere are three important differences between dataSource and dataSourceRef:\n While dataSource only allows two specific types of objects, dataSourceRef\n allows any non-core object, as well as PersistentVolumeClaim objects.\n While dataSource ignores disallowed values (dropping them), dataSourceRef\n preserves all values, and generates an error if a disallowed value is\n specified.\n* While dataSource only allows local objects, dataSourceRef allows objects\n in any namespaces.\n(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.\n(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled."

fn spec.storage.pvcTemplate.dataSourceRef.withApiGroup

withApiGroup(apiGroup)

"APIGroup is the group for the resource being referenced.\nIf APIGroup is not specified, the specified Kind must be in the core API group.\nFor any other third-party types, APIGroup is required."

fn spec.storage.pvcTemplate.dataSourceRef.withKind

withKind(kind)

"Kind is the type of resource being referenced"

fn spec.storage.pvcTemplate.dataSourceRef.withName

withName(name)

"Name is the name of resource being referenced"

fn spec.storage.pvcTemplate.dataSourceRef.withNamespace

withNamespace(namespace)

"Namespace is the namespace of resource being referenced\nNote that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details.\n(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled."

obj spec.storage.pvcTemplate.resources

"resources represents the minimum resources the volume should have.\nIf RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements\nthat are lower than previous value but must still be higher than capacity recorded in the\nstatus field of the claim.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources"

fn spec.storage.pvcTemplate.resources.withLimits

withLimits(limits)

"Limits describes the maximum amount of compute resources allowed.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

fn spec.storage.pvcTemplate.resources.withLimitsMixin

withLimitsMixin(limits)

"Limits describes the maximum amount of compute resources allowed.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

Note: This function appends passed data to existing values

fn spec.storage.pvcTemplate.resources.withRequests

withRequests(requests)

"Requests describes the minimum amount of compute resources required.\nIf Requests is omitted for a container, it defaults to Limits if that is explicitly specified,\notherwise to an implementation-defined value. Requests cannot exceed Limits.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

fn spec.storage.pvcTemplate.resources.withRequestsMixin

withRequestsMixin(requests)

"Requests describes the minimum amount of compute resources required.\nIf Requests is omitted for a container, it defaults to Limits if that is explicitly specified,\notherwise to an implementation-defined value. Requests cannot exceed Limits.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

Note: This function appends passed data to existing values

obj spec.storage.pvcTemplate.selector

"selector is a label query over volumes to consider for binding."

fn spec.storage.pvcTemplate.selector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.storage.pvcTemplate.selector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.storage.pvcTemplate.selector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.storage.pvcTemplate.selector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.storage.pvcTemplate.selector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.storage.pvcTemplate.selector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.storage.pvcTemplate.selector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.storage.pvcTemplate.selector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.storage.pvcTemplate.selector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values

obj spec.superuserSecret

"The secret containing the superuser password. If not defined a new\nsecret will be created with a randomly generated password"

fn spec.superuserSecret.withName

withName(name)

"Name of the referent."

obj spec.tablespaces

"The tablespaces configuration"

fn spec.tablespaces.withName

withName(name)

"The name of the tablespace"

fn spec.tablespaces.withTemporary

withTemporary(temporary)

"When set to true, the tablespace will be added as a temp_tablespaces\nentry in PostgreSQL, and will be available to automatically house temp\ndatabase objects, or other temporary files. Please refer to PostgreSQL\ndocumentation for more information on the temp_tablespaces GUC."

obj spec.tablespaces.owner

"Owner is the PostgreSQL user owning the tablespace"

fn spec.tablespaces.owner.withName

withName(name)

obj spec.tablespaces.storage

"The storage configuration for the tablespace"

fn spec.tablespaces.storage.withResizeInUseVolumes

withResizeInUseVolumes(resizeInUseVolumes)

"Resize existent PVCs, defaults to true"

fn spec.tablespaces.storage.withSize

withSize(size)

"Size of the storage. Required if not already specified in the PVC template.\nChanges to this field are automatically reapplied to the created PVCs.\nSize cannot be decreased."

fn spec.tablespaces.storage.withStorageClass

withStorageClass(storageClass)

"StorageClass to use for PVCs. Applied after\nevaluating the PVC template, if available.\nIf not specified, the generated PVCs will use the\ndefault storage class"

obj spec.tablespaces.storage.pvcTemplate

"Template to be used to generate the Persistent Volume Claim"

fn spec.tablespaces.storage.pvcTemplate.withAccessModes

withAccessModes(accessModes)

"accessModes contains the desired access modes the volume should have.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1"

fn spec.tablespaces.storage.pvcTemplate.withAccessModesMixin

withAccessModesMixin(accessModes)

"accessModes contains the desired access modes the volume should have.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1"

Note: This function appends passed data to existing values

fn spec.tablespaces.storage.pvcTemplate.withStorageClassName

withStorageClassName(storageClassName)

"storageClassName is the name of the StorageClass required by the claim.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1"

fn spec.tablespaces.storage.pvcTemplate.withVolumeAttributesClassName

withVolumeAttributesClassName(volumeAttributesClassName)

"volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim.\nIf specified, the CSI driver will create or update the volume with the attributes defined\nin the corresponding VolumeAttributesClass. This has a different purpose than storageClassName,\nit can be changed after the claim is created. An empty string value means that no VolumeAttributesClass\nwill be applied to the claim but it's not allowed to reset this field to empty string once it is set.\nIf unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass\nwill be set by the persistentvolume controller if it exists.\nIf the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be\nset to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource\nexists.\nMore info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/\n(Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled."

fn spec.tablespaces.storage.pvcTemplate.withVolumeMode

withVolumeMode(volumeMode)

"volumeMode defines what type of volume is required by the claim.\nValue of Filesystem is implied when not included in claim spec."

fn spec.tablespaces.storage.pvcTemplate.withVolumeName

withVolumeName(volumeName)

"volumeName is the binding reference to the PersistentVolume backing this claim."

obj spec.tablespaces.storage.pvcTemplate.dataSource

"dataSource field can be used to specify either:\n An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)\n An existing PVC (PersistentVolumeClaim)\nIf the provisioner or an external controller can support the specified data source,\nit will create a new volume based on the contents of the specified data source.\nWhen the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef,\nand dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified.\nIf the namespace is specified, then dataSourceRef will not be copied to dataSource."

fn spec.tablespaces.storage.pvcTemplate.dataSource.withApiGroup

withApiGroup(apiGroup)

"APIGroup is the group for the resource being referenced.\nIf APIGroup is not specified, the specified Kind must be in the core API group.\nFor any other third-party types, APIGroup is required."

fn spec.tablespaces.storage.pvcTemplate.dataSource.withKind

withKind(kind)

"Kind is the type of resource being referenced"

fn spec.tablespaces.storage.pvcTemplate.dataSource.withName

withName(name)

"Name is the name of resource being referenced"

obj spec.tablespaces.storage.pvcTemplate.dataSourceRef

"dataSourceRef specifies the object from which to populate the volume with data, if a non-empty\nvolume is desired. This may be any object from a non-empty API group (non\ncore object) or a PersistentVolumeClaim object.\nWhen this field is specified, volume binding will only succeed if the type of\nthe specified object matches some installed volume populator or dynamic\nprovisioner.\nThis field will replace the functionality of the dataSource field and as such\nif both fields are non-empty, they must have the same value. For backwards\ncompatibility, when namespace isn't specified in dataSourceRef,\nboth fields (dataSource and dataSourceRef) will be set to the same\nvalue automatically if one of them is empty and the other is non-empty.\nWhen namespace is specified in dataSourceRef,\ndataSource isn't set to the same value and must be empty.\nThere are three important differences between dataSource and dataSourceRef:\n While dataSource only allows two specific types of objects, dataSourceRef\n allows any non-core object, as well as PersistentVolumeClaim objects.\n While dataSource ignores disallowed values (dropping them), dataSourceRef\n preserves all values, and generates an error if a disallowed value is\n specified.\n* While dataSource only allows local objects, dataSourceRef allows objects\n in any namespaces.\n(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.\n(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled."

fn spec.tablespaces.storage.pvcTemplate.dataSourceRef.withApiGroup

withApiGroup(apiGroup)

"APIGroup is the group for the resource being referenced.\nIf APIGroup is not specified, the specified Kind must be in the core API group.\nFor any other third-party types, APIGroup is required."

fn spec.tablespaces.storage.pvcTemplate.dataSourceRef.withKind

withKind(kind)

"Kind is the type of resource being referenced"

fn spec.tablespaces.storage.pvcTemplate.dataSourceRef.withName

withName(name)

"Name is the name of resource being referenced"

fn spec.tablespaces.storage.pvcTemplate.dataSourceRef.withNamespace

withNamespace(namespace)

"Namespace is the namespace of resource being referenced\nNote that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details.\n(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled."

obj spec.tablespaces.storage.pvcTemplate.resources

"resources represents the minimum resources the volume should have.\nIf RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements\nthat are lower than previous value but must still be higher than capacity recorded in the\nstatus field of the claim.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources"

fn spec.tablespaces.storage.pvcTemplate.resources.withLimits

withLimits(limits)

"Limits describes the maximum amount of compute resources allowed.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

fn spec.tablespaces.storage.pvcTemplate.resources.withLimitsMixin

withLimitsMixin(limits)

"Limits describes the maximum amount of compute resources allowed.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

Note: This function appends passed data to existing values

fn spec.tablespaces.storage.pvcTemplate.resources.withRequests

withRequests(requests)

"Requests describes the minimum amount of compute resources required.\nIf Requests is omitted for a container, it defaults to Limits if that is explicitly specified,\notherwise to an implementation-defined value. Requests cannot exceed Limits.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

fn spec.tablespaces.storage.pvcTemplate.resources.withRequestsMixin

withRequestsMixin(requests)

"Requests describes the minimum amount of compute resources required.\nIf Requests is omitted for a container, it defaults to Limits if that is explicitly specified,\notherwise to an implementation-defined value. Requests cannot exceed Limits.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

Note: This function appends passed data to existing values

obj spec.tablespaces.storage.pvcTemplate.selector

"selector is a label query over volumes to consider for binding."

fn spec.tablespaces.storage.pvcTemplate.selector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.tablespaces.storage.pvcTemplate.selector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.tablespaces.storage.pvcTemplate.selector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.tablespaces.storage.pvcTemplate.selector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.tablespaces.storage.pvcTemplate.selector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.tablespaces.storage.pvcTemplate.selector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.tablespaces.storage.pvcTemplate.selector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.tablespaces.storage.pvcTemplate.selector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.tablespaces.storage.pvcTemplate.selector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values

obj spec.topologySpreadConstraints

"TopologySpreadConstraints specifies how to spread matching pods among the given topology.\nMore info:\nhttps://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/"

fn spec.topologySpreadConstraints.withMatchLabelKeys

withMatchLabelKeys(matchLabelKeys)

"MatchLabelKeys is a set of pod label keys to select the pods over which\nspreading will be calculated. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are ANDed with labelSelector\nto select the group of existing pods over which spreading will be calculated\nfor the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector.\nMatchLabelKeys cannot be set when LabelSelector isn't set.\nKeys that don't exist in the incoming pod labels will\nbe ignored. A null or empty list means only match against labelSelector.\n\n\nThis is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default)."

fn spec.topologySpreadConstraints.withMatchLabelKeysMixin

withMatchLabelKeysMixin(matchLabelKeys)

"MatchLabelKeys is a set of pod label keys to select the pods over which\nspreading will be calculated. The keys are used to lookup values from the\nincoming pod labels, those key-value labels are ANDed with labelSelector\nto select the group of existing pods over which spreading will be calculated\nfor the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector.\nMatchLabelKeys cannot be set when LabelSelector isn't set.\nKeys that don't exist in the incoming pod labels will\nbe ignored. A null or empty list means only match against labelSelector.\n\n\nThis is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default)."

Note: This function appends passed data to existing values

fn spec.topologySpreadConstraints.withMaxSkew

withMaxSkew(maxSkew)

"MaxSkew describes the degree to which pods may be unevenly distributed.\nWhen whenUnsatisfiable=DoNotSchedule, it is the maximum permitted difference\nbetween the number of matching pods in the target topology and the global minimum.\nThe global minimum is the minimum number of matching pods in an eligible domain\nor zero if the number of eligible domains is less than MinDomains.\nFor example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same\nlabelSelector spread as 2/2/1:\nIn this case, the global minimum is 1.\n| zone1 | zone2 | zone3 |\n| P P | P P | P |\n- if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2;\nscheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2)\nviolate MaxSkew(1).\n- if MaxSkew is 2, incoming pod can be scheduled onto any zone.\nWhen whenUnsatisfiable=ScheduleAnyway, it is used to give higher precedence\nto topologies that satisfy it.\nIt's a required field. Default value is 1 and 0 is not allowed."

fn spec.topologySpreadConstraints.withMinDomains

withMinDomains(minDomains)

"MinDomains indicates a minimum number of eligible domains.\nWhen the number of eligible domains with matching topology keys is less than minDomains,\nPod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed.\nAnd when the number of eligible domains with matching topology keys equals or greater than minDomains,\nthis value has no effect on scheduling.\nAs a result, when the number of eligible domains is less than minDomains,\nscheduler won't schedule more than maxSkew Pods to those domains.\nIf value is nil, the constraint behaves as if MinDomains is equal to 1.\nValid values are integers greater than 0.\nWhen value is not nil, WhenUnsatisfiable must be DoNotSchedule.\n\n\nFor example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same\nlabelSelector spread as 2/2/2:\n| zone1 | zone2 | zone3 |\n| P P | P P | P P |\nThe number of domains is less than 5(MinDomains), so \"global minimum\" is treated as 0.\nIn this situation, new pod with the same labelSelector cannot be scheduled,\nbecause computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones,\nit will violate MaxSkew."

fn spec.topologySpreadConstraints.withNodeAffinityPolicy

withNodeAffinityPolicy(nodeAffinityPolicy)

"NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector\nwhen calculating pod topology spread skew. Options are:\n- Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.\n- Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.\n\n\nIf this value is nil, the behavior is equivalent to the Honor policy.\nThis is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."

fn spec.topologySpreadConstraints.withNodeTaintsPolicy

withNodeTaintsPolicy(nodeTaintsPolicy)

"NodeTaintsPolicy indicates how we will treat node taints when calculating\npod topology spread skew. Options are:\n- Honor: nodes without taints, along with tainted nodes for which the incoming pod\nhas a toleration, are included.\n- Ignore: node taints are ignored. All nodes are included.\n\n\nIf this value is nil, the behavior is equivalent to the Ignore policy.\nThis is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."

fn spec.topologySpreadConstraints.withTopologyKey

withTopologyKey(topologyKey)

"TopologyKey is the key of node labels. Nodes that have a label with this key\nand identical values are considered to be in the same topology.\nWe consider each as a \"bucket\", and try to put balanced number\nof pods into each bucket.\nWe define a domain as a particular instance of a topology.\nAlso, we define an eligible domain as a domain whose nodes meet the requirements of\nnodeAffinityPolicy and nodeTaintsPolicy.\ne.g. If TopologyKey is \"kubernetes.io/hostname\", each Node is a domain of that topology.\nAnd, if TopologyKey is \"topology.kubernetes.io/zone\", each zone is a domain of that topology.\nIt's a required field."

fn spec.topologySpreadConstraints.withWhenUnsatisfiable

withWhenUnsatisfiable(whenUnsatisfiable)

"WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy\nthe spread constraint.\n- DoNotSchedule (default) tells the scheduler not to schedule it.\n- ScheduleAnyway tells the scheduler to schedule the pod in any location,\n but giving higher precedence to topologies that would help reduce the\n skew.\nA constraint is considered \"Unsatisfiable\" for an incoming pod\nif and only if every possible node assignment for that pod would violate\n\"MaxSkew\" on some topology.\nFor example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same\nlabelSelector spread as 3/1/1:\n| zone1 | zone2 | zone3 |\n| P P P | P | P |\nIf WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled\nto zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies\nMaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler\nwon't make it more imbalanced.\nIt's a required field."

obj spec.topologySpreadConstraints.labelSelector

"LabelSelector is used to find matching pods.\nPods that match this label selector are counted to determine the number of pods\nin their corresponding topology domain."

fn spec.topologySpreadConstraints.labelSelector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.topologySpreadConstraints.labelSelector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.topologySpreadConstraints.labelSelector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.topologySpreadConstraints.labelSelector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.topologySpreadConstraints.labelSelector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.topologySpreadConstraints.labelSelector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.topologySpreadConstraints.labelSelector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.topologySpreadConstraints.labelSelector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.topologySpreadConstraints.labelSelector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values
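
A hedged sketch spreading instances evenly across zones, again assuming the element builders feed a spec.withTopologySpreadConstraints list setter documented elsewhere in this reference; the selector label is illustrative:

    cluster.spec.withTopologySpreadConstraints([
      cluster.spec.topologySpreadConstraints.withMaxSkew(1)
      + cluster.spec.topologySpreadConstraints.withTopologyKey('topology.kubernetes.io/zone')
      + cluster.spec.topologySpreadConstraints.withWhenUnsatisfiable('DoNotSchedule')
      + cluster.spec.topologySpreadConstraints.labelSelector.withMatchLabels({ 'cnpg.io/cluster': 'example' }),
    ])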

obj spec.walStorage

"Configuration of the storage for PostgreSQL WAL (Write-Ahead Log)"

fn spec.walStorage.withResizeInUseVolumes

withResizeInUseVolumes(resizeInUseVolumes)

"Resize existent PVCs, defaults to true"

fn spec.walStorage.withSize

withSize(size)

"Size of the storage. Required if not already specified in the PVC template.\nChanges to this field are automatically reapplied to the created PVCs.\nSize cannot be decreased."

fn spec.walStorage.withStorageClass

withStorageClass(storageClass)

"StorageClass to use for PVCs. Applied after\nevaluating the PVC template, if available.\nIf not specified, the generated PVCs will use the\ndefault storage class"

obj spec.walStorage.pvcTemplate

"Template to be used to generate the Persistent Volume Claim"

fn spec.walStorage.pvcTemplate.withAccessModes

withAccessModes(accessModes)

"accessModes contains the desired access modes the volume should have.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1"

fn spec.walStorage.pvcTemplate.withAccessModesMixin

withAccessModesMixin(accessModes)

"accessModes contains the desired access modes the volume should have.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1"

Note: This function appends passed data to existing values

fn spec.walStorage.pvcTemplate.withStorageClassName

withStorageClassName(storageClassName)

"storageClassName is the name of the StorageClass required by the claim.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1"

fn spec.walStorage.pvcTemplate.withVolumeAttributesClassName

withVolumeAttributesClassName(volumeAttributesClassName)

"volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim.\nIf specified, the CSI driver will create or update the volume with the attributes defined\nin the corresponding VolumeAttributesClass. This has a different purpose than storageClassName,\nit can be changed after the claim is created. An empty string value means that no VolumeAttributesClass\nwill be applied to the claim but it's not allowed to reset this field to empty string once it is set.\nIf unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass\nwill be set by the persistentvolume controller if it exists.\nIf the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be\nset to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource\nexists.\nMore info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/\n(Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled."

fn spec.walStorage.pvcTemplate.withVolumeMode

withVolumeMode(volumeMode)

"volumeMode defines what type of volume is required by the claim.\nValue of Filesystem is implied when not included in claim spec."

fn spec.walStorage.pvcTemplate.withVolumeName

withVolumeName(volumeName)

"volumeName is the binding reference to the PersistentVolume backing this claim."

obj spec.walStorage.pvcTemplate.dataSource

"dataSource field can be used to specify either:\n An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)\n An existing PVC (PersistentVolumeClaim)\nIf the provisioner or an external controller can support the specified data source,\nit will create a new volume based on the contents of the specified data source.\nWhen the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef,\nand dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified.\nIf the namespace is specified, then dataSourceRef will not be copied to dataSource."

fn spec.walStorage.pvcTemplate.dataSource.withApiGroup

withApiGroup(apiGroup)

"APIGroup is the group for the resource being referenced.\nIf APIGroup is not specified, the specified Kind must be in the core API group.\nFor any other third-party types, APIGroup is required."

fn spec.walStorage.pvcTemplate.dataSource.withKind

withKind(kind)

"Kind is the type of resource being referenced"

fn spec.walStorage.pvcTemplate.dataSource.withName

withName(name)

"Name is the name of resource being referenced"

obj spec.walStorage.pvcTemplate.dataSourceRef

"dataSourceRef specifies the object from which to populate the volume with data, if a non-empty\nvolume is desired. This may be any object from a non-empty API group (non\ncore object) or a PersistentVolumeClaim object.\nWhen this field is specified, volume binding will only succeed if the type of\nthe specified object matches some installed volume populator or dynamic\nprovisioner.\nThis field will replace the functionality of the dataSource field and as such\nif both fields are non-empty, they must have the same value. For backwards\ncompatibility, when namespace isn't specified in dataSourceRef,\nboth fields (dataSource and dataSourceRef) will be set to the same\nvalue automatically if one of them is empty and the other is non-empty.\nWhen namespace is specified in dataSourceRef,\ndataSource isn't set to the same value and must be empty.\nThere are three important differences between dataSource and dataSourceRef:\n While dataSource only allows two specific types of objects, dataSourceRef\n allows any non-core object, as well as PersistentVolumeClaim objects.\n While dataSource ignores disallowed values (dropping them), dataSourceRef\n preserves all values, and generates an error if a disallowed value is\n specified.\n* While dataSource only allows local objects, dataSourceRef allows objects\n in any namespaces.\n(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.\n(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled."

fn spec.walStorage.pvcTemplate.dataSourceRef.withApiGroup

withApiGroup(apiGroup)

"APIGroup is the group for the resource being referenced.\nIf APIGroup is not specified, the specified Kind must be in the core API group.\nFor any other third-party types, APIGroup is required."

fn spec.walStorage.pvcTemplate.dataSourceRef.withKind

withKind(kind)

"Kind is the type of resource being referenced"

fn spec.walStorage.pvcTemplate.dataSourceRef.withName

withName(name)

"Name is the name of resource being referenced"

fn spec.walStorage.pvcTemplate.dataSourceRef.withNamespace

withNamespace(namespace)

"Namespace is the namespace of resource being referenced\nNote that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details.\n(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled."

obj spec.walStorage.pvcTemplate.resources

"resources represents the minimum resources the volume should have.\nIf RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements\nthat are lower than previous value but must still be higher than capacity recorded in the\nstatus field of the claim.\nMore info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources"

fn spec.walStorage.pvcTemplate.resources.withLimits

withLimits(limits)

"Limits describes the maximum amount of compute resources allowed.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

fn spec.walStorage.pvcTemplate.resources.withLimitsMixin

withLimitsMixin(limits)

"Limits describes the maximum amount of compute resources allowed.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

Note: This function appends passed data to existing values

fn spec.walStorage.pvcTemplate.resources.withRequests

withRequests(requests)

"Requests describes the minimum amount of compute resources required.\nIf Requests is omitted for a container, it defaults to Limits if that is explicitly specified,\notherwise to an implementation-defined value. Requests cannot exceed Limits.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

fn spec.walStorage.pvcTemplate.resources.withRequestsMixin

withRequestsMixin(requests)

"Requests describes the minimum amount of compute resources required.\nIf Requests is omitted for a container, it defaults to Limits if that is explicitly specified,\notherwise to an implementation-defined value. Requests cannot exceed Limits.\nMore info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"

Note: This function appends passed data to existing values

obj spec.walStorage.pvcTemplate.selector

"selector is a label query over volumes to consider for binding."

fn spec.walStorage.pvcTemplate.selector.withMatchExpressions

withMatchExpressions(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.walStorage.pvcTemplate.selector.withMatchExpressionsMixin

withMatchExpressionsMixin(matchExpressions)

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

Note: This function appends passed data to existing values

fn spec.walStorage.pvcTemplate.selector.withMatchLabels

withMatchLabels(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

fn spec.walStorage.pvcTemplate.selector.withMatchLabelsMixin

withMatchLabelsMixin(matchLabels)

"matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels\nmap is equivalent to an element of matchExpressions, whose key field is \"key\", the\noperator is \"In\", and the values array contains only \"value\". The requirements are ANDed."

Note: This function appends passed data to existing values

obj spec.walStorage.pvcTemplate.selector.matchExpressions

"matchExpressions is a list of label selector requirements. The requirements are ANDed."

fn spec.walStorage.pvcTemplate.selector.matchExpressions.withKey

withKey(key)

"key is the label key that the selector applies to."

fn spec.walStorage.pvcTemplate.selector.matchExpressions.withOperator

withOperator(operator)

"operator represents a key's relationship to a set of values.\nValid operators are In, NotIn, Exists and DoesNotExist."

fn spec.walStorage.pvcTemplate.selector.matchExpressions.withValues

withValues(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

fn spec.walStorage.pvcTemplate.selector.matchExpressions.withValuesMixin

withValuesMixin(values)

"values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch."

Note: This function appends passed data to existing values