chart/library/common-test/tests/service/validation_test.yaml
Stavros Kois 5b1abdd839 NAS-118930 / 23.10 / Improve/Refactor Common Library (#917)
* fix

* fix

* some more

* some fixes

* whoops

* initial structure

* finish up configmap

* secret class

* runtest secret

* move files around

* ignore

* make clear on call template that it needs root context

* imagePullSecret (minus targetSelector)

* move out of the way

* clean up comment

* deployment basic spec

* daemonset basic spec

* statefulset spec

* split file

* docs

* update values

* job spec

* job docs

* cronJob basic spec

* job in cron test

* add common version

* podSpec

* whoops

* selectorlabels and pod metadata

* job and cron pod metadata

* update docs

* consistent order

* get ready for pod

* first targetSelector

* remove todo

* update docs

* add hostnet and enableservicelinks

* update selector logic

* update docs

* add tests for restartpolicy

* schedulerName

* priorityclassname

* hostname

* termperiodsec

* nodeselector

* add fail case

* host aliases

* dns policy

* dns config

* tolerations

* serviceAccount class, spawner, saName selector

* add pod todo

* update some tests

* add runtimeclassname

* controllers -> workload and plural to singular

* require at least 1 primary on enabled SAs

* fix script

* remove wrong comment

* update naming scheme

* update rbac values ref

* rbac docs

* rbac's

* append short name, for future use

* update comments

* initial service wireframe

* shorten line

* simplify labels and update tests

* service selectors

* simplify error messages

* finish clusterIP type

* loadbalancer

* nodePort

* externalname

* external ip

* update service

* fix highlighting

* session affinity

* add comment

* update comments

* service ports

* fix indentation

* externalname can have no ports

* fixup externalIP

* add pvc class and spawner and tests

* add nfs and emptyDir vols

* example

* extend docs a bit

* don't create pvc if existing claim is set

* helm... you are dumb really. how this fixes an unrelated test

* add configmap

* add secret vol

* add pvc vol

* add hostpath

* finish volumes

* initial podsec

* podsec context with some todo's to check

* automatic sysctls

* remove todo

* update doc struct

* split docs

* split service docs

* initial container plumbing

* fix tests

* fix test

* rename to class

* command and args

* termination

* add lifecycle

* int value from tpl

* another case

* fix service protocol tpl

* update readme

* ports

* update todo

* cleanup values a bit

* only add sysctl when port is below 1024

* whoops, that's a different range

* update value

* move some old docs to the "to be deleted" dir

* externalInterface validation

* update an error message and apply externalinterface annotations to workloads

* external interfaces

* TZ - TIMEZONE

* update rdoc

* reduce code duplication

* device vol type

* initial certificate plumbing

* update comments

* finish secret creation of certificate

* cert docs

* volumeMounts

* scale certs

* doc

* add tests for volMounts

* values updates

* update todo

* add test case

* remove some todo

* update todos

* vct

* remove todo

* restore default

* rename function

* make selectorlabels a bit better

* trim

* some cleanup

* update some ci values

* update ci

* rollingUpdate defaults

* rename dir

* fix nil pointers

* check the same strategy var

* whoops

* fix tests

* typo

* not a good day for copy paste

* move check

* move another check

* fix some tests for upcoming probes

* one more

* split docs

* add default probes for `main` and docs

* add probes and some ci testruns

* whoops

* fix an edge case

* add an error for edge case

* runtests

* runtest updates

* update

* check if podvalues exist first

* force types

* force only one of the 2

* quote labels and annotations values

* job/cron have auto gen selectors

* remove false test

* fix maxSurge

* fix end

* different fix

* fix some tests

* fix rollUp

* try to fix 3.9.4 helm

* move file to helpers

* use capital types in probes and lifecycle

* Revert "use capital types in probes and lifecycle"

This reverts commit 380ebd5f1f.

* typo

* use lowercase for protocol everywhere

* rbac runtest

* prune old

* add resources

* add resources

* fix rbac

* fix sa naming in pod

* fix test

* 44 suppl group on gpu

* remove todo

* extract function in another file

* whoops

* add securityContext implementation

* add fail cases

* add rest of the tests

* remove todo

* envFrom

* minify

* env list

* add env

* add envdupe check tests

* add fixed envs

* replace containers with callers

* add callers

* add initContainer

* add init run test

* reset default test val

* add name tests

* add some more tests

* rename

* validate workload type only if enabled

* lint fix for 3.9.4

* add tpl on init enabled

* whoops

* fix init

* echo

* echo

* args...

* list

* comment out disabled persistences

* fix some typos and improve resources `requests` requirement

* improve docs a bit

* require name, description, version, type

* add some wording regarding what Helm Template column means

* add title as requirement

* remove scheduler

* remove priority class name

* remove nfs + externalIP

* remove LB

* remove STS & VCT

* fix a test

* remove nodeselector

* remove DS

* remove pvc

* remove todo

* conditionally print the type, as we might want to use the template to select all objects in the chart

* add some docs

* docs for notes

* add `tls.` in the certificate secret, according to k8s docs

* add some basic docs around the rest of the options

* clean values.yaml

* catch an edge case

* remove externalName

* set automountSA on SA to false

* add note about the automountSA
2023-02-20 15:23:33 +02:00


suite: service validation test
templates:
  - common.yaml
tests:
  - it: should fail without primary service
    set:
      service:
        service-name:
          enabled: true
    asserts:
      - failedTemplate:
          errorMessage: Service - At least one enabled service must be primary
  - it: should fail with more than one primary service
    set:
      service:
        service-name1:
          enabled: true
          primary: true
        service-name2:
          enabled: true
          primary: true
    asserts:
      - failedTemplate:
          errorMessage: Service - Only one service can be primary
  - it: should fail without primary port in service
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          ports:
            port-name:
              enabled: true
    asserts:
      - failedTemplate:
          errorMessage: Service - At least one enabled port in service must be primary
  - it: should fail with more than one primary port in service
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          ports:
            port-name1:
              enabled: true
              primary: true
            port-name2:
              enabled: true
              primary: true
    asserts:
      - failedTemplate:
          errorMessage: Service - Only one port per service can be primary
  - it: should fail with no enabled ports in enabled service
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          ports:
            port-name1:
              enabled: true
              primary: true
            port-name2:
              enabled: true
              primary: true
    asserts:
      - failedTemplate:
          errorMessage: Service - Only one port per service can be primary
  - it: should fail with annotations not a dict
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          annotations: not-a-dict
          ports:
            port-name1:
              enabled: true
              primary: true
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <annotations> to be a dictionary, but got [string]
  - it: should fail with labels not a dict
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          labels: not-a-dict
          ports:
            port-name1:
              enabled: true
              primary: true
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <labels> to be a dictionary, but got [string]
  - it: should fail with pod targetSelector not a string
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          targetSelector:
            pod: not-a-string
          ports:
            port-name1:
              enabled: true
              primary: true
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <targetSelector> to be [string], but got [map]
  - it: should fail with container targetSelector not a string
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          ports:
            port-name1:
              enabled: true
              primary: true
              targetSelector:
                container: not-a-string
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <port.targetSelector> to be [string], but got [map]
  - it: should fail with selected pod not defined
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          targetSelector: some-pod-name
          ports:
            port-name:
              enabled: true
              primary: true
              port: 12345
      workload:
        main:
          enabled: true
          primary: true
          type: Deployment
          podSpec: {}
    asserts:
      - failedTemplate:
          errorMessage: Service - Selected pod [some-pod-name] is not defined
  - it: should fail with selected pod not enabled
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          targetSelector: some-pod-name
          ports:
            port-name:
              enabled: true
              primary: true
              port: 12345
      workload:
        some-pod-name:
          enabled: false
          primary: true
          type: Deployment
          podSpec: {}
    asserts:
      - failedTemplate:
          errorMessage: Service - Selected pod [some-pod-name] is not enabled
  - it: should fail with invalid port protocol
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          ports:
            port-name1:
              enabled: true
              primary: true
              port: 12345
              protocol: not-a-protocol
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <port.protocol> to be one of [tcp, udp, http, https] but got [not-a-protocol]
  - it: should fail without port number
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          ports:
            port-name1:
              enabled: true
              primary: true
              port:
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected non-empty <port.port>
  - it: should fail with invalid service type
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          type: not-a-type
          ports:
            port-name1:
              enabled: true
              primary: true
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <type> to be one of [ClusterIP, NodePort] but got [not-a-type]
  - it: should fail with invalid ipFamilyPolicy
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          type: ClusterIP
          ipFamilyPolicy: not-a-policy
          ports:
            port-name1:
              enabled: true
              primary: true
              port: 12345
      workload:
        some-pod-name:
          enabled: true
          primary: true
          type: Deployment
          podSpec: {}
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <ipFamilyPolicy> to be one of [SingleStack, PreferDualStack, RequireDualStack], but got [not-a-policy]
  - it: should fail with ipFamilies not a list
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          type: ClusterIP
          ipFamilies: not-a-list
          ports:
            port-name1:
              enabled: true
              primary: true
              port: 12345
      workload:
        some-pod-name:
          enabled: true
          primary: true
          type: Deployment
          podSpec: {}
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <ipFamilies> to be a list, but got a [string]
  - it: should fail with invalid ipFamilies
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          type: ClusterIP
          ipFamilies:
            - not-a-family
          ports:
            port-name1:
              enabled: true
              primary: true
              port: 12345
      workload:
        some-pod-name:
          enabled: true
          primary: true
          type: Deployment
          podSpec: {}
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <ipFamilies> to be one of [IPv4, IPv6], but got [not-a-family]
  - it: should fail with invalid sessionAffinity
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          type: ClusterIP
          sessionAffinity: not-an-affinity
          ports:
            port-name1:
              enabled: true
              primary: true
              port: 12345
      workload:
        some-pod-name:
          enabled: true
          primary: true
          type: Deployment
          podSpec: {}
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <sessionAffinity> to be one of [ClientIP, None], but got [not-an-affinity]
  - it: should fail with invalid timeoutSeconds in sessionAffinityConfig
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          type: ClusterIP
          sessionAffinity: ClientIP
          sessionAffinityConfig:
            clientIP:
              timeoutSeconds: -1
          ports:
            port-name1:
              enabled: true
              primary: true
              port: 12345
      workload:
        some-pod-name:
          enabled: true
          primary: true
          type: Deployment
          podSpec: {}
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <sessionAffinityConfig.clientIP.timeoutSeconds> to be between [0 - 86400], but got [-1]
  - it: should fail without nodePort number on NodePort
    set:
      service:
        service-name1:
          enabled: true
          primary: true
          type: NodePort
          ports:
            port-name1:
              enabled: true
              primary: true
              port: 80
              nodePort:
      workload:
        some-pod-name:
          enabled: true
          primary: true
          type: Deployment
          podSpec: {}
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected non-empty <nodePort> on NodePort service type
  - it: should fail with nodePort lower than the minimum on NodePort
    set:
      global:
        minNodePort: 10000
      service:
        service-name1:
          enabled: true
          primary: true
          type: NodePort
          ports:
            port-name1:
              enabled: true
              primary: true
              port: 80
              nodePort: 9999
      workload:
        some-pod-name:
          enabled: true
          primary: true
          type: Deployment
          podSpec: {}
    asserts:
      - failedTemplate:
          errorMessage: Service - Expected <nodePort> to be higher than [10000], but got [9999]
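For contrast with the failure cases above, here is a sketch of a minimal values fragment that satisfies every rule this suite exercises: exactly one primary service, exactly one primary enabled port per service, a non-empty port number, a valid protocol, and an enabled workload for the selector to target. The service, port, and workload names are illustrative, not taken from the library's defaults.

```yaml
service:
  my-service:
    enabled: true
    primary: true        # exactly one enabled service must be primary
    type: ClusterIP      # one of [ClusterIP, NodePort]
    ports:
      my-port:
        enabled: true
        primary: true    # exactly one enabled port per service must be primary
        port: 8080       # <port.port> must be non-empty
        protocol: tcp    # one of [tcp, udp, http, https]
workload:
  main:                  # default target when no targetSelector is set (assumption)
    enabled: true
    primary: true
    type: Deployment
    podSpec: {}
```

A NodePort service would additionally need a `nodePort` at or above `global.minNodePort`, per the last two tests.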