This is a follow up of the migration to Deployment. Based on learnings
from a successful coreos-ostree-pruner migration, this fixes the
`no image` and `unknown field "spec.strategy.resources"` errors
presented on batcave.
This is a follow up of the migration to Deployment. Based on learnings
from a successful coreos-cincinnati modification, this fixes the
`no image` error displayed currently on batcave.
As the previous change had no effect and
`sudo rbac-playbook -l os_control_stg openshift-apps/coreos-cincinnati.yml`
still fails with the error below, let's try another approach.
```
"stderr": "The Deployment \"coreos-cincinnati\" is invalid: \n*
spec.template.spec.containers[0].image: Required value
```
This change adds a specific F42 image.
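A minimal sketch of what pinning the image could look like, assuming the container lives in an apps/v1 Deployment template (the registry path and F42 tag below are illustrative placeholders, not the exact values used):

```yaml
# Hypothetical excerpt from the coreos-cincinnati Deployment template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coreos-cincinnati
spec:
  template:
    spec:
      containers:
      - name: coreos-cincinnati
        # Setting an explicit image satisfies the
        # "spec.template.spec.containers[0].image: Required value" check.
        image: quay.io/fedora/fedora:42
```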
During the recent migration to Deployment, the command:
`sudo rbac-playbook -l os_control_stg openshift-apps/coreos-cincinnati.yml`
returns an error:
```
"stderr": "The Deployment \"coreos-cincinnati\" is invalid: \n*
spec.template.spec.containers[0].image: Required value
```
This PR aims to fix the above.
When running
`sudo rbac-playbook -l os_control_stg openshift-apps/coreos-koji-tagger.yml`
an error occurs because the migration to Deployment is incomplete:
```
strict decoding error: unknown field "spec.strategy.resources"
```
This aims to fix the above.
When running
`sudo rbac-playbook -l os_control_stg openshift-apps/coreos-cincinnati.yml`
I came across the errors below, due to the recent migration to Deployment and j2:
```
unknown field "spec.strategy.activeDeadlineSeconds"
unknown field "spec.strategy.recreateParams"
unknown field "spec.strategy.resources"
unknown field "spec.strategy.rollingParams"
unknown field "spec.strategy.name"
```
This should fix all of the above.
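For context, these DeploymentConfig-era fields have no equivalent under an apps/v1 Deployment, whose `spec.strategy` only accepts `type` and `rollingUpdate`. A before/after sketch (the parameter values are illustrative):

```yaml
# DeploymentConfig (old) -- these fields are rejected by apps/v1 Deployment
# under strict decoding:
# spec:
#   strategy:
#     activeDeadlineSeconds: 21600
#     recreateParams: {}
#     resources: {}
#     rollingParams: {}

# apps/v1 Deployment (new) -- only type/rollingUpdate are valid here:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
```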
It turns out the ostree volumes are special in that they are NFS shares
mapped to the NetApp Volumes for OSTree repo storage. They have
specific names and require no storage class to be set. This reverts the
changes made yesterday.
Since there is no PV `fedora-ostree-content-volume-2` on stg,
`volumeName: "fedora-ostree-content-volume-1"` was added.
We use ocs-storagecluster-cephfs to keep the ReadWriteMany
functionality. If an RBD volume is required,
use storageClassName: ocs-storagecluster-ceph-rbd instead;
ODF will provision a new volume automatically.
ReadWriteMany cannot be used in Filesystem mode with RBD.
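A sketch of a PVC using the CephFS storage class described above; the claim name and size are hypothetical, not taken from this change:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: coreos-data   # illustrative name
spec:
  # CephFS supports ReadWriteMany; RBD in Filesystem mode does not,
  # so switch to ocs-storagecluster-ceph-rbd only for RWO workloads.
  storageClassName: ocs-storagecluster-cephfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi   # illustrative size
```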
I broke the trigger when I switched from one container to multiple
containers in the pod. Syntax-wise, I found this multi-line variant in
the CoreOS Cincinnati deployment config, and it seems like there's not a
way to say "all container images in the spec". Or there might be, but I
couldn't find an example or documentation.
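For reference, OpenShift's image-trigger annotation on a Deployment takes a JSON list with one entry per container, each naming its target via an explicit `fieldPath`; I found no wildcard form covering every container in the spec. A sketch with hypothetical container and ImageStreamTag names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coreos-koji-tagger
  annotations:
    # One trigger entry per container; the names below are illustrative.
    image.openshift.io/triggers: |-
      [
        {"from": {"kind": "ImageStreamTag", "name": "tagger:latest"},
         "fieldPath": "spec.template.spec.containers[?(@.name==\"tagger\")].image"},
        {"from": {"kind": "ImageStreamTag", "name": "sidecar:latest"},
         "fieldPath": "spec.template.spec.containers[?(@.name==\"sidecar\")].image"}
      ]
```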
Signed-off-by: Jeremy Cline <jeremycline@linux.microsoft.com>