Hello, for evaluation purposes I would like to set up a test instance with K3s as described in the blog article "Vorteile von Open-Source-Softwarelösungen mit Self-Hosting". Unfortunately, the apply fails reproducibly at the nubus service.
Can anyone give me a hint as to what is going wrong?
FAILED RELEASES:
NAME NAMESPACE CHART VERSION DURATION
ums opendesk nubus/nubus 8s
in ./helmfile.yaml.gotmpl: in .helmfiles[0]: in ./helmfile_generic.yaml.gotmpl: in .helmfiles[3]: in helmfile/apps/nubus/helmfile-child.yaml.gotmpl: failed processing release ums: command "/usr/local/bin/helm" exited with non-zero status:
PATH:
/usr/local/bin/helm
ARGS:
0: helm (4 bytes)
1: upgrade (7 bytes)
2: --install (9 bytes)
3: ums (3 bytes)
4: /root/.cache/helmfile/nubus/nubus/1.18.1/nubus (46 bytes)
5: --version (9 bytes)
6: 1.18.1 (6 bytes)
7: --timeout (9 bytes)
8: 900s (4 bytes)
9: --create-namespace (18 bytes)
10: --namespace (11 bytes)
11: opendesk (8 bytes)
12: --values (8 bytes)
13: /tmp/helmfile146685300/opendesk-ums-values-56946644b (52 bytes)
14: --reset-values (14 bytes)
15: --history-max (13 bytes)
16: 10 (2 bytes)
ERROR:
exit status 1
EXIT STATUS
1
STDERR:
level=INFO msg="warning: cannot overwrite table with non table for keycloak.postgresql.auth.existingSecret (map[keyMapping:map[password:<nil>] name:<nil>])"
level=INFO msg="warning: cannot overwrite table with non table for nubusKeycloakExtensions.postgresql.auth.existingSecret (map[keyMapping:map[password:<nil>] name:])"
I0323 16:29:41.262576 265602 warnings.go:107] "Warning: path /$ cannot be used with pathType Exact"
Error: failed to create typed patch object (opendesk/ums-portal-consumer; apps/v1, Kind=StatefulSet): errors:
.spec.template.spec.initContainers[name="wait-for-ldap"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-udm"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-object-storage"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-provisioning-api"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="univention-compatibility"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "portal.desk.simpego.ch" and path "/favicon.ico" is already defined in ingress opendesk/opendesk-static-files-portal
COMBINED OUTPUT:
Release "ums" does not exist. Installing it now.
level=INFO msg="warning: cannot overwrite table with non table for keycloak.postgresql.auth.existingSecret (map[keyMapping:map[password:<nil>] name:<nil>])"
level=INFO msg="warning: cannot overwrite table with non table for nubusKeycloakExtensions.postgresql.auth.existingSecret (map[keyMapping:map[password:<nil>] name:])"
I0323 16:29:41.262576 265602 warnings.go:107] "Warning: path /$ cannot be used with pathType Exact"
Error: failed to create typed patch object (opendesk/ums-portal-consumer; apps/v1, Kind=StatefulSet): errors:
.spec.template.spec.initContainers[name="wait-for-ldap"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-udm"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-object-storage"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-provisioning-api"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="univention-compatibility"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
The two existingSecret warnings are probably not the actual blocker. Three other points are relevant here:
1. duplicate entries for key [name="AWS_DEFAULT_REGION"]
This is a real error. In the generated manifests, AWS_DEFAULT_REGION is set twice for several initContainers. This typically happens when the region is set both via the openDesk values and again via your own overrides/extraEnv. Please render once before the apply and check:
helmfile template --environment dev --namespace opendesk > /tmp/opendesk-rendered.yaml
rg -n "AWS_DEFAULT_REGION|ums-portal-consumer|wait-for-object-storage|wait-for-ldap|wait-for-udm" /tmp/opendesk-rendered.yaml
If AWS_DEFAULT_REGION shows up twice in the same initContainers there, one of the two sources has to go. If you are using the bundled MinIO/S3 setup, I would start by removing your own additional AWS_* overrides.
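As a self-contained illustration of what such a duplicate looks like and how to find it mechanically (sample data only; point the grep at the real /tmp/opendesk-rendered.yaml for your cluster):

```shell
# Sample mimicking one initContainer's env list from the rendered StatefulSet
cat > /tmp/sample-env.yaml <<'EOF'
      env:
        - name: AWS_DEFAULT_REGION
          value: "local"
        - name: AWS_DEFAULT_REGION
          value: "us-east-1"
        - name: LDAP_HOST
          value: "ldap"
EOF
# List env names occurring more than once; the API server rejects such
# duplicates when building the typed patch, which is the error above.
grep -oP '(?<=- name: )\S+' /tmp/sample-env.yaml | sort | uniq -d
# prints: AWS_DEFAULT_REGION
```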
2. host "portal.desk.simpego.ch" and path "/favicon.ico" is already defined
This looks like a previous installation that was not cleanly removed, or like colliding Ingress resources. For an evaluation I would not install on top of it, but start over cleanly:
kubectl get ingress -n opendesk
kubectl delete namespace opendesk
Then wait until the namespace is really gone, and only then deploy again. Alternatively, test directly on a fresh K3s cluster.
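To make sure the deletion has actually finished before the next apply (sketch; namespace deletion only completes once all finalizers are done):

```shell
kubectl delete namespace opendesk --wait=false
# Block until the namespace object is really gone
kubectl wait --for=delete namespace/opendesk --timeout=300s
```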
3. Warning: path /$ cannot be used with pathType Exact
This suggests that your ingress-nginx is probably still running with the new strict defaults. The official K3s blog article on openDesk explicitly states that ingress-nginx needs the following settings:
- controller.config.annotations-risk-level=Critical
- controller.config.strict-validate-path-type=false
In addition, Traefik should be disabled in K3s and a suitable ingress-nginx version should be used.
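Applied to an already installed ingress-nginx release, that could look roughly like this (sketch; release name and namespace assumed):

```shell
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set controller.config.annotations-risk-level=Critical \
  --set controller.config.strict-validate-path-type=false
```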
So my suggestion, in this order:
- clean up K3s/the namespace
- check whether Traefik is really off
- configure ingress-nginx as described in the blog
- run helmfile template and search specifically for the duplicated AWS_DEFAULT_REGION
- only then run helmfile apply again
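The first two checks can be done quickly on the command line (sketch; K3s default namespaces assumed):

```shell
# Should return nothing when K3s was started with --disable=traefik
kubectl get pods -n kube-system -l app.kubernetes.io/name=traefik
# Expect only the nginx IngressClass here
kubectl get ingressclass
```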
If it still fails after that, the next sensible step would be to review your own helmfile/environments/dev/values.yaml.gotmpl and any custom overrides that set object storage/S3 settings.
Thanks for your quick and detailed answer, much appreciated.
I can actually rule out 1-3, since I follow the guide to the letter, and the nginx proxy is not in the opendesk namespace anyway, so it survives the cleanup. To avoid these problems as far as possible, though, I am installing on a fresh Debian 13 VM.
FAILED RELEASES:
NAME NAMESPACE CHART VERSION DURATION
ums opendesk nubus/nubus 5s
in ./helmfile.yaml.gotmpl: in .helmfiles[0]: in ./helmfile_generic.yaml.gotmpl: in .helmfiles[3]: in helmfile/apps/nubus/helmfile-child.yaml.gotmpl: failed processing release ums: command "/usr/local/bin/helm" exited with non-zero status:
PATH:
/usr/local/bin/helm
ARGS:
0: helm (4 bytes)
1: upgrade (7 bytes)
2: --install (9 bytes)
3: ums (3 bytes)
4: /root/.cache/helmfile/nubus/nubus/1.18.1/nubus (46 bytes)
5: --version (9 bytes)
6: 1.18.1 (6 bytes)
7: --timeout (9 bytes)
8: 900s (4 bytes)
9: --create-namespace (18 bytes)
10: --namespace (11 bytes)
11: opendesk (8 bytes)
12: --values (8 bytes)
13: /tmp/helmfile2949786645/opendesk-ums-values-85ff6cd859 (54 bytes)
14: --reset-values (14 bytes)
15: --history-max (13 bytes)
16: 10 (2 bytes)
ERROR:
exit status 1
EXIT STATUS
1
STDERR:
level=INFO msg="warning: cannot overwrite table with non table for keycloak.postgresql.auth.existingSecret (map[keyMapping:map[password:<nil>] name:<nil>])"
level=INFO msg="warning: cannot overwrite table with non table for nubusKeycloakExtensions.postgresql.auth.existingSecret (map[keyMapping:map[password:<nil>] name:])"
I0324 12:33:09.228059 27717 warnings.go:107] "Warning: path /$ cannot be used with pathType Exact"
Error: failed to create typed patch object (opendesk/ums-portal-consumer; apps/v1, Kind=StatefulSet): errors:
.spec.template.spec.initContainers[name="wait-for-ldap"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-udm"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-object-storage"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-provisioning-api"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="univention-compatibility"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
COMBINED OUTPUT:
Release "ums" does not exist. Installing it now.
level=INFO msg="warning: cannot overwrite table with non table for keycloak.postgresql.auth.existingSecret (map[keyMapping:map[password:<nil>] name:<nil>])"
level=INFO msg="warning: cannot overwrite table with non table for nubusKeycloakExtensions.postgresql.auth.existingSecret (map[keyMapping:map[password:<nil>] name:])"
I0324 12:33:09.228059 27717 warnings.go:107] "Warning: path /$ cannot be used with pathType Exact"
Error: failed to create typed patch object (opendesk/ums-portal-consumer; apps/v1, Kind=StatefulSet): errors:
.spec.template.spec.initContainers[name="wait-for-ldap"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-udm"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-object-storage"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="wait-for-provisioning-api"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
.spec.template.spec.initContainers[name="univention-compatibility"].env: duplicate entries for key [name="AWS_DEFAULT_REGION"]
Your helmfile template command also throws an error:
Templating release=open-xchange, chart=/root/.cache/helmfile/open-xchange-repo/appsuite-public-sector/2.28.18/appsuite-public-sector
in ./helmfile.yaml.gotmpl: in .helmfiles[0]: in ./helmfile_generic.yaml.gotmpl: in .helmfiles[4]: in helmfile/apps/open-xchange/helmfile-child.yaml.gotmpl: command "/usr/local/bin/helm" exited with non-zero status:
PATH:
/usr/local/bin/helm
ARGS:
0: helm (4 bytes)
1: template (8 bytes)
2: open-xchange (12 bytes)
3: /root/.cache/helmfile/open-xchange-repo/appsuite-public-sector/2.28.18/appsuite-public-sector (93 bytes)
4: --version (9 bytes)
5: 2.28.18 (7 bytes)
6: --post-renderer (15 bytes)
7: post-renderer-openxchange-HAPROXY (33 bytes)
8: --namespace (11 bytes)
9: AWS_DEFAULT_REGION|ums-portal-consumer|wait-for-object-storage|wait-for-ldap|wait-for-udm (89 bytes)
10: --values (8 bytes)
11: /tmp/helmfile3001126411/AWS_DEFAULT_REGION|ums-portal-consumer|wait-for-object-storage|wait-for-ldap|wait-for-udm-open-xchange-values-588c78f65f (144 bytes)
12: --values (8 bytes)
13: /tmp/helmfile3972646574/AWS_DEFAULT_REGION|ums-portal-consumer|wait-for-object-storage|wait-for-ldap|wait-for-udm-open-xchange-values-6f97749f8c (144 bytes)
ERROR:
exit status 1
EXIT STATUS
1
STDERR:
Error: invalid argument "post-renderer-openxchange-HAPROXY" for "--post-renderer" flag: plugin: {Name:post-renderer-openxchange-HAPROXY Type:postrenderer/v1} not found
COMBINED OUTPUT:
Error: invalid argument "post-renderer-openxchange-HAPROXY" for "--post-renderer" flag: plugin: {Name:post-renderer-openxchange-HAPROXY Type:postrenderer/v1} not found
The output of rg on the content that was written before the error:
7593: name: ums-portal-consumer
7839: name: "ums-portal-consumer-object-storage"
7856: name: "ums-portal-consumer-provisioning-api"
7904: name: ums-portal-consumer-common
9544: name: ums-portal-consumer-scripts
9581: wait-for-ldap.sh: |
9591: wait-for-udm.sh: |
9628: name: ums-portal-consumer-ucr
9649: name: ums-portal-consumer
13326: - name: AWS_DEFAULT_REGION
13368: - name: AWS_DEFAULT_REGION
13790: - name: wait-for-udm
13804: command: ["wait-for-udm.sh"]
14016: - name: wait-for-udm
14030: command: ["wait-for-udm.sh"]
16339: name: ums-portal-consumer
16348: serviceName: ums-portal-consumer
16360: intents.otterize.com/service-name: ums-portal-consumer
16375: serviceAccountName: ums-portal-consumer
16377: - name: wait-for-ldap
16391: command: ["/bin/bash", "/scripts/wait-for-ldap.sh"]
16395: - name: AWS_DEFAULT_REGION
16397: - name: AWS_DEFAULT_REGION
16401: name: ums-portal-consumer
16423: - name: wait-for-udm
16437: command: ["/bin/bash", "/scripts/wait-for-udm.sh"]
16441: - name: AWS_DEFAULT_REGION
16443: - name: AWS_DEFAULT_REGION
16447: name: ums-portal-consumer
16469: - name: wait-for-object-storage
16488: name: ums-portal-consumer
16493: name: ums-portal-consumer-object-storage
16498: name: ums-portal-consumer-object-storage
16500: - name: AWS_DEFAULT_REGION
16502: - name: AWS_DEFAULT_REGION
16538: name: ums-portal-consumer-provisioning-api
16540: - name: AWS_DEFAULT_REGION
16542: - name: AWS_DEFAULT_REGION
16546: name: ums-portal-consumer
16572: name: ums-portal-consumer
16579: name: ums-portal-consumer-object-storage
16584: name: ums-portal-consumer-object-storage
16589: name: ums-portal-consumer-provisioning-api
16591: - name: AWS_DEFAULT_REGION
16593: - name: AWS_DEFAULT_REGION
16663: name: ums-portal-consumer
16670: name: ums-portal-consumer-object-storage
16675: name: ums-portal-consumer-object-storage
16680: name: ums-portal-consumer-provisioning-api
16682: - name: AWS_DEFAULT_REGION
16721: name: ums-portal-consumer-ucr
16725: name: ums-portal-consumer-scripts
17975: name: ums-portal-consumer-provisioning-api
18033: command: ["wait-for-udm.sh"]
18291: - name: wait-for-udm-rest-api
18292: command: ["/bin/sh", "-c", "/usr/local/bin/wait-for-udm-rest-api.py"]
18416: - name: "wait-for-ldap"
18430: command: ["wait-for-ldap.sh"]
Here is my setup script as well:
#!/bin/bash
export ENV_DOMAIN=""
export ENV_EMAIL=""
export MASTER_PASSWORD=""
export OPENDESK_RELEASE="v1.13.1"
apt -y update
apt -y dist-upgrade
apt -y autoremove
apt -y autoclean
apt -y clean
apt -y install git
wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/local/bin/yq
chmod +x /usr/local/bin/yq
sysctl -w fs.inotify.max_user_instances=10000000
curl -sfL https://get.k3s.io | sh -s -- --disable=traefik
export OPENDESK_NAMESPACE="opendesk"
#curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
wget https://get.helm.sh/helm-v3.20.1-linux-amd64.tar.gz -O helm.tar.gz
tar -xf helm.tar.gz
cp linux-amd64/helm /usr/local/bin/
chmod +x /usr/local/bin/helm
wget https://github.com/helmfile/helmfile/releases/download/v1.4.3/helmfile_1.4.3_linux_amd64.tar.gz -O helmfile.tar.gz
tar -xf helmfile.tar.gz
cp helmfile /usr/local/bin/
chmod +x /usr/local/bin/helmfile
helmfile init
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace \
--namespace ingress-nginx \
--set controller.allowSnippetAnnotations=true \
--set controller.admissionWebhooks.allowSnippetAnnotations=true \
--set controller.config.annotations-risk-level=Critical \
--set controller.config.strict-validate-path-type=false
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.18.2/cert-manager.yaml
cat << 'EOF' > clusterissuer-tmp.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer  # I'm using ClusterIssuer here
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: $ENV_EMAIL
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
EOF
envsubst < clusterissuer-tmp.yaml > clusterissuer.yaml
rm clusterissuer-tmp.yaml
kubectl apply -f clusterissuer.yaml
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.32/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
git clone https://gitlab.opencode.de/bmi/opendesk/deployment/opendesk.git
cd opendesk
git checkout $OPENDESK_RELEASE  # check out the release tag, not a new branch at HEAD
cat << 'EOF' > helmfile/environments/dev/values-tmp.yaml.gotmpl
global:
  domain: "$ENV_DOMAIN"
  cluster:
    service:
      type: "NodePort"
  ingress:
    ingressClassName: "nginx"
  certificate:
    issuerRef:
      name: "letsencrypt-prod"
EOF
envsubst < helmfile/environments/dev/values-tmp.yaml.gotmpl > helmfile/environments/dev/values.yaml.gotmpl
rm helmfile/environments/dev/values-tmp.yaml.gotmpl
export OPENDESK_NAMESPACE="opendesk"
kubectl create namespace ${OPENDESK_NAMESPACE}
kubectl config set-context --current --namespace=${OPENDESK_NAMESPACE}
helmfile apply --environment dev --namespace ${OPENDESK_NAMESPACE} --timeout 1200
The duplicate entries for key [name="AWS_DEFAULT_REGION"] problem comes down to the Helm version, see https://github.com/helm/helm/issues/31529. The manual says helm >= 3.9.0 is fine (except v3.18); apparently that does not hold for Helm 4, so I switched to the tested v3.17.3 together with helmfile v1.0.0 and helm-diff v3.11.0.
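For reference, pinning those versions could look roughly like this (sketch; download URLs assumed from the projects' release pages):

```shell
# Helm v3.17.3 instead of Helm 4
wget https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz -O helm.tar.gz
tar -xf helm.tar.gz && install -m 0755 linux-amd64/helm /usr/local/bin/helm
# helmfile v1.0.0
wget https://github.com/helmfile/helmfile/releases/download/v1.0.0/helmfile_1.0.0_linux_amd64.tar.gz -O helmfile.tar.gz
tar -xf helmfile.tar.gz && install -m 0755 helmfile /usr/local/bin/helmfile
# helm-diff plugin pinned to v3.11.0
helm plugin install https://github.com/databus23/helm-diff --version v3.11.0
```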
Besides git, yq also has to be available on the server.
This, however, leads to an error in the open-xchange module:
in ./helmfile.yaml.gotmpl: in .helmfiles[0]: in ./helmfile_generic.yaml.gotmpl: in .helmfiles[4]: in helmfile/apps/open-xchange/helmfile-child.yaml.gotmpl: command "/usr/local/bin/helm" exited with non-zero status:
PATH:
/usr/local/bin/helm
ARGS:
0: helm (4 bytes)
1: diff (4 bytes)
2: upgrade (7 bytes)
3: --allow-unreleased (18 bytes)
4: open-xchange (12 bytes)
5: /tmp/helmfile2501130435/opendesk/open-xchange/appsuite-public-sector/2.28.18/appsuite-public-sector (99 bytes)
6: --version (9 bytes)
7: 2.28.18 (7 bytes)
8: --post-renderer (15 bytes)
9: ./post-renderer-openxchange-HAPROXY.sh (38 bytes)
10: --namespace (11 bytes)
11: opendesk (8 bytes)
12: --values (8 bytes)
13: /tmp/helmfile3867778798/opendesk-open-xchange-values-7fd9d679ff (63 bytes)
14: --values (8 bytes)
15: /tmp/helmfile3227894008/opendesk-open-xchange-values-865bc4b867 (63 bytes)
16: --reset-values (14 bytes)
17: --detailed-exitcode (19 bytes)
18: --color (7 bytes)
ERROR:
exit status 1
EXIT STATUS
1
STDERR:
Error: Failed to render chart: exit status 1: coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.update.podAnnotations (map[commit.open-xchange.com/core:a1e874a6e51bdedab4e2c768fd037159c943aeb6 commit.open-xchange.com/exchange-interop:e43f77b956c8fa0bb49f8ed4d075e3c3de8f1351 commit.open-xchange.com/guard:8a07217bd94a34442d2e0b15d98cd70f7e8b9a85 commit.open-xchange.com/nextcloud-integration:8cf1b742aff361973f638204afcb05eb59e53dcc commit.open-xchange.com/office:4b1a916d403e8cd71318c2d0e528ac0d20892624 commit.open-xchange.com/public-sector:16d74ae6675d97f3077dbb266e98dc3da8b0f5c6 commit.open-xchange.com/weakforced:7a3d63dfece8ba3fbfec8b9f2e0e60e47567d4bf])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.update.podAnnotations (map[commit.open-xchange.com/core:a1e874a6e51bdedab4e2c768fd037159c943aeb6 commit.open-xchange.com/guard:8a07217bd94a34442d2e0b15d98cd70f7e8b9a85 commit.open-xchange.com/office:4b1a916d403e8cd71318c2d0e528ac0d20892624 commit.open-xchange.com/weakforced:7a3d63dfece8ba3fbfec8b9f2e0e60e47567d4bf])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.ingress.annotations (map[])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.serviceAccount.annotations (map[])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.update.podAnnotations (map[commit.open-xchange.com/core:a1e874a6e51bdedab4e2c768fd037159c943aeb6 commit.open-xchange.com/exchange-interop:e43f77b956c8fa0bb49f8ed4d075e3c3de8f1351 commit.open-xchange.com/guard:8a07217bd94a34442d2e0b15d98cd70f7e8b9a85 commit.open-xchange.com/nextcloud-integration:8cf1b742aff361973f638204afcb05eb59e53dcc commit.open-xchange.com/office:4b1a916d403e8cd71318c2d0e528ac0d20892624 commit.open-xchange.com/public-sector:16d74ae6675d97f3077dbb266e98dc3da8b0f5c6 commit.open-xchange.com/weakforced:7a3d63dfece8ba3fbfec8b9f2e0e60e47567d4bf])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.serviceAccount.annotations (map[])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.update.podAnnotations (map[commit.open-xchange.com/core:a1e874a6e51bdedab4e2c768fd037159c943aeb6 commit.open-xchange.com/guard:8a07217bd94a34442d2e0b15d98cd70f7e8b9a85 commit.open-xchange.com/office:4b1a916d403e8cd71318c2d0e528ac0d20892624 commit.open-xchange.com/weakforced:7a3d63dfece8ba3fbfec8b9f2e0e60e47567d4bf])
Error: error while running post render on files: error while running command /root/opendesk/helmfile/apps/open-xchange/post-renderer-openxchange-HAPROXY.sh. error output:
usage: yq [-h] [--yaml-output] [--yaml-roundtrip]
[--yaml-output-grammar-version {1.1,1.2}] [--width WIDTH]
[--indentless-lists] [--explicit-start] [--explicit-end]
[--in-place] [--version]
[jq_filter] [files ...]
yq: error: argument files: can't open '
(. | select(.kind == "Ingress") | select(.metadata.name == "open-xchange-appsuite-http-api-routes-appsuite-api") | .spec.rules[].http.paths[].path) |=
(select(. == "/appsuite/api(.*)") | "/appsuite/api") // .
': [Errno 2] No such file or directory: '\n (. | select(.kind == "Ingress") | select(.metadata.name == "open-xchange-appsuite-http-api-routes-appsuite-api") | .spec.rules[].http.paths[].path) |=\n (select(. == "/appsuite/api(.*)") | "/appsuite/api") // .\n'
: exit status 2
Use --debug flag to render out invalid YAML
Error: plugin "diff" exited with error
COMBINED OUTPUT:
********************
Release was not present in Helm. Diff will show entire contents as new.
********************
Error: Failed to render chart: exit status 1: coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.update.podAnnotations (map[commit.open-xchange.com/core:a1e874a6e51bdedab4e2c768fd037159c943aeb6 commit.open-xchange.com/exchange-interop:e43f77b956c8fa0bb49f8ed4d075e3c3de8f1351 commit.open-xchange.com/guard:8a07217bd94a34442d2e0b15d98cd70f7e8b9a85 commit.open-xchange.com/nextcloud-integration:8cf1b742aff361973f638204afcb05eb59e53dcc commit.open-xchange.com/office:4b1a916d403e8cd71318c2d0e528ac0d20892624 commit.open-xchange.com/public-sector:16d74ae6675d97f3077dbb266e98dc3da8b0f5c6 commit.open-xchange.com/weakforced:7a3d63dfece8ba3fbfec8b9f2e0e60e47567d4bf])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.update.podAnnotations (map[commit.open-xchange.com/core:a1e874a6e51bdedab4e2c768fd037159c943aeb6 commit.open-xchange.com/guard:8a07217bd94a34442d2e0b15d98cd70f7e8b9a85 commit.open-xchange.com/office:4b1a916d403e8cd71318c2d0e528ac0d20892624 commit.open-xchange.com/weakforced:7a3d63dfece8ba3fbfec8b9f2e0e60e47567d4bf])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.ingress.annotations (map[])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.serviceAccount.annotations (map[])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.update.podAnnotations (map[commit.open-xchange.com/core:a1e874a6e51bdedab4e2c768fd037159c943aeb6 commit.open-xchange.com/exchange-interop:e43f77b956c8fa0bb49f8ed4d075e3c3de8f1351 commit.open-xchange.com/guard:8a07217bd94a34442d2e0b15d98cd70f7e8b9a85 commit.open-xchange.com/nextcloud-integration:8cf1b742aff361973f638204afcb05eb59e53dcc commit.open-xchange.com/office:4b1a916d403e8cd71318c2d0e528ac0d20892624 commit.open-xchange.com/public-sector:16d74ae6675d97f3077dbb266e98dc3da8b0f5c6 commit.open-xchange.com/weakforced:7a3d63dfece8ba3fbfec8b9f2e0e60e47567d4bf])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.serviceAccount.annotations (map[])
coalesce.go:298: warning: cannot overwrite table with non table for appsuite-public-sector.appsuite.core-mw.update.podAnnotations (map[commit.open-xchange.com/core:a1e874a6e51bdedab4e2c768fd037159c943aeb6 commit.open-xchange.com/guard:8a07217bd94a34442d2e0b15d98cd70f7e8b9a85 commit.open-xchange.com/office:4b1a916d403e8cd71318c2d0e528ac0d20892624 commit.open-xchange.com/weakforced:7a3d63dfece8ba3fbfec8b9f2e0e60e47567d4bf])
Error: error while running post render on files: error while running command /root/opendesk/helmfile/apps/open-xchange/post-renderer-openxchange-HAPROXY.sh. error output:
usage: yq [-h] [--yaml-output] [--yaml-roundtrip]
[--yaml-output-grammar-version {1.1,1.2}] [--width WIDTH]
[--indentless-lists] [--explicit-start] [--explicit-end]
[--in-place] [--version]
[jq_filter] [files ...]
yq: error: argument files: can't open '
(. | select(.kind == "Ingress") | select(.metadata.name == "open-xchange-appsuite-http-api-routes-appsuite-api") | .spec.rules[].http.paths[].path) |=
(select(. == "/appsuite/api(.*)") | "/appsuite/api") // .
': [Errno 2] No such file or directory: '\n (. | select(.kind == "Ingress") | select(.metadata.name == "open-xchange-appsuite-http-api-routes-appsuite-api") | .spec.rules[].http.paths[].path) |=\n (select(. == "/appsuite/api(.*)") | "/appsuite/api") // .\n'
: exit status 2
Use --debug flag to render out invalid YAML
Error: plugin "diff" exited with error
One step further: there are apparently two yq tools that differ slightly. The Go version is required, not the Python one, otherwise the error above occurs. Unfortunately, Debian 13's package manager installs the Python version by default.
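A quick way to tell the two apart and switch to the Go binary (sketch; the Debian package name and the release URL are assumptions):

```shell
# mikefarah's Go yq identifies itself in its --version output;
# the Debian "yq" package is the Python jq wrapper.
yq --version
# Remove the Python wrapper and install the Go binary instead:
apt -y remove yq
wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/local/bin/yq
chmod +x /usr/local/bin/yq
hash -r  # drop bash's cached lookup for yq
```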
Unfortunately, this leads to the next error:
Upgrading release=opendesk-nextcloud-management, chart=/tmp/helmfile3322734551/opendesk/opendesk-nextcloud-management/opendesk-nextcloud-management/4.9.1/opendesk-nextcloud-management, namespace=opendesk
Release "opendesk-nextcloud-management" does not exist. Installing it now.
FAILED RELEASES:
NAME NAMESPACE CHART VERSION DURATION
opendesk-nextcloud-management opendesk nextcloud-repo/opendesk-nextcloud-management 7m55s
in ./helmfile.yaml.gotmpl: in .helmfiles[0]: in ./helmfile_generic.yaml.gotmpl: in .helmfiles[5]: in helmfile/apps/nextcloud/helmfile-child.yaml.gotmpl: failed processing release opendesk-nextcloud-management: command "/usr/local/bin/helm" exited with non-zero status:
PATH:
/usr/local/bin/helm
ARGS:
0: helm (4 bytes)
1: upgrade (7 bytes)
2: --install (9 bytes)
3: opendesk-nextcloud-management (29 bytes)
4: /tmp/helmfile3322734551/opendesk/opendesk-nextcloud-management/opendesk-nextcloud-management/4.9.1/opendesk-nextcloud-management (128 bytes)
5: --version (9 bytes)
6: 4.9.1 (5 bytes)
7: --wait (6 bytes)
8: --wait-for-jobs (15 bytes)
9: --timeout (9 bytes)
10: 1800s (5 bytes)
11: --create-namespace (18 bytes)
12: --namespace (11 bytes)
13: opendesk (8 bytes)
14: --values (8 bytes)
15: /tmp/helmfile1070563665/opendesk-opendesk-nextcloud-management-values-c5d8b5699 (79 bytes)
16: --reset-values (14 bytes)
17: --history-max (13 bytes)
18: 10 (2 bytes)
ERROR:
exit status 1
EXIT STATUS
1
STDERR:
Error: jobs.batch "opendesk-nextcloud-management-1" not found
COMBINED OUTPUT:
Release "opendesk-nextcloud-management" does not exist. Installing it now.
Error: jobs.batch "opendesk-nextcloud-management-1" not found
Updating to helm 3.20.1, helmfile 1.4.3 and openDesk 1.13.1 did not help; same error.
This seems to be a general problem: Deployment fails at opendesk-nextcloud-management (#311) · Issues · BMI / openDesk / Deployment / openDesk · GitLab