Describe the bug
I'm trying to migrate from 0.7.8 to 1.2.0 and I'm getting a TLS toolkit error.

Kubernetes: 1.24 - AWS EKS
Nifi chart: 1.2.0 with Nifi 1.23.2

What happened:
[main] INFO org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone - Successfully generated TLS configuration for app-nifi-nifi-0.app-nifi-nifi-headless.nifi.svc.cluster.local 1 in /opt/nifi/nifi-current/conf/app-nifi-nifi-0.app-nifi-nifi-headless.nifi.svc.cluster.local
Error generating TLS configuration. (badly formatted directory string)

Version of Helm, Kubernetes and the Nifi chart:
Helm version: version.BuildInfo{Version:"v3.11.0", GitCommit:"472c5736ab01133de504a826bd9ee12cbe4e7904", GitTreeState:"clean", GoVersion:"go1.18.10"}
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
Here is some information to help with troubleshooting:
if relevant, provide your values.yaml or the changes made to the default one (after removing sensitive information)
the output of the following commands:
Check if a pod is in error:
kubectl get pod
NAME READY STATUS RESTARTS AGE
app-nifi-0 3/4 CrashLoopBackOff 15 (2m12s ago) 55m
app-nifi-1 3/4 CrashLoopBackOff 15 (103s ago) 55m
app-nifi-ca-6f565f7867-8j2dx 1/1 Running 0 55m
app-nifi-registry-0 1/1 Running 0 55m
app-nifi-zookeeper-0 1/1 Running 0 55m
app-nifi-zookeeper-1 1/1 Running 0 55m
app-nifi-zookeeper-2 1/1 Running 0 55m
Inspect the pod, check the "Events" section at the end for anything suspicious.
kubectl describe pod myrelease-nifi-0
Name: app-nifi-0
Namespace: nifi
Priority: 0
Node: ip-10-99-9-30.ec2.internal/10.99.9.30
Start Time: Wed, 24 Jan 2024 17:10:02 -0300
Labels: app=nifi
chart=nifi-1.2.1
controller-revision-hash=app-nifi-6f64799544
heritage=Helm
release=app-nifi
statefulset.kubernetes.io/pod-name=app-nifi-0
Annotations: kubernetes.io/psp: eks.privileged
security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
Status: Running
IP: 100.64.111.240
IPs:
IP: 100.64.111.240
Controlled By: StatefulSet/app-nifi
Init Containers:
zookeeper:
Container ID: containerd://3a128622e2a54b9fa1888b6e1410c32cb6ce08c0ad490d2c1b4357b4ceb4824a
Image: busybox:1.32.0
Image ID: docker.io/library/busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
Port: <none>
Host Port: <none>
Command:
sh
-c
echo trying to contact app-nifi-zookeeper 2181
until nc -vzw 1 app-nifi-zookeeper 2181; do
  echo "waiting for zookeeper..."
  sleep 2
done
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 24 Jan 2024 17:10:16 -0300
Finished: Wed, 24 Jan 2024 17:10:25 -0300
Ready: True
Restart Count: 0
Environment:
AWS_STS_REGIONAL_ENDPOINTS: regional
AWS_DEFAULT_REGION: us-east-1
AWS_REGION: us-east-1
AWS_ROLE_ARN: arn:aws:iam::111111111111:role/nifi-dev
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
/var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kptf9 (ro)
Containers:
server:
Container ID: containerd://4346715e9e8641175d7a1c4d86997166f81c800713e64684847ff145cd20fea5
Image: 111111111111.dkr.ecr.us-east-1.amazonaws.com/app-nifi:5ddbdde
Image ID: 111111111111.dkr.ecr.us-east-1.amazonaws.com/app-nifi@sha256:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
Ports: 8443/TCP, 6007/TCP
Host Ports: 0/TCP, 0/TCP
Command:
bash
-ce
prop_replace () {
  target_file=${NIFI_HOME}/conf/${3:-nifi.properties}
  echo "updating ${1} in ${target_file}"
  if egrep "^${1}=" ${target_file} &> /dev/null; then
    sed -i -e "s|^$1=.*$|$1=$2|" ${target_file}
  else
    echo ${1}=${2} >> ${target_file}
  fi
}
mkdir -p ${NIFI_HOME}/config-data/conf
FQDN=$(hostname -f)
cat "${NIFI_HOME}/conf/nifi.temp">"${NIFI_HOME}/conf/nifi.properties"
prop_replace nifi.security.user.login.identity.provider ''
prop_replace nifi.security.user.authorizer managed-authorizer
prop_replace nifi.security.user.oidc.discovery.url https://keycloak-app.dev.domain.com/realms/nifi/.well-known/openid-configuration
prop_replace nifi.security.user.oidc.client.id nifi-dev
prop_replace nifi.security.user.oidc.client.secret superSecret
prop_replace nifi.security.user.oidc.claim.identifying.user email
xmlstarlet ed --inplace --delete "//authorizers/authorizer[identifier='single-user-authorizer']" "${NIFI_HOME}/conf/authorizers.xml"
xmlstarlet ed --inplace --update "//authorizers/userGroupProvider/property[@name='Users File']" -v './auth-conf/users.xml' "${NIFI_HOME}/conf/authorizers.xml"
xmlstarlet ed --inplace --delete "//authorizers/userGroupProvider/property[@name='Initial User Identity 1']" "${NIFI_HOME}/conf/authorizers.xml"
xmlstarlet ed --inplace \
--subnode "authorizers/userGroupProvider" --type 'elem' -n 'property' \
--value "[email protected]" \
--insert "authorizers/userGroupProvider/property[not(@name)]" --type attr -n name \
--value "Initial User Identity 2" \
"${NIFI_HOME}/conf/authorizers.xml"
xmlstarlet ed --inplace --update "//authorizers/accessPolicyProvider/property[@name='Initial Admin Identity']" -v "[email protected]" "${NIFI_HOME}/conf/authorizers.xml"
xmlstarlet ed --inplace --update "//authorizers/accessPolicyProvider/property[@name='Authorizations File']" -v './auth-conf/authorizations.xml' "${NIFI_HOME}/conf/authorizers.xml"
xmlstarlet ed --inplace --delete "authorizers/accessPolicyProvider/property[@name='Node Identity 1']" "${NIFI_HOME}/conf/authorizers.xml"
xmlstarlet ed --inplace \
--subnode "authorizers/accessPolicyProvider" --type 'elem' -n 'property' \
--value "CN=app-nifi-0.app-nifi-headless.nifi.svc.cluster.local, OU=NIFI" \
--insert "authorizers/accessPolicyProvider/property[not(@name)]" --type attr -n name \
--value "Node Identity 0" \
"${NIFI_HOME}/conf/authorizers.xml"
xmlstarlet ed --inplace \
--subnode "authorizers/userGroupProvider" --type 'elem' -n 'property' \
--value "CN=app-nifi-0.app-nifi-headless.nifi.svc.cluster.local, OU=NIFI" \
--insert "authorizers/userGroupProvider/property[not(@name)]" --type attr -n name \
--value "Initial User Identity 0" \
"${NIFI_HOME}/conf/authorizers.xml"
xmlstarlet ed --inplace \
--subnode "authorizers/accessPolicyProvider" --type 'elem' -n 'property' \
--value "CN=app-nifi-1.app-nifi-headless.nifi.svc.cluster.local, OU=NIFI" \
--insert "authorizers/accessPolicyProvider/property[not(@name)]" --type attr -n name \
--value "Node Identity 1" \
"${NIFI_HOME}/conf/authorizers.xml"
xmlstarlet ed --inplace \
--subnode "authorizers/userGroupProvider" --type 'elem' -n 'property' \
--value "CN=app-nifi-1.app-nifi-headless.nifi.svc.cluster.local, OU=NIFI" \
--insert "authorizers/userGroupProvider/property[not(@name)]" --type attr -n name \
--value "Initial User Identity 1" \
"${NIFI_HOME}/conf/authorizers.xml"if!test -f /opt/nifi/data/flow.xml.gz &&test -f /opt/nifi/data/flow.xml;then
gzip /opt/nifi/data/flow.xml
fi
prop_replace nifi.ui.banner.text $(hostname -s)
prop_replace nifi.remote.input.host ${FQDN}
prop_replace nifi.cluster.node.address ${FQDN}
prop_replace nifi.zookeeper.connect.string ${NIFI_ZOOKEEPER_CONNECT_STRING}
prop_replace nifi.web.http.host ${FQDN}

# Update nifi.properties for web ui proxy hostname
prop_replace nifi.web.proxy.host nifi-app.dev.domain.com:8443
if [ ! -r "${NIFI_HOME}/conf/nifi-cert.pem" ]
then
/opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh standalone \
-n 'app-nifi-nifi-0.app-nifi-nifi-headless.nifi.svc.cluster.local' \
-C '[email protected]' \
-o "${NIFI_HOME}/conf/" \
-P superSecret \
-S superSecret \
--nifiPropertiesFile /opt/nifi/nifi-current/conf/nifi.properties
fi
prop_replace nifi.web.http.network.interface.default "eth0" nifi.properties
prop_replace nifi.web.http.network.interface.lo "lo" nifi.properties
for f in "${NIFI_HOME}/conf/authorizers.xml" "${NIFI_HOME}/conf/login-identity-providers.xml" ${NIFI_HOME}/conf/nifi.properties
do
  echo === $f ===
  cat $f
done
echo === end of files ===
function prop () {
  target_file=${NIFI_HOME}/conf/nifi.properties
  egrep "^${1}=" ${target_file} | cut -d'=' -f2
}
function offloadNode() {
  FQDN=$(hostname -f)
  echo "disconnecting node '$FQDN'"
  baseUrl=https://${FQDN}:8443
  echo "keystoreType=$(prop nifi.security.keystoreType)" > secure.properties
  echo "keystore=$(prop nifi.security.keystore)" >> secure.properties
  echo "keystorePasswd=$(prop nifi.security.keystorePasswd)" >> secure.properties
  echo "truststoreType=$(prop nifi.security.truststoreType)" >> secure.properties
  echo "truststore=$(prop nifi.security.truststore)" >> secure.properties
  echo "truststorePasswd=$(prop nifi.security.truststorePasswd)" >> secure.properties
  echo "[email protected]" >> secure.properties
  secureArgs="-p secure.properties"
  echo baseUrl ${baseUrl}
  echo "gracefully disconnecting node '$FQDN' from cluster"
  ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi get-nodes -ot json -u ${baseUrl} ${secureArgs} > nodes.json
  nnid=$(jq --arg FQDN "$FQDN" '.cluster.nodes[] | select(.address==$FQDN) | .nodeId' nodes.json)
  echo "disconnecting node ${nnid}"
  ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi disconnect-node -nnid $nnid -u ${baseUrl} ${secureArgs}
  echo ""
  echo "get a connected node"
  connectedNode=$(jq -r 'first(.cluster.nodes|=sort_by(.address)| .cluster.nodes[] | select(.status=="CONNECTED")) | .address' nodes.json)
  baseUrl=https://${connectedNode}:8443
  echo baseUrl ${baseUrl}
  echo ""
  echo "wait until node has state 'DISCONNECTED'"
  while [[ "${node_state}" != "DISCONNECTED" ]]; do
    sleep 1
    ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi get-nodes -ot json -u ${baseUrl} ${secureArgs} > nodes.json
    node_state=$(jq -r --arg FQDN "$FQDN" '.cluster.nodes[] | select(.address==$FQDN) | .status' nodes.json)
    echo "state is '${node_state}'"
  done
  echo ""
  echo "node '${nnid}' was disconnected"
  echo "offloading node"
  ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi offload-node -nnid $nnid -u ${baseUrl} ${secureArgs}
  echo ""
  echo "wait until node has state 'OFFLOADED'"
  while [[ "${node_state}" != "OFFLOADED" ]]; do
    sleep 1
    ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi get-nodes -ot json -u ${baseUrl} ${secureArgs} > nodes.json
    node_state=$(jq -r --arg FQDN "$FQDN" '.cluster.nodes[] | select(.address==$FQDN) | .status' nodes.json)
    echo "state is '${node_state}'"
  done
}
deleteNode() {
  echo "deleting node"
  ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi delete-node -nnid ${nnid} -u ${baseUrl} ${secureArgs}
  echo "node deleted"
}
executeTrap() {
  echo Received trapped signal, beginning shutdown...;
  offloadNode;
  ./bin/nifi.sh stop;
  deleteNode;
  exit 0;
}
trap executeTrap TERM HUP INT;
trap ":" EXIT
exec bin/nifi.sh run & nifi_pid="$!"
echo NiFi running with PID ${nifi_pid}.
wait ${nifi_pid}
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 5
Started: Wed, 24 Jan 2024 18:03:26 -0300
Finished: Wed, 24 Jan 2024 18:03:27 -0300
Ready: False
Restart Count: 15
Liveness: tcp-socket :8443 delay=90s timeout=1s period=60s #success=1 #failure=3
Readiness: tcp-socket :8443 delay=60s timeout=1s period=20s #success=1 #failure=3
Environment:
NIFI_ZOOKEEPER_CONNECT_STRING: app-nifi-zookeeper:2181
AWS_STS_REGIONAL_ENDPOINTS: regional
AWS_DEFAULT_REGION: us-east-1
AWS_REGION: us-east-1
AWS_ROLE_ARN: arn:aws:iam::111111111111:role/nifi-dev
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
/opt/nifi/content_repository from content-repository (rw)
/opt/nifi/data from data (rw)
/opt/nifi/data/flow.xml from flow-content (rw,path="flow.xml")
/opt/nifi/flowfile_repository from flowfile-repository (rw)
/opt/nifi/nifi-current/auth-conf/ from auth-conf (rw)
/opt/nifi/nifi-current/conf/authorizers.temp from authorizers-temp (rw,path="authorizers.temp")
/opt/nifi/nifi-current/conf/bootstrap-notification-services.xml from bootstrap-notification-services-xml (rw,path="bootstrap-notification-services.xml")
/opt/nifi/nifi-current/conf/bootstrap.conf from bootstrap-conf (rw,path="bootstrap.conf")
/opt/nifi/nifi-current/conf/login-identity-providers-ldap.xml from login-identity-providers-ldap-xml (rw,path="login-identity-providers-ldap.xml")
/opt/nifi/nifi-current/conf/nifi.temp from nifi-properties (rw,path="nifi.temp")
/opt/nifi/nifi-current/conf/state-management.xml from state-management-xml (rw,path="state-management.xml")
/opt/nifi/nifi-current/conf/zookeeper.properties from zookeeper-properties (rw,path="zookeeper.properties")
/opt/nifi/nifi-current/config-data from config-data (rw)
/opt/nifi/nifi-current/logs from logs (rw)
/opt/nifi/provenance_repository from provenance-repository (rw)
/var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kptf9 (ro)
app-log:
Container ID: containerd://b86541d7544152de8b096599c28f5ac170805d0709a6e0e94e9248cd02b533a2
Image: busybox:1.32.0
Image ID: docker.io/library/busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
trap"exit 0" TERM; tail -n+1 -F /var/log/nifi-app.log &wait$!
State: Running
Started: Wed, 24 Jan 2024 17:11:25 -0300
Ready: True
Restart Count: 0
Limits:
cpu: 50m
memory: 50Mi
Requests:
cpu: 10m
memory: 10Mi
Environment:
AWS_STS_REGIONAL_ENDPOINTS: regional
AWS_DEFAULT_REGION: us-east-1
AWS_REGION: us-east-1
AWS_ROLE_ARN: arn:aws:iam::111111111111:role/nifi-dev
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
/var/log from logs (rw)
/var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kptf9 (ro)
bootstrap-log:
Container ID: containerd://5c347f134854b1b4d036de51701c83dc986ec1cc5c8bf30c2d704c0eb78573dd
Image: busybox:1.32.0
Image ID: docker.io/library/busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
trap"exit 0" TERM; tail -n+1 -F /var/log/nifi-bootstrap.log &wait$!
State: Running
Started: Wed, 24 Jan 2024 17:11:25 -0300
Ready: True
Restart Count: 0
Limits:
cpu: 50m
memory: 50Mi
Requests:
cpu: 10m
memory: 10Mi
Environment:
AWS_STS_REGIONAL_ENDPOINTS: regional
AWS_DEFAULT_REGION: us-east-1
AWS_REGION: us-east-1
AWS_ROLE_ARN: arn:aws:iam::111111111111:role/nifi-dev
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
/var/log from logs (rw)
/var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kptf9 (ro)
user-log:
Container ID: containerd://ee17dbbaed5998639414030e52d3e9a4dbf9eaf584aa20a53062be46997e697b
Image: busybox:1.32.0
Image ID: docker.io/library/busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
trap"exit 0" TERM; tail -n+1 -F /var/log/nifi-user.log &wait$!
State: Running
Started: Wed, 24 Jan 2024 17:11:25 -0300
Ready: True
Restart Count: 0
Limits:
cpu: 50m
memory: 50Mi
Requests:
cpu: 10m
memory: 10Mi
Environment:
AWS_STS_REGIONAL_ENDPOINTS: regional
AWS_DEFAULT_REGION: us-east-1
AWS_REGION: us-east-1
AWS_ROLE_ARN: arn:aws:iam::111111111111:role/nifi-dev
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
/var/log from logs (rw)
/var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kptf9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
aws-iam-token:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 86400
auth-conf:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: auth-conf-app-nifi-0
ReadOnly: false
logs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: logs-app-nifi-0
ReadOnly: false
config-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: config-data-app-nifi-0
ReadOnly: false
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-app-nifi-0
ReadOnly: false
flowfile-repository:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: flowfile-repository-app-nifi-0
ReadOnly: false
content-repository:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: content-repository-app-nifi-0
ReadOnly: false
provenance-repository:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: provenance-repository-app-nifi-0
ReadOnly: false
bootstrap-conf:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: app-nifi-config
Optional: false
nifi-properties:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: app-nifi-config
Optional: false
authorizers-temp:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: app-nifi-config
Optional: false
bootstrap-notification-services-xml:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: app-nifi-config
Optional: false
login-identity-providers-ldap-xml:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: app-nifi-config
Optional: false
state-management-xml:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: app-nifi-config
Optional: false
zookeeper-properties:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: app-nifi-config
Optional: false
flow-content:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: app-nifi-config
Optional: false
kube-api-access-kptf9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: name=nifi:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 56m default-scheduler Successfully assigned nifi/app-nifi-0 to ip-10-99-9-30.ec2.internal
Normal SuccessfulAttachVolume 56m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-1a91d1ee-5139-480c-85cd-1b013a87146c"
Normal SuccessfulAttachVolume 56m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-00ae1c3f-ecfe-40e5-86c3-54fac84869e6"
Normal SuccessfulAttachVolume 56m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-a7684ff8-5df2-4bc3-967c-63e599cde1c5"
Normal SuccessfulAttachVolume 56m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-f5765299-98e4-48fa-9f0f-1bafc03bd68a"
Normal SuccessfulAttachVolume 56m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-e4481030-9212-4802-b078-a9d45d46adfe"
Normal SuccessfulAttachVolume 56m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-95f6789e-6745-40fc-a9da-c3f7eda3d30a"
Normal SuccessfulAttachVolume 56m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-4201f276-5a52-4ba8-a2c3-411667d4a040"
Normal Pulled 56m kubelet Container image "busybox:1.32.0" already present on machine
Normal Created 56m kubelet Created container zookeeper
Normal Started 56m kubelet Started container zookeeper
Normal Pulled 55m kubelet Successfully pulled image "111111111111.dkr.ecr.us-east-1.amazonaws.com/app-nifi:5ddbdde" in 59.074027755s
Normal Pulled 55m kubelet Container image "busybox:1.32.0" already present on machine
Normal Started 55m kubelet Started container user-log
Normal Created 55m kubelet Created container user-log
Normal Pulled 55m kubelet Container image "busybox:1.32.0" already present on machine
Normal Created 55m kubelet Created container app-log
Normal Started 55m kubelet Started container app-log
Normal Pulled 55m kubelet Container image "busybox:1.32.0" already present on machine
Normal Created 55m kubelet Created container bootstrap-log
Normal Started 55m kubelet Started container bootstrap-log
Normal Pulled 55m kubelet Successfully pulled image "111111111111.dkr.ecr.us-east-1.amazonaws.com/app-nifi:5ddbdde" in 191.578433ms
Normal Pulling 54m (x3 over 56m) kubelet Pulling image "111111111111.dkr.ecr.us-east-1.amazonaws.com/app-nifi:5ddbdde"
Normal Pulled 54m kubelet Successfully pulled image "111111111111.dkr.ecr.us-east-1.amazonaws.com/app-nifi:5ddbdde" in 190.73031ms
Normal Started 54m (x3 over 55m) kubelet Started container server
Normal Created 54m (x3 over 55m) kubelet Created container server
Warning BackOff 69s (x251 over 55m) kubelet Back-off restarting failed container
Get logs on a failed container inside the pod (here the server one):
kubectl logs myrelease-nifi-0 server
updating nifi.security.user.login.identity.provider in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.user.authorizer in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.user.oidc.discovery.url in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.user.oidc.client.id in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.user.oidc.client.secret in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.user.oidc.claim.identifying.user in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.ui.banner.text in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.remote.input.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.cluster.node.address in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.zookeeper.connect.string in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.proxy.host in /opt/nifi/nifi-current/conf/nifi.properties
[main] INFO org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandaloneCommandLine - Using /opt/nifi/nifi-current/conf/nifi.properties as template.
[main] INFO org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone - Running standalone certificate generation with output directory /opt/nifi/nifi-current/conf
[main] INFO org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone - Generated new CA certificate /opt/nifi/nifi-current/conf/nifi-cert.pem and key /opt/nifi/nifi-current/conf/nifi-key.key
[main] INFO org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone - Writing new ssl configuration to /opt/nifi/nifi-current/conf/app-nifi-nifi-0.app-nifi-nifi-headless.nifi.svc.cluster.local
[main] INFO org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone - Successfully generated TLS configuration for app-nifi-nifi-0.app-nifi-nifi-headless.nifi.svc.cluster.local 1 in /opt/nifi/nifi-current/conf/app-nifi-nifi-0.app-nifi-nifi-headless.nifi.svc.cluster.local
Error generating TLS configuration. (badly formatted directory string)
usage: org.apache.nifi.toolkit.tls.TlsToolkitMain [-a <arg>] [--additionalCACertificate <arg>] [-B <arg>] [-c <arg>] [-C <arg>] [-d <arg>] [-f <arg>] [-g] [-G
<arg>] [-h] [-k <arg>] [-K <arg>] [-n <arg>] [--nifiDnPrefix <arg>] [--nifiDnSuffix <arg>] [-o <arg>] [-O] [-P <arg>] [-s <arg>] [-S <arg>]
[--splitKeystore <arg>] [--subjectAlternativeNames <arg>] [-T <arg>]
Creates certificates and config files for nifi cluster.
-a,--keyAlgorithm <arg> Algorithm to use for generated keys. (default: RSA)
--additionalCACertificate <arg> Path to additional CA certificate (used to sign toolkit CA certificate) in PEM format if necessary
-B,--clientCertPassword <arg> Password for client certificate. Must either be one value or one for each client DN. (autogenerate if not specified)
-c,--certificateAuthorityHostname <arg> Hostname of NiFi Certificate Authority (default: localhost)
-C,--clientCertDn <arg> Generate client certificate suitable for use in browser with specified DN. (Can be specified multiple times.)
-d,--days <arg> Number of days issued certificate should be valid for. (default: 825)
-f,--nifiPropertiesFile <arg> Base nifi.properties file to update. (Embedded file identical to the one in a default NiFi install will be used if
not specified.)
-g,--differentKeyAndKeystorePasswords Use different generated password for the key and the keyStore.
-G,--globalPortSequence <arg> Use sequential ports that are calculated for all hosts according to the provided hostname expressions. (Can be
specified multiple times, MUST BE SAME FROM RUN TO RUN.)
-h,--help Print help and exit.
-k,--keySize <arg> Number of bits for generated keys. (default: 2048)
-K,--keyPassword <arg> Key password to use. Must either be one value or one for each host. (autogenerate if not specified)
-n,--hostnames <arg> Comma separated list of hostnames.
--nifiDnPrefix <arg> String to prepend to hostname(s) when determining DN. (default: CN=)
--nifiDnSuffix <arg> String to append to hostname(s) when determining DN. (default: , OU=NIFI)
-o,--outputDirectory <arg> The directory to output keystores, truststore, config files. (default: ../nifi-current)
-O,--isOverwrite Overwrite existing host output.
-P,--trustStorePassword <arg> Keystore password to use. Must either be one value or one for each host. (autogenerate if not specified)
-s,--signingAlgorithm <arg> Algorithm to use for signing certificates. (default: SHA256WITHRSA)
-S,--keyStorePassword <arg> Keystore password to use. Must either be one value or one for each host. (autogenerate if not specified)
--splitKeystore <arg> Split out a given keystore into its unencrypted key and certificates. Use -S and -K to specify the keystore and key
passwords.
--subjectAlternativeNames <arg> Comma-separated list of domains to use as Subject Alternative Names in the certificate
-T,--keyStoreType <arg> The type of keyStores to generate. (default: jks)
Java home: /opt/java/openjdk
NiFi Toolkit home: /opt/nifi/nifi-toolkit-current
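For anyone triaging this, one way to narrow down which argument trips the "badly formatted directory string" error is to repeat the same standalone toolkit call by hand inside the server container, writing into a scratch directory so the existing conf/ is untouched. This is only a hedged sketch: the pod and namespace names are taken from the output above, the passwords are the redacted placeholders from the container command, and the container has to stay up long enough between crashes to exec into it.

# Open a shell in the server container of the crashing pod (it exits within seconds,
# so this may take a couple of attempts while it is between restarts).
kubectl exec -n nifi -it app-nifi-0 -c server -- bash

# Inside the container: re-run the same call the chart issues, but into /tmp so the
# real conf/ directory is left alone. Drop or change one argument at a time to see
# which value the toolkit rejects as a "badly formatted directory string".
/opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh standalone \
  -n 'app-nifi-nifi-0.app-nifi-nifi-headless.nifi.svc.cluster.local' \
  -C '[email protected]' \
  -o /tmp/tls-debug \
  -P superSecret \
  -S superSecret \
  --nifiPropertiesFile /opt/nifi/nifi-current/conf/nifi.properties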
values.yaml - Sensitive information will be injected externally and "#{}#" will be replaced by CI/CD
---
# Number of nifi nodes
replicaCount: #{REPLICAS}#

## Set default image, imageTag, and imagePullPolicy.
## ref: https://hub.docker.com/r/apache/nifi/
##
image:
repository: #{DOCKER_ECR_REPO_URL}#/#{IMAGE_NAME}#
tag: "#{IMAGE_TAG}#"
pullPolicy: "Always"## Optionally specify an imagePullSecret.## Secret must be manually created in the namespace.## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/### pullSecret: myRegistrKeySecretName
securityContext:
runAsUser: 1000
fsGroup: 1000
## @param useHostNetwork - boolean - optional
## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
## not be supported. The ports need to be available on all hosts. It can be
## used for custom metrics instead of a service endpoint.
##
## WARNING: Make sure that hosts using this are properly firewalled otherwise
## metrics and traces are accepted from any host able to connect to this host.
#
sts:
# Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
podManagementPolicy: Parallel
AntiAffinity: soft
useHostNetwork: null
hostPort: null
pod:
annotations:
security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
#prometheus.io/scrape: "true"
serviceAccount:
create: true
name: nifi
annotations:
eks.amazonaws.com/role-arn: #{SERVICE_ACCOUNT_ARN}#
hostAliases: []
# - ip: "1.2.3.4"# hostnames:# - example.com# - example
startupProbe:
enabled: false
failureThreshold: 60
periodSeconds: 10
## Useful if using any custom secrets
## Pass in some secrets to use (if required)
# secrets:
# - name: myNifiSecret
#   keys:
#   - key1
#   - key2
#   mountPath: /opt/nifi/secret

## Useful if using any custom configmaps
## Pass in some configmaps to use (if required)
# configmaps:
# - name: myNifiConf
#   keys:
#   - myconf.conf
#   mountPath: /opt/nifi/custom-config
properties:
# https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#nifi_sensitive_props_key
sensitiveKey: NIFI_SENSITIVE_KEY # Must have at least 12 characters
# NiFi assumes conf/nifi.properties is persistent but this helm chart
# recreates it every time. Setting the Sensitive Properties Key
# (nifi.sensitive.props.key) is supposed to happen at the same time
# /opt/nifi/data/flow.xml.gz sensitive properties are encrypted. If that
# doesn't happen then NiFi won't start because decryption fails.
# So if sensitiveKeySetFile is configured but doesn't exist, assume
# /opt/nifi/flow.xml.gz hasn't been encrypted and follow the procedure
# https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#updating-the-sensitive-properties-key
# to simultaneously encrypt it and set nifi.sensitive.props.key.
# sensitiveKeySetFile: /opt/nifi/data/sensitive-props-key-applied
# If sensitiveKey was already set, then pass in sensitiveKeyPrior with the old key.
# sensitiveKeyPrior: OldPasswordToChangeFrom
algorithm: NIFI_PBKDF2_AES_GCM_256
# use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
externalSecure: true
isNode: true
httpsPort: 8443
webProxyHost: #{WEB_PROXY_HOST}#
clusterPort: 6007
zkClientEnsembleTraker: false # https://issues.apache.org/jira/browse/NIFI-10481
clusterNodeConnectionTimeout: '5 sec'
clusterNodeReadTimeout: '5 sec'
zookeeperConnectTimeout: '3 secs'
zookeeperSessionTimeout: '3 secs'
archiveMaxRetentionPeriod: "3 days"
archiveMaxUsagePercentage: "85%"
provenanceStorage: #{PROVENANCE_STORAGE_IN_GB}#
provenanceMaxStorageTime: "10 days"
siteToSite:
port: 10000
# use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
safetyValve:
#nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"
nifi.web.http.network.interface.default: eth0
# listen to loopback interface so "kubectl port-forward ..." works
nifi.web.http.network.interface.lo: lo
## Include additional processors
# customLibPath: "/opt/configuration_resources/custom_lib"

## Include additional libraries in the Nifi containers by using the postStart handler
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
# postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar

# Nifi User Authentication
auth:
# If set while LDAP is enabled, this value will be used for the initial admin and not the ldap bind dn / admin
admin: #{AUTH_ADMIN_EMAIL}#
SSL:
keystorePasswd: SSL_KEYSTORE_PASSWORD
truststorePasswd: SSL_TRUSTSTORE_PASSWORD
# Automatically disabled if OIDC or LDAP enabled
singleUser:
username: username
password: changemechangeme # Must have at least 12 characters
clientAuth:
enabled: false
ldap:
enabled: false
host: #ldap://<hostname>:<port>
searchBase: #CN=Users,DC=ldap,DC=example,DC=be
admin: #cn=admin,dc=ldap,dc=example,dc=be
pass: #ChangeMe
searchFilter: (objectClass=*)
userIdentityAttribute: cn
authStrategy: SIMPLE # How the connection to the LDAP server is authenticated. Possible values are ANONYMOUS, SIMPLE, LDAPS, or START_TLS.
identityStrategy: USE_DN
authExpiration: 12 hours
userSearchScope: ONE_LEVEL # Search scope for searching users (ONE_LEVEL, OBJECT, or SUBTREE). Required if searching users.
groupSearchScope: ONE_LEVEL # Search scope for searching groups (ONE_LEVEL, OBJECT, or SUBTREE). Required if searching groups.
oidc:
enabled: true
discoveryUrl: #{OIDC_DISCOVERY_URL}#
clientId: #{OIDC_CLIENT_ID}#
claimIdentifyingUser: email
admin: #{AUTH_ADMIN_EMAIL}#
preferredJwsAlgorithm:
## Request additional scopes, for example profile
additionalScopes:
openldap:
enabled: false
persistence:
enabled: true
env:
LDAP_ORGANISATION: # name of your organization e.g. "Example"
LDAP_DOMAIN: # your domain e.g. "ldap.example.be"
LDAP_BACKEND: "hdb"
LDAP_TLS: "true"
LDAP_TLS_ENFORCE: "false"
LDAP_REMOVE_CONFIG_AFTER_SETUP: "false"
adminPassword: #ChangeMe
configPassword: #ChangeMe
customLdifFiles:
1-default-users.ldif: |-
# You can find an example ldif file at https://github.com/cetic/fadi/blob/master/examples/basic/example.ldif

## Expose the nifi service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##
# headless service
headless:
type: ClusterIP
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
type: NodePort
httpsPort: 8443
# nodePort: 30236
annotations: {}
# loadBalancerIP:
## Load Balancer sources
## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
##
# loadBalancerSourceRanges:
# - 10.10.10.0/24
## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
sessionAffinity: ClientIP
# sessionAffinityConfig:
#   clientIP:
#     timeoutSeconds: 10800

# Enables additional port/ports to nifi service for internal processors
processors:
enabled: false
ports:
- name: processor01
port: 7001
targetPort: 7001
#nodePort: 30701
- name: processor02
port: 7002
targetPort: 7002
#nodePort: 30702

## Configure containerPorts section with following attributes: name, containerport and protocol.
containerPorts: []
# - name: example
#   containerPort: 1111
#   protocol: TCP

## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
enabled: true
className: alb
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/group.name: aws-alb-external
alb.ingress.kubernetes.io/backend-protocol: HTTPS
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/success-codes: "200"
alb.ingress.kubernetes.io/ssl-redirect: "443"
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=3600
tls: []
path: /
hosts:
- #{HOST}#
# If you want to change the default path, see this issue https://github.com/cetic/helm-nifi/issues/22

# Amount of memory to give the NiFi java heap
jvmMemory: 2g
# Separate image for tailing each log separately and checking zookeeper connectivity
sidecar:
image: busybox
tag: "1.32.0"
imagePullPolicy: "IfNotPresent"## Enable persistence using Persistent Volume Claims## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/##
customStorageClass: true
storageClass: nifi
storageProvisioner: ebs.csi.aws.com
storageType: gp3
persistence:
enabled: true

# When creating persistent storage, the NiFi helm chart can either reference an already-defined
# storage class by name, such as "standard" or can define a custom storage class by specifying
# customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
# For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
#
# To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
# For example:
storageClass: nifi
## The default storage class is used if this variable is not set.
accessModes: [ReadWriteOnce]
## Use subPath and have 1 persistent volume instead of 7 volumes - use when your k8s nodes have limited volume slots, to limit waste of space,
## or your available volume sizes are quite large
# The one disk will have a directory folder for each volumeMount, but this is hidden. Run 'mount' to view each mount.
subPath:
enabled: false
name: data
size: 30Gi
## Storage Capacities for persistent volumes (these are ignored if using one volume with subPath)
configStorage:
size: #{CONFIG_STORAGE}#
authconfStorage:
size: #{AUTH_STORAGE}#

# Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
dataStorage:
size: #{DATA_STORAGE}#

# Storage capacity for the FlowFile repository
flowfileRepoStorage:
size: #{FLOWFILE_STORAGE}#

# Storage capacity for the Content repository
contentRepoStorage:
size: #{CONTENT_STORAGE}#

# Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above, also.
provenanceRepoStorage:
size: #{PROVENANCE_STORAGE}#

# Storage capacity for nifi logs
logStorage:
size: #{LOG_STORAGE}#

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi
logresources:
requests:
cpu: 10m
memory: 10Mi
limits:
cpu: 50m
memory: 50Mi
## Enables setting your own affinity. Mutually exclusive with sts.AntiAffinity
## You need to set the value of sts.AntiAffinity other than "soft" and "hard"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: type
operator: In
values:
- nifi
nodeSelector: {}
tolerations:
- key: "name"
operator: "Equal"
value: "nifi"
effect: "NoSchedule"
initContainers: {}
# foo-init:  # <- will be used as container name
#   image: "busybox:1.30.1"
#   imagePullPolicy: "IfNotPresent"
#   command: ['sh', '-c', 'echo this is an initContainer']
#   volumeMounts:
#   - mountPath: /tmp/foo
#     name: foo
extraVolumeMounts: []
extraVolumes: []
## Extra containers
extraContainers: []
terminationGracePeriodSeconds: 30
## Extra environment variables that will be pass onto deployment pods
env: []
## Extra environment variables from secrets and config maps
envFrom: []
## Extra options to add to the bootstrap.conf file
extraOptions: []
# envFrom:
# - configMapRef:
#     name: config-name
# - secretRef:
#     name: mysecret

## Openshift support
## Use the following variables in order to enable Route and Security Context Constraint creation
openshift:
scc:
enabled: false
route:
enabled: false
#host: www.test.com
#path: /nifi

# ca server details
# Setting this true would create a nifi-toolkit based ca server
# The ca server will be used to generate self-signed certificates required setting up secured cluster
ca:
## If true, enable the nifi-toolkit certificate authority
enabled: true
persistence:
enabled: true
server: ""
service:
port: 9090
token: CA_TOKENNNNNNNNN
admin:
cn: admin
serviceAccount:
create: false
#name: nifi-ca
openshift:
scc:
enabled: false

# cert-manager support
# Setting this true will have cert-manager create a private CA for the cluster
# as well as the certificates for each cluster node.
certManager:
enabled: false
clusterDomain: cluster.local
keystorePasswd: changeme
truststorePasswd: changeme
replaceDefaultTrustStore: false
additionalDnsNames:
- localhost
refreshSeconds: 300
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 100m
memory: 128Mi
# cert-manager takes care of rotating the node certificates, so default
# their lifetime to 90 days. But when the CA expires you may need to
# 'helm delete' the cluster, delete all the node certificates and secrets,
# and then 'helm install' the NiFi cluster again. If a site-to-site trusted
# CA or a NiFi Registry CA certificate expires, you'll need to restart all
# pods to pick up the new version of the CA certificate. So default the CA
# lifetime to 10 years to avoid that happening very often.
# c.f. https://github.com/cert-manager/cert-manager/issues/2478#issuecomment-1095545529
certDuration: 2160h
caDuration: 87660h
# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
## If true, install the Zookeeper chart
## ref: https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
enabled: true
## If the Zookeeper Chart is disabled a URL and port are required to connect
url: ""
port: 2181
replicaCount: 3
tolerations:
- key: "name"
operator: "Equal"
value: "nifi"
effect: "NoSchedule"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: type
operator: In
values:
- nifi
# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
## If true, install the Nifi registry
enabled: true
url: ""
port: 18080
tolerations:
- key: "name"
operator: "Equal"
value: "nifi"
effect: "NoSchedule"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: type
operator: In
values:
- nifi
## Add values for the nifi-registry here
## ref: https://github.com/dysnix/charts/blob/main/dysnix/nifi-registry/values.yaml

# Configure metrics
metrics:
prometheus:
# Enable Prometheus metrics
enabled: false
# Port used to expose Prometheus metrics
port: 9092
serviceMonitor:
# Enable deployment of Prometheus Operator ServiceMonitor resource
enabled: false
# namespace: monitoring
# Additional labels for the ServiceMonitor
labels: {}
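Because every #{...}# token above is substituted by CI/CD before install, rendering the chart locally with the substituted values is a quick way to see the exact command block the statefulset passes to tls-toolkit.sh. This is only a hedged sketch: the cetic/nifi chart reference, the substituted values file name, and the template path are assumptions, not taken from this issue.

# Render only the statefulset and inspect the generated "server" container command
# (release name, namespace, and file paths below are placeholders to adapt).
helm template app-nifi cetic/nifi \
  --namespace nifi \
  -f values.substituted.yaml \
  --show-only templates/statefulset.yaml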