title | weight | type |
---|---|---|
Debugging issues with your application | 25 | docs |
You deployed your app to Knative Serving, but it isn't working as expected. Go through this step-by-step guide to understand what failed.
Check your deploy command output to see whether it succeeded or not. If your deployment process was terminated, you should see an error message in the output that describes the reason why the deployment failed.
This kind of failure is most likely due to either a misconfigured manifest or a wrong command. For example, the following output says that the traffic percents in your Route must sum to 100:
Error from server (InternalError): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"serving.knative.dev/v1alpha1\",\"kind\":\"Route\",\"metadata\":{\"annotations\":{},\"name\":\"route-example\",\"namespace\":\"default\"},\"spec\":{\"traffic\":[{\"configurationName\":\"configuration-example\",\"percent\":50}]}}\n"}},"spec":{"traffic":[{"configurationName":"configuration-example","percent":50}]}}
to:
&{0xc421d98240 0xc421e77490 default route-example STDIN 0xc421db0488 264682 false}
for: "STDIN": Internal error occurred: admission webhook "webhook.knative.dev" denied the request: mutation failed: The route must have traffic percent sum equal to 100.
ERROR: Non-zero return code '1' from command: Process exited with status 1
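To fix this particular error, make the traffic percentages across all targets in the Route add up to 100. A minimal sketch of a corrected manifest, reusing the route-example and configuration-example names from the error message above:

```bash
# Sketch only: a single target carrying 100% of the traffic so the percents sum to 100.
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1alpha1
kind: Route
metadata:
  name: route-example
  namespace: default
spec:
  traffic:
  - configurationName: configuration-example
    percent: 100
EOF
```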
Knative Serving provides default out-of-the-box logs for your application. Access your application logs as described on the Accessing Logs page.
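For a quick look without going through the full logging stack, you can also read logs straight from one of your pods; a minimal sketch, assuming your application runs in the user-container container and using whatever pod name kubectl get pods shows for your revision:

```bash
# Stream the application container's logs directly from one of its pods.
kubectl logs <pod-name> -c user-container --follow
```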
Run the following command to get the status of the Route object with which you deployed your application:
kubectl get route <route-name> --output yaml
The conditions in status provide the reason if there is any failure. For details, see Knative Error Conditions and Reporting (currently some of them are not implemented yet).
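If you only want the Ready condition rather than the whole object, a small sketch using kubectl's jsonpath filter support:

```bash
# Print just the Ready condition of the Route (status and reason, if any).
kubectl get route <route-name> \
  --output jsonpath='{.status.conditions[?(@.type=="Ready")]}'
```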
Run the following command to list all the ClusterIngress resources, with their labels:
kubectl get clusteringress -o=custom-columns='NAME:.metadata.name,LABELS:.metadata.labels'
NAME LABELS
helloworld-go-h5kd4 map[serving.knative.dev/route:helloworld-go serving.knative.dev/routeNamespace:default]
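To pick out the ClusterIngress that belongs to your own Route, you can filter that listing by the route label; a sketch using the helloworld-go route from the sample output above:

```bash
# Keep only the ClusterIngress whose route label matches your Route's name.
kubectl get clusteringress \
  -o=custom-columns='NAME:.metadata.name,LABELS:.metadata.labels' \
  | grep 'serving.knative.dev/route:helloworld-go'
```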
The labels serving.knative.dev/route and serving.knative.dev/routeNamespace tell you exactly which Route a ClusterIngress is a child resource of. Find the one corresponding to your Route. If a ClusterIngress does not exist, the route controller believes that the Revisions targeted by your Route/Service aren't ready. Please proceed to the later sections to diagnose Revision readiness status.
Otherwise, run the following command to look at the ClusterIngress created for your Route
kubectl get clusteringress <CLUSTERINGRESS_NAME> --output yaml
In particular, look at the status: section. If the ClusterIngress is working correctly, you should see the condition with type=Ready have status=True. Otherwise, there will be error messages.
Now, if ClusterIngress shows status Ready, there must be a corresponding VirtualService. Run the following command:
kubectl get virtualservice <CLUSTERINGRESS_NAME> -n knative-serving --output yaml
The network configuration in the VirtualService must match that of the ClusterIngress and Route. The VirtualService currently doesn't expose a Status field, so if one exists and its configuration matches the ClusterIngress and Route, you may need to wait a little while for those settings to propagate.
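One quick way to eyeball whether they match is to print the hosts from both objects side by side; this is only a sketch, and the .spec.rules[*].hosts path for ClusterIngress is an assumption about the schema in your release:

```bash
# Hosts the ClusterIngress expects to serve (path is an assumption for this release).
kubectl get clusteringress <CLUSTERINGRESS_NAME> \
  --output jsonpath='{.spec.rules[*].hosts}'

# Hosts the generated VirtualService actually routes.
kubectl get virtualservice <CLUSTERINGRESS_NAME> -n knative-serving \
  --output jsonpath='{.spec.hosts}'
```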
If you are familiar with Istio and istioctl, you can use istioctl to look deeper, following the Istio guide.
Before Knative 0.3, a LoadBalancer service called knative-ingressgateway was used to handle ingress. Since Knative 0.3, the istio-ingressgateway Service is used instead.
To check the IP address of your ingress, use:
kubectl get svc -n istio-system istio-ingressgateway
Or replace that with knative-ingressgateway if you are using a Knative release older than 0.3.
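If you only need the address itself, for example to curl your app, a small sketch (again, substitute knative-ingressgateway for releases older than 0.3):

```bash
# Print just the external IP assigned to the ingress gateway, if one exists.
kubectl get svc istio-ingressgateway -n istio-system \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
```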
If there is no external IP address, use
kubectl describe svc istio-ingressgateway -n istio-system
to see the reason why an external IP address wasn't provisioned. Most likely it is due to a quota issue.
If you configured your Route with a Configuration, run the following command to get the name of the Revision created for your deployment (look up the configuration name in the Route .yaml file):
kubectl get configuration <configuration-name> --output jsonpath="{.status.latestCreatedRevisionName}"
If you configured your Route with a Revision directly, look up the revision name in the Route .yaml file. Then run the following command:
kubectl get revision <revision-name> --output yaml
A ready Revision should have the following condition in status:
conditions:
- reason: ServiceReady
status: "True"
type: Ready
If you see this condition, continue debugging by checking the Pod status as described below and by reviewing your application logs.
If you see other conditions, to debug further:
- Look up the meaning of the conditions in Knative Error Conditions and Reporting. Note: some of them are not implemented yet. An alternative is to check Pod status.
- If you are using BUILD to deploy and the BuildComplete condition is not True, check BUILD status.
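As a quicker check of Revision readiness, you can also have kubectl block until the Ready condition becomes True; a sketch, assuming kubectl 1.11 or newer for the kubectl wait command:

```bash
# Wait up to 60 seconds for the Revision's Ready condition to become True.
kubectl wait revision/<revision-name> --for=condition=Ready --timeout=60s
```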
To get the Pods for all your deployments:
kubectl get pods
This command should list all Pods with a brief status. For example:
NAME READY STATUS RESTARTS AGE
configuration-example-00001-deployment-659747ff99-9bvr4 2/2 Running 0 3h
configuration-example-00002-deployment-5f475b7849-gxcht 1/2 CrashLoopBackOff 2 36s
Choose one and use the following command to see detailed information for its status. Some useful fields are conditions and containerStatuses:
kubectl get pod <pod-name> --output yaml
If you see issues with the "user-container" container in the containerStatuses, check your application logs as described earlier.
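To pull out just that container's state without scrolling through the whole object, a small sketch using a jsonpath filter:

```bash
# Show the current state (running, waiting, or terminated) of the user-container.
kubectl get pod <pod-name> \
  --output jsonpath='{.status.containerStatuses[?(@.name=="user-container")].state}'
```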
If you are using Build to deploy, run the following command to get the Build for your Revision:
kubectl get build $(kubectl get revision <revision-name> --output jsonpath="{.spec.buildName}") --output yaml
If there is any failure, the conditions in status provide the reason. To access build logs, first execute kubectl proxy and then open the Kibana UI.
Use any of the following filters within Kibana UI to see build logs. For more
information about the Knative observability features, see
Installing logging, metrics, and traces.
- All build logs:
_exists_:"kubernetes.labels.build-name"
- Build logs for a specific build:
kubernetes.labels.build-name:"<BUILD NAME>"
- Build logs for a specific build and step:
kubernetes.labels.build-name:"<BUILD NAME>" AND kubernetes.container_name:"build-step-<BUILD STEP NAME>"
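If you prefer not to go through Kibana, you may also be able to read a build step's logs directly with kubectl. This is only a sketch: it assumes the Build records its backing pod name under status.cluster.podName and runs each step in a container named build-step-<step-name>, which may differ in your release:

```bash
# Assumption: the Build's status records the pod that ran it under .status.cluster.podName.
BUILD_POD=$(kubectl get build <build-name> --output jsonpath='{.status.cluster.podName}')

# Assumption: each build step runs in a container named build-step-<step-name>.
kubectl logs "$BUILD_POD" -c build-step-<step-name>
```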