Reconstruct OpenAPI Specifications from real-time workload traffic seamlessly.
- Not all applications have an OpenAPI specification available
- How can we get this for legacy or external applications?
- Detect whether microservices still use deprecated APIs (a.k.a. Zombie APIs)
- Detect whether microservices use undocumented APIs (a.k.a. Shadow APIs)
- Generate OpenAPI specifications without code instrumentation or modifying existing workloads (seamless documentation)
- Capture all API traffic in an existing environment using a service mesh framework (e.g. Istio)
- Construct an OpenAPI specification by observing API traffic or upload a reference OpenAPI spec
- Review, modify and approve automatically generated OpenAPI specs
- Alert on any differences between the approved API specification and the API calls observed at runtime; detects shadow & zombie APIs
- UI dashboard to audit and monitor the findings
Build and push the image to your Docker registry:

```shell
DOCKER_IMAGE=<your repo>/apiclarity DOCKER_TAG=<your tag> make push-docker
```

Then modify the image name of the APIClarity deployment in `./deployment/apiclarity.yaml` to match the image you pushed (see the sketch below).
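A minimal sketch of updating the manifest with `sed` (assuming GNU sed; the pattern below is an assumption, so adjust it to match the image reference actually present in the manifest):

```shell
# Hypothetical: rewrite the APIClarity container image reference in the deployment manifest.
sed -i 's|image: .*apiclarity.*|image: <your repo>/apiclarity:<your tag>|' deployment/apiclarity.yaml
```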
Build the UI and the backend locally:

```shell
make ui
make backend
```
- Make sure that Istio is installed and running in your cluster. See the official Istio installation instructions for more information.
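  A quick sanity check (assuming `istioctl` is on your PATH and Istio was installed into the default `istio-system` namespace):

  ```shell
  # Check the client and control-plane versions, and confirm the control-plane pods are running.
  istioctl version
  kubectl get pods -n istio-system
  ```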
- Clone the apiclarity repository to your local system:

  ```shell
  git clone https://github.com/apiclarity/apiclarity
  cd apiclarity
  ```
- Deploy APIClarity in K8s. It will be deployed in a new namespace named `apiclarity`:

  ```shell
  kubectl apply -f deployment/apiclarity.yaml
  ```

  Note: The manifest uses `PersistentVolumeClaim`s to request two persistent volumes. Make sure you have a default `StorageClass` available in your cluster or, if deploying on a cluster that does not have one, edit the manifest to provide your own local storage configuration (see the example below).
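  For example, to check for a default `StorageClass` and, if needed, mark an existing class as the default (the class name `standard` below is an assumption; use whatever your cluster provides):

  ```shell
  # List storage classes; the default one is marked "(default)".
  kubectl get storageclass

  # Hypothetical: mark the "standard" class as the cluster default.
  kubectl patch storageclass standard \
    -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  ```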
- Verify that APIClarity is running:

  ```shell
  $ kubectl get pods -n apiclarity
  NAME                          READY   STATUS    RESTARTS   AGE
  apiclarity-5df5fd6d98-h8v7t   1/1     Running   0          15m
  apiclarity-postgresql-0       1/1     Running   0          15m
  ```
- Initialize and pull the `wasm-filters` submodule:

  ```shell
  git submodule init wasm-filters
  git submodule update wasm-filters
  cd wasm-filters
  ```
- Deploy the Envoy Wasm filter for capturing the traffic:

  Run the Wasm deployment script for the selected namespaces to allow traffic tracing. Tracing is accomplished by patching the Istio sidecars within the pods to load the APIClarity Wasm filter, so make sure Istio sidecar injection is enabled for every namespace you intend to trace before deploying anything to that namespace (see the example at the end of this step).

  The script will automatically:

  - Deploy the Wasm filter binary as a config map
  - Deploy the Istio Envoy filter to use the Wasm binary
  - Patch all deployment annotations within the selected namespaces to mount the Wasm binary

  ```shell
  ./deploy.sh <namespace1> <namespace2> ...
  ```

  Note: To build the Wasm filter from source instead of using the pre-built binary, please follow the instructions in the wasm-filters repository.
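  For example, to enable sidecar injection for a namespace before deploying workloads to it, and to sanity-check afterwards what the script created (the exact resource names may differ, so only a listing is shown):

  ```shell
  # Enable Istio sidecar injection for a namespace you intend to trace.
  kubectl label namespace <namespace1> istio-injection=enabled --overwrite

  # After running deploy.sh, verify the filter artifacts exist in that namespace.
  kubectl get envoyfilters.networking.istio.io -n <namespace1>
  kubectl get configmaps -n <namespace1>
  ```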
- Port forward to the APIClarity UI:

  ```shell
  kubectl port-forward -n apiclarity svc/apiclarity 9999:8080
  ```
- Open the APIClarity UI in your browser: http://localhost:9999/
- Generate some traffic in the applications in the traced namespaces and check the APIClarity UI :)
The file `deployment/apiclarity.yaml` is used to deploy and configure APIClarity on your cluster.

- Set `RESPONSE_HEADERS_TO_IGNORE` and `REQUEST_HEADERS_TO_IGNORE` to a space-separated list of headers to ignore when reconstructing the spec.

  Note: The current values are defined in the `headers-to-ignore-config` ConfigMap (see the sketch below).
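  A minimal sketch of inspecting and changing these values (assuming the ConfigMap lives in the `apiclarity` namespace and that the deployment is named `apiclarity`, as suggested by the pod names above; a restart is assumed to be needed for the change to take effect):

  ```shell
  # Show the current header-ignore configuration.
  kubectl get configmap headers-to-ignore-config -n apiclarity -o yaml

  # Edit the lists, then restart APIClarity so the new values are picked up.
  kubectl edit configmap headers-to-ignore-config -n apiclarity
  kubectl rollout restart deployment apiclarity -n apiclarity
  ```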
A good demo application to try APIClarity with is the Sock Shop Demo.
To deploy the Sock Shop Demo, follow these steps:
- Create the `sock-shop` namespace and enable Istio injection:

  ```shell
  kubectl create namespace sock-shop
  kubectl label namespaces sock-shop istio-injection=enabled
  ```
- Deploy the Sock Shop Demo to your cluster:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml
  ```
- From the APIClarity git repository, deploy the Wasm filter in the `sock-shop` namespace:

  ```shell
  cd apiclarity/wasm-filters
  ./deploy.sh sock-shop
  ```
- Find the NodePort to access the Sock Shop Demo App:

  ```shell
  $ kubectl describe svc front-end -n sock-shop
  [...]
  NodePort:  <unset>  30001/TCP
  [...]
  ```

  Use this port together with your node IP to access the demo webshop and run some transactions to generate data to review on the APIClarity dashboard (see the example below).
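  For example, a few requests against the demo's front end (replace `<node-ip>` with the IP of one of your cluster nodes; the `/catalogue` path is assumed to be one of the Sock Shop front-end endpoints):

  ```shell
  # Browse the catalogue a few times to generate API traffic for APIClarity to trace.
  for i in 1 2 3; do
    curl -s "http://<node-ip>:30001/catalogue" > /dev/null
  done
  ```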
- Build the UI & backend locally as described above:

  ```shell
  make ui && make backend
  ```
- Copy the built site:

  ```shell
  cp -r ./ui/build ./site
  ```
- Run the backend and frontend locally using demo data:

  ```shell
  DATABASE_DRIVER=LOCAL FAKE_TRACES=true FAKE_TRACES_PATH=./backend/pkg/test/trace_files \
  ENABLE_DB_INFO_LOGS=true ./backend/bin/backend run
  ```
- Open the APIClarity UI in your browser: http://localhost:8080/
Pull requests and bug reports are welcome.

For larger changes, please create an issue on GitHub first to discuss your proposed changes and their possible implications.