Installing gatekeeper is easy:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
We can see this creates configs and constrainttemplates CRDs:
$ kubectl api-resources | grep gatekeeper.sh
configs               config.gatekeeper.sh      true    Config
constrainttemplates   templates.gatekeeper.sh   false   ConstraintTemplate
The constraints based on the constrainttemplates will themselves be their own CRDs. In the gatekeeper-system namespace we have the controller manager and webhook service that will serve the validating webhook requests from the API server:
$ kubectl get all -n gatekeeper-system
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/gatekeeper-controller-manager-77ff8cc995-sh8xq   1/1     Running   1          6d21h

NAME                                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/gatekeeper-webhook-service   ClusterIP   10.0.5.112   <none>        443/TCP   6d21h

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/gatekeeper-controller-manager   1/1     1            1           6d21h

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/gatekeeper-controller-manager-77ff8cc995   1         1         1       6d21h
The next step is creating constraint templates. Here's one that denies all Ingress objects:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8snoexternalservices
spec:
  crd:
    spec:
      names:
        kind: K8sNoExternalServices
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8snoexternalservices
        violation[{"msg": msg}] {
          input.review.kind.kind == "Ingress"
          re_match("^(extensions|networking.k8s.io)$", input.review.kind.group)
          msg := sprintf("No external service exposure is allowed via ingress: %v", [input.review.object.metadata.name])
        }
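The template on its own doesn't enforce anything: you also have to create a constraint of kind K8sNoExternalServices that says which resources it applies to. A minimal sketch, following the same pattern as the debugging constraint further down (the constraint name is arbitrary):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sNoExternalServices
metadata:
  name: no-external-ingress   # illustrative name
spec:
  match:
    kinds:
      - apiGroups: ["extensions", "networking.k8s.io"]
        kinds: ["Ingress"]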
But how did we know that it was input.review.kind.kind? That's pretty unintuitive. Turns out it's because that's the structure of the object passed to the validating admission webhook. We can see this by creating a constraint template that just blocks everything and logs the full object:
$ cat template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdebugtemplate
spec:
  crd:
    spec:
      names:
        kind: K8sDebugTemplate
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package debugging
        violation[{"msg": msg}] {
          msg := sprintf("Review object: %v", [input.review])
        }
$ kubectl apply -f template.yaml
And then create a constraint that applies it to Ingress objects:
$ cat constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDebugTemplate
metadata:
  name: debuggingdeny
spec:
  match:
    kinds:
      - apiGroups: ["extensions", "networking.k8s.io"]
        kinds: ["Ingress"]
$ kubectl apply -f constraint.yaml
$ kubectl api-resources --api-group=constraints.gatekeeper.sh
NAME               SHORTNAMES   APIGROUP                    NAMESPACED   KIND
k8sdebugtemplate                constraints.gatekeeper.sh   false        K8sDebugTemplate
Then when we create any Ingress, the full review object is dumped in the denial message:
$ cat basic-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  namespace: frontend
spec:
  backend:
    serviceName: web
    servicePort: 8080
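Applying the manifest gets rejected by the debuggingdeny constraint, with the review object embedded in the webhook error message. The same sed/jq trick used for the patch below pulls it out (for a create, oldObject will be null):

$ kubectl apply -f basic-ingress.yaml 2>&1 | sed s'/): admission webhook.*//' | sed s'/.* Review object: //' | jq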
If we patch an existing object we get to see the old object filled out too:
$ kubectl patch -f basic-ingress.yaml -p '{"spec":{"backend":{"servicePort":8081}}}' 2>&1 | sed s'/): admission webhook.*//' | sed s'/.* Review object: //' | jq
{
  "operation": "UPDATE",
  "userInfo": {
    "username": "me@example.com",
    "groups": [
      "system:authenticated"
    ],
    "extra": {
      "user-assertion.cloud.google.com": [
        "XX=="
      ]
    }
  },
  "object": {
    "kind": "Ingress",
    "apiVersion": "extensions/v1beta1",
    "metadata": {
      "annotations": {
        "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"basic-ingress\",\"namespace\":\"frontend\"},\"spec\":{\"backend\":{\"serviceName\":\"web\",\"servicePort\":8080}}}\n"
      },
      "finalizers": [
        "finalizers.gatekeeper.sh/sync"
      ],
      "name": "basic-ingress",
      "namespace": "frontend",
      "uid": "403d4a8f-2db0-11ea-828b-42010a80004c",
      "resourceVersion": "2135434",
      "generation": 2,
      "creationTimestamp": "2020-01-02T22:36:00Z"
    },
    "spec": {
      "backend": {
        "serviceName": "web",
        "servicePort": 8081
      }
    },
    "status": {
      "loadBalancer": {
        "ingress": [
          {
            "ip": "1.2.3.4"
          }
        ]
      }
    }
  },
  "oldObject": {
    "kind": "Ingress",
    "apiVersion": "extensions/v1beta1",
    "metadata": {
      "name": "basic-ingress",
      "namespace": "frontend",
      "uid": "403d4a8f-2db0-11ea-828b-42010a80004c",
      "resourceVersion": "2135434",
      "generation": 1,
      "creationTimestamp": "2020-01-02T22:36:00Z",
      "annotations": {
        "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"basic-ingress\",\"namespace\":\"frontend\"},\"spec\":{\"backend\":{\"serviceName\":\"web\",\"servicePort\":8080}}}\n"
      },
      "finalizers": [
        "finalizers.gatekeeper.sh/sync"
      ]
    },
    "spec": {
      "backend": {
        "serviceName": "web",
        "servicePort": 8080
      }
    },
    "status": {
      "loadBalancer": {
        "ingress": [
          {
            "ip": "1.2.3.4"
          }
        ]
      }
    }
  },
  "uid": "6410973b-2db1-11ea-828b-42010a80004c",
  "kind": {
    "group": "extensions",
    "version": "v1beta1",
    "kind": "Ingress"
  },
  "resource": {
    "group": "extensions",
    "version": "v1beta1",
    "resource": "ingresses"
  },
  "options": null,
  "_unstable": {
    "namespace": {
      "kind": "Namespace",
      "apiVersion": "v1",
      "metadata": {
        "creationTimestamp": "2019-12-26T23:29:47Z",
        "annotations": {
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"frontend\"}}\n"
        },
        "name": "frontend",
        "selfLink": "/api/v1/namespaces/frontend",
        "uid": "9a96616e-2837-11ea-8e60-42010a80017b",
        "resourceVersion": "33039"
      },
      "spec": {
        "finalizers": [
          "kubernetes"
        ]
      },
      "status": {
        "phase": "Active"
      }
    }
  },
  "name": "basic-ingress",
  "namespace": "frontend",
  "dryRun": false
}
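With the full structure in front of us it's easier to see what else a policy can key off. For example, input.review.operation and input.review.oldObject make it possible to write rules about changes rather than just end state. Here's a hypothetical rule (not from the Gatekeeper library) that would sit inside a ConstraintTemplate just like the ones above, rejecting updates that change an Ingress backend port:

package k8snoportchanges
violation[{"msg": msg}] {
  input.review.operation == "UPDATE"
  input.review.kind.kind == "Ingress"
  old_port := input.review.oldObject.spec.backend.servicePort
  new_port := input.review.object.spec.backend.servicePort
  old_port != new_port
  msg := sprintf("servicePort change from %v to %v is not allowed", [old_port, new_port])
}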
We can clean up the debugging rule by deleting the constraint:
kubectl delete k8sdebugtemplate.constraints.gatekeeper.sh debuggingdeny
In terms of policy writing, you can test individual policies like this:
$ docker run -v /Users/gcastle/git/gatekeeper/library/general/uniqueingresshost:/tests openpolicyagent/opa test /tests/src.rego /tests/src_test.rego
PASS: 12/12
The current testing relies on Rego assertions as tests, which is a bit of a PITA when you need to create a lot of test permutations on objects, because you need to re-create the admission object above to some extent. It's also not super obvious what passing and failing mean; it could do with a higher-level test framework:
- "count(results) == 0" means pass this test if there were no violations
- "count(results) == 1" means pass this test if there was exactly one violation
Bundling up templates is easy because gatekeeper is using kustomize. To create all the templates you could just do:
kustomize build templates | kubectl apply -f -
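Assuming a templates/ directory with one ConstraintTemplate per file, the kustomization.yaml is just a list of resources (the file names here are illustrative):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - k8snoexternalservices.yaml
  - k8sdebugtemplate.yaml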