Friday, August 21, 2020

DNS CNAME at the root of a domain

TIL about DNS CNAME "flattening", a Cloudflare feature that lets you put a CNAME at the domain root: something the DNS RFCs effectively forbid, since a CNAME can't coexist with the SOA and NS records every zone apex must have.

Why does that matter? I was helping a nonprofit that uses a website hosting company but has its own domain name. The hosting company tells them to "create a CNAME pointing at blahblah.somehosting.com", which works great: I created "www.example.com" as a CNAME pointing at the hosting address.

But what about the "naked" domain? If I type example.com like a normal person instead of www.example.com, it doesn't resolve, and there's nowhere to point it. Cloudflare's solution is the flattening above: you configure the non-RFC-compliant CNAME at the root, and Cloudflare resolves the target itself and answers queries with the resulting A/AAAA records, so what actually goes out on the wire is still compliant.
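None of this is visible to clients: a query for the flattened apex just returns address records. A quick way to see it in action (hypothetical output for an example.com fronted by Cloudflare):
$ dig +short example.com A
104.21.48.216
172.67.141.75
$ dig +short example.com CNAME   # no answer: the flattened CNAME never appears on the wire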

Google Domains takes a different approach: you create a subdomain forward for "@" and point it at your www.example.com target. With "forward path" enabled on the entry, URL paths are also preserved through the redirect.

Under the hood Google creates A records for the root domain, so a request for the root resolves to a Google service that resolves your www.example.com CNAME, establishes an HTTP session with the requester, and issues the requesting browser a 301/302 redirect.
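You can watch this happen with curl; hypothetical output, showing the path surviving the redirect when "forward path" is enabled:
$ curl -sI http://example.com/donate
HTTP/1.1 301 Moved Permanently
Location: http://www.example.com/donate
With "forward path" disabled, the Location header would just be http://www.example.com/.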

Monday, May 25, 2020

Measuring internet bandwidth in a cron script with speedtest.net

This is a simple way to keep tabs on bandwidth provided by your ISP. It's also useful as a historical record to prove exactly when service got worse.

speedtest-cli is a Python CLI for speedtest.net that some kind folks have dockerized.

To get a self-appending CSV just run:
/usr/bin/docker run --rm robertcsapo/speedtest --csv >> speedtest.csv
Assuming you run docker passwordless (actually a bad idea for security, since membership in the docker group is effectively root access), you can then add this to your crontab. You may want to schedule it outside the hours when you need your network bandwidth the most:
0 1 * * * /usr/bin/docker run --rm robertcsapo/speedtest --csv >> /home/myuser/speedtest.csv
To do this with better security, don't run docker passwordless; instead, allow sudo to run just this specific command via a sudoers entry. Here 12345 is your user ID, which you can get by running 'id', and the -u flag runs the process inside the container as that non-root user:
myuser ALL=(ALL) NOPASSWD: /usr/bin/docker run -u 12345 --rm robertcsapo/speedtest --csv
And then in your cron put:
0 1 * * * sudo /usr/bin/docker run -u 12345 --rm robertcsapo/speedtest --csv >> /home/me/speedtest.csv
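Once the CSV has some history in it you can eyeball trends. A minimal sketch, assuming the image wraps speedtest-cli, whose --csv columns are Server ID, Sponsor, Server Name, Timestamp, Distance, Ping, Download, Upload, Share, IP Address (Download/Upload in bits per second):
# print the timestamp and download speed in Mbit/s for each run
awk -F, '{ printf "%s %.1f Mbit/s\n", $4, $7/1000000 }' /home/myuser/speedtest.csv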

Thursday, January 2, 2020

Gatekeeper, OPA, rego: notes from testing and debugging policies

Installing Gatekeeper is easy:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
We can see this creates configs and constrainttemplates CRDs:
$ kubectl api-resources | grep gatekeeper.sh
configs                                        config.gatekeeper.sh           true         Config
constrainttemplates                            templates.gatekeeper.sh        false        ConstraintTemplate
The constraints based on the constrainttemplates will themselves be their own CRDs. In the gatekeeper-system namespace we have the controller manager and webhook service that serve the validating webhook requests from the API server:
$ kubectl get all -n gatekeeper-system
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/gatekeeper-controller-manager-77ff8cc995-sh8xq   1/1     Running   1          6d21h

NAME                                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/gatekeeper-webhook-service   ClusterIP   10.0.5.112   <none>        443/TCP   6d21h

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/gatekeeper-controller-manager   1/1     1            1           6d21h

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/gatekeeper-controller-manager-77ff8cc995   1         1         1       6d21h
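The install also registers the webhook with the API server, which you can confirm (hypothetical output; the exact resource name may vary by Gatekeeper version):
$ kubectl get validatingwebhookconfigurations
NAME                                          CREATED AT
gatekeeper-validating-webhook-configuration   2019-12-26T23:30:12Z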
The next step is creating constraint templates. Here's one that denies all Ingresses:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8snoexternalservices
spec:
  crd:
    spec:
      names:
        kind: K8sNoExternalServices
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8snoexternalservices

        violation[{"msg": msg}] {
          input.review.kind.kind == "Ingress"
          re_match("^(extensions|networking.k8s.io)$", input.review.kind.group)
          msg := sprintf("No external service exposure is allowed via ingress: %v", [input.review.object.metadata.name])
        }
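A template by itself enforces nothing; you also need a constraint of the new K8sNoExternalServices kind. A minimal sketch (the constraint name is made up, and the match block mirrors the debug constraint used later):
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sNoExternalServices
metadata:
  name: no-ingress-anywhere
spec:
  match:
    kinds:
      - apiGroups: ["extensions", "networking.k8s.io"]
        kinds: ["Ingress"]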
But how did we know that it was input.review.kind.kind? That's pretty unintuitive. It turns out that's the structure of the AdmissionReview request passed to the validating admission webhook. We can see it by creating a constraint template that blocks everything it matches and logs the full object:
$ cat template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdebugtemplate
spec:
  crd:
    spec:
      names:
        kind: K8sDebugTemplate
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package debugging

        violation[{"msg": msg}] {
          msg := sprintf("Review object: %v", [input.review])
        }
$ kubectl apply -f template.yaml
And then apply that to ingress objects:
$ cat constraint.yaml 
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDebugTemplate 
metadata:
  name: debuggingdeny
spec:
  match:
    kinds:
      - apiGroups: ["extensions", "networking.k8s.io"]
        kinds: ["Ingress"]
$ kubectl apply -f constraint.yaml
$ kubectl api-resources --api-group=constraints.gatekeeper.sh
NAME                    SHORTNAMES   APIGROUP                    NAMESPACED   KIND
k8sdebugtemplate                     constraints.gatekeeper.sh   false        K8sDebugTemplate
Then whenever we create an Ingress, the denial message includes the full review object:
$ cat basic-ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  namespace: frontend
spec:
  backend:
    serviceName: web
    servicePort: 8080
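Applying it gets denied, and the violation message carries the entire review object; trimmed hypothetical output, with the shape inferred from the sed pipeline below:
$ kubectl apply -f basic-ingress.yaml
Error from server ([denied by debuggingdeny] Review object: {"operation": "CREATE", ...}): error when creating "basic-ingress.yaml": admission webhook "validation.gatekeeper.sh" denied the request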
If we patch an existing object, we also get to see oldObject filled out:
$ kubectl patch -f basic-ingress.yaml -p '{"spec":{"backend":{"servicePort":8081}}}' 2>&1 | sed s'/): admission webhook.*//' | sed s'/.* Review object: //' | jq

{
  "operation": "UPDATE",
  "userInfo": {
    "username": "me@example.com",
    "groups": [
      "system:authenticated"
    ],
    "extra": {
      "user-assertion.cloud.google.com": [
        "XX=="
      ]
    }
  },
  "object": {
    "kind": "Ingress",
    "apiVersion": "extensions/v1beta1",
    "metadata": {
      "annotations": {
        "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"basic-ingress\",\"namespace\":\"frontend\"},\"spec\":{\"backend\":{\"serviceName\":\"web\",\"servicePort\":8080}}}\n"
      },
      "finalizers": [
        "finalizers.gatekeeper.sh/sync"
      ],
      "name": "basic-ingress",
      "namespace": "frontend",
      "uid": "403d4a8f-2db0-11ea-828b-42010a80004c",
      "resourceVersion": "2135434",
      "generation": 2,
      "creationTimestamp": "2020-01-02T22:36:00Z"
    },
    "spec": {
      "backend": {
        "serviceName": "web",
        "servicePort": 8081
      }
    },
    "status": {
      "loadBalancer": {
        "ingress": [
          {
            "ip": "1.2.3.4"
          }
        ]
      }
    }
  },
  "oldObject": {
    "kind": "Ingress",
    "apiVersion": "extensions/v1beta1",
    "metadata": {
      "name": "basic-ingress",
      "namespace": "frontend",
      "uid": "403d4a8f-2db0-11ea-828b-42010a80004c",
      "resourceVersion": "2135434",
      "generation": 1,
      "creationTimestamp": "2020-01-02T22:36:00Z",
      "annotations": {
        "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"basic-ingress\",\"namespace\":\"frontend\"},\"spec\":{\"backend\":{\"serviceName\":\"web\",\"servicePort\":8080}}}\n"
      },
      "finalizers": [
        "finalizers.gatekeeper.sh/sync"
      ]
    },
    "spec": {
      "backend": {
        "serviceName": "web",
        "servicePort": 8080
      }
    },
    "status": {
      "loadBalancer": {
        "ingress": [
          {
            "ip": "1.2.3.4"
          }
        ]
      }
    }
  },
  "uid": "6410973b-2db1-11ea-828b-42010a80004c",
  "kind": {
    "group": "extensions",
    "version": "v1beta1",
    "kind": "Ingress"
  },
  "resource": {
    "group": "extensions",
    "version": "v1beta1",
    "resource": "ingresses"
  },
  "options": null,
  "_unstable": {
    "namespace": {
      "kind": "Namespace",
      "apiVersion": "v1",
      "metadata": {
        "creationTimestamp": "2019-12-26T23:29:47Z",
        "annotations": {
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"frontend\"}}\n"
        },
        "name": "frontend",
        "selfLink": "/api/v1/namespaces/frontend",
        "uid": "9a96616e-2837-11ea-8e60-42010a80017b",
        "resourceVersion": "33039"
      },
      "spec": {
        "finalizers": [
          "kubernetes"
        ]
      },
      "status": {
        "phase": "Active"
      }
    }
  },
  "name": "basic-ingress",
  "namespace": "frontend",
  "dryRun": false
}
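With that shape in hand, rules can key off any of these fields. For example, a hypothetical rule that forbids changing the backend port of an existing Ingress by comparing object against oldObject:
violation[{"msg": msg}] {
  input.review.operation == "UPDATE"
  input.review.object.spec.backend.servicePort != input.review.oldObject.spec.backend.servicePort
  msg := sprintf("changing servicePort on %v is not allowed", [input.review.object.metadata.name])
}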
We can clean up the debugging rule by deleting the constraint:
kubectl delete k8sdebugtemplate.constraints.gatekeeper.sh debuggingdeny 
When it comes to policy writing, you can test individual policies like this:
$ docker run -v /Users/gcastle/git/gatekeeper/library/general/uniqueingresshost:/tests openpolicyagent/opa test /tests/src.rego /tests/src_test.rego
PASS: 12/12
The current testing relies on rego assertions as tests, which is a bit of a PITA when you need a lot of test permutations on objects, because each test has to re-create the admission object above to some extent. It's also not super obvious what passing and failing mean; it could do with a higher-level test framework. The conventions are:
  • "count(results) == 0" means the test passes if there were no violations
  • "count(results) == 1" means the test passes if there was exactly one violation
Bundling up templates is easy because gatekeeper uses kustomize. To create all the templates you could just do:
kustomize build templates | kubectl apply -f -
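That assumes a templates/ directory containing the ConstraintTemplate YAMLs plus a kustomization.yaml listing them; a minimal sketch with made-up file names:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - k8snoexternalservices.yaml
  - k8sdebugtemplate.yaml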