tag:blogger.com,1999:blog-53850365871944040382024-03-19T02:50:03.098-07:00Technical notes, my online memoryUnknownnoreply@blogger.comBlogger419125tag:blogger.com,1999:blog-5385036587194404038.post-37837027360542121792024-02-09T14:33:00.000-08:002024-02-09T14:33:56.138-08:00Listing TLS cipher suitesNmap will list the cipher suites of a server and warn about old or weak ones:
<pre class="prettyprint">nmap --script ssl-enum-ciphers -p 443 www.example.com</pre>
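A quick way to pull just the weak suites out of that report. The sample text below mimics ssl-enum-ciphers output (trimmed, illustrative only); against a live host you'd pipe the nmap command above instead of using the variable:

```shell
# The script grades each suite A-F; anything graded C or worse deserves a look.
scan='|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
|     warnings:
|       64-bit block cipher 3DES vulnerable to SWEET32 attack'
# Keep only the suite lines with a failing grade.
weak=$(printf '%s\n' "$scan" | grep -E ' - [C-F]$')
echo "$weak"
```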
Ghttp://www.blogger.com/profile/04827935129754285904noreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-40656864694775241682023-01-05T15:55:00.002-08:002023-01-05T15:55:25.451-08:00Gatekeeper/policycontroller kubectl cheat sheetGatekeeper/policycontroller cheat sheet.<br/><br/>
List all the constraint templates:
<pre class="prettyprint">
kubectl get constrainttemplates -l="configmanagement.gke.io/configmanagement=config-management"
</pre>
Get the status of a particular constraint, including logged violations:
<pre class="prettyprint">
kubectl get k8sallowedrepos.constraints.gatekeeper.sh repo-is-gcr
</pre>
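To pull just the violation messages out of that status, a sed sketch. The YAML here is a trimmed stand-in for what the command above returns with `-o yaml` (field names follow gatekeeper's status schema, content is illustrative):

```shell
# Trimmed sample of a constraint's status block; the real object has more fields.
status='status:
  totalViolations: 2
  violations:
  - enforcementAction: deny
    message: container image must come from gcr.io
  - enforcementAction: deny
    message: container image must come from gcr.io'
# Print only the message lines, stripping the YAML key.
messages=$(printf '%s\n' "$status" | sed -n 's/^ *message: //p')
echo "$messages"
```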
Get the policy controller logs:
<pre class="prettyprint">
kubectl logs -n gatekeeper-system -l gatekeeper.sh/system=yes
</pre>Ghttp://www.blogger.com/profile/04827935129754285904noreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-85037216168882793192023-01-04T15:58:00.001-08:002023-01-04T15:58:27.523-08:00Replacements for docker<p> With docker changing its license and the general problem of running a super privileged daemon, I've been looking for alternatives. Here's what I've found to work.</p>
<p>Gcrane for interacting with registries:</p><p><br /></p>
<pre class="prettyprint">
go install github.com/google/go-containerregistry/cmd/gcrane@latest
export PATH="${HOME}/go/bin:$PATH"
gcrane pull ubuntu ubuntu.tar
gcrane push ubuntu.tar gcr.io/my-project/ubuntu
gcrane cp ubuntu gcr.io/my-project/ubuntu
</pre>
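All of these commands take image references of the form `registry/repository[:tag]`. A quick shell sketch of how such a reference splits apart (assumes a tag is present and there's no digest; illustrative only):

```shell
# Split an image reference into its parts with parameter expansion.
ref="gcr.io/my-project/ubuntu:latest"
registry=${ref%%/*}   # everything before the first slash
rest=${ref#*/}        # my-project/ubuntu:latest
tag=${rest##*:}       # note: falls back to the whole string if no tag is present
repo=${rest%:*}
echo "$registry $repo $tag"
```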
<p>Podman for building and pushing:</p>
<pre class="prettyprint">
sudo apt-get install podman
</pre>
If you get this warning:
<blockquote>
Reading allowed ID mappings: reading subuid mappings for user "${USER}" and subgid mappings for group "${USER}": no subuid ranges found for user "${USER}" in /etc/subuid
</blockquote>
You need to allocate subordinate UID/GID ranges for your user:
<pre class="prettyprint">
start_id=100000
sudo usermod --add-subuids ${start_id}-$(( start_id + 65535 )) $(whoami)
sudo usermod --add-subgids ${start_id}-$(( start_id + 65535 )) $(whoami)
</pre>
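The arithmetic in those commands just builds a contiguous block of 65536 subordinate IDs starting at 100000 (the conventional first range); spelled out:

```shell
# The range string the usermod invocations above actually pass.
start_id=100000
range="${start_id}-$(( start_id + 65535 ))"
echo "$range"
```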
Then you can use it:
<pre class="prettyprint">
podman build -t ${IMG}:${TAG} .
# Auth to an artifact registry repo
gcloud auth print-access-token | podman login -u oauth2accesstoken --password-stdin ${REGION}-docker.pkg.dev
podman push ${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMG}:${TAG}
# Run a container
podman run --rm -it alpine:latest /bin/sh
</pre>Ghttp://www.blogger.com/profile/04827935129754285904noreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-20421331224082551002022-09-26T10:59:00.009-07:002022-09-26T11:15:02.156-07:00Unpatchable vulnerabilities detected by scanners<div>There are quite a lot of vulnerabilities in <a href="https://nvd.nist.gov/">the NVD</a> that aren't really vulnerabilities but <i>may</i> show up in vulnerability scanners depending on the behavior of the scanner. Here are some examples of open vulnerabilities (pulled from a Debian container that's up-to-date at the time of writing) that just aren't vulnerabilities at all:</div><div><ul>
<li><a href="https://security-tracker.debian.org/tracker/CVE-2004-0971">We don't ship that vulnerable script in kerberos</a></li><li><a href="https://security-tracker.debian.org/tracker/CVE-2005-2541">That's how tar works</a></li><li>The <a href="https://security-tracker.debian.org/tracker/CVE-2007-5686">default setting has blocked unknown username logging for more than a decade</a>; you can change it for debugging.</li><li><a href="https://security-tracker.debian.org/tracker/CVE-2010-4756">That's how glob works</a></li><li>Not <a href="https://security-tracker.debian.org/tracker/CVE-2011-3374">actually a vulnerability because URI isn't used</a></li><li><a href="https://security-tracker.debian.org/tracker/CVE-2007-6755">Dual_EC_DRBG is unused/broken in OpenSSL</a></li></ul><div>As <a href="https://raesene.github.io/blog/2020/11/22/When_Is_A_Vulnerability_Not_A_Vulnerability/">raesene@ points out</a>, whether these show up in your container vulnerability scanner depends entirely on the scanner. Empirical testing shows that Nessus filters out quite a lot of unactionable vulnerability scan results, whereas free scanners like <a href="https://github.com/aquasecurity/trivy">Trivy</a>, or even other paid scanners, don't. They take the approach of "show everything, and let the user filter down to the actionable vulns that have patches".</div><div><br /></div><div>The second approach is the easiest and least controversial for the scanner implementor because they don't have to make any security decisions about what to show or not show. It effectively delegates the problem of filtering out the noise to the user. The first problem with this approach is that the default view is very noisy and it's not immediately obvious how to make it better, especially for practitioners coming from scanners that filter the noise up-front. 
It takes effort to get to a sensible view.</div><div><br /></div><div>The second problem is that if you just filter vulns by which ones have patches, to eliminate the this-isn't-a-vuln entries from 2005, you may miss something important that isn't patched yet but that you should be mitigating with a workaround or settings change.</div><div><br /></div><div>I don't think we're doing users any favors by continuing to put 18-year-old "vulnerabilities" that were never vulnerabilities in their scan results. It's harder to do, but I think a line should be drawn by the scanner as nessus does.</div><div><br /></div><div>Other ancient unactionable vulnerabilities you'll see, in tools that don't filter up-front, are technically vulnerabilities but low enough severity that no-one has gotten around to fixing them for years. Some are incompatible with current software so there's no obvious fix without something like a new kernel syscall.</div></div><div><ul style="text-align: left;"><li><a href="https://security-tracker.debian.org/tracker/CVE-2011-4116">Perl symlinks</a></li><li><a href="https://security-tracker.debian.org/tracker/CVE-2013-4235">Shadow TOCTOU</a></li><li><a href="https://security-tracker.debian.org/tracker/CVE-2016-2781">chroot with --userspec</a></li><li><a href="https://security-tracker.debian.org/tracker/CVE-2013-4392">systemd and selinux</a></li></ul><div>Should these be filtered out? From the user's, i.e. the scan consumer's, point of view they are basically useless. It's not like the average practitioner has the skills or time to go fix these, and even if they did the projects may just reject the fixes.</div><div><br /></div><div>Showing them in tools, "i.e. many eyes", isn't exerting much pressure on getting them fixed because they are all multiple years old or completely abandoned. 
From a scanner implementor point of view the decision on whether to show them is more difficult, because unlike the first list these are real vulns, just not important ones that are going to be fixed inside of any kind of compliance SLO.</div></div><div><br /></div><div>So should a scanner hide a "real" vuln? Or show the user an unpatchable vuln in their dashboard that may, depending on the user's specific compliance requirements, trigger a need to <a href="https://www.coalfire.com/the-coalfire-blog/managing-your-vulnerabilities-fedramp-style">write monthly deviation requests forever</a>? Tough choice.</div>Ghttp://www.blogger.com/profile/04827935129754285904noreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-16171885210186282542022-06-09T10:53:00.003-07:002022-06-09T10:53:33.154-07:00GNOME desktop: disabling all the annoying stuff<p> Sadly GNOME seems to be missing (or has removed) options to disable all of the annoying stuff it's doing.</p><p>To disable the stupid workspace switching animation you have to install an entire extension. I installed <a href="https://github.com/windsorschmidt/disable-workspace-switcher-popup">this one directly from github</a>, the code is tiny, you can read it.</p><p>To turn off the hotcorner that I continually accidentally hit there is apparently no longer a UI option, but running:</p><pre class="prettyprint">gsettings set org.gnome.desktop.interface enable-hot-corners false</pre><p>worked.</p><p>Annoyingly the UI control you <i>do</i> have is not anywhere intuitive like with GNOME in the name. It's in "tweaks" and "extensions". 
Tweaks is where you can adjust font size, Extensions is where the workspace switching toggle shows up and also where you can disable the applications menu.</p>Ghttp://www.blogger.com/profile/04827935129754285904noreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-74064561818244580802021-06-03T17:00:00.000-07:002021-06-03T17:00:17.906-07:00Quick summary of the cybersecurity executive orderExec order <a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/">is here</a>, <a href="https://www.whitehouse.gov/briefing-room/statements-releases/2021/05/12/fact-sheet-president-signs-executive-order-charting-new-course-to-improve-the-nations-cybersecurity-and-protect-federal-government-networks/">fact sheet summary</a>. Good analysis from <a href="https://www.lawfareblog.com/everything-you-need-know-about-new-executive-order-cybersecurity">lawfare blog here</a>.
<ul>
<li>Share info on incidents (anything that impacts CIA according to <a href="https://www.law.cornell.edu/uscode/text/44/3552">44 U.S.C. 3552(b)(2)</a>, which could be read incredibly broadly) by amending gov contractual language.
<li>Zero trust all the things. Make a plan to adopt zero trust as defined by NIST. It's basically defense in depth plus least privilege and seems about as likely to make progress as a result of this order as those general ideas have made in the last 15 years.
<li>Use FedRAMP to set a cloud security strategy and adopt new cloud security principles. FedRAMP will develop "cloud-security technical reference architecture documentation that illustrates recommended approaches to cloud migration and data protection for agency data collection and reporting."
<li>CISA to develop "a cloud services governance framework" which sounds like it's to help with gov IR: "identify a range of services and protections available to agencies based on incident severity".
<li>Gov agencies must identify and report sensitive unclass data. I interpret this as the beginning of a process to adjust thinking from "unclass data doesn't matter" to a more sensible data classification that isn't solely focused on impact to national security.
<li>MFA and "encryption at rest and in transit" within 180 days for all gov agencies. Reports every 60 days after.
<li>Train gov agencies on FedRAMP and automate fedramp comms/forms with CSPs. Map compliance requirements onto FedRAMP authorization requirements and rely on the compliance certs instead of re-doing work for FedRAMP.
<li>Publish secure software supply chain guidelines for "critical software" within 180 days, NIST to publish 90 days after that. Preview of requirements around providing purchaser a software bill of materials, proof of provenance, vuln disclosure etc. Format of BOM to be decided and go into contract language within a year. This whole section is very optimistic.
<li>Consider consumer labelling for IoT re secure supply chain. This isn't my field but if I was buying one of these devices I would love to know what the security patch frequency and EOL is.
<li>Build a cyber safety review board that looks at big incidents modeled after the NTSB. This is great.
<li>CISA to write an incident response playbook for all gov agencies. This might be helpful for agencies that have no such playbooks, and may be a hindrance for those that already have good agency-specific ones. A bad, and likely, outcome would be to force private sector companies with sophisticated response programs to do worse security response because they need to follow the letter of the official government playbook.
<li>EDR initiative: "CISA, to engage in cyber hunt, detection, and response activities". They get access to all data they need to do it, without any pre-authorization. This seems big. The lawfare blog points out that "Congress actually granted CISA expanded (and clarified) centralized threat-hunting authority in Section 1705 of the fiscal 2021 National Defense Authorization Act". Will we see a gov EDR product that has to be able to run on all gov-owned infra, including cloud?
<li>Gov agencies need logs from CSPs and to be able to provide those logs to DHS for analysis. Logs need to be signed at export time to prove authenticity as they pass through multiple hands.
<li>Classified systems should do the same or better as this exec order, without upsetting the existing rules/authorities.
</ul>Ghttp://www.blogger.com/profile/04827935129754285904noreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-63617108528729046952021-05-12T22:14:00.002-07:002021-05-12T23:19:22.678-07:00Debug logs for nest wifi and google wifi<p> There are no logs available in the app, but there's quite a lot available from the diagnostic report API. It's in protobuf format, so someone wrote a <a href="https://github.com/benmanns/onhub/tree/master/cmd/onhubdump">handy little parser</a>.</p><p><br /></p>
<pre class="prettyprint">
go install github.com/benmanns/onhub/cmd/onhubdump@latest
~/go/bin/onhubdump http://192.168.86.1/api/v1/diagnostic-report > logs.json
$ jq 'keys' logs.json
[
"commandOutputs",
"fileLengths",
"files",
"networkConfig",
"stormVersion",
"unixTime",
"unknown1",
"unknownPairs",
"version",
"wanInfo",
"whirlwindVersion"
]
$ jq -r '.files[].path' logs.json
/etc/lsb-release
/etc/resolv.conf
/proc/net/arp
/proc/slabinfo
/proc/meminfo
/sys/firmware/log
/var/log/debug-log/debug-log
/var/log/boot.log
/var/log/net.log
/var/log/update_engine/update_engine.20200102-000001
/var/log/update_engine/update_engine.20190102-000001
/var/lib/ap/monitor/wan_idle_usage
/var/lib/ap/monitor/child_idle_usage
/var/lib/ap/health-monitor/wan_connectivity_history
/var/log/ap_fresh_dns_messages
/var/log/ap_https_server_messages
/var/log/critical_events.log
/var/log/messages
# Get /var/log/messages content (select by path; array indexes shift between reports):
$ jq -r '.files[] | select(.path == "/var/log/messages") | .content' logs.json | less
</pre>Ghttp://www.blogger.com/profile/04827935129754285904noreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-45187736639680140612020-08-21T23:34:00.002-07:002020-08-21T23:34:35.947-07:00DNS CNAME at the root of a domain<p>TIL about DNS CNAME "flattening" which is <a href="https://blog.cloudflare.com/introducing-cname-flattening-rfc-compliant-cnames-at-a-domains-root/">a cloudflare feature</a> that allows you to put a CNAME at the domain root: something that is actually not allowed by the DNS RFC spec.</p><p>Why does that matter? I was helping a nonprofit who is using a website hosting company, but the nonprofit has its own domain name. The hosting company tells them to "create a CNAME pointing at blahblah.somehosting.com" which works great, I created "www.example.com" as a CNAME pointing to the hosting address.</p><p>But what about the "naked" domain? i.e. if I type example.com like a normal person instead of www.example.com it doesn't resolve, and I don't have anywhere to point it. The cloudflare solution as above is to allow you to do something non-RFC compliant and also put a CNAME record at the root.</p><p>Google domains takes a different approach, <a href="https://serverfault.com/questions/617248/does-google-domains-support-cname-like-functionality-at-the-zone-apex">to solve the problem you can create a subdomain forward</a> for "@" and point it at your www.example.com target. 
With "forward path" enabled on the entry, the URLs will also be preserved through the redirect.</p><p>Under the hood Google creates A records for the root domain, so a request for the root domain will resolve to a Google service that then resolves your www.example.com CNAME, establishes an HTTP session with the requester and issues the requesting browser a 301/302 redirect.</p>Ghttp://www.blogger.com/profile/04827935129754285904noreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-8291087014169605522020-05-25T13:03:00.001-07:002020-05-25T13:03:37.784-07:00Measuring internet bandwidth in a cron script speedtest.netThis is a simple way to keep tabs on bandwidth provided by your ISP. It's also useful as a historical record to prove exactly when service got worse.<br /><br />
The speedtest-cli is <a href="https://github.com/sivel/speedtest-cli">a python CLI for speedtest.net</a> that has <a href="https://github.com/robertcsapo/docker-speedtest-cli">been dockerized</a> by some kind folks.<br /><br />
To get a self-appending CSV just run:
<pre class="prettyprint">
/usr/bin/docker run --rm robertcsapo/speedtest --csv >> speedtest.csv
</pre>
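Once a few rows have accumulated you can sanity-check the numbers. Download speed is the 7th field (bits per second) in speedtest-cli's documented --csv ordering; the sample rows below stand in for the real file, and the naive comma split assumes no quoted commas in the server-name fields:

```shell
# Sample rows standing in for speedtest.csv (values are made up).
cat <<'EOF' > /tmp/speedtest-sample.csv
1234,ExampleISP,ExampleCity,2020-05-25T13:03:00Z,10.5,12.3,80000000,10000000,,203.0.113.5
1234,ExampleISP,ExampleCity,2020-05-26T13:03:00Z,10.5,12.3,120000000,10000000,,203.0.113.5
EOF
# Average the download column across all runs, converted to Mbit/s.
avg=$(awk -F, '{ sum += $7; n++ } END { printf "%.1f", sum / n / 1e6 }' /tmp/speedtest-sample.csv)
echo "${avg} Mbit/s"
```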
Assuming you run docker passwordless (actually a bad idea for security) you can then add this to your crontab. You may want to consider running it outside of when you need your network bandwidth the most:
<pre class="prettyprint">
0 1 * * * /usr/bin/docker run --rm robertcsapo/speedtest --csv >> /home/myuser/speedtest.csv
</pre>
For better security, don't run docker passwordless; instead, allow sudo to run only this specific command. Replace the userid 12345 with your own userid (get it by running 'id'). This lets a non-root user run the command:
<pre class="prettyprint">
myuser ALL=(ALL) NOPASSWD: /usr/bin/docker run -u 12345 --rm robertcsapo/speedtest --csv
</pre>
And then in your cron put:
<pre class="prettyprint">
0 1 * * * sudo /usr/bin/docker run -u 12345 --rm robertcsapo/speedtest --csv >> /home/me/speedtest.csv
</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-70736955067514976112020-01-02T15:03:00.002-08:002023-06-14T21:40:15.462-07:00Gatekeeper, OPA, rego: notes from testing and debugging policiesInstalling gatekeeper is easy:
<pre class="prettyprint">
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
</pre>
We can see this creates configs and constrainttemplates CRDs:
<pre class="prettyprint">
$ kubectl api-resources | grep gatekeeper.sh
configs config.gatekeeper.sh true Config
constrainttemplates templates.gatekeeper.sh false ConstraintTemplate
</pre>
The constraints based on the constraint templates will themselves be their own CRDs. In the gatekeeper-system namespace we have the controller manager and webhook that will serve the validating webhook requests from the API server:
<pre class="prettyprint">
$ kubectl get all -n gatekeeper-system
NAME READY STATUS RESTARTS AGE
pod/gatekeeper-controller-manager-77ff8cc995-sh8xq 1/1 Running 1 6d21h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gatekeeper-webhook-service ClusterIP 10.0.5.112 <none> 443/TCP 6d21h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/gatekeeper-controller-manager 1/1 1 1 6d21h
NAME DESIRED CURRENT READY AGE
replicaset.apps/gatekeeper-controller-manager-77ff8cc995 1 1 1 6d21h
</pre>
The next step is creating constraint templates; here's one that denies all Ingress objects:
<pre class="prettyprint">
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8snoexternalservices
spec:
crd:
spec:
names:
kind: K8sNoExternalServices
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8snoexternalservices
violation[{"msg": msg}] {
input.review.kind.kind == "Ingress"
re_match("^(extensions|networking.k8s.io)$", input.review.kind.group)
msg := sprintf("No external service exposure is allowed via ingress: %v", [input.review.object.metadata.name])
}
</pre>
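A template on its own enforces nothing until a constraint instantiates it. A sketch of a matching constraint (the name `no-external-ingress` is made up; the match block follows the usual gatekeeper constraint shape):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sNoExternalServices
metadata:
  name: no-external-ingress
spec:
  match:
    kinds:
    - apiGroups: ["extensions", "networking.k8s.io"]
      kinds: ["Ingress"]
```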
But how did we know that it was input.review.kind.kind? That's pretty unintuitive. Turns out it's because that's the structure of the object passed to the validating admission webhook. We can see this by creating a constraint template that just blocks everything and logs the full review object:
<pre class="prettyprint">
$ cat template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8sdebugtemplate
spec:
crd:
spec:
names:
kind: K8sDebugTemplate
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package debugging
violation[{"msg": msg}] {
msg := sprintf("Review object: %v", [input.review])
}
$ kubectl apply -f template.yaml
</pre>
And then apply that to ingress objects:
<pre class="prettyprint">
$ cat constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDebugTemplate
metadata:
name: debuggingdeny
spec:
match:
kinds:
- apiGroups: ["extensions", "networking.k8s.io"]
kinds: ["Ingress"]
$ kubectl apply -f constraint.yaml
$ kubectl api-resources --api-group=constraints.gatekeeper.sh
NAME SHORTNAMES APIGROUP NAMESPACED KIND
k8sdebugtemplate constraints.gatekeeper.sh false K8sDebugTemplate
</pre>
Then when we create any ingress we get a full object logged:
<pre class="prettyprint">
$ cat basic-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: basic-ingress
namespace: frontend
spec:
backend:
serviceName: web
servicePort: 8080
</pre>
If we patch an existing object we get to see the old object filled out too:
<pre class="prettyprint">
$ kubectl patch -f basic-ingress.yaml -p '{"spec":{"backend":{"servicePort":8081}}}' 2>&1 | sed s'/): admission webhook.*//' | sed s'/.* Review object: //' | jq
{
"operation": "UPDATE",
"userInfo": {
"username": "me@example.com",
"groups": [
"system:authenticated"
],
"extra": {
"user-assertion.cloud.google.com": [
"XX=="
]
}
},
"object": {
"kind": "Ingress",
"apiVersion": "extensions/v1beta1",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"basic-ingress\",\"namespace\":\"frontend\"},\"spec\":{\"backend\":{\"serviceName\":\"web\",\"servicePort\":8080}}}\n"
},
"finalizers": [
"finalizers.gatekeeper.sh/sync"
],
"name": "basic-ingress",
"namespace": "frontend",
"uid": "403d4a8f-2db0-11ea-828b-42010a80004c",
"resourceVersion": "2135434",
"generation": 2,
"creationTimestamp": "2020-01-02T22:36:00Z"
},
"spec": {
"backend": {
"serviceName": "web",
"servicePort": 8081
}
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "1.2.3.4"
}
]
}
}
},
"oldObject": {
"kind": "Ingress",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "basic-ingress",
"namespace": "frontend",
"uid": "403d4a8f-2db0-11ea-828b-42010a80004c",
"resourceVersion": "2135434",
"generation": 1,
"creationTimestamp": "2020-01-02T22:36:00Z",
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"basic-ingress\",\"namespace\":\"frontend\"},\"spec\":{\"backend\":{\"serviceName\":\"web\",\"servicePort\":8080}}}\n"
},
"finalizers": [
"finalizers.gatekeeper.sh/sync"
]
},
"spec": {
"backend": {
"serviceName": "web",
"servicePort": 8080
}
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "1.2.3.4"
}
]
}
}
},
"uid": "6410973b-2db1-11ea-828b-42010a80004c",
"kind": {
"group": "extensions",
"version": "v1beta1",
"kind": "Ingress"
},
"resource": {
"group": "extensions",
"version": "v1beta1",
"resource": "ingresses"
},
"options": null,
"_unstable": {
"namespace": {
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"creationTimestamp": "2019-12-26T23:29:47Z",
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"frontend\"}}\n"
},
"name": "frontend",
"selfLink": "/api/v1/namespaces/frontend",
"uid": "9a96616e-2837-11ea-8e60-42010a80017b",
"resourceVersion": "33039"
},
"spec": {
"finalizers": [
"kubernetes"
]
},
"status": {
"phase": "Active"
}
}
},
"name": "basic-ingress",
"namespace": "frontend",
"dryRun": false
}
</pre>
We can clean up the debugging rule by deleting the constraint:
<pre class="prettyprint">
kubectl delete k8sdebugtemplate.constraints.gatekeeper.sh debuggingdeny
</pre>
In terms of policy writing you can test individual policies like this:
<pre class="prettyprint">
$ docker run -v /Users/gcastle/git/gatekeeper/library/general/uniqueingresshost:/tests openpolicyagent/opa test /tests/src.rego /tests/src_test.rego
PASS: 12/12
</pre>
The current testing relies on rego assertions as tests, which is a bit of a PITA when you need to create a lot of test permutations on objects because you <a href="https://github.com/open-policy-agent/gatekeeper/blob/fb72b78249d5bf0b02ee8f4abead55489b956749/library/general/uniqueingresshost/src_test.rego#L81">need to re-create the admission object</a> above to some extent. It's also not super obvious what passing and failing mean; it could do with a higher-level test framework:
<ul>
<li>"count(results) == 0" means pass this test if there were no violations
<li>"count(results) == 1" means pass this test if there was exactly one violation
</ul>
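For the deny-ingress template earlier, a minimal assertion looks something like this (a sketch only; the package name and `violation` rule must match the template's rego, and the mocked review object is trimmed to the fields that rule reads):

```rego
package k8snoexternalservices

# Pass if a review for an extensions/Ingress object produces exactly one violation.
test_ingress_denied {
    review := {"kind": {"kind": "Ingress", "group": "extensions"},
               "object": {"metadata": {"name": "basic-ingress"}}}
    results := violation with input as {"review": review}
    count(results) == 1
}
```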
Bundling up templates is easy because gatekeeper is using kustomize. To create all the templates you could just do:
<pre class="prettyprint">
kustomize build templates | kubectl apply -f -
</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-51580461812108379782019-12-30T15:32:00.000-08:002019-12-30T15:32:58.742-08:00Vim regex examples<pre class="prettyprint">
# Put a quote in front of the first alphabetic character on each line
# (%s applies to every line; the trailing c asks for confirmation per match)
:%s!\([a-zA-Z]\)!"\1!c
</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-10602252201136994642019-03-24T14:13:00.000-07:002019-03-24T14:13:12.552-07:00Finding DNS servers provided by DHCP using network manager on LinuxSuppose you want to figure out which DNS server you're using when it's provided via DHCP. This can be surprisingly difficult. Suppose:
<ol>
<li>Nothing in /etc/resolv.conf
<li>`dig google.com` shows your machine uses a local DNS server (local dnsmasq)
<li>/var/lib/dhcp/dhclient.leases looks...wrong
</ol>
I resorted to:
<pre class="prettyprint">
sudo tcpdump -i eth0 -s0 -n 'udp port 53'
</pre>
Which is guaranteed to show you what's going on. It turns out NetworkManager stashes leases under its own directory. If you run <pre>ps aux | grep dhclient</pre> you can see NetworkManager running dhclient (possibly multiple instances, one per interface), and the -lf (lease file) parameter will point you at something like <pre>/var/lib/NetworkManager/dhclient-[guid].lease</pre>
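The lease file itself is plain text; the DNS servers sit on an "option domain-name-servers" line. A sketch (sample lease content stands in for the real file):

```shell
# Extract the DHCP-provided DNS servers from dhclient lease syntax.
lease='lease {
  interface "eth0";
  option routers 192.168.86.1;
  option domain-name-servers 192.168.86.1;
}'
dns=$(printf '%s\n' "$lease" | sed -n 's/^ *option domain-name-servers \(.*\);$/\1/p')
echo "$dns"
```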
Mystery solved!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-78654301096286984522018-08-26T17:09:00.000-07:002018-08-26T17:18:09.672-07:00Scrub GPS location from JPEG EXIF dataI tried two linux tools, exiv2 and exiftool. exiftool seems to be better.<br /><br />
<h2>exiv2</h2>
View full exif data with this (the default view only gives you a summary):
<pre class="prettyprint">
exiv2 -pa myimg.jpg
</pre>
Delete all exif data with this:
<pre class="prettyprint">
exiv2 rm myimg.jpg
</pre>
There doesn't appear to be a way to just delete the GPS info with exiv2.<br /><br />
<h2>exiftool</h2>
Install:
<pre class="prettyprint">
sudo apt-get install libimage-exiftool-perl
</pre>
View exif data:
<pre class="prettyprint">
exiftool myimg.jpg
</pre>
Delete just GPS data with this:
<pre class="prettyprint">
exiftool -gps:all= myimg.jpg
</pre>
Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-5385036587194404038.post-78122902372498325442017-11-25T16:41:00.001-08:002017-11-25T16:41:49.675-08:00Sharing Kindle content between Amazon household membersAmazon lets you create a "household" to share content, but it's <i>really</i> not obvious how you make kindle content from the other adult turn up on your kindle. Once you have created a household you need to go into <a href="https://www.amazon.com/gp/help/customer/display.html?nodeId=201620400">each device under your devices section in the web app</a>. There you can click a box to "show content from [insert other adult's name]". Then when you sync your phone kindle app or your physical kindle the books shared will show up. So you'll need to do that every time you want to read on a device.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-81547367348674301382017-11-19T09:25:00.000-08:002018-08-09T22:07:18.491-07:00Adding a yubikey GPG key onto a new machineIf you are using a <a href="http://ilostmynotes.blogspot.com/2016/02/storing-and-using-gpg-keys-on-yubikey.html">Yubikey encryption scheme</a> and want to add the key onto a new system there's a few hoops to jump through. These instructions are for Ubuntu trusty.<br />
<br />
First, get set up for using the yubikey:<br />
<pre class="prettyprint">
sudo apt-get install gnupg-agent scdaemon pcscd pcsc-tools</pre>
<div>
You probably need to log out and back in. This <a href="https://blog.josefsson.org/2015/01/02/openpgp-smartcards-and-gnome/">post has extra setup</a>, but I didn't have to do any of that; perhaps the GNOME keyring badness has been fixed now.</div>
<div>
<br /></div>
<div>
Now check your yubikey is recognized:</div>
<pre class="prettyprint">
pcsc_scan
gpg --card-status
</pre>
<div>
Import the public key into the keyring and trust it:</div>
<pre class="prettyprint">
gpg --import mykey_public_only.asc<br />gpg --expert --edit-key 123456<br />trust (set to ultimate)<br />save
</pre>
You should now be good to go!
<br />
<br />
<br />
One more note: if you have multiple yubikeys for the same secret key and need to switch between them, gpg can keep demanding the card with the previous serial number, even if you delete the secret key. On the Mac I found the easiest way to clean this up was to quit GPG Keychain and just remove the whole gnupg directory:
<pre class="prettyprint">
rm -rf ~/.gnupg
</pre>
You should then be able to import the public key again and get it set up with the new yubikey by running:
<pre class="prettyprint">
gpg --card-status
</pre>
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-423440391779324842017-10-23T20:46:00.001-07:002017-10-23T20:46:39.592-07:00Upgrading dd-wrt to protect against CVE-2017-14493Google <a href="https://security.googleblog.com/2017/10/behind-masq-yet-more-dns-and-dhcp.html">found, and worked with the dnsmasq author to fix</a>, a bunch of vulnerabilities in dnsmasq. Now everyone needs to update tons of devices, including your router.<br />
<br />
It's been a while since I updated this firmware so I had to figure it out from scratch again. AFAICT the procedure is to go find your router <a href="http://dd-wrt.com/site/support/router-database">in the dd-wrt database</a>. Hopefully it's supported. If it is you can go <a href="ftp://ftp.dd-wrt.com/betas/">download the latest firmware from the ftp server</a> and upload via the web interface. Anything <a href="http://svn.dd-wrt.com/ticket/5987">later than 33430</a> should contain the fix.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-20626503500416119422017-05-26T09:20:00.000-07:002017-05-26T09:20:13.755-07:00Switching yubikeysIn <a href="http://ilostmynotes.blogspot.com/2016/02/storing-and-using-gpg-keys-on-yubikey.html">this post</a> I described how I set up gpg keys on a yubikey. Since I have multiple yubikeys for some redundancy I occasionally have to use a different one. This basically involves deleting the secret key and re-importing it from the yubikey.<br />
<br />
<h4>On OSX</h4>
<br />
Open up GPG keychain and click through the scary warning to delete the secret keys. If you set it up right these are only stubs, the actual key is on the yubikey. Once you've done that, insert the key you want to use and get the stubs recreated with:<br />
<pre>
$ gpg --card-status
</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-78764262026019478632017-03-28T14:16:00.002-07:002017-03-28T14:16:58.110-07:00Managing go versions with gvm<a href="https://github.com/moovweb/gvm">gvm</a> is a way to manage multiple go versions. It has some strange behaviour with go paths that I don't really understand. It essentially <a href="https://github.com/moovweb/gvm/issues/189">sets your GOPATH to a different directory for every version</a>. You could just <a href="http://www.ascent.io/blog/2014/03/11/gvm-with-golang/">append your real go path</a>, but it seems like there might be tooling that <a href="http://stackoverflow.com/questions/36017724/can-i-have-multiple-gopath-directories">doesn't expect gopath to be a list</a>.
My solution was to:
<pre class="prettyprint">
gvm install go1.8
gvm use go1.8
gvm pkgenv
</pre>
This opens $EDITOR, where you can set your GOPATH to $HOME/go. Check the result with:
<pre class="prettyprint">
go env
</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-5414103586345390452017-01-10T17:00:00.001-08:002022-01-26T15:20:22.913-08:00kubectl Kubernetes Cheat SheetA complement to the <a href="http://kubernetes.io/docs/user-guide/kubectl-cheatsheet/">official kubectl cheat sheet</a>.<br />
<br />
<b>Getting kubectl</b><br />
<br />
There are better ways to install it for permanent use, but here's a quick way for temporary use:
<pre class="prettyprint">
export PATH=/tmp:$PATH
cd /tmp
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod 555 kubectl
</pre>
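The nested curl just resolves the latest stable version string; broken out, the URL construction looks like this (the version is pinned to a placeholder here — in practice it comes from stable.txt):
<pre class="prettyprint">
# Normally: stable=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
stable="v1.29.0"   # placeholder version for illustration
url="https://storage.googleapis.com/kubernetes-release/release/${stable}/bin/linux/amd64/kubectl"
echo "${url}"
</pre>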
<b>Nodes</b>
<br />
<pre class="prettyprint">$ kubectl get nodes
$ kubectl get nodes/gke-hello-world-default-pool-9dbb0d2c-5qkl --show-labels
$ kubectl label nodes --all mylabel=myvalue
$ kubectl label nodes --all mylabel-
</pre>
<b><br /></b>
<b>Namespaces</b>
<br />
<br />
<pre class="prettyprint">
$ kubectl get all -n mynamespace
</pre>
<b>RBAC</b><br />
<br />
What permissions do I have?
<br />
<pre class="prettyprint">kubectl auth can-i --list
</pre>
All clusterrolebindings:
<br />
<pre class="prettyprint">kubectl get clusterrolebinding -o yaml
</pre>
Role bindings for all namespaces:
<br />
<pre class="prettyprint">kubectl get rolebinding --all-namespaces -o yaml
</pre>
Privileges of current user:
<br />
<pre class="prettyprint">kubectl create -f - -o yaml << EOF
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectRulesReview
spec:
  namespace: system
EOF
</pre>
<b><br /></b>
<b>DaemonSet</b><br />
<br />
Creating a daemonset:
<br />
<pre class="prettyprint">$ echo 'apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: daemonset-example
spec:
  template:
    metadata:
      labels:
        app: daemonset-example
    spec:
      containers:
      - name: daemonset-example
        image: ubuntu:trusty
        command:
        - /bin/sh
        args:
        - -c
        - >-
          while [ true ]; do
            echo "DaemonSet running on $(hostname)" ;
            sleep 10 ;
          done
' | kubectl create -f -
</pre>
<pre class="prettyprint">$ kubectl delete daemonset daemonset-example
</pre>
<b><br /></b>
<b>Get a shell in a container</b><br />
<pre class="prettyprint">
$ kubectl run --rm=true -i --tty ubuntu --image=ubuntu -- /bin/bash
# Or with an existing container
$ kubectl exec -it shell-demo -- /bin/bash
</pre>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-43059088622158236512016-12-19T13:16:00.000-08:002019-02-15T16:25:36.742-08:00gcloud cheatsheet for GKEThis is by no means comprehensive, it's just some things I've found useful.<br /><br />
<a href="https://cloud.google.com/container-engine/docs/quickstart">Get installed and auth'd</a>:
<br />
<pre>gcloud components install kubectl
gcloud auth application-default login
</pre>
Creating a cluster with a specific version:
<br />
<pre>gcloud config set compute/zone us-west1-b
gcloud beta container clusters create permissions-test-cluster \
--cluster-version=1.6.1 \
--no-enable-legacy-authorization
</pre>
<a href="https://cloud.google.com/container-engine/docs/clusters/upgrade">Upgrading GKE:</a>
<br />
<pre># Get available versions
$ gcloud container get-server-config
Fetching server config for us-west1-b
defaultClusterVersion: 1.5.7
defaultImageType: COS
validImageTypes:
- COS
- CONTAINER_VM
validMasterVersions:
- 1.6.4
- 1.5.7
validNodeVersions:
- 1.6.4
- 1.6.2
- 1.5.7
- 1.5.6
- 1.4.9
$ CLUSTER_NAME="testing"
$ CLUSTER_VERSION="1.6.4"
# Nodes
$ gcloud container clusters upgrade $CLUSTER_NAME --cluster-version=$CLUSTER_VERSION
# Master
$ gcloud container clusters upgrade $CLUSTER_NAME --master --cluster-version=$CLUSTER_VERSION
</pre>
List containers in the google-containers project:
<pre>
gcloud container images list --repository gcr.io/google-containers
</pre>
Tags for a given container:
<pre>
gcloud container images list-tags gcr.io/gke-release/auditproxy
</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-74296036118112460412016-12-15T16:41:00.003-08:002018-04-11T22:54:07.227-07:00Google gcloud tool cheatsheetSome gcloud commands I've found useful:
<pre class="prettyprint">
# See config
gcloud config list
# Change default zone
gcloud config set compute/zone us-central1-a
# Copy a file, default zone
gcloud compute copy-files some/file.txt cloud-machine-name:~/
# Copy a file, specifying zone for machine
gcloud compute copy-files some/file.txt cloud-machine-name:~/ --zone=us-west1-a
# Forward a port with ssh
gcloud compute ssh client-machine-name -- -L 8080:localhost:8080
</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-59439403987857304332016-10-26T09:57:00.000-07:002016-10-26T10:09:29.865-07:00Mac OS X Sierra and SSH keysWith OS X Sierra Apple changed the ssh client key handling behavior. They aligned with OpenSSH behavior by <a href="https://openradar.appspot.com/27348363">not automatically loading passphrases from the keychain on login</a>. More surprisingly, it now <a href="https://openradar.appspot.com/28394826"><b>remembers your ssh key passphrase automatically by default</b></a>. To disable this behavior you can add this to ~/.ssh/config:
<pre class="prettyprint">
Host *
    UseKeychain no
</pre>
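Conversely, if you later decide you do want passphrases stored and keys loaded automatically (roughly the pre-Sierra convenience), the commonly documented settings combine UseKeychain with OpenSSH's AddKeysToAgent:
<pre class="prettyprint">
Host *
    AddKeysToAgent yes
    UseKeychain yes
</pre>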
As you can see in the radar report, deleting keys using "ssh-add -D" seems to be just as problematic and confusing as it is <a href="http://ilostmynotes.blogspot.com/2015/02/workaround-for-broken-vagrant-up-ssh.html">with gnome-keyring</a>, i.e. "All identities removed" is a lie.<br /><br />
For deleting already saved passwords and re-instating the El-Cap ssh behavior see <a href="http://apple.stackexchange.com/questions/253779/macos-10-12-sierra-will-not-forget-my-ssh-keyfile-passphrase/256866">here</a>.
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-26455030847472325302016-10-04T15:28:00.002-07:002016-10-04T15:28:43.708-07:00Prevent system management from installing over a test package on UbuntuWhen you are testing a new package version it's annoying to have your system management come and install the old version over the top of your test one. There's <a href="http://askubuntu.com/questions/18654/how-to-prevent-updating-of-a-specific-package">a bunch of ways to stop this</a>, the one I tend to use on Ubuntu is:
<pre class="prettyprint">
echo "package hold" | sudo dpkg --set-selections
</pre>
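If you're holding more than one test package, the selection lines can be generated in a loop; a minimal sketch (mytool-1 and mytool-2 are placeholder package names):
<pre class="prettyprint">
# Generate "name hold" lines for several packages at once.
for pkg in mytool-1 mytool-2; do
  echo "${pkg} hold"
done          # in practice: pipe this into `sudo dpkg --set-selections`
</pre>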
To undo the hold and go back to normal:
<pre class="prettyprint">
echo "package install" | sudo dpkg --set-selections
</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-39611482344214224122016-07-14T10:41:00.000-07:002016-08-09T10:22:32.968-07:00Running modern python on Ubuntu LTSThe python version on your Ubuntu LTS may be slightly behind latest, or years behind, depending on the release cycle. Here's how to run a newer python without interfering with the system one.<br /><br />
<b>Note that setting an install prefix is necessary to avoid making this the default system python (which will break cinnamon-settings apps as well as possibly other things)</b>. The prefix I chose puts it in a directory with my username.<br /><br />
Download <a href="https://www.python.org/downloads/">the latest python source</a> and install it:
<pre>
sudo apt-get install build-essential libreadline-dev libsqlite3-dev
./configure --enable-ipv6 --enable-unicode=ucs4 --prefix=/usr/local/${USER}/
make
sudo make install</pre>
Your new python is now in /usr/local/${USER}/bin/python2.7. To use it, specify it in any virtualenvs you create. Make it an alias so you never forget:
<pre>
alias virtualenv='virtualenv --python=/usr/local/${USER}/bin/python'
</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5385036587194404038.post-48605302013158083202016-07-12T14:11:00.000-07:002016-07-12T14:11:56.371-07:00Run a different command on an existing docker container using execTo run a previously created container with bash, start it as normal and then use exec (this assumes your original container can actually run successfully):
<pre class="prettyprint">
docker start [container id]
docker exec -it [container id] /bin/bash
</pre>
Unknownnoreply@blogger.com0