Introduction
Welcome to my on-again, off-again personal website, once again rebuilt on a new custom content management system written in Rust.
Finding balance and strategic thinking
So that was an interesting three-month gap in this stream of consciousness I call a blog. I really had nothing to talk about or say, nothing at all, because for most of those three months I was completely focused on a work project that should have been easy but instead was an impossible grind. I still have trouble comprehending why the project wasn’t stopped until the blockers were cleared. It’s so strange to me that a team pushed so hard on such an insignificant project using shortcuts that were brute forced into working. Was there a strategic goal in such an effort? It’s hard to say, as nothing clearly comes to mind. Blockers were found but not cleared; how can that have strategic value? It could, I just lack the information and frame of reference to understand.
That’s the thing about strategic thinking: when you are busy with the hands-on work, it’s often not clear what the bigger picture is. Recently I got into a detailed technical discussion with a peer, unrelated to my work, and at some point he questioned base assumptions that were clearly about implementation strategy, and I couldn’t pull my mind out of the tactical details. At least not until after the conversation, when I was reviewing it in my head.
And the difference between strategy and tactics is relative; there is a wide range of levels of execution, where one level might be strategic to the level beneath it, while further levels above are more strategic still. Understanding that any given plan or procedure is tactical allows for a framing of perspective, that is to say we all operate tactically. And conversely, strategy happens at all levels as well; it is just the reference frame above whatever we are currently doing. If we were to drop down a level to more detailed work, the previous reference frame would be the strategy and the current work would become the tactics.
So everything is unavoidably tactical. Nothing you can do is strategic. At least, not initially, right? You need more than one level of execution to delineate a difference between tactics and strategy, and there must be a clear hierarchy. Why must there be a hierarchy for tactics to change to strategy? Because tactical work can be different but occur at the same level. Sweeping dust and wiping windows are two different tactical things, but one is not necessarily the strategy for the other. In our example of sweeping dust and wiping windows, the strategy may be to clean the house. It is clearly a higher goal that encompasses both tactical actions. There may be a higher strategy to cleaning the house, of achieving a peaceful environment. To that we may see yet a higher strategy of living well. And so forth.
It is easy to get lost in the current actions of tactics and not even be aware of the strategy. It can take discipline and effort to pull oneself out of the moment and consider the strategy. But the effort and skill of stepping back and seeing the strategy is important. Tactical work, which is really all work, can change, as it must, to the environment it is enacted in. The change in tactics required to meet small goals may diverge from alignment with the higher strategic goals. This divergence of focus can lead to successful tactical achievements, but to strategic goals that are harder to reach or missed entirely.
This segues into thinking about iterative engineering, where our reference frame should continually be re-evaluated. As small pieces of tactics are completed, or if they start to draw out longer than expected, we should be stepping back from the local goals and evaluating what we are doing against the strategic goals we have. This needs to be done with an understanding that our strategic goals are also tactical and subject to the same errors of alignment and execution, so all levels of our forward progress should be flexible enough to adjust to the environment and our work within it. Constant realignment of all plans seems to be the only real way to ensure maximum effectiveness in execution, and anything else is going to be inefficient and even counterproductive.
Elon's Development Algorithm
So if you haven’t seen it yet, Tim Dodd (Everyday Astronaut) did a fantastic interview with Elon Musk as things accelerate toward the launch of the most powerful rocket ever built over at Starbase, TX (video). Among many other things covered in part 1 of the 3-part series the interview has turned into, there is a fascinating algorithm for design and manufacturing that Elon talks through in some depth, and it is one of the best distillations of agile practice I’ve ever seen. It’s literally a set of guiding principles for fast, iterative development, refined by one of the living masters of agile, which is general enough to apply equally to software or manufacturing, physical or digital development. At a time when "agile theatre" is a movement of no small momentum, this is a welcome breath of fresh air and insight into how to move fast and break things so you can build better things in the end.
So Elon explains this really well with lots of anecdotes and self-deprecation, but here’s my attempt to paraphrase and simplify the algorithm.
1. Make your requirements less dumb.
   - All requirements are dumb, make yours less so.
   - Also, all requirements must have named human owners who take responsibility for them.
   - Smart people are no exception to this rule and their requirements are more suspect for it.
2. Delete the part or process.
   - If you are not forced to add back in at least 10% from time to time, you didn’t delete enough in the first place.
3. Optimize the part or process.
   - Do this after step 2, so that you are not optimizing a step or part which should be deleted.
4. Accelerate cycle time.
5. Automate.
So the first step is great, as it admits to a fundamental flaw of gathering requirements. Making requirements is hard, like predicting-the-future hard, and nobody does it well. No matter how smart you are, or how many times you’ve done something before, your requirements are stupid. This not only concedes a truth, it opens design up to the endless possibility of improvement. If the original requirements are dumb, then there is always room for improvement.
The second step is probably the most important functionally, even if the first is the most important ideologically: strip everything down to the minimum viable product. And it’s ruthless, strip it down until it breaks and is unworkable and things need to be added back in, or you are not doing it right. This is kind of like good security: tighten it down until it’s unworkable, then ease back a little bit. You can always iterate later to add features, but focus on the minimum needed this time (next time too, but we don’t talk about that).
The third step is the great destroyer of projects: no matter how important it is, optimization needs to come after the deletion phase. And optimization should rarely be a first-iteration task; things need to work before you can consider optimizing them. It should be a well known truth that premature optimization is poisonous to development and engineering, but it’s always so tempting to do when presented with a problem or task. And maybe most importantly, optimizing something that doesn’t even need to exist is pure waste, so this step has to come after trying to remove everything non-critical.
The fourth step helps define a difference between optimization and acceleration, which is a really good way to break up the tempo problem. In computing systems it’s often tempting to throw bigger, faster hardware at a problem, but this is hit or miss in effectiveness if you don’t know the problem well enough to optimize it first. Where optimization might be improving the aerodynamics and acceleration might be fitting a bigger engine, both are parts of the same problem that should be distinguished and tackled independently, but in series, to create a better performing whole.
And finally automate, which is often a well known initial goal, but if you don’t understand step 1 and go through the process of steps 2 through 4, you probably don’t understand enough to automate correctly in the first place.
If you’re wondering why I called this a "move fast and break things" approach, please review step 2 again.
Local HTTPS development with Angular tutorial
I’m still a little hesitant to recommend any Google-sponsored open source as an enterprise solution, but Angular has definitely seen some interesting adoption in enterprises. Unfortunately some of the third party supporting documentation seems a bit dated and sparse, so I’m just going to drop this little guide to setting up secure local development. Now by secure, I mean using local TLS certificates so you can use HTTPS; what I don’t mean to imply is that this has anything else to do with the application security of your development.
That said, let’s answer the question, "Why would you want to do that?". Especially in large enterprise applications you are going to be communicating with a whole bunch of other external services and websites: there will be single sign-on SaaS providers, data services over REST or GraphQL, probably embedded widgets/trackers/thingies, and some, or most, of those are going to require that they be embedded in a secure site (served over HTTPS). To see if these external resources work with your site you’ll need to be serving the page/app securely, or integration will fail because of mixed content. Mixed content, secure and insecure bits together, can be annoying by default, or fatal with some of the improved security features of modern browsers. And it’s just going to get harder and harder to make mixed content work in a way you can repeatably test in the future as browsers get stricter with security enforcement. So, here I’m going to go over the three pretty simple steps to set up secure local development with a self-signed cert in Angular.
Assuming you already have NodeJS installed, first install the Angular CLI:
npm install -g @angular/cli
And create a default project as our base to work on.
ng new angular-base --routing=false --style=css
cd angular-base
Step 1: Generating self-signed certificates
Normally I would recommend the excellent mkcert package to create a local CA and generate the certs off that local CA. But sometimes you just can’t do that, so if you can find openssl on your dev system here’s a way to leverage that.
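For comparison, when mkcert is an option, that whole flow is roughly two commands (a sketch; the generated file names vary a little by mkcert version):
mkcert -install
mkcert localhost 127.0.0.1 ::1
The first creates and trusts a local CA, the second typically writes localhost+2.pem and localhost+2-key.pem. Failing that, on to openssl.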
Create a new plaintext file called certificate.cnf with the following contents.
[req]
default_bits = 2048
prompt = no
default_md = sha256
x509_extensions = v3_req
distinguished_name = dn
[dn]
C = US
ST = Georgia
L = Atlanta
O = My Organisation
OU = My Organisational Unit
emailAddress = email@domain.com
CN = localhost
[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
Then run openssl with these parameters.
openssl req -new -x509 -newkey rsa:2048 -sha256 -nodes -keyout localhost.key -days 3560 -out localhost.crt -config certificate.cnf
This parameter soup will generate two files, localhost.key and localhost.crt. The .key file is very sensitive, always considered private, and should never be shared. The .crt file is a public certificate in the x509 format and can be shared, but out of a sense of consistency for security, should never be in code.
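If you want to double check what you just generated, openssl can read the certificate back (an optional sanity check, nothing below depends on it):
openssl x509 -in localhost.crt -noout -subject -dates
openssl x509 -in localhost.crt -noout -text | grep -A1 "Subject Alternative Name"
The first line shows the CN and validity window, the second confirms the localhost subjectAltName made it in.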
Now add these two lines to your .gitignore file:
*.crt
*.key
In this example we are making these files inside our codebase directory for simplicity, but more practically you should put these somewhere else. Such as making a cert directory inside your home folder, and then changing the following configuration steps to point to them there. You can even reuse the "localhost" certs for any local development site you put on the hostname this way.
For teams working with self signed certs this location should be an agreed upon standard location so that the configuration doesn’t need to be modified from person to person working on the project.
Step 2: Set Angular to serve locally with TLS (HTTPS protocol)
The Angular general configuration file needs to be modified to contain an options section which will contain: a boolean flag to serve SSL, a location for the certificate private key file, and a location for the public x509 certificate file. The server configuration section is going to be in angular.json or angular-cli.json depending on your version. The changes are in the "options" sub-section.
...
"serve": {
"builder": "@angular-devkit/build-angular:dev-server",
"configurations": {
"production": {
"browserTarget": "angular-base:build:production"
},
"development": {
"browserTarget": "angular-base:build:development"
}
},
"defaultConfiguration": "development",
"options":
{
"ssl": true,
"sslKey": "localhost.key",
"sslCert": "localhost.crt"
}
},
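With that in place the dev server should come up on HTTPS. As a usage note (the flag names are from recent Angular CLI versions, so treat them as a sketch if you are on an older release), the same settings can also be passed on the command line for a one-off run:
ng serve
ng serve --ssl --ssl-key localhost.key --ssl-cert localhost.crt
Either way the app should now answer at https://localhost:4200 instead of plain HTTP.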
Step 3: Set Angular test runners to run with TLS (HTTPS)
Changing your local serving to HTTPS is a great improvement, but if none of your integration tests pass then it’s just not enough. So we also need to modify the testing configuration to also serve over HTTPS with our self-signed certs. By default this will be in karma.conf.js for Angular CLI projects, but testing suites differ, so adjust these instructions accordingly. In general you will always need to point to the .key and the .crt file, and usually you need some sort of flag to use HTTPS.
At the top of the karma.conf.js file add this require statement below the opening comments.
// Karma configuration file, see link for more information
// https://karma-runner.github.io/1.0/config/configuration-file.html
// Required to load the TLS certs from files
var fs = require('fs');
...
And add this set of options at the bottom (starting with httpsServerOptions):
...
singleRun: false,
restartOnFileChange: true,
httpsServerOptions: {
key: fs.readFileSync('localhost.key', 'utf8'),
cert: fs.readFileSync('localhost.crt', 'utf8')
},
protocol: ’https’
});
};
Conclusion
Now when you browse to your local dev build, you may be asked to add an exception for the self-signed cert, to which you should "Accept the Risk" and add it (local development is really the only time this is clearly ok to do).
Headless testing options will often require a "--no-check-certificate", or "proxyValidateSSL: false", or something similar to stop the headless browser from rejecting certificates that don’t chain to a trusted CA.
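To illustrate the kind of flag involved (the exact knob and where it lives depends on your runner, so this is just a sketch with headless Chrome pointed straight at the dev server):
google-chrome --headless --ignore-certificate-errors --dump-dom https://localhost:4200
If that dumps your app's HTML without complaint, your test browser can be configured to accept the self-signed cert the same way.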
And that’s it. Now your local Angular server should work over HTTPS (and only HTTPS), and your tests should run over HTTPS and be able to pull in external secure resources.
PAGNIs seem to fit well with secure coding concepts
So the longer you watch the back and forth between application development patterns and anti-patterns, the more you’ll notice an undulation of what is good and what is bad. Often it seems a given pattern or anti-pattern achieves acceptance for reasons that may not be reflective of any scientific proof, but rather general trends in anecdotal experiences. Such that a good pattern of yore becomes an anti-pattern of today, and likely will be reborn as a good pattern in the future, and so forth. This type of instability of principles in software development reminds me of philosophy and metaphysics more than any scientific field of study, but I think we can still derive value from gesticulating wildly about the problem space.
So in the spirit of furthering the craft (ie, the unscientific/artistic side of software development) let’s explore some interesting concepts around designing/developing things early that you may not need right away but are most likely a good idea. This is in contradiction to the KISS pattern (K.eep I.t S.imple S.tupid) and YAGNI (Y.ou A.ren’t G.onna N.eed I.t), which are patterns that certainly help build software faster, but do indeed lead to software that is more difficult to work with in the future. I like the term PAGNIs coined by Simon Willison in his blog post response to Luke Plant’s observation.
In my experience the idea that there are some things you should always include in application development, even if the requirements don’t explicitly state it, is actually pretty core to concepts around secure coding. Take validation of user-controlled input: seems pretty reasonable, right? But honestly it’s an often critically missing element of a secure application, and I have actually heard KISS stated repeatedly as the reason for not doing it. I mean, technically the application works without it, right? So you don’t need it, right? Well yes. But if the application gets hacked and crypto-ransomed because of a remote code execution weakness that validation would have prevented, then technically the application was broken because of the missing validation, right?
Yeah.
So before I jump into what I think are security PAGNIs, let’s sum up Luke and Simon’s ideas around the subject (my summary under each item):
Luke
- Zero, One, Many
  - Always plan for multiples of unique data objects early if it is at all likely to be a future requirement.
- Versioning
  - For all data schemas, API’s, file formats, protocols, structured data types.
- Logging
  - I feel this shouldn’t need to be said, but here we are.
- Timestamps
  - Mentions created_at, but modified_at is also useful.
- Relational Data Stores
  - Document stores and key=value stores have their limits.
Simon
- Mobile App Kill Switch
  - I would go farther and set all features as something that can be configured without a code change.
- Automated Deploys
  - I feel this shouldn’t need to be said, but here we are.
- Continuous Integration
  - I feel this shouldn’t need to be said, but here we are.
- API Pagination
  - Pagination is vastly harder to implement later so always implement it up front.
- Detailed API Logs
  - Yes, but I have some caveats to talk about later.
So I would have to agree with every point made by Luke and Simon. Some things are just easier to bake in up front even if it makes it "not simple" or we "don’t need it (yet)". But I’m going to consolidate the list a bit and then extend it with some security specifics that should always be implicit requirements (that is, if you care about your app’s security at all).
Consolidated
- Zero, One, Many
- Versioning
- Logging with Timestamps
- Relational Data Stores
- Feature Control in Configuration
- CI/CD
- API Pagination
I really would have liked to get the list smaller, as simpler is better in this case, but this seems to be pretty much it. So seven things you should consider building in from the beginning on anything but the simplest of applications.
Ok. So here’s why I think some of these are very important for security. It’s a smaller list, but that’s ok as I’ll add a couple for security in a bit:
Security Considerations of the Consolidated List
- Versioning
  - In this case I’m in favor of GitOps style where every change must go through version control. The API’s, data and file schemas originally mentioned are just part of what should be controlled with the version as externally exposed artifacts.
  - So mandatory versioning is important for several security reasons. If every change is forced to go through a cryptographically backed version control system you have good non-repudiation, which is a security requirement that every change can be attributed to a person (hopefully an authorized person).
  - Versioning also gives you the potential to roll back a change or roll out just a canary version of the change more reliably. These lead to better availability, or rather business continuity, which is fundamentally what security is trying to achieve.
- Logging with Timestamps
  - So this is actually a big one; insufficient logging seems to stay on the OWASP Top 10 pretty perpetually. Writing an application without good logging is like writing an application with no comments and single-letter variable and function/class names. Learn to use logging libraries or built-ins, and implement them early with good timestamps. Logs are important for when things go wrong, both in regular app issues and critically in security incidents and events. It’s also far easier to write a little logging up front for every new piece of code instead of having to go back and add logging in one large block of work.
  - When working with data stores it’s also important that the timestamps for log messages match any timestamps for data events. While analysis can usually make the leap over timestamp disparities, having them match up is much more desirable when figuring out what went wrong.
  - It is possible to overdo logging and end up exposing sensitive information. So it’s important to think of logging like error messages: know the audience is not necessarily you, the developer, and take care to provide only pertinent information and omit or mask sensitive information. You can’t just dump a whole complex data object in your logger and call it a day; you actually have to take a little time to add just what might be needed if things go wrong or a sensitive operation is being initiated that could potentially be abused. It takes a little more work up front, but this is just a feature of quality coding.
- Feature Control in Configuration
  - The idea of a "kill switch" is really just scratching the surface of good modular application design. Discrete features should be able to be enabled and disabled without touching the code, preferably still in GitOps control but separate from the application code base. This allows for rapidly stopping the bleeding if any given feature is found to have an actively exploited security vulnerability in the future. This is invaluable to security response in the operation of your application. And in a best case scenario it can allow your application to continue to support the business in a reduced manner rather than completely shut down while the dev team has the time to apply a security fix. It can even help avoid hasty patches that actually make things worse, by reducing the priority of any fix and only re-enabling the feature after the fix is in place and fully tested.
- CI/CD
  - So we’re getting to the point where every app should have a CI pipeline; it’s just a better way of building apps. And security-wise it is the most desirable mechanism for baking in application and infrastructure security scanning tools.
  - CD is a little less of a requirement, but the automated nature of CD should be pretty much required at this point. Your CD tooling should in theory be able to deploy as fast as your CI pipeline runs, even if you don’t use it that way and instead require a manual trigger. Continuous delivery tooling provides consistency of production execution and a good hooking point for security and operations to trigger post-deployment checks.
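To make the CI point concrete, the security hooks usually boil down to a few shell steps in the pipeline; a hedged sketch (the tools and image name here are examples/assumptions, swap in whatever your org standardizes on):
npm audit --audit-level=high
semgrep --config auto .
trivy image myorg/myapp:latest
The first fails the build on known-vulnerable dependencies, the second runs static analysis over the code, and the third scans the built container image before it ships.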
Some additional AppSec PAGNIs
- Standardized Input Validation Libraries, Used Early
  - This can be as simple as making sure to use a given library for all explicit user inputs, or just agreeing on what your base text input whitelist looks like, such as ^[a-zA-Z0-9_- ]{1,50}$ as your base validation regex (there is a quick shell check of this idea just after this list). But in any case a developer or team of developers should agree to have strict text input validation everywhere. Yes, it’s a little bit more work, but it’s so much more secure than hoping your variable typing or built-in validators are going to catch malicious escape strings.
  - Whitelist valid inputs; exceptions or leniencies in validation patterns should be limited and carefully applied. Documenting each use case is just good planning for when the security response team comes over to your desk/zoom asking why the characters () { :; }; were allowed in an API endpoint (that’s the Shellshock escape string if you were wondering).
- Manage Your Secrets
  - Again this can range from using one of the dotenv implementations right on up to using Hashicorp Vault or one of the big cloud providers’ secrets managers. It’s a little more work up front but will vastly improve the security of your application by lessening the chances of a sensitive piece of information ending up hardcoded and in the wrong place in front of the wrong eyes.
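And since I promised a quick shell check of the whitelist idea above, the same class of pattern can be exercised against sample inputs from a terminal (hypothetical inputs; note the hyphen is moved to the end of the character class, where it is unambiguously literal):
for input in 'normal_user-01' 'rm -rf /; echo pwned' '() { :; };'; do
  if printf '%s' "$input" | grep -Eq '^[a-zA-Z0-9_ -]{1,50}$'; then
    echo "ACCEPT: $input"
  else
    echo "REJECT: $input"
  fi
done
Only the first input passes; the command injection attempt and the Shellshock string are both rejected by the whitelist.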
So just to be clear, those last two are not even close to everything you could be including early to improve the security of an application. But they have some of the greatest impact to include early in development. Input validation in particular can help prevent wide swaths of specific security problems like SQL injection, remote code execution, malicious filenames, XSS, SSRF and many more.
In general I like the idea of PAGNIs and I think it can be extended to guide not just more mature software development but also more secure software development. I’m thinking maybe there should be a fairly large list of potential PAGNIs, but on project start you should pick a small subset of that list to ensure your development is both secure and future proofed against expensive, tedious coding exercises when requirements change or expand. And of course thinking about security early can prevent painful rewrites and security remediation tasks in the future.
Fuzzy finding improved
So sometimes something on the command line is not exactly the way I want it. Sometimes it gives me that itch that won’t go away and whispers in the back of my mind, "fix it", "fix it now", "fix it now or the world will end". So I took a bit of time to make fzf work the way I wanted it to today.
- First itch, "No way to control recursion depth."
  - Which is fine given fzf’s design choices. But sometimes I just need to be able to find a file in a specific directory and the sub-directories are less important or sometimes might contain similar but wrong files.
Leverage the well designed fzf app with *nix magic.
Luckily shell aliases can contain all sorts of complex logic and chained apps to get what you want, in this case:
alias fz="find . -maxdepth 1 | sed ’s/^\.\///g’ | fzf"
And tada! Now we can just use fz to search the current directory thanks to the -maxdepth argument. The sed one liner strips the leading ./ that find includes in the output stream for a cleaner experience.
Combine that with bat as a viewer to preview our files and we’re starting to get somewhere.
alias fz="find . -maxdepth 1 | sed ’s/^\.\///g’ | fzf --preview ’bat --color=always --style=numbers --line-range=:500 {}’"
- Second itch, "Directories don’t show anything useful in the preview."
  - So we don’t want endless recursion, but having an idea of what is in each directory on our current file level via the preview would be useful. In fact the tree output set to a depth limit would be perfect.
So bat doesn’t include such conditional logic, but that’s fine as it’s built to do one thing and do it well. And the preview is just a command to execute and show the output, so we can really put anything in there. So I started a script to do the preview that opens the input and makes decisions on how to display it.
opener.bash
#!/usr/bin/env bash
if [ -d "$1" ]; then
tree -a -C -L 3 "$1"
else
bat --color=always --style=numbers --line-range=:500 "$1"
fi
So just chmod a+x on the script and symlink that script into ~/.local/bin, which is in my $PATH (essentially installing it). I usually drop the extension from command line scripts for convenience so the symlinking command looks like this:
ln -s $HOME/code/opener/opener.bash $HOME/.local/bin/opener
Then we update the alias to use the script, which ends up being much cleaner:
alias fz="find . -maxdepth 1 | sed ’s/^\.\///g’ | fzf --preview ’opener {}’
And bam! Now we can see text files and directories in the preview pane of fzf.
- Third itch, "Binary files show me nothing useful in the preview."
  - Ok, so normally I’d use a hex editor to get some idea of what is in a binary before jumping to something like Ghidra to really see what’s going on. Usually there are some important clues in the very beginning of the binary that could be useful.
So I can include a test in the opener script to try and determine if a file is a binary. The file command has some magic to help with this. And there is a nifty Rust crate called hexyl that I haven’t had an opportunity to use yet that will let me read just the first 4kB of a binary and display it in a wondrous rainbow of colors. So this is how it shook out:
#!/usr/bin/env bash
if [ -d "$1" ]; then
tree -a -C -L 3 "$1"
else
BAT_TEST=$(file --mime "$1")
if [[ $BAT_TEST == *binary ]] ; then
hexyl -n 4kB "$1"
else
bat --color=always --style=numbers --line-range=:500 "$1"
fi
fi
The binary test using file gets the mime output, which conveniently ends in "binary" if it thinks the file is a binary, so we can store that in the BAT_TEST variable and test for it with the conditional and run hexyl or else run bat.
Short, simple, fast and maybe even useful in more ways than I originally intended. Now the preview script shows something useful for every file it highlights and I have enough focus to narrow down to just where I am working. So in some ways this is a replacement for ls (which is a bit strange as I already have 2 of those). But it satisfies my desire for a tool that is more forensic, gives a lot more information up front and can reveal clues to what is going on in whatever file store I happen to be looking in.
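And since fzf reads its default options from the environment, the same preview can follow you into any other fzf invocation too; an optional extra for ~/.bashrc or ~/.zshrc once opener is on the PATH:
export FZF_DEFAULT_OPTS="--preview 'opener {}'"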
Itch scratched.
Code and instructions are here: opener
Thoughts on the hybrid cloud
So getting through a from-scratch Kubernetes build was fun and deeply interesting. And on top of that I’ve started finding all sorts of great hybrid cloud technologies that would have been great to have on so many projects I worked on in the past (these projects all suffered from big-cloud myopia, unfortunately). One of these that is really interesting and worth noting is the OpenFaaS project; think of it as AWS Lambdas or Azure Functions but running locally or in a Kubernetes cluster.
It’s really a great project for many reasons, one of the biggest being that the function limits are orders of magnitude larger than most major serverless function providers (12TB memory limit, 96 CPU cores, 290 year execution limit). Everyone working in AWS is aware of the strict limits their Lambdas impose on workloads; these can be design crippling, forcing teams to re-work how they orchestrate the logical components of their applications to accommodate alternatives. Whereas OpenFaaS only seems to have limits inherent to the programming language and frameworks, so it’s suitable for a vastly wider set of discrete processing tasks. I’ve been on several projects in the past where the AWS Lambda timeout limit suddenly killed forward progress. And the amount of re-work required to the data or the logic/compute easily eclipsed standing up an OpenFaaS cluster. To the point where it seems almost criminal not to run an OpenFaaS cluster for at least long running, occasional, discrete functions.
And to be clear, OpenFaaS is not the same as AWS Lambda. It’s not necessarily running something like Firecracker underneath. There isn’t a sophisticated over-provisioning scheme in place. But it does make clever use of docker pause to provide resource conservation so you can load a lot of functions on an OpenFaaS cluster. And you don’t even need a full Kubernetes cluster to take advantage of it; the basic daemon is called faasd, which can run independently of K8s on a VM or, say, a Raspberry Pi.
OpenFaaS is event driven, and provides its own REST API to support flexible invocation. There are built in monitoring and control mechanisms to round out the project. So in many ways OpenFaaS can supplement or maybe even replace your serverless function sub-systems. At the very least I feel this project is something you should keep in your back pocket for when the limits of big cloud serverless functions suddenly prove to be roadblocks for your projects.
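To give a feel for the workflow (the names, file layout and gateway address here are defaults/assumptions, so check the OpenFaaS docs for your setup), deploying and invoking a function looks roughly like:
faas-cli new hello --lang python
faas-cli up -f hello.yml
echo "world" | faas-cli invoke hello
curl -d "world" http://127.0.0.1:8080/function/hello
The last line is the REST route mentioned above, hitting the gateway directly instead of going through the CLI.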
Exploring kubernetes the hard way
So I’ve been in the AWS cloud space for a long time, which has been great as they really have a phenomenal cloud offering, but in that time Kubernetes has been steadily gaining speed as an alternative, hybrid approach to cloud computing. And while I’ve read some things and worked on containers running in K8s, I’ve never really had an in depth understanding of the cluster management system. So I decided to fix that by doing the "Kubernetes the Hard Way" tutorial. It’s been great; while not a whole lot of it is new, it really is great practice and a very good end to end, "secure", setup walkthrough. So these are some takeaways I have from the experience.
While the tutorial says "no scripts", that’s not exactly accurate. It will have you write your own setup scripts, and bash is the language used to explain those operations. While the code display pieces in the tutorial have an "easy button" to copy the code and paste it into your terminal, there are two main reasons not to do this:
- Don’t copy and paste code from the web in your terminal! There are well documented attack vectors that can compromise your entire system by doing this.
- The idea is to get a non-shallow understanding of Kubernetes; typing the code out your own way adds muscle memory to the exercise.
Additionally, unless you are a superhuman typist, there are the inevitable typos, bugs and such that actually help you learn in depth how to diagnose problems in Kubernetes and fix them. This is really what I find most useful about doing it the hard way. For instance, the tutorial assumes using tmux in a parallel manner across 3 controllers when initializing etcd on the controllers, which I didn’t do. Through that I found out that the second controller must be initialized within the timeout period of the daemon start on the first or you will get a timeout error (after those first two are initialized this is no longer an issue). I would likely never have learned this by using scripted setups for K8s clusters.
Or like when I typo’d a cert location as /etc/kubernetes.pem instead of /etc/etcd/kubernetes.pem and learned I’ve been absolutely spoiled by Rust’s detailed and helpful compiler error messages. The error message was something like ERROR: systemd returned a status of "error code" returned. I know what you are thinking, "error code" should be more than enough for you to know what went wrong. Unfortunately I needed a bit more detail to figure out the problem, so a bit of research showed me the command:
journalctl -ocat -b -u etcd
Where "etcd" is really whatever your systemd
daemon name is. I think I’m going to alias
this to doh
in my shell for future reference. I know journalctl
but the argument soup is a super useful combo for working with systemd
daemons, but one which I’m not sure I’ve ever used or have forgotten. So learning/re-learning it because I’m doing this the "hard way" has been really great. I’d highly recommend this tutorial if you’d like to learn hands on about Kubernetes.
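For future me, the doh alias threatened above would just be the argument soup with the unit name left off the end:
alias doh='journalctl -o cat -b -u'
doh etcd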
Batteries included backends
So I’ve been working on learning a new backend stack to follow through on some ideas I want to code out, and it turns out there are quite a lot of batteries-included offerings out there (sometimes referred to as BaaS offerings). Many of them are similar to Firebase in that they bundle data, storage, AA (authentication/authorization) and other useful bits together with slick management interfaces. All of these offer a relatively quicker path to get up and running with an app by bundling features together for a backend. I’m researching them, so I thought I’d share what I’ve found.
I’m going to stick with the obviously somewhat open source ones here:
Some that are open source license ambiguous, not that I’m one to judge.
Some of these are actually wrappers for other services and applications of the same type but with different visions, offerings, [ENTER SOME SUCH DISTINGUISHING FEATURE HERE].
And some are more simply ORM like middleware offerings. No batteries included, but sometimes you don’t need batteries cause you’re working with anti-matter or something. I’ll again stick with the apparently open source offerings that include PostgresDB here.
Connecting to SupaBase from Rust in Reqwest async
So let’s just say you, my fair reader, are asking yourself, "How can I connect to a turnkey data and api solution from Rust?" Well it just so happens I had the same question last night and decided to give it a shot. Here it is using the Rust Reqwest crate in the async pattern:
use dotenv::dotenv; // Because we never hard code secrets, even in simple prototypes
use reqwest;
use reqwest::header::HeaderMap;
use reqwest::header::HeaderValue;
#[macro_use]
extern crate dotenv_codegen; // Excessive for such a small piece, but I like the look better
fn construct_headers() -> HeaderMap {
dotenv().ok();
let key = dotenv!("SUPABASE_KEY");
let bearer: String = format!("Bearer {}", key);
let mut headers = HeaderMap::new();
headers.insert("apikey", HeaderValue::from_str(key).unwrap());
headers.insert(
"Authorization",
HeaderValue::from_str(bearer.as_str()).unwrap(),
);
headers
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = reqwest::Client::new();
let resp = client
.get("https://use_your_own_free_tier_account_please.supabase.co/rest/v1/base_content?select=*")
.headers(construct_headers())
.send()
.await?;
println!("{:#?}", resp.text().await?);
Ok(())
}
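For a quick sanity check of the endpoint and key outside of Rust, the same request can be made with curl (same placeholder project URL as above, with SUPABASE_KEY exported in the shell rather than read from .env):
curl -s \
  -H "apikey: $SUPABASE_KEY" \
  -H "Authorization: Bearer $SUPABASE_KEY" \
  "https://use_your_own_free_tier_account_please.supabase.co/rest/v1/base_content?select=*"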
Of course there is a Rust SDK library coming supposedly, which provides for a more GraphQL like linked query type approach. But this is easy enough for what I’m curious about. And I’d just like to add that I am really beginning to grow fond of SupaBase the more I get into it. Seems the team has made some good design decisions so far and their feature production speed is great. I hope they can layer in some good code and company maturity growth now that they have paid tiers. It’s just really nice to see a turnkey solution built on top of PostgreSQL like this, I’d like to see them succeed.
Food that lies
VR Meeting Transcription
[Mark]> It’s not so much the impact of what is being reported, it’s about what is actually going on that I need to understand. Dave, could you explain to me what neurotic drift means?
Dave’s avatar takes center focus in the room.
[Dave]> Sure Mark. As you know, the Matrient packs provide the experience of a high quality traditional meal while being a standardized nutritional material, a synth ration essentially. Its taste, texture, presentation and form are little different from a compressed nutritional bar of synthetic materials that provide a perfect delivery of nutrition for an average human. To understand how we take this normally bland product and make it seem like a delicious meal you have to understand the nano-mechanized delivery of memory engrams.
Mark expands his group focus and interjects,
[Mark]> Yes, yes, try not to cover the details of what we make too much and get to the point.
Mark rescinds focus.
[Dave]> Ok. So the nanites embedded in the ration immediately infiltrate the blood stream and target the brain and mouth nervous systems within a few seconds. They deliver carefully tailored memory engrams that make the consumer think they are eating say, a delicious turkey dinner or a mouth watering hamburger. This is of course at odds with the sensory input the consumer is receiving and must continue to receive to finish the meal. To counter this discontinuity a low dose of neurotropic N-adylhyde-metacystine produces a brief opioid like response and dulls the brain’s confusion at the sensory discontinuity while also stimulating hunger briefly by breaking down quickly into ghrelin. This cascade of factors gives consumers the desired outcome of eating a ration bar while experiencing a fine meal 99.9999999% of the time.
Mark again expands his focus, but says nothing.
Dave relinquishes group focus while Mark considers that number with a lot of nines.
[Mark]> So every one in a billion meals something goes wrong with that "cascade of factors"?
Dave issues an avatar nod and resumes normal conversational focus.
[Mark]> And our product got approved for use based on that exact percentage you mention, but based on people affected overall, not on the number of meals that fail to work?
Dave responds in conversational focus.
[Dave]> Yes, there was an error in the approval model that analyzed our submission. I’ve double checked our submission and our numbers are perfect and our data schema is correct and unambiguous. We are not at fault here.
Mark’s avatar indicates he is reviewing other information while holding focus.
[Mark]> That’s relieving to hear from you, but that doesn’t explain these reports filtering through the lower tiers of the net that we are worried about.
Dave takes conversational focus but with an icon that indicates importance and another that indicates speculation.
[Dave]> Well when it doesn’t work as designed, normally it’s just the cascade of factors. Usually it’s just a confusing experience as the discontinuity mitigations fail. Sometimes the meal becomes difficult to eat, sometimes the actual taste is not overridden but dual experienced, sometimes the brief high is too pronounced. But sometimes it’s the memory engrams that fail to embed correctly.
Mark takes focus.
[Mark]> I thought we determined that engrams failing was impossible? That either the memory takes or it fails and breaks down. Is this new behavior?
Dave takes focus and responds.
[Dave]> Yes, well new behavior to us, the simulation budget being what it was. It turns out there can be interactions between other engram injection systems. Unforeseen behavior in excessive injection of similar engrams. And some extremely rare physiology types that accept the engram but receive a completely different memory. When the engram fails in one of these edge cases the results can be particularly undesirable.
Mark’s avatar portrays annoyance.
[Dave]> Well, uh, the effects are usually minor. But we have confirmed some cases of psychosis.
Mark takes focus.
[Mark]> Is that all?
Dave highlights the speculative icon.
[Dave]> Uh, that one case that turned a consumer into a psychotic uncontrollable cannibal was an unexpected permanent implantation of the engram in the wrong cognitive area of the brain. We think we can avoid that ever happening again by adding some additional targeting meta-proteins in the engram sheath.
Considerations in Distributed Work Models
So I’ve been considering what can make a distributed work model effective. Thinking about these systems, for me at least, brings into consideration a handful of successful distributed work systems, such as the blockchain for cryptocurrencies, the grid for projects such as BOINC/SETI/FAH that I’ll just collectively call Grid, and of course the logistics system for Coca Cola in Africa that I’ll just call Soda. So we have our data set scope; let’s ask it some questions and postulate answers. I’m going to abstract down to things as basic as I can easily get them.
What drives participation in a distributed work model?
Reward
- Grid = A sense of helping a larger goal
- Blockchain = A currency like thing
- Soda = Actual money
So we kind of get a range between the altruistic group collective goal and actual cash micro-rewards, with maybe speculative value in between. I think this provides a good basis for a scale of reward. While it may seem obvious, describing it clearly can allow for the direct correlation of credit in accordance with the distributed model. This could be used to ambiguate a reward and use the scale to measure the reward in accordance with desired participation models. Maybe.
So there is also a scale here between centrally organized and truly distributed work. Where the closer we get to central management the more concrete the reward is. To some degree this correlates with the difficulty of the work, but I don’t see that as a hard correlation.
So assume we are dealing with a single network of reward and work to be done and a finite pool of credits with which to distribute for doing the work of the network. How would we use the scale to assign credit to best encourage the work to be done?
To postulate on what may work: a given distributed economy may be considered a distinct network and may form with an arbitrary pool of credits. If we assume the motivations listed are sufficiently accurate, the reward system would then scale the reward based on speculation or difficulty. So the smallest rewards are given towards the largest goals, relying on the sense of community effort as the primary reward and the credit number as just a confirmation of personal contribution. Moderate rewards would be given towards work that may produce a larger return but at uncertain or unascertainable risk. Large rewards could be tied to complex work with less or no communal affirmation.
I think key to the idea of distributed work systems, well you know actually working, is that there needs to be constant alignment with the idea of minimal investment and minimal infrastructure requirements. I think setting as a basis some monetary investment minimum moves away from this idea of minimal viability.
So how would that work? Just arbitrary assignment of value? That might work for altruistic reward only, but without a way to exchange into more generally accepted and exchangeable credit it seems lacking. To postulate without first closing previous postulates, maybe it’s a bit like you could issue credit similarly to stock but with contractually set exchanges. Balance exchange through a scale of tasks from speculative to concrete. Might work. It seems there would need to be clarity about the lack of fundamental value, speculation or existence of actual concrete backing. Though, to be deeply honest, this does not really exist in current real world fiat systems, so why should it exist in virtual systems?
This brings the concept of bootstrapped economies to mind. That something of large value can rise out of something of minimal value, a phenomenon of emergence. This is possible. This brings up some more questions worth pursuing. Does a plethora of micro economies increase the chance of value emergence? Do centralized features increase the chance of value emergence? What are the measurable features of value in a virtual economy? Given inputs and controls, what increases value emergence and what stifles it? Lots of interesting questions; I think I’ll explore these.
Working on integrating Svelte as a progressive component system.
NOTE: While this whole site is rambling, this blog post is particularly so. This is not a "how-to". More of me just publishing my notes as I go along after finishing the first pass of the Svelte tutorial and trying to create some progressive components for my site (see the default component at the top of the base content page).
So starting a new Svelte app from scratch from inside my /static/ directory with:
npx degit sveltejs/template svelte
We get a templated project directory like so
.
└── svelte
├── package.json
├── public
│ ├── favicon.png
│ ├── global.css
│ └── index.html
├── README.md
├── rollup.config.js
├── scripts
│ └── setupTypeScript.js
└── src
├── App.svelte
└── main.js
So the other starter instructions are to install with npm and start the dev server, which for brevity I’ll follow so:
cd svelte
npm install
npm run dev
This will get the build and rolling build update going, but I don’t really care about the dev server. I’ll get back to running the continuous build without it later. For now this gives us a /build directory and build artifacts; we just need the javascript bundle.js and the stylesheet bundle.css. A quick symlink of those into my standard /static/css and /static/javascript directories and now I can access them in the content using my CMS .content_meta file.
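For concreteness, the symlinks look something like this, run from inside the /static/ directory (assuming the template's default build output under svelte/public/build; adjust to your own layout):
ln -s "$PWD/svelte/public/build/bundle.js" javascript/bundle.js
ln -s "$PWD/svelte/public/build/bundle.css" css/bundle.css
And the relevant slice of that meta file: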
...
"template_override": "",
"javascript_include": [
"/static/javascript/bundle.js"
],
"javascript_inline": "",
"css_include": [
"/static/css/bundle.css"
],
"css_inline": "",
...
There are more fields in the meta (which each piece of content has), but those five let you include other javascript and css files, inline snippets or even change the backend rendering template on a per piece of content basis. This is probably meant to be more global and long lived, but this is fine for now.
Save the content meta file, refresh my local page and whoa! Below my footer is the Svelte default starter template thingy.
Rollup looks pretty nice, at least the output is clean and colorful, and I’m looking for a potential replacement for Webpack after hitting a few snags with it so maybe Rollup will be the future replacement? I don’t know, we’ll see.
So to be minimally functional I need to be able to inject the Svelte components where I want them to go in the DOM. Sure I could write some Javascript to place them, but I’m thinking there might be support for that in Svelte itself. I don’t know, I just finished the tutorial so I really have no idea. Let’s look at the API reference.
Custom Element API: this looks like what I’m trying to do. I tried creating a custom element in content, which for my CMS can just be to copy the markdown filename and give it the .html extension and replace the file contents with <svelte-demo></svelte-demo>. The CMS will attempt to render the HTML content just above the markdown file I’m using.
But the element is still at the bottom of the page. Ah, but it now uses the <svelte-demo> tag. This seems like the right direction, but not quite where I wanted it.
Ok, what else might there be in the API? Not much, but it seems like this should have worked. I must be missing something.
Insert random awesome blog that shows exactly what I missed
So the custom component tag in the content can’t be <svelte-demo></svelte-demo>, it just needs to be <svelte-demo />. This injects the component in the custom HTML snippet I placed above this content, but it also wipes out the following content I created in Markdown. I’ve seen this before, so without diving into it much I just wrapped the tag in a container div and everything works as expected. This is what my content.html file looks like now:
<div id="svelte-container">
<svelte-demo />
</div>
Well, almost. It looks like I have 2 Svelte components now: one in the content card where I am embedding content, where I’m expecting it, that is missing the property something, and one at the bottom of the body that has the expected default prop(erty). Duplicate Svelte components would be a nasty bug, but I don’t think this is a bug in Svelte, more just something I’m not setting up right yet.
There is a console error about a prop not getting set, so I check that first. A quick change to the html content to set the property something gets me what I expect in the content embedded component. That’s great as I can pass the prop from the static renderer to the component through the Tera template.
<div id="svelte-container">
<svelte-demo something="Gatewaynode" />
</div>
But I still have 2 components being rendered into the page. Let’s take a look at the source layout.
.
└── svelte
...
└── src
├── App.svelte
└── main.js
The App.svelte file is our main place to write Svelte style JS; the main.js file is vanilla Javascript to initialize and construct the app. So looking in the main.js file I found the culprit for the double rendering.
import App from './App.svelte';
const demo = new App({
// target: document.body,
props: {
something: 'Default'
}
});
export default demo;
The section commented out here tells the app to initialize in the document body, which I don’t need as I’m already declaring it in the DOM where it should be, and that is enough for Svelte. So the component renders correctly; you can tell because it has a shadow DOM, which only exists for Svelte components (whole apps don’t use a shadow DOM). There is a pretty good explanation of why they have to use a shadow DOM for components here. If you are not familiar with how to see the shadow DOM in the developer tools, you can also look at the page source (right click and choose "view page source") and search for "Mars" or "svelte-container" and notice that none of the rendered text you see is in the page source; that’s because it’s being rendered by Javascript in your browser.
Added a small CSS snippet in the .content_meta to add a little border to the HTML above the markdown content that contains the component:
...
"javascript_include": [
"/static/javascript/bundle.js"
],
"javascript_inline": "",
"css_include": [
"/static/css/bundle.css"
],
"css_inline": "#svelte-container{border: 3px solid grey;}",
...
And I think that’s a good place to stop, with what is just a toy implementation of Svelte right now. It’s progress, but it also has a few bugs (the component inline style API doesn’t seem to be working, and something strange is going on with JS execution inside the component). Time to go back through the tutorial again and take notes on what to study as I go.