Building a Custom Automated Development Pipeline
A complete guide to implementing a fully customized software development pipeline with full automation on source changes. The sample implementation demonstrates a web development setup that includes HTML templating, Sass & ES6 compilation, a backend application server, and automatic browser reload.
Intro
Many software ecosystems come with great tooling to help ease development. A few that come to mind are webpack for managing web sources and bundling projects, Microsoft's Visual Studio for building C++ projects, and Rust's Cargo tool for package management and building Rust projects.
These are great on their own, but if a project requires a combination of software from multiple ecosystems and environment configurations, it becomes problematic fairly quickly. From my experience, one of these situations usually occurs when trying to rig everything together:
- A collection of shell scripts builds up over time and breaks regularly
- A configuration file is made to manage everything and eventually grows so large that it becomes a project all of its own
- A clean development environment that is devoid of all config files and scripts! But if ~/.shell_history gets deleted, you just lost a day of actual work.
Build tools generally rely on some kind of plugin system, which may or may not be difficult to implement. Writing a plugin also requires intimate knowledge of how the tool works, and perhaps even how plugins interact with one another.
Rather than deal with a single build tool, I prefer the UNIX philosophy of having a large collection of tools that are great at what they do, but only do one thing. Keeping this in mind, I try to use individual CLI tools whenever possible in the software build process, instead of a single monolithic build tool.
If we apply the UNIX philosophy to the problem of managing software build complexity, we arrive at having a collection of commands to run singular parts of the software build process.
In this post, I will expand on this by demonstrating how to wire up any type of development workflow using just, entr, and plain old CLI tools. The solution will be fully automated, allow easy editing of steps at any point in the process, use the regular build commands that you would normally run, and will all be contained within an easy-to-configure project file.
Following Along
A repo is available here that includes everything needed to replicate the workflow in this post. There are many dependencies I use that may not be installed on your system, so feel free to follow along by implementing this technique using tools of your choosing.
I intentionally have a heavy list of dependencies for this post to demonstrate how to handle a project with multiple build steps:
- entr (filesystem watcher)
- just (command executor)
- Sass (compile Sass to CSS)
- Babel (compile ES6 to ES5)
- Pug (compile Pug markup to HTML)
- Rust (I'm using Rust for the demo backend)
- Python (serving static files)
- xdotool (browser refresh)
Also note that I will not be displaying the full code of the source files. This post is about the process itself and not the code that is being operated upon.
Overview
This is a fairly long post, so here is a quick overview of what I will be doing, and what I am trying to accomplish.
High level overview of how the process works:
- Use the just tool to create wrapper commands for each command that is needed in the build process
- Process source code and dump the output into either an intermediate scratch directory or directly into the final output directory
- Run more commands on anything in the scratch directory in order to arrive at a finalized stage, which is then placed into the final output directory
- Create commands to monitor source code for any changes
- When a source change occurs, run an appropriate wrapper command
As you can see, the process is literally just issuing commands all the way down.
This may seem exactly like what I mentioned in the intro about a collection of shell scripts.
However, since we are using just, we can encapsulate all our commands easily and keep them trivially organized.
Here is a high level overview of my specific implementation, which will need these wrapper commands:
- Compiling Sass to CSS
- Compiling Pug to HTML
- Compiling ES6 to ES5
- Building the Rust application server
- Exporting manifest.json
- Hosting static files
- Running the application server
- Watching the filesystem for changes
- Hard-coding a username via an environment variable
- Reloading the web browser for immediate feedback
Let's get started!
Getting Started
Before we start wrapping up commands, we need to set up a few variables, specifically the location of some directories:
- Sass styles
- Scripts
- Pug pages
- Scratch
- Application server
- Distribution
Here is what the final project directory looks like:
project/
├── dist/ (final output directory)
│   ├── appserver (Rust application server binary)
│   ├── index.css (Compiled CSS styles)
│   ├── index.html (Compiled HTML template)
│   ├── index.js (Compiled & edited javascript)
│   └── manifest.json (Site author & description)
├── scratch/ (build/artifact directory)
│   ├── backend/ (Rust build artifacts and output directory)
│   └── index.js (ES6 javascript intermediate, input to the Babel compile step)
├── web/
│   ├── frontend/
│   │   └── scripts/
│   │       └── hello.js (An ES6 script that communicates with the application server)
│   ├── backend/ (Rust web application server)
│   │   └── src/
│   │       └── main.rs (Application server code)
│   ├── styles/
│   │   ├── _config.scss (Sass variables)
│   │   ├── _global.scss (Globally applied styles)
│   │   └── index.scss (Import declarations)
│   └── webroot/
│       ├── manifest.json (Site author & description)
│       └── index.pug (Page that we will show for an index)
├── justfile (Wrapper commands to wire everything together)
├── .env (Includes a username to be hardcoded in the app)
└── package.json (Needed to use Babel for the ES6 → ES5 compile)
We'll create a justfile in the root directory and then add some variables to define important directories:
styles_root := "web/styles"
scripts_root := "web/frontend/scripts"
web_root := "web/webroot"
scratch_root := "scratch"
appserver_root := "web/backend"
dist_root := "dist"
We will also specify a port number to use for hosting static files:
static_server_port := "8001"
just automatically loads any .env file that may be located in the root directory.
We can use this to hard code a username into the final output:
APP_USERNAME=WebUser
And to access it in the justfile:
app_username := `echo $APP_USERNAME`
Backtick-surrounded content is interpreted as a shell command, and the result is stored in the app_username variable.
We could also use $APP_USERNAME directly inside recipes; however, this may not work depending on how your command is structured, so I opt for the method above.
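As a minimal sketch of the difference (the recipe name is hypothetical, and it assumes the .env value was loaded as described above):

show-username:
    echo "From the just variable: {{app_username}}"
    echo "From the environment:   $APP_USERNAME"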
Support Commands
Before we can write build commands, we need a few support commands to help facilitate building and auto reload.
Create Temporary Directories
Not every command will automatically create the needed directories, so we need this support command to do that for us:
@_make_tmp_dirs:
    mkdir -p {dist,scratch}
Prefixing a recipe with @ suppresses the echoing of its commands (just prints each command before running it by default).
Since this isn't a command that ever needs to be run on its own, it can be hidden from the command list by starting its name with an underscore (_).
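To illustrate both conventions, here are two hypothetical recipes (not part of this project) that do the same thing:

# Echoed and listed: just prints the rm line before running it.
clean:
    rm -rf dist scratch

# Silent and hidden: '@' suppresses the echo, and the leading
# underscore keeps the recipe out of just's recipe listing.
@_clean:
    rm -rf dist scratch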
Browser Reload
To reload the browser, we won't be doing any fancy websocket hot code reloading. We are just going to focus the browser and automatically press F5:
reload-browser name:
    #!/bin/sh
    CURRENT_WINDOW=$(xdotool getactivewindow)
    xdotool search --all --onlyvisible --class {{name}} \
        windowfocus sleep 0.1 key --window %@ 'F5'
    xdotool windowfocus --sync ${CURRENT_WINDOW}
    xdotool windowactivate --sync ${CURRENT_WINDOW}
just will execute code in the language specified by the shebang line.
In this case, we are using a plain shell script.
xdotool does all the heavy lifting here.
We search for visible windows matching the given class, focus them, wait 0.1 seconds, and then send an F5 keystroke to every matched window. The last two lines return focus to whichever window was active before the reload.
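For example, with Firefox open, refreshing it looks like this (the window class depends on your browser, so substitute the appropriate class name):

just reload-browser firefox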
Build Commands
Now that the support commands are ready, we can start to write our build commands.
Pug
In order to compile Pug templates to HTML, we just use the pug command:
build-pages: _make_tmp_dirs
    pug {{web_root}}/index.pug -o {{dist_root}}/
A justfile is very simple.
The above defines a command named build-pages which will run everything indented below it in a shell.
Items surrounded by two curly braces {{}} are variable substitutions.
In this example, we have pug convert our index.pug file and output it to dist/ directly.
Notice the _make_tmp_dirs portion after our command.
Whenever a command is placed after the colon (:) of another command, it becomes a "dependency" and is run automatically whenever the desired command is run.
This will ensure that we always have directories available before our command executes.
Sass
Compiling the Sass files is just as easy:
build-styles: _make_tmp_dirs
    sass --sourcemap=none {{styles_root}}/index.scss {{dist_root}}/index.css
Scripts
Since I want to hard-code a username without having to change the source manually every time, compiling the scripts takes a little extra work.
Here is a snippet from my Javascript file:
...
let el = document.getElementById("greet");
Req.get("http://localhost:8080/{%USERNAME%}")
    .then((data) => {
        el.innerText = data;
    })
...
We are going to replace the instance of {%USERNAME%} with the username indicated in the .env file:
build-scripts: _make_tmp_dirs
    @# This hard-codes the username into the ES6 javascript.
    @# We use '{%' and '%}' for our custom replacement to avoid issues with
    @# just's variables
    sed -e 's/{%USERNAME%}/{{app_username}}/g' \
        {{scripts_root}}/hello.js > {{scratch_root}}/index.js
    @# It is then compiled down to ES5
    ./node_modules/@babel/cli/bin/babel.js --presets @babel/preset-env \
        {{scratch_root}}/index.js -o {{dist_root}}/index.js
We run a simple sed script to replace {%USERNAME%} with the username from our environment variable, which is captured at the beginning of our justfile.
Afterwards, we compile the code down to ES5 using Babel and output it to dist/.
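To make the substitution concrete, here is the same sed expression applied to the line from the snippet above, using the APP_USERNAME=WebUser value from our .env file:

echo 'Req.get("http://localhost:8080/{%USERNAME%}")' | sed -e 's/{%USERNAME%}/WebUser/g'
# prints: Req.get("http://localhost:8080/WebUser")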
Application Server
The Rust web application server is simple to build as well, thanks to cargo:
# Use 'release' and '--release' to build in release mode.
build-appserver target='debug' mode='': _make_tmp_dirs
    cd {{appserver_root}} && cargo build --target-dir ../../{{scratch_root}}/backend/ {{mode}}
    rsync {{scratch_root}}/backend/{{target}}/appserver {{dist_root}}/appserver
We can also include parameters with a command in a justfile.
target='debug' and mode='' declare variables named target and mode while also assigning default values.
They can be overridden simply by specifying new values when running the command: just build-appserver release changes target to release, and (as the comment above the recipe notes) passing --release for mode switches cargo into release mode as well.
We cd into the appropriate directory and direct cargo to output everything into scratch/backend.
Afterwards, we copy the resulting binary to dist/.
Manifest
We don't modify the manifest.json file, but a large project likely would.
All we do is copy it to dist/:
build-manifest: _make_tmp_dirs
    rsync {{web_root}}/manifest.json {{dist_root}}/manifest.json
Wrapping Up The Build Commands
So we currently have everything we need to successfully build our software. If we wrap all our build commands up into a single command we get:
build-all:
    just build-pages
    just build-scripts
    just build-styles
    just build-appserver
    just build-manifest
Which results in a workflow of typing one command into the shell: just build-all.
This is already pretty nice, however we can take it to the next level and run this automatically when we change our code.
Watching For Changes
In order to watch for changes, I will be using a tool called entr which will run a command whenever a filesystem change event occurs.
Since we have 5 build commands, we will also need 5 watch commands to pair them with. Each watch command will watch the appropriate files and issue a corresponding build command.
Here is what we will be using to monitor and respond to changes in our Javascript files:
watch-scripts:
    #!/bin/sh
    while true; do
        ls {{scripts_root}}/* | entr -d -p -s 'just build-scripts \
            && just reload-browser firefox'
    done
entr requires a listing of files to monitor, which must be piped into the program.
If we create a new file, entr will not be able to monitor it, because the piped-in file list is fixed when entr starts.
In order to watch an updated set of files, we have to re-run the command so entr receives the most up-to-date listing.
This is implemented using an infinite while loop in the above script.
Breakdown of flags for entr:
- -d: causes entr to terminate when a new file is detected, so we can loop back and run the command again
- -p: only runs the command when a file changes (not on first load)
- -s: runs the command string in a shell, which allows us to chain multiple commands
There are also commands to watch the manifest file, Sass styles, and Pug templates.
They are all the same except for search paths and the build-* command to run.
Full code is available in the repo.
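For reference, a watch-styles recipe following the same pattern would look roughly like this (the exact versions are in the repo):

watch-styles:
    #!/bin/sh
    while true; do
        ls {{styles_root}}/* | entr -d -p -s 'just build-styles \
            && just reload-browser firefox'
    done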
Reloading The Application Server
In order to reload our application server, we will need to kill any existing instances. This will prevent us from getting socket bind errors and will allow us to use a static port number:
reload-appserver:
    #!/bin/sh
    mkdir -p dist
    PID=$(pidof appserver)
    if test -n "$PID"
    then
        kill "$PID"
    fi
    ./{{dist_root}}/appserver
All we are doing is checking whether the appserver is running, and killing it if so.
Then we just run the appserver.
In order to monitor the application server, we want to watch for changes to both the source files and the toml configuration.
We also want to relaunch the server after it gets recompiled:
watch-appserver:
    #!/bin/sh
    while true; do
        find -name '*.rs' -o -name '*.toml' | entr -d -s 'just build-appserver \
            && just reload-appserver \
            & just reload-browser firefox'
    done
We omit -p since we need the application server running when testing.
We also use only a single ampersand (&) after just reload-appserver, which sends the long-running server to the background so the browser reload can still run.
Running A Complete Development Server
Let's recap what has been made so far:
- Build commands for all components of the software
- Watcher commands to run build commands for all components of the software
- A command to reload the browser
- A command to reload the web application server
We are going to put everything together along with static file hosting via the Python http.server module:
run-dev-server:
    #!/bin/sh
    trap "exit" INT TERM ERR
    trap "kill 0" EXIT
    just build-all
    just watch-styles &
    just watch-pages &
    just watch-scripts &
    just watch-appserver &
    just watch-manifest &
    python -m http.server --directory {{dist_root}} {{static_server_port}} &
    wait
Important to note are the trap and wait lines.
These allow us to background everything with & while still being able to stop the whole pipeline with CTRL+C.
Termination will propagate to all child processes and nothing will be left running in the background.
Static files are hosted using the http.server module of python on the dist/ directory.
Conclusion
Once everything is put together, a full build with source change monitoring and automatic browser reload can be initiated by running just run-dev-server.
As a bonus, since we modularized every step of the build process, we can easily adjust the process at any point, or just build a specific portion if needed.
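For example, rebuilding only the stylesheets or listing every available recipe is a single command away:

# Rebuild only the stylesheets
just build-styles

# List all available recipes (underscore-prefixed ones stay hidden)
just --list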
Wrapping up basic commands with just is an easy way to create automated tooling without complicated plugin systems or monolithic build tools.
Sometimes keeping it simple is best.
Happy coding!