Make it work

I <3 the simple “legend” used here to indicate which bands are supporting each date on this tour. There’s a minor coding mistake (do you see it?), but the technique is simple and it works so well.


August Calendar


Context matters


Woodworkers and web developers have this in common

Visit a professional woodshop and ask a master carpenter what her favorite tool is. You may find it’s not a tool in the traditional sense, but a “jig” she built. In woodworking, jigs are patterns or templates built to make repeatable tasks more efficient and outcomes more consistent. Building a one-off bookcase may not warrant building a jig. But, if you’re building three or four of the same bookcase, it’s likely worth building a jig first, then using that jig to build the bookcases.

Once upon a time I was a carpenter and I made…mostly sawdust. Nowadays I work for Automattic, on a small team which makes…mostly websites. While each of our projects is unique in some aspects, the vast majority share this common foundation:

  1. a GitHub repo,
  2. a WordPress site,
  3. a DeployHQ configuration to link those platforms together,
  4. a distributed team of designers, developers and producers throwing pixels and puns around on Slack.

We have a creed at Automattic which states, “…Open Source is one of the most powerful ideas of our generation.” In celebration of that belief and in appreciation of the service DeployHQ provides our team, I’m happy to share a jig our team built to bootstrap new projects.

Our jig consists of a small command line application which integrates publicly accessible APIs from these service providers:

  • a managed WordPress host, part of the Automattic family
  • a well-known collaborative software development platform
  • a tool for managing software deployments
  • Slack, an instant messaging and emoji proliferation platform

Here are the basic commands for our CLI tool:

  • create-development-site Creates a new development site (on Pressable).
  • create-production-site Creates a new production site (on Pressable).
  • create-repository Creates a new GitHub repository in our GitHub org.

create-development-site makes use of Pressable’s “site clone” feature and takes a Pressable site-id as input. The cloned site inherits the source site’s GitHub and DeployHQ setups. As a more minor matter, the jig also enforces our preferred site naming conventions, e.g. development sites are named with the (production) “sitename” plus a suffix (a Pressable platform default).
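As a rough sketch of what that clone step might look like, here’s a minimal, hypothetical Python version. The payload fields and the “-development” suffix are assumptions for illustration only, not the real Pressable API or our actual code:

```python
# Hypothetical sketch of the create-development-site clone request.
# Field names and the "-development" suffix are illustrative assumptions.
def build_clone_request(source_site_id, production_sitename, suffix="-development"):
    """Return a (hypothetical) payload for a Pressable site-clone call.

    The cloned site's name is derived from the production sitename,
    which is how the jig enforces a consistent naming convention.
    """
    return {
        "source_site_id": source_site_id,
        "name": production_sitename + suffix,
    }
```

The point of centralizing this in the jig is that nobody has to remember the naming convention; the tool applies it every time.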

create-production-site makes use of Pressable’s site provisioning API. It takes a sitename as its argument. Optional arguments include a GitHub reponame (scoped to our team’s GitHub organization) which, when provided, configures DeployHQ so that merges to master are auto-deployed to the production site via SFTP.
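Here’s a hedged sketch of that flow in Python. The function and step names are invented for illustration; it shows only the ordering of the provisioning steps, not our actual implementation:

```python
# Hypothetical sketch of the create-production-site flow.
# Step labels and the function name are illustrative, not the real code.
def plan_production_site(sitename, repo_name=None, github_org="a8cteam51"):
    """Return the ordered provisioning steps the jig would run."""
    steps = [("pressable", "provision site", sitename)]
    if repo_name is not None:
        # With a repo supplied, wire up DeployHQ so merges to master
        # auto-deploy to the production site over SFTP.
        steps.append(("deployhq", "create project", sitename))
        steps.append(("deployhq", "link repository", f"{github_org}/{repo_name}"))
        steps.append(("deployhq", "add SFTP server, auto-deploy on master", sitename))
    return steps
```

Without a reponame, only the Pressable site is provisioned; with one, the full deploy pipeline is wired up in the same run.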

create-repository uses the GitHub API to provision a new repository with scaffolding for things such as directory structure (e.g. our repos are the “wp-content” folder of the WordPress installation), issue templates, labels and more.

Provisioning repos and sites quickly and consistently from the command line might, on its own, have been worth building the jig. The real value for us, though, is creating sites and repos in a “workflow ready” state.

So, what does our “workflow ready” state look like?

It means that we have a WordPress site provisioned on Pressable, a code repository initialized on GitHub, Travis configured for CI, a DeployHQ project configured with our deploy strategy, and event logging flowing into Slack. With our jig, we can get to that “workflow ready” state in a single command:

mark.drovdahl@penguin:~/bin/deployhq-automation$ team51 create-repository --repo-slug="shiny-new-site" --create-production-site
Checking for updates..
Already up-to-date.
Creating scaffold/shiny-new-site directory.
Copying scaffold/templates/github directory to scaffold/shiny-new-site/.github.
Copying scaffold/templates/gitignore file to scaffold/shiny-new-site/.gitignore.
Copying scaffold/templates/.travis.yml file to scaffold/shiny-new-site/.travis.yml.
Copying scaffold/templates/Makefile file to scaffold/shiny-new-site/Makefile.
Copying scaffold/templates/ file to scaffold/shiny-new-site/
Creating repository README.
Local setup complete! Now we need to create and populate the repository on GitHub.
Creating GitHub repository.
Successfully created repository on GitHub (a8cteam51/shiny-new-site).
Adding, committing, and pushing files to GitHub.
 6/6 [============================] 100%
Configuring GitHub repository labels.
 17/17 [============================] 100%
Logging GitHub init script completion to Slack.
GitHub repository creation and setup is complete!
Creating and configuring new Pressable site.
Creating new Pressable site
Created new Pressable site.

Waiting for Pressable site to deploy.
    6 [============================]
The Pressable site has been deployed!

Creating new project in DeployHQ
Created new project in DeployHQ.

Verifying we received a public key when we created the new DeployHQ project.
Successfully retrieved public key from new DeployHQ project.

Adding DeployHQ public key to GitHub repository's deploy keys.
Successfully added DeployHQ public key to GitHub repository.

Connecting DeployHQ project to GitHub repository.
Successfully added and configured GitHub repository in DeployHQ

Creating new DeployHQ production server for project shiny-new-site.
    6 [============================]
Created new server in DeployHQ.

Verifying we received a webhook URL for automatic deploys when we created the new DeployHQ project.
Successfully retrieved webhook URL from new DeployHQ project.

Adding DeployHQ webhook URL to GitHub repository's list of hooks.
Successfully added DeployHQ webhook URL to GitHub repository.

Deploy HQ is now set up and ready to start receiving and deploying commits!

mark.drovdahl@penguin:~/dev$ git clone
Cloning into 'shiny-new-site'...
remote: Enumerating objects: 12, done.
remote: Counting objects: 100% (12/12), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 12 (delta 0), reused 9 (delta 0), pack-reused 0
Receiving objects: 100% (12/12), 14.95 KiB | 0 bytes/s, done.

Having the DeployHQ API at the center of our jig gives us flexibility and portability. If we ever need to integrate with a different source control platform or managed WordPress host, an update to our jig adding, e.g., --source-control-platform and --wp-hosting-platform arguments, along with a new DeployHQ project template, is all it would take.
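Those hypothetical flags might look something like this with Python’s argparse. The flag names come from the paragraph above; the defaults are my assumption:

```python
# Sketch of the platform-selection flags proposed above.
# Flag names come from the post; defaults are assumptions for illustration.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="team51")
    parser.add_argument("--source-control-platform", default="github",
                        help="which source control platform to provision on")
    parser.add_argument("--wp-hosting-platform", default="pressable",
                        help="which managed WordPress host to provision on")
    return parser
```

Because DeployHQ sits in the middle, swapping either end is a flag plus a project template, not a rewrite.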

What I love about this jig is what I love about any jig, digital or physical: it makes completing an often-repeated task more efficient and the outcomes more consistent.

So, what tasks do you or your team have that might benefit from building a jig? What jigs have you built?

If you’re able, share your jig on GitHub and add the “digital-jig” topic to your repo.

— Mark Drovdahl, Senior Digital Producer @ Automattic Inc




Tod’s Law

Each thing you do spawns new things to be done.

my Theorem of doing stuff


Several People Are Typing

Dad, what was work like before Slack?

Formless and empty, darkness was over the surface of the deep, and the Spirit of God was hovering over the waters.

Dad! Stop it. Seriously what was it like?

Less hurried. More thoughtful.

That sounds nice. Can we go back to that?

No. We only go forward.


How I use ChromeOS’s linux containers

When you enable “Linux” on ChromeOS, a default virtual machine is created (along with a default Linux container). VMs are the layer above containers; each VM can house multiple containers. The default VM is named “termina” and, once it exists, you can interact with it from crosh (the Chromium OS shell) using the vmc command. The default container is named “penguin”.

Note: for my purposes, I create a second VM named wpvm and then create containers within that VM. This makes it fast and easy to spin up Debian environments (I’ll try to cover other container types in a later post) for whatever your needs may be. The --enable-gpu flag is optional. YMMV.

First, open a crosh window (ctrl-alt-t). Then, to create and start a new VM named “wpvm”:

crosh> vmc start --enable-gpu wpvm

After the above VM is created, you’ll end up at a shell prompt within the VM. A potential point of confusion: the prompt below shows “(termina)” when we’re actually in a VM named “wpvm”. The “(termina)” here refers to the VM image ChromeOS uses for all VMs. Keep calm and keyboard on.

(termina) chronos@localhost

From here, exit the VM using ctrl-d.

Now let’s create a container within the new “wpvm” VM. We’ll use Google/ChromeOS’s standard Crostini container, “stretch”:

crosh> vmc container wpvm stretch

If everything worked, you’ll find yourself signed into the new Debian Stretch container with a bash prompt:


If you want to verify the VM and the container were created and are running, you can open a new crosh window (ctrl-alt-t), list the available VMs, shell into the “wpvm” VM using the vsh command (reminder: this puts you into the VM, not the container), and then check the status of any running containers with:

crosh> vmc list
 Total Size (bytes): xxxxxxxxx
crosh> vsh wpvm
 (termina) chronos@localhost ~ $ lxc list
 |  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
 | stretch | RUNNING | (eth0) |      | PERSISTENT | 0         |

To shell into a running container (obviously, the associated VM must also be running):

crosh> vsh wpvm stretch



’93 Landcruiser still rocks


WP Dev with Docker

I got stuck for a bit today on how to configure a PHP PDO connection in a Docker-based WordPress dev environment. Here’s the function…

/**
 * Get the database connection.
 *
 * @return \PDO Database connection instance.
 */
public function get_connection() {
    $dbname   = \TMSC\tmsc_sync()::$tms_db_name;
    $username = \TMSC\tmsc_sync()::$tms_db_user;
    $password = \TMSC\tmsc_sync()::$tms_db_password;

    // Build the DSN string; the host may include a port (e.g. "hostname:3306").
    $host = \TMSC\tmsc_sync()::$tms_db_host;
    $port = '3306';
    if ( strpos( $host, ':' ) ) {
        list( $host, $port ) = explode( ':', $host );
    }

    $dsn = "mysql:host={$host};port={$port};dbname={$dbname}";

    $connection = new \PDO( $dsn, $username, $password, array() );

    // Since MySQL supports it, don't automatically quote parameters to allow control over data types.
    // This comes in useful for automatically manipulating the LIMIT clause.
    $connection->setAttribute( \PDO::ATTR_EMULATE_PREPARES, false );

    // Throw exceptions so connection problems surface immediately.
    $connection->setAttribute( \PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION );

    $this->set_mysql_global_defaults( $connection );
    return $connection;
}

When this function connects to a remote AWS/RDS database, we configure it with a FQDN for the $host variable. But when running the same function locally, within our Docker setup, we must use the Docker container name “jetpack_mysql” (as defined in the docker-compose.yml file) and not “localhost” or a loopback address.
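The host-selection logic can be sketched like this (in Python for brevity; the DOCKER_LOCAL environment flag is hypothetical, not part of the real setup):

```python
# Hypothetical sketch of picking the right DB host per environment.
# The DOCKER_LOCAL flag is an invented illustration, not the real config.
import os

def resolve_db_host(remote_host):
    """Return the database host for the current environment.

    Inside a Docker Compose setup, the database is reachable only via
    its service name; "localhost" resolves to the calling container's
    own loopback interface, not the MySQL container.
    """
    if os.environ.get("DOCKER_LOCAL"):
        return "jetpack_mysql"  # service name from docker-compose.yml
    return remote_host  # e.g. the AWS/RDS FQDN
```

The underlying reason is Docker networking: each container gets its own network namespace, so the loopback interface never reaches a sibling container, while Compose publishes each service’s name in the shared network’s DNS.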