Squashing Intermittent Tests With ntimes

Today I want to share a tool that has been indispensable for finding and fixing intermittent tests in my test suites. It’s a little script I wrote, called ntimes.

Based on the commit logs of my dotfiles repository, until about 2014 my way of running the same command many times was to press up in my terminal and hit enter. While effective, this approach has the disadvantage of requiring me to be present at the machine, doing manual work. I thought: there must be a better way.

So, probably by cribbing from somewhere and adding my own extensions, I made a script that runs an arbitrary command-line command multiple times and reports a summary at the end. I invoke it with something like:

$ ntimes 100 rspec spec/models/user_spec.rb:42

This runs that specific RSpec example or block one hundred times. At the end, the script prints how many runs succeeded and how many failed.

I also use macOS’s say command to get some audio feedback during test runs. It would be annoying to have it say “succeeded” after every successful run, but it is useful to know immediately when there is a failure (“guess I didn’t fix the issue…”). So I have it say a quick “failure” whenever the command exits with a non-zero status, and either “Success!” or “At least some failed…” at the end, depending on the overall result. While say couples the script to macOS, you could probably extend it to use a more cross-platform approach.
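The real script has a few more niceties, but a minimal sketch of the behavior described above might look like this:

#!/bin/sh
# ntimes (sketch): run a command N times and tally the failures
n=$1; shift
failures=0
for i in $(seq 1 "$n"); do
  if ! "$@"; then
    failures=$((failures + 1))
    say "failure" &   # immediate audio feedback on a failing run
  fi
done
echo "$((n - failures)) succeeded, $failures failed"
if [ "$failures" -eq 0 ]; then
  say "Success!"
  exit 0
else
  say "At least some failed..."
  exit 1
fi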

Since it operates purely on exit codes, this command can be used to run any program multiple times, not just tests. Even if you don’t care about the count of successes and failures, it is still handy for simple repetition.

ntimes is also quite helpful when taking over an existing code base that might have intermittent tests. And combining git bisect with ntimes as the bisect command lets us see approximately when a test started being flaky. It helps to narrow down the scope of what could be causing the test to be intermittent. If the cause is a test setup issue in another directory, then you might have to run your entire test suite. (This would take much longer, so you might have to run it overnight.)
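For example, assuming ntimes exits non-zero when any iteration fails (as the sketch above does), git bisect run can drive it directly; the known-good ref here is hypothetical:

$ git bisect start
$ git bisect bad                 # current commit has the flaky test
$ git bisect good v1.2           # hypothetical last-known-good ref
$ git bisect run ntimes 20 rspec spec/models/user_spec.rb:42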

Sometimes if I’m worried about a test that I just wrote, I’ll do a proactive ntimes 100 on it just to be sure that I am not committing a test that will soon fail. I generally try to do this if I have really complicated before/after blocks or if I might be polluting global state.

To install ntimes on your machine, download it, make it executable, and then put it somewhere on your PATH. Please let me know if you find it useful!

Softening Statements With Parentheticals

In our Slack organization and in Github pull request reviews, I have noticed a small pattern of using parentheses to soften or clarify the statements that we make. Sometimes it is used by someone in a position of authority to emphasize that a comment is an idea, not a directive. Other times, it signals that something is not critical to address but might be worth looking into.

I originally wanted to call this post “The Shipley Parenthetical” since Kyle uses it all of the time. I didn’t know if he thought of it this way / approves, though. :)

In this post I’ll give a few examples of how this works and some thoughts on it.

Example 1

Capturing a thought that would be helpful or is on the commenter’s mind, while being clear that there is no specific action item around it:

Will avoid further churn, IMO. (Down the road, we may even want a copywriter with clinical knowledge on or near the product team.)

Example 2

Hedging a comment about what might be best with a bit of YAGNI:

That is probably my preferred order overall, with my additional comment that making it data instead of code might sidestep the matching problem entirely if it is worth it. (Not sure if it is yet.)

I think this is a good way of resolving the need to point out something that could be better while not committing us to doing it.

Example 3

A comment by the author of a pull request, after a reviewer suggested removing some code:

This interactor is actually part of the original code. Are we ready to start removing it?

(Not planning to restore here for that reason unless there are objections.)

I like this because it specifies a reason for not removing the code and states what the default behavior will be if no argument against it is raised. It might be even better if it specified a timeline for when the option to respond expires.

Example 4

In a review, after commenting on a specific issue, the reviewer’s next comment is on the same issue in a different place. They said:

(Same thing here, sorry.)

I like this as a way of indicating that the reviewer is being empathetic while still showing that there are a few places where the same problem exists. This approach is better than “WRONG AGAIN” or equivalent statements.

Example 5

Interjecting into a pull request conversation to potentially clarify a reviewer’s comment while still admitting imperfect knowledge:

I think @shipstar was saying for the right hand side of the expression, is there a foo field on baz? Or should it be bar? (At least that is how I read his comment.)

Example 6

Indicating that a comment is non-blocking, but would be nice to have:

(Could also use .reject instead of .select if the intention is filtering. Seems infinitesimally more semantic.)

Any thoughts?

This is kind of a new post format for me. Basically I’m harvesting artifacts from Slack and Github for free blog posts. Not sure that I would try it again, but it might be helpful / interesting to someone. (What did you think?)

Night Working Computer Setup

Although I wrote about how to automatically turn the internet off at night, sometimes my schedule shifts later, or I am trying to get some side project work done and want to burn the midnight oil. In this post I’ll cover what I consider the best tools for an evening computer work environment.1 So what can you do besides changing your text editor colors?

Chrome

I use Chrome as my browser since it has many extensions and doesn’t seem to eat up memory at this point. For the nighttime setup, I’m using one extension to make the new tab page black and another to make most other pages dark.2

Dark Reader

To invert and mute the colors of most pages, I highly recommend the Dark Reader Chrome extension.3 It is excellent. Sites look as good as or better than their brighter counterparts. Github diffs in particular look really good. Here’s a screenshot of it in action:

Github with Dark Reader
Dark Reader Settings

Dark Reader has several hue and brightness settings that you can change (see right image), and you can toggle it globally or for a particular website. Generally I just turn it on globally with alt+shift+d at night and turn it off in the morning with the same shortcut.

It also handles images well. Unlike other plugins, it does not invert image colors; it just mutes them. Not inverting avoids blinding you with what are normally dark images.

I like using Dark Reader and it always makes me nostalgic for a dark theme that I set up on a personal wiki back in the day. I encourage you to install it and try it out on this page. I think it looks pretty cool!

Blank tab plugin

Dark Blank New Tab Page Extension

Generally when you open a new tab, Chrome presents you with your bookmarks or sites you have visited. I prefer a more minimal new tab page, both to avoid distraction and to load more quickly. Normally I opt for Empty New Tab Page, which simplifies the new tab page down to just a blank page. However, this plugin produces a blinding white background, so it is less desirable for evening work. Fortunately, there is a similar and well-named plugin called Empty New Tab Page - Black, which solves this problem by making the new tab black. So I generally have both installed and disable whichever one I am not using at the moment.

Dark console

You can also change the default Chrome DevTools window from the default light background to a dark background. This makes web app debugging at night a bit more palatable.

Chrome DevTools Dark Theme

To change yours:

  • open the DevTools window
  • click on the dots in the upper right corner of the window
  • select “Settings”
  • choose the dark theme from the list of themes

PDF - MuPDF

I was reading Programming Elixir and Programming Phoenix for a side project that I am working on. The key features that I wanted in a night-time replacement for Preview (the native OS X PDF viewer) were:

  • Vim-like navigation (j/k to move up/down, etc.)
  • inverted viewing mode

I looked around for a bit and found MuPDF. It is open source and available on Homebrew (the mupdf package). It gives Vim-like navigation, and its inverted mode is quite good. Here is a side-by-side view of normal and inverted mode:

MuPDF normal mode MuPDF inverted mode

Actually running it is a bit tricky since it is normally a Linux program. The invocation that I found useful is:

mupdf-x11 <filename> &

This opens the file in the X11 version of the program and gives you back shell control (the & runs it in the background). So it is not entirely easy to get running, but I have found it useful. To toggle night mode, just press i and it quickly inverts the colors.
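If the invocation is hard to remember, you could wrap it in a tiny shell function (pdf is a hypothetical name):

# hypothetical convenience wrapper for viewing PDFs with MuPDF
function pdf() {
  mupdf-x11 "$1" &
}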

Monitor setup

There are a few considerations to make with your displays.

Generally, turning down the brightness is useful. The built-in laptop display has keyboard shortcuts, so it is easy to change. Most external monitors are a little trickier, but some have modes that you can configure so you can more easily toggle between day and night settings.

If your external monitor flashes you with a full screen of blue pixels whenever it is unplugged, like mine does, then I would advise turning it off before unplugging it. Blue light is the most harmful to the production of melatonin, which aids sleep and is a powerful antioxidant.

f.lux

Since we are on the topic of blue light, I will mention f.lux. This program automatically shifts the color palette of your monitors toward a redder tint based on the time of day. I am guessing that most people reading this have heard of f.lux, so I won’t cover it much further.

Any other tools?

Do you have a night-time setup for your computer? What tools have you found useful? Thanks!


  1. I use a Macbook Pro, but similar strategies would apply to other computers. 

  2. There is still a flash of white when opening a new tab, when conducting the first URL change (generally a search), or when navigating to a tab that was previously loaded before it applies the dark styles. But overall, they are a great improvement. 

  3. Some pages that are actually Chrome-specific windows won’t be inverted. For example, the Chrome settings tab or any Chrome store pages. This made testing dark plugins a bit trickier because at first I tried testing on the dark plugin’s page instead of a normal browser page. 

Writing Composable Shell Functions for Better Workflows

Recently I finished up some shell functions that help me with common git and testing workflows. They are up on Github, but I wanted to call them out here since they might be helpful to others, and just making something open source does not make it discoverable. I think the philosophies are pretty solid even if you use different tools, and you could write similar functions in Bash or Zsh.

Overview

The general problem that I am trying to solve is that tools like RSpec, git, or RuboCop produce output in a certain format, and I often want to do things with that output. For example, I might want to re-run the RSpec tests that just failed so I can verify fixes more easily.1 However, RSpec’s output format is not easily consumable. For example:

...
Failed examples:

rspec ./code/fail_spec.rb:2 # something failed
rspec ./code/fail_spec.rb:8 # yet another failure
...

If I want to re-run these two tests, I could copy the two lines and paste them into my terminal. This would have a couple of downsides, though. One is that I would need to spin up one RSpec process for each test that failed, which is time-prohibitive if the project loads the Rails environment. It also prevents me from keeping a reliable list of the tests that failed so I can repeat the process. Last, I’d ideally like to use a faster test runner like Zeus or Spring. So my real goal is to re-run the failing tests as quickly as possible.

One approach that I took for a few years was to copy the output, paste it into an editor (Vim), and then use macros or other commands to munge it into the format that I wanted. However, this is time-consuming and potentially error-prone. It is also wasteful, since I need to redo it each time I want to transform the output, and often I don’t have the editor macros saved. It can be nice to have the list of tests to retry in an external editor so I can check them off, but I prefer to skip the intermediate step.

Solution

The specific solution I made to solve this problem was to create a shell function that I called respec:

# copy the rspec failures, and this will rerun them as one command
function respec() {
  pbpaste | \
    cut -d ' ' -f 2 | \
    sort | \
    xargs rspec
}

First, I manually copy the tests that I want to run again. Using iTerm2, this is as simple as selecting the failure summary text, since selections are automatically copied to the clipboard. pbpaste then echoes the contents of the system clipboard. From there, we want a list of the tests that failed so we can run them again. The format of a failing test line is:

rspec ./code/fail_spec.rb:2 # something failed
...

To be able to run any test that failed, we want:

./code/fail_spec.rb:2

One approach to solving this problem is to split the line by spaces and take the second item. We could do this with the cut command (saying the delimiter is a space and we want the second field) or with awk (awk '{print $2}').2 From there, we sort the files, since they might not be sorted depending on our test strategy, and pipe the resulting tests to rspec using xargs.
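You can check the transformation on a single failure line:

$ echo 'rspec ./code/fail_spec.rb:2 # something failed' | cut -d ' ' -f 2
./code/fail_spec.rb:2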

Expanding on this

Something else we might want to do with the RSpec output is edit the files that have failing tests. It could be that the tests need to be updated, or that having them open gives us faster access to the application files if we are using something like vim-projectionist. We don’t want all of the test failures here, just the files that failed. So we can create a similar function that edits the files with failing tests:

# after copying the rspec failures, this will edit the files that had failures
function espec() {
  pbpaste | \
    cut -d ' ' -f 2 | \
    cut -d ':' -f 1 | \
    sort | \
    uniq | \
    xargs $EDITOR
}

Extracting common functions

There is some duplication in the current code: we’re always pulling the test failures from the clipboard, extracting the file names and line numbers, and then sorting the tests. To DRY up the functions, we can create an intermediate function that takes care of getting the test failures:

# returns the list of filenames from rspec output format
function rspec_paste() {
  pbpaste | cut -d ' ' -f 2 | sort
}

Then we can call this function inside of our existing functions:

function respec() {
  rspec_paste | xargs rspec
}
function espec() {
  rspec_paste | cut -d ':' -f 1 | uniq | xargs $EDITOR
}

Much simpler and easier to read.

Running commands on modified files

Another common workflow that I have is to run any tests that have changed in source control. Back in the day, I would do a git status, copy the test files that I wanted to retest, and paste them after an rspec command on the command line. Again, this is suboptimal.

Since I’ve already shown function extraction above, let’s cut to the chase:

# return all files that are changed but not deleted in git
# prints the last whitespace-separated field so that a rename like
#   R  spec/1_spec.rb -> features/2_spec.rb
# yields the new path
function modified_files() {
  git status --porcelain | \
    grep -v -e '^[ ]*D' | \
    awk '{print $NF}' | \
    sort | \
    uniq
}

This function will print out any files that were modified but not deleted. The distinction matters because if we deleted a test file, we don’t want to try to run it; RSpec will immediately fail if it is given a file that does not exist. For more information on the porcelain output format, check out the git docs.
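For example, given some hypothetical changes, the deleted spec is filtered out and the rename resolves to its new path:

$ git status --porcelain
 D spec/old_spec.rb
 M app/models/user.rb
?? spec/models/user_spec.rb
R  spec/1_spec.rb -> features/2_spec.rb

Here modified_files would print app/models/user.rb, features/2_spec.rb, and spec/models/user_spec.rb.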

On its own, modified_files is useless. But now we can make functions that work with it. For example, we can run RSpec on any spec files that were changed:

function gspec() {
  modified_files | \
    grep -e "_spec\.rb$" | \
    xargs best_rspec
}

So basically we take any modified files, remove any that aren’t spec files, and run them through the best_rspec script. What’s that, you ask?

best_rspec

This is a small script that I put on my PATH to figure out the fastest available RSpec binary and use it to run the tests. I’ve also aliased it to r, for faster typing and to avoid needing to think about which environment I am running the tests in.

#!/bin/sh

if [ -f bin/rspec ]; then
  # use the Spring-enabled binstub if it is around
  ./bin/rspec "$@"
elif [ -S .zeus.sock ]; then
  echo "Running tests with Zeus"
  zeus test "$@"
else
  echo 'Running with naked `rspec`'
  rspec "$@"
fi

If we have Spring configured for the project, use that since it will be no slower than using the normal RSpec binary. If we have a Zeus socket open, then use that. Otherwise, just use the default rspec command.

I updated respec, gspec, and any other commands that run RSpec tests to use this, which gives them the same behavior for free. Choosing which binary to use is something I previously would have spent a brain cycle or two on; now it just happens automatically regardless of what project I am working on. I consider it a win.
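The alias and the updated respec are just one-liners (a sketch; gspec above already pipes into best_rspec):

alias r=best_rspec

function respec() {
  rspec_paste | xargs best_rspec
}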

Putting it all together

Let’s discuss a common flow that uses these different operations. First, I check out a new branch and want to add a feature. I write some tests and watch them fail, then make them pass. Before committing, I can make sure that all of the test files that I’ve added or modified pass by running gspec.3 When I think I’m done with my feature, I run the test suite, and there are some test failures. After copying the test failure lines at the bottom of the RSpec output, I run respec to quickly confirm that they aren’t intermittent tests. Then I run espec to open the files up and fix the issues. I run respec each time I think I’ve fixed some of the tests, and the failure output hopefully shrinks each time, so I am running fewer tests on each pass. Finally, I run gspec again to make sure that the tests are indeed passing. Each time, we are using the Spring test runner to avoid spending seconds reloading the Rails environment.

Food for thought

Consider what workflows you have after seeing a program’s output, and how you might create small, composable functions or programs to help automate them. In addition to RSpec and other testing tools, you might consider the output of code linters or static analyzers, code deployment tools, and so forth.

My configuration is up on Github; check it out for some additional helpers that I use to re-run Cucumber tests and to work with RuboCop output when re-running that tool.


  1. I realize that there are some plugins that will re-run the last failed tests, but I wanted a more general solution to this problem. 

  2. If there are spaces in your filenames, I’m not sure that either approach will work well. Also, you should probably reconsider your naming conventions! :) 

  3. I’d like to extend the gspec command to also run tests for application files that changed. For example, if git believes app/controllers/foo_controller.rb has been modified, I’d like to run spec/controllers/foo_controller_spec.rb even if it wasn’t changed. This should be fairly straightforward to do since Rails has a consistent directory format. This change would likely result in less manual effort on my part, since I often want to run the tests for any application files that changed. 
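     A rough sketch of that mapping, assuming the conventional Rails layout (spec_for is a hypothetical helper):

     # hypothetical: app/controllers/foo_controller.rb -> spec/controllers/foo_controller_spec.rb
     function spec_for() {
       echo "$1" | sed -e 's|^app/|spec/|' -e 's|\.rb$|_spec.rb|'
     }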

I Gave a Presentation About Redux to Indy.js

After completing a project at work that used the Redux state container, I gave a presentation to the local JavaScript meetup group about my understanding of Redux, showing how it works in the app that we built.

Link to the slides.

Link to a video of the presentation.

Overall, I’d say that Redux was a useful tool, and we’d like to use it on new projects going forward. I think there are some good patterns there that we didn’t fully realize the benefit of because the app is so simple (basically a form wizard signup application). Obviously I said more in the presentation, so if you are interested in hearing more of my thoughts on the subject, check out the video above.