Last Friday I attended the full-day Indiana Workshops on Software Testing conference with Jen, Howard, Beth, Jeremy, and about ten other people. The session was very helpful for gauging my own knowledge of testing, picking up new skills and previously unfamiliar technologies, and meeting people in the Indianapolis testing community. I presented second, discussing my experiences with Test-Driven Development on a recent Rails project.
The first experience report, in which Dave Christiansen covered using Selenium for automated integration testing on a Rails app, probably had the most relevance to things I have directly worked with. On the production Rails project I worked on, we also used Selenium for automated integration testing as verification of the development process. What was noteworthy to me about Dave's approach was that instead of writing and executing these tests only in the test phase, they were written and executed throughout the development lifecycle. He wrote a wrapper to generate RSpec tests from the Selenium IDE, refactored to DRY up the code, and then had the team write their own integration tests as they were developing. Since the tests were written in RSpec, they would actually be run whenever code was checked in (continuous integration). After a while, they had built up enough helpers that they stopped using Selenium at all except for replicating bugs.
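The refactoring described above — recording raw Selenium-IDE-style steps, then extracting them into shared named helpers — can be sketched in plain Ruby. Everything below is my own illustration, not from the talk: `RecordingDriver` is a stand-in that just records commands so the example runs without a browser, and the `IntegrationHelpers`/`log_in_as` names are invented. A real version would delegate to the Selenium client instead.

```ruby
# A stand-in "driver" that records the raw Selenium-style commands it
# receives, so the example runs without a browser or Selenium server.
class RecordingDriver
  attr_reader :steps

  def initialize
    @steps = []
  end

  def open(path)
    @steps << [:open, path]
  end

  def type(locator, text)
    @steps << [:type, locator, text]
  end

  def click(locator)
    @steps << [:click, locator]
  end
end

# DRYed-up helpers: instead of each spec repeating the raw open/type/click
# sequence exported from the Selenium IDE, the team shares named steps.
module IntegrationHelpers
  def log_in_as(driver, user, password)
    driver.open("/login")
    driver.type("username", user)
    driver.type("password", password)
    driver.click("submit")
  end
end

include IntegrationHelpers

driver = RecordingDriver.new
log_in_as(driver, "alice", "secret")
puts driver.steps.length  # four recorded commands
```

Once helpers like this accumulate, most specs never touch the raw command layer — which is presumably how the team ended up not needing Selenium except to replicate bugs.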
This seems like a nice contrast to the approach we took, for two reasons. First, integration tests automatically kept pace with development because the team could run them in their own environments; a broken integration test was immediately visible. Contrast that with having to rewrite tests several days after the structure changed, for no apparent reason. Second, they ran a headless Selenium Grid, which got their integration suite running in about five minutes. I remember this process taking quite a bit longer for us. Part of that was likely due to supporting IE6, IE7, and Firefox, but it seems like it still could have been sped up.
But overall, it's nice to take the integration testing process and decentralize it. This might not work for all types of projects. Having developers in charge of their own integration tests seems like a conflict of interest, but I suppose that solid reviews and other testing would reduce defects. Depending on the project, I can see that allowing the development and test stages to work more closely together is a plus.
Lunch was a nice experience; it was fun hearing about things people had done in the past and how those experiences shaped their current views on software development and testing.
People seemed to like using Ruby for data munging and actual testing. Over half of the presentations used some form of Ruby for development, testing, or tools. You could probably get away with using Perl or something similar for this, but you can also probably pick up most of Ruby in a couple of days if you have used Perl before.
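As a small illustration of the kind of data munging Ruby makes quick work of (the log format and file names here are invented, not from any presentation), tallying failures per spec file from a plain-text results log takes only a few lines:

```ruby
# Tally failures per test file from lines like "login_spec.rb: pass".
# The log format and file names are hypothetical, for illustration only.
log = <<~LOG
  login_spec.rb: pass
  cart_spec.rb: fail
  login_spec.rb: fail
  cart_spec.rb: pass
  cart_spec.rb: fail
LOG

failures = Hash.new(0)
log.each_line do |line|
  file, result = line.strip.split(": ")
  failures[file] += 1 if result == "fail"
end

puts failures.inspect  # {"cart_spec.rb"=>2, "login_spec.rb"=>1}
```

The default-valued `Hash.new(0)` idiom is a big part of why this style of throwaway munging script is so pleasant in Ruby.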
There was emphasis on abstracting tests as much as possible to keep development agile. This made a lot of sense to me, because one of the common pain points people expressed was brittle tests: when the application changed, the tests had to be scrapped. Treating testing as a valuable software architecture practice in itself makes more sense than seeing it as isolated.
Ideas and technologies
One interesting idea from the "Open Season" part of each presentation was using thumbnails for quick manual content tests. Instead of specifying a test around screenshots or some other brittle method, print an expected screenshot and the current screenshot of each screen side by side on a single sheet of paper. Someone can then scan a whole page of these relatively quickly, since humans are naturally good at spotting differences between images and judging whether they matter.
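A minimal sketch of how such comparison sheets could be generated, assuming you already have expected and current screenshots on disk — the file names and HTML layout below are my assumptions, not anything described at the workshop:

```ruby
# Build a simple HTML contact sheet pairing each expected screenshot
# with its current counterpart, for quick visual scanning by a human.
# The screenshot paths are hypothetical.
pairs = [
  ["expected/home.png",  "current/home.png"],
  ["expected/login.png", "current/login.png"],
]

rows = pairs.map do |expected, current|
  "<tr><td><img src='#{expected}' width='200'></td>" \
  "<td><img src='#{current}' width='200'></td></tr>"
end

sheet = "<table><tr><th>Expected</th><th>Current</th></tr>#{rows.join}</table>"
File.write("contact_sheet.html", sheet)
```

Print the resulting page (or several screens' worth per sheet) and the comparison becomes a quick visual scan rather than a scripted pixel check.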
Exploratory testing and pair testing were mentioned a few times, and I had never heard of either before. James Bach was suggested as a resource for both topics; his site contains some newer articles that cover them.
I really appreciated the constructive nature of the conference. Everyone was very knowledgeable and had something to contribute. All questions were asked kindly, and there was a feeling of building on what others had said. There was a perceptible energy in the room during the concluding sessions; people said they were fired up to go to work on Monday and tackle a problem they had been facing using new approaches and technologies.
Something that stood out to me was the wide variation in the terms people used for testing practices. I am obviously not an expert at testing, so I had some word hangups. For example, whether something was called a unit test or an integration test seemed to be person-specific, if those terms were used at all. I can see how a good stream of communication with team members, especially new ones, is very important in testing. Obviously this applies to software development in general.
While I have never been a dedicated tester, I enjoyed the problems people presented and their creative approaches to solving them. I walked out of the conference with a lot more respect for software testers because of the challenges they described, both technical and organizational.