As staff members of the Eclipse Foundation, we faced the challenge of automating many of our
manual workflows while minimizing the cost of doing so. Having seen similar efforts fail, we
chose a new design point that we believed would entice more of our stakeholders to participate.
This paper is about that design point and its success to date.
Our solution is to script and visualize long-running transactions by simulating them
using test databases and capturing the output for display.
The simulation uses as much of the real application code as possible, replacing only the real databases with test databases, the user input routines with the test scripts, and the browser output with output stored in buffers for post-processing and evaluation. Consequently, the simulated transactions are the real transactions, and the scripts are testing the real application code and business logic.
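To make this concrete, here is a minimal sketch of the kind of dependency substitution described above. All class and interface names (TestDatabase, ScriptedInput, BufferedOutput, CommitterElection) are hypothetical illustrations, not the actual Eclipse Foundation code; the point is that the production transaction logic runs unchanged while only its collaborators are swapped.

```java
import java.util.Iterator;
import java.util.List;

// Hypothetical collaborator interfaces used by the application code.
interface Database { }                                   // real persistence layer
interface UserInput { String nextResponse(String prompt); }
interface Output { void render(String html); }

class TestDatabase implements Database { }               // in-memory stand-in seeded for the scenario

class ScriptedInput implements UserInput {               // test script replaces interactive input
    private final Iterator<String> script;
    ScriptedInput(List<String> responses) { script = responses.iterator(); }
    public String nextResponse(String prompt) { return script.next(); }
}

class BufferedOutput implements Output {                 // browser output captured for post-processing
    final StringBuilder buffer = new StringBuilder();
    public void render(String html) { buffer.append(html); }
}

// A hypothetical long-running transaction; it only sees its collaborators,
// so a simulated run exercises exactly the same code path as a real run.
class CommitterElection {
    private final Database db; private final UserInput in; private final Output out;
    CommitterElection(Database db, UserInput in, Output out) { this.db = db; this.in = in; this.out = out; }
    void execute() { out.render("<p>Vote recorded: " + in.nextResponse("Approve?") + "</p>"); }
}

class Simulator {
    static BufferedOutput run(List<String> scriptedResponses) {
        BufferedOutput out = new BufferedOutput();
        new CommitterElection(new TestDatabase(), new ScriptedInput(scriptedResponses), out).execute();
        return out;                                      // captured output goes to the visualizer
    }
}
```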
Because our transactions can take weeks or months to complete, we obviously run our simulations faster than real time using a simulated clock. The visualizer post-processes the results of the simulation to produce a two-dimensional overview of the progression of the transaction through time, using a page format inspired by the "swim lane diagrams" made popular in business process reengineering.
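One way to support faster-than-real-time runs is to hide the system clock behind an abstraction; the sketch below, with hypothetical names and under the assumption that the application schedules its wake-ups through this interface, shows a real clock that actually waits and a simulated clock that simply jumps forward.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical clock abstraction: the application asks it for "now" and
// schedules wake-ups through it, so a simulation can skip weeks instantly.
interface Clock {
    Instant now();
    void sleepUntil(Instant wakeTime);
}

class RealClock implements Clock {
    public Instant now() { return Instant.now(); }
    public void sleepUntil(Instant wakeTime) {
        try { Thread.sleep(Math.max(0, Duration.between(Instant.now(), wakeTime).toMillis())); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}

class SimulatedClock implements Clock {
    private Instant current;
    SimulatedClock(Instant start) { current = start; }
    public Instant now() { return current; }
    public void sleepUntil(Instant wakeTime) {           // advance instantly instead of waiting
        if (wakeTime.isAfter(current)) current = wakeTime;
    }
}
```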
The overview contains a column for each person involved in the transaction and icons with fly-out details representing important interactions. This format has worked well for our needs, but could easily be substituted with some other format without invalidating the rest of our results or even requiring modification of the test scripts.
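The post-processing step behind this layout can be pictured as grouping captured events into one lane per participant. The following is a minimal sketch under assumed names (Event, SwimLaneOverview); it is not the actual visualizer, only an illustration of how the captured output maps onto the column-per-person overview.

```java
import java.util.*;

// Hypothetical captured event: who acted, when, and a short summary for the icon's fly-out.
record Event(String participant, java.time.Instant when, String summary) { }

class SwimLaneOverview {
    // Group events into one lane per participant, ordered by time, ready to be
    // rendered as a column of icons with fly-out details.
    static Map<String, List<Event>> lanes(List<Event> events) {
        Map<String, List<Event>> lanes = new LinkedHashMap<>();
        events.stream()
              .sorted(Comparator.comparing(Event::when))
              .forEach(e -> lanes.computeIfAbsent(e.participant(), k -> new ArrayList<>()).add(e));
        return lanes;
    }
}
```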
The simulator and visualizer are used in development and testing, but are also deployed with the finished application as a form of documentation. This use of the tests as documentation has the natural advantage of the documentation being automatically maintained, but it also forces us to make design choices so that the tests actually make good documentation.
Our previous attempts to provide this same “the tests are the documentation” approach failed because the tests contained too much detail to be good documentation; detail that was necessary for testing, but overwhelming as documentation in the sense of “can't see the forest for the trees”. Our solution here has a carefully chosen set of abstractions that make the scripts both useful as tests and useful as documentation.