Much of my experience with regression tests has been for code that was not volatile. I’ve built regression tests during the course of testing a feature, and found plenty of bugs during that process, but by the time the feature was done and I was done with it, future runs of the regression tests mostly didn’t expose problems. Occasionally there would be a feature interaction with something new, or a change in the way the feature was supposed to work, but I had gotten used to the idea that most of the value provided by a regression suite came from getting the suite to work in the first place. Some of that was due to the old-school development process, which meant that I usually wasn’t able to work with a feature until it was complete or nearly so. But even when I was involved much earlier, the pattern held: once an automated test produced the desired result, it mostly continued to do so, and my Dev gave me advance notice of specification and implementation changes. “Hey, this doesn’t work any more” was quite rare.
That is not true for my current product. It is under active development, has a ways to go yet before it is a shippable product, and there’s a certain amount of “hey, this thing we implemented, it isn’t right, we need it to behave this other way instead”. So, as I implement regression tests for a major subsystem, the initial value still comes from the problems I find while getting the suite’s test cases to work. But my suite has been running nightly against the latest smoke-tested build for about two months now, gradually increasing its coverage, and I’ve already gotten a few “hey, this thing doesn’t work any more” incidents. One was an implementation change that happened while I was on vacation, but others have been “yeah, that’s a bug that got introduced”. Cool!
That might not seem like a big deal to many people, because duh, that’s what automated regression testing is supposed to be for. Well, it’s a new thing to me, and I’m happy that my suite is finding those problems.
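In case it’s useful, here is a minimal sketch of what a “nightly run against the latest smoke-tested build” can look like. Everything in it is an assumption for illustration (the build directory, the TARGET_BUILD variable, pytest as the runner), not a description of my actual setup.

```python
import os
import subprocess
import sys
from pathlib import Path

# Hypothetical location where smoke-tested builds get published.
BUILDS_DIR = Path("/builds/smoke-tested")


def latest_build() -> Path:
    """Pick the most recently modified smoke-tested build."""
    builds = sorted(BUILDS_DIR.iterdir(), key=lambda p: p.stat().st_mtime)
    if not builds:
        sys.exit("no smoke-tested builds found")
    return builds[-1]


def run_regression(build: Path) -> int:
    """Run the regression suite (pytest here) against the chosen build."""
    # The suite is assumed to read TARGET_BUILD to know which build to exercise.
    env = {**os.environ, "TARGET_BUILD": str(build)}
    result = subprocess.run(["pytest", "tests/regression"], env=env, check=False)
    return result.returncode


if __name__ == "__main__":
    # A scheduler (cron, CI, whatever) invokes this once a night; a nonzero
    # exit code is the "hey, this doesn't work any more" signal.
    sys.exit(run_regression(latest_build()))
```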