Unit Testing: Waste Of Time? Discuss

I’m sure it has its time and place… it seems to me though that it is mainly being used at the wrong time in the wrong places.

Cas :slight_smile:

It isn’t much of a discussion if it tends to boil down to ‘it depends’.

Having said that - who’s trying to convince who of what again ? :slight_smile:

it depends :persecutioncomplex:

Heh, no convincing, the topic is open for discussion :slight_smile: I was interested to hear others’ anecdotes and experiences with it. I’ve seen it working well myself, but mostly what I’ve seen is software that still has just as many bugs in it but takes twice as long to make. I was hoping I was wrong and I’ve just been unlucky, because all the contracts out there at the moment are all “TDD”, “Agile”, “Scrum” etc. (what I would have called “have you actually run it”, “just make it work by Friday” and “ordinary meetings with everyone else like we’ve always done” back in the day)

Cas :slight_smile:

Also for languages with explicit memory management, unit tests can be pretty powerful when wrapped in a runtime memory analysis tool: detecting uninitialized memory, dangling memory etc.

At work, we had somewhere around 11k unit tests last time I checked; granted, most are more system/integration tests than plain unit tests. The test environment gets deployed across a range of different configurations (OS, database vendor, application container, JVM version etc.); it’s quite common (and annoying) that some errors are only reproducible in very specific environments/configurations.

Heh, many years ago I wrote a payroll + reports application for a small casino (a type of casino-in-bars thing that’s quite common here) - it was my second project as a solo developer and it took place during the time when I thought unit testing was mostly a tool for consultants to embellish their invoices.

Turns out that when your software is essentially dealing with cash flow, you rapidly spin out of your comfort zone every time something looks quirky. After going live and finding the first couple of bugs, every damn anomaly starts looking suspect.

Once the worst of my paranoia had passed, in part thanks to not uncovering any really serious bugs (aside from a few embarrassing mistakes), the income tax laws changed and I had to incorporate the changes into the software. Naturally, the application still had to be able to generate reports the old way too. And so began another stretch of weeks with too much stress and anxiety.

The above anecdote is of course on the extreme end of stupidity in regards to unit testing. Don’t ask me why I didn’t start writing unit tests ASAP upon realizing the error of my approach, but I didn’t.

For code out in the wild, only artemis-odb is unit tested: I started doing it when I cleaned up the bugs from vanilla artemis; once dev on new features began, I tended to write tests for them too. I wasn’t aiming for complete coverage, but I think it’s mostly complete. When refactoring, having semi-good coverage helps a lot - it was pretty much what made some refactoring work possible without messing things up (like when changing the way entity groups were identified within the framework).

Code that writes code.

Testing software is a good idea - whenever you write a non-trivial bit of code you should write a wee main() to reassure yourself that it does what you think it does.
Writing that main() as a @Test just means that you can keep it around and run it again really easily, which is probably a good thing. As EgonOlsen said, tests can be excellent documentation of intended behaviour - passing tests will never go stale.
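For instance, something like this - a minimal, hypothetical sketch (names made up, JUnit 4 assumed on the classpath):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DamageCalcTest {

    // Hypothetical bit of game logic standing in for "a non-trivial bit of code".
    static int damage(int attack, int armour) {
        return Math.max(0, attack - armour);
    }

    // The "wee main()" version: run it once, eyeball the output, move on.
    public static void main(String[] args) {
        System.out.println(damage(10, 3)); // expect 7
    }

    // The same check kept around as a @Test, so it re-runs for free on every build.
    @Test
    public void damageIsAttackMinusArmour() {
        assertEquals(7, damage(10, 3));
        assertEquals(0, damage(2, 5)); // never goes negative
    }
}
```

The main() gets you the same reassurance exactly once; the @Test version keeps paying out on every build.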

Unfortunately it’s really easy to measure test coverage and really tricky to measure test quality. Even if there aren’t explicit targets for test coverage, it’s very easy for devs to fall into coverage-chasing behaviour - the compulsion for 100% coverage essentially gamifies poor-quality tests that only serve to slow down development.

To answer the question posed: It Depends. Are you testing complex things that are easily broken in subtle ways, or are you chasing 100% coverage on the Jenkins dashboard?

If you’ll allow me to dream, then I’d point to ML. ‘Learning from examples’ is just a kind of TDD. Examples of the target program’s behaviour are in fact the tests. Thus the process of developing the software becomes a generate-and-test process. Whether these ideas could ever be reified in some real, industrial-strength frameworks, though? I don’t know. Maybe they’ll spread in other forms in some more specific technologies… Excuse me if I’ve brought a cloud of dust into this discussion :slight_smile: :)

In principle, testing is a great idea, but in practice it is quite hard to make it really effective:

  • You could write tests for every class, but you aren’t then testing how the application works as a whole
  • It’s very hard to create every possible type of input for a complex system, and to then have something that can test all the possible output. OpenGL is a great example of output that is very hard to test
  • If you don’t maintain tests, they can very easily get out of date
  • The effort vs reward ratio is very biased towards effort

Where I work everyone agrees test cases are a good thing, but in practice none are actually written.

The question has nothing to do with ensuring a piece of code is correct to its contract. That’s a different topic entirely.

Unit testing is a battery of black-box tests which attempts to ensure contracts are fulfilled. This is pretty much only useful in very narrow situations. Code that writes code is always one (languages, meta-programming, expression rewriting, etc.). Serious numeric code is another, as are GCs of any complexity. The vast majority of code (even dynamic and functional) doesn’t really need it.

The notion that one can prevent bugs by design is a fail.

So I’m wondering: who actually started the myth that it was a good thing to be doing, why did they conclude that, why has everyone taken up the mantra and repeated it like dogma, and, most importantly for software engineers, why does nobody seem to be questioning it?

Cas :slight_smile:

Not quite the same thing as unit testing, but I regret not implementing a system for regression testing my current (eternal, undying) game.

It’s a bit of an unusual case though.

The game comprises dozens of rooms, each involving a different puzzle (e.g., shoot the switch, kill the monsters, and run across the bridge before it vanishes). I’ve tested each of the rooms pretty exhaustively to make sure they can only be completed in the way I expect (e.g., it’s impossible to cross the bridge while the monsters are still alive). There’s a good chance the game would crash if the player managed to complete a room in an unexpected way.

Thing is, it’s now very difficult for me to change some parts of the code. For instance, an accidental change to how the monsters behave could break one of the puzzles and, without going back and thoroughly retesting every room, I’d never notice.

If I’d thought ahead, I would’ve set up a system to record my control inputs (plus initial state, random seed, final state, etc.) while testing each room. Then I could automatically replay all those hours and hours of testing and find out quickly if any of the older puzzles have been broken.
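Something along these lines is what I have in mind - a rough sketch only, with made-up names standing in for the real game classes:

```java
import java.util.List;
import java.util.Random;

public class RoomReplay {

    // Stand-in for one puzzle room: deterministic given a seed and a stream of inputs.
    static final class Room {
        private final Random rng;
        private long stateHash = 17;

        Room(long seed) { this.rng = new Random(seed); }

        // Fixed-timestep update, no rendering; fold the input and rng into the state.
        void update(int input) {
            stateHash = 31 * stateHash + input + rng.nextInt(1000);
        }

        long stateHash() { return stateHash; }
    }

    // Replay a recorded session and compare against the final state captured back then.
    static boolean replayMatches(long seed, List<Integer> inputs, long expectedFinalState) {
        Room room = new Room(seed);
        for (int input : inputs) {
            room.update(input);
        }
        return room.stateHash() == expectedFinalState;
    }

    public static void main(String[] args) {
        // "Record": play through once and note the seed, inputs and final state.
        long seed = 42L;
        List<Integer> inputs = List.of(1, 0, 2, 2, 1);
        Room recorded = new Room(seed);
        inputs.forEach(recorded::update);
        long expected = recorded.stateHash();

        // Later, after tweaking monster behaviour etc., replay and compare.
        System.out.println("room still completes the same way: "
                + replayMatches(seed, inputs, expected));
    }
}
```

As long as the simulation is deterministic for a given seed and input stream, replaying old recordings and diffing the final state would catch exactly the “accidental monster tweak breaks an old puzzle” case.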

But, yeah, aside from that, testing’s for wimps. :wink:

I think it’s safe to conclude that unit tests have their place for code you don’t trust.

For Roquen, this mostly means code generating code.
For me, this mostly means code modifying code.
For somebody else this may mean testing that ArrayList actually works - because they feel like it is surely to blame for a crash in their cube renderer.
For yet another person the mistrust lies in dealing with a blackbox or dealing with code from a coworker known to have had a hangover when he wrote that code.

Somehow unit-tests and TDD are used interchangeably. I don’t feel like going into details, so… they are different, mkay? :slight_smile:

For anything related to finances, health or security, thorough unit tests should be a matter of course.

For development, they can be a big time saver while developing pure logic unrelated to frontend or realtime stuff, instead of relaunching the application after each change and clicking through to the desired place. They also remove side effects from other application parts you’re currently not interested in.

And when exaggerated, they fit perfectly into the bloated development process of many J2EE business application projects…

Note: from my perspective, these two are the same. And really my example of GCs follows the same reasoning: code which is performing complex structural changes via a set of rules, transformations, etc. As a personal example, I have some Mathematica libraries which manipulate specialized forms of equations. The “test-units” associated with a given library are ~50x larger than the library they’re testing. Something as simple as swapping the ordering of two valid rules can cause things to break.

The real problem here is that there are no good programming languages for these things to be written in. It’s rather dumb that checking of invariants is outside the programming language itself. I’ve never looked at any of them seriously, but I know there are some experimental languages which are rolling in automatic proofs and theorem proving.

Pretty much all major areas of CS are suffering from this.

TDD = Code a test, then build some code to pass that test (I don’t know anyone who does this)
Unit testing (manual) = You test your own code after you code (most people do this in some basic form)
Unit testing (automated) = A re-runnable set of tests, probably done in JUnit / JBehave or something similar (more people do this than TDD, but fewer than manual unit testing)
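For anyone who hasn’t seen the first style in the flesh, the loop looks roughly like this (a hypothetical, minimal example, JUnit 4 assumed): write the failing test first, then write just enough code to make it pass, then refactor.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ScoreTest {

    // Step 1: the test comes first and fails (or doesn't even compile yet).
    @Test
    public void bonusDoublesTheScore() {
        assertEquals(200, Score.withBonus(100));
    }

    // Step 2: the simplest production code that turns the test green.
    static final class Score {
        static int withBonus(int base) {
            return base * 2;
        }
    }
}
```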