Test-Driven Development (TDD) is a powerful and beneficial approach to developing software. I routinely recommend it as a solution to untested and hard-to-maintain software. However, every rule has its exceptions, and there are occasions when a pure TDD approach is not very helpful.
Mark Needham recently wrote a blog post on the subject, in which he describes a kind of “Spike Driven Development”: Coding: Spike Driven Development at Mark Needham.
I tend to think of this as a kind of experiment-driven development. I start by choosing a goal. Typically this is either finding out about some murky or unknown aspect of a system, such as interfacing with a third party, or improving a variable quantity, such as performance. This in turn leads to thinking about how to assess whether we have done enough to meet the goal, and how much time to allocate to the exercise. Finally comes a period of playing, following where each trial leads. The important bit is to repeatedly check progress against the goal – that’s what links it to the aspects of TDD which work well.
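To make the “check progress against the goal” step concrete, here’s a minimal sketch of what an executable goal check might look like when the goal is performance. This is purely hypothetical Java/JUnit: the ReportGenerator class, its method, and the two-second threshold are all invented for the example.

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class ReportPerformanceGoalTest {

    // Hypothetical goal for the spike: generate the monthly report
    // in under two seconds. The threshold is invented for this sketch.
    private static final long GOAL_MILLIS = 2000;

    // Stand-in for the real code under experiment.
    static class ReportGenerator {
        void generateMonthlyReport() {
            // ... the code whose performance we are experimenting with ...
        }
    }

    @Test
    public void reportGenerationMeetsGoal() {
        ReportGenerator generator = new ReportGenerator();

        long start = System.currentTimeMillis();
        generator.generateMonthlyReport();
        long elapsed = System.currentTimeMillis() - start;

        // The goal check: run after each experiment in the spike to
        // see whether we have done enough, or whether the remaining
        // timebox should be spent on another trial.
        assertTrue("Took " + elapsed + "ms; goal is " + GOAL_MILLIS + "ms",
                elapsed <= GOAL_MILLIS);
    }
}
```

Unlike a TDD unit test, a check like this need not survive the spike; its only job is to tell us whether the goal has been met before the allotted time runs out.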
This has happened on most reasonably sized projects I have worked on. The benefits of test coverage and continuous integration are obvious and pay back immediately. But, somehow, as the project grows and diversifies, a point is reached where the complexity and run time of the CI process begin to slow down development rather than assist it.
Jez Humble has put together some interesting thoughts on how to deal with this issue. Read more at Deployment pipeline anti-patterns.
There has been a lot of speculation and worry about Oracle’s acquisition of Sun, so it’s nice to see some positive aspects. Here’s one about an improved attitude to automated testing.
One of the results of the Oracle purchase of Sun has been an increased focus on testing — not that we didn’t test GlassFish before, but it was mostly manual in my area of the server.
– from Jason Lee’s blog, Coming Up for Air.
A conference presentation from RubyFringe, designed to be contentious. There are some good points, particularly about the way that different approaches to testing can complement each other, but I think he misses the point about TDD when he lumps it in with developer unit testing and ignores the design aspects of the technique.
I sometimes have to suppress a shudder when people use the term “best practice”. Despite its positive-sounding name, the idea of “best practice” is almost always used in a way that is restrictive rather than enabling. Declaring one approach or solution to be “best practice” by implication shuts out other answers.
I will admit that for some (very narrow) fields there can be a common understanding of the one best way to do something, but this is often so well understood that it does not even get a name. Walking on your feet is generally better than walking on your hands or knees, for example, but I have never met anyone who referred to foot-walking as “best practice”.
In my world of software development, where the landscape changes at a moment’s notice, naming something as “best practice” is tantamount to declaring it obsolete. Yet large numbers of software developers still numbly follow its lead.
James Bach tackles a similar issue in testing: James Bach’s Blog » Blog Archive » The Great Implication of Context-Driven Methodology.
Every now and then we discuss ways of better automating the manual tests which accompany our web applications. This is especially pertinent right now, as both the development and test teams have recently been reduced in size. We have had some success with Watir in the past, but it has always been dependent on Internet Explorer and Windows. So it’s cool to read that there is now an equivalent for Chrome.
A hot topic in software development circles at the moment is the interaction and demarcation between “developers” and “testers”. Development uses an agile approach, but it’s sometimes hard to see how this sits with the testing folks, particularly as most stories seem to move snappily through development and then pile up in testing. Sometimes it seems as if a team needs more testers. Sometimes it seems as if a team should reduce or dispense with testers altogether. Sometimes it seems as if roles and responsibilities should change completely.
I wish I knew an answer to this, but at least it is encouraging that others are also considering and writing about these issues.
The British Computer Society (BCS) is supposedly the professional institution in the UK which represents anyone working in the field of Information Technology (IT). I have been an associate member for many years, and most years I consider upgrading my membership to become a full member but have never actually done so. Usually the problem I face is that, despite having worked in software development for at least 15 years, I don’t actually know anyone else in the society to act as my sponsor.
This year, though, I face a different problem – my disappointment with the attitude of the society, particularly as expressed in two recent articles, published on their web site and notified to members via an email “magazine”.
The two articles in question are:
- Employee-driven Corporate Change – Reverse Ludditism?
- ‘Strictly Testing Only’, please – Agile doesn’t work!
I urge you to read them and make up your own mind.
My problem with both of these articles is that they seem to express a one-sided attitude which I had hoped had been banished from the BCS many years ago.
When I first encountered the BCS (during my university time in the mid 1980s) it was seen as the bastion of IT middle management, and in particular middle management in large corporate environments. There seemed little or no representation of or for the people actually doing the work of producing and maintaining information systems, and an almost pathological ignorance of the concerns of contractors and people working for small businesses.
Over the years, my concerns waned. The society seemed to re-invent itself to become more in line with its charter, and thus more inclusive of software developers and contractors. It still struggled with the idea of small IT businesses, and of small departments or lone IT workers in other organisations, but I had hopes that this would improve, too.
But then I received these two articles, and all my concerns about the emphasis of the society came flooding back. Both articles appear riddled with the view that developers are lazy and selfish creatures who will always choose to produce rubbish unless forced by managers or testers. This is an appalling stereotypical slur, and completely at odds with the stated, inclusive, intent of the society.
I understand that both these articles were published through the society’s “blog” initiative, and thus have not had the editorial oversight that one might expect from a more formal publication. However, this does not exempt them from reflecting on the BCS. Since reading these articles a few days ago I have already had one forwarded to me by a colleague. The damage has been done.
Maybe I won’t be upgrading my membership this year, either.
Our agile team is finding some things challenging – in particular, deciding how to prioritise and work on “bugs” in the midst of a pool of prioritised and scheduled feature stories.
“Agile In Action” has a nice summary of an approach to software development. Most agile practitioners won’t find anything to object to.
Our first problem is with the second bullet, “Work with clients every day”. As a team we would love to work with clients every day, but there seems to be a thick layer of representatives and proxies between us and real customers. This is made especially difficult as we are currently serving the needs of several customers with a single product, and resolving customer differences is proving tricky.
Our second problem is with “Fix defects as soon as they’re discovered”. In principle this seems obvious, but the trouble we are having rests on the definition of a defect. As an agile team we keep up-front specification to a minimum, and in effect treat every delivery as a prototype ready for customer feedback. Plenty of people in the company have opinions on such prototypes – things they think it should do, things they think it should not do, and things they think it does wrong. Any of these could be considered a defect (and indeed many of them are raised in our bug tracking system). If we stopped new work to make all these changes we would (a) greatly reduce our feature velocity, (b) bypass the prioritisation process used to “Deliver the client’s highest-value stuff first”, and (c) be left stuck in a mire of conflicting opinions.
We certainly never want to deliver “broken” software, but it’s a fact of life that some “bugs” are lower in priority than others. Some “bugs” are also lower in priority than new features, but that is more of a business decision than a development decision. Working out how to deliver prompt, appropriate, and minimal software in the face of such a slew of opinions is proving contentious.
I’d be interested in reading any suggestions or answers to these problems.
This looks like an interesting project. I’m slightly worried by the way that it seems to embody the “one class === one test” assumption, but if that doesn’t get in the way of other forms of unit testing it could be useful.
I’m currently working on some software which sends notifications to users (using SMS, email, or whatever) and have faced the inevitable problems with testing it. On balance I’d prefer not to receive a test SMS on my mobile phone every time our continuous integration system runs an end-to-end test.
Gojko Adzic has some thoughts about how to make such a system more straightforward to test.
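I don’t know exactly which techniques Gojko recommends, but a common way to make such a system more testable is to hide the sending mechanism behind an interface and inject a recording fake in tests, so an end-to-end CI run never texts a real phone. The sketch below is hypothetical Java; Notifier, RecordingNotifier, and OrderService are names invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

// The production code depends only on this interface, never
// directly on an SMS or email gateway.
interface Notifier {
    void send(String recipient, String message);
}

// Test double: records messages instead of sending them, so CI
// runs stay silent and tests can inspect what would have gone out.
class RecordingNotifier implements Notifier {
    final List<String> sent = new ArrayList<>();

    @Override
    public void send(String recipient, String message) {
        sent.add(recipient + ": " + message);
    }
}

// Example production class: the notifier is injected, so tests can
// substitute the recording fake for the real SMS/email gateway.
class OrderService {
    private final Notifier notifier;

    OrderService(Notifier notifier) {
        this.notifier = notifier;
    }

    void confirmOrder(String customerPhone) {
        // ... business logic ...
        notifier.send(customerPhone, "Your order is confirmed");
    }
}
```

An end-to-end test then constructs OrderService with a RecordingNotifier and asserts on the contents of its sent list, while the live deployment wires in an implementation that talks to the real gateway.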
I develop almost all my code these days using Test-Driven Development (TDD). Taking this approach, I never worry about unit test code coverage, but some developers seem very concerned by it.
One thing that I have never considered doing is retroactively adding tests just to increase some code-coverage metric. Dan Manges has written an interesting explanation of why that might be, using some examples in Ruby.
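Dan’s examples are in Ruby; here is my own hypothetical Java rendering of the central point, which is that a test can execute every line of a method – scoring 100% coverage – without asserting anything about its behaviour.

```java
import org.junit.Test;

public class CoverageIllusionTest {

    // Deliberately buggy: claims to add, actually subtracts.
    static int add(int a, int b) {
        return a - b;
    }

    // This "test" executes every line of add(), so a coverage tool
    // reports the method as fully covered -- yet it asserts nothing,
    // and the bug sails straight through.
    @Test
    public void exercisesAddWithoutCheckingIt() {
        add(2, 2);
    }
}
```

A test written first, TDD-style, would have had to state the expected result (assertEquals(4, add(2, 2))) and would have caught the bug immediately; the coverage number alone tells you neither.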
As we attempt to spread agile, continual, iterative processes throughout the company, there is growing confusion as to when things are actually ready for release. The old “waterfall” notion of a complex and detailed advance specification, backed up by months of laborious manual testing, is no longer applicable.
In most ways this is a very good thing. A whole lot of unnecessary rigidity and busy-work has been removed. However, one aspect we miss is some sort of final “quality gate” to approve a release. We are in danger of ending up either with a queue of potential releases jostling for testing priority, with none of them ever actually being released before they are overtaken by a new candidate, or with untested and potentially broken builds being released to live deployments.
This is obviously on other people’s minds, too. InfoQ has recently published an article on “The Power of Done”, which summarises some attempts to chart an agile course through this dangerous situation.
As software testing spreads out in scope from the old notion of manual exercising of a system into areas such as developer unit tests and automated acceptance tests, the issue of ownership becomes more important. In this short and pithy post Kristan Vingrys states his opinion.
I’ll put my cards on the table: at heart I am a “classic TDD” guy. My first attempt at any new code is to test return values or side-effects rather than to jump immediately to setting up mock objects and testing interactions. Although I do use mock objects as a testing technique from time to time, I have never liked any of the popular mock frameworks. The nearest I have found to what I would like is probably Mockito, but even so I still use my own recording framework.
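To show what I mean by the distinction, here’s a hypothetical sketch contrasting the two styles on the same object. The Basket and PriceList names are invented for the example; the second test uses the real Mockito API (mock, when/thenReturn, verify).

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class BasketTest {

    interface PriceList {
        int priceOf(String item);
    }

    static class Basket {
        private final PriceList prices;
        private int total;

        Basket(PriceList prices) {
            this.prices = prices;
        }

        void add(String item) {
            total += prices.priceOf(item);
        }

        int total() {
            return total;
        }
    }

    // Classic style: exercise the object, then assert on the
    // resulting state (the return value of total()).
    @Test
    public void classicStateBasedTest() {
        Basket basket = new Basket(item -> 10); // fixed stub price

        basket.add("apple");
        basket.add("banana");

        assertEquals(20, basket.total());
    }

    // Mockist style: the collaborator is a Mockito mock, and the
    // test verifies the interaction with it as well as the state.
    @Test
    public void mockistInteractionBasedTest() {
        PriceList prices = mock(PriceList.class);
        when(prices.priceOf(anyString())).thenReturn(10);

        Basket basket = new Basket(prices);
        basket.add("apple");

        verify(prices).priceOf("apple");
        assertEquals(10, basket.total());
    }
}
```

My classic-TDD instinct is to reach for the first style, and only to bring in interaction verification when the side-effect on a collaborator is the behaviour that matters.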
This slightly provocative post examines the role of QA (a.k.a. testing) in various views of a project lifecycle and considers how we might react to a QA team which reported no bugs.
Update: here’s some more discussion on this topic, and how it is affected by the nature of user stories:
User Stories are Just Schedulable Change
An article with some good tips on avoiding a clash of cultures between developers and testers on an agile project. (Note: there may be a click-through ad on this link.)