Thursday, 11 December 2008

Testability - re-discovering what we learned and forgot about software development.

(or, why agile approaches require good old-fashioned O-O)

What are we all talking about? (the intro)

Testability comes out of an attempt to understand how agile processes and practices change how we write software. Misko Hevery has written some rather wonderful material on his blog, getting into issues around singletons, dependencies, and other software constructs that get in the way of testability, and treating testability as a measurable attribute (full plug at the end of this post). In particular, he looks at what design and process changes we can use to make code more testable. And while we're at it, what is the point? Is testability the point? It's important, especially as a way to remove barriers to working in an agile environment, if that's what we've chosen, and there are reasons related to quality as well. But I think there are some deeper implications, which Misko and others have implied and, on occasion, called out: we've forgotten the point of Object-Orientation and what it was trying to achieve in the 80's and 90's, and we are re-discovering it.

What have we forgotten? (the reminiscence)

But what is the essence of what Misko is saying? Martin Fowler (who coined the term "Dependency Injection") and others have written wonderful articles on the "Law of Demeter" and other principles. In general, they are all looking at how we grow software, and between all the thinkers and talkers and doers, it seems to me we're re-discovering key concepts that we all learned in college but forgot in the field. The key points are:

  1. Manage complexity by separating concerns and de-coupling code.
  2. Map your solution to the business problem.
  3. Write code that is not brittle with respect to change.
  4. Use tools that empower your goals, don't change your goals to fit the limits of the tools.

In other words, what we all learned when we were taught textbook Object-Oriented Analysis and Design and Programming. Now, O-O has had its various incarnations, and I would contend that all the architectural threads of AOP, Dependency Injection, as well as a more conservative take on O-O have all stemmed from these key principles which were at the core of the Smalltalk revolution and early attempts to get rapid development cycles, and what is often now called "agile" development.

How did we forget all this? (the rant)

So why did we forget all this? Five reasons, I suspect:

Selling Object-Orientation

We did a crappy job of selling O-O. You might not think so, since O-O is so prevalent (or at least O-O languages are). However, we didn't sell the four notions I mentioned above; we sold business benefits that were, in essence, lies. They didn't need to be, but usually were: claims like "O-O will make you go faster because of reuse," or "O-O will help you reduce costs because of reuse," and on and on. These can be true, but usually only as the result of a longer evolution of your software in an O-O context, and the cost of realizing the benefits we sold was often too high for businesses to stomach. In fact, O-O won, in my view, because managing complexity became fundamentally necessary once software reached huge scales.

Cheap, fast computers

Fast computers have allowed us to do so many bad things. Room to move and space to breathe unfortunately left us with less need for discipline, removing the impetus to be efficient and crisp and to think through the implications of our decisions in software. However, we're now catching up with the hardware in terms of real limits. Moore's law may still apply, but we are starting to hit limits in memory. A client of mine, who works for an embedded software firm, observed that Google is an interesting example: it probably has scaling issues similar to those of an embedded-device company (say, phones or similar devices), because sheer volume of traffic forces Google against real limits, much the way resource constraints on a telephone force embedded companies against theirs. Most of us, however, live in a client-server, mid-level-traffic dream of cheap hardware, in which we can always "throw more hardware at it".


Java

Java is an O-O language, and really was the spear-head that won the wars between O-O and structured programming in the '90s. However, bloated processes and territorialism have kept Java from fixing some of its early flaws, flaws that prevent it from solving problems like those I mention above in efficient ways. The simple example is reflection. If a language requires that I create tons of boilerplate code (try-catch, look-up this, materialize that) just to find out if an object implements a method, and then to invoke it, it needs to provide a way for me to eliminate that boilerplate. If it can't do so cleanly, it should at least provide convenience APIs for the most common operations. Sadly, the core libraries of Java were bloated even in the beginning, because of the tension between the Smalltalk and Objective-C people on one hand and the C++ people on the other, with Sun not really caring, because they were a hardware company. So because Java won the O-O war (don't argue, I'm generalizing), its flaws became endemic to our adoption of O-O practices. I'll mention that J2EE bears about half of Java's responsibility, but I'll leave that for another flame. Nevertheless, the design, coding, and idiomatic culture that spawned from these toolsets has informed our approach to O-O principles for over a decade.
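As a concrete illustration of that boilerplate (a minimal sketch with hypothetical names, not code from any particular library), here is roughly what it takes in plain Java to ask whether an object has a no-argument method and call it:

```java
import java.lang.reflect.Method;

public class ReflectionBoilerplate {

    // Look up a no-argument method by name and invoke it if present.
    static Object invokeIfPresent(Object target, String methodName) {
        try {
            Method m = target.getClass().getMethod(methodName);
            return m.invoke(target);
        } catch (NoSuchMethodException e) {
            return null; // the object doesn't implement the method
        } catch (Exception e) {
            // IllegalAccessException, InvocationTargetException, etc.
            throw new RuntimeException("reflective call failed", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(invokeIfPresent("hello", "toUpperCase")); // HELLO
        System.out.println(invokeIfPresent("hello", "noSuchMethod")); // null
    }
}
```

Even this "convenience" wrapper needs two catch clauses and a null convention just to express "call this method if you have it" - the kind of thing Smalltalk or Objective-C expressed in a line.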

The .com bubble

The dot-com bubble compounded our Java woes by introducing 6-month-diploma programmers into the wild - nay, into senior development positions - and elevated UI scripting a la JSP and ASP, which allowed for an enmeshment of concerns beyond anything we'd seen for a while in computing. All notions of Model-View-Controller separation (or Presentation-Abstraction-Control) were jettisoned while millions of lines of .jsp and .asp (and ultimately .php) script were foisted onto production servers, there to be maintained for decades (I weep for our children). While this was invisible in early, small internet sites, the bubble careened the internet into a primary vehicle for business, entertainment, culture, and these days even politics, and the resulting growth in the number, interaction, and complexity of these sites has caused unmitigated hell for those who found their "whipped-up" scripted sites turn into high-traffic internet hubs. Much of this code has been re-written out of necessity, and yet it produced the travesty that is Model-1 MVC and other attempts to back into good O-O practice from a messy start. These partial solutions were propagated as good practice (which, by comparison with the norm, they were), and a generation of students learned how to do O-O from Struts and other toolsets. Ignored in that process were wonderful tools like WebObjects and Tapestry, which actually did a fair job of doing O-O AND doing the web, but I'll leave that point here.

Design Patterns

A small corollary to the dot-com bubble is that, combining Java with the patterns concepts of the Gang of Four, these new developers created a code-by-numbers style of design, in which you don't describe architecture with patterns - you design with patterns up-front. This has led to some of the worst architecture I've ever seen. Paint-by-numbers has never produced a Picasso or a Monet, and rarely results in anything anyone would want to see except the artist's mother. Design patterns and pattern languages aren't bad - far from it. However, they are a language for discussing architecture, not an instruction manual. Patterns should be symptoms of a good software design, not an ingredient.

Really big, bloated projects

Lastly, really, really big projects have taken all of the above and raised the stakes. We are now finding that the limit on software isn't the hardware (thank you, Moore), but rather the people. A whole generation of us attempted to solve this by increasing the process weight around the development effort. This satisfied some contractual issues of scale, but in general failed to attend to the issues raised in The Mythical Man-Month, more than thirty years after its publication.

A side-effect of really big projects is that when you have that much money on the table, risk mitigation goes into high gear - and people are bad at risk analysis and planning. We tend to manage risk by telling ourselves stories, inventing narratives that soothe our fears without actually managing risk. So we make very large plans: idealistic (even if pessimistic) portrayals of how the project shall be. Then, because we want to "lock down" our risk, we solicit every possible feature that could potentially be in scope, including, but not limited to, the kitchen sink, to make sure we haven't forgotten anything. It all goes into the plan, but by this point we have twice the features any user will ever use, and 80% of the features provide, maybe, 20% of the value. So we actually increase the risk to the project's success while trying to minimize and control it. This kitchen-sinkism leads to bigger and bigger projects - and since large projects also bring prestige, there are several motivational vectors pushing toward them. Most of them aren't good.

Enter Agile (the path to the solution)

The agile software movement started to address the human problem of software, and I won't go into it much here, as it's well covered elsewhere. However, one can summarize most agile methods by saying that the basics are:

  1. Iterate in small cycles
  2. Get frequent feedback (ideally by having teams and customers co-located)
  3. Deliver after each iteration (where possible)
  4. Only work on the most important thing at a time.
  5. Build quality in.
  6. Don't work in "phases" (design, define, code, test)

This is a quick-n-dirty summary, so no arguments, please; it's just an overview. But these changes surface tons of obstacles, issues, and implications - too numerous to go into here. A light, non-exhaustive summary might include:

  • You can't go fast unless you build quality in
  • You can't build quality in unless you can test quickly
  • You can't test quickly if you can't build quickly
  • You can't test quickly if you aren't separating your types of testing
  • You can't test quickly if your tests are manual
  • You can't automate your tests if your code is hard to test (requires lots of setup-teardown for each test)
  • You can't make your code more amenable to testing if it's not modular
  • You can't ship frequently if you can't verify quickly before shipping
  • You can't build quality in if you ship crap
  • You can't get feedback if you can't ship to customers
  • etc.

Lots of "can't" phrases there, but note that they're conditionals. Agile methods don't actually fix these problems; they expose them and help you do root-cause analysis to solve them. For example, look at some of those chains.

If, for example, I take my "Big Ball of Mud" software system and re-tool it to de-couple its logical systems into discrete components in its native language (say, Java), then I can suddenly test it more usefully, because I can test one component without interference from another. Because of this, the burden of infrastructure needed to get the same test value goes down. Because of this, my speed of test execution improves. This allows me to test more frequently (possibly, eventually, after each check-in). This allows me to make quick changes with less fear, because I have a fast way of checking for regressions. This, in turn, makes me less fearful of making changes such as cleaning up my code-base... Oh wow - there's a circle.
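That de-coupling step can be sketched in code. This is a minimal, hypothetical example (RateSource and PriceCalculator are invented names): once a dependency is extracted behind an interface and handed in through the constructor, a test can substitute a trivial fake and run without any of the real infrastructure.

```java
// Hypothetical names throughout - a sketch of extracting a seam, not a real API.
interface RateSource {
    double rateFor(String currencyCode);
}

class PriceCalculator {
    private final RateSource rates;

    PriceCalculator(RateSource rates) {
        this.rates = rates; // dependency is handed in, not looked up internally
    }

    double inCurrency(double basePrice, String currencyCode) {
        return basePrice * rates.rateFor(currencyCode);
    }
}

public class DecouplingDemo {
    public static void main(String[] args) {
        // A test substitutes a trivial fake for the real (slow, networked) service:
        RateSource fake = new RateSource() {
            public double rateFor(String currencyCode) { return 1.5; }
        };
        System.out.println(new PriceCalculator(fake).inCurrency(10.0, "CAD")); // 15.0
    }
}
```

No database, no network, no container start-up - so the test runs in milliseconds, which is what makes testing after every check-in plausible.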

In fact, it is a positive feedback loop. Starting to make this change enables me to more easily make the change in the future. But once I'm moving along in this way, I start to be able to ship more frequently, because my fast verification reduces the cost of shipping. This means I could ship after three iterations, instead of twelve... or eventually every iteration. It means I can make smaller iterations, because the cost of my end-of-iteration process is going down... There are several feedback loops in process during any transition to a more agile way of doing things, as the agile approach finds more and more procedural obstacles in the organization.

But... and here's the big but... if you start an agile process implementation and don't start changing how you think about software, software delivery, how you write it, and how it's designed, you're going to run up against an internal, self-inflicted limitation. You can't move fast unless you're organized to accommodate moving fast, and your code base is part of your environment in this context. So it is critical to start helping developers move in this direction, and to increase the pace at which they transition the existing code-base into a shape suitable for working efficiently. This, as it turns out, involves our dear old O-O.

What are testability, O-O, and other best practices today? (the recipes)

There's a wealth of info out there, covering not just software approaches but also team practices. Martin Fowler, Misko Hevery, Kent Beck, (Uncle) Bob Martin, Arlo Belshee, and a host of others I couldn't name in this space provide lots of good text on these. The practices include dependency injection, continuous code review (pair programming), team co-location, and separation of concerns. On the latter point, Aspect-Oriented Programming is a nice approach, which I see conceptually as another flavour of O-O in that it attempts to get at some of the same key problems; it is often mixed either with O-O or with Inversion of Control containers. Then there are fearless refactoring, continuous integration, and build and test automation (I'm a big fan of Maven, by the way, since, for all its problems, it makes dependencies explicit), and Test-Driven Development (and its cousin, test-first development). The use of Domain-Specific Languages has also become quite helpful, both in mapping the business problem to technology and in eliminating quality problems by defining the language of the solution differently. And of course, all of this should be wrapped in a management method that feeds the process and consumes its results - such as Scrum, or the management elements of Extreme Programming.

These are a sampling of practices that affect how you organize, re-think, design, create, and evolve your software. They rely on the basic principles and premises of agile, but require, in implementation, the core elements that O-O was trying to provide: managing complexity, addressing the business problem, writing healthy code, and being served, not mastered, by our tools.

Epilogue (the plugs)

I'm a big Misko Hevery fan these days (I can feel him cringing at that appellation). There's a lot I've had to say to my clients on the subjects of testable code, designing for testability, and tools and technologies, but Misko has wonderfully summed up much of my discussion on the topic on his "Testability Explorer" blog. He explains issues like the problem with Gang-of-Four Singletons, and why Dependency Injection encourages not only testable code but also separation of concerns (wiring, initialization, and business logic), and all sorts of good stuff like that. It helps to read the earlier material first, because Misko builds on earlier postings, and later ones may assume you're following along. Notwithstanding, his posts are cogent, clear, and insightful, and they have helped crystallize understandings I've been forming over my years of software development into much more precise notions; he's also helped me learn how to articulate and explain these topics.
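A minimal sketch of the Singleton contrast he draws (the class names are hypothetical, and this is my paraphrase of the idea, not his code):

```java
// GoF-style Singleton: the dependency is hidden inside the method body.
class Database {
    private static final Database INSTANCE = new Database();
    static Database getInstance() { return INSTANCE; }
    String load(int id) { return "row-" + id; } // stands in for a slow, real lookup
}

class Report {
    // Nothing in this class's API reveals that it needs a Database,
    // and a test cannot substitute anything else: wiring, initialization,
    // and business logic are all tangled together.
    String titleFor(int id) {
        return Database.getInstance().load(id).toUpperCase();
    }
}

// The injected version separates wiring from business logic: whoever
// constructs the report decides which Database it talks to, so a test
// can pass a lightweight fake.
class InjectedReport {
    private final Database db;
    InjectedReport(Database db) { this.db = db; }
    String titleFor(int id) { return db.load(id).toUpperCase(); }
}
```

The business logic is identical in both versions; only the second makes its dependency visible and replaceable.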

Misko has also recently published a code reviewer's guide to testable code - sorely needed, in my view. I also want to give a quick shout-out to his Testability Explorer tool, which is fabulous; I'm working on a Maven 2 plugin to integrate it into site reports.

Also, I've built a Dependency Injection container (google-code project here and docs here) suitable for use on the Java 2 Micro Edition CLDC 1.1 platform, because I had to prove to a client that you could do dependency injection in a reflection-free embedded environment. It's BSD licensed, so feel free to use it if you want.

Lastly, Mishkin Berteig and co. have a decent blog called Agile Advice (on which I occasionally also blog) which nicely examines the various process-related, cultural, organizational, and relational issues that working in this way brings up. My posts tend towards the technical on that blog, but occasionally otherwise as well.

Monday, 8 December 2008

Agile Engineering Practices course in Ottawa, December 18-19

To my stalwart readers: I'm putting on my first public version of an Agile Engineering Practices training that I've presented with various clients over the last couple of years.

The training will cover a few areas, including:

  • Workflow of an Agile Software Development Cycle
  • Impact of Agile on Conception, Design, Construction, and Verification
  • Conceptual consequences of Agile methods
    • Software as an emergent property
    • Modularity, Dependency, and Component-Orientation
    • Incremental design
  • Infrastructure needs to support Agile methods
    • Build systems
    • Version Control and SCM
    • Continuous Integration
  • Development Practices and Approaches of Agile methods
    • Testing, Test-First, and Test Driven Development
    • Writing testable code
    • Collaborative software development
    • Pair Programming

There is a small overlap with the Scrum training, mostly for context, but a good chunk will focus on the notion of testable, flexible code, and how infrastructure and an agile approach change your design concepts and techniques - as well as what architectural features will help your code evolve in a healthy way, rather than spin out of control or paint you into a corner.

Anyway, if you're in Ottawa, or know anyone there who could benefit from such training, let them know. It'll be a small course (no more than 11 students). The cost is initially discounted from its ultimate price of $1,200.00 - a steal at $750.00 (CAD, of course). You can look at the course here or register here.

I'm planning to arrange more of these in other cities in the new year, including Toronto, somewhere in the 519 area code (probably Kitchener/Waterloo), and possibly out west in Saskatoon, Calgary, or Vancouver. If you are interested in these cities, let me know, and I'll keep you in the loop.

Monday, 17 November 2008

Saying you're agile doesn't make you agile.

A lot of posts have come up in response to James Shore's article on Agile's decline and fall, and the implications of using "Scrum alone, and misapplied." While I agree with the bulk of James's content, there's a much simpler issue at work, one that's been hanging around since the earliest days: branding.

Agile is a brand

I can call myself anything. I can call myself "Christian" (that's my name). I can call myself Canadian. I can call myself a consultant. These are descriptive, verifiable, helpful designations for understanding what I'm about. I can also call myself "agile." This, depending on how the label is understood, may or may not be helpful. Do I mean that I'm physically agile? That I'm flexible in my thinking? That I observe a particular method of software development? Does it mean I'm iterating? The term, at that level, is meant to be evocative, and is therefore a very easy-to-apply but slightly meaningless brand.

Scrum is a stronger brand, because it has some licensing and there's a "by the book" approach. Extreme Programming is also a stronger brand. You can measure an organization's application of the practices and determine whether they're really doing XP or Scrum (to a larger extent). Agile (as a brand) has always suffered from this weakness: it can be applied so generically as to be unhelpful as a description. Because of this, Agile has failed a lot in its history - anyone can do a little piece of agile, call themselves "Agile," and fail, and "Agile" then makes a nice scapegoat for the failure. Waterfall is in a similarly unenviable position, but it was received wisdom in the software industry for so long that no one really questioned whether it was one of the culprits until alternatives were seriously proposed.

So is "Agile" failing?

At one point I worked with a client that implemented Scrum. Only they didn't. They let teams determine their own processes in many ways, except that they all iterated; the infrastructure of Scrum™ wasn't there. They did not deliver working software every iteration. They did not have a clearly defined product owner. They had ambiguity about what "the team" was, in fact. There was no scrum-master-like role. Little acceptance criteria were provided to the teams. Few teams were allowed to prioritize defects above new functionality. Basically, I'm listing off a set of Scrum assumptions/directives that were not followed. Yet this organization called what it was doing "Agile." This company did not deliver software for several years, and is now not "doing agile" as an organization. (Some of the teams are actually moving forward with stronger agile processes, but the outer shell of the organization is moving towards "contract-oriented" delivery in large phases.)

So did Agile fail? Did they fail Agile? Most Scrum proponents would probably suggest (and we did) that they weren't even doing agile in the first place. In fact, some teams were starting to adopt some of the engineering practices. The teams were learning, but the organization wasn't. They held retrospectives but didn't apply the learning. They weren't fearless in their willingness to change and adapt to what they were discovering about their process. So in effect, they started with Scrum-but, ended up doing a quarter of XP with iteration, and management did not accept what was obvious to the rank and file. I do not consider this an "agile failure" or a "failure of agile" because, frankly, agile wasn't even given a fair shake.

I contend, based on my experience there, that had they implemented Scrum "by the book," it might well not have "saved" them. Their engineers, however, are extremely competent (when permitted), and some of their best immediately began to implement agile-supportive engineering practices. And as consultants we helped them with infrastructure issues, moving towards a more coherent setup around continuous integration, separation of types of testing, and so on. I think they could have succeeded, because the ingredients were there. Scrum was exposing problems, insofar as they were using its elements. But without applying the learning, Scrum is as useless as any other learning system. As Mishkin Berteig opines frequently in his courses, "Scrum is very hard."

Incidentally, in case you know my clients, don't be too hard on these guys. Management tried very hard and meant well, but was undermined and interfered with. For the most part, the agile implementations I've observed have been partial implementations, largely doomed to produce less value, less quickly, than was otherwise possible.

Can "Agile" fail if you are using something else?

What James describes is not "agile" or "scrum" failing, but a wolf in sheep's clothing. I'm not copping out. As I said, most of the implementations I've seen have been half-baked, partial agile implementations which, in retrospective analyses done for my clients, have failed at exactly the faults left where missing practices were ignored or seen as too risky or costly. These methods are systems whose parts work together for reasons. Lean analysis can help, but it is still a learning system closely aligned with Scrum in mind-set (if not in brand), and it will merely uncover things that you have to fix.

If I call myself a Buddhist, but become a materialist freak... did Buddhism fail, or was I never truly worthy of the label? If I label myself a Christian, but then fail to love my neighbour, is Christianity failing, or am I failing Christianity? If I call myself a Muslim, then violate the strictures of the Qur'an (by, say, killing innocents), is Islam violent, or am I doing an injustice to the name of the religion? I mean no disservice to any religion when I call them a "brand," but like any other label, you can think of them that way. If a brand is easy to apply, then it's easier to distort and pervert from its original intent.

If I take a bottle of Coca-cola, replace the contents with prune-juice and Perrier, then put it in a blind taste test with Pepsi, which subsequently wins... did Pepsi beat Coca-cola in a blind taste test? No. I subverted the brand, by applying it to something else entirely.

If I've made my point clear, the problem should be obvious: Agile is a weak brand, easily misapplied. Therefore, in a sense, given that Agile was only ever a loose community of like-minded methods and practitioners, maybe it's OK that Agile, as a brand, declines. The problem is that such a decline will discourage people from adopting the management processes and engineering practices that encourage organizational learning and product quality.

If that happens, then the sky has fallen, because we weren't making high-quality software before Agile, and I would despair if we just gave up.

Not a fad for me

I'll keep teaching and coaching "agile" teams, encouraging them to implement the full range of agile practices (above and below the engineering line). Not because I'm a believer, but because I find these useful practices which, applied in concert, powerfully enable teams to deliver high-quality software. And because I enjoy working this way. And because I deliver faster when I do. That's the bottom line. Agile might be a fad, but I don't really care; I wasn't using these techniques just because they had become popular. Many of them I had used before I ever heard the terms Scrum, XP, or Agile. It's a bit like Object-Orientation, which I've been using since the NeXTSTEP days in the early 90's. At that point, it wasn't clear that O-O had "won" over structured programming, and it wasn't really a "fad" until the advent of Java. That didn't stop me from using a helpful paradigm that improved my quality, integrity, and productivity. And it shouldn't stop anyone else. Someone will announce the new fad soon; use it if it's useful. Use agile methods if you find them useful. Was O-O hard when I first tried to do it? Sure! There are books on its pitfalls and how to screw it up. (And in fact I find little good O-O out there, despite the prevalence of O-O-supportive languages.) But I still advocate for its use, and use it myself.

The quality, integrity, and delivery of your software is in your hands. Do what you gotta do.

Tuesday, 4 November 2008

code coverage != test driven development

I was vaguely annoyed to see this blog article featured in JavaLobby's recent mailout - not because Kevin Pang doesn't make some good points about the limits of code coverage, but because his title is needlessly controversial, and because JavaLobby engaged in some agile-baiting by publishing it without editorial restraint.

In asking the question "Is code coverage all that useful?", he asserts at the beginning of his article that Test-Driven Development (TDD) proponents "often tend to push code coverage as a useful metric for gauging how well tested an application is." That statement is true, but the remainder of the post takes apart code coverage as a valid "one true metric" - a claim that TDD proponents don't make, except in Kevin's interpretation.

He further asserts that "100% code coverage has long been the ultimate goal of testing fanatics." This isn't true. High code coverage is a desired attribute of a well-tested system, but the goal is a fully and sufficiently tested system. Code coverage is indicative, but not proof, of a well-tested system. What do I mean by that? Any team that has taken the time to test a system sufficiently that it reaches > 95% coverage has likely (in my experience) thought through how to test its happy paths, edge cases, and so on. The coverage here is a symptom, not a cause, of a well-tested system. And the metric can be gamed - in fact, when imposed as a management quality criterion, it usually is. Good metrics should confirm a result obtained by other means, or provide leading indicators. Few numeric measurements are subtle enough to really drive system development.

Having said that, I have used code-coverage in this way, but in context, as I'll mention later in this post.

Kevin provides example code similar to the following:

String foo(boolean condition) {
    if (condition)
        return "true";
    return "false";
}

... and talks about how, if the unit tests only exercise the true path, this code is only at 50% coverage. Good so far. But then he goes on to say that "code coverage only tells us what was executed by our unit tests, not what executed correctly." He is telling us that a unit test executing a line doesn't guarantee that the line works as intended. Um... that's obvious. And if the tests didn't pass, then the line should not be considered covered. It seems there are some unclear assumptions about how testing needs to work, so let me get some assertions out of the way...

  1. Code coverage is only meaningful in the context of well-written tests. It doesn't save you from crappy tests.
  2. Code coverage should only be measured on a line/branch if the covering tests are passing.
  3. Code coverage suggests insufficiency, but doesn't guarantee sufficiency.
  4. Test-driven code will likely have the symptom of nearly perfect coverage.
  5. Test-driven code will be sufficiently tested, because the author wrote all the tests that form, in full, the requirements/spec of that code.
  6. Perfectly covered code will not necessarily be sufficiently tested.
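Point 6 deserves a tiny illustration - a deliberately bad test, with hypothetical names, showing perfect line coverage with essentially no testing:

```java
// Every line of divide() is executed, so a coverage tool reports 100% -
// but nothing about the result is ever verified.
class Calculator {
    int divide(int a, int b) { return a / b; }
}

class WeakCoverageTest {
    static void testDivide() {
        new Calculator().divide(10, 2); // executed, result never checked,
                                        // and divide(x, 0) never exercised at all
    }
}
```

The coverage report for this test is indistinguishable from that of a real test asserting `divide(10, 2) == 5`, which is exactly why coverage suggests insufficiency but can't guarantee sufficiency.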

What I'm driving at is that Kevin is arguing against something entirely different from what TDD proponents argue. He's arguing against a common misunderstanding of how TDD works. On point 1 he and I are in agreement. Many of his commenters mention #3 (and he states it in various ways himself). His description of what code coverage doesn't give you is absurd once you take #2 into account (a line of code is only covered if the covering test is passing). But most importantly, "TDD proponents" would, in my experience, find this whole line of explanation rather irrelevant. It argues against code coverage as a single metric for code quality, whereas they would achieve code quality through thoroughness of testing, by driving the development through tests. TDD is a design methodology, not a testing methodology; you get tests as side-effect artifacts of the approach. Useful in their own right? Sure, but that's only sort of the point. It isn't just writing the tests first.

In other words - TDD implies high or perfect coverage. But the inverse is not necessarily true.

How do you achieve thoroughness by driving your development with tests? You imagine the functionality you need next (your next increment of useful change), and you write or modify your tests to "require" the new piece of functionality. Then you write it, and you go green. Code coverage doesn't enter into it, because you should have near-perfect coverage at all times by implication: every new piece of functionality you develop is preceded by tests that exercise its main paths and error states, upper and lower bounds, etc. Code coverage in this model is a great way to notice that you screwed up and missed something, but nothing else.
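That rhythm can be sketched with a hypothetical Money class (plain assertions stand in for a test framework here; run with java -ea to enable them). The test is written first and "requires" the feature, then just enough code is written to go green:

```java
// Step 1: the test that "requires" the feature, written before the feature exists.
class MoneyTest {
    static void testAddKeepsCurrency() {
        Money sum = new Money(2, "CAD").add(new Money(3, "CAD"));
        assert sum.amount() == 5 : "amounts should add";
        assert sum.currency().equals("CAD") : "currency should be preserved";
    }
}

// Step 2: just enough code to make the test pass.
class Money {
    private final int amount;
    private final String currency;

    Money(int amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    Money add(Money other) {
        return new Money(this.amount + other.amount, this.currency);
    }

    int amount() { return amount; }
    String currency() { return currency; }
}
```

Notice that every line of Money exists only because the test demanded it, so near-perfect coverage falls out as a symptom rather than a goal.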

So, is code coverage useful? Heck yeah! I've used coverage to discover lots of waste in my systems. I've removed whole sets of "just in case I need them" APIs, because coverage showed they had become rote (lots of accessors/mutators never called in normal operations). Is code coverage the only way I would have found them? No. If I'm dealing with a system that wasn't driven with tests, or was poorly tested in general, I might use coverage as a quick health meter, but probably not. Going from zero to 90% coverage on legacy code is likely to be less valuable than re-writing whole subsystems using TDD... and often more expensive.

Regardless, while Kevin formally asks "is code coverage useful?", he's really asking (rhetorically) whether it's reasonable to worship code coverage as the primary metric. But if no one is asserting the positive, why question it? He may be dealing with people who misunderstand how TDD works. He could be dealing with metrics bigots, or with management-imposed-metrics initiatives, which often fail. It might be a pet peeve, or he's annoyed with TDD and this is a great way to do some agile-baiting of his own. I don't know him, so I can't say; his comments seem reasonable, so I assume no ill intent. But the answer to his rhetorical question is "yes, in context" - not surprising, since most rhetorical questions can be answered that way. Hopefully it's now a bit clearer where code coverage is useful, and where (and how) it's not.

Tuesday, 21 October 2008

Hi, my name is Christian, and I'm a geek in a suit.

I've been consulting in the high-tech field for over a decade, and programming for substantially longer than that. In fact, my earliest programs were BASIC programs on my C64, though my first true love was an Amiga 1000 owned by my ultra-cool uncle. But these days I wear a suit, because I consult with "geeks" and "suits" and all comers in between. Most of my clients have been in the financial sector, where dressing up provides credibility with the customer, and my thinking has moved drastically towards a customer focus over my decade of consulting.

I've started this blog, because I consider things from a few perspectives. As a technologist, I think about software feasibility, artistry, craft, and shippable result. As a geek I think about cool, new, innovative tech. As someone who has to act on behalf of the customer in many situations, I think about value and fitness-to-purpose and total cost of ownership. As a project manager I have to think about predictability, output, project throughput, cost, and so on. As a consultant, I think of cross-cultural conflict resolution, needs analysis, and communications (geeks and suits speak vastly different languages, leaving aside ethno-linguistic or gender communication issues). So my observations hit a lot of these areas. They often will stray into the Agile and Lean categories, though often I end up working in non-agile, non-lean environments, so they're not exclusive to these communities of interest.

I already blog on Agile Advice, but this blog is intended as a vehicle for personal thinking that will probably stray a bit from that blog's editorial perspective, though there will be some overlap, and I may cross-post. Regardless, I wanted a way to start talking about things that matter to me professionally, from a variety of perspectives, whether or not I have an audience. Hopefully, someone will find it useful.


Christian Gruber