The Agile Management Fad

Is Agile a management fad?  Is its blistering adoption throughout the world rooted in a proven, value-driven approach, or in the hysteria of the masses clamoring for a new trend to profit from and identify with?

Fad – defined

According to Wikipedia, a management fad has certain defining characteristics.  Let’s look through these elements and see whether Agile fits the definition.

1. New Jargon for Existing Business Processes?  There seem to be plenty of examples that fit, like:

Manifesto = Mission

User Story = Requirement

Planning Poker = Estimating

Daily Scrum = Daily 15 minute meeting

Scrum Master = Coordinator/Facilitator with no authority.

2. External Consultants Who Specialize In the Implementation of the Fad.  Would anyone deny that agile has these in abundance?  Agile consultants are everywhere, and firms specializing in agility are in no short supply.

3. A certification or appraisal process performed by an external agency for a fee.  Yep, many of these exist.  Although they don’t all agree with one another on the merits of earning certification, CSM, CSP, PMI-ACP, and the icAgile body of certifications represent a diversity of vehicles for achieving formal certification.

4. Amending the job titles of existing employees to include references to the fad.  There could be room for debate on this one, but a short search on monster.com or dice.com shows a plethora of ‘agile project manager’ versus ‘project manager’ positions.  Equally, we’ve seen software developers who specialize in agile practices now called “ninjas,” and the “agile scrum coach” now replaces the old title “application development manager.”

5. Claims of a measurable business improvement via measurement of a metric that is defined by the fad itself.  Velocity is the most prominent measure that comes to mind.  Further, velocity is measured in increments the fad itself defines: story points ( see the short sketch after this list ).

6. An internal sponsoring department or individual that gains influence due to the fad’s implementation.  Organizational Agile Coaches, or an Agile Center of Excellence are two examples that have become common.

7.  Big words and complex phrases.  This one is kind of subjective, but YAGNI might qualify.  SolutionsIQ even published an agile glossary so you can keep up with all the terms & definitions.
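Since velocity is the fad-defined metric cited in item 5, here is a minimal sketch of how it is typically computed.  The sprint numbers and the rolling three-sprint average are assumptions for illustration, not a prescription:

    # Velocity: story points completed per sprint, tracked as a rolling average
    # so the team can forecast how much work fits into the next sprint.
    completed_points_per_sprint = [21, 18, 25, 20]   # made-up sprint history

    velocity = sum(completed_points_per_sprint[-3:]) / 3   # rolling 3-sprint average
    print(velocity)   # 21.0 story points per sprint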

Fad? Yes.  Value?  Yes.

So by this definition agile is, well…a fad.  But does that mean agile practices have no value?  Here are some of the things we’ve learned ( or re-learned ) from agility:

1.  Daily communication among team members really matters when the work is complex.

2.  The people doing the work must be accountable for it.  Don’t let them hide behind a project manager.  Let them take pride in what they do.

3.  Requirements require constant communication, clarification and understanding.  It’s a continuous communication cycle, not a document.

4.  Regular, timely feedback on work improves quality and job satisfaction.

5.  Teams need help coordinating, facilitating and communicating between themselves and others.

Agile’s popularity is still growing.  Clearly some of us see benefit even as the marketing machine twists agility into something it never really was or will ever be.  Like management fads before it ( Six Sigma, TQM, and CMMI ) agile has made an impact on how we create value.

Is there a cult following?  Of course.  Everyone likes being popular and making money.  But the end state of the agile bubble will be a reconciliation back to reality.  There’s no silver bullet.  Problems still exist and we’re never fast or perfect enough for the shareholders or customers.  Room for improvement is omnipresent.

Summary

The fever pitch of the fad is a beacon.  Full value has been realized, copied, marketed and redistributed without concern for the result.  While time exists and there is still competitive advantage…the crowd still gathers.  But, the drivers of innovation have long moved off the curve of agility, hybridizing and envisioning new methods and tools for further improvement.  Another wave will again crest and break onto the shores of software management and leadership bringing the promise of ultimate productivity and quality, but delivering only incremental improvement.


A Caboodle of Pragilematic Posts

I’ve been hanging out and posting at the ASPE SDLC blog.  Yes…I have their permission to do that.  Geesh.  Check ’em out, Gilbert:

Six Things To Avoid When Reporting Project Status – Project status is about the facts and your strategy to address and manage those facts.

This Daily Standup Is a Joke – This article details some challenges associated with daily stand-ups and some potential strategies for dealing with these.

An Axiom of Project Success –  What’s the common thread to project success?  We’ve seen projects that should have died.  We’ve also seen projects fall apart that seemed like they were in the bag.  This post attempts to nail the overriding factor.

That’s Great…But How Does Agile Benefit Our Shareholders?  – Selling agile to key leaders in your organization takes more than just a thorough understanding of story points, and time-boxing.  This post brings it home for those wanting  a bigger bang for their agile swang.  Whatever that means.

The Root Cause of Water-Scrum-Fall

Introduction

Water-Scrum-Fall is the norm in many organizations today.  Despite the attempts of scrum coaches and consultants, the weeds of waterfall grow back into the agile garden.  What causes this?  This article looks at the root cause for the water-scrum-fall phenomenon and makes a suggestion about how to address it.

The Root Cause

Water-scrum-fall’s reality is not the result of people being unwilling to adopt scrum.  It’s not the result of a lack of passion for agile processes and practices.  Nor is it caused by a lack of executive support.  The cause?

Capital budgeting

Water-scrum-fall finds its greatest adoption in companies that produce software for internal use.  Internal-use software is strictly accounted for under the regulations in SOP 98-1, and this financial machinery is what drives the need to plan up front.

Capital projects are investments.  To determine an investment’s return you need to know the estimated initial cost and the estimated revenue ( or savings ) the project will create.  These things are estimated up front so that a decision can be made on whether or not to pursue the investment.  The up-front nature of capital budgeting complements waterfall and BDUF.  It is a core financial business process guided and regulated by FASB.  Think about that for a moment…and then read on.
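To make that arithmetic concrete, here is a minimal sketch in Python.  The project, the $500k build cost, the $150k per year in savings, and the 10% discount rate are all made-up numbers for illustration only:

    # Net present value: discounted future cash flows minus the up-front cost.
    def npv(rate, initial_cost, yearly_cash_flows):
        discounted = sum(cf / (1 + rate) ** t
                         for t, cf in enumerate(yearly_cash_flows, start=1))
        return discounted - initial_cost

    print(round(npv(0.10, 500_000, [150_000] * 5)))   # ~68,618 -> positive NPV, fund it

That up-front number is what the funding decision hangs on, which is exactly why the planning happens before the work starts.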

Scrum practitioners run up against a wall with capital budgeting.  It doesn’t fit their operational practice for developing software.  Under scrum….we shouldn’t be designing, estimating and crafting the project up front.  Instead we should approach it incrementally.  The problem with this? It ignores how enterprise software development projects ( capital investments  ) are funded.  The result is that everyone compromises and innovates.  Water-Scrum-Fall is the child of this compromise.

The Challenge

Scrum and other agile practices pose a challenge to enterprise software development efforts and the capital budgeting process.  Indirectly they say “Why are we funding this as a capital investment?  It’s not.  It’s an ongoing operational cost and should be accounted for that way.  If we don’t plan on funding this software development effort for the long haul…then why are we doing it?”  Funding a software development effort as an operational expense, as is done within software companies, does fit the scrum operational practice better.  But again…the difference between a software development effort being labeled CAPEX or OPEX is guided by FASB.  It’s not up to the company.
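A simplified illustration of why the CAPEX vs OPEX label matters to the business ( this is a sketch, not accounting guidance; the real treatment is governed by FASB rules such as SOP 98-1 ):

    # The same $600k build cost hits the income statement very differently
    # depending on whether it is capitalized and amortized over a useful life,
    # or expensed as an ongoing operational cost in the year it is incurred.
    build_cost, useful_life_years = 600_000, 3

    capex_expense_per_year = [build_cost / useful_life_years] * useful_life_years
    opex_expense_per_year = [build_cost] + [0] * (useful_life_years - 1)

    print("Capitalized:", capex_expense_per_year)   # [200000.0, 200000.0, 200000.0]
    print("Expensed:   ", opex_expense_per_year)    # [600000, 0, 0]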

How Do We Fix Water-Scrum-Fall?

I don’t think there’s a silver bullet here.  But in my last article, Is There a Better Way to Estimate Capital Projects?, I threw out a suggestion for how to estimate a capital investment using tolerances.  This bypasses the need for an up-front detailed analysis of what the LOE ( Level of Effort ) would be for the project, but still gets the business what it needs: an initial funding point and a resulting NPV that augments decision making.
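Here is one way that could look in practice.  The tolerance band and cash flows below are assumptions for illustration, not the method from that article verbatim:

    # Estimate a cost tolerance band instead of a detailed LOE, then compute the
    # NPV at both ends of the band to support the funding decision.
    def npv(rate, initial_cost, yearly_cash_flows):
        return sum(cf / (1 + rate) ** t
                   for t, cf in enumerate(yearly_cash_flows, start=1)) - initial_cost

    savings = [150_000] * 5           # estimated yearly savings
    low, high = 400_000, 700_000      # cost tolerance band

    print("NPV, best case :", round(npv(0.10, low, savings)))    # ~168,618
    print("NPV, worst case:", round(npv(0.10, high, savings)))   # ~-131,382

If even the worst case clears the hurdle, fund it; if only the best case does, the band tells the business how much estimation risk it is buying.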

Summary

So water-scrum-fall is a pragmatic reaction by agilists and IT professionals to work with the business and its financial processes.  Did the originators of scrum not understand the capital budgeting process?  Were they oblivious to the financial architecture of the businesses around them?  Maybe…but, to their credit, they weren’t trying to address this.  Their focus was on how to do software in an adaptive fashion so that it more organically addressed the operational realities of manifesting a complex vision.

What Techniques Do You Use Most To Deliver?

The poll is multiple choice, and you can add your own too.

Predictive *AND* Adaptive Planning

Introduction

If you’ve managed projects for any length of time, you understand the truth.  It’s almost never the case that a project is completely predictive ( waterfall ) or completely adaptive ( agile ).  It’s a mix.  There is a need for both in any project.  Those project managers, companies, and consultancies that integrate both approaches into their project management efforts are hybridizing the agile movement and shaping the reality of the future.  Let’s look at some examples that are well known, visible, historical, and outside the IT market.  Why?  I want to defuse the agile vs waterfall debate for a moment and abstract things away from technology to help us see the broader picture.

Example 1: Lewis & Clark Expedition

The expedition by Lewis and Clark to navigate the Missouri River and map the western territories was an immense, RISKY, long-term project undertaken by a focused team of professionals.  This is a startup.  Their goal was to map a waterborne ( river ) path to the Pacific Ocean.  To accomplish this goal some level of up-front planning was required.  You simply couldn’t plop two guys down in St. Louis and say: “For the next two weeks go up the Missouri River, then have a retrospective on what you learned, share that learning with your sponsor in Washington D.C. and plan for the next two weeks of river navigation.”

Lewis & Clark needed supplies, people who knew the land, logistical expertise, and some plan up front to start the mission.  But while an up-front plan was necessary, much was not known.  This is where adaptive planning comes in.  They would need to adapt as maps, events, and people turned out to be unreliable or poorly understood: winters were more severe, the Missouri didn’t go all the way to the Pacific, a great mountain chain blocked their path to the ocean, and accidental injuries and sickness delayed progress.

While not having an up front plan could have delayed and increased the cost of their project…not being able to adapt along the way would have killed it.  It was an experiment, a gamble, and to make it work required flexibility and persistence.

The Standish Group would have called Lewis & Clark a failure because they exceeded their baseline schedule.

Example 2:  Apollo missions to the moon.

The missions to land men on the moon and safely return them during the late 1960s and early 1970s in the United States were massive projects of unknown risk and horrible complexity.  Failure would bring down a nation and a philosophy ( democratic freedom).  They closely mirror big ERP implementations or suitably large custom software development efforts in Fortune 500 businesses today.

Could these have succeeded without predictive planning?  It’s hard to imagine they would have.  There were so many variables and unknowns that had to be nailed down before implementation to ensure success.  In fact, predictive planning to the Nth degree is what made this possible and successful.  Accounting for every risk, having a mitigation plan for every risk, and carefully coordinating all the sub-projects toward the common goal would have been hard to accomplish via strictly adaptive planning.

This isn’t to say that adaptive planning didn’t play a role in adjusting and dealing with risks/issues along the way ( think Apollo 13 ), but this project was almost completely predictive by necessity at the top-level.  There was too much at stake ( human life and a nation’s perceived status in the world ).

Even though the Apollo moon missions proved to the world that the U.S. was the preeminent technology leader vs the U.S.S.R  and would be revered for decades to come….this project, using Standish’s metrics, would have been deemed ——-> FAIL.

Example 3: Building a Table in Your Garage with Your Woodworking Equipment.

A more personal, human project…building a table is something that’s been done many times in the past.  This is like building a new e-commerce site for a company.  It’s been done before, and they could have just bought a vended product, but for some business reason they want to build their own.

Ok, since it’s been done before there’s a pattern, and we borrow from it.  We download the diagram, purchase our materials, and define our plan.  This is predictive.  As we begin to execute we discover flaws in the plan or the procurement process.  Wrong nails were purchased or the leg supports were cut too small.   These events require us to adapt.

After building the table we’d dust off our hands and say “Wow, that’s a little over budget and it took some extra time, but I got what I wanted.”   We’d call it a success.  Standish would call it a failure.

More?

I could go on.  What about Michelangelo’s works?  How about building the Dubai Tower?  Great human efforts ( projects ) are vision building.  Successful completion and delivery requires a range of approaches and a commitment to the end goal.

Summary

Hopefully you can visualize the adaptive *AND* predictive elements in these projects.  It wasn’t one or the other.  It was a gradient.  That’s where the present is and the future will continue to be.  Hybridization is good.  It isn’t about agile vs waterfall. It’s about achievement, belief, and success.

Development Shop Productivity – Should You Outsource?

Introduction

CIOs, executives and development managers will find the most interest in this article.  We’ll focus on a productivity measure for custom software development and how it may help you justify outsourcing your software development shop.

The Problem

How do you justify outsourcing your development shop when you know the immediate cost savings may not be there?  The trend toward mega-software development outsourcing shops isn’t slowing down.  But the gains to Fortune 500 enterprises are not always immediate.  Your gut says it to you every day:  “It’s better to let someone else do this.  We just aren’t good at developing our own systems.  The bugs, the time, the bad requirements…”  But how do you justify your gut?

A Formula to Measure Your Progress

I’ll skip past all the BS on the benefits and costs of outsourcing.  You can look these up with Gartner, About.com or some independent analyst’s site.  What I will present is a new formula for measuring your custom development shop’s value and tracking it relative to an outsourced effort.

Here’s the formula:

Custom Development Value Added = CDVA
Capital Dollars Invested = C
Operating Dollars Invested = O
Return on Investment from any completed projects  = NPV
Hours spent gathering & defining requirements = ReqH
Hours spent fixing bugs = BugH
Hours spent addressing help desk tickets = TickH
Hours invested in training = TrainH
Hours of Time Off = OffH

CDVA = ((C + O) - NPV ) / (( ReqH + BugH + TickH ) - ( TrainH + OffH ))

Let’s talk through it.  CDVA is your development shop’s productivity. The first thing you’ll want to do is baseline this for your current shop over a year and then determine how it compares to any outsourcer’s proposal.

The numerator in the equation represents your financial investment in custom development.  The denominator represents labor expended relative to this investment.

So over time you want to see your CDVA increase regardless of your outsourcing decision.  If the trend goes down or stays stagnant then it’s time to seek improvement.  There are some diminishing returns here, and you may need to make periodic investments to see greater value added later on.  But the point should be clear; we’re measuring the return on our development shop’s productivity overall….rather than on a project by project basis ( which can be very misleading ).

You determine the periodicity, but measuring this at every fiscal month makes sense to my inner accountant.

You might ask why I chose to measure hours around bugs, requirements, and trouble tickets rather than the overall development effort.  Well…experience tells me these are the areas with the greatest variability and also the areas where expertise and experience shine.  Those who are good at software and application development can do so with minimal bugs and less requirements analysis, and their end product typically needs very little support ( I can see some CIOs smiling right now ).
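Here is a worked example of the CDVA formula with made-up numbers for a single year; the figures are illustrative only:

    # One year of hypothetical inputs to the CDVA formula defined above.
    C, O = 2_000_000, 500_000                  # capital and operating dollars invested
    NPV = 800_000                              # return from completed projects
    ReqH, BugH, TickH = 4_000, 6_000, 2_000    # requirements, bug-fix, and ticket hours
    TrainH, OffH = 1_000, 3_000                # training and time-off hours

    cdva = ((C + O) - NPV) / ((ReqH + BugH + TickH) - (TrainH + OffH))
    print(cdva)   # 212.5 -- baseline this, then track the trend each fiscal month

Run the same calculation against an outsourcer’s proposal, and against your own shop period over period, and the comparison the article argues for falls out directly.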

Summary

Outsourcing is like any business venture and may require some up-front investment to see returns over time.  But while this is intuitive, it isn’t always measured in a consistent way that takes account of key development metrics like bug counts, support tickets, and requirements time.  Hopefully this article presents a way to do just that.  Enterprise IT is under increasing cost pressure, and given the historical waste and loss associated with custom application development it only makes sense to look at outsourcing vendors who have the focus, experience, expertise, and clout to deliver.  Now IT executives may have a measure to justify and track that, or at least show their shop is improving.  😉