
NOOP.NL | The Creative Networker

So, Now You’re an Agilist… What’s Next?

10/04/2009

This week I did a presentation at SPA 2009 and Skills Matter in London. (And I did the same one last week for Agile Holland.) Just like my previous presentation, The Zen of Scrum, this one has a lot of great visuals, and (almost) no boring bullet points.

If you didn't see me perform, you may have a little trouble understanding some of the slides. Though I'm sure you will understand the general idea.

Feel free to share it, download it, and use it for your own purposes.

That's it. I wish you all a happy Easter!

I'm off now. I have a date with Harrods.



This article is written by Jurgen Appelo in Life & Work. Jurgen Appelo is at Happy Melly.


  • http://www.agilemusings.com Brendan

    Great presentation – thanks for sharing, I am a HUGE fan of your writings.

  • Yann Picard de Muller

    I really enjoyed this session at XPDays Benelux 2008, and now it is a pleasure to discover the « other » rules.
    Thank you !

  • http://agileconsulting.blogspot.com Jeff Anderson

    excellent presentation, I’m going to pick up the book, my presentations have way too many words in them.

  • http://profile.typepad.com/jurgenappelo Jurgen Appelo

    Thank you all for the nice feedback!

  • http://profile.typepad.com/galleman Glen Alleman

    Nice presentation.
    However, Ken Schwaber may believe NYC is self-organizing, but he should speak to the planning department. The building codes in NYC are very strict, as is the Port Authority on where, what, when, how, and why you can add something to the city that involves changes to the existing baseline.
    This notion of the Complex Adaptive System being the model for agile is fundamentally flawed. The underlying theory of CAS is based on stochastic processes. These are random processes that evolve over time. The drivers of IT systems are not random processes; they are driven by external events. These events many times have disruptive jumps – these are called Markov processes. But the states into which the event can jump are bounded, finite, and themselves deterministic in a post hoc manner. They may not be “visible” or “known” prior to their “jump,” but they are not random in the CAS sense.
    Like many things these days in agile, terms are used where the speaker has co-opted a term from another domain and used it out of context for essentially “marketing purposes.” CAS is one of those.
    It’s too bad, because those of us who work in the domain of stochastic process modeling – radar and sonar signal processing for example – just smile at how naive the world has become.

  • http://profile.typepad.com/jurgenappelo Jurgen Appelo

    Glen,
    It is the complexity scientists themselves who firmly believe that organizations are complex adaptive systems. People like Ralph Stacey and Roger Lewin wrote several books about this. And what applies to an organization applies to a software project too, as a project is just a small organization. In fact, I even read that it is Ralph Stacey himself who talked about these concepts with Ken Schwaber.
    If you don’t agree with software projects being CAS, it seems you’re disagreeing with the complexity experts themselves. Therefore it is your view on CAS that might be naive in this case…

  • http://profile.typepad.com/galleman Glen Alleman

    Jurgen,
    There is no one single definition of Complex Adaptive Systems.
    I object to the generalization of the term and its application to unbounded systems – “software projects.” What kind of software projects, what kind of underlying stochastic processes, what is the characterization of those evolving processes in terms of bounded outcomes? All those kinds of questions are skipped.
    Ralph applied the CAS concepts to the management domain. From the physics world – where I have worked in this area on deep inelastic scattering as a grad student and researcher – CAS has a different meaning: the foundational meaning based in mathematical physics.
    If the business application of CAS works for you, enjoy.
    Remember though that the Stacey material is described as “organizational theorist” work (a social science domain) that has ADOPTED CAS into its model. This small but critical step of “adopting” is a one-way trip from the underlying physics of CAS in, say, compressible fluid flow, to a VERY soft science of organizational theory.
    It’s done all the time. The very basis of the social sciences is the adaptation of physical science models.
    The trouble comes when overgeneralizations enter the conversation – Schwaber for example. And let’s be perfectly clear: Stacey is a social scientist, NOT a complexity expert. Probably a very good social scientist. The use of non-linear dynamics in the business domain was in place before the introduction of CAS. The Club of Rome and Forrester’s work leading to The Limits to Growth were seminal. CAS “emerged” from compressible fluid flow in nuclear weapons – a topic I’m familiar with.
    For a science person like myself, I fall victim once or twice a year to feeble attempts to shed some light on the overloading of terms and the misapplication of concepts in domains that have a weak understanding of the source of the “notional” concept.
    That’s all…

  • http://profile.typepad.com/jurgenappelo Jurgen Appelo

    Glen,
    Thanks for your input. I really appreciate your thoughts on this.
    But I believe you’re wrong.
    - Yes, it is true there is no single definition of CAS. In fact, Wikipedia says “Complexity science is not a single theory— it encompasses more than one theoretical framework”. Therefore, you cannot say that software projects (or New York City) are *not* complex systems. It depends on the definition/theory you’re applying. Physicists do *not* have the final say on what defines a CAS. If you would claim supremacy of your view on what a CAS is, and what it isn’t, you would have a fight with many mathematicians, biologists, economists, computer scientists, and social scientists around the world. Physicists don’t rule the sciences.
    - I find your argument that Stacey is a social scientist, and not a complexity expert, a weak one. This would apply to all complexity experts in the world, including you and I. They all have degrees in other fields. I don’t know anyone with a degree in complexity science. That is the whole point of complexity science. It is an interdisciplinary field.
    - Your idea that CAS comes from physics and was “adopted” into other domains is flawed. Wikipedia says this about it: “CAS ideas and models are essentially evolutionary, grounded in modern biological views on adaptation and evolution.” If *anyone* can claim the origins of CAS, then biologists have just as much right to that claim as the physicists. For example: what physicists call a “phase transition”, the biologists might call a “punctuated equilibrium”. Those phenomena look suspiciously alike.
    So, CAS transcends all disciplines. It is for precisely that reason that the Santa Fe Institute was founded. To study complexity across disciplines, and to see what holds true for all systems, *including* social systems. (What you would call unbounded systems.)
    Your view that CAS just “emerged” from compressible fluid flow (physics) is simply false. You give physicists more credit than they deserve. CAS emerged from physics *and* biology (Darwinism) *and* mathematics (chaos theory and game theory) *and* systems thinking.

  • http://profile.typepad.com/6p01157018d7ac970b Michael Dubakov

    It seems Glen took a very narrow definition of CAS – one that can be expressed via *known* math formulas – and applied it to the article. That is not correct, in my opinion.
    Complexity science is young and there is a HUGE layer of phenomena to be discovered. You may consider this article as an application of CAS (current level of knowledge) to software development. Time will show whether we can get benefits from this application; what we can explain and improve. It looks very promising to me indeed.

  • http://profile.typepad.com/galleman Glen Alleman

    This very discussion is why CAS is a non-starter outside the agile and social sciences domain.
    And BTW Michael, if CAS can’t be expressed through known math, that simply reinforces why I need to stay out of the conversation and stay within my world of managing defense and aerospace projects.
    CAS in the social sciences – business and projects – is an early adopter domain – not my core interest or skills. In that early adopter paradigm, overgeneralizations, broad sweeping statements based on opinion, and the like move the conversation forward. But most are anecdotal and notional (notional in defense is a nice PowerPoint slide deck of how to fly to the moon and back; more details are needed before we fuel the launch vehicle ;>)!!
    And as Michael suggests – “time will tell.”

  • http://profile.typepad.com/galleman Glen Alleman

    Michael,
    Yes, it is narrow in the sense that the community that works on the topic at the math level has strict approaches to the discussion.
    In typical agile fashion, that approach is probably of not much value.
    The challenge of course is how to put these VERY soft processes and their beneficial outcomes to work outside of an anecdotal process with a small group of your friends.

  • http://profile.typepad.com/galleman Glen Alleman

    Jurgen,
    I speak from direct hands-on knowledge. The SFI is a derivative of Sandia Laboratories. The turbulence work came from the plasma and solids work there.
    The core algorithms came decades ago from the weapons work. The popularization of chaos in biology came from the Gleick book and a need to find homes for non-classified work.
    In the end it’s a moot point, since the topic works for you in your domain.

  • http://profile.typepad.com/6p01157018d7ac970b Michael Dubakov

    Well, if someday you are able to describe people via math formulas, you will be right. Maybe it will happen in the future, but does it mean we should stop trying to apply interesting theories to real-life problems NOW and wait for formulas? I don’t think so. Do you know the Game of Life? It is based on very simple rules (expressed in English) that lead to unpredictably complex behavior. http://www.bitstorm.org/gameoflife/
    Serious math research has been applied to cellular automata, but still you can explain how it works in 3 simple statements. It is beautiful. And I believe there are many things in the world that can be invented without formulas (at the beginning) and applied to solve real problems.
    I have a technical background (physics), so please don’t tell me that formulas are all you are interested in (and believe in).
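Michael’s point that a few plain-English rules produce complex behavior can be made concrete. Here is a minimal Python sketch of the Game of Life; the set-based representation and the glider coordinates are just one illustrative way to write it:

```python
# Conway's Game of Life on an unbounded grid, stored as a set of live cells.
# Two rules: a live cell survives with 2 or 3 live neighbors; a dead cell
# becomes live with exactly 3. Everything else the glider "does" is emergent.

def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """Advance the whole grid one generation."""
    candidates = live | {n for c in live for n in neighbors(c)}
    return {c for c in candidates
            if len(neighbors(c) & live) == 3
            or (c in live and len(neighbors(c) & live) == 2)}

# A "glider": five cells that reappear one step diagonally down-right
# every four generations, although no rule mentions motion at all.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True: a shifted copy
```

Nothing in the rules mentions motion, yet the glider travels; that gap between rule and behavior is what “emergence” refers to here.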

  • http://profile.typepad.com/galleman Glen Alleman

    Michael,
    Like the flocking-bird simulators, the Game of Life is post-hoc emergent. Great classroom examples of “possible” underlying processes.
    They are “models” of observable processes built after the fact.
    But can the actual sets of non-linear, coupled, stochastic, partial differential equations describe even the simplest life-form interaction well enough to make forecasts about future behaviour?

  • http://profile.typepad.com/galleman Glen Alleman

    Michael,
    I too was a physicist – Fred Reines’ neutrino signatures at SLAC – 1978, UC Irvine. As you know, the math is not the end, but it most certainly is the means to understanding. As an experimentalist, we NEVER proceeded very far without the theorists establishing the “reason” we should be seeing what we were seeing in the detector. Otherwise it was “noise” or experiment-induced data.
    A causal basis was needed to move from “anecdotal” observation – “I saw a collision in the film and want to call it a W-particle” – to “this is the reason I should be seeing W signatures at this location in the collision cloud.”

  • http://profile.typepad.com/6p01157018d7ac970b Michael Dubakov

    Glen, definitely I can’t disagree about experimental physics :) But still, do you have a math model of your current project? Do you know the real end date? Do you know all the problems you will have? I bet you don’t. But still, somehow projects are getting done (with mixed success though). Why the hell does that happen without math? Is it a miracle?
    >>But can the actual sets of non-linear, coupled, stochastic, partial differential equations describe even the simplest life-form interaction well enough to make forecasts about future behaviour?
    Maybe not, but maybe we can get a good probability of some event and at least set the right directions. It is impossible to forecast weather more than about 20 days ahead. And I think it is impossible to forecast an exact project release date. I think we can define some useful trends using CAS (for example, the obvious “improve communication” :)

  • http://profile.typepad.com/galleman Glen Alleman

    Michael,
    Yes, we do have a math model; it is mandated by DID-81650 and FARs/DFARs (Federal Acquisition Regulation / Defense Federal Acquisition Regulation) defining “deliverables based planning.”
    It’s a Monte Carlo simulation of the work to be performed in rolling waves, the cost of these deliverables, and the confidence that the deliverables meet the business and technical requirements. The programs we’re on include flight avionics for the shuttle replacement, an autonomous landing system for the Navy, and a large Air Force communications infrastructure system. We (the firm I’m with) provide Program Planning & Controls services to prime and 1st-tier subcontractors.
    In one case we are the Program Management Office for $4B or so worth of Task Orders. All three of these programs are software intensive, some with firmware, some with ground, launch, and flight systems. All are mission critical with evolving requirements, unstable funding, and highly political customers (NASA, NAVAIR, US Air Force Net-Centric Warfare Office). Not much different in principle from the agile ERP program another part of our firm manages. I stay on the defense and space side because the customers are better behaved.
    The questions you ask above are “right on” and have their responses reviewed on a weekly basis. This approach has developed over the past 8 years or so, with the advent of the Integrated Master Plan / Integrated Master Schedule (IMP/IMS) now mandated for any federal procurement. IMP/IMS is a natural fit for iterative and incremental development processes: Scrum or Scrum-like development on “less than” 45-day work packages inside of 6-to-9-month rolling waves, on 3-to-5-year programs.
    This approach starts with a historical database, best engineering judgment, or a variety of other methods to elicit the underlying probability distributions for the work package efforts. This is a very mature approach coupled with cost modeling. There are good source materials at the NASA PM Challenge site http://pmchallenge.gsfc.nasa.gov/ and http://www.daytonaero.com/IMP-IMS.php
    For small agile teams these approaches may seem too complex. But the focus is on defining what “done” looks like, how we are going to get to done, how we would recognize done when it arrives, and what impediments we’ll encounter along the way and how to mitigate or “buy” these down. The principles are independent of scale.
    Forecasting “exact” release dates is not needed. What are needed are the confidence intervals for each element and what “buffers” a la TOC are needed to protect those deliverables. Our manned space flight program has a fixed launch date, so in the end a Plan is needed to show up on time, on budget, with enough capabilities to fly the crew to the Space Station.
    In the end the “probabilistic” estimate of the cost, schedule, and technical performance compliance is the minimum buy-in cost for spending other people’s money. Knowing the exact cost and schedule is nonsense, but that does not remove the requirement to “manage” the process in some credible way.
    But forecasting the “exact” completion date requires the underlying statistics – mean, variance, and higher-order cumulants – to be available.
    Regarding the weather forecasting. I can see the NOAA / NCAR Boulder facility from my deck. You’d have a tough time convincing our atmospheric physics neighbors that the weather planning horizon is only 20 days.
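The Monte Carlo scheduling process Glen outlines can be sketched in a few lines of Python. The work packages and their triangular (best / most likely / worst) durations below are invented for illustration, not numbers from any real program:

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

# (best, most likely, worst) duration in working days per work package --
# illustrative engineering-judgment estimates, not real data.
work_packages = [(20, 30, 55), (15, 25, 60), (10, 15, 30), (25, 35, 70)]

def simulate_once():
    # One possible project outcome: draw a duration for each work package
    # from its triangular distribution and sum them.
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in work_packages)

trials = sorted(simulate_once() for _ in range(10_000))

def confidence(days):
    """Fraction of simulated outcomes finishing within `days`."""
    return sum(t <= days for t in trials) / len(trials)

p80 = trials[int(0.80 * len(trials))]  # 80th-percentile completion time
print(f"Sum of 'most likely' values: {sum(m for _, m, _ in work_packages)} days")
print(f"80% confidence of finishing within: {p80:.0f} days")
print(f"Confidence of finishing in 120 days: {confidence(120):.0%}")
```

The output is a distribution rather than a date, so the deliverable becomes a statement like “80% confidence of finishing within N days” instead of a single point estimate; note also that the right-skewed distributions push the 80th percentile well above the sum of “most likely” values.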

  • http://profile.typepad.com/6p01157018d7ac970b Michael Dubakov

    All that you said is right. Complex and mission-critical projects need more planning and more formalization than usual business apps.
    But still, space shuttles crash and nobody can predict that. Is it a stochastic process? Or maybe the system is too complex to make such a prediction? Is there any relation between the fixed launch date and bugs? I think you can’t predict a launch date, since I heard many times that a launch was postponed because of some defects found (I missed one launch while being in Florida; it was sad :).
    It is all about probability. We can’t predict with 100% probability a launch date, release date, etc. I know you need to sync the effort of many teams and many people, so you need some milestones. But still, I believe in complex, innovative projects we can’t predict results with good probability.

  • http://profile.typepad.com/galleman Glen Alleman

    Michael,
    Neither of your statements is factually correct.
    The STS mishaps were both predictable. See
    http://www.nasa.gov/pdf/2200main_COL_ciab_charter_030218.pdf
    and
    http://history.nasa.gov/sts51l.html
    for the causal sources, calculated probabilities of “loss of crew” and “loss of mission,” and the transcripts of the Challenger pre-launch conversations with engineers. Columbia had similar conversations about foam strikes for years prior to the mishap.
    The probability of failure for the current vehicle – Orion – is well understood and calculated in the
    http://www.nasa.gov/exploration/news/ESAS_report.html
    in §1.3.2.3.1 – Safety and Reliability
    The launch date has a window within which the vehicle can leave. This varies from a few hours to a few days. The typical project cycle is 12 to 36 months. For Orion it is 7 years. To be late 2 to 3 days in 36 months is “pretty close” to being on time compared to a typical commercial software project.
    I was on the last Titan flight
    http://spaceflightnow.com/titan/b26/
    where there was a hold for 20 minutes. The mission had been in progress for 19 months. A 20-minute delay over 19 months – that ain’t bad. The second stage was run at full throttle for longer to make up the time and be on station at the minute planned.
    To travel to Mars you can only leave earth once every 3 years in a 3 week window – “don’t be late.”
    My sense is you’re not familiar with the planning and controls processes used to build software-intensive systems in other domains. The flight software systems we work on use iterative and incremental development processes. Some are true agile (XP and Scrum), some are adapted from Scrum. All are mandated iterative.
    As mentioned before all use a mandated probabilistic cost and schedule process where no point estimate is valid without a confidence interval and some knowledge of the underlying statistics for cost, schedule, and technical performance.
    “Is there any relation between a fixed launch date and bugs?” No. Not only are bugs extremely rare, the software is fault tolerant. Having designed and developed several fault-tolerant realtime operating systems, I can say the approach is much different than simple commercial code. DO-178B is the starting point for flight software.

  • http://profile.typepad.com/galleman Glen Alleman

    Michael,
    Another thought.
    In no way am I suggesting that the way software and systems are developed in the world I work in is generally applicable to small co-located teams doing business software.
    But when overgeneralized statements are made about that domain – mission critical – there may be lessons learned that could be used in other domains. And more importantly, lessons learned about how things really work – like “nobody could have predicted the STS mishaps,” and “complex … projects can’t predict results with good probability.” These are simply not the case.
    Possibly looking at how this is done in aerospace and defense “could” result in some learnings in the small commercial world.

  • http://profile.typepad.com/6p01157018d7ac970b Michael Dubakov

    Indeed I did not work on mission-critical projects, and definitely you know the domain while I don’t. Still, it seems you are 100% sure that ALL your projects will be successful. C’mon, it is achievable only with a good portion of luck.
    http://www.cs.tau.ac.il/~nachumd/verify/horror.html

  • http://profile.typepad.com/galleman Glen B. Alleman

    Michael,
    No, there is no such thing as 100% sure. That would be illogical. But “nobody could predict …” or “it is impossible to forecast the exact release date…” is replaced with
    “we have an 87% confidence of completing on or before 12 Nov 2011 at this point in time.” That point was September 2003. As the project progresses, that confidence increases until of course we reach the month of August 2011. Then the forecast for launch becomes the task of executing discrete lists of tasks that will cause the launch to occur within the window of error.
    And your notion of “a good portion of luck” is of course silly. If our spacecraft or avionics programs depended on a “good portion of luck,” no person in their right mind would come near our vehicles.
    You’re either being cynical or possibly misinformed about how these projects are executed. Take a look here http://en.wikipedia.org/wiki/Systems_engineering and look a bit through the NASA Systems Engineering Handbook and see how the problem of developing and managing complex systems – complex software systems – is approached in the absence of a “good deal of luck.”
    Finally, my cynical question might be – if the projects you work on are not “critical” in some way to someone, then why are they being done in the first place? This is a common question any CIO should ask – why am I spending money on IT if it has no real measurable value to my org?

  • http://profile.typepad.com/galleman Glen B. Alleman

    Michael,
    Sorry for the name typo – fat fingers.
    Regarding the “software horror” stories:
    MCO – reading the actual report reveals “the root cause was not caught by the processes in place in the MCO project.” The conversion from English units to metric is done all the time. These missions are multi-national. Process error.
    Vincennes – read the details. An F-14 was squawking IFF indicators for “military” on the same azimuth as the Airbus. The Vincennes radar in the CIC does not connect altitude and azimuth – poor design, but not considered poor at the time. Not a software problem, a systems engineering problem.
    A320 crash – it was the pilot. He was showing off for the crowd and did not manage the aircraft properly. As a former military pilot: the pilot is always at fault when operating the aircraft outside the envelope of normal procedures.
    Ariane 5 – it was not a computer arithmetic error, it was a configuration error. The attitude-holding software was improperly reused from an Ariane 4, which had different launch characteristics. Loads of misinformation surround the Ariane 5 mishap.
    You’ve taken a common approach to “software failures,” starting with popular press stories – except the NASA MCO and MPO Mishap reports. In nearly every example the problem started in the initial design or system architecture, or was a configuration control issue.
    For all the Shuttle’s problems, it is NOT a Boeing 777; it’s a science project. It is labeled an experimental aircraft just below the left and right cockpit windows. See if this “informs” the conversation in any way:
    http://www.fastcompany.com/node/28121/print
    Not that there aren’t lots of “bone headed” reasons for problems. But a “good portion of luck” is certainly not the solution.

  • http://profile.typepad.com/galleman Glen B. Alleman

    Michael,
    Here’s my collection of Ariane mishap reports, collected when a similar discussion broke out many years ago around the causal factors being arithmetic errors:
    http://www.niwotridge.com/Resources/DomainLinks/Ariane5Failure.htm

  • http://profile.typepad.com/6p01157018d7ac970b Michael Dubakov

    It seems the discussion branches into several directions, let me try to merge them into trunk :)
    1. “we have an 87% confidence of completing on or before 12 Nov 2011 at this point in time.” It sounds good to me (if you have historical data, the project is not “on the edge”, etc.).
    2. What I am talking about is that uncertainty may be very large. “For all the Shuttle’s problems, it is NOT a Boeing 777; it’s a science project.” OK. There are many research projects in different areas like medicine, biology, chemistry, etc. Many of them will fail. I don’t think anyone can be sure that in the next 5 years we will find a 100%-success-rate cancer medicine. Luck is important in such projects. If you are talking about engineering projects, that is something different, with a much better chance of success.
    3. “if the projects you work on are not “critical” in some way to someone, then why are they being done in the first place?” By “mission critical” I mean projects that may affect people’s lives. Space, medicine, transport, etc. are mission-critical areas. A project management tool like TargetProcess isn’t. It does provide value to solve problems, but nobody dies because of a bug in a PM tool (I hope).
    4. Mission-critical projects definitely need more formalization, more error checks, a more complex process. I am not sure that agile is applicable to such projects (to the full degree at least).
    5. Back to our main topic (CAS and software development). Software is produced by people, with many technologies, based on other software, etc. I really don’t understand why you stand against CAS. I agree that so far there are few applications to real-life software development, but it is a very interesting field for future research and it looks promising to me.

  • http://profile.typepad.com/galleman Glen B. Alleman

    Michael,
    1. Yes: historical data, best engineering judgment, wideband Delphi, and any and all other estimating techniques.
    2. Managing R&D is a special approach. The defense community switched to a staged approach 10 or so years ago. I’ve managed bio-tech (mass spectrometers) where we staged the development from “science” to production as well. These are risk buy-down or risk-retirement processes. In defense the “trade study” approach is now standard. This provides a “risk taking” process early in the program where things blow up, crash (literally), or generally fail. Knowledge is gained from these failures. This approach is mandated by the US DoD. It is rare in IT.
    3. Is TargetProcess “critical” to you eating at night? Nobody has to die, unless it’s from starvation. The point is to ask “why” someone wants this software. If they can’t provide an answer, cancel the project and find one where they know the answer.
    4. SAA
    5. I stand against the overloading of a term that has very little evidence of actionable outcomes for those managing the project, other than to restate the obvious – that’s all. I know it’s an interesting conversation. But not a lot of predictive outcomes that can be put to use – again, beyond the obvious.

  • http://profile.typepad.com/6p01157018d7ac970b Michael Dubakov

    Glen, I think we are coming into agreement :)
    2. I agree; however in IT there is a “fail fast” rule that is becoming popular. I don’t think it is rare now. For example, we released the first version as soon as possible and evaluated feedback. It was good, so we moved forward.
    3. “Is TargetProcess “critical” to you eating at night?” – not at all :) Definitely we know the answer; the software is popular enough to treat it as a successful project. But I agree with the “no answer -> close project” rule.
    4. Sorry, I don’t know this acronym.
    5. I agree as well that at the current stage there are not many real applications of CAS to software development. BUT I think research in this direction will be very interesting, that is it. And I think the IT community should not blame such initial posts about CAS and software.
    P.S. It was a very interesting discussion for me; I learnt a lot of new things.

  • http://profile.typepad.com/galleman Glen B. Alleman

    Michael,
    Yes we are making progress…
    2. In my experience the problem with internal IT is there is no corporate memory. No one remembers what they did last time around. This, I think, is one of the drivers for COTS ERP systems. Hire someone who knows how to install the system and make it work.
    4. Same as Above.
    5. This is the challenge of “research” in a practical field. Being an experimentalist in the long past, “research for research’s sake was a wonderful life.” In the Project Management and development world the other end of the spectrum is in place: “How do we improve the probability of success for your project in the current development cycle?”
    Our thread is now below the line on the Blog so maybe people will stop reading…
    Drop me a note or join LinkedIn so we can keep in touch.