Notes for the Book? Certainly; Excerpt from the Memoirs: Hierarchy of Process Management: CMMI

I’m frequently fascinated by trying to recreate my train of thought — or perhaps I should say, I occasionally realize how my train of thought came about and am fascinated by the realization. For five or six weeks now I’ve been summarizing various themes that might inform my book about how science fiction relates to human understanding of the cosmos. Now I’ve realized there are perhaps three sources that have inspired my idea of building these hierarchies, which run from the simple and intuitive to the sophisticated and complex (and, generally, the most like reality).

The first one, on awareness (22 May), was triggered by a book I read back in February, Scienceblind by Andrew Shtulman. The book came out three years ago, and I finally got around to it earlier this year. Its premise interested me because I’ve long been fascinated by the idea of “intuitive physics,” e.g. why people assume heavy objects fall faster than light ones, or why seeing space fighters swoosh through space and bank like in-atmosphere jet fighters doesn’t bother so many of the people who watch Star Wars. (Both because they don’t realize ships in space don’t work like that, and because the story is so much more important than correct physical details, I think.) And why most people never get past these intuitive notions, or care.

I’ll post notes on Shtulman eventually, but he expands on my notion of intuitive physics through psychological experiments and interviews with infants, children, and adults, to see how naive assumptions about how the universe works take root (as base human nature) and are modified with age.

That became the first step of that hierarchy. Some people, fewer and fewer at each level of the hierarchy, learn better how the cosmos actually works. Very few reach the higher levels.

And then came several later hierarchies.

More recently, in the past week, I’ve recalled the hierarchy of morality from one Lawrence Kohlberg, as cited in prominent books by E.O. Wilson and Steven Pinker.

And finally, in writing pieces of my memoir — with one long section, not yet posted, about my 30 years of work at Rocketdyne, which I think might be of interest even to people who don’t especially like me as a person — I discussed a process management model that was the focus of most of the last 20 years of my career. And I have only just realized — I’m a bit dim at times — that its structure bears a remarkable similarity to the hierarchies I’ve been constructing. Shtulman, Kohlberg, CMMI: surely all of these, in the back of my mind, inspired my recent hierarchical themes.

So, after that lengthy prelude, I will excerpt the discussion of CMMI from that as-yet-unposted memoir page:

>>

In the early 1990s NASA and the DoD (Department of Defense) adopted a newly developed standard for assessing potential software contractors. This standard was called the Capability Maturity Model, CMM, and it was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in Pittsburgh. The CMM was an attempt to capture, in abstract terms, the best practices of successful software development organizations in the past.

The context is that software projects had a history of coming in late and over budget. (Perhaps more so than other kinds of engineering projects, like building bridges.) If there were root causes for that history, they may have lain in the tendency for the occasional software genius to do everything by himself, or at least take charge and tell everyone else what to do. The problem then would be what the team would do when this “hero” left, or retired. All that expertise existed only in his head, and went with him. Or there was a tendency to apply the methods of the previous project to a new project, no matter how different.

In any case, the CMM established a series of best practices for software development, arranged in five “maturity levels,” to be used both as a guide for companies to manage their projects, and also as a standard whereby external assessors would assess a company for consideration when applying for government contracts.

The five levels, I now realize, are analogous to the various hierarchies I’ve identified as themes concerning knowledge and awareness of the world, from the simplest and most intuitive to the most sophisticated and disciplined.

  1. Level 1, Initial, is the default, where projects are managed from experience and by intuition.
  2. Level 2, Managed, requires that each project’s processes be documented and followed.
  3. Level 3, Defined, requires that the organization have a single set of standard processes that are in turn adapted for each project’s use (rather than each project creating new processes from scratch).
  4. Level 4, Quantitatively Managed, requires that each project, and the organization collectively, collect data on process performance and use it to manage the projects. (Trivial example: keep track of how many widgets are finished each month and thereby estimate when they will all be done; see the sketch after this list.)
  5. Level 5, Optimizing, requires that the process performance data be analyzed and used to steadily implement process improvements.
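
As a toy illustration of Level 4 (my own hypothetical sketch, not anything taken from the CMMI), here is what “quantitatively managed” might look like in a few lines of Python: record how many widgets are finished each month, then project the remaining effort from the observed rate.

    # Hypothetical sketch of Level 4, "Quantitatively Managed": track monthly
    # completions and project the remaining effort from the observed rate.
    def months_to_finish(completed_per_month, total_widgets):
        """Estimate months remaining, given monthly completion counts so far."""
        done = sum(completed_per_month)
        remaining = total_widgets - done
        if remaining <= 0:
            return 0.0
        avg_rate = done / len(completed_per_month)  # average widgets finished per month
        return remaining / avg_rate

    # Example: 120 widgets planned; 18, 22, and 20 finished in the first three months.
    print(months_to_finish([18, 22, 20], 120))  # -> 3.0 more months at the current rate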

Boiled even further down: processes are documented and reliably followed; data is collected on how the processes are executed, and then used to improve them, steadily, forever.

Examples of “improvements” might be the addition of a checklist for peer reviews, to reduce the number of errors and defects, or the acquisition of a new software tool to automate what had been a manual procedure. They are almost always incremental, not revolutionary.
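
To make the notion of data-driven improvement concrete, here is another hypothetical sketch in the same spirit (again mine, not the CMMI’s): compare process-performance data, say defects found per peer review, before and after introducing a review checklist, to judge whether the change is worth keeping.

    # Hypothetical sketch of the Level 5 "Optimizing" loop: use process-performance
    # data to judge whether an incremental change (a peer-review checklist) helped.
    from statistics import mean

    def improvement_pct(before, after):
        """Percent reduction in average defects per review after a process change."""
        baseline, current = mean(before), mean(after)
        return 100.0 * (baseline - current) / baseline

    # Defects found per review, before and after adopting the checklist (made-up data).
    defects_before = [7, 9, 6, 8, 10]
    defects_after = [5, 6, 4, 5, 6]

    print(f"{improvement_pct(defects_before, defects_after):.0f}% fewer defects per review")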

The directions of those improvements can change, depending on changing business goals. For example, for products like the space shuttle, aerospace companies like Rocketdyne placed the highest premium on quality—there must be no defects that might cause a launch to fail, because astronauts’ lives are at stake. But software for an expendable booster might relax this priority in favor of, say, project completion time.

And software companies with different kinds of products, like Apple and Microsoft, place higher premiums on time-to-market and customer appeal, which is why initial releases of their products are often buggy, and don’t get fixed until a version or three later. But both domains could, in principle, use the same framework for process management and improvement.

Again, projects are run by processes, and in principle all the people executing those processes are interchangeable and replaceable. That’s not to say especially brilliant engineers won’t have a chance to perform, but it has to be done in a context in which their work can be taken over by others if necessary.

… [skipping some of the memoir]

The software CMM was successful from both the government’s and industry’s points of view, and its basic structure turned out to make sense in many other domains. And so CMMs were written for other contexts: systems engineering, acquisition (contractors and purchased tools), and others. After some years the wise folks at Carnegie Mellon abstracted even further and consolidated all these models into an integrated CMM, the CMMI (https://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration). And so my company’s goal became satisfying this model.

Time went on, and the SEI kept refining and improving the CMMI, both the model and the assessment criteria; Rocketdyne’s later CMMI assessments would not get by on the bare-bones examples for Level 5 that were used in 2004. I’ve been impressed by the revisions of the CMMI over the years: a version 1.1, then 1.2, then 1.3, each time refining terminology and examples and sometimes revising complete process areas, merging some and eliminating others. They did this, of course, by inviting feedback from the entire affected industry, and by holding colloquia to discuss potential changes. The resulting models were written in straightforward language, as precise as any legal document but without the obfuscation. This process of steadily refining and revising the model is analogous to science at its best: all conclusions are provisional and subject to refinement based on evidence. (A long-awaited version 2.0 of CMMI has apparently been released in the past year, though I haven’t seen it yet.)

<<

Here ends the excerpt about the CMMI itself from the not-yet-posted memoir of my software engineering career; I mean to emphasize the beauty and precision of the CMMI’s language, refined over decades. The memoir goes on with some reflections:

>>

Looking back at these engineering activities, it now occurs to me there’s a strong correlation between them and both science and critical thinking. When beginning a new engineering project, you use the best possible practices available, the result of years of refinement and experience. You don’t rely on the guy who led the last project because you trust him. The processes are independent of the individuals using them; there is no dependence on “heroes” or “authorities.” There is no deference to ancient wisdom, there is no avoiding conclusions because someone’s feelings might be hurt or their vanity offended. Things never go perfectly, but you evaluate your progress and adjust your methods and conclusions as you go. That’s engineering, and that’s also science.

Things never go perfectly… because you can’t predict the future, and because engineers are still human. Even with the best management estimates and tracking of progress, it’s rare for any large project to finish on time and on budget. But you do the best you can, and you try to do it better than your competitors. This is a core reason why virtually all conspiracy theories are bunk: for one to have been carried out, everything would have had to be planned and executed perfectly, without any of the many people involved leaking the scheme. Such perfection never happens in the real world.

<<

I think this last paragraph is very important.
