
Wednesday, August 10, 2011

The Cost of Rockets Built by NASA: Waterfall Process vs Short-cycle and Agile Processes

Shortcomings
This post is really not about the shortcomings of NASA; it's more about the inevitability of poor, high-cost deliverables when a) an organization loses its focus because of a constantly changing Vision and Mission; b) a development process for a complex system is stilted by formality, documentation, and transactional management; and c) the development process assumes that all of the requirements and risks (unknowns in the design) must be known before the next step in the effort starts.  Unfortunately, NASA is a good example.

NASA and the Waterfall Development Process
NASA does not have a clearly defined mission.  Instead, it has three poorly defined missions.  The first is to improve aircraft.  Currently, I and many others would expect that the Mission statement might be, "Research methods to improve aircraft fuel efficiency"; but apparently not.  Judging from the sales to American Airlines, Airbus aircraft have much better efficiency than Boeing aircraft. 

The second mission is to perform space science, in particular astronomy.  The consequence is that the astronomers have taken over NASA as their personal toy-maker.  While they are attempting to answer some big questions, it is not clear that these questions need to be answered by this generation of astronomers.  In actuality, they are racing each other to get into the "history of science" books.  The value of that work is not readily apparent except in the very long term.  Further, much of it could be better accomplished from space stations.  And while it is interesting to explore Mars by robot, it would be much more interesting to have humans explore and colonize the moon. 

The third mission of NASA is to make it possible to colonize space.  NASA has fallen down completely on this mission.  Over the past twenty years, NASA has been the poster child for the highly formal waterfall process for development of new systems.  Its process demonstrates what happens when transactional and bureaucratic management takes on development efforts; its programs fail to deliver.  It seems to me there are two interconnected reasons for this.  First, they've lost their Big Hairy Audacious Goal (their BHAG; see Jim Collins' book Built to Last for a discussion).  Second, they've tied up their innovative thinking, designing, and developing in a web of red-taped processes and procedures. 

BHAG and NASA
Elon Musk, the creator of PayPal and Tesla Motors, has also created SpaceX.  The reason that Mr. Musk invested in SpaceX is that he
"...believes the high prices of other space-launch services are driven in part by unnecessary bureaucracy. He has stated that one of his goals is to improve the cost and reliability of access to space, ultimately by a factor of ten." [see SpaceX]
First, this quote demonstrates that SpaceX has a BHAG, "...to improve the cost and reliability of access to space...by a factor of ten."  In fact this is a very ambitious mission for a small organization, yet it seems to be one at which they are succeeding.

NASA had such a Mission: "...to put a man on the moon before the end of the decade."  That Mission is short, clear, and measurable, the three criteria of a good mission statement.  NASA performed admirably, executing the Mission using a strategy of short cycles, each building on the previous.

However, once it had achieved its Mission, the NASA leadership assumed that colonizing the Moon was its next Mission, but LBJ had bigger plans, the Entitlement Programs, which have saddled the United States with unimagined debt--predictably; at least I could see it coming in 1965.  One of the first results of the increased debt was that the Moon landing program was shut off completely.  As a substitute, because NASA has a political constituency, the Mission became to "build" Skylab on the cheap.  They partially achieved this mission by using components from previous manned space programs.  Then NASA was charged by the US Government with creating a Space Transportation System (STS) that would significantly lower the cost of boosting a ton of materials or people into low earth orbit.  Initially, this was coupled with constructing a US space station in low earth orbit, similar to the one built by the USSR.  This was a fairly ambitious effort, but nowhere near as ambitious and audacious as exploring and colonizing the Moon.  The result was, in 1971, a Mission to construct the space shuttle and, in 1984, a Mission to construct the United States space station, Freedom.

As NASA moved from the Moon mission to subsequent missions, it was hobbled with "politically correct" requirements.  These included ensuring that contracts went to firms in the states and districts of politically powerful Senators and Representatives, and ensuring that minority-owned, female-owned, and veteran-owned firms got a certain percentage of the contracts--as mandated by Federal Law.  From personal experience on the Freedom effort, I can pretty much guarantee these mandates added a minimum of 10 percent to the cost of the shuttle and space station efforts--probably much more when added to the second reason for NASA's failure.  This was a big part of the reason that President Clinton cancelled the US space station program.  Since then, NASA has had several manned Missions, but the US Congress has not been on board, so these have not been adequately funded--using the NASA processes (which are very financially wasteful).

Short Cycle Versus Big Bang in Aerospace Development and Transformation
The second reason NASA is failing is its choice of processes.  Since the Apollo Program, NASA has become continuously more bureaucratic, with transactional management and processes rather than transformational management and processes.  Its formal processes are based more and more on the waterfall process, with all of its Program Management overhead in terms of intermediate artifacts--again, increasing intra- and inter-organizational friction, pushing up the cost, and stretching out the schedule of each program.

Part of the problem is that the waterfall process assumes that "all requirements are known at the start of the effort" and that "risks (unknowns) can be turned into knowns through schedule-based invention and innovation", that is, that invention and innovation can be performed on a timeline.  Both of these are heroic assumptions and were proven false time and again in the NASA Programs.  However, transactional management and finance engineering cling to these assumptions because they allow them to "control" the effort.  The consequence has been that, since the Shuttle Program, there has been no successful development of new systems to deliver materials to space; there have been no lessons learned about how to build better shuttles, or even how to create a better heat shield for the shuttle.  Instead, to "protect" the personnel, these transactional managers have added more audits, checklists, and redundant checklists.  And one administration after another cut off one of the key technical centers of real innovation, whose work in the 1960s led, in the 1990s, to advances in computing, the Internet, medicine, and so on; choosing instead to expend the funds on entitlement handouts. ["Feed a man a fish and he is satisfied for a day; teach a man to fish..."; do research on fish to ensure their survival and you feed generations.]

The results have been similar in character to what Boeing found.  In selling the 777 to Japan, Boeing guaranteed that the Japanese aerospace industry would have a work share.  However, before they let the contract for a tail surface, Boeing asked the Japanese firm for a "test article".  When they received the tail surface, it met all of the specifications, and met them more closely than the Boeing-manufactured tail surfaces.  However, there was a mystery: the Japanese tail surface weighed ~150 lbs less than the one Boeing built internally.  On taking the Japanese test article and an "identical" Boeing test article apart, the Boeing engineers solved the mystery: the Boeing test article had ~150 lbs of shims to bring it into conformance, while the Japanese firm built theirs to spec rather than shimming it to spec.  Since reducing aircraft weight is one key to reducing fuel costs per passenger mile, the shims all over the aircraft were a big issue.

I've found that many program managers of governmental contracts require program management shims.  When a program gets into trouble, the first thing they require is more status, more PMRs, and more detailed schedules.  All of this requires formal replan documents, which take a significant chunk of the program's budget.  This is what has happened to the NASA man-in-space programs and is happening in most other federal, state, and local programs.  And again, it is exacerbated by the great additional friction of federal "fairness" policies, which, as noted earlier, direct funding to organizations with certain types of ownership, regardless of competence.  These are, in fact, blatant attempts at quick cultural change.  They may be somewhat successful in meeting their mission, but they have wrecked programs like those of NASA.  This is noted in the quote from Mr. Musk: "the high prices of other space-launch services are driven in part by unnecessary bureaucracy." 

The aerospace industry started just over 100 years ago with a flight of 120 feet; in slightly over sixty years it was flying at supersonic speed (in fact, Clarence "Kelly" Johnson was one designer who, in 1933, helped develop the Lockheed Model 10 Electra and finished his career by designing and developing the Mach 3+ SR-71 Blackbird) and had reached the Moon.  How did these inventors, innovators, and designers do it?

The Wright Brothers versus Langley
Invention and innovation in the aerospace industry is rife with examples of both short-cycle and Big Bang development and transformation, but especially development, starting with the Wright Brothers.  In 1899, the Wright brothers started active work on developing a "controllable heavier-than-air craft" (their BHAG).  Before that, they had identified that building a "stable" aircraft, an aircraft that would fly straight in still air, as all previous aircraft pioneers were doing, and then adding control functions, would not achieve the goal of heavier-than-air flight.  Instead, they decided that they needed an unstable aircraft, one that the pilot would actually have to fly.

[Sidebar: This concept is found in sailboats as well.  Cruising sailboats have very long keels and small rudders.  The long keel provides stability for going in a straight line.  In fact, frequently the crew can leave the helm unattended for 5 to 10 minutes without the boat changing direction by 5 degrees.  On the other hand, racing sailboats have to be maneuvered before and during the race.  Therefore, they have short, deep keels and large rudders.  This makes them inherently unstable for holding their course.  In fact, my boat, which is a combination racer/cruiser, will wander 30 degrees or more off course in a matter of seconds; but it will turn around in practically its own length.  This shows the difference between stability and instability.]

Consequently, in 1899 the Wright Brothers started to build a series of controllable and maneuverable kites.  By 1901, they felt they had a flyable design, but it behaved poorly when compared with the predictions of the research current at the time.  They felt that there was a risk that the research was wrong.  Since there was no way to avoid, transfer, or accept the risk and be successful, they instituted a mitigation plan.  Initially this consisted of attaching small model airfoils to a bicycle and pedaling as fast as possible.  While the data from this "exercise" showed that the data current at the time were not reliable, they had to invent the wind tunnel to get accurate data.  Again, they went through short-cycle experiments on very small wing shapes and developed the first set of highly accurate data on airfoils. 

With this data, in 1902, they were able to build a glider that was fully controllable.  After a series of test flights, the 1901 manned kite had also shown the need for a vertical tail, and again, after a series of short cycles, they implemented the vertical tail surface and allowed it to move.  This too was incorporated into the 1902 glider.  The 1902 glider proved that they were ready to add an engine to create the first fully controllable aircraft.

In 1903, the Wright Brothers found two more unknowns.  First, they needed a light-weight engine, and second, they needed to determine what shape of propeller would produce the most thrust.  With respect to the light-weight engine, since they couldn't buy one, they built one with the aid of an employee/team member.  With respect to the prop, the conventional wisdom of the time said that the shape should be much like a ship's propeller blade.  However, the Wright Brothers found that thinking of a propeller as an airfoil that spins produced much more thrust.  Thus, by the end of 1903 they were ready and flew the Flyer 120 feet on their second attempt (their first, the previous day, had just gotten off the ground when they crashed it).  As they gained experience and confidence, they flew farther, to 852 feet.

In 1904, they built a new Flyer based on their experience.  They flew in Ohio, close to home.  There they gained more piloting experience and refined the design, again in short cycles, mindful of new requirements and risks as they came along.  They considered the 1904 craft merely a design step and, at the end of the 1904 flying season, salvaged it and burned the leftovers.  By 1905, they finally designed a Flyer that was truly usable, at least for the time, and it is from this aircraft's design that all useful, controllable, heavier-than-air craft can be traced.

I left out two parts of the story.  First, they spent ~$1000 to create their aircraft (excluding their own time, as an investment), and second, they were in a competition with Samuel Langley.  Samuel Langley was a well-connected researcher who was funded by the US Government, through the Smithsonian Institution (Museum), to the tune of over $50,000.  The reason that Langley was funded was that he had created an unmanned model aircraft that flew 3/4 of a mile under ideal conditions; it was stable in its design. 

[Sidebar: Again, as noted earlier, stability allows an aircraft to continue in a straight line.  It does not enable the aircraft (or boat) to turn easily.  This is a good analog to lean versus agile processes and waterfall versus short-cycle processes.  To create a lean process, the process engineer looks for waste.  While there are many forms of waste, one is "unnecessary" activities and procedures; and there are many of those, as well.  Reducing these enables the process to flow more quickly to "the solution" or "the deliverable".  Therefore, this is a very stable process.

However, if external conditions affecting the lean process change, the process has no ability to respond effectively or successfully.  The same thing happens with the waterfall process.  The process is based on the assumption that "all of the requirements are known up front".  Then the Program Manager can plan out the effort (sometimes down to the bathroom breaks) and manage to the schedule.  Unfortunately, this assumption is entirely false.  Consequently, projects using the waterfall process have to undergo much replanning, Engineering Change Orders (ECOs), and so on (which keep Program Managers employed).  This too means that waterfall process-based programs are stable, but brittle, in that changing direction, however slight, requires major effort.]

Langley scaled up this design, changed the powerplant from steam to gasoline, and built a bigger barge; but apparently not big enough, since his craft hit something on the barge and immediately crashed.  As several writers have pointed out, even if the aircraft had flown, Langley did not have a good plan for his pilot to land the craft.  Since the pilot wasn't hurt, this attempt ended much better than it might have had it been "successful", that is, flown successfully only to have the pilot killed at the end of the flight.  So Langley rebuilt his craft, and again it crashed.  While Langley was trying to assess the results and the damage, the Wright Brothers flew their aircraft.

However, because of the secrecy in which they flew, it wasn't until 1908 that they demonstrated their accomplishments to the world.  They flew both in the US and in Europe, literally and figuratively flying circles around their competitors; they could control their agile aircraft while their competitors could only fly their stable aircraft in straight lines.  The Wright Brothers succeeded because they understood their Vision as flying (controlling) a heavier-than-air craft where they wanted.  They spent from 1899 to 1903 getting the basics right in a series of short cycles, then refined the design in a further series of short cycles.  On the other hand, Langley and a fair number of European competitors had a Vision of a stable heavier-than-air craft, which they then hoped to find a means to control.  Consequently, they built small stable models, then scaled up the results in a waterfall-like process--and failed.

Robert Goddard and Wernher Von Braun and Short Cycle Development
Rocket science, too, used short-cycle development, with little formal program management.  The father of modern rocket science, Dr. Robert H. Goddard, worked toward a single Vision (his BHAG), manned space travel, for virtually his entire life.  To realize this Vision, Dr. Goddard started by creating strategies for getting into space.  This led, in 1914, to two landmark patents of the 214 that he was eventually granted.

By 1915, Dr. Goddard was working on meeting one of the requirements derived from his strategies: creating an engine with sufficient thrust to get into space.  He found that the rocket engines of the time converted only 2 percent of the energy they produced into thrust.  When he applied steam-turbine nozzle technology to the nozzle of a powder rocket, he raised the conversion rate to above 40 percent.  However, the total thrust produced by powder was not great enough to achieve his Vision.

Therefore, he started to investigate other fuels.  At the time, liquid fuels had the greatest chance of producing the needed thrust-to-weight ratio (and had the added advantage of being controllable; that is, reducing the flow of fuel to the engine reduced the thrust, while increasing it increased the thrust).  In 1926, after a number of years of work and many tests on the engine, Dr. Goddard flew the first liquid-fuel rocket.  Like the Wright Brothers' first flight, the first flight of Dr. Goddard's machine was very short in both time and distance, but it proved that liquid fuel could be used.  And like the Wright Brothers, he then both continued to refine the engine design and went on to a new challenge.  The Wright Brothers learned to control their aircraft first, then to power it, while Dr. Goddard first learned to power his craft and then spent a significant amount of time learning to control it, even while refining the propulsion system.  Additionally, Dr. Goddard spent a good deal of time raising funds for his research and development efforts.

By 1937, Dr. Goddard had flown a number of his rockets.  While none of them were particularly successful, at least in the eyes of the general public, they did draw the attention of the rocket science community around the world, and while Dr. Goddard, like the Wright Brothers, tended to be secretive, he did share technical information with others in the community.

In Germany, one who paid attention was Dr. Wernher Von Braun.  Like Dr. Goddard, Dr. Von Braun had had a Vision (again, his BHAG) of human space travel since he was a child.  By 1930, Dr. Von Braun had joined the "Space Flight Society" in Germany and started working on liquid-fuel rockets; this is what brought Dr. Goddard's work to his attention.  By 1934, Dr. Von Braun's work had come to the attention of the Nazis.  When he was ready to publish his doctoral thesis that year, the Nazis classified it.  They then offered to support his work.  Obviously, the Nazi Vision and Mission for rocket technology differed wildly from Dr. Von Braun's.  Still, both Dr. Von Braun and the Nazis wanted to develop rocket technology (and there was an implied, but very real, threat that non-cooperation would have dire consequences).

The net result was that Dr. Von Braun built on Dr. Goddard's work during the 1930s and, until 1939, asked technical questions of Dr. Goddard from time to time.  During this time he created first the A-1, then the A-2 and A-3 series of rockets in a series of short cycles.  Building on these prototypes, the A-4 series first flew in 1941, but due to reliability problems and interference from the Allies, it was not put into production until 1943, when the A-4 was redesignated the V-2.  This was the first rocket to demonstrate the potential of space travel, as well as the first intermediate-range rocket.  Both Dr. Goddard and Dr. Von Braun confirmed that the V-2 design was actually a refinement of Dr. Goddard's original work.

While it did not affect the course of WWII as Hitler hoped, it did get the Allies' attention. At the end of the war, both the USSR and the US captured some V-2 missiles, and while the USSR captured some of the German rocket scientists, the majority followed Dr. Von Braun in surrendering to the US.  In the period from the late 1940s to 1957, the USSR worked in secret on derivatives and upgrades of the V-2, while the US made minimal use of the technology and expertise it had available.  Only when the USSR launched Sputnik in 1957 and several Vanguard rockets failed, spectacularly, was the Von Braun team asked to put a satellite into space.  Basically, this team rolled out the Redstone rockets they had prepared for launching a satellite three years before (they had not been allowed to launch because it would have been politically incorrect to do so).  This was Explorer 1.

Dr. Von Braun went to NASA and continued his work.  NASA itself, in the early days, created a short-cycle plan to get a manned landing on the Moon by 1969.  It started with the Mercury Program, which "simply" got men into space, then graduated to Gemini, which verified that vehicles could meet and mate in space (necessary for the method the US chose to go to the Moon).  And finally, using Dr. Von Braun's Saturn I and Saturn V rockets, NASA moved in relatively short cycles to the Moon landing.

Notice that from 1915 to 1969, and actually to 1972, the space/Moon landing program was not a single Big Bang process, but a series of small steps, each building on the previous.  This is the only type of process that would work for a Moon landing.  However, since then, NASA has abandoned this process in favor of working the way other government departments work, with much red tape, little flexibility, and few successes or deliverables...as described so succinctly by Musk.

Final Thoughts
Short cycles allow for experimentation; the waterfall process doesn't.  As noted by Dr. Goddard,
"It is not a simple matter to differentiate unsuccessful from successful experiments. . . .(Most) work that is finally successful is the result of a series  of unsuccessful [short-cycle] tests in which difficulties [risks and issues] are gradually eliminated." (Written to a correspondent, early 1940s, See Lehman, Milton, This High Man: The Life of Robert H. Goddard [N.Y., N.Y.: Farrar, Strauss, and Co., 1963], p. 274.)
 Experimentation is not economic in the calculations of finance engineering, since most experiments are failures, and cannot be preplanned.  The first generation of NASA personnel and management understood this, but later generations of NASA management have not.  Since the Moon Mission, NASA has lost its way.  First, NASA's Mission keeps changing with the political winds, both within government and within the "scientific" community.  There has been no clarion clear Mission since the Moon landing Mission.  In the last 10 years, NASA was given a Mission to set up a Moon colony, a Mars colony, then a Moon colony (again), then a manned Mars mission.  Each of these had different strategies for achieving their mission.  From what I've read, none of these include short-cycle processes...that's too expensive.  Instead, they have been Program Management and Finance Engineering controlled waterfall-like programs.
[Sidebar: If it were me, I would choose a Mission to Mars by way of the Moon, the reason being to learn more about space travel and the colonization of a planet in an environment where emergency and other short-cycle risk-reduction flights would have a chance, rather than one where the "short duration" trip is seven months.  But, that's only my opinion.]
Unfortunately, as events over the past 10 to 15 years have shown, it's not only NASA, but the entire Federal Government that needs to overhaul its Missions, Strategies, laws, regulations, policies, and standards.  Given the two current diametrically opposed views of the role of government (see my post on "The Purpose of Government" and linking posts for my thoughts on the role of government), the US Federal Government remains uncontrolled for all practical purposes.

Thursday, June 30, 2011

Systems Engineering, Product/System/Service Implementing, and Program Management

A Pattern for Development and Transformation Efforts
Recently a discussion started in a LinkedIn Group about recruiters, HR, and Management using the term "Systems Engineer" indiscriminately.   The conclusion was that the discipline of Systems Engineering and the role of the Systems Engineer in Development and Transformation Efforts are poorly understood by most people, and perhaps by many claiming to be Systems Engineers.  From my experience of building a Systems Engineering group from 5 to 55, I can attest to this conclusion.

Currently, I am working on a second book, with the working title of "Systems Engineering, System Architecture, and Enterprise Architecture".  In the book, I'm attempting to distill 45+ years of experience and observation of many efforts, from minor report revisions to the Lunar Module, F-14, B-2, and X-29 aircraft creation efforts, to statewide IT outsourcing efforts.  This post contains excerpts of several concepts from this manuscript.

The Archetypal Pattern for Product/System/Service Development and Transformation
At a high level there is an architectural process pattern for Product/System/Service development and transformation.  I discuss this pattern in my current book, Organizational Economics: The Formation of Wealth, and it is one key pattern for my next book.  This pattern is shown in Figure 1.
Figure 1--The Three Legged Stool Pattern

As shown in Figure 1, the architectural process model posits that all development and transformation efforts are based on the interactions of three functions (or sub-processes), Systems Engineering, Design and Implementation, and Program Management.  This is true whether a homeowner is replacing a kitchen faucet or NASA is building a new spacecraft.  Each of these sub-processes is a role with a given set of skills.

Consequently, as shown in Figure 1, I call this process pattern "The Three-legged Stool" pattern for development and transformation.  I will discuss each sub-process as a role with requirements.  Therefore, this is what I see as the needs or requirements for the process and the skills for the role.  In my next book, I will discuss more about how these can be done.

As shown in Figure 1, the program management role is to enable and support the other two roles with financial resources and to expect results, in the form of a product/system/service meeting the customer's requirements.

Systems Engineering (and System Architecture) Role
The first role is the Systems Engineer/System Architect.  This role works with the customer to determine the requirements--"what is needed."  I've discussed this role in several posts including Enterprise Architecture and System Architecture and The Definition of the Disciplines of Systems Engineering.  Three key functions of this sub-process are identifying and managing the customer's requirements, supporting their verification and validation, and reducing risk.  These are the key responsibilities for the role, though, as the posts cited above show, "The devil (and the complexity of these) is in the detail".

The key issue with the Systems Engineering/System Architect role within a project/program/effort is that the requirements analysis procedure becomes analysis paralysis.  That is, the Systems Engineer (at least within the "waterfall" style effort, which assumes that all of the requirements are known upfront) will spend an inordinate amount of time "requirements gathering", holding the effort up, in an attempt to ensure that all of the requirements are "known"--which is patently impossible.

 I will discuss solutions to this issue in the last two sections of this post.

Design and Implementation Role
When compared with Systems Engineering, the Design and Implementation functions, procedures, methods, and role are very well understood, taught, trained, and supported with tooling.  This role determines "How to meet the customer's needs", as expressed in the "What is needed (requirements)", as shown in Figure 1.  These are the product/system/service designers, developers, and implementers of the transformation; the Subject Matter Experts (SMEs) who actually create and implement.  These skills are taught in Community Colleges, Colleges, Universities, Trade Schools, and on-line classes.  The key sub-processes, procedures, functions, and methods are as varied as the departments in the institutions of higher learning just mentioned.

There is a significant issue with designers and implementers: they attempt to create the "best" product ever and go into a never-ending set of design cycles.  Like the Systems Engineering "analysis paralysis", this burns budget and time without producing a deliverable for the customer.  One part of this problem is that the SMEs too often forget that they are developing or transforming against a set of requirements (the "What's Needed").  In the hundreds of small, medium, and large efforts in which I've been involved, I would say that the overwhelming percentage of the time, the SMEs never read the customer's requirements because they believe they understand the process, procedure, function, or method far better than the customer.  Therefore, they implement a product/system/service that does not do what the customer wants, but does do many functions that the customer does not want.  Then the defect management process takes over to rectify the two, which blows the budget and schedule entirely, while making the customer unhappy, to say the least. The second part of this problem is that each SME role is convinced that their role is key to the effort.  Consequently, they develop their portion to maximize its internal efficiency while completely neglecting the effectiveness of the product/system/service.  While I may be overstating this part somewhat, at least half the time I've seen efforts where security, for example, attempts to create the equivalent of "write-only memory": the data on it can never be used because the memory cannot be read.  This too burns budget and schedule while adding no value.

Again, I will discuss solutions to this issue in the last two sections of this post.

Program Management Role
As shown in Figure 1, the role, procedures, and methods of Program Management are to support and facilitate the Systems Engineering and Design and Implementation roles.   This is called Leadership.   An excellent definition of leadership is attributed to Lao Tzu, the Chinese philosopher of approximately 2500 years ago.  As I quoted in my book, Organizational Economics: The Formation of Wealth:
  • "The best of all leaders is the one who helps people so that, eventually, they don’t need him.
  • Then comes the one they love and admire.
  • Then comes the one they fear.
  • The worst is the one who lets people push him around.
Where there is no trust, people will act in bad faith.  The best leader doesn’t say much, but what he says carries weight.  When he is finished with his work, the people say, “It happened naturally"."[1]
[1] This quote is attributed to Lao Tzu, but no source for it has been discovered.
If the program manager does his or her job correctly, they should never be visible to the customer or suppliers; instead, they should be the conductor and coordinator of resources for the effort.  Too often, project and program managers forget that this is their role and what the best type of leader is. Instead, they consider themselves the only person responsible for the success of the effort and "in control" of the effort.  The method for this control is to manage the customer's programmatic requirements (the financial resources and schedule).  This is the way it works today.

The Way This Works Today: The Program Management Control Pattern
There are two ways to resolve the "requirements analysis paralysis" and the "design the best" issues, either by the Program Manager resolving it, or through the use of a process that is designed to move the effort around these two landmines.

The first way is to give control of the effort to the manager.  This is the "traditional" approach and the way most organizations run development and transformation efforts.  The effort's manager manages the customer's programmatic requirements (budget and schedule), so the manager plans out the effort, including its schedule.  This project plan is based on "the requirements"; most often the plan includes a "requirements analysis" task.

[Rant 1, sorry about this: My question has always been, "How is it possible to plan a project based on requirements when the first task is to analyze the requirements to determine the real requirements?"  AND, I have seen major efforts (hundreds of millions to billions) which had no real requirements identified...Huh?]

The Program or Project Manager tells the Systems Engineer and Developer/Implementer when each task is complete, because that's when the time and/or money for that task on the schedule is done, regardless of the quality of the work products from the task.  "Good" managers keep a "management reserve" in case things don't go as planned.  Often, if nothing is going as planned, the manager's knee-jerk reaction is to "replan", which means creating an inch-stone schedule.  I've seen and been involved in large efforts where the next level of detail would be to schedule "bathroom breaks".  This method of resolving "analysis paralysis" and "design the best" will almost inevitably cause cost and schedule overruns, unhappy customers, and defective products, because the effort's control function exists only to control costs and schedules.

The Program Management Control Pattern
Figure 2 shows the Program Management Control Pattern.  The size of each ellipse shows the perceived importance of each of the three roles.



Figure 2--The Program Management Control Pattern


First, the entire "Three Legged Stool" Pattern is turned upside down is the Program Management Control Pattern.  Rather than the Program Manager enabling and supporting the development process by understanding and supporting the development or transformation process, the Program Manager "controls" the process.  In Lao Tzu leadership taxonomy, this process pattern makes the Program Manager one of the latter increasingly ineffective types.  It also reverses importance of who produces the value in the effort.

To be able to "Control" the effort, the Program Manager requires many intermediate artifacts, schedules, budgets, and status reports, which use up the resources of the efforts and  are non-valued work products, the customer might look at these artifacts once during a PMR, PDR, CDR, or other "XDR" (Rant 2: Calling these review Program Management Reviews, instead of some type of Design Review", Preliminary, Critical, etc., demonstrates the overwhelming perceived importance of the programmatic requirements by Program Managers.)  I submit that all of these intermediate artifacts are non-value added because 3 months after the effort is completed, the customer or anyone else will not look at any of them except if the customer is suing the the development or transformation organization over the poor quality of the product.  All of these management reviews require resources from the Developers/Implementers and the Systems Engineers.

One extreme example of this management review procedure was the set of procedures used in the development of new aircraft for the US Air Force and Navy during the 1980s and '90s--sometimes facts are stranger than fantasy.  The DoD required some type of "Development Review" every 3 months.  Typically, these were week-long reviews with a large customer team descending on the aircraft's Prime Contractor.  Program Management (perhaps rightly) considered these of ultimate importance to keeping the contract and therefore wanted everyone ready.  Consequently, all hands on the effort stopped work 2 weeks prior to the review to work on status reports and presentation rehearsals.  Then, after the "review", all hands would spend most of an additional week reviewing the customer's feedback and trying to replan the effort to resolve issues and reduce risk.  If you add this up (2 weeks of preparation, the review week, and most of a week of follow-up), the team was spending 1 month in every 3 on status reporting.  And I have been part of information technology efforts, in this day of instant access to everything on a project, where essentially the same thing is happening.  Think about it: these aircraft programs spent one third of their budget, and lengthened their schedules by one third, just for status.  For what?  Intermediate artifacts of no persistent value--who looked at the presentations from the first Preliminary Design Review after the aircraft was put into operation?  [Rant 3: Did the American citizen get value for the investment, or was this just another Program Management Entitlement Program funded by the DoD?]

Second, as shown in Figure 2, the Systems Engineering role is substantially reduced in the perception of the Program Manager.  An example of this was brought home to me on a multi-billion dollar program: when I asked the chief engineer where the requirements were stored, he quoted the Program's Director as saying, "We don't need no damn requirements, we're too busy doing the work."  This Director underlined this thinking; he kept hiring more program management, schedule planners, earned value analysts, and so on, while continuously reducing, then eliminating, the entire Systems Engineering team, leaving only a few System Architects.  He justified this by the need for increased control and cost reduction to meet his budget [Rant 4: and therefore to get his "management bonus"--no one ever heard of a Design or Systems Engineering Bonus].  Actually, I've seen this strategy put into play on three large (more than $20M) programs with which I was associated, and I've heard about it on several more within the organization I was working for and in other organizations over the past 10 years.  

Another program that I worked on as the Lead Systems Engineer had the same perception of the Systems Engineer (including the System Architect's role within the Systems Engineering discipline/role).  It is an extreme example of all that can go wrong because of a lack of Systems Engineering.  This effort was the development of a portal capability for the organization.  It started with a meeting that had 10 management personnel and myself.  They articulated a series of ill-thought-out capability statements, continued by defining a series of products that had to be used (with no identification of Customer System or IT Functional requirements) and a 6-week schedule, and ended with a budget that was 50 percent of what even the most optimistic budgeteers could "guesstimate".  They (the three or four levels of management represented at the meeting) charged me with the equivalent of "making bricks without straw or mud, in the dark", that is, creating the portal.  Otherwise, my chances of getting on the Reduction In Force (RIF) list would be drastically increased.

Given that charge, I immediately contacted the software supplier and the development team members from two successful efforts within the organization to determine if there was any hope of accomplishing the task within the programmatic constraints.  All three agreed it could not be done in less than 6 months.  Faced with this overwhelming and documented evidence, they asked me what could be done.  The result was that, based on their "capability" statements and the "Requirements (?)" documents from the other two projects, I was able to cobble together a System Architecture Document (SAD) that these managers could point to as visible progress.  Additionally, I used a home-grown risk tool to document risks as I bumped into them, and I instituted a weekly risk watch list report, which all the managers ignored.

At this point one fiscal year ended, and with the new year I was able to have the whole, nationwide team get together, in part to gather everyone's requirements and design constraints.  Additionally, I presented an implementation plan for the capabilities I understood they needed.  This plan included segmenting the functions into an IOC build in May, followed by several additional builds.  Since this management team was used to the waterfall development process, they rejected this with no consideration; they wanted it all by May 15th.  In turn, I gave them a plan for producing, more or less, an acceptable number of functions, and an associated risk report with a large number of high-probability/catastrophic-impact risks.  They accepted the plan.  The plan failed; here is an example of why.

One of the risks was getting the hardware for the staging and production systems in by March 15th.  I submitted the Bill of Materials (BOM) to the PM the first week in February.  The suppliers of the hardware that I recommended indicated that the hardware would be shipped within 7 days of the time the order was received.  When I handed the BOM to the PM, I also indicated the risk if we didn't get the systems by March 15th.  On March 1st, I told him that we would have a day-for-day slippage in the schedule for every day we didn't receive the hardware.  The long and the short of it was that I was called on the carpet for a wire-brushing on July 28th, when the program was held up because of lack of hardware.  Since I could show the high-level manager that, in fact, I had reported the risk (then issue) week after week in the risk report she received, her ire finally turned on the PM, who had felt that the responsibility was his.

The net result of these and several other risks, induced either by a lack of requirements or a lack of attention to risks, was a system that was not ready for staging until the following December.  Management took it upon themselves to roll the portal into production without the verification and validation testing.  The final result was a total failure of the effort, due to management issues coming from near the top of the management pyramid.  Again, this was due to a complete lack of understanding of the role of Systems Engineering and Architecture.  In fact, this is a minor sample of the errors and issues--maybe I will write a post on this entire effort as an example of what not to do.

In fact, the DoD has acknowledged the pattern shown in Figure 2 and countered it by creating Systems Engineering and Technical Assistance (SETA) contracts.

The Utility of Program Management
[Rant 5: Here's where I become a Heretic to many, for my out-of-the-warehouse thinking.]  In the extreme, or so it may seem, it is possible that projects don't need a project manager.  I don't consider that a rant, because it is a fact.  Here are two questions that make the point: "Can an excellent PM with a team of poorly skilled Subject Matter Experts (SMEs) create a top-notch product?" and "Can a poor PM with a team of excellent SMEs create a top-notch product?"  The answer to the first is "Only with an exceptional amount of luck", while the answer to the second is "Yes! Unless the PM creates too much inter-team friction."  In other words, the PM creates no value; at best, the PM reduces inter-team friction, which otherwise uses resources unproductively, and guides and facilitates the use of resources, which preserves value and potential value.

None of the latter three types of leaders described by Lao Tzu, the ones I call in my book the Charismatic, the Dictator, and the Incompetent, can perform this service for the team. In other words, the PM can't say and act as if "The floggings will continue until morale improves".

Instead, the PM must be a leader of the first type described by Lao Tzu, the one I called in my book "the coach or conductor".  And any team member can be that leader.  As a Lead Developer and as a Systems Engineer, I've run medium-sized projects without a program manager and been highly successful--success in this case being measured by bringing the effort in under cost, ahead of schedule, while meeting or exceeding the customer's requirements.  Yet none of the programs for which I was the lead systems engineer, and which had a program manager whose mission was to bring in the effort on time and within budget, was successful.  On the other hand, I've been on two programs where the PM listened with his/her ears rather than his/her mouth and paid attention to the System Requirements; those efforts were highly successful.

The net of this is that a coaching/conducting PM can make a good team better, but cannot make a bad team good, while a PM who concentrates on creating better project plans, producing better and more frequent status reports, and creating and managing to more detailed schedules will always burn budget and push the schedule to the right.

A Short Cycle Process: The Way It Could and Should Work
As noted near the start of this post, there are two ways to resolve the "requirements analysis paralysis" and the "design the best" issues, either by Program Management Control, or through the use of a process that is designed to move the effort around these two landmines.

This second solution uses a development or transformation process that assumes that "Not all requirements are known upfront".  This single change of assumption makes all the difference.  The development and transformation process must, by necessity, take this assumption into account (see my post The Generalize Agile Development and Implementation Process for Software and Hardware for an outline of such a process).  This takes the pressure off the customer and Systems Engineer to determine all of the requirements upfront, and off the Developer/Implementer to "design the best" product initially.  That is, since not all of the requirements are assumed to be known upfront, the Systems Engineer can document and have the customer sign off on an initial set of known requirements early in the process (within the first couple of weeks), with the expectation that more requirements will be identified by the customer during the process.  The Developer/Implementer can start to design and implement the new product/system/service based on these requirements, with the understanding that the customer and Systems Engineer will identify and prioritize more of the customer's real system requirements as the effort proceeds.  Therefore, they don't have to worry about designing the "best" product the first time, simply because they realize that without all the requirements, they can't.
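To make the shape of such a process concrete, here is a minimal sketch in Python (my own illustration, not taken from any particular methodology or tool; the function names and sample requirements are hypothetical).  Each cycle signs off the top of the currently known backlog, builds and demonstrates that increment, and folds whatever the customer discovers at the demo back into the backlog:

# A minimal, hypothetical sketch of the short-cycle assumption:
# not all requirements are known up front, so each cycle signs off
# what is currently known, builds it, and lets the customer's demo
# surface new requirements for the next cycle.

def run_short_cycle_effort(known_requirements, discoveries_by_cycle,
                           items_per_cycle=2, max_cycles=4):
    """known_requirements: requirement names, highest priority first.
    discoveries_by_cycle: requirements the customer identifies at each demo."""
    backlog = list(known_requirements)
    delivered = []

    for cycle in range(max_cycles):
        if not backlog:
            break
        # Customer signs off on the top of the backlog for this cycle.
        scope, backlog = backlog[:items_per_cycle], backlog[items_per_cycle:]
        delivered.extend(scope)                  # build and demo the increment
        # The demo surfaces requirements nobody knew about at the start.
        backlog.extend(discoveries_by_cycle.get(cycle, []))

    return delivered, backlog                    # leftovers are expected


delivered, unfulfilled = run_short_cycle_effort(
    ["access control", "data input"],
    {0: ["error reporting"], 1: ["audit trail"]},
)
print("Delivered:", delivered)
print("Unfulfilled (OK):", unfulfilled)

The only point of the sketch is that the backlog grows during the effort, so unfulfilled requirements at the end are an expected outcome rather than a failure, which is exactly the point made later in this post.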
 
Changing this single assumption has additional consequences for Program Management.  First, there is really no way to plan and schedule the entire effort; the assumption that not all the requirements are known upfront means that a PM's attempt to "plan and schedule" the whole effort is an "exercise in futility."  What I mean by that is that if the requirements change at the end/start of each new cycle, then the value of a schedule longer than one cycle is zero, because at the end of the cycle the plan and schedule, by definition of the process, change.  With the RAD process I created, this was the most culturally difficult issue I faced in getting PMs and management to understand and accept.  In fact, a year after I moved to a new position, the process team imposed a schedule on the process.

Second, this assumption forces the programmatic effort into a Level Of Effort (LOE) type of budgeting and scheduling procedure.  Since there is no way to know which requirements are going to be the customer's highest priority in succeeding cycles, the Program Manager, together with the team, must assess the LOE to meet each of the requirements, from the highest priority down.  They would do this by assessing the complexity of the requirement and the level of risk in creating the solution that meets it.  As soon as the team runs out of resources forecast for that cycle, they have reached the cutoff point for that cycle.  They would present the set to the customer for the customer's concurrence.  Once they have customer sign-off, they would start the cycle.  Sometimes a single Use Case-based requirement, with its design constraints, will require more resources than are available to the team during one cycle.  In that case, the team, not the PM, must refactor the requirement (a sketch of this planning step follows). 
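As a rough sketch of that planning step (again my own illustration; the estimating rule and the complexity/risk scales are made up for the example, not part of any formal method), the team walks the prioritized requirements, estimates an LOE for each from its complexity and risk, stops when the cycle's forecast capacity is used up, and flags anything too large for a single cycle so the team can refactor it, as the next paragraph illustrates:

# Hypothetical LOE-based cycle planning: fill the cycle from the
# highest-priority requirement down until the forecast capacity runs out.

def plan_cycle(requirements, capacity_person_weeks):
    """requirements: dicts sorted highest priority first, e.g.
    {"name": ..., "complexity": 1-5, "risk": 1-5}."""
    scope, deferred = [], []
    remaining = capacity_person_weeks

    for req in requirements:
        # Made-up estimating rule: effort grows with complexity and risk.
        loe = req["complexity"] * (1 + 0.5 * req["risk"])
        if loe > capacity_person_weeks:
            # Too big for any single cycle: the team must refactor it
            # into smaller chunks (see the example that follows).
            deferred.append({**req, "needs_refactoring": True})
        elif loe <= remaining:
            scope.append(req)
            remaining -= loe
        else:
            deferred.append(req)      # next cycle, unless priorities change

    return scope, deferred            # scope goes to the customer for sign-off


scope, deferred = plan_cycle(
    [{"name": "access control", "complexity": 3, "risk": 4},
     {"name": "new input feeds", "complexity": 2, "risk": 2},
     {"name": "complex transaction", "complexity": 5, "risk": 5}],
    capacity_person_weeks=12,
)

With the sample numbers shown, "access control" fits in the cycle, "new input feeds" is deferred to the next cycle, and the "complex transaction" is flagged for the kind of refactoring described next.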

For example, suppose there is a mathematically complex transaction, within a knowledge-based management system, which requires an additional level of access control, new hardware, new COTS software, new networking capabilities, new inputs and input feeds, new graphics and displays, and transformed reporting.  This is definitely sufficiently complex that no matter how many high-quality designers, developers, and implementers you put on the effort, it cannot be completed within one to perhaps even three months (this is the "9 women can't make a baby in a month" principle).  The team must then refactor (divide up) the requirement into chunks that are doable by the team within the cycle's period, say one to three months.  For example, the first cycle might define and delimit the hardware required and develop the new level of access control; and so on for the number of cycles needed to meet the requirement.

Third, with this assumption of "not having all the requirements", the PM must pay most attention to the requirements, their verification and validation, and to risk reduction.  All of these functions lie within the responsibility of the Systems Engineer, but the PM must pay attention to them to best allocate the budget and time resources.

Fourth, there is no real need for PMRs, status reports, or Earned Value metrics.  The reason is simple, high customer involvement.  The customer must review the progress of the effort every month at a minimum, generally every week.  This review is given by the developers demonstrating the functions of the product, system, or service on which they are working.  And if the customer is always reviewing the actual development work, why is there a need for status, especially for an LOE effort?

Fifth, rolling out a new system or service incrementally has significant implications for the customer and for the timing and size of the ROI of the development or transformation effort.  With an IOC product, system, or service, the customer can start to use it, and in using the IOC will be able to, at a minimum, identify missing requirements.  In some cases, much more.  For example, in one effort in which I performed the systems engineering role, during the first cycle the team created the access control system and the data input functions for a transactional website.  During the second cycle, the customer inserted data into the data store for the system.  While doing this, the customer discovered enough errors in the data to pay for the effort.  Consequently, they were delighted with the system and were able to fund additional functionality, further improving their productivity.  If the effort had been based on the waterfall, the customer would have had to wait until the entire effort was complete, may not have been as satisfied with the final product (more design defects because of unknown requirements), would not have discovered the errors, and therefore would not have funded an extension to the effort.  So it turned out to be a win for the customer--more functionality and greater productivity--and for the supplier--more work.

In using a short-cycle process based on assuming "unknown requirements", there will always be unfulfilled customer system requirements at the end of this type of development or transformation process.  This is OK.  It's OK for the customer because the development or transformation team spent the available budget and time creating a product, system, or service that meets the customer's highest-priority requirements, even if those requirements were not initially identified; that is, the customer "got the biggest bang for the buck".  It's OK for the team because a delighted customer tends to work hard at getting funding for the additional system requirements.  When such a process is used in a highly disciplined manner, the customer invariably comes up with additional funding.  This has been my experience on over 50 projects with which I was associated, and on many others that were reported to me as Lead Systems Engineer for a large IT organization.

Conclusions and Opinions
The following are my conclusions on this topic:
  1. If a development or transformation effort focuses on meeting the customer's system requirements, the effort has a much better chance of success than if the focus is on meeting the programmatic requirements.
  2. If the single fundamental assumption is changed from "All the requirements are known up front" to "Not all the requirements are known up front", the effort has the opportunity to be successful, or much more successful, by the only metric that counts: the customer getting more of what he or she wants, which increases customer satisfaction.
  3. If the development or transformation effort can roll out small increments, it will increase the customer's ROI for the product, system, or service.
  4. Having a Program Manager, whose only independent responsibility is managing resources, be accountable for an effort is like having the CEO of an organization report to the CFO; you get cost-efficient, but not effective, products, systems, or services.  [Final Rant: I know good PMs have value, but when a team works, it is because the PM is a leader of the first type: a coach and conductor.] Having a Program Manager who understands the "three legged stool" pattern for development or transformation, and who executes to it, will greatly enhance the chance of success of the effort.