Thursday, June 30, 2011

Warwick McKibbin tells it straight

RBA director Warwick McKibbin has a reputation for speaking his mind on key economic matters. It appears he is at it again, and it is worth considering his informed views on some global and local matters.

Here are my favourite points from the linked article.

Referring to the most recent global economic crisis as a mere ''blip'', he said the coming crisis could undo the mining boom and bring on inflation of the kind not seen since the 1970s.

The response globally to the financial crisis was mostly to kick the can down the road.  At some point this must stop, and the longer it goes on, the worse the resolution must be.

Joking that he could not talk about Australian interest rates, which were in any event ''always appropriate'', the Reserve Bank board member warned that the inflation would spread worldwide.

I would say that Australia has been severely buffered from global inflation by our exchange rate.  Who knows how long this can last.  My suspicion is that if interest rates go up to fight inflation, our local economy will flounder and we will end up having to drop interest rates severely and get our share of inflation anyway.  

Australia needed a sovereign wealth fund to store mining income while it lasted, ideally stored in a separate account for each taxpayer so the government could not raid it.

Of course I agree about using the tax revenues raised from the mining investment boom to save for our future.  The idea of giving each taxpayer an account seems particularly interesting.  I haven't put much thought into it, but at first glance I like the idea.

The $50 billion national broadband network epitomised the sort of waste Australia could not afford. ''I would say to any politician who thinks that spending is worthwhile, take your salary as shares in NBNCo. If you think it's a good investment, you'll be ahead,'' he said.

While I think the idea is great, the positive externalities generated by the NBN should also factor into the equation, yet these can't be captured by revenue from broadband access.  But in general I like the idea.

Systems Engineering, Product/System/Service Implementing, and Program Management

A Pattern for Development and Transformation Efforts
Recently a discussion started in a LinkedIn Group about recruiters, HR, and management using the term "Systems Engineer" indiscriminately.   The conclusion was that the discipline of Systems Engineering, and the role of the Systems Engineer in development and transformation efforts, is poorly understood by most people, and perhaps by many claiming to be Systems Engineers.  From my experience of building a Systems Engineering group from 5 to 55, I can attest to this conclusion.

Currently, I am working on a second book, with the working title of "Systems Engineering, System Architecture, and Enterprise Architecture".  In the book, I'm attempting to distill 45+ years of experience and observation of many efforts, from minor report revisions to the Lunar Module, F-14, B-2, and X-29 aircraft creation efforts, to statewide IT outsourcing efforts.  This post contains excerpts of several concepts from this manuscript.

The Archetypal Pattern for Product/System/Service Development and Transformation
At a high level there is an architectural process pattern for Product/System/Service development and transformation.  I discuss this pattern in my current book, Organizational Economics: The Formation of Wealth, and it is one key pattern for my next book.  This pattern is shown in Figure 1.
Figure 1--The Three Legged Stool Pattern

As shown in Figure 1, the architectural process model posits that all development and transformation efforts are based on the interactions of three functions (or sub-processes): Systems Engineering, Design and Implementation, and Program Management.  This is true whether a homeowner is replacing a kitchen faucet or NASA is building a new spacecraft.  Each of these sub-processes is a role with a given set of skills.

Consequently, as shown in Figure 1, I call this process pattern "The Three-Legged Stool" pattern for development and transformation.  I will discuss each sub-process as a role, describing what I see as the needs or requirements for the process and the skills for the role.  In my next book, I will discuss in more detail how these can be done.

As shown in Figure 1, the program management role is to enable and support the other two roles with financial resources, and to expect results in return, in the form of a product/system/service meeting the customer's requirements.

Systems Engineering (and System Architecture) Role
The first role is the Systems Engineer/System Architect.  This role works with the customer to determine the requirements--"what is needed."  I've discussed this role in several posts, including Enterprise Architecture and System Architecture and The Definition of the Disciplines of Systems Engineering.  Those posts lay out the key functions and responsibilities of this sub-process, though, as they show, "The devil (and complexity of these) is in the detail".

The key issue with the Systems Engineering/System Architect role within a project/program/effort is that requirements analysis can become analysis paralysis.  That is, the Systems Engineer (at least within a "waterfall"-style effort, which assumes that all of the requirements are known upfront) will spend an inordinate amount of time on "requirements gathering", holding the effort up in an attempt to ensure that all of the requirements are "known"--which is patently impossible.

I will discuss solutions to this issue in the last two sections of this post.

Design and Implementation Role
When compared with Systems Engineering, the Design and Implementation functions, procedures, methods, and role are very well understood, taught, trained, and supported with tooling.  This role determines "how to meet the customer's needs", as expressed in the "what is needed" (the requirements), as shown in Figure 1.  These are the product/system/service designers, developers, and implementers of the transformation--the Subject Matter Experts (SMEs) who actually create and implement.  These skills are taught in Community Colleges, Colleges, Universities, Trade Schools, and on-line classes.  The key sub-processes, procedures, functions, and methods are as varied as the departments in the various institutions of higher learning just mentioned.

There is a significant issue with designers and implementers: they attempt to create the "best" product ever and go into a never-ending set of design cycles.  Like the Systems Engineering "analysis paralysis", this burns budget and time without producing a deliverable for the customer.  One part of this problem is that the SMEs too often forget that they are developing or transforming against a set of requirements (the "What's Needed").  In the hundreds of small, medium, and large efforts in which I've been involved, I would say that the overwhelming majority of the time the SMEs never read the customer's requirements, because they understand the process, procedure, function, or method far better than the customer.  Consequently, they implement a product/system/service that does not do what the customer wants, but does do many functions that the customer does not want.  Then the defect management process takes over to reconcile the two, which blows the budget and schedule entirely while making the customer unhappy, to say the least.

The second part of this problem is that each SME role is convinced that their role is key to the effort.  Consequently, they develop their portion to maximize its internal efficiency while completely neglecting the effectiveness of the product/system/service as a whole.  While I may be overstating this somewhat, at least half the time I've seen efforts where security, for example, attempts to create the equivalent of "write-only memory": the data on it can never be used because the memory cannot be read.  This, too, burns budget and schedule while adding no value.

Again, I will discuss solutions to this issue in the last two sections of this post.

Program Management Role
As shown in Figure 1, the role of Program Management, with its procedures and methods, is to support and facilitate the Systems Engineering and Design and Implementation roles.   This is called Leadership.   An excellent definition of leadership is attributed to Lao Tzu, the Chinese philosopher of approximately 2500 years ago.  As I quoted in my book, Organizational Economics: The Formation of Wealth:
  • "The best of all leaders is the one who helps people so that, eventually, they don’t need him.
  • Then comes the one they love and admire.
  • Then comes the one they fear.
  • The worst is the one who lets people push him around.
Where there is no trust, people will act in bad faith.  The best leader doesn’t say much, but what he says carries weight.  When he is finished with his work, the people say, “It happened naturally.”[1]
[1] This quote is attributed to Lao Tzu, but no source for it has been discovered.
If the program manager does his or her job correctly, they should never be visible to the customer or suppliers; instead, they should be the conductor and coordinator of resources for the effort.  Too often, project and program managers forget that this is their role, and forget what the best type of leader is.  Instead, they consider themselves the only person responsible for the success of the effort and "in control" of it.  The method for this control is to manage the customer's programmatic requirements (the financial resources and schedule).  This is the way it works today.

The Way This Works Today: The Program Management Control Pattern
There are two ways to resolve the "requirements analysis paralysis" and the "design the best" issues, either by the Program Manager resolving it, or through the use of a process that is designed to move the effort around these two landmines.

The first way is to give control of the effort to the manager.  This is the "traditional" approach, and the way most organizations run development and transformation efforts.  The effort's manager manages the customer's programmatic requirements (budget and schedule), so the manager plans out the effort, including its schedule.  This project plan is based on "the requirements"; most often the plan includes a "requirements analysis" task.

[Rant 1, sorry about this: My question has always been, "How is it possible to plan a project based on requirements when the first task is to analyze the requirements to determine the real requirements?"  AND, I have seen major efforts (hundreds of millions to billions) which had no real requirements identified...Huh?]

The Program or Project Manager tells the Systems Engineer and Developer/Implementer when each task is complete, because that's when the time and/or money for that task on the schedule is used up, regardless of the quality of the work products from the task.  "Good" managers keep a "management reserve" in case things don't go as planned.  Often, if nothing is going as planned, the manager's knee-jerk reaction is to "replan", which means creating an inch-stone schedule.  I've seen and been involved in large efforts where the next level of detail would have been to schedule "bathroom breaks".  This method of resolving "analysis paralysis" and "design the best" will almost inevitably cause cost and schedule overruns, unhappy customers, and defective products, because the effort's control function controls only costs and schedules, not quality.

The Program Management Control Pattern
Figure 2 shows the Program Management Control Pattern.  The size of each ellipse shows the perceived importance of each of the three roles.



Figure 2--The Program Management Control Pattern


First, the entire "Three Legged Stool" pattern is turned upside down in the Program Management Control Pattern.  Rather than enabling and supporting the development or transformation process by understanding it, the Program Manager "controls" the process.  In Lao Tzu's leadership taxonomy, this process pattern makes the Program Manager one of the latter, increasingly ineffective, types.  It also reverses the perceived importance of who produces the value in the effort.

To be able to "Control" the effort, the Program Manager requires many intermediate artifacts--schedules, budgets, and status reports--which use up the effort's resources and are non-value-added work products.  The customer might look at these artifacts once during a PMR, PDR, CDR, or other "XDR".  (Rant 2: Calling these reviews Program Management Reviews, instead of some type of Design Review--Preliminary, Critical, etc.--demonstrates the overwhelming perceived importance of the programmatic requirements to Program Managers.)  I submit that all of these intermediate artifacts are non-value-added because, 3 months after the effort is completed, neither the customer nor anyone else will look at any of them, except if the customer is suing the development or transformation organization over the poor quality of the product.  And all of these management reviews require resources from the Developers/Implementers and the Systems Engineers.

One extreme example of this management review procedure was the set of procedures used in the development of new aircraft for the US Air Force and Navy during the 1980s and 90s--sometimes facts are stranger than fantasy.  The DoD required some type of "Development Review" every 3 months.  Typically, these were week-long reviews with a large customer team descending on the aircraft's Prime Contractor.  Program Management (perhaps rightly) considered these of ultimate importance to keeping the contract and therefore wanted everyone ready.  Consequently, all hands on the effort stopped work 2 weeks prior to each review to work on status reports and presentation rehearsals.  Then, after the "review", all hands would spend most of an additional week reviewing the customer's feedback and trying to replan the effort to resolve issues and reduce risk.  If you add this up, the team was spending 1 month in every 3 on status reporting.  And I have been part of information technology efforts, in this day of instant access to everything on a project, where essentially the same thing happens.  Think about it: these aircraft programs spent one third of their budget, and lengthened their schedules by one third, just for status--for what?  Intermediate artifacts of no persistent value.  Who looked at the presentations from the first Preliminary Design Review after the aircraft was put into operations?  [Rant 3: Did the American citizen get value for the investment, or was this just another Program Management Entitlement Program funded by the DoD?]

Second, as shown in Figure 2, the Systems Engineering role is substantially reduced in the perception of the Program Manager.  An example of this was brought home to me on a multi-billion-dollar program when I asked the chief engineer where the requirements were stored; he quoted the Program's Director as saying, "We don't need no damn requirements, we're too busy doing the work."  This Director underlined this thinking: he kept hiring more program management, schedule planners, earned value analysts, and so on, while continuously reducing, then eliminating, the entire Systems Engineering team, leaving only a few System Architects.  He justified this by the need for increased control and cost reduction to meet his budget [Rant 4: and therefore to get his "management bonus"--no one ever heard of a Design or Systems Engineering Bonus].  I've seen this strategy put into play on three large (more than $20M) programs with which I was associated, and over the past 10 years I've heard about it on several more, both within the organization I was working for and in other organizations.

Another program, on which I worked as the Lead Systems Engineer, had the same perception of the Systems Engineer (including the System Architect's role within the Systems Engineering discipline).  It is an extreme example of all that can go wrong for lack of Systems Engineering.  This effort was the development of a portal capability for the organization.  It started with a meeting that had 10 management personnel and myself.  They articulated a series of ill-thought-out capability statements, continued by defining a series of products that had to be used (with no identification of customer system or IT functional requirements), with a 6-week schedule, and ended with a budget that was 50 percent of what even the most optimistic budgeteers could "guesstimate".  They (the three or four levels of management represented at the meeting) charged me with the equivalent of "making bricks without straw or mud, in the dark"--that is, creating the portal.  Otherwise, my chances of getting on the Reduction In Force (RIF) list would be drastically increased.

Given that charge, I immediately contacted the software supplier and the development team members from two successful efforts within the organization to determine whether there was any hope of accomplishing the task within the programmatic constraints.  All three agreed it could not be done in less than 6 months.  Faced with this overwhelming and documented evidence, they asked me what could be done.  The result was that, based on their "capability" statements and the "requirements (?)" documents from the other two projects, I was able to cobble together a System Architecture Document (SAD) that these managers could point to as visible progress.  Additionally, I used a home-grown risk tool to document risks as I bumped into them, and I instituted a weekly risk watch list report, which all the managers ignored.
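The post doesn't describe the home-grown risk tool itself, but a minimal sketch of the kind of weekly risk watch list it produced might look like the following.  The fields, the probability-times-impact scoring, and the reporting threshold here are my illustrative assumptions, not the author's actual tool:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float   # estimated likelihood, 0.0 - 1.0
    impact: int          # severity, 1 (minor) - 5 (catastrophic)

    @property
    def exposure(self) -> float:
        # Simple exposure score: likelihood times severity.
        return self.probability * self.impact

def watch_list(risks, threshold=2.0):
    """Weekly report: risks at or above the exposure threshold, worst first."""
    hot = [r for r in risks if r.exposure >= threshold]
    return sorted(hot, key=lambda r: r.exposure, reverse=True)

# Entries loosely echoing the risks described in the post.
risks = [
    Risk("hardware not delivered by March 15", 0.7, 5),
    Risk("budget is 50% of realistic estimate", 0.9, 5),
    Risk("minor UI rework", 0.3, 2),
]

for r in watch_list(risks):
    print(f"{r.exposure:.1f}  {r.description}")
```

A report like this costs little to maintain weekly and, as the story below illustrates, creates a paper trail showing when a risk was first raised.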

At this point one fiscal year ended, and with the new year I was able to have the whole nationwide team get together, in part to gather everyone's requirements and design constraints.  Additionally, I presented an implementation plan for the capabilities I understood they needed.  This plan included segmenting the functions into an IOC build in May, followed by several additional builds.  Since this management team was used to the waterfall development process, they rejected this with no consideration; they wanted it all by May 15th.  In turn, I gave them a plan for producing, more or less, an acceptable number of functions, and an associated risk report with a large number of high-probability/catastrophic-impact risks.  They accepted the plan.  The plan failed; here is an example of why.

One of the risks was getting the hardware for the staging and production systems in by March 15th.  I submitted the Bill of Materials (BOM) to the PM the first week in February.  The suppliers of the hardware that I recommended indicated that it would be shipped within 7 days of receiving the order.  When I handed the BOM to the PM, I also indicated the risk if we didn't have the systems by March 15th.  On March 1st, I told him that we would have a day-for-day slippage in the schedule for every day we didn't receive the hardware.  The long and the short of it was that I was called on the carpet for a wire brushing on July 28th, when the program was held up for lack of hardware.  Since I could show the high-level manager that, in fact, I had reported the risk (by then an issue) week after week in the risk report she received, her ire finally turned on the PM, who, after all, had the responsibility.

The net result of these and several other risks--induced either by lack of requirements or by failure to pay attention to risks--was a system that was not ready for staging until the following December.  Management took it upon themselves to roll the portal into production without verification and validation testing.  The final result was a total failure of the effort, due to management issues originating near the top of the management pyramid.  Again, this was due to a complete lack of understanding of the role of Systems Engineering and Architecture.  In fact, this is a minor sample of the errors and issues; maybe I will write a post on this entire effort as an example of what not to do.

In fact, the DoD has acknowledged the pattern shown in Figure 2 and countered it by creating Systems Engineering and Technical Assistance (SETA) contracts.

The Utility of Program Management
[Rant 5: Here's where I become a heretic to many, for my out-of-the-warehouse thinking.]  In the extreme, or so it may seem, projects don't need a project manager.  I don't consider that a rant, because it is a fact.  Here are two questions that make the point: "Can an excellent PM with a team of poorly skilled Subject Matter Experts (SMEs) create a top-notch product?" and "Can a poor PM with a team of excellent SMEs create a top-notch product?"  The answer to the first is "Only with an exceptional amount of luck", while the answer to the second is "Yes! Unless the PM creates too much inter-team friction."  In other words, the PM creates no value; at best, by guiding and facilitating the use of resources and reducing inter-team friction, the PM preserves value and potential value.

None of the latter three types of leaders described by Lao Tzu--the ones I call in my book the Charismatic, the Dictator, and the Incompetent--can perform this service for the team.  In other words, the PM can't say and act as if "The floggings will continue until morale improves".

Instead, the PM must be a leader of the first type described by Lao Tzu, what I call in my book "the coach or conductor".  And any team member can be that leader.  As a Lead Developer and as a Systems Engineer, I've run medium-sized projects without a program manager and been highly successful--success in this case being measured by bringing the effort in under cost, ahead of schedule, while meeting or exceeding the customer's requirements.  Yet none of the programs for which I was the lead systems engineer, and which had a program manager whose mission was to bring in the effort on time and within budget, was successful.  On the other hand, I've been on two programs where the PM listened with his or her ears rather than his or her mouth, and both paid attention to the System Requirements; those efforts were highly successful.

The net of this is that a coaching/conducting PM can make a good team better, but cannot make a bad team good, while a PM who focuses on creating better project plans, producing better and more frequent status reports, and creating and managing to more detailed schedules will always burn budget and push the schedule to the right.

A Short Cycle Process: The Way It Could and Should Work
As noted near the start of this post, there are two ways to resolve the "requirements analysis paralysis" and the "design the best" issues, either by Program Management Control, or through the use of a process that is designed to move the effort around these two landmines.

This second solution uses a development or transformation process that assumes that "not all requirements are known upfront".  This single change of assumption makes all the difference.  The development and transformation process must, by necessity, take this assumption into account (see my post The Generalize Agile Development and Implementation Process for Software and Hardware for an outline of such a process).  This takes the pressure off the customer and Systems Engineer to determine all of the requirements upfront, and off the Developer/Implementer to "design the best" product initially.  That is, since not all of the requirements are assumed to be known upfront, the Systems Engineer can document and have the customer sign off on an initial set of known requirements early in the process (within the first couple of weeks), with the expectation that more requirements will be identified by the customer during the process.  The Developer/Implementer can start to design and implement the new product/system/service based on these requirements, with the understanding that the customer and Systems Engineer will identify and prioritize more of the customer's real system requirements as the effort proceeds.  Therefore, they don't have to worry about designing the "best" product the first time, simply because they realize that without all the requirements, they can't.
 
Changing this single assumption has additional consequences for Program Management.  First, there is really no way to plan and schedule the whole effort; given the assumption that not all the requirements are known upfront, any attempt by a PM to "plan and schedule" the entire effort is an "exercise in futility."  What I mean is that if the requirements change at the end/start of each new cycle, then a schedule longer than one cycle has zero value, because at the end of the cycle the plan and schedule, by definition of the process, change.  With the RAD process I created, this was the most culturally difficult issue I faced in getting PMs and management to understand and accept.  In fact, a year after I moved to a new position, the process team imposed a schedule on the process.

Second, the assumption forces the programmatic effort into a Level Of Effort (LOE) type of budgeting and scheduling procedure.  Since there is no way to know which requirements are going to be the customer's highest priority in succeeding cycles, the Program Manager, together with the team, must assess the LOE to meet each of the requirements, from the highest priority down.  They would do this by assessing the complexity of the requirement and the level of risk in creating the solution that meets it.  As soon as the team runs out of resources forecast for that cycle, they have reached the cutoff point for that cycle.  They would present the selected set to the customer for concurrence.  Once they have customer sign-off, they would start the cycle.  Sometimes a single Use Case-based requirement, with its design constraints, will require more resources than are available to the team during one cycle.  In that case, the team, not the PM, must refactor the requirement.
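The cycle-planning step described above--work down the priority list and cut off when the cycle's forecast resources run out--can be sketched as follows.  The function, the backlog entries, and the effort numbers are illustrative assumptions, not from the post:

```python
def plan_cycle(requirements, capacity):
    """requirements: (name, priority, effort) tuples; effort in person-days.
    Work down the priority list (1 = highest) until the cycle's LOE budget
    runs out -- that is the cutoff point for the cycle."""
    selected, remaining = [], capacity
    for name, priority, effort in sorted(requirements, key=lambda r: r[1]):
        if effort > remaining:
            break  # team has run out of forecast resources for this cycle
        selected.append(name)
        remaining -= effort
    return selected

# An invented backlog, in the customer's priority order.
backlog = [
    ("access control", 1, 20),
    ("data input feeds", 2, 30),
    ("reporting", 3, 40),
    ("new graphics", 4, 25),
]

print(plan_cycle(backlog, capacity=60))  # ['access control', 'data input feeds']
```

The selected set is what the team would then present to the customer for sign-off before starting the cycle.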

For example, suppose there is a mathematically complex transaction, within a knowledge-based management system, which requires an additional level of access control, new hardware, new COTS software, new networking capabilities, new inputs and input feeds, new graphics and displays, and transformed reporting.  This is sufficiently complex that no matter how many high-quality designers, developers, and implementers you put on the effort, it cannot be completed within one, or perhaps even three, months (this is the "9 women can't make a baby in a month" principle).  The team must then refactor (divide up) the requirement into chunks that are doable by the team within the cycle's period, say one to three months.  For example, the first cycle might define and delimit the hardware required and develop the new level of access control, and so on for the number of cycles needed to meet the requirement.
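The refactoring step just described--dividing an oversized requirement into cycle-sized chunks--might be sketched like this.  The task names and effort figures are invented for illustration:

```python
def refactor(tasks, cycle_capacity):
    """tasks: list of (name, effort) pairs for one oversized requirement.
    Yields one list of task names per cycle, packing tasks greedily in order
    until the cycle's capacity would be exceeded."""
    cycle, used = [], 0
    for name, effort in tasks:
        if used + effort > cycle_capacity and cycle:
            yield cycle              # this cycle is full; start the next one
            cycle, used = [], 0
        cycle.append(name)
        used += effort
    if cycle:
        yield cycle

# Pieces of the hypothetical complex-transaction requirement, in person-days.
big_requirement = [
    ("define/delimit hardware", 15),
    ("new access control level", 20),
    ("COTS software integration", 25),
    ("input feeds", 20),
    ("graphics and displays", 25),
    ("transformed reporting", 15),
]

for i, chunk in enumerate(refactor(big_requirement, cycle_capacity=40), 1):
    print(f"cycle {i}: {chunk}")
```

As in the post's example, the first chunk pairs the hardware definition with the new access control level; the remaining pieces fill later cycles.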

Third, with this assumption of "not having all the requirements", the PM must pay the most attention to the requirements, their verification and validation, and risk reduction.  All of these functions lie within the responsibility of the Systems Engineer, but the PM must pay attention to them to best allocate the budget and time resources.

Fourth, there is no real need for PMRs, status reports, or Earned Value metrics.  The reason is simple: high customer involvement.  The customer must review the progress of the effort every month at a minimum, generally every week.  This review takes the form of the developers demonstrating the functions of the product, system, or service on which they are working.  And if the customer is always reviewing the actual development work, why is there a need for status reports, especially for an LOE effort?

Fifth, rolling out a new system or service incrementally has significant implications for the timing and size of the customer's ROI for the development or transformation effort.  With an IOC product, system, or service, the customer can start to use it, and in using the IOC will be able to, at a minimum, identify missing requirements--in some cases, much more.  For example, in one effort in which I performed the systems engineering role, during the first cycle the team created the access control system and the data input functions for a transactional website.  During the second cycle, the customer inserted data into the system's data store.  While doing this, the customer discovered enough errors in the data to pay for the effort.  Consequently, they were delighted with the system and were able to fund additional functionality, further improving their productivity.  If the effort had been based on the waterfall, the customer would have had to wait until the entire effort was complete, might not have been as satisfied with the final product (more design defects because of unknown requirements), would not have discovered the errors, and therefore would not have funded an extension to the effort.  So it turned out to be a win for the customer--more functionality and greater productivity--and for the supplier--more work.

In using a short-cycle process based on assuming "unknown requirements", there will always be unfulfilled customer system requirements at the end of the development or transformation process.  This is OK.  It's OK for the customer because the development or transformation team spent the available budgetary and time resources creating a product, system, or service that meets the customer's highest-priority requirements, even if those requirements were not initially identified; that is, the customer "got the biggest bang for the buck".  It's OK for the team because a delighted customer tends to work hard at getting funding for the additional system requirements.  When such a process is used in a highly disciplined manner, the customer invariably comes up with additional funding.  This has been my experience on over 50 projects with which I was associated, and many others that were reported to me as Lead Systems Engineer for a large IT organization.

Conclusions and Opinions
The following are my conclusions on this topic:
  1. If a development or transformation effort focuses on meeting the customer's system requirements, the effort has a much better chance of success than if the focus is on meeting the programmatic requirements.
  2. If the single fundamental assumption is changed from "All the requirements are known up front" to "Not all the requirements are known up front", the effort has the opportunity to be successful, or much more successful, by the only metric that counts: the customer getting more of what he or she wants, which increases customer satisfaction.
  3. If the development or transformation effort can roll out small increments, the customer's ROI for the product, system, or service will increase.
  4. Having a Program Manager, whose only independent responsibility is managing resources, be accountable for an effort is like having the CEO of an organization report to the CFO; you get cost-efficient, but not effective, products, systems, or services.  [Final Rant: I know good PMs have value, but when a team works, it is because the PM is a leader of the first type: a coach and conductor.]  Having a Program Manager who understands the "three legged stool" pattern for development or transformation, and who executes to it, will greatly enhance the chance of success of the effort.

Wednesday, June 29, 2011

Apologies - overzealous comment spam filter

A number of people have made thought-provoking comments on this blog in the past months, only to have their comments disappear moments after submitting.  They have been accumulating in the blogger spam filter.  I have let through the legitimate comments now.

My apologies.  I will keep an eye on this from now on.

Tuesday, June 28, 2011

What pay rise?

I work with a bunch of economists. Now is the time of year we have performance reviews and negotiate pay rises and promotions. But no one has yet discussed the fact that high effective marginal tax rates (EMTRs) greatly reduce the real take-home benefit of a pay rise.

(The EMTR estimates the proportion of each extra dollar of gross income that is lost to tax and to the withdrawal of welfare payments.)

An EMTR higher than 50% is very common in Australia for low and middle income earners, and any push for greater middle class welfare will simply increase this perverse tax incentive.

For example, say you get a pay rise approximating CPI of 4%.  Your average tax rate is 25% (meaning you get 75% of your gross pay in the hand) and your EMTR is 50% (meaning you keep only 50c of every extra dollar of gross salary). In this case you actually have a pay reduction in real terms.

Your take-home pay increased by only 2.7% (while the government's net tax take from your income increased by 8%).

In the above example, for your take-home pay to keep pace with CPI you need a 6% pay rise.  If you are a lower income earner whose cost-of-living increases typically exceed CPI, you will need an even higher pay rise just to break even.
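The arithmetic above can be checked with a short script. This is an illustrative sketch only: it assumes a flat average tax rate and a constant EMTR on the marginal rise, which real tax scales only approximate.

```python
# Illustrative only: reproduces the worked example in the text.
# Assumes a flat average tax rate and a constant EMTR on the marginal rise.

def real_take_home_change(pay_rise, avg_tax_rate, emtr, cpi):
    """Return the nominal and (approximate) real change in take-home pay."""
    gross = 1.0                                       # normalise current gross pay to 1
    take_home = gross * (1 - avg_tax_rate)            # e.g. 0.75
    extra_take_home = gross * pay_rise * (1 - emtr)   # e.g. 0.04 * 0.5 = 0.02
    nominal_change = extra_take_home / take_home      # e.g. 0.02 / 0.75 ~ 2.7%
    real_change = (1 + nominal_change) / (1 + cpi) - 1
    return nominal_change, real_change

nominal, real = real_take_home_change(pay_rise=0.04, avg_tax_rate=0.25,
                                      emtr=0.50, cpi=0.04)
print(f"take-home up {nominal:.1%}, real change {real:.1%}")
# take-home up 2.7%, real change -1.3%

# Break-even gross rise so take-home keeps pace with CPI:
break_even = 0.04 * (1 - 0.25) / (1 - 0.50)
print(f"break-even pay rise: {break_even:.0%}")   # break-even pay rise: 6%
```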

High effective marginal tax rates might be a contributing factor in the rise of middle-class welfare.  High EMTRs mean that employers' payroll costs must grow at a rate much faster than CPI just for employees to break even.  If these types of pay rises are not supported by real growth in the economy, governments may increase welfare to maintain standards of living.  This further increases EMTRs in a reinforcing cycle.

An additional point: the significant impact of effective marginal tax rates on changes in take-home pay is generally ignored when comparing changes in gross household incomes to the cost of living, or the cost of housing.

In sum, one major economic problem with high EMTRs is that your employer faces a 4% increase in the cost of employing you for a 2.7% increase in your net pay.

Monday, June 27, 2011

Smoking decreases health costs to society


The academic literature generally concludes that smoking reduces health costs to society. This is in stark contrast to commonly held beliefs about the health care costs borne by society from vices such as smoking, alcohol consumption and fatty foods (which are the target of future regulations).

In fact I will argue that as a society we would be better off if more people would take health risks, and it would be a simple solution to the aged care burden many fear will occur when the baby boomers retire.

The following academic results are typical (my emphasis).

Health care costs for smokers at a given age are as much as 40 percent higher than those for nonsmokers, but in a population in which no one smoked the costs would be 7 percent higher among men and 4 percent higher among women than the costs in the current mixed population of smokers and nonsmokers. If all smokers quit, health care costs would be lower at first, but after 15 years they would become higher than at present. In the long term, complete smoking cessation would produce a net increase in health care costs, but it could still be seen as economically favorable under reasonable assumptions of discount rate and evaluation period.(here)

Until age 56 y, annual health expenditure was highest for obese people. At older ages, smokers incurred higher costs. Because of differences in life expectancy, however, lifetime health expenditure was highest among healthy-living people and lowest for smokers. Obese individuals held an intermediate position.(here)

As I have said repeatedly

My core argument in this field has been that increasing preventative health care, while having the benefits of a healthier and longer life, often comes at increased total lifetime health costs, rather than decreased costs as is often proposed. Remember, we all die some day, and any potential cause of death postponed will allow another to take its place, which of course has its own health costs. Alternatively, a more healthy existence may make us more productive for longer and lead to us contributing more in taxes over our lifetime than the potential increase in health costs which were paid through the tax system for our preventative care.

Governments, and subsequently economists, worry about these things because many health care costs are borne by others through tax revenue, yet the net economic effect is anything but straightforward.

To understand the health costs borne by others, it is necessary to determine whether the unhealthy vice decreases your lifetime tax contribution to health costs (due to illness) by more, or less, than the decrease in your lifetime health costs themselves (due to an early death).

I argue that most unhealthy vices provide a net benefit to society in fiscal terms - they reduce health costs by more than they reduce the tax contributions to health care which may be lost to illness.

The reason is simple. Most of the serious health problems associated with drinking, smoking and obesity take a long time to present. A smoker whose habit had no impact on his lifetime employment, but who dies of lung cancer upon retirement at age 65, has still contributed all his lifetime tax to society, including plenty of tax on tobacco itself, but has avoided the ongoing health costs of ageing, and the cost of the pension.

It sounds cruel, but it is true. The rest of us are financially better off if people die soon after they retire (though, fortunately for retirees, they mostly do not). The costs of these health vices are therefore borne directly by the people who partake in them, to the benefit of those who choose not to. Perhaps an alcohol and tobacco subsidy is in order?
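The worked example can be turned into a deliberately crude model. Every number below (annual tax paid, pension, health cost growth) is a hypothetical placeholder chosen only to show the mechanics of the argument, not a real estimate.

```python
# A deliberately crude sketch of the fiscal argument above. All figures are
# hypothetical placeholders, chosen only to illustrate the mechanics.

def net_fiscal_contribution(death_age, retire_age=65, annual_tax=15_000,
                            annual_pension=20_000, base_health_cost=2_000,
                            health_growth=1.09):
    """Lifetime taxes paid, minus pension drawn and publicly funded health
    costs, with health costs growing each year after age 60."""
    total = 0.0
    for age in range(20, death_age):
        if age < retire_age:
            total += annual_tax          # working-life tax contribution
        else:
            total -= annual_pension      # pension drawn after retirement
        # health costs rise steeply with age
        total -= base_health_cost * health_growth ** max(0, age - 60)
    return total

# Dies at retirement: full lifetime of tax paid, ageing costs avoided.
print(net_fiscal_contribution(death_age=66))
# Long, healthy retirement: pension plus decades of rising health costs.
print(net_fiscal_contribution(death_age=85))
```

Under these assumed numbers, the early death leaves a far larger net contribution than the long retirement, which is the point of the argument; the placeholder figures only determine how large the gap is, not its direction.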

The only situation where  relatively healthy people are worse off is if the illness resulting from the vice occurs early in life and is a barrier to employment. In this case the vice would result in a massive reduction in their contribution to social health care, while the rest of society is left supporting the person’s medical costs and welfare costs.

The academic literature seems to suggest that this situation is relatively uncommon.

We can see then that the aged care burden we face is a result of people living healthier and longer lives, especially in the period after retirement. It is not the result of a lifetime of unhealthy consumption habits, which actually have a net effect of reducing the health care burden to society.

As a final note, the amazing gap between academic understanding, public perception, and political rambling suggests that taxes on tobacco and alcohol are more about raising revenue than reducing society-wide health care costs. The counterintuitive nature of these academic conclusions makes them easy to isolate from policy discussions, allowing politicians to keep any debate at the most superficial level.

*I am not a smoker, but am an occasional drinker, and generally want to live a long time - so I selfishly choose to stay as healthy as I can.

Sunday, June 26, 2011

Myth: Tight rental market boosts home prices

A common housing market myth is that low vacancy rates lead to rent increases, which lead to price increases (or at the very least, put a limit on any loss in home values). For example -

...this market imbalance will at some points cause an acceleration in rentals growth and a tightening in rental vacancies, so setting the stage for a recovery in prices through 2012.

Unfortunately, if history is anything to go by, this argument fails in real world conditions.

The two graphs below make the point clearly. In the early 1990s, vacancy rates soared and prices remained flat. But in the early 2000s, rental vacancies matched these highs during the strongest period of price growth observed in 25 years. How can these two opposing relationships be reconciled?

(Images from here and here)

I have a hypothesis.  During boom times overbuilding results in a slight glut of homes entering the rental market (eg 2000-2005). As the construction boom subsides, these homes are slowly absorbed by rental demand. When the market begins to fall (bringing much of the economy with it) potential sellers become reluctant landlords, boosting rental supply (eg 1990-1995). Additionally, nervous householders rein in spending on housing, resulting in an increased occupancy rate and lower rental demand.

There are many ways the occupancy rate increases, which don’t necessarily imply a shortage of homes. Downsizing leads to more efficient use of existing homes -

For example, the parents of a family whose adult children have moved out with friends or partners might find that the upkeep of a large house conflicts with their ‘grey nomad’ retirement plans. They can sell their 5-bedroom house and move into a new 2-bedroom unit, pocketing the price difference for their retirement.

In this scenario the construction of a 2-bedroom apartment resulted in a 5-bedroom home being available to meet the housing needs of population growth.

Other ways include university students moving home with their parents, and grandparents moving in with their children’s families.

If my hypothesis holds, then the ‘rental market cycle’ has two periods for each economic cycle, and tight markets are a signal of a price boom only if the previous trough was prior to a price fall. Therefore our next 'rental market cycle' will be one accompanied by falling prices, or flat at best.  The evidence in Brisbane seems to suggest that this pattern is beginning to occur (although prices have already fallen 10%).

(I also have a suspicion that auction results show a similar cycle - increasing in booms and busts, with low clearance rates at turning points.)

Interesting TEDx video on risk taking and helmets



...and a light read on safety measures that don't work (but probably make us feel better).

Thursday, June 23, 2011

Helmet laws hit the headlines - again

The public debate about mandatory helmet wearing laws in Australia has raged since Sue Abbott won an appeal to the District Court last August defending her failure to wear a helmet. Since then media coverage on the matter has been generally poor, often confusing the effectiveness of helmets in reducing head injuries following a fall, with the net social benefits of the law itself.

THE DEBATE

The debate is about mandatory helmet laws (MHL). The pro-choice side advocate repealing the law so that helmet wearing is voluntary (not compulsory non-helmet wearing as some mistakenly believe).

The argument is about whether the law itself provides net social benefits – not about whether an individual rider involved in a fall is more or less likely to injure their head by wearing a helmet. Evidence points to the fact that yes, a falling rider with a helmet will, on average, suffer less severe head injuries than a bare headed rider.

But is this a justification for a law?

Not at all. You see, wearing a helmet while walking or driving would also prevent head injuries in the case of an accident. But one side of the debate seems happy to leave these other activities alone, even though they fit logically with its argument.

HEALTH COSTS

Indeed, supporters of MHLs often cite taxpayer-funded public health care as a justification. Yet this makes no sense whatsoever, whether for the MHL debate, tobacco taxes, or any other preventative health care issue.

As I have said before 

…that increasing preventative health care, while having the benefits of a healthier and longer life, often comes at increased total lifetime health costs, rather than decreased costs as is often proposed. Remember, we all die some day, and any potential cause of death postponed will allow another to take its place, which of course has its own health costs.

Alternatively, a more healthy existence may make us more productive for longer and lead to us contributing more in taxes over our lifetime than the potential increase in health costs which were paid through the tax system for our preventative care.

Governments, and subsequently economists, worry about these things because many health care costs are borne by others through tax revenue, yet the net economic effect is anything but straightforward.


THE ARGUMENTS

The only argument remaining in favour of MHLs is that we are saving people from themselves. It is a pretty weak argument for making law in my view.

The pro-choice advocates usually cite a variety of factors to demonstrate that any benefits an individual may receive by wearing a helmet can be significantly offset by their own risk compensation, and the changes to behavior of other road users.

For example -

1. Drivers will pass helmeted cyclists closer than bare headed cyclists (with cyclists with long blonde hair getting the most room).

2. Helmets make cyclists feel safer, and they adjust by taking more risks (risk compensation)

3. Helmet laws decrease the number of cyclists on the road, making car drivers less familiar with cyclist behavior and making each remaining cyclist less safe.

4. Helmets can increase diffuse axonal injuries of the brain and neck due to their increased diameter (and the increased likelihood of impacts due to the larger volume). As Sue Abbott argued in her court case - a helmet can increase the angular acceleration which an oblique impulse imparts to the head, increasing the risk of damage to the brain, especially diffuse axonal injury

5. Helmets can be a hazard in many circumstances (with many child deaths recorded as a result of helmet wearing)

6. Any deterrent to cycling is likely to increase time spent on sedentary activities, further contributing to the obesity epidemic.

7. The law allows governments to appear to be acting in the interests of cyclist safety, while neglecting other measures to improve cyclist safety, such as bike lanes or driver education.

Added together, it is argued that mandatory helmet laws are anything but a clear winner on the social benefit measure.

MISSING THE POINT

As I suggested at the opening, most media commentary has missed the point of the debate. The pro-choice side does not argue that helmets are worthless for any individual rider. They simply claim that helmets are not as effective at reducing injuries as they are made out to be, and that there are many flow-on social effects that further reduce cyclist safety that are not considered.

Even the academics have a hard time finding strong evidence that helmet laws have reduced head injuries significantly. The Voukelatos and Rissel paper I referenced in a previous post showed evidence that the benefit of helmet laws in reducing the ratio of head to arm injuries for hospitalised cyclists was insignificant compared to other road safety improvements in the late 1980s and early 1990s. It was later retracted after criticism over data inaccuracies (corrected data in the graph below), and the critics have now published their own study using similar statistics to examine the effect of the law in NSW. They find a statistically significant impact of the law in reducing the ratio of head injuries to arm and leg injuries.

Unfortunately for them, their model also found that the helmet law led to fewer hospitalisations of pedestrians with arm injuries - a change the law cannot plausibly have caused, which suggests their results are confounded by other road safety trends.
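For readers curious about the mechanics of these studies, the before/after comparison can be sketched as an odds-ratio test on head versus arm injury counts. The counts below are hypothetical placeholders, not the published NSW data, and the normal approximation on the log odds ratio is only a rough stand-in for the models the papers actually fit.

```python
# Sketch of the before/after comparison used in these studies: compare the
# ratio of head to arm injuries among hospitalised cyclists pre- and post-law.
# The counts below are hypothetical placeholders, not the published data.
from math import log, sqrt, erf

def odds_ratio_test(head_pre, arm_pre, head_post, arm_post):
    """Odds ratio of head vs arm injuries post-law relative to pre-law,
    with an approximate two-sided p-value (z-test on the log odds ratio)."""
    or_ = (head_post / arm_post) / (head_pre / arm_pre)
    se = sqrt(1 / head_pre + 1 / arm_pre + 1 / head_post + 1 / arm_post)
    z = log(or_) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return or_, p

# Hypothetical counts: head injuries fall relative to arm injuries post-law.
or_, p = odds_ratio_test(head_pre=400, arm_pre=600, head_post=300, arm_post=650)
print(f"odds ratio {or_:.2f}, p ~ {p:.4f}")
```

An odds ratio below 1 with a small p-value is what a "statistically significant reduction" claim amounts to; the confounding problem in the text is that the same test run on pedestrians, whom the law does not touch, can also come back significant.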

For cyclists who do fall in a manner leading to significant injuries, a helmet may reduce head injuries. That it is so difficult to see the effect of helmet laws in the data suggests that any benefits of helmet wearing must be very small, even at an individual level. Unfortunately the MHL supporters usually feel that helmets prevent almost all head injuries, which is clearly not the case. They AT BEST provide a marginal improvement in head safety.



In all, I hope the readers who now stumble across ‘helmet headlines’ can bring a bit of perspective to the issue.

Tuesday, June 21, 2011

Go back to where you came from - Australia's talking

Last night SBS aired their new three part series Go Back To Where You Came From, where six Aussies take part in a 'reverse refugee' experience.

I think most viewers would agree that it was particularly interesting to watch the participants' reactions to meeting refugees and visiting detention centres.  Participants are from quite different backgrounds, and they have a variety of opinions on refugee policy.

The show apparently has Twitter abuzz, and is generating quite a deal of media commentary.  Much of the reaction focuses on the apparent ignorance of one particular participant about the real situation of refugees - especially in light of their strong opinions on the matter.

My wife suggested that there was a clear pattern in the participants' attitudes - those with broad travel experience seem to have more tolerant views.  It was telling that a couple of participants had never left Australia before the show.

What I found missing from the show, which would have been a nice complement to the emotional dimension, is reference to the actual statistics on refugees, their country of origin, the proportion coming by boat, and the changes in refugee numbers over time.

This is important because the public debate usually overlooks a couple of key points.

1. Boat people are a minority of asylum seekers and a tiny fraction of total immigration (graph below from here)

2. The number of asylum seekers arriving in Australia correlates strongly with global numbers, suggesting that it is not so much the policy of the destination country that influences the number of arrivals, but the situation in the countries of origin (see the graph below).

For more detailed analysis of the factors involved in refugee outcomes, read this detailed article. I look forward to the follow up episodes tonight and tomorrow, and recommend the program to anyone even slightly interested in the topic.

Monday, June 20, 2011

Transformation Benefits Measurement, the Political and Technical Hard Part of Mission Alignment and Enterprise Architecture

Pre-Ramble
This post will sound argumentative (with a bit of ranting--in fact, I will denote the rants in color; some will agree, some will laugh, and Management and Finance Engineering may become defensive), and it probably shows my experiences with management and finance engineering (Business Management Incorporated, which owns all businesses) in attempting benefits measurement.  However, I'm trying to point out the PC landmines (especially in the Rants) that I stepped on, so that other Systems Engineers, System Architects, and Enterprise Architects don't step on these particular landmines--there are still plenty of others, so find your own, then let me know.

A good many of the issues result from a poor understanding by economists and Finance Engineers of the underlying organizational economic model embodied in Adam Smith's work, which is the foundation of Capitalism.  The result of this poor understanding is an incomplete model, as I describe in Organizational Economics: The Formation of Wealth.

Transformation Benefits Measurement Issues
As Adam Smith discussed in Chapter 1, Book 1, of his magnum opus, commonly called The Wealth of Nations, transforming a process and inserting tools transforms productivity.  Adam Smith called the process transformation "the division of labour"--more commonly known today as the assembly line.  At the time, 1776, when all industry was cottage industry, this transformation of the Enterprise Architecture was revolutionary.  He demonstrated it using the example of straight pin production.  Further, he discussed the concept that tooling makes the process even more effective, since tools are process multipliers. In the military, tools--weapons--are "force multipliers", and for the military weapons are a major part of the process. Therefore, both transforming processes and transforming tooling should increase the productivity of an organization.  Productivity here means increasing the effectiveness of the processes of an organization in achieving its Vision or meeting the requirements of its various Missions supporting that Vision.

The current global business culture, especially finance, from Wall St. to individual CFOs and other "finance engineers", militates against reasonable benefits measurement of the transformation of processes and the insertion and maintenance of tools.  The problem is that finance engineers do not believe in either increased process effectiveness or cost avoidance (to increase the cost efficiency of a process).

Issue #1 the GFI Process
Part of the problem is the way most organizations decide on IT investments in processes and tooling.  The traditional method is the GFI (Go For It) methodology, which involves two functions: a "beauty contest" and "backroom political dickering".  That is, every function within an organization has its own pet projects to make its function better (and thereby its management's bonuses larger).  The GFI decision support process is usually served up with strong dashes of the NIH (Not Invented Here) and LSI (Last Salesman In) syndromes.

This is like every station on an assembly line dickering for funding to better perform its function.  The more PC functions would have an air-conditioned room from which to watch automated tooling perform the task, while the less PC would have their personnel chained to the workstation using hand tools--and not just any hand tools, but the ones management thought they needed, useful or not.  Contrast this with the way the Manufacturing Engineering units of most manufacturing companies work.  And please don't think I'm using hyperbole: I can cite chapter and verse where I've seen it, and in after-hours discussions cohorts from other organizations have told me the same story.

As I've discussed in A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture, the Enterprise Architect and System Architect can serve in the "Manufacturing Engineer" role for many types of investment decisions.  However, this is still culturally unpalatable in many organizations since it gives less wiggle room to finance engineers and managers.

Issue #2 Poorly Formalized Increased Process Effectiveness Measuring Procedures
One key reason (or at least rationale) why management and especially finance engineers find wiggle room is that organizations (management and finance engineering) are unable (unwilling) to fund the procedures and tooling to accurately determine pre- and post-transformation process effectiveness, because performing the procedures and maintaining the tools uses resources while providing no ROI--this quarter. [Better to use the money for Management Incentives than for measuring the decisions management makes.]

To demonstrate how poorly the finance engineering religion understands the concept of Increased Process Effectiveness, I will use the example of Cost Avoidance--which is not necessarily even Process Effectiveness, but usually Cost Efficiency.  Typically, Cost Avoidance is investing in training, process design, or tooling now to reduce the cost of operating or maintaining the processes and tooling later.

[Rant 1: a good basic academic definition and explanation cost avoidance is found at http://www.esourcingwiki.com/index.php/Cost_Reduction_and_Avoidance.  It includes this definition:
"Cost avoidance is a cost reduction that results from a spend that is lower then the spend that would have otherwise been required if the cost avoidance exercise had not been undertaken." ]
As discussed in the article just cited, in the religion of Finance Engineering cost avoidance is considered "soft" or "intangible".  The reason finance engineers cite for not believing cost avoidance numbers is that the "savings classified as avoidance (are suspect) due to a lack of historical comparison."

[Rant 2: By the same logic, the saving from avoiding a risk (an unknown) by changing the design is not valid either, because the risk never turned into an issue (a problem)--see my post The Risk Management Process.]

This is as opposed to cost reduction, where the Finance Engineer can measure the results as ROI.  That makes cost reduction efforts much more palatable to Finance Engineers, managers, and Wall St. traders.  Consequently, increased cost efficiency is much more highly valued by this group than Increased Process Effectiveness.  Yet, as discussed above, the reason for tools (and process transformations) is to Increase Process Effectiveness.  So Finance Engineering puts the "emphassus on the wrong salobul".
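The distinction can be made concrete with two one-line functions. The figures are hypothetical, chosen only to illustrate why finance treats one number as "hard" and the other as "soft".

```python
# Cost reduction vs. cost avoidance, per the definition quoted above.
# All figures are hypothetical, chosen only to illustrate the distinction.

def cost_reduction(prior_actual, new_actual):
    """Reduction: measured against a real historical spend--'hard'."""
    return prior_actual - new_actual

def cost_avoidance(projected_spend, actual_spend):
    """Avoidance: measured against a forecast of what spend *would* have
    been without the investment--which is why finance calls it 'soft'."""
    return projected_spend - actual_spend

# Last year we spent 100; after the transformation we spend 90:
print(cost_reduction(100, 90))    # 10 -- shows up directly in this year's ROI

# Maintenance was forecast to grow to 130 without the transformation;
# it actually came in at 90:
print(cost_avoidance(130, 90))    # 40 -- real, but it rests on the forecast
```

The two functions compute the same kind of difference; the only thing separating "hard" from "soft" is whether the baseline is a historical actual or a projection, which is exactly the "lack of historical comparison" objection quoted above.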

They are aided and abetted by (transactional and other non-leader) management.  As discussed recently on CNBC's Squawk Box, the reason CEOs of major corporations cite for their obscenely high salaries is that they make decisions that avoid risk.

[Rant 3: Of course this is ignoring the fact that going into and operating a business is risky, by definition; and any company that avoids risk is on the "going out of business curve".  So most executives in US Companies today are paid 7 figure salaries to put their companies on "the going out of business curve"--interesting]

However, Cost Avoidance feeds one of the two ways to grow a business.  The first is to invent a new product or innovate on an existing product (e.g., the iPad) such that the company generates new business.  The second is to Increase Process Effectiveness.

Management, especially mid- and upper-level management, does not want to acknowledge the role of process transformation, or of adding or upgrading tooling, in increasing the effectiveness of a process, procedure, method, or function.  The reason is simple: it undermines their ability to claim that their own skill at managing their assets (read: employees) "earned" them a bonus or promotion.  Consequently, Enterprise and System Architects are left forever attempting to "prove their worth" without the metrics that would irrefutably prove the point.

These are the key cultural issues (problems) in selling real Enterprise Architecture and System Architecture.  And frankly, the only organizations that will accept this type of cultural change are entrepreneurial ones, and large organizations in panic or desperation.  These are the only ones willing to change their culture.

Benefits Measurement within the OODA Loop
Being an Enterprise and Organizational Process Architect, as well as a Systems Engineer and System Architect, I know well that measuring the benefits of a transformation (i.e., cost avoidance) is technically difficult at best, and especially so if the only metrics "management" considers are financial.

Measuring Increased Process Effectiveness
In an internal paper I wrote in 2008, Measuring the Process Effectiveness of Deliverables of a Program [Rant 4: ignored with dignity by at least two organizations when I proposed R&D to create a benefits measurement procedure], I cited a paper--John Ward, Peter Murray and Elizabeth Daniel, Benefits Management Best Practice Guidelines (2004, Document Number: ISRC-BM-200401, Information Systems Research Centre, Cranfield School of Management)--that posits four types of metric that can be used to measure benefits (a very good paper, by the way).
  1. Financial--Obviously
  2. Quantifiable--Metrics that organization is currently using to measure its process(es) performance and dependability that will predictably change with the development or transformation; the metrics will demonstrate the benefits (or lack thereof).  This type of metric will provide hard, but not financial, evidence that the transformation has benefits.  Typically, the organization knows both the minimum and maximum for the metric (e.g., 0% to 100%).
  3. Measurable--Metrics that the organization is not currently using to measure its performance, but that should measurably demonstrate the benefits of the development or transformation.  Typically, these metrics have a minimum, like 0, but no obvious maximum.  For example, I'm currently tracking the number of pages accessed per day.  I know that if no one reads a page the metric will be zero; however, I have no idea of the potential readership for any one post, because most of the ideas presented here are concepts that will be of utility in the future. [Rant 5: I had one VP--at an organization that claimed to be an advanced technology integrator--who, while letting me know he was going to lay me off, said that "he was beginning to understand what I had been talking about two years before".  That's from a VP of an organization claiming to be advanced in its thinking about technology integration--huh....]  Still, from the data I have a good idea of the readership of each post, what the readership is interested in, and what falls flat on its face.  Measurable metrics will show or demonstrate the benefits, but cannot be used to forecast those benefits.  Another example is a RAD process I created in 2000.  This process was the first RAD process that I know of that the SEI considered Conformant; that is, found in conformance by an SEI Auditor.  At the time, I had no way to measure its success except by project adoption rate (0 being no projects using it).  By 2004, within the organization I worked for--which ran several hundred small, medium, and large efforts per year--over half of the efforts were using the process.
I wanted to move from measurable to quantifiable metrics--defects per rollout, customer satisfaction, additional customer funding, effort spent per requirement (use case), and so on--but management considered collecting, analyzing, and storing this data an expense, not an investment, and since the organization was only CMMI Level 3 and not Level 4, this proved infeasible.   [Rant 6: It seems to me that weather forecasters and Wall St. market analysts are the only ones who can be paid to use measurable metrics to forecast, whether they are right, wrong, or indifferent--and the Wall St. analysts are paid a great deal even when they are wrong.]
  4. Observable--Observable is the least quantitative, which is to say the most qualitative, of the metric types.  These are metrics with no definite minimum or maximum.  Instead, they are metrics that the participants agree on ahead of time--requirements? (see my post Types of Requirements).  These metrics are really little more than any positive change that occurs after the transformation; at worst they are anecdotal evidence.  Unfortunately, because Financial Engineers and Managers (for the reasons discussed above) are not willing to invest in procedures and tooling for the better metrics above unless forced into it by customers (e.g., a requirement for CMMI Level 5), Enterprise Architects, System Architects, and Systems Engineers must rely on anecdotal evidence, the weakest kind, to validate the benefits of a transformation.
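As a sketch of how the four types might be organized in practice, here is a minimal classification of some example benefit metrics. The class, field names, and example metrics are my own illustration, not anything taken from the Ward, Murray and Daniel paper.

```python
# A minimal sketch of the four metric types applied to a transformation.
# The class and example metrics are illustrative inventions, not from the
# Ward, Murray and Daniel paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BenefitMetric:
    name: str
    metric_type: str                   # financial | quantifiable | measurable | observable
    baseline: Optional[float] = None   # pre-transformation value, if known
    current: Optional[float] = None    # post-transformation value, if known

    def evidence(self) -> str:
        """What kind of evidence this metric type can supply for a benefit."""
        return {
            "financial": "ROI",
            "quantifiable": "hard, non-financial",
            "measurable": "demonstrates benefit, cannot forecast it",
            "observable": "anecdotal",
        }[self.metric_type]

metrics = [
    BenefitMetric("maintenance spend", "financial", baseline=130.0, current=90.0),
    BenefitMetric("defects per rollout", "quantifiable", baseline=12.0, current=7.0),
    BenefitMetric("pages accessed per day", "measurable", current=340.0),
    BenefitMetric("team morale", "observable"),
]

for m in metrics:
    print(f"{m.name}: {m.evidence()}")
```

Listing a transformation's metrics this way makes explicit, before the effort starts, how much of the benefits case will rest on the weaker evidence types.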
Metric Context Dimensions
Having metrics to measure the benefits is good if, and only if, the metrics are in context.  In my internal paper, Measuring the Process Effectiveness of Deliverables of a Program, cited above, I identified four contextual dimensions, and since then I have discovered a fifth.  I give two here, to illustrate what I mean.

In several previous posts I've used the IDEF0 pattern as a model of the organization (see Figure 1 in A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture in particular).  One context for the metrics is whether the particular metric is measuring improvement in the process, the mechanisms (tooling), or the control functions; a transformation may affect all three.  If it affects two of the pattern's abstract components, or all three, the transformation may increase or decrease the benefit of each.  The Enterprise Architect must then determine the "net benefit."

The key to this "net benefit" is to determine how well the metric(s) of each component measures the organization's movement or change in velocity of movement toward achieving its Vision and/or Mission.  This is a second context.  As I said, there are at least three more.
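A minimal sketch of that "net benefit" calculation: weight each IDEF0 component's metric change by how strongly it bears on the Vision or Mission, and sum. The weights and deltas below are hypothetical, chosen only to show the mechanics.

```python
# Sketch of the "net benefit" idea: a transformation may move the process,
# mechanisms (tooling), and controls metrics in different directions; weight
# each by its bearing on the Mission and sum. Weights and deltas are
# hypothetical placeholders.

def net_benefit(component_deltas, mission_weights):
    """Weighted sum of per-component metric changes (IDEF0 components)."""
    return sum(mission_weights[c] * d for c, d in component_deltas.items())

deltas = {"process": +0.15, "mechanisms": +0.30, "controls": -0.05}
weights = {"process": 0.5, "mechanisms": 0.3, "controls": 0.2}
print(f"net benefit: {net_benefit(deltas, weights):+.3f}")
# net benefit: +0.155
```

The hard part, of course, is not this sum but justifying the weights: they encode the second context above, how each component's movement advances the Vision or Mission.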

Measuring Increased Cost Efficiency
While measuring the Benefits that accrue from a transformation is difficult (just plain hard), measuring the increased cost efficiency is relatively simple and easy, because it is based on cost reduction, not cost avoidance.  The operative word is "relatively", since management and others will claim that their skill and knowledge reduced the cost, not the effort of the transformation team or the Enterprise Architecture team that analyzed, discovered, and recommended the transformation.  [Rant 7: More times than I can count, I have had and seen efforts where management did everything possible to kill off a transformation effort, then, when it was obvious to all that the effort was producing results, "piled on" to garner as much credit for the effort as possible.  One very minor example from my experience: in 2000, my boss at the time told me that I should not be "wasting so much time on creating a CMMI Level 3 RAD process, but instead should be doing real work."  I call this behavior the "Al Gore" or "Project Credit Piling On" Syndrome (in his election bid Al Gore attempted to take credit for the Internet; having participated in its development for years prior, I and all of my cohorts resented the attempt).  Sir Arthur Clarke captured this syndrome in his Law of Revolutionary Development.
"Every revolutionary idea evokes three stages of reaction. They can be summed up as:
–It is Impossible—don’t Waste My Time!
–It is Possible, but not Worth Doing!
–I Said it was a Good Idea All Along!"]

Consequently, "proving" that the engineering and implementation of the transformation actually reduced the cost, and not the "manager's superior management abilities", is difficult at best--if it weren't the manager's ability, then why pay him or her the "management bonus"? [Rant 8: which is where the Management Protective Association kicks in to protect its own].

The Benefits Measurement Process
The two hardest activities of the Mission Alignment and Mission Implementation Processes are Observe and Orient, as defined within the OODA Loop (see A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture for the definitions of these tasks or functions of the OODA Loop).  To really observe the results and effects of a process transformation requires an organizational process as described, in part, by the CMMI Level 4 Key Practices or some of the requirements of the ISO 9001 standards.

As usual, I will submit to the reader that the keys (culturally and in a business sense) to getting the organization to measure the success (benefits) of its investment decisions and its policy and management decisions are twofold.  The first high-level activity is a quick (and therefore necessarily incomplete) inventory of its Mission(s), Strategies, Processes, and tooling assets.  As I describe in Initially implementing an Asset and Enterprise Architecture Process and an AEAR, this might consist of documenting and inserting the data of the final configuration of each new transformation effort, as it is rolled out, into an AEAR during an initial 3 month period, and additionally inserting current Policies and Standards (with their associated Business Rules) into the AEAR.  Second, analyze the requirements of each effort (the metrics associated with the requirements, really) to determine the effort's success metrics.  Using the Benefits Context Matrix, determine where these metrics are incomplete (in some cases), over-defined (in others), obtuse and opaque, or conflicting among themselves.  The Enterprise Architect would present the results of these analyses to management, together with recommendations for better metrics and more Process Effective transformation efforts (projects and programs).
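A toy sketch of that metric review step might look like the following. The categories and sample efforts are my own invention for illustration, not a definition of the Benefits Context Matrix; a real review would test each metric against all of the contextual dimensions.

```python
# Hypothetical sketch: flag each effort's success metrics as incomplete,
# over-defined, or acceptable. Sample data is invented for illustration.

def review_metrics(efforts):
    """Map each effort name to a one-line finding about its success metrics."""
    findings = {}
    for effort, metrics in efforts.items():
        if not metrics:
            findings[effort] = "incomplete: no success metric defined"
        elif len(metrics) != len(set(metrics)):
            findings[effort] = "over-defined: duplicate metrics"
        else:
            findings[effort] = "ok"
    return findings

efforts = {
    "CRM migration": [],                      # no metric at all
    "Portal rollout": ["uptime", "uptime"],   # same metric counted twice
    "MQ hub": ["connectors reused"],
}
print(review_metrics(efforts))
```

Even this crude pass surfaces the two commonest findings: efforts with no success metric at all, and efforts whose metrics overlap or conflict.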

The second high-level activity is to implement procedures and tooling to more effectively and efficiently observe and orient the benefits through the metrics (as well as the rest of the Mission Alignment/Mission Implementation Cycles).  Both of these activities should have demonstrable results (an Initial Operating Capability, IOC) by the end of the first 3 month Mission Alignment cycle.  The IOC need not be much, but it must be implemented, not some notional or conceptual design.  This forces the organization to invest resources in measuring benefits and, perhaps, in determining in which component the benefits exist: control, process, or mechanisms.

Initially, expect the results from the Benefits Metrics to be lousy for at least three reasons.  First, the AEAR is skeletal at best.  Second, the organization and all the participants, including the Enterprise Architect, have a learning curve with respect to the process.  Third, the initial set of benefits metrics will not really measure the benefits, or at least will not measure them effectively.

For example, I have been told, and believe to be true, that several years ago the management of a Fortune 500 company chose IBM's MQSeries as middleware to interlink many of the "standalone" systems in its fragmented architecture.  This was a good to excellent decision in the age before SOA, since the average maintenance cost for a business critical custom link was about $100 per link per month and the company had several hundred business critical links.  The IBM solution standardized the procedure for inter-linkage in a central communications hub using an IBM standard protocol.  Using the MQSeries communications solution required standardized messaging connectors.  Each new installation of a connector was a cost to the organization.  But, since connectors could be reused, IBM could rightly claim that the Total Cost of Ownership (TCO) for the inter-linkage would be significantly reduced.

However, since the "benefit" of migrating to the IBM solution was "Cost Reduction", not Increased Process Effectiveness [RANT 9: Cost Avoidance in Finance Engineering parlance], Management and Finance Engineering (yes, both had to agree) directed that the company would migrate its systems.  That was good, until they identified the "Benefit Metric" on which the management would get their bonuses.  That benefit metric was "the number of new connectors installed".  While it sounds reasonable, the result was that hundreds of new connectors were installed but few were reused, because management was not rewarded for reuse, just for new connectors.  Finance Engineering took a look at the IBM invoice and had apoplexy!  It cost more in a situation where they had a guarantee from the supplier that it would cost less [RANT 10: And an IBM guarantee reduced risk to zero].  The result was that the benefit (increased cost efficiency) metric was changed to "the number of interfaces reusing existing connectors, or where not possible, new connectors".  Since clear identification and delineation of metrics is difficult even for Increased Cost Efficiency (Cost Reduction), it will be more so for Increased Process Effectiveness (Cost Avoidance).
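The difference between the two bonus metrics can be sketched in code. This is a hypothetical illustration with made-up interface data, not the company's actual accounting; it only shows why the first metric rewards avoidable new installs while the corrected one rewards reuse.

```python
# Illustrative sketch (data is hypothetical): the original metric counts every
# new connector install, including avoidable ones; the corrected metric counts
# interfaces reusing existing connectors, or new connectors only when unavoidable.

def metric_new_connectors(interfaces):
    """Original bonus metric: number of new connectors installed."""
    return sum(1 for i in interfaces if i["connector"] == "new")

def metric_reuse_first(interfaces):
    """Corrected metric: reused connectors, plus new ones only where unavoidable."""
    return sum(1 for i in interfaces if i["connector"] == "reused" or i["unavoidable"])

interfaces = [
    {"connector": "new", "unavoidable": True},     # no existing connector fits
    {"connector": "new", "unavoidable": False},    # a reusable connector existed
    {"connector": "reused", "unavoidable": False},
    {"connector": "reused", "unavoidable": False},
]
print(metric_new_connectors(interfaces))  # rewards both new installs, avoidable or not
print(metric_reuse_first(interfaces))     # rewards reuse and only the unavoidable install
```

The avoidable new install scores under the first metric but not the second, which is exactly the incentive gap that produced the apoplexy-inducing invoice.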
Having effectively rained on everyone's parade, I still maintain that with the support of the organization's leadership, the Enterprise Architect can create a Transformation Benefits Measurement procedure with good benefit (Increased Process Effectiveness) metrics in 3 to 4 cycles of the Mission Alignment Process.  And customers requiring their suppliers to follow CMMI Level 5 Key Practices, SOA as an architectural pattern or functional design, together with Business Process Modeling and Business Activity Monitoring and Management (BAMM) tooling, will all help drive the effort.

For example, BAMM used in conjunction with SOA-based Services will enable the Enterprise Architect to determine such prosaic metrics as Process Throughput (in addition to determining bottlenecks) before and after a transformation. [RANT 11: Management and Finance Engineering are nearly psychologically incapable of allowing a team to measure a Process, System, or Service after it's been put into production, let alone measuring these before the transformation.  This is the reason I recommend that Enterprise Architecture processes, like Mission Alignment, be short cycles instead of straight-through, one-off processes like the waterfall process--each cycle allows the Enterprise Architect to measure the results and correct defects in the transformation and in the metrics.  It's also the reason I recommend that the Enterprise Architect be on the CEO's staff, rather than a hired consulting firm.] Other BAMM-derived metrics might be the cost and time used per unit produced across the process, the increase in quality (decreased defects), up-time of functions of the process, customer satisfaction, employee satisfaction (employee morale increases with successful processes), and so on.  These all help the Enterprise Architect Observe and Orient the changes in the process due to the transformation, as part of the OODA Loop-based Mission Alignment/Mission Implementation process.
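A throughput metric of the kind BAMM tooling would report can be sketched from completion timestamps. The event data below is invented for illustration; in practice the BAM tool would supply it from the instrumented Services.

```python
# Sketch of a prosaic BAMM-style metric: process throughput (completions per hour)
# computed from event timestamps before and after a transformation.
# The timestamps are invented for illustration.
from datetime import datetime

def throughput_per_hour(completion_times):
    """Completions per hour over the observed window."""
    span_hours = (max(completion_times) - min(completion_times)).total_seconds() / 3600
    return len(completion_times) / span_hours if span_hours else float("inf")

before = [datetime(2011, 6, 1, h) for h in range(9, 17)]        # 8 completions over 7 hours
after = [datetime(2011, 7, 1, 9, m) for m in range(0, 60, 5)]   # 12 completions over 55 minutes
print(round(throughput_per_hour(before), 2))
print(round(throughput_per_hour(after), 2))
```

Measured both before and after the transformation (and on each short cycle thereafter), even a metric this simple lets the Enterprise Architect Observe and Orient on the real effect rather than anecdote.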

Sunday, June 19, 2011

Concern over the AUD

The Aussie dollar has maintained its above-parity level against the USD for some time now.  Given Australia's reliance on imported consumer goods, this has raised many concerns.

Now it appears that the actions of foreign central banks might be another factor to consider. The Russian central bank recently bought $4.7 billion of AUD.  It seems that our relatively good economic performance is attracting a fair bit of attention.

There are suggestions that the RBA might want to intervene to take the pressure off the AUD, and that its current dealings reflect concern over the high value of the dollar.  Maybe this is partly a factor in their fx dealings, but it doesn't appear to be (see chart below for RBA fx dealings).


The question comes down to this: do we choose to buffer our domestic firms from foreign economic upheaval, or are we ‘all in’ in this global economy game?

I think a reasonable middle ground is to support a mining resource tax and a sovereign wealth fund to balance our economy.  I recently suggested that Quarry Australia was not a desirable place to be, and some type of counterbalancing policy would be inherently stabilising.

Whether this would have much of an impact on the AUD I don’t know. But another part of me thinks – stuff it. Go all in. AUD to $1.25+. The non-mining sectors of the economy will get the shake-up they really need to improve efficiency and competitiveness in the long term.  There, you see, I was thinking long term.  Maybe short-term pain would give productivity the jolt it so desperately needs.

But of course, there are real people's lives to consider as well, and shake-ups like that are painful. 

The other argument to consider is that Australian businesses benefit when foreign markets are booming, so why not let them face the risks when foreign markets are failing?

Given the high probability that any government intervention will be an ineffective mess anyway, maybe we should just let the market function for a while.

There are plenty of questions, but unfortunately no easy answers.   Expect much more debate on this situation in the coming months.

Wednesday, June 15, 2011

Real estate commission madness

The Queensland government is set to remove the maximum commission that residential real estate agents can charge from the Property Agents and Motor Dealers Regulation 2001. Currently the regulation prescribes in Schedule 1A that

The maximum commission payable on the purchase or sale of residential property is—
(a) if the purchase or sale price is not more than $18,000—5% of the price; or
(b) if the purchase or sale price is more than $18,000—
(i) $900; and
(ii) 2.5% of the balance of the purchase or sale price.


The common practice since the introduction of the regulation has been for all agents to charge this maximum.
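The schedule works out to a simple piecewise formula. Here is a sketch (my own illustration; it reads clause (ii) as 2.5% of the balance of the price above $18,000, which is consistent with the $900 figure being exactly 5% of $18,000):

```python
# Sketch of the Schedule 1A maximum commission: 5% of the first $18,000,
# plus 2.5% of the balance above that. Prices are in dollars.

def max_commission(price):
    if price <= 18_000:
        return 0.05 * price
    return 900 + 0.025 * (price - 18_000)

# Because the $900 component is fixed, the effective rate approaches 2.5%
# as prices rise, so doubling the sale price roughly doubles the commission:
print(max_commission(200_000))   # about $5,450
print(max_commission(400_000))   # about $10,450 on a price twice as high
```

That last pair of numbers is why rising home prices matter so much here: the cap stays at the same percentage while the dollars per sale climb with the market.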

The Deputy Premier Paul Lucas is spinning that deregulation will somehow benefit home sellers. Yeah. Right.

Admittedly, NSW, Vic and the ACT don’t have regulated commissions for real estate agents, and the common practice in these states is to charge 2.5% for homes in urban areas, and between 2.5% and 4% for homes in outer and remote areas. It appears from this comparison that Queensland’s regulation mostly benefits those in outer, rural and remote areas - those with the lowest value homes.

The question you must ask is whether the regulation is disrupting functioning markets such that there is an efficiency gain from removing it. I have yet to hear of a real estate agent refusing a listing because the commission is too low. There are always other agents willing to try their luck.

In fact, recent competition suggests that more agents are negotiating below this maximum. I wrote earlier how lower commissions could be a massive competitive advantage for new agencies.  

Indeed, when the regulation came in, home prices across Queensland were less than half of their current levels. So for every sale, an agent now makes roughly double from charging the same commission rate. If this regulation was a problem we should have felt it years ago, not now.

Poor REIQ Chairman Pamela Bennet reckons that “the regulation of commission rates for residential property transactions has not kept pace with the changing market resulting in consumers not receiving the benefits originally intended”. What now?

For the maximum to have kept pace with the changing market, it should have been reduced over time so that the inflation-adjusted average commission was constant – theoretically providing the same benefits as originally intended.
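To see what "kept pace" would mean in numbers, here is a sketch (my own, with illustrative prices) of the marginal rate that would hold the dollar commission on a typical sale constant as typical prices rise:

```python
# Sketch: what marginal rate above the $18,000 threshold keeps the dollar
# commission on a typical sale constant as typical prices rise?
# The prices are illustrative; the $900/2.5% structure follows Schedule 1A.

def equivalent_rate(old_price, new_price, flat=900, rate=0.025, threshold=18_000):
    """Rate above the threshold that yields the old commission on the new price."""
    old_commission = flat + rate * (old_price - threshold)
    return (old_commission - flat) / (new_price - threshold)

# If typical prices doubled from $200k to $400k, holding the commission at its
# old dollar level would require roughly half the old 2.5% marginal rate:
print(round(equivalent_rate(200_000, 400_000), 4))  # a little under 1.2%
```

In other words, a cap that truly "kept pace" would have drifted down toward 1.2% as prices doubled, rather than sitting at 2.5% while the dollars per sale doubled.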

The only logic I can see is that the government thinks this change will stimulate sales and hand them back some stamp duty revenue. Sorry. That's not going to happen.