The one frustration that started me blogging more than three years ago was the confusing pricing practices of phone and internet service providers. It was quite obvious to me that their 'plans' were meant to be confusing, to ensure the consumer could not easily identify the cheapest provider. Today, the Australian Communications and Media Authority (ACMA) has released a report that recommends improving price information for telecommunications contracts to avoid a 'confusopoly' (here). Amongst other things -
The authority also wants to prohibit what it says are misleading advertising practices, such as the use of the term "cap" on mobile and broadband plans.
"It's not a cap, it's not a maximum, it's a minimum," Mr Chapman said
"We want to prohibit that unless its a genuine hard cap, so that if you exceed your limit the service ends or you get the opportunity to upgrade." (here)
Most recently I have been comparing mobile phone plans. Some of the cheap plans don't allow you to call 13, 1300 and 1800 numbers under the cap, and they all have different call rates, flag fall and penalties for exceeding cap limits. To actually compare providers you need to know your calling needs in advance and have the mathematical skills to run this call profile, and other scenarios, through a model of each available phone plan. Insanity.
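To make the point concrete, here is a minimal sketch (in Python, with entirely made-up plan parameters purely for illustration) of the kind of model a consumer would have to build just to compare two 'capped' plans against a guessed call profile:

```python
# Minimal sketch of a phone-plan comparison model. All plan parameters are hypothetical.

def monthly_cost(plan, calls):
    """Cost of one month's calls under a 'capped' plan."""
    capped_usage = 0.0   # notional value of calls that count towards the cap
    outside_cap = 0.0    # calls billed outside the cap (e.g. 13/1300/1800 numbers)
    for minutes, special_number in calls:
        cost = plan["flagfall"] + minutes * plan["rate_per_min"]
        if special_number and not plan["includes_13_1800"]:
            outside_cap += cost
        else:
            capped_usage += cost
    over_cap = max(capped_usage - plan["included_value"], 0.0)
    return plan["monthly_fee"] + outside_cap + over_cap * plan["excess_multiplier"]

plan_a = {"monthly_fee": 29, "included_value": 150, "flagfall": 0.35,
          "rate_per_min": 0.90, "includes_13_1800": False, "excess_multiplier": 1.0}
plan_b = {"monthly_fee": 39, "included_value": 250, "flagfall": 0.40,
          "rate_per_min": 0.99, "includes_13_1800": True, "excess_multiplier": 1.5}

# A guessed monthly call profile: (minutes, is it a 13/1300/1800 call?)
profile = [(5, False)] * 40 + [(3, True)] * 10
for name, plan in [("Plan A", plan_a), ("Plan B", plan_b)]:
    print(name, round(monthly_cost(plan, profile), 2))
```

Change the guessed call profile even slightly and the ranking of the plans can flip, which is exactly the problem.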
As I previously wrote -
By consciously manipulating these two criteria of a free market [low barriers to entry and perfect, or at least good, information], all firms in the market are able to avoid a state of true competition that would produce the most efficient allocation of services, and are able to artificially inflate the value of the commodity, hence producing more profit for each firm in the market.
This is not meant to sound like a conspiracy, because indeed each firm does not need to meet in back rooms with the other firms in the market and all agree to limit customer information and the comparability of their products. They each simply need to aspire to the great marketing ideal of product differentiation, a concept that is fundamentally designed to artificially eliminate direct competition by removing direct comparability.
...
The power of product differentiation, through its ability to remove comparability and create an information gap to distort what could be a perfectly competitive market, can be demonstrated by the case of the term life insurance market in the US in the late 1990s. There was a mysterious and dramatic drop in prices across all firms that did not correlate with prices in other forms of insurance, which were themselves steadily rising.
According to economist Steven D. Levitt, this can be attributed to the realisation of a perfect market through the power of the internet. Although term life insurance policies had been quite homogeneous before this period of time, the process of shopping around for the cheapest price had been convoluted and time consuming, whereas websites such as Quotesmith.com suddenly made the process almost instantaneous.
In just a few years, the value of the term life insurance market in the US had dropped by US$1 billion because of the new-found ease of comparability. What insurance firm would want this to happen? Even if you were a small player in the market, with say a 1% market share, your turnover had just dropped by $10 million. It is perhaps one of the great recent examples of the power of perfect competition in allocating resources efficiently, yet possibly one of the greatest blunders by the insurance industry.
I believe that the power of private enterprise is its innovative response to the financial risks it incurs, but with very simple regulation the innovative confusopoly, which comes at a cost to consumers, can easily be avoided. Indeed, most of the pushback against the telco confusopoly is from webpages which keep up-to-date tabs on plans from each service provider and enable you to take some rough guesses about future use and compare the cost effectiveness of each offering (e.g. here).
Tuesday, May 31, 2011
GDP down 1.2% for the March qtr
In the next 24 hours there will be a frenzy of economic commentary about the national accounts data and the importance of the last quarter's figure for the RBA board meeting next Tuesday. My money is on no move by the RBA and more poor economic data this year.
For interest, below are GDP per capita and real net national disposable income per capita over the past decade. Notice that the last few years (since the end of 2007) have been very flat for GDP per capita, and volatile but not really moving for net national disposable income per capita.
Monday, May 30, 2011
Brisbane and Perth housing slide continues
RP Data-Rismark released their dwelling price data for April today (here). Brisbane and Perth are leading the price slide with Sydney and Canberra showing small gains. This follows a stream of poor economic data recently.
In March 2010 I suggested that the next interest rate move by the RBA would be down. I was wrong. They increased another 75 basis points in total in their April, May and November decisions.
My reason for suggesting they would move down was that the economy was much weaker than they anticipated, and the outlook far less buoyant. Given this recent data one must think that their optimism is slowly fading.
2011 will be a very interesting year indeed.
Sunday, May 29, 2011
Learning to judge risk
No, this is not a post on financial risk. It is about child development and learning to judge risks yourself (a hot topic in my household).
As an economist parent, I find this article, and the comments that follow, very interesting. It begins...
Play equipment designed by "safety nazis" doesn't allow children to learn from risk-taking, an expert has warned.
More kids aged two to seven were getting injured in playgrounds because they didn't know how to take calculated risks.
While it may seem obvious, learning to take risks involves... taking risks! There is an old saying that epitomises this attitude – if you want to learn to swim, jump in the water.
But it seems that Councils are not going to replace their plastic low-velocity slippery slides and bouncy foam ground covers with splintered old wooden climbing frames in a hurry. The experts still haven’t grasped the implications of their research. They conclude with the following advice.
To improve playgrounds, Ms Walsh suggested longer and bigger slides built into embankments to eliminate falls.
Also, smooth boulders for balancing, shallow ponds for exploring and plenty of vegetation to provide nooks and crannies for children to crawl around.
But if children learn from risk taking, shouldn’t they build high fast slides, with no ground protection and sharp jagged boulders for balancing and deep ponds for exploring?
In any case, the finger pointing at engineers and playground designers was thoroughly dismissed in the comments, with molly-coddling litigious parents copping a bit of heat.
Getting my head examined - a Chris Joye rebuttal
Don't get me wrong. I agree with Chris Joye on some things - lowering the inflation target (perhaps not right at the moment), pushing for a more streamlined NBN, and supporting Malcolm Turnbull's political ambitions.
But when it comes to the housing market, the guy with all the numbers is happy to overlook the strikingly obvious and adores a verbal stoush with his foes - the group he calls 'housing nutters'. In fact he just recently recommended the following:
At the same time, anyone who claims that a 1% year-on-year retracement in dwelling values is a major asset-class event (cf. the share market frequently falling more than 5% on a given day) needs their head examined, with the greatest of respect. And I sincerely meant that latter caveat: you genuinely should seek medical advice if you are convinced that house prices are plummeting.
Let's take his advice and examine what is in my head (noting that I don't believe a 1% year-on-year fall in national home prices in isolation is a concern).
Joye often likes to draw attention to the low volatility of the housing market compared to the share market (eg here and here). But he neglects a few important differences.
1. The housing market has, at best, monthly data only. Moreover, each month's price data point is essentially an average. The share market would be far less volatile if you measured it that way and averaged away each month's price extremes. Not that volatility represents risk in any case.
2. The share market is an equity market. If you want to compare like with like you need to compare the change in home equity to the change in share prices. If there is $1.7 trillion in housing debt outstanding against $3.5 trillion worth of housing, you can roughly double any housing price change to calculate the change in equity of homeowners on average (see the worked example after this list). Of course prices are set at the margins, so perhaps for the price-setting buyers and sellers the leverage, and the importance of small price movements, is even greater.
3. The negatively geared investor sets the market price (apart from the recent burst of FHBs). This means they are losing money every year. Any small decline in value decreases their equity substantially in addition to losses already incurred.
4. The marginal homebuyer is heavily leveraged - 80% plus. This is not the case in the share market. Remember, leverage works to improve both gains and losses.
5. The wealth effect is much stronger in the housing market than other markets, particularly due to leveraging and the sheer size of the asset compared to household incomes.
6. Cost of home ownership is much greater than simply the interest cost. For most homes around 25% of the gross rent is spent on annual costs (refer to 3.)
7. Housing market crashes, while they feel almost spontaneous, actually take some years to eventuate.
Irish housing - 5 years to fall 28%, or 0.41%/month
US housing - 6 years to fall 31%, or 0.38%/month
UK - 2 years to fall 21%, or 0.8%/month (and still 18% down from their peak 4 years later)
8. Lastly, I feel sorry for anyone who shared Joye's property optimism and bought into the Brisbane or Perth property markets in the past two years. While the share market hasn't been crash hot, 6% returns on cash has been pretty flash.
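To put a rough number on point 2 above (using only the round figures quoted in that point, not precise data), leverage roughly doubles the headline price move in equity terms:

```python
# Back-of-envelope illustration of point 2: leverage amplifies price moves in equity terms.
housing_value = 3.5e12   # approximate value of the housing stock ($), as quoted above
housing_debt = 1.7e12    # approximate housing debt outstanding ($), as quoted above
equity = housing_value - housing_debt          # ~$1.8 trillion of homeowner equity

price_fall = 0.01                              # a 1% fall in dwelling values
equity_fall = price_fall * housing_value / equity
print(f"A {price_fall:.0%} price fall is roughly a {equity_fall:.1%} fall in aggregate home equity")
# ...and the marginal buyer geared at 80%+ (point 4) wears a far bigger equity hit again.
```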
Anyway, if these notes are a sign of a mad man, well so be it ;-)
Saturday, May 28, 2011
Why Separate Design Constraints from the Customer's System Requirements?
Background
Traditionally, Requirements Identification, Analysis, and Management was based on "Shall" statements, that is, "the product (service) shall...". Fundamentally, the "Shall Statements" were contractual obligations to be met by the contractor in creating a new product. Systems Engineers and Requirements Analysts would attempt to identify both a description of each requirement and a metric for it (i.e., a way to know when the requirement was met), starting the Requirements Identification and Management effort by "shredding" the contract to find the requirements within the "shall" statements.
Once the contractual "requirements" were identified, the Systems Engineer/System Architect would decompose those requirements, partially in an effort to:
1) identify any requirements gaps,
2) ensure that all requirements were single requirements and not a group of requirements, and
3) identify evaluation procedures to ensure that the requirements were met.
This last reason was a requirement of any Systems Engineering process because most of the contractual requirements had no method for identifying when they were met; they were statements of the capabilities the customer required. For example, within Information Technology (IT) "the system shall be user friendly" is typical. However, what the customer meant by "user friendly" was left to the imagination of the software developer. This particular requirement has led to many "programmer friendly" systems being initially rolled out, a great many cost overruns to "fix the defects" (defects in the requirements), and a great deal of customer dissatisfaction.
With respect to the first two reasons for decomposition, I have seen both the contracts and the requirements stripped from the contracts of several historic aerospace programs, and how poor the resulting requirements were. The consequence was that Systems Engineers, from the 1960s to the 1990s, spent a great deal of effort and time decomposing the requirements in an attempt to identify the real requirements (those that the customer wants, is willing to pay for, and that have a definable and quantifiable measurement of when they are met).
Ralph Young, in his book Effective Requirements Practices from 2001, discussed studies that found that less than 50% of the real requirements for a system were in the contractual agreement. Further, significant numbers of the known requirements were either conflicting or simply wrong. Since this has been the case for most efforts most of the time, you can understand why requirements decomposition (definition) became such an important function of the Systems Engineer. The decomposition process included State Diagrams, Functional Flow Diagrams and many other "requirements analysis" techniques that enabled the Systems Engineer (Requirements Analyst) to tease out as many "missing" requirements as possible.
____________
The Program Management Issue with Requirements (A Sidebar)
Once completed (though many times before completion) the Program Managers and Finance Engineers put a project plan together based on the Heroic Assumption that all of the customer's requirements are known. That is the foundational assumption of the Waterfall process for System Realization; and is patently false. Consequently, project after program after effort fails to meet the programmatic requirements unless the Program Manager simply declares victory and leaves the mess for the operational Systems Engineers and Designers/Developers to clean up. Generally, they do this through abbreviated validation or no validation at all.
The reason they make the assumption that "all the requirements are known upfront" is that it is the only way they can see to create a project schedule, project plan, and earned value metrics to control the effort. Therefore, they religiously believe that their assumption is true, when time after time it's proven false. Einstein's definition of insanity is "performing the same test over and over and expecting different results".
Making the assumption that "not all of the requirements are known upfront" makes life much more difficult for the Program Manager, unless the process and functions are designed around that assumption. While this has been done (I, for one, have done it), it is heretical to the Program Management catholic doctrine. Until the Program Management and Financial Engineering "disciplines" learn to put more emphasis on the Customer's System or Product requirements and less on cost and schedule, and understand that the known requirements are merely an incomplete and partially incorrect set of requirements, development and transformation programs will always miss their cost and schedule marks, in addition to having very dissatisfied customers.
(End of Sidebar)
From the mid-1960s to ~2000, I dealt with the issue of requirements identification, trying most of the methods described in Ralph Young's book cited above. The net result was that all requirements identification methods failed for one of four reasons.
- The Systems Engineer didn't describe the requirements using the customer's language and ideas, and didn't rephrase or translate them into the language of the developers. This leads to many defects, finger-pointing exercises, and customer dissatisfaction.
- The second is described in the sidebar. That is, the waterfall process used in most development and transformation efforts was founded on an assumption that is just plain wrong.
- For the types of efforts I was working on, IT transformation projects, the customer's system requirements were only partially based on the process(es) that they were supposed to support. Most of the time they were short-sightedly based on the function that the particular customer was performing. Consequently, "optimizing the functions, sub-optimized the process".
- All customer system requirements were treated the same. I found this to be incorrect, and it led to many defects.
Types of Requirements: Capabilities, Functional Requirements, and Design Constraints
When I started analyzing the requirements in various requirements documents and systems in the early 1990s, I found that there were two classes of requirements, and then requirements that weren't requirements at all, just wishful thinking. The two classes are those that the system "must perform" (or process functions it must enable and support) and those the system "must meet". An example of a must perform requirement is "The system will show project status using a standard form". The standard form would be considered a must meet requirement, because a form is nothing that a system "must perform", but it is a requirement that the system "must meet". This requirement constrains the design: the designer/developer can't create a new form; he or she must create a system that produces the standard form. Therefore, I define it as a Design Constraint, since it constrains the design. Likewise, a requirement that the system "must perform" shows action. Therefore, I define this type of requirement as a customer system functional requirement, or more simply a Customer System Requirement. Finally, there are the "wishful thinking" statements. Many times these are, in fact, genuine capabilities the customer needs (unknown requirements). The reason they are not requirements is that they have no metric associated with them to enable the implementer to know when they have been met. Therefore, rather than ignore them, I've put them into a category (or type) I call Capabilities. These are statements the Systems Engineer must investigate to determine whether or not they are requirements.
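As a purely illustrative sketch (the field names and examples are mine, not part of any formal method), the three types could be recorded in a requirements tool something like this:

```python
# Illustrative sketch only: one way to record the three requirement types described above.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RequirementType(Enum):
    CAPABILITY = "capability"                        # no metric yet; needs investigation
    CUSTOMER_SYSTEM = "customer system requirement"  # something the system must perform
    DESIGN_CONSTRAINT = "design constraint"          # something the system must meet

@dataclass
class Requirement:
    identifier: str
    statement: str
    req_type: RequirementType
    metric: Optional[str] = None  # how we know it has been met; None means still a Capability

requirements = [
    Requirement("R-1", "The system will show project status using the standard form",
                RequirementType.CUSTOMER_SYSTEM, "Status report can be produced on demand"),
    Requirement("R-2", "Status reports must use the organization's standard form",
                RequirementType.DESIGN_CONSTRAINT, "Output matches the standard form template"),
    Requirement("R-3", "The system shall be user friendly",
                RequirementType.CAPABILITY),  # wishful thinking until a metric is agreed
]
```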
In any software development or transformation effort, frequently, there are more design constraints than customer system requirements because there are so many types of design constraints and the relative number of functions is, normally, limited. Typically, an organization does not radically reorganize or transform all of its processes and supporting systems concurrently; if there is a higher risk method to move to the going out of business curve, other than fraud, I don't know it. Instead, the organization will develop or transform a single business function or service in a single project; though it may have several projects.
On the other hand, all development and transformation efforts of an organization are subject to the policies and standards of the organization, to contractual obligations of the organization, and to external laws and regulations. All of these constrain the design of systems. For example, Section 508 of the US Rehabilitation Act affects the development of all governmental websites by ensuring that the visually and hearing impaired can use the site. Again, the Clinger-Cohen Act affects both the choice of transformation efforts and the products IT can use within those projects, by not allowing software which has been sunsetted, or is being sunsetted, to be used in the operation of many businesses within the United States; it mandates that those systems be updated to a current version. Obviously, these constrain the design of a new or transforming system or service of an organization, irrespective of the function of that system or service.
The organization has other design constraints, like having the organization's logo on all user interface displays, for example, or producing the same formats for the reports coming out of the new system, as from the prior system. The organization may use Microsoft databases and therefore have expertise supporting them; in which case using Oracle or IBM products would not be feasible--except in very special circumstances (e.g., a contractual obligation). Consequently, these "must meet" requirements are pervasive across all of an organization's development and transformation efforts.
Advantages of Separation
When, in the 1990s, I discovered this, I recommended to my management that we start a repository of design constraints to use on all projects of our aerospace and defense corporate organization, and I have recommended it many times since. In my presentation I envisioned four benefits (a rough sketch of such a repository follows the list):
- It enables and supports the reuse of Design Constraints and their evaluation methods. This makes evaluating them both consistent and cost efficient.
- It minimizes the risk of leaving a Design Constraint out of the requirements, since the Systems Engineer has a long list of these from the repository. This is a much better approach than either trusting the Systems Engineer to remember or figure them out, or relying on private lists of design constraints kept by individual Systems Engineers.
- If the Customer System Requirements are separated from the Design Constraints, then the System Architecture process is more effective and cost efficient because the System Architect does not have to attempt to sort these out and keep them straight during the process. (Note: the Customer System Requirements are the requirements that are decomposed and from which the System Architecture is derived.)
- Finally, if the organization uses an Enterprise Architecture process like the one I described in my posts A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture and Asset and Enterprise Architecture Repository, then understanding which Systems will be affected by a change in policies or standards becomes a simple effort.
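For what it's worth, here is a minimal, hypothetical sketch of what such a design-constraint repository might look like (the names and entries are invented for illustration, not taken from any real system):

```python
# Hypothetical sketch of an organization-wide design-constraint repository.
from dataclasses import dataclass
from typing import List

@dataclass
class DesignConstraint:
    identifier: str
    statement: str          # the "must meet" requirement
    source: str             # the policy, standard, law, or contract it comes from
    evaluation_method: str  # reusable method for verifying the constraint is met

class ConstraintRepository:
    def __init__(self) -> None:
        self._constraints: List[DesignConstraint] = []

    def add(self, constraint: DesignConstraint) -> None:
        self._constraints.append(constraint)

    def from_source(self, source_filter: str) -> List[DesignConstraint]:
        """Return all constraints whose source matches, e.g. every Section 508 item."""
        return [c for c in self._constraints if source_filter in c.source]

repo = ConstraintRepository()
repo.add(DesignConstraint("DC-1", "All user interfaces must be usable by visually impaired users",
                          "Section 508, US Rehabilitation Act", "Accessibility audit checklist"))
repo.add(DesignConstraint("DC-2", "All reports must use the organization's standard form",
                          "Corporate reporting standard", "Compare output against the form template"))
print([c.identifier for c in repo.from_source("Section 508")])
```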
Wednesday, May 25, 2011
Wealth effect driven by the housing market
Leith van Onselen over at Macrobusiness has written a couple of very important and timely articles on the wealth effect. Put simply, the wealth effect is an increase in spending that accompanies an increase in perceived wealth.
In relation to housing, this paper suggests the wealth effect increases our propensity to consume by 9c per dollar of increased housing value (which is further supported here). So if the housing stock of Australia is valued at $3 trillion (some say between 3.5 and 4 trillion), and market values increase 10% in a year, then we will spend on average 9% of the $300 billion of new 'wealth', or $27 billion - with $6 billion of spending occurring prior to the end of the next quarter.
Importantly, this money is spent before it is earned by selling the asset. The easy access to home equity lending has been a contributing factor to the size of this effect, enabling households to spend their capital gains before they have been realised which increases their financial risk.
There are few readily available studies about the size of this effect in reverse, but if the same values hold in both directions we can look at some interesting scenarios.
If prices fall 2.5% nationally over a quarter then we lose $75 billion of perceived wealth, with an immediate reduction in spending in the following quarter/half year of about $1.5 billion and ongoing reductions in spending totalling $7 billion.
With about $1.7 trillion of bank loans outstanding, that is about the same effect on spending as increasing interest rates by 0.25% and keeping them there for two years (which would mean about $4 billion extra spent on interest repayments per year). This of course assumes that house prices are not dramatically affected. Indeed, if we consider that interest rate moves are likely to also bring down home prices, we can expect a much greater effect from the monetary lever.
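A rough worked version of the numbers above, using only the approximate figures quoted in this post rather than official data:

```python
# Back-of-envelope version of the wealth-effect comparison above.
housing_stock = 3.0e12   # approximate value of the Australian housing stock ($)
mpc_wealth = 0.09        # spend ~9c per $1 change in perceived housing wealth
price_fall = 0.025       # a 2.5% quarterly fall in national prices

wealth_change = price_fall * housing_stock           # ~$75 billion of perceived wealth lost
total_spending_hit = mpc_wealth * wealth_change      # ~$6.75 billion of spending, over time

loans_outstanding = 1.7e12
rate_rise = 0.0025       # a 25 basis point rate increase
extra_interest_per_year = rate_rise * loans_outstanding  # ~$4.25 billion per year

print(f"Lost spending from the price fall: ~${total_spending_hit / 1e9:.1f}bn")
print(f"Extra interest from +0.25% rates:  ~${extra_interest_per_year / 1e9:.1f}bn per year")
```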
That’s why house price falls of just a few percent can cascade into a crash so easily.
I would suggest the reason the wealth effect in relation to housing is much higher than found elsewhere is that many people who benefit/lose from house price changes are highly geared, which increases/decreases their equity more quickly for a given price change.
On this note I would add that you can’t directly compare share market volatility to house price volatility, since the share market is an equity market. To make a direct comparison you need to compare the volatility of the equity component of the housing market with the share market, or the volatility of the share market value plus the value of debts held by those listed businesses with the housing market.
Monday, May 23, 2011
Technology Change Management: An Activity of the Enterprise Architect
As I see it, Enterprise Architects have three responsibilities, key to the success of the organization. They are:
- Mission Alignment,
- Governance and Policy Management, and
- Technology Change Management.
I discussed the first two, Mission Alignment and Governance and Policy Management in my post A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture. In this post I will discuss the third, Technology Change Management (TCM).
The Mission of Technology Change Management
The Mission of TCM is to insert new technology, or replace existing technology, to increase process effectiveness and cost efficiency, while minimizing the impact of the technology change transformation process on the organization. While that is a mouthful, it does identify the three key strategies of all TCM, in the following priority order:
- First, increase the organization's process effectiveness--this is the only reason for having tools
- Second, increase the cost efficiency of the tools--i.e., reduce the Total Cost of Ownership (TCO) of the tooling while maintaining the process effectiveness
- Third, transform the tool set while maintaining the process effectiveness--while this is third in priority, it is a strategy that the Enterprise Architect must keep in mind, especially when upgrading a current system to new technology.
The TCM Process and Its Activities
To meet the strategies, above the TCM process should have three high-level activities:
- Technology Evaluation
- Supplier Technology Roadmapping
- Internal Technology Roadmapping
Technology Evaluation
There are two very different types of Technology Evaluation: those evaluating the potential of upgrades of current products and those evaluating the potential of immature versions of potentially disruptive technologies. Additionally, the Technology Evaluation activity enables and supports the evaluation of two distinctly different reasons for TCM: technologies that enable and support a product for sale, and technologies that have the potential to disrupt the organization's current strategies, processes, current systems, or any and all of those. This is the source of a significant conflict, which is currently an unidentified issue within many organizations.
TCM and the CTO versus the Enterprise Architect (or Chief Architect)
The source of the conflict is over whether the technology is a Product or a support for Process. If the organization is producing the technology for sale (in its many forms), then it is a product. Products include computer and networking hardware, and software for sale or rental/leasing. The CTO is definitely responsible for new products or new functions and components within current products. The CTO organization should also evaluate the product's production readiness, even for internal systems.
However, if the technology enables and supports an internal process or the processes' supporting tools, the Chief Process Officer (CPO) or Enterprise Architect should evaluate the technology with respect to its applicability to supporting the ongoing or radically altered strategies and processes. This is especially true if the organization is a "service providing" firm, like those in the financial industry, IT systems integration, or legal and governmental organizations. In fact, I really wonder why, in those industries that sell a "unique" process as a service to other organizations, there is never a CPO on the same level as the CFO, with the CTO and CIO as direct subordinates. The reason I wonder this is that if the process is the value producer for the organization, then it should be of the highest importance and responsibility in the organization (without it, the responsibilities of the CFO add no value, because they exist to support the organization's value-producing engine).
New releases of current technology
Since one of the TCM strategies is to insert the technology into the organization's tooling with as little disruption as possible, the Enterprise Architect, together with the Subject Matter Experts (SMEs), will perform the evaluation activities. In fact, large organizations may want to have a specific SME designated for each of the main products supporting their processes. These people must be evaluators, rather than the product advocates many of them turn into. In fact, when I held a position as a technical leader in one major organization, I got into serious political trouble because I accurately identified the shortcomings of a product to the supplier, rather than lauding it. The supplier reps reported my evaluation to my boss (who reported to the corporate CIO), and this is one of the real reasons he eventually figured out that he did not have the budget for all the Enterprise Architects and so removed me. Still, there is some Pyrrhic justice. When a new CIO replaced the prior one, the product in question was replaced as quickly as possible, given the size of the effort. Further, the supplier that apparently "listened with its mouth" has been steadily losing market share.
Disruptive Technology
Another reason that the Enterprise Architects and product SMEs must be evaluators, rather than Evangelists, is that from time to time disruptive technologies change everything--that's why they are called disruptive (e.g., one of my forebears was a major manufacturer of buggy harnesses in the 1880s, just as something called the automobile was being innovated into a practical and affordable product...). Therefore, the organization must maintain membership on standards bodies, and so on, where there may be early warning that suppliers and research organizations are creating these technologies, and then bring the technologies in for evaluation as early as possible--to evaluate the new function or functional group, and its production readiness.
Supplier Technology Roadmapping
While Mission Alignment and Governance and Policy Management are short-term, short-cycle activities, Technology Change Management has a much longer time horizon, since supplier technology plan data and information is available, though semi-fugitive, for two to three years out. Many suppliers will release their plans under nondisclosure agreements. Additionally, as suppliers and entrepreneurs get ready to release potentially disruptive technologies, they tend to publicize it. Therefore, the Enterprise Architect has some significant ability to identify, track, and evaluate (for use in his or her organization) new tooling and plan for its introduction into the organization. These plans are called supplier technology roadmaps.
Organizational Technology Implementation Roadmapping
Once the organization understands when a new version of a product, either a new release or a disruptive technology, will be production ready, the organization will require a second type of roadmap, an organizational technology implementation roadmap. This roadmap is one process instantiating the third Strategy of TCM, minimizing the impact of transformation. One reason for needing this roadmap is that the Enterprise Architect and SME team needs time to evaluate the new release within the organization's systems and infrastructure. Many times, new releases of software require new releases of operating systems, databases and other software components to operate, or to operate properly. I have found this a key consideration. Therefore, the organizational technology implementation roadmap must account for the migration of all hardware and software within the organization, not just the single product.
Further, the roadmap must account for exceptions. For example, if most of an organization's customers (or clients) are on a past release of software and there is a need to share data or information, then it may be a contractual obligation (or simply make sense) to stay at the same release until completion of the work, even if the rest of the organization is migrating to a new release. If the software supplier is sunsetting a release of software (i.e., no longer supporting it) within the next 3 months, then it would make sense to change the organization's standard for that software to disallow any installations of that release and mandate a migration to the next or latest release, while waiving any migration for those projects or efforts with contractual obligations.
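Purely as an illustration (the 3-month window and field names simply restate the example above, not any real standard), the sunsetting rule could be expressed as a check like this:

```python
# Illustrative sketch of the sunsetting/waiver rule described above.
from datetime import date, timedelta

def allowed_to_install(release_sunset: date, has_contractual_waiver: bool, today: date) -> bool:
    """Disallow new installations of a release being sunsetted within the next 3 months,
    unless the project holds a contractual waiver."""
    sunsetting_soon = release_sunset <= today + timedelta(days=90)
    return (not sunsetting_soon) or has_contractual_waiver

# Example: a release being sunsetted in two months, for a project without a waiver.
today = date(2011, 5, 23)
print(allowed_to_install(today + timedelta(days=60), has_contractual_waiver=False, today=today))  # False
```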
Finally, the leadership of the organization will use the organizational technology implementation roadmap as one of the criteria to determine which Architectural Blueprints to fund in a particular cycle of the Mission Alignment/Mission Implementation process.
Wednesday, May 18, 2011
A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture
In my post The IDEF0 Model and the Organization Economics Model, I outlined the IDEF0 model, shown in Figure 1 below, and outlined the three functions contained within the Domain of the organization.
Figure 1
This post will describe the two high-level functions within the control function and link the IDEF0 model's control function with the process of the OODA Loop pattern as shown in Figure 2 (for more details on the OODA Loop see my post The OODA Loop in Mission Alignment Activities). I chose to use both the IDEF0 pattern for an organization and the OODA Loop for the Control Process pattern because they are simple models that, in my experience, when properly applied, produce useful results.
Figure 2
The Functions of Control
As shown in Figure 2, above, there are two high-level functions by which leaders and managers control an organization: Mission Alignment, and Governance and Policy Management.
Mission Alignment
Mission Alignment is the function by which the leadership decides where to invest its limited resources to cost-efficiently achieve the vision and mission of the organization. If the organization uses Enterprise Architecture to enable and support Mission Alignment, then it can be defined as the process of aligning the organization's enterprise architecture with the organization's mission, to ensure the organization's investments in processes and infrastructure optimally enable and support the organization's mission and vision.
An organization's Vision is the ultimate goal that it is attempting to achieve. For example, the Vision of the United States Federal Government is embodied in the Preamble to the United States Constitution. In Built to Last, Jim Collins and his team describe several businesses that have, and execute against, Vision Statements, in some cases for over 100 years.
While the Vision statement of an organization describes the organization's goal, final state, to-be state, or optimal state, the Mission statement(s) describe the intermediate objectives for achieving the goal. This means that the Mission is of a more limited duration and that it must be measurably linked to the Vision. Strategies enable and support the Mission by defining processes, procedures, and methods, or the road map for achieving the objective of the Mission. Consequently, the strategies should be measurably linked to the Mission--otherwise how does the organization determine whether a strategy is successful?
An example of this hierarchy from vision to mission to strategies might be that a country's vision is a secure citizenry (which is part of the Preamble of the US Constitution). A Security Mission during WWII was to liberate Europe from Hitler. For the United States, Mission-supporting Strategies included growing the US Army from about 140,000 (in 1939--fewer troops than landed in Normandy on D-Day) to over 10 million, equipping all of the US forces and providing much of the equipment for the rest of the allies, and invading Europe to kill or capture Hitler. Today, the Security Mission supporting the Vision might be to eliminate terrorist attacks on the United States. The Strategies enabling and supporting the Mission could include creating and operating a high-technology intelligence system, using highly skilled special operations troops supported with sophisticated tools, and developing and operating unmanned combat air vehicles (UCAVs).
Processes enable and support the Strategies by making them operational. A process is defined as a set of activities ordered to achieve a goal; that goal is the Strategy that enables the Mission. For example, a strategy of "using highly skilled special operations troops" requires processes for recruiting, training, and supporting such troops. Each of these processes should be measurable in terms of achieving the strategy. If not, then how can the leadership determine whether or not the investments made in the process are providing value to the organization? Further, this will help the leadership to determine which processes the organization requires as the organization's external context changes; that is, is the Mission or Strategy still required, and if not, does the organization still require the processes supporting the strategy?
Additionally, the processes should be measured in terms of effectiveness, that is, how well the process is working. Only when the leadership knows "how well" the current process is working can they make intelligent decisions as to where further investment is needed. These "internal" metrics determine whether or not the flow through the process meets, does not meet, or exceeds demand for the product of the process. These metrics may also determine the changes in cost efficiency of the process--notice this is the first layer where cost really enters the architecture (and most organizations really do not have effectiveness metrics coupled with cost efficiency metrics for processes; if anything, just cost efficiency metrics).
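A minimal sketch of what coupling an effectiveness metric with a cost-efficiency metric might look like (the figures are invented purely for illustration):

```python
# Illustrative sketch: coupling a process effectiveness metric with a cost-efficiency metric.
def process_metrics(units_produced: int, units_demanded: int, total_cost: float) -> dict:
    return {
        "effectiveness": units_produced / units_demanded,  # >1 exceeds demand, <1 falls short
        "cost_per_unit": total_cost / units_produced,      # cost efficiency of the process
    }

# A hypothetical quarter: 950 items delivered against demand for 1,000, at a cost of $47,500.
print(process_metrics(950, 1000, 47_500.0))
# -> {'effectiveness': 0.95, 'cost_per_unit': 50.0}
```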
Tooling enables and supports the Processes by multiplying the effectiveness of the processes. As indicated by Adam Smith, in Chapter 1 of An Inquiry into the Nature and Causes of the Wealth of Nations, or simply The Wealth of Nations, the only reason to use tools is to increase the process effectiveness of a process. The military considers their weapons, intelligence, and logistic support to be "Force Multipliers", so that a smaller force with better tools can out think or out do a larger force.
For example, at the start of the battle of Gettysburg in the US Civil War, two cavalry brigades under the command of Brigadier General Buford held off three infantry divisions from two CSA Corps for more than 2 hours. Up to then, an infantry brigade had always been able to chase away any cavalry fairly easily. However, Buford gave his brigades two advantages: good position on the battlefield, and good tools. The tool was the Sharps carbine. It could shoot 5 to 8 times per minute, compared with the infantry, whose muskets could fire 2 to 3 times per minute. The volume of fire made up for the small size of the force--this is the force (process) multiplier. Likewise, carpenters use nail guns instead of rocks to drive nails because with the nail guns they can drive nails more uniformly (increasing the quality, a measure of process effectiveness) and faster (increasing the throughput, another measure of process effectiveness).
Finance Engineering often does not consider the concept of tooling increasing the effectiveness of a process in making decisions, especially in IT, because there may not be clear and provable financial metrics for the benefits (process multiplier) directly attributable to the implementation of new tools. For example, many accountants and CFOs will not accept "cost avoidance" as a benefit, since, if a cost is avoided, there is no way to measure what the cost would have been had it not been avoided. Enterprise Architects can avoid this type of conundrum only by implementing a feedback loop like Business Activity Monitoring and Management (BAMM). The problems for the Enterprise Architect are that demonstrating the benefits of BAMM and the investment decision-making feedback loop 1) takes time, two or more investment cycles, 2) may show that the metrics poorly describe what is happening, or 3) may show that the leadership and management are making poor and/or bad investment decisions. The Enterprise Architect may be able to ameliorate the first problem by creating a 3-month investment cycle, instead of the usual yearly investment cycle. This is in line with the concepts of short-cycle transformation processes as I described in my post SOA in a Rapid Implementation Environment. The second is almost inevitable at the start. The leadership will propose ill-conceived metrics because they fit with their best understanding. Once there are two or three investment cycles, the leadership, guided by the Enterprise Architect, will define and delimit better metrics; this process will continue for the life of the organization. The third is more difficult because leadership and, particularly, management do not like anyone to even question their decisions, let alone demonstrate that their decision-making ability is poor, or that their decisions are in their own best interests and not in the organization's best interest, that is, their private agenda.
The Enterprise Architect has the task of integrating the Vision, Mission, Strategies, Processes, and Tools into an "organizational functional design", that is, an Enterprise Architecture, while using it to support the organization (see my posts Initially implementing an Asset and Enterprise Architecture Process and an AEAR and Asset and Enterprise Architecture Repository for a RAD-like process for implementation). Any Enterprise Architect that can perform this process successfully is worth every penny he or she is paid and more, since the organization will reap major dividends in terms of effectiveness, cost efficiency of meeting its Mission, and longevity and agility in moving toward its Vision.
Governance and Policy Management is the second function of leadership and management. Governance and Policy Management identifies what policies and standards to set, defines each, determines when and how to enforce them, and how to mediate and/or adjudicate them. Policies and Standards fall into two categories, constraints (the "thou must") and restraints (the "thou must not"). Consequently, for an organization, policies and standards are the equivalent of Design Constraints for a new product or process transformation project (see my post Types of Customer Requirements and the System Architecture Process).
The only rational reason for setting a policy or standard is to reduce intra-organizational process friction. Process friction occurs when the interfaces among activities or between processes don't align, when there are conflicting policies and standards, or when Mission, Strategies, Processes, or Tools don't align. Most importantly, no policy or standard should conflict with the Vision or Mission of the organization as a whole; such a conflict is sand in the gears of any process and can bring that process to a halt. In the military, an example of constraints and restraints on a Mission is the "Rules of Engagement". A restraint on a military mission might be "Don't kill thy fellow soldier", that is, no friendly fire incidents. Organizations set policies and standards for much the same reason, though normally the result is not as catastrophic. For example, using the same version of a Web Service standard will reduce interface friction among the Web Service Components of a Composite Application in a Private Cloud.
Governance and Policy Management is really a pair of conjoint processes: Governance, and Policy Management.
Governance
As attributed to Winston Churchill, "To Govern is to Decide." In the case of the organization, it is to decide which policies and standards to set to minimize intra-organizational process friction. Normally, the highest level of leadership decides which policies and standards to set. However, while in their mind's eye the new policy will not conflict with existing policies or standards, once enacted within Policy Management it may. Therefore, the Governance process should have a feedback loop that enables the leadership to understand the consequences, both good and bad, of the new policy. This is another task of the Enterprise Architect.
The three primary activities of management within the Policy Management process are to enact, enforce, and mediate/adjudicate policies and standards.
- Enact - Management creates new or updates old policies and standards based on the decisions of the leadership resulting from the Governance process. Creating a policy (a standard or, for government, a law) is more than documenting a policy description. To make the policy enforceable requires associating organizational "business rules" that define and delimit metrics for when the policy has been violated. In government, these business rules may be known as "regulations". In IT architecture, these rules may be parametrized and instantiated in a rules repository (a minimal sketch follows this list), which works especially well in Enterprise SOA-based Composite Applications.
- Enforce - Once management has described, documented, and delimited a policy or standard and instantiated it with business rules, management must enforce those rules; otherwise the policy or standard becomes a mere admonishment to virtue (something that looks nice on paper but is never followed). While enforcement may seem fairly straightforward and simple, it turns out that, in detail, it is quite complex.
- Mediate/Adjudicate - Management must also either mediate or adjudicate penalties. Many people assume that judging is "yes or no" and that penalties are imposed uniformly; in fact, many cultural/social/political forces militate against uniform penalties, including the "management protective association", which ensures that its members receive light penalties as long as there is no big stink, since management controls all facets of the Policy Management process and so can easily protect its own. This is a key reason that the creators of the US Constitution separated these activities into the three branches of government.
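To make the Enact activity concrete, here is a minimal sketch of what a parametrized rules repository might look like. The policy name, rule, and thresholds are illustrative assumptions of mine, not a reference to any particular rules engine or product.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class BusinessRule:
    """A parametrized rule describing when a policy has been violated."""
    policy: str                   # the policy or standard this rule enforces
    description: str
    parameters: Dict[str, float]  # thresholds the Governance process can tune
    check: Callable[[Dict[str, float], Dict[str, float]], bool]  # True on violation

# Hypothetical repository; in an SOA these rules would live alongside the
# Composite Application's service components.
rules_repository: List[BusinessRule] = [
    BusinessRule(
        policy="Web Service interface standard",
        description="Components must use the approved Web Service version",
        parameters={"approved_version": 1.2},
        check=lambda params, event: event.get("ws_version") != params["approved_version"],
    ),
]

def evaluate(event: Dict[str, float]) -> List[str]:
    """Return the policies an observed event violates (the 'enforce' hook)."""
    return [r.policy for r in rules_repository if r.check(r.parameters, event)]

# Example: a deployed component reporting an interface version
print(evaluate({"ws_version": 1.1}))   # -> ['Web Service interface standard']
```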
There are many patterns for the Governance process, and there are many ancillary processes and sub-processes associated with Policy Management that I have not discussed here, though I may in future posts.
The IDEF0 pattern's use of the OODA Loop process pattern in Control
The IDEF0 functional architectural pattern (shown in Figure 1) and the Control architectural pattern (shown in Figure 2) show two levels of detail of an abstract architectural pattern of functions for control of the organization. However, the Control of an organization is much more than levels of functions in an architecture. Control requires a process pattern (also shown in Figure 2), that is, the activities and procedures that the leadership, management, and Enterprise Architect use for aligning the Mission and governing the organization. I recommend the OODA Loop process pattern, the functions of which I discussed in my post The OODA Loop in Mission Alignment Activities. The reason is that, like the IDEF0 pattern, it works. The acronym OODA stands for the four activities in the loop:
- Observe
- Orient
- Decide
- Act
These terms are defined in my post cited above.
Mission Alignment
In the process of Mission Alignment, the OODA Loop is used in the following way.
Observe - The Enterprise Architect observes the current processes and tooling by measuring them during each cycle through the Mission Alignment process. In effect, the observe activity is the feedback loop of Mission Alignment and the OODA Loop process pattern.
Orient - The Enterprise Architect orients the observations by inserting them into the "as is" architecture. The architect pays particular attention to any processes or tooling that were changed in the last cycle, to determine whether the benefits he or she predicted were, in fact, met. This helps the architect determine whether further change is required in the processes, the tooling, or the method for measuring the benefits (both increased process effectiveness and cost efficiency). This activity includes modeling the current or "as-is" Enterprise Architecture and comparing it with models of the candidate changes, which helps the Enterprise Architect determine which candidate changes have the greatest potential for increasing process effectiveness and/or cost efficiency.
Decide - Once the Enterprise Architect determines the recommended candidates for change, he or she needs to create a blueprint for each of them. A blueprint has three sections. The first section is a technical description of the change, that is, a notional design; it should also include all risks identified and assessed by the Enterprise Architect (and risks are not bad things to identify or assess). The second section is a benefits analysis, documenting the potential process effectiveness and cost efficiency benefits found in modeling the candidate change. These potential benefits are much more accurate than many current "benefits analyses" because they are measurements of support for the organization's Vision and Mission, instead of mere support for an activity or even a process. The final section is the estimated cost of the candidate change. Since it is based on the notional design, it will generally fall into the Rough Order of Magnitude (ROM) category, though many customers, managers, and finance engineers treat the number as a "not to exceed" figure. Consequently, many Enterprise Architects err on the high side of the estimate, which kills many good candidate efforts.
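As an illustration of the blueprint's three sections, the sketch below represents them as a simple data structure and ranks candidates by modeled benefit per ROM dollar. The field names and the ranking heuristic are my own assumptions, one possible way of doing it rather than the method itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Blueprint:
    candidate: str
    notional_design: str             # section 1: technical description of the change
    risks: List[str] = field(default_factory=list)
    effectiveness_gain: float = 0.0  # section 2: modeled benefit to Mission support
    cost_efficiency_gain: float = 0.0
    rom_cost: float = 0.0            # section 3: Rough Order of Magnitude, not "not to exceed"

def rank(blueprints: List[Blueprint]) -> List[Blueprint]:
    """Order candidates by modeled benefit per ROM dollar (one simple heuristic)."""
    return sorted(
        blueprints,
        key=lambda b: (b.effectiveness_gain + b.cost_efficiency_gain) / max(b.rom_cost, 1.0),
        reverse=True,
    )

best = rank([
    Blueprint("Consolidate invoicing services", "notional design ...", ["vendor lock-in"], 8.0, 5.0, 250_000.0),
    Blueprint("Replace batch ETL with messaging", "notional design ...", [], 4.0, 9.0, 400_000.0),
])
print(best[0].candidate)   # -> Consolidate invoicing services
```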
Once the Enterprise Architect presents the blueprints for the candidate change efforts to the organization's leadership, the leader or leadership team must decide which to implement. This is still difficult because there will never be as much funding or time (the programmatic requirements) as there are good potential efforts.
Act - Once the organization's leadership has decided which efforts to fund, the Systems Engineer and System Architect, together with the Program Manager, perform the transformation effort. The first task, before project planning, is performed by the Systems Engineer: starting from the requirements on which the Enterprise Architect based the notional design, the notional design itself, and the identified risks, the Systems Engineer works with the customer to create a much more detailed set of functional requirements and design constraints.
The reason the Systems Engineer gathers and documents an initial set of detailed customer system requirements before the start of project planning is that the project plan should be based on those requirements. Many projects and programs fail simply because they create the project plan first and then attempt to perform the requirements analysis second. The result is a plan based on poor or non-existent requirements, forcing a complete replan after the requirements analysis is performed ("putting the cart before the horse" is never a good thing).
If you make the assumption that "not all of the real requirements are known up front", hardly a heroic assumption, then using a RAD or rapid implementation process, such as the one I describe in SOA in a Rapid Implementation Environment, would be a good thing to do.
When performing the first OODA Loop of the Mission Alignment process, the Enterprise Architect should start with this Act activity, for two reasons. First, the Systems Engineer has identified the customer's requirements, which have associated metrics. The Enterprise Architect can measure the current system against those metrics, insert that data into the AEAR, and then measure the transformed system to show the success of the effort. In doing so, the Enterprise Architect demonstrates the operation of the OODA Loop process to the leadership while starting the build-out of the AEAR.
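A hedged sketch of that first before-and-after measurement: the AEAR here is just a stand-in dictionary, and the process name and metrics are hypothetical.

```python
# A minimal stand-in for recording before/after measurements of one
# transformation effort in the AEAR; names and numbers are illustrative only.
aear = {
    "order-to-cash process": {
        "as_is": {"cycle_time_days": 12.0, "cost_per_order": 38.0},
        "to_be": {"cycle_time_days": 7.5,  "cost_per_order": 29.0},
    }
}

def improvement(asset: str) -> dict:
    """Percent change per metric; negative means the metric fell (an improvement here)."""
    before = aear[asset]["as_is"]
    after = aear[asset]["to_be"]
    return {m: round(100.0 * (after[m] - before[m]) / before[m], 1) for m in before}

print(improvement("order-to-cash process"))
# -> {'cycle_time_days': -37.5, 'cost_per_order': -23.7}
```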
Second, there is no way to collect all of the necessary data for an Enterprise Architecture of a medium or large-sized organization before the data is out of date. It typically takes 6 months or more to create the AEAR Framework and to determine and insert the necessary data into it. Since most medium and large-sized organizations are constantly introducing new systems, software, and hardware, and retiring old ones, by the time the Enterprise Architect has enough data to start performing the Orient activity the "as-is" data has become the "as-was" data; that is, the tempo of change is faster than data about all current systems can be collected. Any Enterprise Architect who insists on making recommendations from a complete, correct, consistent, and current set of assets is doomed to failure. Many organizations, including those in the US Federal government, have found this to be the case: built that way, an Enterprise Architecture is an exercise in futility. Consequently, the organization's leadership often kills an Enterprise Architecture project before it can prove useful.
There are only two ways I know of that an Enterprise Architect can be successful: either start building the Enterprise Architecture with one sub-organization, or start with the current projects.
If an Enterprise Architect starts with a single sub-organization, it must be small enough that the Enterprise Architect can gather all of the data for the AEAR in 3 months or less. There are two reasons for this. First, customers tend to lose interest if there are no results within that time frame. Second, if a RAD-like process is used, the Enterprise Architect can be synced up with the next implementation cycle.
Even so, I would not recommend this method for initializing the Mission Alignment process, because it will take two or three cycles before the process is really capable of producing good assessments and blueprints; this is not terribly impressive to the customer (the leadership).
Instead, I would recommend starting with the data gathered for current projects and efforts. The reason is that, for whatever reason, the leadership has already intuitively weighed the benefits and costs of those efforts, so they are attuned to any before-and-after measurements of the transformation; that is, they have a vested interest in the results. The Enterprise Architect should be ready to defend the methodology of analysis, especially if the results are poor. Remember, first, that the initial set of metrics typically does not measure what the leadership intended. Second, there is a process learning curve: it may take 6 months before the users employ the process and system with facility. Third, many efforts to transform processes, activities, and systems are based on an inaccurate understanding of them. Fourth, because there is no Mission Alignment process yet (the Enterprise Architect is just starting it), decisions about which projects and other efforts to fund are based on beauty contests, backroom politicking, and private management agendas.
This project-based method for building out the architectural data of the AEAR assumes that within 3 to 4 years at least 98 percent of the processes and tooling will be touched by a project or other effort. This is a near certainty, because both software and hardware technology are changing so rapidly that it is hard not to transform and/or update nearly every facet of an organization's architecture. This means that as systems are transformed, the AEAR is updated and becomes more and more valuable to the organization. All the while, the Enterprise Architect is performing his or her role instead of just trying to build out the AEAR.
Governance and Policy Management
Any one Policy or Standard is almost always considered independently of all other Policies and Standards. Most organizations have no architectural framework for policies and standards and never really measure the effects of a new or revised policy on the processes and systems of the organization. Consequently, many policies and standards create as much or more process friction than they resolve. Without a feedback loop, which many, if not most, Governance and Policy Management processes and systems lack, the leadership and management have little chance to ferret out conflicts, anomalies, and non-value-added Policies and Standards, except when a policy or standard causes so much friction or so many issues that the problems become obvious.
Again, I would recommend that the OODA Loop decision-support pattern be used. For policies and standards, there may be no standard cycle duration (like the 3 months of a Mission Alignment cycle); instead, an OODA Loop of the Governance and Policy Management process will occur on an as-required basis. However, I would recommend that all policies and standards be reviewed at least once every 18 months to two years.
Observe - As in the Observe activity of Mission Alignment, the Enterprise Architect uses this activity to measure the effects of a change, only in this case the Enterprise Architect measures the effect on the processes that the policy or standard affects. This is the feedback loop of the Governance and Policy Management process.
Orient - Once the Enterprise Architect has made the measurements, he or she has to orient them within the Enterprise Architecture, as embodied in the AEAR. This may require significant effort since, as discussed above, Policy Management must enact, enforce, and mediate/adjudicate policies and standards, and the Enterprise Architect must evaluate the policy or standard within each of these sub-processes. Additionally, the Enterprise Architect must analyze and evaluate the "business rules" associated with each policy or standard to understand their impact on the processes and systems they affect.
As with the Orient activity in Mission Alignment, the Enterprise Architect can model the "as is" architecture using the measurements. Then, as required by the leadership, the Enterprise Architect can change the metrics of the business rules, the business rules themselves, or the linkage of the rules (and policies/standards) with the processes or strategies, to determine whether there are options for further reducing process friction. If the Enterprise Architect finds an issue or an opportunity, he or she can create a recommendation for a change and present it to the Governance board (i.e., the leader, leadership team, lead, or other designee).
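As a rough illustration of that Orient modeling step, the sketch below replays observed process events against the current and a proposed business-rule threshold to see how often each would flag a violation, one crude proxy for the enforcement friction a change could introduce. The metric, threshold values, and event data are assumptions for illustration only.

```python
from typing import Dict, List

# Observed events from the processes the policy affects (illustrative data only).
events: List[Dict[str, float]] = [
    {"response_time_s": 1.8}, {"response_time_s": 2.4}, {"response_time_s": 3.1},
    {"response_time_s": 0.9}, {"response_time_s": 2.0},
]

def violations(threshold_s: float) -> int:
    """How many observed events the rule would flag at a given threshold."""
    return sum(1 for e in events if e["response_time_s"] > threshold_s)

# Compare the current parameter with a proposed change before the Governance
# board decides; more flags usually means more enforcement friction.
for threshold in (2.0, 2.5):
    print(f"threshold {threshold}s -> {violations(threshold)} violations")
```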
Decide - The Governance Board will then decide whether or not to move forward with the recommendation. If the Governance Board decides to move on the recommendation, they will forward it to the appropriate Policy Manager.
Summary
The IDEF0 architectural pattern, in part, describes the functions, Mission Alignment and Governance and Policy Management, that an organization's leadership and management use to control the organization. The OODA Loop is the process pattern for decision-making in both Mission Alignment and Governance and Policy Management.
Postscript: There is one other major activity that the Enterprise Architect should be involved with, that is, supporting the CTO and CIO in Technology Change Management (TCM), but more on that in another post.
Tuesday, May 10, 2011
Peter Schiff predictions
Peter Schiff was ridiculed for years when he predicted that the US housing bubble and credit binge would result in a massive asset price bust and recession. You could describe his economic philosophy as Austrian, and as the recent Keynes and Hayek rap video explains, the bust is a direct consequence of the boom, not some separate economic event that can be avoided through government intervention.
In this early 2009 video Schiff predicts the collapse of the US dollar and makes some very astute observations that may resonate with Australians. Enjoy.
SOA, The (Business) Process Layer, and The System Architecture
As I discussed in my posts The Paradigm Shift of Service Oriented Architecture and SOA in a Rapid Implementation Environment, there is a significant cultural/business/technical shift in thinking when an organization starts to migrate from a fragmented, monolithic IT architecture, or from Object Oriented/Web Service code development using COTS or custom-developed applications, to Composite Applications within an SOA. The key shift is the formal linking of the IT application with the business process, instead of with the functions that make up the process. I discuss several of the advantages in Assembling Services: A Paradigm Shift in Creating and Maintaining Applications.
Process and The Assembly Line
While it may not seem like much of a change to link the application to the process instead of to the function, consider the concept in the context of an assembly line. An assembly line is the tooling enabling and supporting the process of assembly. In turn, a process is "a set of activities ordered to achieve a goal"; the goal, or operational mission, of the assembly line is to assemble the product with no defects. I suspect that all manufacturing engineers would agree with that goal, but understand that Murphy's Law works on all assembly lines, so they attempt to minimize the number of defects, still... On an assembly line, as much as is feasible, all of the activities and their enabling and supporting tooling work in concert with one another, orchestrated by the manufacturing engineering team. This team spends significant time and effort measuring the flows through the assembly line to ensure a) that the line produces products with as few defects as possible, while b) producing those products as cost efficiently as possible (e.g., minimizing the work in process [WIP]), and c) meeting constantly changing demand with as little inventory as possible (which constitutes operational agility for the assembly line). Additionally, the manufacturing engineers want the tooling itself to be agile, so that as the components of the product change with changes in customers' requirements (and wants), the cost of retooling is minimized.
Process and IT
Contrast that with the way organizations engineer IT. From its inception supporting single functions, like accounting and payroll, on card-based semi-mechanical "electronic computers" in the late 1950s, IT has focused on supporting individual functions. By the early 1980s the result was apparent to anyone who developed or used applications: they were information silos, that is, the tooling supporting each function had associated data, but there were no interfaces between functions to enable, for example, activity 2 in a process to share its data with activity 3. During this time, I worked at a major university. In one Dean's office that I supported, the Dean's administrative assistant had five terminals on her desk. Why? For two reasons: she needed data from five applications to perform her job of supporting the Dean, and she needed output from one application to serve as input to another; she was, in fact, the interface between the functions.
Since this situation was intolerable and expensive for operating the functional applications, the developers/coders created interfaces as the customers identified the need (and had funding to pay for it). Consequently, the application layer's functional design went from a series of virtually unconnected silos of data and code to something approaching a picture of a bowl of spaghetti. In one study measuring the cost of maintaining each link in this "fragmented" architecture, I found that for one small but business-critical process of one business unit the cost was approximately $100/month/link. Even if only half of that figure is representative (i.e., $50 per link per month), a mid-sized organization would be spending a significant portion of its IT budget on link maintenance. This cost created great pressure from finance engineering on IT management to become more cost efficient.
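A back-of-envelope sketch of why this adds up so quickly: with point-to-point interfaces the number of links grows roughly as n(n-1)/2 in the worst case, so even the lower $50/link/month figure becomes substantial. The count of 40 functional applications is an assumption for illustration.

```python
def link_maintenance(functions: int, cost_per_link_per_month: float) -> float:
    """Worst-case point-to-point integration: every function linked to every other."""
    links = functions * (functions - 1) // 2
    return links * cost_per_link_per_month * 12  # annual cost

# Assume a mid-sized organization with 40 functional applications,
# at the lower $50/link/month figure from the study above.
print(f"${link_maintenance(40, 50.0):,.0f} per year")   # -> $468,000 per year
```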
Again, manufacturing engineering provided the concept: Material Requirements Planning (MRP). MRP systems interlinked all of the data systems and IT functions supporting the manufacturing floor by integrating them into a single application, thereby eliminating the linkages among functions. This concept was expanded into MRP II, and then into the Enterprise Resource Planning (ERP) systems, like the SAP and Oracle tools. The architecture is called Monolithic because all of the functions are tightly coupled in a single application.
Within a couple of years system users found several issues with ERP systems.
- The ERP systems are difficult to tailor to support the organization's processes. Typically, the suppliers of ERP systems build the functions to a group of standards used by most industries, or by most organizations within an industry. If, for whatever reason, the enterprise implementing the ERP does not have processes that follow those standards, then either a) the enterprise must change its processes (unlikely), b) it must attempt to tailor the ERP system to match its processes, or c) some combination of the two. Generally, none of the alternatives is particularly effective or popular in large organizations with a diverse product mix. The result is that either the initiative fails completely or the personnel of all of the business lines work much harder and longer (and possibly create "shadow systems") to overcome the limitations imposed by the ERP system.
- The systems are difficult to update. Frequently, implementing the software supplier's next version of a package is the equivalent of installing, configuring, and tailoring the original system all over again. The reason is that the supplier has updated the technology, and perhaps the functional architecture (design), sufficiently that any bolt-ons, add-ins, or other tailoring will not work with the new version.
- The systems are operationally inflexible. That is, they are not agile: able to respond successfully to unexpected challenges and opportunities. Because the tailoring required for an ERP system can be major, the costs and time involved become a significant item in the organization's budget. Consequently, the time to change becomes long and the cost of change becomes high. This leads to the problem, as studies by Gartner and Forrester have shown, that IT systems have become an impediment to change rather than an enabler.
The advent of Object Oriented programming and its successor, Web Services, enabled organizations to partially address the issues above. As discussed in my post SOA in a Rapid Implementation Environment, OO design enables a developer to treat an application as a set of objects. If these objects use Web Services standards (either WS or JSR) for their interfaces, then they are Web Services. In effect, this transformed an application into a set of components (classes, or Web Service components) that could be lashed together to create the application. The advantage of the OO software architecture is that each object is a small code module that can be updated or replaced with new technology, and as long as the object's external interfaces and the resulting functions remain the same, the application will operate in the same manner. This change addressed two of the three issues noted above: updating the application's technology is possible without a complete re-installation and configuration of the application, and initial tailoring of the application is much more feasible.
While this tailoring of an application made up of objects or Web Service components does address the agility issue, it does so only in a very minor sense. Since an Object Oriented architecture only enables an implied workflow, rather than making it explicit and independent of the Service Components, responding to unexpected challenges and opportunities still requires reprogramming.
Process, SOA, and the System Architect
As discussed in my post SOA and a Rapid Implementation Environment, SOA formally separates the process flow from the functions. This formal separation means that the organization can have its transformation teams restructure and order the functions without reprogramming of the functions themselves. Instead, they can simply be reassembled (see my post Assembling Services: A Paradigm Shift in Creating and Maintaining Applications).
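A toy sketch of that separation, with the process flow expressed as data and interpreted by a generic assembler standing in for a real orchestration or choreography engine (BPEL or similar); the service names and payloads are invented for illustration.

```python
from typing import Callable, Dict, List

# Service Components: each exposes the same simple interface (payload in, payload out).
def check_credit(order: dict) -> dict:   return {**order, "credit_ok": True}
def reserve_stock(order: dict) -> dict:  return {**order, "reserved": True}
def send_invoice(order: dict) -> dict:   return {**order, "invoiced": True}

services: Dict[str, Callable[[dict], dict]] = {
    "check_credit": check_credit,
    "reserve_stock": reserve_stock,
    "send_invoice": send_invoice,
}

# The (business) process flow is data, not code. Reordering or inserting an
# activity is a change to this list, not a reprogramming of the components.
process_flow: List[str] = ["check_credit", "reserve_stock", "send_invoice"]

def run(flow: List[str], order: dict) -> dict:
    """A toy orchestrator standing in for a BPEL/choreography engine."""
    for step in flow:
        order = services[step](order)
    return order

print(run(process_flow, {"id": 1001}))
```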
Formally and explicitly assembling Composite Applications using orchestration and choreography enables the System Architect to model the organization's (business) process for both effectiveness (in achieving the organization's mission) and cost efficiency, and then to model the Composite Application linked with the process to determine its ability to enable and support that process in an effective and cost-efficient manner. Consequently, when the organization's processes must change in response to changes in the organization's mission, strategies, environment, or technology, the System Architect, together with the rest of the transformation team, may only need to change the process flow component, may need to add or update certain Service Components supporting functions, or both. Either way, support for the processes becomes much more agile, and the SOA-based Composite Application is linked directly to the organization's process.
There is an additional benefit to separating the process flows from the Service Components. When the organization's automated business rules change in response to changes in policies and standards (which in turn should be in response to changes in Governance), the System Architect can very quickly evaluate the change by modeling the process and Composite Application to understand the consequences of the change. In the best situation, the System Architect would model all affected applications before any proposed change in policies, standards, or business rules is put into place because this would enable both the Governance and Policy Management teams to understand the consequences and results of the proposed change before those changes become unmanageable negative externalities.
Obviously, this is a significant shift in the organizational culture.
Sunday, May 8, 2011
1980s Texas Housing Bubble Myth - A Reply
Recently the debate on the price impacts of planning regulations has been a hot topic here and elsewhere. Leith van Onselen at Macrobusiness is one of the more sophisticated proponents of supply side impacts on home prices and recently responded to a comment of mine about Houston Texas. My comment was that if Houston is an example of how responsive supply can help cities avoid house price volatility, why did Houston experience a house price bubble in the 1980s?
Leith argued that Houston's apparent price bubble was a mere mild blip, especially on the grounds of price to income multiples. In his typically evenhanded fashion Leith does note many of the demand side factors at play during that time - the oil boom, liberalisation of loan standards and population growth. He brings together these points with the following conclusion.
What makes Texas’ home price performance in the early 1980s particularly impressive is that prices managed to remain relatively stable in the face of significant demand-side influences that should have caused home prices to rise significantly and then crash.
An additional point is made that Houston has managed to avoid the 2000s property bubble infecting most of the rest of the US, and much of the world.
My reply.
Houston prices declined around 40% in real terms following the 1982 market peak - that is indeed volatile - and it took 15 years for prices to recover in nominal terms. The Case-Shiller 10 city index has dropped by a similar amount since the US peak in 2006 (30.5% nominally) - so much for the volatility aspect.
But why do prices in Houston still appear so dramatically affordable?
One major reason is the relatively high property tax rate.
Property tax rates in Houston more than doubled from 1984 to 2007, becoming one of the highest rates in the US. Depending on your area, you can pay between 2-3% of your property's improved market value in annual state taxes, while the US national average is 1.04%.
One would expect areas with higher property taxes to have structurally lower prices, reduced price volatility, and much lower price to income ratios.
An illustrative example is shown below. The three comparisons are intended to roughly represent the early 1980s, the early 2000s and today. The Houston property tax rates increase from two to three percent, while the comparison taxes increase from half to one percent. Interest rates also represent mortgage rates at the time.
From these examples we can see that from just this single factor, the property tax differential, we should expect prices in Houston to currently be structurally around 30% lower than national averages (more on the impact of the property tax differential here).
An important factor at play in this example is that at lower interest rates a fixed-percentage property tax leads to greater price differences. Therefore, over time, we would expect Houston home prices to become a smaller fraction of comparable home prices elsewhere, as the property tax differential has a greater price impact at lower interest rates. Remember, in the table above, rents and returns are the same for each comparison - only the tax rate is different.
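To show the direction of that effect with some arithmetic, here is a crude capitalisation sketch in which buyers bid the price up until rent covers interest plus property tax. The rents, interest rates, and tax rates are my own illustrative assumptions (the original table is not reproduced here), so the exact percentages differ from the 30% figure above, but the widening gap as interest rates fall is the point.

```python
def price(annual_rent: float, interest_rate: float, tax_rate: float) -> float:
    """Crude capitalisation: buyers bid until rent covers interest plus property tax."""
    return annual_rent / (interest_rate + tax_rate)

rent = 20_000.0  # same rent in both cities; only the tax rate differs
for label, r, houston_tax, other_tax in [
    ("early 1980s",  0.15, 0.02,  0.005),
    ("early 2000s",  0.07, 0.025, 0.0075),
    ("today (2011)", 0.05, 0.03,  0.01),
]:
    h, o = price(rent, r, houston_tax), price(rent, r, other_tax)
    print(f"{label}: Houston ${h:,.0f} vs elsewhere ${o:,.0f} "
          f"({100 * (1 - h / o):.0f}% lower)")
# The Houston discount widens from roughly 9% to 25% as rates fall.
```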
Of course, this does not mean that housing is lower cost. It just means that the cost of housing is borne through annual tax obligations rather than capitalised into the price. A far better comparison of whether housing is structurally cheaper in Houston would be to compare quality-adjusted rents to incomes over time and across cities.
Perhaps once the property tax differential and other demand side factors are properly considered we will see Houston's supply-side impact on housing prices diminish to zero.
Lastly, I would add that the memory of such a deep and prolonged property price slump would be motivation enough to dampen speculative housing demand in Houston. Who in their right mind would bid up prices in Houston, knowing the increased tax liability and the history of dramatic losses in that property market?
Evidence of supply-side effects on home prices remains elusive.
Thursday, May 5, 2011
A sign of desperate times?
Saw this advertisement today in the Financial Review. I haven't seen anything like it before but it reeks of desperation. Is it some kind of joke?
I like the first part of the fine print "Real Estate agents tell me I can get $2.1million for my luxury home but..."
Wednesday, May 4, 2011