Sunday, July 31, 2011

RBA pragmatism and global stagflation

Since the higher-than-expected CPI print last Wednesday, the economic blogosphere has flooded our screens with opinions on the likely RBA decision at its board meeting tomorrow. Some have argued that the CPI was filled with ‘once-off’ price movements (e.g., the deposit and loan facilities item and some fruit) and should therefore be taken with a grain of salt. Others have argued that the CPI is clear evidence that the RBA should move on interest rates to get ahead of the inflation curve.

I have a different opinion.

Raising the cash rate while Australia could be in a technical recession is the one situation the RBA needs to avoid more than anything else. Think about the criticisms – “How could our central bank be so out of touch?” “Give Glenn the boot!” The very institution itself would be at risk. Forget demonstrating independence. Self-preservation is the name of the game (note also that the inflation target is not a mandate of the RBA, but its own interpretation of how to fulfil its statutory role).

Therefore, the only logical decision for the RBA board tomorrow is to leave the cash rate unchanged, even if it has strong concerns about inflation. It is the same action central banks are taking in the UK and other developed countries in similar situations.

But there is more to this story. The present bout of high inflation and low growth is global, and there is little our domestic policy can do to intervene. Further, I suspect that this has much to do with physical constraints to global oil supply (at least in the short term).

As I said two years ago during the financial crisis –

...some interesting trends should occur in the next year or two. First, we should see the price of oil rise again from its current price of around $60 a barrel. Second, we should see an increase in the inflation rate on a relatively global scale. (Note that in the UK, inflation is currently at 4.4%. With the base interest rate at 4.5%, the real interest rate is now effectively zero). Third, we will see a sustained decline in global output. Taken together, a recipe for stagflation. (I also predict continued volatility on financial markets as demand and supply expectations feed back on each other).

The following three graphs show oil production, the oil price, and the correlation between the oil price and inflation in Australia, Asia, and other developed markets (DM). (Thanks to Ricardian Ambivalence for the third graph.)


The simple explanation for oil-price-led inflation is that a century of capital equipment, particularly in transport, is reliant on oil and has very little ability to substitute other energy sources. Because of that invested capital, the cost of goods is at the mercy of the oil price.

Typically, there is an expectation that oil production will respond to higher prices. But if there are short-term physical and technological limitations, this cannot occur. In 2007 the oil price was double its 2005 price, yet total global oil production was identical. If there were no physical limit to oil production, oil producers should have responded to this price by greatly increasing supply.

Ricardian Ambivalence has weighed in with the opinion that global inflation is not about oil. The oil price leading inflation globally in the above graph is explained away on the grounds that “Oil leads CPI, in part, because variations in demand lead variations in CPI”. There may be some element of demand and the oil price acting as co-contributors to price volatility, but my suspicion is that physical limits to oil production are the key.

Indeed, the reason these limits are having such a dramatic effect is because they were not foreseen, and investment decisions were made on the expectation of higher volumes of oil available at similar prices.

As a final statement, I want to address the ‘lunacy’ of peak oil. Many economic thinkers rule out the possibility of such an occurrence, since high prices make more inaccessible reserves viable and substitute energy sources economical. Yet the recent evidence is that global oil production is back where it was in the late 1990s even though the oil price is more than 5x higher. This doesn’t seem consistent with such economic rationalism, which ignores the major, prolonged adjustments necessary for these investments and substitutions to occur.

Some may still be arguing in their minds that oil production is currently lower because of a global demand slump. But again, this fails to explain why we are willing to pay 5x the price for oil while producers are not willing to sell any more oil at that price.

In the end, Australia is at the mercy of global forces as much as anyone, and it would be foolish for the RBA to believe that our domestic interest rate will have any significant effect on inflation without crushing our economy.

Wednesday, July 27, 2011

The housing market's 'once-off adjustment' meme

There is a meme floating around which has its origins in Chris Joye's numerous articles on the Australian housing market. While I often challenge Joye's economic arguments on this blog, I hope that readers realise this is simply part of a rigorous intellectual debate, and not a personal attack. Indeed, I admire his quest to provide better housing data, and agree with quite a few of his economic and political beliefs.

The meme is that the surge in debt levels and the price of Australian homes since the late 1990s was a once-off adjustment to a period of low interest rates and inflation. Therefore, if these conditions hold, current prices are sustainable.

RBA Governor Glenn Stevens mentioned this 'once-off' adjustment in his recent speech:

The period from the early 1990s to the mid 2000s was characterised by a drawn-out, but one-time, adjustment to a set of powerful forces. Households started the period with relatively little leverage, in large part a legacy of the effect of very high nominal interest rates in the long period of high inflation. But then, inflation and interest rates came down to generational lows. Financial liberalisation and innovation increased the availability of credit. And reasonably stable economic conditions – part of the so-called ‘great moderation’ internationally – made a certain higher degree of leverage seem safe. The result was a lengthy period of rising household leverage, rising housing prices, high levels of confidence, a strong sense of generally rising prosperity, declining saving from current income and strong growth in consumption. (here)

Chris Joye recently reiterated the argument here:

This was a once-off "level-effect" (ie, sustainable adjustment reflecting the huge reduction in the cost of debt), not a permanent growth effect, and now these ratios are flat-lining. This is why the household debt-to-disposable income ratio, as shown below, has gone sideways since 2005, years before the GFC first materialised. That is, credit has been tracking incomes, as you would expect.

The household debt to disposable income graph is below, as is a graph demonstrating the structural adjustment of interest rates.

What makes this meme powerful is its truth. Australian interest rates did see a structural adjustment in the mid 1990s. There is also no denying that lower interest rates should lead to asset values rising relative to other prices in the economy. It also makes sense that a greater level of debt, as a portion of income, can be sustainably serviced at lower rates.

In the housing context, the 'once-off adjustment' argument can be demonstrated as follows.

Prior to the structural adjustment in interest rates, a buyer looking at a home that rents for $15,000pa, who is willing to pay a 20% premium over the cost of renting in order to own, would capitalise $18,000 at the going rate of 12.8%. That's a price of $140,625. After the structural adjustment, the same $18,000 would be capitalised at 7.3%, giving a price of $246,575. A 75% real price increase should be (almost) as sustainable as the previous price.
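For anyone who wants to check the numbers, the arithmetic is simple enough to script (the rates and the 20% premium are the assumptions used above):

```python
# Capitalising the 'value of ownership' at pre- and post-adjustment rates.
rent = 15_000                             # annual rent ($pa)
premium = 0.20                            # willingness to pay over the cost of renting
capitalised_amount = rent * (1 + premium) # $18,000

price_before = capitalised_amount / 0.128 # pre-adjustment capitalisation rate
price_after = capitalised_amount / 0.073  # post-adjustment capitalisation rate

print(f"Before: ${price_before:,.0f}")                  # $140,625
print(f"After:  ${price_after:,.0f}")                   # $246,575
print(f"Gain:   {price_after / price_before - 1:.0%}")  # 75%
```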

The same calculation can be made against household income, where for a fixed percentage of incomes, a 75% greater price, and level of debt, can be sustained.

Unfortunately, this logical argument only accounts for part of the debt build-up and house price growth since the mid 1990s. The RBA's graphs of household finances and real house prices (below) show clearly why this is the case.


The graph of interest paid as a proportion of disposable income shows that the actual cost of debt relative to incomes has doubled (6% to 12%) since the mid 1990s. This is clear evidence that much of the debt binge, and the subsequent house price inflation, is not attributable to the 'once-off adjustment'. The adjustment would only account for the amount of debt, and the level of home prices, that could be supported with interest costs of 6-8% of household incomes - not 12%.

The RBA also shows that real home prices more than doubled (>100% growth) from the mid 1990s to 2007, rather than seeing 75% real gains. Indeed, the 2009 boom saw real home prices inch up again (with some subsequent falls in real terms).

The ABS home price figures (though not ideal for this purpose) suggest that real home prices gained approximately 150% since 1996. That's twice what is expected from interest rate conditions alone.

To get back to that 'sustainable' point, either home prices need to fall by around 30%, or interest rates need to fall by 30% (mortgage rates to 4-5%), or some combination of the two (noting also the geographical disparity any correction is likely to have). With today's CPI print surprising many on the high side, the market prediction (and mine) of rate cuts by year's end seems far less likely. The negatively geared housing investor should take note.
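A rough check of that 30% figure, continuing the sketch above (the 150% and 75% gains are the estimates discussed in this post):

```python
actual_index = 1 + 1.50      # ~150% real price growth since 1996
justified_index = 1 + 0.75   # gain justified by the rate adjustment alone

fall_needed = 1 - justified_index / actual_index
print(f"Price fall to 'sustainable': {fall_needed:.0%}")   # 30%

# Since price = capitalised amount / rate, holding prices where they are
# instead requires the capitalisation rate to fall by the same ~30%.
```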

In all, the meme is powerful because it is true, but dangerous because, alone, it is an incomplete explanation of the debt and home price trends of the past two decades. What appears clear from the data is that we have overshot the price and debt adjustment expected from the changing interest rate environment. With this in mind, the downside risks for property values appear to far outweigh any upside potential.

Monday, July 25, 2011

Is Australia a net food importer?


Measuring food is difficult. Do we use kilograms, or calories?

I’ve covered the value of food security before.  But the obvious truth that Australia is a massive exporter of food, in terms of both kilograms and calories, does not stand in the way of the grocery lobby group, the Australian Food and Grocery Council (and yes, I am very late to this story).

Here are some examples:

This alarming result shows food and grocery manufacturing – which employs 288,000 people – is now a net-importer of food and grocery products which impacts industry’s growth and competitiveness (here)

But Ross Gittins' b%&*$it detector was straight on to it

According to figures compiled by the Department of Foreign Affairs and Trade, last year we had total exports of food of $25.4 billion and total food imports of $11 billion, leaving us with a surplus of $14.4 billion. Even if we ignore unprocessed and look only at processed food, we still had a trade surplus of $5.8 billion. (here)

He continues to pick apart the claims.

So how did the food and grocery council get exports of $21.5 billion and imports of $23.3 billion for 2009-10, giving that deficit of $1.8 billion? By using its own definition of ''food and groceries''. We're not talking about farmers here, but the people who take their produce and process it for supermarkets.

So the council's figures exclude all our unprocessed food exports, including wheat (worth $4.8 billion in 2009), other grains and live animals. On the other hand, they include ''grocery manufacturing products'' such as medicines and pharmaceuticals, plastic bags and film, paper products and detergents.

That's food? It turns out that our exports of ''groceries'' totalled $4.9 billion in 2009-10, whereas our imports totalled $12.9 billion, leaving us a ''grocery'' trade deficit of $8 billion. This is hardly surprising. Since when was Australia big in the manufacture of medicines? If you leave out groceries, the report's figures show we had exports of processed food and beverages worth $15.9 billion, compared with imports of $9.9 billion, plus exports of fresh produce worth $700 million against imports of less than $500 million.

That leaves us with a trade surplus of $6.2 billion for fresh and processed food and beverages. We've been conned.


This all leads me back to the arguments I made about the value of food security. If food security is important, why isn't grocery security, or medicine security, or car-making security, or plane-making security, or security of any other fundamental economic ingredient? Indeed, we could not produce the amount of food we currently do without imported picking, packing and transport equipment, so unless we ‘secure’ those, we won’t ever have food security.

The graph below is a final reminder about our food net export position relative to other nations, and our relatively low direct agricultural subsidies.

Sunday, July 24, 2011

The Believing Brain



Michael Shermer talks about his theory of the brain as a ‘pattern believing machine’. Put simply, we first believe in patterns subconsciously, then add logical explanations post hoc. This partly explains why debating passionate people on their topic of choice often leaves each person more entrenched in their beliefs than before, since logic doesn’t govern our already-held subconscious beliefs.

He has a book exploring the idea in more detail, and if you want another brief take on his theories you can read one of his articles here.

And the conclusion from one reviewer -

Having presented the case that we form beliefs on the basis of unconscious, often irrational processes, and that all our argumentation in support of these beliefs is then added post hoc and subject to a wide range of cognitive biases which he lists and explains, Dr. Shermer leaves us in a near-hopeless state. The human condition, according to this perspective, is one of deep-rooted, biased subjectivity and perpetual, unresolvable conflict between believers with different sets of beliefs.

Short Cycle, Agile, Level of Effort efforts, and Changes in Roles and Responsibilities

In a recent post I briefly discussed the changes in roles and emphasis when a development or transformation effort changes from a waterfall (Big Bang) effort to a short-cycle, agile effort.  This post will discuss the topic in more detail in terms of Short-Cycle, Agile, Level of Effort projects and programs.

Short Cycle
A development and transformation effort is a program or project that is changing some process, component, procedure, or tooling that supports an organization's Vision, Mission, and Strategies.  There are two types of efforts: Big Bang and Short Cycle.  Big Bang development and transformation processes are straight-line processes (see my post Product Architecture Thinking Versus System Architecture Thinking) in which there is a series of steps from requirements identification through design, implementation, validation, and roll-out.  It delivers the product in a single delivery--one Big Bang.

Alternatively, the Short Cycle (1 to 3 months) development and transformation process delivers the functionality of the total product or system in small increments through a series of short-duration development or transformation cycles.  Typically, the deliverable from the first cycle (a one-month cycle) is "usable", after a fashion, and is called the Initial Operating Capability (IOC) of the system.

I've found that the IOC will consist of the initial version of the system's access control together with a minimal set of input screens with associated data stores.  The reason for this IOC functional architecture is that security of an operational system is normally a "business" requirement; it's not wise to operate an open system.  Second, you need to "put garbage in" to "get garbage out."  If the team builds a report during the first cycle with no way to insert data to support the report, then the system is not operational in any sense.  However, if the team builds input screens with associated data stores during the first cycle, then the customer can start inserting data during the second cycle, while the team will likely be adding to the number of input screens and developing at least one report or information display.  My experience has been that it will take the customer about a month to get enough data in to have usable reports and output displays.  This means that by the end of the second cycle the system will be producing at least a minimum ROI for the customer.  This ROI will increase with each subsequent short cycle, which delights most customers.

Agility
The Agility of an organization is defined as "its ability to successfully respond to unexpected challenges and opportunities".  The "successfully respond" phrase has two dimensions: making the right decision and making a timely decision.  Various components of the US military have a saying, "improvise, adapt, and overcome", that embodies the concept of agility.  This has long been a tradition of the US military.  For example, the Allies won D-Day in part because they were much more agile.  That is, the NCOs, junior officers, and field grade officers improvised new tactics, based on the manpower and equipment available, to achieve the initial mission of creating a beachhead when all of the plans for the invasion were breaking down--the capture of Omaha Beach is the seminal example.  On the other hand, the German forces followed their defense plans as closely as possible.  Consequently, senior German officers were left asking permission to move units to defend the coast.  By the time Hitler, hundreds of miles away, made the decision to release the panzer units, the Allies had enough units ashore to defend the beachhead.

The same definition of Agility can be used for a development or transformation process.  If a process is founded on the assumption that "All of the customer's System Requirements are known up front", then I would suggest that the process is rigid and inflexible--a totally brittle process.  That is, if new customer System Requirements are discovered while the development or transformation team is designing and implementing the system, then a formal Engineering Change Order (ECO) process (with contractual changes) is required.  All of this "administrivia" costs time and resources that the team could otherwise use to create more of the product or system the customer wants.  Consequently, the customer will identify only those new requirements that will make the product or system completely unusable if not met.  This leads to poor systems and unhappy customers.  In contrast, agile processes are based on the assumption that "Not all the customer's System Requirements are known up front", so as to support the inclusion of new requirements as the system develops.

Level of Effort
Short cycle, agile development and transformation efforts require using the Level of Effort (LOE) pattern of contractual management rather than any other type.

What is a Level of Effort Project or Program?
A Level of Effort (LOE) program or project is one where the budget for the effort is spent uniformly across the duration of the effort.  Therefore, the amount of development or transformation work is uniform across the effort's duration.  This differs from the Big Bang style of development in that, normally, the Big Bang starts with a small team performing the initiation tasks, builds up the effort during detailed design and component and assembly verification, and reduces the effort during post-roll-out product or service support.  If the product has too many defects, the PM has the ability to add personnel and use up the budget faster.

With a LOE project or program, the PM does not have that ability.  Instead, if the design team has underestimated the complexity or risk in meeting a requirement, the team has the ability to refactor the requirement into two or more requirements.  It can then complete the first of these during the current segment and the others during later segments.  The fact that I'm using the term segment implies that I think an LOE effort can only be successfully used with short cycle efforts, each segment being a cycle.  So, in short cycle, agile, LOE efforts, the System Requirements are the variable, while the budget and schedule are held as constants.  A minimal sketch of this per-cycle scoping appears below.
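The sketch is my own illustration, not a formal method: the capacity number and point-based effort estimates are invented, but the logic--fixed capacity per segment, customer re-prioritization, and refactoring oversized requirements rather than adding resources--is the one described above.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    priority: int   # lower number = higher customer priority
    effort: int     # estimated effort points

def plan_segment(backlog: list[Requirement], capacity: int) -> list[Requirement]:
    """Fill one fixed-capacity segment with the highest-priority requirements.

    Anything that does not fit falls 'below the line' and waits for a later
    segment; an oversized requirement would be refactored (split) by the
    Systems Engineer rather than given extra budget or schedule.
    """
    selected, used = [], 0
    for req in sorted(backlog, key=lambda r: r.priority):
        if used + req.effort <= capacity:
            selected.append(req)
            used += req.effort
    return selected

# Example: the customer re-prioritizes the backlog at the start of each cycle.
backlog = [
    Requirement("access control", priority=1, effort=5),
    Requirement("input screens + data stores", priority=2, effort=6),
    Requirement("summary report", priority=3, effort=4),
]
print([r.name for r in plan_segment(backlog, capacity=12)])
# -> ['access control', 'input screens + data stores']
```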

Why is it important?
Currently most efforts are still in the Big Bang style because most formal processes are of that type.  These processes include all of the required intermediate artifacts like a project plan, schedule, PDR, CDR, EWBS, IWBS, documentation for PMRs, ECOs, weekly status reports, and so on.  The reason I call these intermediate artifacts (or work products) is that they have no lasting value for the customer.

For example, since LOE projects and programs produce a uniform amount of output for each equal segment of the development or transformation effort, there is little need for metrics like "Earned Value".  This is particularly true in short cycle development and transformation efforts where the customer can see and use the work products.  If the process is properly performed, at the end of each cycle, be it monthly or every 3 months, either the IOC version or an upgrade is rolled out.  It can be rolled out into the operational environment, or in some cases into a preproduction environment (an environment where the customer can use the new system in parallel with the old system).  Since the customer can use the IOC or upgraded system, what is the purpose of metrics like "Earned Value" (whose goal is to show progress)?

Changes in Roles and Responsibilities of the Team
If, as I've experienced, there is little need for most intermediate artifacts supporting the PM procedures and methods because there is little need for those procedures and methods, then there must be changes in the roles and responsibilities of the program or project's team using a short-cycle, agile, LOE development or transformation process.

First, the process is short-cycle (one to three months) and agile ("not all the requirements are known up front").  Having these two characteristics immediately reduces the role of the PM, as discussed above.  No longer are all of the PM procedures for a PDR, CDRs, EWBS, IWBS, PMRs, ECOs, weekly status reports, earned value reports, and so on necessary; in fact, they are counterproductive in that they reduce the effectiveness and cost efficiency of the effort.

Second, the emphasis is on creating or transforming a product or system to meet the customer's highest priority System Requirements, which may or may not be known at the start of the effort.  The last clause in the prior sentence is the high-level capability statement for agility.  This has two consequences.
  • There is a requirement for capturing new requirements in every cycle of the effort.  This is within the role of the Systems Engineer.
  • There is a requirement for the customer to prioritize the complete set of requirements at the start of each cycle.
These requirements indicate that the Systems Engineer's responsibility for identifying and managing requirements has increasing importance to any development or transformation effort.
[Sidebar:  There are studies indicating that in software development efforts 10 to 20 percent of the software developer's effort is spent on functions not called for in the requirements.  Frequently, the customer asks for these to be removed, which costs more time and resources; none of which produces value for the customer.  But this isn't only a problem for software developers today. A famous example is that at the start of WWII, the M-3 tank came rolling out of the factory with a police siren. No one could explain why, since it wasn't in the requirements.  So, having the entire effort focus on the requirements will frequently greatly reduce the costs of design and development as well as program management.]

Third, the development of functions or services is tied directly to the Functional Requirements, which link through a traceability matrix to the customer's System Requirements (see my post Types of Requirements for definitions) [though, in some smaller software development efforts, this level of decomposition may not be necessary].  Since, by the definition of a requirement, it must have a metric for when it is met, all verification and validation procedures and methods must be traceable to these metrics.  Additionally, with short cycles the risk that the designers/developers/implementers will induce defects greatly increases.  An Induced Defect is one that is created in the process of a roll-out or update of a system.  The key procedure to reduce the number of induced defects is regression testing--performing all of the same verification and validation procedures and methods used on prior releases.  Within short-cycle and agile processes, linking the V&V procedures and methods to the requirements, and regression V&V, emphasize both the importance of the requirements--and therefore the requirements management system--and the role and responsibilities of the Systems Engineer.

Fourth, because, as discussed above, short-cycle, agile processes require LOE program management, the role of the PM is diminished further.  No longer can the PM "control" the effort by adding or reducing resources on a particular task.  Instead, the Systems Engineer must decide in each cycle how many of the highest priority requirements the team can meet during the next cycle.  Consequently, the PM is really only responsible for working with the suppliers and consultants (both external and internal to the organization), ensuring that the processes, procedures, and methods of the short-cycle, agile process are properly executed, and reporting results to higher management.

These changes in roles and responsibilities are a dramatic shift in the culture of development and transformation.

Thursday, July 21, 2011

Real per capita wealth trend

As part of my recent habit of examining trends from the perspective of the individual or household, I have compiled an index of real net wealth per capita. 

The reason for this is to add another perspective to the more general question of how the Australian economy has fared post-GFC.
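For the curious, an index like this can be put together in a few lines. A minimal sketch, assuming quarterly pandas series of nominal household net wealth, CPI, and population are already loaded (the function name and base quarter are illustrative):

```python
import pandas as pd

def real_net_wealth_per_capita(net_wealth: pd.Series,
                               cpi: pd.Series,
                               population: pd.Series,
                               base: str = "2006Q4") -> pd.Series:
    """Deflate nominal net wealth by CPI, divide by population,
    and rebase the result to 100 at the chosen quarter."""
    real_per_capita = (net_wealth / cpi) / population
    return 100 * real_per_capita / real_per_capita.loc[base]
```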

As you can see, the average Australian's real net wealth is exactly where it was at the end of 2006.  Have we really spent four and a half years just treading water?

The interesting relationship is between the trend in real wealth and the trend in retail turnover.  The 2007 peak of per capita wealth also happened to be the end of the growth trend in retail spending.  It is also important to note that in the last decade, home values have comprised around 60% of total household assets, which leads one to conclude that the fate of retail rests heavily on the fate of home prices.

The Sydney housing boom ripple effect

Sydney is different. Since 2003 rents have risen faster than prices. I imagine the rest of the country would find that hard to believe, given their experience. But this is just one piece of evidence to show that the property cycles in Australian cities are non-synchronous.

The past 25 years of data show that the Sydney residential property market is the least volatile, and is always first to boom. In fact, you can chase the price growth ripples from Sydney and Melbourne across the country – to Adelaide, Brisbane, Perth, then Darwin. This might be one reason that such divergent opinions exist in media, academic and professional circles.
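For anyone who wants to quantify the ripple rather than eyeball it, one approach is to find the lag at which another capital's price growth lines up best with Sydney's. A minimal sketch, assuming quarterly price index series for each city (the function name and 20-quarter search window are my own choices):

```python
import pandas as pd

def best_lag(sydney: pd.Series, other: pd.Series, max_lag: int = 20) -> int:
    """Lag (in quarters) at which another capital's quarterly price growth
    correlates most strongly with Sydney's."""
    syd_growth = sydney.pct_change()
    oth_growth = other.pct_change()
    corrs = pd.Series({lag: syd_growth.corr(oth_growth.shift(-lag))
                       for lag in range(max_lag + 1)})
    return int(corrs.idxmax())

# e.g. best_lag(prices["Sydney"], prices["Brisbane"]) returning ~16 quarters
# would be consistent with the 4-year lag noted in the list below.
```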


If we looked only at the above graph, we would note that the two biggest markets, and arguably most attractive cities, have had the least growth since 2000. That seems particularly counterintuitive.

But if we look long term the explanation is clear – Sydney and Melbourne began their major booms years before the other cities, in the mid 1990s.

Each of the charts that follow this post compares the timing of booms in capital cities against Sydney’s booms. This exercise reveals a number of things.
  • Sydney and Melbourne booms in the 80s and 90s started and finished within a year of each other. In fact, their cycles are the most in synch of all markets.
     
  • Brisbane lagged Sydney’s late 1990s boom by 4 years – making it an early 2000s boom. This appears connected to the fact that Brisbane’s 1980s boom lasted about 4 years longer than Sydney’s.
     
  • Adelaide followed Sydney’s lead more closely than Brisbane did in the 1990s, and lagged Sydney more closely in the late 1980s.
     
  • Perth’s 2000s cycle was similar to Brisbane’s, although in the 1980s it had sharper and shorter price rises.
     
  • Darwin is a world of its own - booming when other capitals had prices tracking below trend.
     
  • Brisbane, Melbourne and Perth prices have been ‘catching up’ to Sydney over this 25 year period. This could be because the quality of homes is catching up to those in Sydney, and also due to a convergence of income levels between the cities. 
  • For some reason Adelaide is falling behind the other major cities (it has the lowest long-term growth trend).
     
  • Sydney never falls as far below its trend as the other cities do. My eyeballing suggests that price volatility is lowest in Sydney.
This might have lessons for property investors outside of Sydney. If you are in Brisbane, Perth or Adelaide and follow the Sydney trend a couple of years behind, you will do well. If prices are flat in the major capitals, take your money to Darwin.

What about from 2011 on? Sydney appears below its long term trend, and it rarely drops far below this trend. The other capitals are above their trend and do fall quite far below trend during economic downturns. My personal view is that Sydney stability will continue.

The other question to ponder is whether the trends of this period can validly be applied from now on. Deleveraging is the most important new consideration, and we have seen the dramatic effects this can have on asset values if we simply look to the US and some European property markets.

My expectation is that prices will fall until such time as yields are high enough to be attractive to investors who aren’t expecting capital gains in the near future. To me, this might mean yields get higher, relative to interest rates, than we have seen for 30 years. And for that to happen, prices must fall. Of course, if the RBA drops rates significantly, this will dampen falls, but I doubt it would do more than leave the market grinding out modest growth (i.e. matching inflation) for a few years yet.

Tuesday, July 19, 2011

Economic images

Sometimes I stumble across humourous images and quotes in which I instantly find a deeper meaning. Here are a few recent ones, and my accompanying thoughts.

The first I stumbled across at Bryan Kavanagh's blog (which is worth a read).
What makes it funny is that it is so close to the truth. To me, the deeper meaning is that we have lost an understanding of what real productivity actually is.

The next image can be found all over the web now, but to me provides insights into exactly how new technology integrates into society. 
While we can laugh that the publicly run enterprise is stuck with 1960s technology, to me it says much more. It shows that aggregating many new technologies (computing, flight control, materials etc.) into one much larger and more ambitious technology (the space shuttle) takes a long time. It also shows me that there are lock-in effects. The car has not changed much at all. This is partly because roads and associated infrastructure are still much the same, and drivers are trained to use the same controls in the car itself. This limits the scope for macro improvements in car transport. The same applies to the space shuttle.

I also stumbled across this quote -

As Douglas Adams wrote in 1999, "Anything that gets invented after you're thirty is against the natural order of things and the beginning of the end of civilisation as we know it until it's been around for about ten years when it gradually turns out to be alright really." Yes, the world is different now. Do try to keep up 

This is an important one to keep in the back of our minds when we imagine seeing society deteriorate before our eyes.  I recall that the ancient Greeks worried about the proliferation of written texts, because it meant people no longer needed to remember and recite long passages. Only if you could remember a passage word for word did it show you truly understood its meaning. 

Monday, July 18, 2011

Product Architecture Thinking Versus System Architecture Thinking

Cultural Thinking about Architecture
Until the early 1960s, the discipline of architecture (or functional design) focused on the creation/design/development/implementation of products like buildings, cars, ships, aircraft, and so on.  Actually, outside of buildings, most Architects were called "functional designers", or some such term, to differentiate them from detailed designers and engineers/design analysts.  This is part of the reason that most people associate architecture and architects with the design of homes, skyscrapers, and other buildings, but not with products, systems, or services.  In fact, Architects themselves are having a hard time identifying their role.

In the late 1990s, the US Congress mandated that all Federal Departments must have an Enterprise Architecture in order to purchase new IT equipment and software.  The thrust of the reasoning was that a Department should have an overall plan, which makes a good deal of sense.  I suspect the term "Enterprise Architecture" was chosen to denote the unification of the supporting tooling, though they could have used "Enterprise IT Engineering" in the manner of Manufacturing Engineering, which unifies the processes, procedures, functions, and methods of the assembly line.  And yet, Enterprise Architecture means something more, as embodied in the Federal Enterprise Architecture Framework (FEAF).  The architecture team that created this framework recognized that processes, systems, and other tooling must support the organization's Vision and Mission.  However, it's up to the organization and the Enterprise Architect to implement processes that can populate and use the data in the framework effectively.  And that's the rub.

Functions vs Processes and Products vs Systems
In the late 1990s and early 2000s the DoD referred to armed drones as Unmanned Combat Air Vehicles (UCAVs), then in the later 2000s, they changed the name of the concept to Unmanned Combat Air Systems (UCAS).  Why?

There are three reasons, all having to do with changes in Western culture, the most difficult changes for any organization.  These are: 1) a change from linear process understanding to linear and cyclic, 2) a change from thinking about a set of functions to understanding a function as part of a process, and 3) a change in thinking from product to system.

Linear vs Cyclic Temporal Thinking
Product thinking is creating something in a temporally linear fashion; that is, creating a product has a start and an end.  D. Boorstin, in the first section of his book The Discoverers, discusses the evolution of the concept of time, from its cyclic origins through the creation of a calendar to the numbering of years, to the concept of history as a sequence of events.  To paraphrase Boorstin, for millennia all human thinking and human society were ruled by the yearly and monthly cycles of nature.  Gradually, likely starting with the advent of clans and villages, a vague concept of a linear series of events formed.  Still, the cycles of life remain at the core of most societies (e.g., in the East, the Hindu cycles and the Chinese year; in the West, Christmas, New Year's, and various national holidays).

The concept of history changed cultural thinking from cycles to a progression through a series of linear temporal events (events in time that don't repeat and that cause other events to occur).  Over several centuries this concept of history permeated Western culture; it broke and flattened the temporal cycles into a flat line of events.  This concept, together with data, information, and knowledge in the form of books, meant that Western culture now had the ability to fully understand the concept of progress.  Adam Smith applied this concept to manufacturing, in the form of a process, which divided the work into functions (events) and ended up producing many more products from the same inputs of raw materials, labor, and tooling.

Function vs Process
In Chapter 1 of Book 1 of An Inquiry into the Nature and Causes of the Wealth of Nations (commonly called The Wealth of Nations), Adam Smith discussed the concept of the "Division of Labour".  This is the most important chapter of his book, and the Division of Labor is its most important concept; far more important than "the invisible hand" or any of the others, because this concept of a process made from discrete functions is the basis for all of the manufacturing transformation of the Industrial Revolution.  Prior to this, the division of labor was an immature and informal concept; after, many cottage industrialists adopted the concept or were put out of business by those that did.

Adam Smith demonstrated this with a very simple example, the making of straight pins.  In this example he showed that ten men, each serving in a specialized function, could make vastly more pins in a day than the same men could have made with each performing all of the functions himself.  He called it the division of labor; we call it "functional specialization".

Functional specialization of skills and tooling permeates Western culture and has led to greater wealth production than any prior concept.  Consequently, as Western civilization accreted knowledge, researchers, engineers, and skilled workers became more expert in their specialized functions and increasingly less aware of the rest of the process.

Currently, most organizations are structured by function: HR, accounting, contracts, finance, marketing or business development, and so on.  In manufacturing there are designers (detailed design engineers), engineers (analysts of the design), manufacturing engineers, and other Subject Matter Experts (SMEs).  Each of these functions vies with the others for funding to better optimize its particular function.  And most organizations allocate funding to these functions (or sometimes groups of functions) for exactly this type of optimization.

Unfortunately, allocating funds by function is a very poor way to allocate funds.  There is a principle in Systems Engineering that "Optimizing the sub-systems sub-optimizes the system".  J.B. Quinn, in "Managing Innovation: Controlled Chaos" (Harvard Business Review, May-June 1985), demonstrated this principle, as shown in Figure 1.
Figure 1--Function vs Process Funding

As shown in Figure 1 (at the bottom, where you cannot really see it), for every unit of money invested in a single function, the organization will get, at best, one unit of money in improvement of the total process.  However, an investment that affects N functions of the process yields up to 2(N-1)-1 units of total improvement.  So focusing investment on the process will yield much better results than focusing on the function.  This is the role of the Enterprise Architect, and of the organization's process and systems engineers, using the Mission Alignment process.  While this point was intuitively understood by manufacturing (e.g., assembly line manufacturing engineering) for well over 150 years, and was demonstrated in 1985, somehow Functional Management is not willing to give up its investment decision prerogative.
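A quick worked example of that rule as stated above (the formula is the one quoted; the loop is just arithmetic):

```python
def process_improvement(n_functions: int) -> int:
    """Improvement per unit invested, using the rule quoted above:
    at best 1 unit for a single function, 2(N-1)-1 when the
    investment affects N > 1 functions of the process."""
    return 1 if n_functions == 1 else 2 * (n_functions - 1) - 1

for n in (1, 2, 3, 5, 8):
    print(f"N={n}: {process_improvement(n)} unit(s) of process improvement")
# N=1: 1, N=2: 1, N=3: 3, N=5: 7, N=8: 13 -- the payoff grows with the
# breadth of the investment across the process.
```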

Product vs System
Influenced by The Wealth of Nations, from about 1800 on, industries, first in Britain, then across the Western world, and finally globally, used Adam Smith's concept of a process as an assembly line of functions to create more real value than humankind had ever produced before.  But this value was in the form of products--things.  Developing new "things" is a linear process.  It starts with an idea, an invention, or an innovation.  It continues with product development through to initial production and marketing.  Finally, if successful, there is a ramp-up of production, which continues until the product is superseded by a new one.  This is the Waterfall Process Model.

The organization that manufactured the product had only the obligation to ensure that the product would meet the specifications the organization advertised at the time the customer purchased the product and, in a very few cases, early in the product's life cycle.  Generally, these specifications were so general, so non-specific, and so opaque that the manufacturing company could not be held responsible.  In fact, a good many companies that are over 100 years old exist only because they actually supported their product and its specifications.  Their customers turned into their advertising agency.

This model is good for development (what some call product realization) and transformation projects, but it has two fatal flaws in the long term.  The first (as I discuss in my post Systems Engineering, Product/System/Service Implementing, and Program Management) is that the waterfall process is based on the assumption that "All of the requirements have been identified up front"; a heroic assumption to say the least (and generally completely invalid).  The second has equal impact and was caused by the transportation and communications systems of the 1700s to the 1950s.  This flaw is the assumption that "Once the product leaves the factory it is no longer the concern of the manufacturer."

This second flaw in historical/straight-line/waterfall thinking affects both the customer and the supplier.  The customer had, and has, a hard time keeping the product maintained.  For example, most automobile companies in the 1890s did not have dealerships with service departments; in fact, they did not have dealerships as such.  Instead, most automobiles were purchased by going to the factory or ordering by mail.  And even today, most automobile manufacturers don't fully consider the implications of disposal when designing a vehicle.  So they are thinking of an automobile as a product, not a system or system of systems (which would include the road system and the fuel production and distribution systems).  The flavor of this in the United States is its disposable economic thinking, in everything from diapers to houses (yes, houses...many times people are purchasing houses in the US housing slump and knocking them down to build larger, much more expensive housing...at least in some major metropolitan areas).  Consequently, nothing is built to last; everything is a consumable product.

Systems Thinking and The Wheel of Progress
Since the 1960s, there has been a very slow but growing trend toward cyclic thinking within organizations.  Some of this is due to the impact of the environmental movement and ecosystem models.  More of this change in thinking is due to the realization that there really is a "wheel of progress".  Like a wheel on a cart, the wheel of progress goes through cycles to move forward.
 
The "cycle" of the "wheel of progress" is the OODA Loop Process, that is, Observe, Orient, Decide, Act (OODA) loop.  The actual development or transformation of a system occurs during the "Act" function.  This can be either a straight-line, "waterfall-like" process or a short-cycle "RAD-like" process.  However, only when the customer observes the of the transformed system in operation, orients the results of the observation of the system in operation to the organization's Vision and Mission to determine if it is being effective and cost efficient, then deciding to act or not during the rest of the cycle.  The key difference between product and systems thinking is that each "Act" function is followed by an "Observe" function.  In other words, there is a feedback loop to ensure that the output from the process creates the benefits required and that any defects in the final product are caught and rectified in the next cycle before the defect causes harm.  For example, Ford treated is Bronco SUV as a product rather than a system.  "Suddenly", tire blowouts on the SUV contributed to accidents, in some of which the passengers were killed.  If Ford had treated the Bronco as a system, rather than a product, and kept metrics on problems that the dealers found, then they might have caught the problem much earlier.  Again, last year, Toyota, also treating their cars as products rather than systems, found a whole series of problems.

OODA Loop velocity
USAF Col. John Boyd, creator of the OODA Loop, felt that the key to success in both aerial duels and on the battlefield is moving through the OODA Loop cycle faster than your opponent.  Others have found that this works for businesses and other organizations as well.  This is the seminal reason to go to short cycle development and transformation.  Short cycle in this case means 1 to 3 months, rather than the "yearly planning cycle" of most organizations.  Consequently, all observing, orienting and deciding should be good enough, not optimal, because there is no optimal.  [This follows the military axiom that Grant, Lee, Jackson, and even Patton followed: "Doing something now is always better than doing the right thing later."]  Expect change, because not all of the requirements are known, and even if they were, the technological and organizational (business) environment will change within one to three months.  But remember that the organization's Mission, and especially its Vision, change little over time; therefore the performance metrics, the metrics that measure how optimal the current systems and proposed changes are, will change little.  So these metrics are the guides in this environment of continuous change.  Plan and implement for upgrade and change, not stability--this is the essence of an agile system.
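A toy sketch of the loop, to make the feedback concrete.  Everything here is illustrative (the metric names and numbers are invented); the point is only that every Act is followed by an Observe, so shortfalls feed the next short cycle:

```python
def observe(metrics):
    # In practice: gather measurements from the system in operation.
    return dict(metrics)

def orient(observed, targets):
    # Gap between what the Mission/Vision metrics require and what we see.
    return {name: targets[name] - observed[name] for name in targets}

def decide(gaps):
    # Address the largest shortfall in the next cycle.
    return max(gaps, key=gaps.get)

def act(metrics, focus, improvement=2.0):
    # One short (1-3 month) cycle of development or transformation.
    metrics[focus] += improvement
    return metrics

metrics = {"availability": 95.0, "customer satisfaction": 90.0}
targets = {"availability": 99.0, "customer satisfaction": 99.0}
for cycle in range(3):
    gaps = orient(observe(metrics), targets)
    metrics = act(metrics, decide(gaps))
    print(f"Cycle {cycle + 1}: {metrics}")
```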

This is true of hardware systems as well as software.  For example, in 1954, Haworth Office Furniture started building movable wall partitions to create offices.  Steelcase and Herman Miller followed suit in the early 1960s.  At that point, businesses and other organizations could lease all or part of a floor of an office building, and as the needs of the organization changed, the partitions could be reconfigured.  This made for agile office space, or office systems (and the bane of most office workers, the cubicle), and allows an organization to make the most effective and cost-efficient use of the space it has available.

The Role of the Systems Engineering Disciplines
There are significant consequences for the structure of an organization that is attempting to be highly responsive to the challenges and opportunities presented to it while pursuing its Mission and Vision in a continuously changing operational and technical environment.  It has to operate and transform itself in an environment that is much more like basketball (continuous play) than American football (discrete plays from the scrimmage line, with its downs)--apologies to any international readers for this analogy.  This requires continuous cyclic transformation (system transformation) as opposed to straight-line transformation (product development).

Treating Process in Product Thinking Terms
Starting in the 1980s, after the publication of Phil Crosby's Quality is Free in 1979, the quality movement, quality circles, and the concept of Integrated Product Teams (IPTs, which some changed to Integrated Product and Process Teams, IPPTs) have been attempts to move organizations from a focus on product thinking toward a focus on system thinking.  Part of this was in response to the Japanese lean process methods, stemming in part from the work of W. Edwards Deming and others.  A first international attempt is ISO 9000 quality management (starting in 2002); it remains rooted in Product Thinking, though in transition to Systems Thinking, since it is a one-time, straight-through (Six Sigma-like) methodology, starting with identifying a process or functional problem and ending with a change in the process, function, or supporting system.

Other attempts at systems thinking were an outgrowth of this emphasis on producing quality products (product thinking).  For example, the Balanced Scorecard (BSC) approach, conceptualized in 1987, attempts to look at all dimensions of an organization: it uses four dimensions to measure the performance of an organization and its management, instead of measuring performance on the financial dimension alone.  The Software Engineering Institute (SEI) built layer four, measurement, into the Capability Maturity Model for the same purpose.

In 1990, Michael Hammer began to create the discipline of Business Process Reengineering (BPR), followed by others like Tom Peters and Peter Drucker.  This discipline treats the process as a process rather than as a series of functions.  It is more like the Manufacturing Engineering discipline, which seeks to optimize processes with respect to cost efficiency per unit produced.  For example, Michael Hammer would say that no matter the size of an organization, its books can be closed at the end of each day, rather than by spending two weeks at the end of the business or fiscal year "closing the books".  In another example, you can tell whether an organization is focused on functions or processes by its budgeting model: either a process budgeting model or a functional budgeting model.

Like the Lean concept and, to some degree, ISO 9000, ITIL, and other standards, BPR does little to link to the organization's Vision and Mission, as Jim Collins discusses in Built to Last (2002); or, as he puts it, BHAGs: Big Hairy Audacious Goals.  Instead, it focuses on cost efficiency (cost reduction through reducing both waste and organizational friction, one type of waste) within the business processes.

System Architecture Thinking and the Enterprise Architect
In 1999, work started on the Federal Enterprise Architecture Framework (FEAF) with a very traditional four-layer architecture: business process, application, data, and technology.  In 2001, a new version was released that included a fifth layer, the Performance Reference Model.  For the first time the FEAF links all of the organization's processes and enabling and supporting technology to its Vision and Mission.  Further, if properly implemented, it can do this in a measurable manner (see my post Transformation Benefits Measurement, the Political and Technical Hard Part of Mission Alignment and Enterprise Architecture).  This enables the Enterprise Architect to perform the role that I have discussed in several of my posts and in comments in some of the groups on the LinkedIn site: decision support for investment decision-making processes and support for the governance and policy management processes (additionally, I see the Enterprise Architect as responsible for the Technology Change Management process, for reasons that I discuss in Technology Change Management: An Activity of the Enterprise Architect).  Further, successful organizations will use a Short Cycle investment decision-making (Mission Alignment) and implementing (Mission Implementation) process, for the reasons discussed above.  [Sidebar: there may be a limited number of successful projects that need multiple years to complete.  For example, large buildings, new designs for an aircraft airframe, large ships--all very large construction efforts--while some, like the construction or reconstruction of highways, can be short cycle efforts--much to the joy of the motoring public.]  The Enterprise Architect (EA), using the OODA Loop pattern, has continuous measured feedback as the change operates.  Given that there will be a learning curve for all changes in operation, the Enterprise Architect is still in the best position to provide guidance as to what worked and what other changes are needed to further optimize the organization's processes and tooling to support its Mission and Vision.  Additionally, because the EA is accountable for the Enterprise Architecture, he or she has the perspective of the entire organization's processes and tooling, rather than just a portion, and is in the position to make recommendations on investments and governance.

System Architecture Thinking and the Systems Engineer and System Architect
One consequence of short-cycle processes is that all short-cycle efforts are "level of effort" based.  Level of Effort means a development or transformation effort is executed using a set level of resources over the entire period of the effort.  Whereas in a waterfall-like "Big Bang" process, scheduling the resources to support the effort is a key responsibility of the effort (and the PM), with the short cycle the work must fit into the cycles.  With the waterfall, the PM could schedule all of the work by adding resources or lengthening the time required to design, develop, implement and verify; now the work must fit into a given time and level of resources.  The PM can do neither, because both are held constant.
If, in order to make the process agile, we use the axiom that "Not all of the requirements are known at the start of the effort", rather than the other way around, then any scheduling of work beyond the current cycle is an exercise in futility, because as the number of known requirements increases, some of the previously unknown requirements will be of higher priority for the customer than any of the known requirements.  Since a Mission of a supplier is to satisfy the needs of the customer, each cycle will work on the highest priority requirements, which means that some or many of the known requirements will be "below the line" on each cycle.  The final consequence of this is that some of the originally known requirements will not be met by the final product.  Instead, the customer will get its highest priority requirements fulfilled.  I have found that when this is the case, the customer is more delighted with the product, takes greater ownership of the product, and finds resources to continue with the lower priority requirements.

On the other hand, not fulfilling all of the initially known requirements (some of which were not real requirements, some of which contradicted other requirements) gives PMs, the contracts department, accountants, lawyers, and other finance engineers the pip!  Culturally, they are generally incapable of dealing in this manner; their functions are not built to handle it when the process is introduced.  Fundamentally, making the assumption that "Not all the requirements are known up front" makes the short-cycle development process Systems Requirements-based instead of Programmatic Requirements-based.  This is the major stumbling block to the introduction of this type of process, because it emphasizes the roles of the Systems Engineer and System Architect and de-emphasizes the role of the PM.

The customer, too, must become accustomed to the concept, though in my experience on many efforts, once the customer understands its role in this process, the customer becomes delighted.  I had one very high-level customer who said, after the second iteration of one project, "I would never do any IT effort again that does not use this process."

Sunday, July 17, 2011

Retail in detail

My recent post on broad retail trends might have provided a reasonable picture of the sector as a whole, but retailing is a diverse beast. One aggregate number is insufficient to describe the performance of the sector.

My approach is to examine retail from a household perspective. Rather than look at total turnover in current prices, I will examine real spend per capita in each of the main retailing subsectors. I do this because economic theory has a lot to say about changes to household spending patterns during economic cycles.

Economic theory would suggest that in boom times, retailers of luxury goods would see turnover increase more rapidly than incomes. As Wikipedia explains - In economics, a luxury good is a good for which demand increases more than proportionally as income rises. The reverse should also be true for these goods.
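In other words, a luxury good is one with income elasticity of demand greater than one. A toy worked example (the growth figures are invented for illustration):

```python
income_growth = 0.05     # household incomes up 5%
spending_growth = 0.09   # turnover in a retail category up 9%

elasticity = spending_growth / income_growth
print(f"Income elasticity of demand: {elasticity:.1f}")  # 1.8 > 1 -> luxury good

# The reverse in a downturn: with elasticity 1.8, a 5% fall in income
# implies roughly a 9% fall in spending on the category.
```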

Importantly, retail trends need to be seen in the context of a housing-driven wealth effect. The wealth effect is an increase in spending that accompanies an increase in perceived wealth, rather than spending driven by growth in incomes.

The wealth effect is also behind many of the saving decisions of households. Since 2005 the trend of declining household savings rates was dramatically reversed. We now have a household saving ratio not seen since 1987 (see the RBA’s chart below). This is an important backdrop to the retail story.

These factors are important to consider if you foresee near term home price declines. In this scenario, spending in wealth-driven retail sectors would be expected to fall by more than flat or falling household incomes and increased savings alone would suggest.


Now to the detail.

The graphs below show the performance of key subsectors in retailing. Note the log scales, which mean a straight line indicates a constant rate of growth – the steeper the line, the higher the rate of growth. Note also that this is a real per capita measure, which is indicative of trends in household spending decisions. Quarterly chain volume data is used, with May 2011 current price data adjusted to substitute for June 2011 data. The ABS explains some of the trends in more specific subcategories here (definitely worth reading in the context of this post).
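A quick check of that log-scale reading guide, for anyone unfamiliar with it: a series growing at a constant rate is exponential, so its logarithm is a straight line (numpy sketch, invented growth rate):

```python
import numpy as np

quarters = np.arange(40)
series = 100 * 1.01 ** quarters        # steady 1% growth per quarter

log_slopes = np.diff(np.log(series))   # slope between points on a log scale
print(np.allclose(log_slopes, np.log(1.01)))   # True: constant slope
```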



A few points jump out at me from the graphs. First, household goods (maroon in first graph) have outperformed by a long way, for a long time. This category includes furniture and appliances, hardware and gardening, floor coverings and electrical. This sector also appears to have seen the sharpest shock around the end of 2007 – from having the strongest rate of growth to nearly the weakest. The rising part of the curve might partly be attributed to a greater appetite for expensive furniture and appliances, which is indicative of a luxury good effect. Also important is the impact of the construction boom of the early 2000s which has since collapsed in many areas.

Second, clothing and accessories (green line) was on a declining trend for 14 years until 1997. For the decade after that, the growth rate in this sector was bettered only by household goods, and spending has recovered strongly since the GFC. I’m not exactly sure why this might be the case. Perhaps some readers have experience in this sector.

Food retailing has been the steadiest (as you would expect) with only a slight easing from the growth trend since 2009 (maroon in second graph).

Other retailing (which includes pharmaceuticals, recreational goods, cosmetics and books) appears very sensitive to the housing wealth effect, seeing big spending boosts during the 2002-03, 2007, and 2009 house price booms. Surprisingly, spending has remained strong since the GFC – the only retail sector where this has occurred.

We might attribute some of the recent robustness to the high Aussie dollar. The ABS explains that pharmaceuticals and cosmetics and toiletries are the strongest components of this sector.

Cafe and restaurant spending (orange line) also appears sensitive to the wealth effect, and is noticeably one of the more volatile sectors.

Department store spending has been declining steadily since the end of 2007 (purple line). Anyone who had closely examined this data would not have been so surprised about David Jones’ recent profit downgrade. Spending at department stores is now back where it was in 2003 on a per capita basis. 

Finally, the second graph has the period of 2002-03 circled. This is simply to highlight that all retail sectors grew at abnormally high rates during the house price boom of this period. Indeed, we can see the wealth effect correlation between house prices and retail growth in many sectors in 2007 and 2009, although to a lesser extent.

My near term outlook is for a subdued retail sector. As I have said before, I believe that in these challenging times for retailers, innovation will be the key to staying ahead. New business models that use internet shopping to good effect, with a small physical store presence might be one path for many. Those companies who adapt quickest will benefit.