Friday, April 29, 2011

SOA in a Rapid Implementation Environment

"The farther backward you can look, the farther forward you can see. "
Winston Churchill
Assertion
SOA is particularly well suited to Rapid Implementation and Rapid Transformation, which are outgrowths of RAD processes.

I've divided this post into two sections: first, a brief history of the trends in software coding and development that I've experienced; then, a discussion of how these trends will be amplified when Rapid Application Implementation processes are coupled with SOA.

Background
In my post The Generalized Agile Development and Implementation Process, I indicate that in my experience (over 40 years of it; boy, time goes fast), RAD software implementation processes produce more effective and cost efficient software than any other method. There have been many changes in that time: from binary to machine language, from machine languages to assemblers, from assemblers to procedural languages, from procedural languages to structured code, from structured code to object oriented languages, and from object oriented languages to web services, which have since been extended to service orientation.

Up to Object Oriented, all of these changes were developed to enhance the cost efficiency of creating and maintaining code. For example, going from coding in an assembler to coding in FORTRAN I enabled coders to construct an application much faster by using a single statement to insert a standard set of assembler code into the object deck of the application. No longer did the coder have to rewrite all of the assembler instructions for saving data to a given location on a disk--the instruction, together with disk utilities, took care of ensuring there was adequate space, writing the data, and the associated "housekeeping" activities. The downside to this change was that the object code was much larger, requiring more CPU cycles and memory. Consequently, there were (and are) applications that still require coding in an assembler (e.g., real-time flight controls).

The second evolutionary change was informal: the move to modular code. Because of the limitations in CPU cycles and memory in early computers, and the programming culture of assembler in which code was reused by branching into one section and then another to minimize the number of separate instructions, programs tended toward tangled control flow. With increasing computing power and the adoption of procedural languages like FORTRAN and COBOL, the branching continued, to the point that maintaining the code became infeasible in many cases, depending on the programmer. One way that many programmers, including me, avoided the issue was to use a modular IT system architecture--that is, callable subroutines as part of the code. In some cases the majority of the code was in subroutines, with only the application work flow in the main program--sound familiar?

Anyway, this concept was formalized with the third evolutionary transformation, from procedural to structured programming (also known initially as Goto-less programming). This was a direct assault on programmers who used unconditional branching statements with great abandon. The thought was to reduce the ball-of-tangled-yarn logic found in many programs to increase code readability and therefore maintainability. That's when I first heard the saying, "Intel giveth, structured programming taketh away," meaning that structured code uses a great deal more computing power. At this point, compiler suppliers started to supply standardized subroutine libraries as utilities (a current example of this type of library is Microsoft's Dynamic Link Libraries (DLLs)). Additionally, at this time Computer Aided Software Engineering (CASE) tools began to come on the market. These tools aided the code developer and attempted to manage the software configuration of the application.

The fourth transformation was from structured programming to object oriented development. Object Oriented Development (OOD) is not a natural evolution from structured programming; instead, it's a paradigm shift in the programming concept--notice the shift in terminology from "programming" to "development". OOD owes as much to early simulation languages as to coding. These languages, like Simscript, GPSS, and GASP, all created models of "real-world" things (i.e., things with names, or objects). Each of these modeled things was programmed (or parametrized) to mimic the salient characteristics of a "real-world" thing. OOD applied this concept of objects with behaviors interfacing with each other to "real-world" software. So, for example, a bank might have a class of objects called customers. Each object in the class represents one customer. The parametrized attributes of each customer object would include the customer's name, account number, address, and so on. Each of these customer objects has "behaviors" (methods) that mimic, enable, and support what the customer wants and is allowed to do. For example, the customer object will have deposit, withdrawal, and inquiry methods. The interaction of the input and output methods, together with the objects and attributes, creates the application.
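To make the banking example concrete, here is a minimal sketch in Python; the class, attribute, and method names are illustrative assumptions, not taken from any particular system. A customer object carries parametrized attributes and exposes deposit, withdrawal, and inquiry behaviors as methods.

```python
class Customer:
    """Models one "real-world" bank customer as an object."""

    def __init__(self, name, account_number, address, balance=0.0):
        # Parametrized attributes that mimic the real-world customer
        self.name = name
        self.account_number = account_number
        self.address = address
        self.balance = balance

    # Behaviors (methods) that mimic what the customer is allowed to do
    def deposit(self, amount):
        self.balance += amount
        return self.balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

    def inquiry(self):
        return {"account": self.account_number, "balance": self.balance}


# The interaction of objects and their methods creates the application
alice = Customer("Alice", "001-234", "1 Main St")
alice.deposit(100.00)
alice.withdraw(40.00)
print(alice.inquiry())   # {'account': '001-234', 'balance': 60.0}
```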

A key problem in creating Object Oriented applications lies in the interfaces between and among the methods that support interaction among objects. OOD did not have standards for these interfaces, so sharing objects among applications was hit or miss. Much time and effort was spent in the 1990s on integration among the objects of object oriented applications.

To resolve this interface problem, application developers applied the concepts of HTML that web developers had used so successfully. HTML is based on the Standard Generalized Markup Language (SGML), developed in part by the aerospace industry and the Computer-aided Acquisition and Logistic Support (CALS) initiative to support interoperable publishing of technical documentation--and this industry had a lot of it. At the time (the late 1980s), a Grumman Corporation technical document specialist reported that when the documentation for all aircraft on an aircraft carrier was first loaded aboard, the carrier sank 6". The SGML technical committee came up with standard tagging and standardized templates for document interchange and use. Web development teams latched on to the SGML concepts to enable transfer of content from the web server to the web browser.

Likewise, those seeking to easily interlink objects looked to standardize the interface and its description. The result of this research was XML. The application of this standard to objects creates Web Service style Service Components. These components have a minimum of three characteristics that enable them to work as Web Service style Service Components:
  1. They have persistent, parametrized data mimicking the characteristics of some "real-world" object,
  2. They have methods that create the object's behavior, and
  3. They have interfaces adhering to the XML standard.
Having a great many Web Services is goodness, but it could turn the Internet into the electronic equivalent of a large library with no card catalog. The W3C and OASIS international standards organizations recognized this issue early and created two standards to help address the problem: the Web Services Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI). These standards, together with many additional "minor" standards, are converging on a solution to the "Electronic Card Catalog for Web Services" and/or the "Apps Store for Web Services".
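As a rough illustration of the three characteristics listed above, the sketch below wraps the earlier customer example behind an XML interface using Python's standard library; the element names and message shape are assumptions made for illustration, not any particular WSDL contract.

```python
import xml.etree.ElementTree as ET

# (1) Persistent, parametrized data mimicking a "real-world" object
account_store = {"001-234": {"name": "Alice", "balance": 60.0}}

# (2) A method that creates the object's behavior
def deposit(account_number, amount):
    account_store[account_number]["balance"] += amount
    return account_store[account_number]["balance"]

# (3) An interface adhering to the XML standard: the component accepts
# and returns XML messages rather than language-specific types.
def handle_request(xml_request):
    request = ET.fromstring(xml_request)
    account = request.findtext("accountNumber")
    amount = float(request.findtext("amount"))
    new_balance = deposit(account, amount)
    response = ET.Element("depositResponse")
    ET.SubElement(response, "accountNumber").text = account
    ET.SubElement(response, "balance").text = str(new_balance)
    return ET.tostring(response, encoding="unicode")

print(handle_request(
    "<depositRequest><accountNumber>001-234</accountNumber>"
    "<amount>25.00</amount></depositRequest>"))
# -> <depositResponse><accountNumber>001-234</accountNumber><balance>85.0</balance></depositResponse>
```

Because the request and response are plain XML, any caller that can produce and parse XML can use the component, regardless of the language it was written in--which is exactly what made Web Service style components shareable where earlier objects were not.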

In summary, initially programming was about creating code in as little space as possible. As hardware increased in power (according to Moore's Law), programming and its supporting tools focused on creating more cost efficient code, that is, code that is easier to develop and maintain. This led to the paradigm shift of Object Oriented Development, which began to create code that enabled and supported particular process functions and activities. In total, this code was much more complex, requiring more CPU cycles, more memory, and more storage. Since, by the 1990s, computing power had doubled and redoubled many times, this was not a problem, but linking the objects through non-standard interfaces was. The standards of Web Services provided the answer to this interfacing issue and created the requirements for another paradigm shift.

Now that Service Components (Web Services and objects meeting other interface standards) can enable and support the functions or activities of a process, organizations need these functions to be optimally ordered and structured: to support the most effective process possible, in the most cost efficient manner (as technology changes), and with enough agility to enable the organization to respond successfully to unexpected challenges and opportunities.

Rapid, Agile, Assembly
Even a superb research library with a wonderful automated card catalog is of no value to anyone who has no idea what they are looking for. To use another analogy: my great uncle was an "inventor". But since he didn't know what type of mechanical invention he might want to work on, he would go to auctions of mechanical parts--which meant that he had a shed-sized room filled with boxes of mechanical parts. He didn't know if he would ever use any of them, but he had them just in case.

Many organizations' initial purchases of Service Components (Web Services) have been much like my uncle's acquisition of mechanical parts. Many times, the purchase of Service Components and supporting infrastructure components from multiple suppliers creates as many issues as it resolves, because later versions and releases of "the standards" are incompatible with earlier versions of the same standard, and because many software suppliers interpret the standards to better match their existing products. The result is "the Wild West of Web Services", with supporters of one version of a standard or one product suite "gunning for" anyone that doesn't agree with them. Meanwhile, the organization's IT is cluttered with the equivalent of a Service Component bone yard (which is like an auto junkyard--a place where you can go to potentially find otherwise difficult-to-locate car parts).

To convert these (Service Component) parts into Composite Applications supporting the organization's processes requires a blueprint in the form of a functional design (System Architecture) and an identification of the design constraints on the process and tooling (including all applicable policies and standards). The key is the functional design and its implementation as a Composite Application using Assembly methods; see my posts Assembling Services: A Paradigm Shift in Creating and Maintaining Applications and SOA Orchestration and Choreography Comparison.

In addition to the paradigm shift from software development (coding) to Service Component assembly, there are two major changes in programming when moving from creating Web Services to creating SOA-based Composite Applications. If the processes for these are not well thought out, or have poor enabling and supporting training and tooling, or are poorly executed, then assembling Service Components will be difficult, time consuming, and resource consuming. That means creating Composite Applications becomes Slow Application Development (SAD???) instead of RAD. This happens frequently when first operating the assembly process and seems to follow the axiom that "the first instance of a superior principle is always inferior to a mature example of an inferior principle". Further, many books on the development of SOA-based applications describe processes that are either inherently heavyweight, slow, and resource consuming, or that will produce applications with a great many defects; and rectifying a defect in a product that is being used costs much more than rectifying it in design. Therefore, spending the time and effort to fully develop the process is worth it and will continue to provide a genuine ROI with each use of the process--though the ROI is difficult to measure unless you have already measured a poor process first, or unless you can measure the ROI in a simulation of alternatives.

Here is an outline of the RAD-like implementation process I would recommend. It fits very nicely within The Generalized Agile Development and Implementation Process I discussed. Implementing and operating/maintaining a SOA-based Composite Application is much more than simply coding Service Components into a work flow and then executing it in production. There are several activities required to implement or transform a Composite Application in an effective and cost efficient manner using a RAD-like implementation process.

This implementation process is predicated on at least five items being in place in the organization's supporting infrastructure.
  1. That the implementation team has established or is establishing an Asset and Enterprise Architecture Repository (AEAR).  This should include a Requirements Identification and Management system and repository (where all of the verification and validation (tests) are stored).
  2. That the implementation team is using a CMMI Level 3 conformant process (with all associated documentation).
  3. That business process analysis (including requirements identification and modeling) and a RAD-like Systems Engineering and System Architecture process precede the implementation process.
  4. That the implementation team has access to a reasonable set of Composite Applications modeling tools that link to data in the AEAR.
  5. That the organization's policies and standards are mapped into automated business rules that the process flow can use through a rules engine linked to both the Orchestration engine and the Choreography engine.
Model and Verify the Composite Application's flow Before Assembly
First, create and model the process or work flow of the Composite Application before assembling the Service Components. The key technical differentiator of an SOA-based application from a web service or other type of application is that there are three distinct elements of the SOA application: the work flow, the Service Components (that support the activities and functions of the process), and the business rules that are derived from the organization's policies and standards. This first activity creates the process flow and ensures that all applicable business rules are linked to the flow in the correct manner.
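A minimal sketch of that three-way separation, with all names invented for illustration: the flow is just an ordered list of activities, the business rules live in their own table (a stand-in for a rules engine fed from the AEAR), and the Service Components are plain callables that can be stubbed and modeled before real assembly.

```python
# Business rules derived from the organization's policies (stand-in for a rules engine)
business_rules = {
    "max_payment": lambda ctx: ctx["amount"] <= 10_000,
    "approver_required": lambda ctx: ctx["amount"] < 1_000 or ctx.get("approved", False),
}

# Stubbed Service Components: enough to model the flow before real assembly
def validate_invoice(ctx):
    return {**ctx, "validated": True}

def schedule_payment(ctx):
    return {**ctx, "scheduled": True}

# The process/work flow: an ordered list of steps, each linked to its rules
workflow = [
    ("validate invoice", validate_invoice, ["max_payment"]),
    ("schedule payment", schedule_payment, ["approver_required"]),
]

def run_flow(ctx):
    """Execute the flow, checking the linked business rules before each step."""
    for step_name, component, rule_names in workflow:
        for rule in rule_names:
            if not business_rules[rule](ctx):
                raise RuntimeError(f"rule '{rule}' violated at step '{step_name}'")
        ctx = component(ctx)
    return ctx

print(run_flow({"amount": 750}))
# {'amount': 750, 'validated': True, 'scheduled': True}
```

Because the flow and the rules are data rather than code, they can be modeled and verified before any real Service Component is wired in.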

For those Composite Applications using Orchestration, the entire Composite Application should be statically and dynamically modeled, using models of the Service Components, to verify that the functions are in the proper order, that their interfaces are compatible, that they meet all design constraints, that the Composite Application will meet all of the organization's business rules (which automate the organization's internal and external policies and standards), and that the Composite Application's performance and dependability meet their requirements.

I found that with RAD, performing this type of verification activity can take from one hour for a simple application to two days for a complex application with complete regression testing. Additionally, I found that both software coders/programmers and program management wanted to skip all of the verification activities that I recommended, but for somewhat different reasons.

The developers' reason for not wanting to do it was, according to them, that it is "boring"; actually, it is more "programmer friendly" to ignore verifying that the requirements have been met and that there are no induced defects (defects introduced in the process of updating the Composite Application's functionality), even though that verification is required in a RAD process to ensure reliability. Additionally, it's more fun to program than it is to ensure that the version verified is the version that goes into production.

The reason program managers don't pay attention to the customer's requirements and requirements verification is that earned value procedures--the supposed measurement of a program's progress--are not based on measuring how many of the customer's system requirements have been met through verification and validation (V&V), but on resource usage (cost and schedule) versus the implementer's best guess at how much of each task in the "waterfall" schedule has been completed. Since producing the earned value is the basis for payment by the customer and for the PM's and management's bonuses, that is what they focus on--naturally. Therefore, neither the implementers nor management has any inducement to pay a great deal of attention to ensuring that the customer's requirements are met.

However, with a good RAD process like the Generalized Agile Development and Implementation Process, the customer's system requirements are paramount. Additionally, my experience has been that focusing on the programmatic requirements (cost and schedule) pushes the schedule to the right (it takes longer), while focusing on ensuring the customer's requirements (functions and design constraints) are met pushes the schedule to the left.

Assemble the Composite Application and Verify
Second, once the work flow has been verified, then it is time to link the Service Components to it in the development environment.  The verification, in this case, is to assure that the Service Component behaviors are as expected (the components meet the component requirements), and that the components have the expected interfaces.
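One lightweight way to automate the interface half of that verification, sketched here with invented component and parameter names: compare each Service Component's declared signature against what the verified work flow expects before wiring it in.

```python
import inspect

# What the verified work flow expects of each Service Component (illustrative)
expected_interfaces = {
    "validate_invoice": ["invoice_id", "amount"],
    "schedule_payment": ["invoice_id", "pay_date"],
}

# Candidate components delivered for assembly
def validate_invoice(invoice_id, amount):
    ...

def schedule_payment(invoice_id):      # missing pay_date: should be flagged
    ...

def verify_interfaces(components):
    """Report any component whose signature differs from the expected interface."""
    problems = []
    for name, expected_params in expected_interfaces.items():
        actual = list(inspect.signature(components[name]).parameters)
        if actual != expected_params:
            problems.append(f"{name}: expected {expected_params}, found {actual}")
    return problems

print(verify_interfaces({
    "validate_invoice": validate_invoice,
    "schedule_payment": schedule_payment,
}))
# flags schedule_payment: expected ['invoice_id', 'pay_date'], found ['invoice_id']
```

Behavioral verification (that each component meets its component requirements) would still come from running the component's own tests; the point here is only that interface mismatches can be caught mechanically before assembly.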

Verify the Sub-systems
Third, once components have been assembled into a portion of the system, or a sub-system, these, too, must be verified to ensure that they enable and support the functions allocated to them from the System Architecture (functional design) and meet all applicable design constraints. Again, this can be done in the development environment, and must be under configuration control.

System Validation
Fourth, System Validation means that the current version is evaluated against the customer's system requirements; that is, that it performs as the customer expects, with the dependability that the customer expects, and meets all of the policies and standards of the organization (as embodied in the design constraints). Typically, this step is performed in a staging environment. There are several reasons for doing this, as most configuration management training manuals discuss. Among these are:
  • Separating the candidate application from the developers, since the developers have a tendency to "play" with the code, wanting to tweak it for many different reasons.  These "tweaks" can introduce defects into code that has already been through the validation process.  Migrating the code from the development environment to the staging environment and not allowing developers access to the staging environment, can effectively lock out any tweaking.
  • Migrating the application to the staging environment itself helps ensure the migration documentation is accurate and complete. For example, the order in which code patches are applied to a COTS product can significantly change the behavior of the product.
  • The staging environment is normally configured in a manner identical to the production environment. It uses the same hardware and software configurations, though for large installations the size of the systems is generally smaller than production. Code and product implementers have a tendency to "diddle with" the development environment; very often I've been part of efforts where the code runs in the development/assembly environment but not in production, because the configurations of the two environments are very different, with different releases of operating system and database COTS software. Keeping the staging environment under configuration control to match the production environment obviates this problem (a small sketch of the kind of check this enables follows below).
The result is an application that rolls out quickly and cleanly into production.
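As a small illustration of that last point, with made-up component names and versions: comparing environment manifests catches development/production drift--different operating system or database releases--before it surfaces as a failed roll-out.

```python
# Illustrative environment manifests; in practice these would be generated
# from the configuration-management records for each environment.
staging = {"os": "RHEL 5.6", "db": "Oracle 10.2.0.4", "app_server": "WebSphere 6.1"}
production = {"os": "RHEL 5.6", "db": "Oracle 10.2.0.5", "app_server": "WebSphere 6.1"}

def config_drift(env_a, env_b):
    """List components whose versions differ between two environments."""
    return {
        component: (env_a.get(component), env_b.get(component))
        for component in set(env_a) | set(env_b)
        if env_a.get(component) != env_b.get(component)
    }

print(config_drift(staging, production))
# {'db': ('Oracle 10.2.0.4', 'Oracle 10.2.0.5')}
```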

Roll-out
Fifth, roll-out consists of releasing the first, or Initial Operating Capability (IOC), version or the next version of the Composite Application. In the RAD or Rapid Application Implementation process, this may occur every one to three months. If all of the preceding activities have been successfully executed, post-roll-out stabilization will be very boring.

Advantages of SOA in a RAD/I Process
There are several advantages to using SOA in a Rapid Application Development/Implementation process including:
  1. The implementer can use a single Service Component or assemble a small group of Service Components to perform the function of a single Use Case. This simplifies the implementation process and lends itself to short-cycle roll-outs.
  2. Since the Service Components are relatively small chunks of code (and generally perform a single function or a group of closely coupled functions), the implementers can take at least the following actions:
    1. Replace one Service Component with another, technologically updated, component without touching the rest of the Composite Application, as long as the updated component has the same interfaces as the original. For example, if a new authentication policy calls for a different authentication component, the replacement can be dropped in without changing the rest of the application (see the sketch after this list).
    2. Refactoring and combining the Service Components is simplified, since only small portions of the code are touched.
    3. Integration of COTS components with components that are created for/by the organization (for competitive advantage) is simplified. While my expectation is that 90 percent or more of the time, assembling COTS components and configuring their parameters will create process-enabling and -supporting Composite Applications, there are times when the organization's competitive advantage rests on a unique process, procedure, function, or method. In these cases unique Service Components must be developed--these are high-value, knowledge-value-creating components. It's nice to be able to assemble these with COTS components in a simple manner.
  3. The Structure of an SOA-based Composite Application has three categories of components: Service Components, Process or Workflow Components, and the Business Rules.  Change any of these and the Composite Application changes.  Two examples:
    1. Reordering the functions is simplified because it requires a change in the process/workflow component rather than a re-coding of the entire application.
    2. If properly created in the AEAR, when an organization's policy or standard changes, the business rules will reflect those changes quickly. Since a well-architected SOA-based system will have a rules engine tied to the rules repository in the AEAR and to the orchestration and choreography engines executing the process or work flows, any change in the rules is immediately reflected in every Composite Application using the rule.
These two examples demonstrate that a Composite Application can be changed by changing the ordering of, or the rules imposed on, the application, without ever writing or revising any code. This is the reason SOA is so agile, and another reason it works so well with short-cycle Rapid Application Development and Implementation processes.
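Here is a minimal sketch (all names invented) of why those changes require no re-coding: the flow ordering and the rules are data, and swapping a Service Component is just re-binding a name to another component with the same interface.

```python
# Service Components with identical interfaces (illustrative stubs)
def basic_auth(user):    return f"{user} authenticated with passwords"
def token_auth(user):    return f"{user} authenticated with tokens"
def fetch_account(user): return f"account data for {user}"

# The Composite Application is data: an ordered flow, bindings, and a rule set
composite_app = {
    "flow": ["authenticate", "fetch_account"],
    "bindings": {"authenticate": basic_auth, "fetch_account": fetch_account},
    "rules": {"session_minutes": 30},
}

def run(app, user):
    for step in app["flow"]:
        print(app["bindings"][step](user), "| session limit:", app["rules"]["session_minutes"])

run(composite_app, "alice")

# A new authentication policy: swap the component, tighten the rule.
# No other part of the application is touched.
composite_app["bindings"]["authenticate"] = token_auth
composite_app["rules"]["session_minutes"] = 15
run(composite_app, "alice")
```

Changing the flow, the bindings, or the rules changes the Composite Application's behavior on its next run without touching the components themselves.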

Tooling
In addition to good training on a well-thought-out Rapid Application Development and Implementation process, the implementation of the process for creating SOA-based Composite Applications requires good tools and a well-architected AEAR. While implementing the process, acquiring the tooling, and instantiating the AEAR may seem like a major cost risk, it really doesn't need to be (see my post, Initially Implementing the Asset and Enterprise Architecture Process and an AEAR). Currently, IBM, Microsoft, and others have tooling that can be used "out of the box" or with some tailoring to enable and support the development and implementation environment that is envisioned in this post. Additionally, I have used DOORS, SLATE, CORE, Requisite-Pro, and other requirements management tools on various efforts and can see that they can be tailored to support this Rapid Application Development and Implementation process.

However, as indicated by the Superiority Principle, it will take time, effort, and resources to create the training and tooling, but the payback should be enormous. I base this prediction on my experience with the RAD process that I developed in 2000, which by 2005 had been used on well over 200 software development efforts, and which significantly increased process effectiveness, the effectiveness of the IT tools supporting the process, and the cost efficiency of the implementations.

(PS--the Superiority Principle is "The first instance of a superior system is always inferior to a mature example of an inferior system").

Thursday, April 28, 2011

Faulty Reasoning

I’ve come across some fine examples of faulty reasoning lately in two key areas.

1. Analysing the economic importance of declining environmental quality, and
2. Assessing the impact of price drops in the Australian property market.

So let us take a closer look.

Pro-urban sprawl advocates (I didn't really know there were so many until just recently) try to shrug off claims of detrimental impacts on agricultural production from urban sprawl due to irreversible land use changes. For example –

Suburban Development is not destroying farmland. Smart growth activists claim farmland is disappearing at dangerous rates and that government needs to protect farmland lest we lose the ability to feed ourselves. As growth expert Julian Simon wrote, this claim is "the most conclusively discredited environmental-political fraud of recent times." United States Department of Agriculture data show that from 1945 to 1992 cropland area remained constant at 24 percent of the United States. Though urban land uses increased, they now account for only 3 percent of the land area of the United States. Today, American farmers produce more food per acre than ever before. In fact, the number of acres used for crops peaked in 1930, but because of the ingenuity and innovation of American farmers, the United States continues to produce more food on less land. (here)

Why is this argument based on faulty reasoning?



(As an aside, I would say that most smart growth advocates are not so extreme as to suggest that sprawl will destroy our ability to feed ourselves, just that sprawl makes it more difficult to produce the same quantity of food that would otherwise be the case.)

The faulty reasoning occurs because Simon compares the past situation with the present situation. He does not compare the present situation to the alternative present situation where urban sprawl did not replace some of the most highly productive agricultural land.

To be clear, the three scenarios are as follows.

a) 1945 levels of agricultural productivity with 1945 area of land under production.

b) 1992 levels of agricultural productivity with 1945 area of land under production, less land lost to sprawl, plus marginal land brought under production.

c) 1992 level of agricultural productivity with 1945 area of land under production plus marginal land brought under production.

Simon compares a) and b), not b) and c) which would reveal the true impact of sprawl on agricultural production. Just because we are now more productive, doesn’t mean that losing some of the better agricultural land is not important.

To apply Simon’s argument in another context, he would argue that having my car stolen today does not matter because I will still be wealthier in ten years than I am today, therefore I should not bother protecting my car from being stolen. It’s absurd. He should compare my wealth in ten years under the stolen and not stolen scenarios.

This same faulty reasoning can be found in many discussions on environmental policy.

The second area where faulty reasoning gets plenty of air time comes from overlooking that financial leverage works in reverse. This is why small percentage drops in housing prices have major impacts on the household balance sheet.

As an example, consider the following comment -

Why is it that when the stock market falls 2.1 per cent in a day it hardly makes the news, yet when property prices fall by the same amount over a three month period, considering that the median price rose by 160 per cent over a ten year period, it is a slump?

The property industry needs to stop with its hysteria
(here)

Okay then, let’s just see.

For the leveraged investor small falls in value are very bad news. Say they borrowed $80,000 to buy a $100,000 home, with annual costs of $1,500 and annual rental income of $5,000. Interest costs on the loan are $6,000, so the investor makes an annual loss of $2,500 (2.5% of the purchase price). If prices increase 10% over two years, they sell for $110,000, repay the $80,000 loan, and after subtracting their $5,000 of accumulated losses are left with $25,000 – a $5,000 gain on the original $20,000 deposit, or roughly a 12% annualised return on equity.

But if prices instead fall by just 5% over those two years the story is very different.

They sell the home for $95,000, repay the $80,000 loan, and have also lost $5,000 over the two years from negative gearing. This leaves them with just $10,000 of their initial $20,000 deposit – roughly a 29% annualised loss. In other words THEY LOST 50% OF THEIR CASH! Remember this when you look at today’s housing data – home values in Brisbane and Perth are both down more than 6% over the past year.

If prices fall just 15% over two years the 80% leveraged investor makes a 100% loss on their equity.
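The arithmetic above is easy to check. Here is a short Python sketch that reruns the three scenarios (a 10% gain, a 5% fall, and a 15% fall) for the same 80%-leveraged investor, using the figures from the example.

```python
def equity_outcome(price_change, years=2, price=100_000, loan=80_000,
                   rent=5_000, costs=1_500, interest=6_000):
    """Return (final equity, total return on deposit, annualised return)."""
    deposit = price - loan
    annual_cash_flow = rent - costs - interest          # -2,500/year (negative gearing)
    sale_price = price * (1 + price_change)
    final_equity = sale_price - loan + annual_cash_flow * years
    total_return = final_equity / deposit - 1
    annualised = (final_equity / deposit) ** (1 / years) - 1
    return final_equity, total_return, annualised

for change in (0.10, -0.05, -0.15):
    equity, total, annual = equity_outcome(change)
    print(f"price change {change:+.0%}: equity ${equity:,.0f}, "
          f"total return {total:+.0%}, annualised {annual:+.1%}")
# price change +10%: equity $25,000, total return +25%, annualised +11.8%
# price change -5%:  equity $10,000, total return -50%, annualised -29.3%
# price change -15%: equity $0,      total return -100%, annualised -100.0%
```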


(There is much more detail to examine here, including tax benefits of negative gearing, opportunity cost of the deposit, transaction costs and so on. But the principle stands.)

So yes, small falls in home values have a very serious effect on household balance sheets due to leverage. It is the same reason that small gains had such a great positive benefit, and why a few years of 10%-plus growth made us all very wealthy in the short term.

Not pretty

Latest Econ Theory Keynes v Hayek rap video

Wednesday, April 27, 2011

Enterprise SOA vs Ecosystem SOA, Private Cloud Computing vs Public Cloud Computing, Choreography vs Orchestration: Much Violent Agreement

Ownership Domain Confusion Reigns
There seems to be a great deal of confusion and much violent agreement regarding terminology within SOA and Cloud Computing. In particular there is confusion within the SOA community regarding the difference between Choreography and Orchestration (see my post SOA Choreography and Orchestration Comparison), and much confusion about Enterprise SOA versus Ecosystem SOA and how they relate to Private versus Public Cloud Computing. Actually, all of these relate in a very straightforward manner, once the definition of each is understood and once the proper architectural framework is applied. That framework is based on the Ownership Domain of the application. The Ownership Domain is defined as the person or persons, or organization or organizations, that own and maintain the application code. If there is one owner/steward for the installation, configuration, and maintenance of the code, then the application resides in a single ownership domain. If the code runs on an IT infrastructure with a single owner/steward, then it is running in a "Private Cloud"; if the code runs within two or more IT infrastructure ownership domains, it is running in a "Public Cloud".

Let's take a look at these concepts using this Ownership Domain framework.

Single Ownership Domain Composite Applications
An Enterprise SOA-based composite application is one where all of the application's Service Components are within a single organizational ownership domain.  Characteristics and attributes of the Enterprise SOA-based application include:
  • It supports a high-value-producing process. Organizations have two types of processes: those that provide some competitive advantage, which produce value for the organization, and supporting processes that are required for the organization to "stay in business"--processes like accounting, HR, and finance.
  • Enterprise SOA uses:
    • Orchestration to assemble the Composite Application, because all of the linkages among the various Service Components are controlled by the personnel within the ownership domain (a minimal sketch of this style follows this list).
    • An ESB to interconnect the Service Components during Composite Application operation.  In fact, the key characteristic of an Enterprise SOA and an Enterprise Composite Application is that it uses the ESB within a single ownership domain.  (while most ESB suppliers add many functions to their ESB product offerings, functionally, the ESB is simply an application layer communications function.)
    • An internal UDDI as the repository of Service Component descriptions (for those using Web Services, this would be the WSDL repository).
  • A Private Cloud enables and supports Enterprise SOA. The caveat is that only the top two layers of the Cloud architecture (SaaS and PaaS) are required to be within the single ownership domain. However, by definition, an Enterprise SOA-based application can never operate within the Public Cloud.
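A toy sketch of the orchestration style described above, with invented service names: a single orchestrator inside the ownership domain knows every Service Component and calls them in a fixed order, which is what makes central control of the linkages possible.

```python
# All components live inside one ownership domain; the orchestrator knows them all.
def check_inventory(order):  order["in_stock"] = True;  return order
def reserve_payment(order):  order["payment"] = "reserved"; return order
def ship(order):             order["status"] = "shipped"; return order

class Orchestrator:
    """Central controller: owns the sequence and every linkage (Enterprise SOA)."""
    def __init__(self, steps):
        self.steps = steps                      # fixed, centrally managed order

    def execute(self, order):
        for step in self.steps:
            order = step(order)                 # direct calls, e.g. over an ESB
        return order

orchestrator = Orchestrator([check_inventory, reserve_payment, ship])
print(orchestrator.execute({"id": 42}))
# {'id': 42, 'in_stock': True, 'payment': 'reserved', 'status': 'shipped'}
```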
Multiple Ownership Domain Composite Applications

An Ecosystem SOA-based composite application is one where the application's Service Components operate across multiple organizational ownership domains. Characteristics and attributes of the Ecosystem SOA-based application include:
  • Typically, it supports organizational supporting and capacity-value-producing processes. These are all processes that are not part of the core competence of the organization (the core competence being what produces the organization's competitive advantage). Personnel within an organization may also implement an Ecosystem SOA-based application for applications supporting ephemeral processes, like a research effort.
  • Ecosystem SOA uses:
    • Choreography to assemble the application, since choreography is much more dynamic during the execution of a Composite Application. For example, if a Composite Application in one ownership domain uses a Service Component in another, and the owner of the second decides to change the location of the Service Component, then without the use of Choreography the Composite Application is very likely to fail because it cannot find the Service Component (a minimal sketch of this runtime discovery follows this list).
    • the UDDI to a much greater degree than for a Composite Application in an Enterprise SOA. Choreography is much more dependent on UDDI repositories for the reasons just described.  However, the process of discovery during execution is likely to significantly reduce the performance and throughput of the Composite Application; so this is a tradeoff between process effectiveness and cost efficiency.
    • The Internet instead of the ESB. The fundamental function of the ESB is to enable and support communications among the Service Components of a Composite Application; that is, it supports application layer communications within an Enterprise (or organization). However, with Ecosystem SOA, interfacing with Service Components across the Internet is the norm. While there is much discussion about cross-domain ESBs, the issue is difficult given the security requirements involved in application layer communications across ownership boundaries.
  • A Public Cloud enables and supports Ecosystem SOA.  The Public Cloud is by definition, a multi-ownership domain entity.  Therefore, the Platform as a Service (PaaS) layer and the Infrastructure as a Service (IaaS) layer perfectly support Ecosystem SOA.
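By contrast, here is a toy sketch of the runtime discovery that choreography relies on; the registry and endpoints are invented stand-ins for a UDDI lookup, not a real API. The caller asks the registry where a Service Component currently lives each time it needs it, so a provider in another ownership domain can relocate the component without breaking the Composite Application--at the cost of the extra lookup on every call.

```python
# Stand-in for a UDDI-style registry shared across ownership domains
registry = {"credit_check": "https://partner-a.example.com/credit/v1"}

def call_endpoint(url, payload):
    # Placeholder for an HTTP/SOAP call across the Internet
    return {"endpoint": url, "result": "approved", "payload": payload}

def invoke(service_name, payload):
    """Discover the service's current location at execution time, then call it."""
    url = registry[service_name]          # discovery step (the performance cost)
    return call_endpoint(url, payload)

print(invoke("credit_check", {"customer": "alice"}))

# The provider relocates the component and updates only the registry entry;
# the Composite Application keeps working without any change on the caller's side.
registry["credit_check"] = "https://partner-a.example.com/credit/v2"
print(invoke("credit_check", {"customer": "alice"}))
```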

Tuesday, April 26, 2011

Milk wars and Anti-Dumping

While there are many questionable assumptions in some economic theories, there are also many solid foundations to economic analysis. One of these was identified by Coles in its submission to the Senate inquiry into milk pricing (available in the Coles factsheet here).

The farm gate price dairy farmers receive is set by the world price because most Australian milk products are exported.


The first implication of this fact is that because prices are set by global markets, domestic buyers cannot buy at prices below the export market price - although they could perhaps be higher.

By following this logic Coles, or any other domestic dairy retailer, cannot exhibit bargaining power as a buyer from milk processors (or distributors). Dairy processors would simply sell all their products abroad, whereas the only alternative for retailers is to buy imported dairy products with associated freight costs. Processors can then bargain the price up to the price of the retailers’ next best alternative of imported products. Thus, even though we are net exporters of dairy products we still pay a retail price for domestic dairy products very close to the retail price for imported dairy products.

And to provide further evidence against dairy industry claims, even if Coles did have market power, one must question why Coles would not already be getting milk for the lowest price anyone would be willing to produce for?

The sceptic in me might even go so far as to suggest that upsetting the political milk cart might have been a publicity strategy for Coles itself. News outlets have told the public that Coles is aggressively dropping prices for months now – all free of charge. You really can't buy publicity like that.

Of even greater concern than the media beat-up, and public perception of danger from falling milk prices, is that the law entrenches protection of local industries from international competitors through anti-dumping laws. As the Productivity Commission describes

Australia’s anti-dumping system seeks to remedy the injurious effects on Australian industry caused by imports deemed to be unfairly priced. It allows local industry to apply for anti-dumping duties on goods ‘dumped’ in Australian markets at prices below those prevailing in the exporter’s domestic market or to apply for countervailing duties on goods that have been subsidised by the government of the country of export. Where the dumping or subsidization results in material injury to local industry, anti-dumping or countervailing duties can be applied.

I would have thought that the Productivity Commission would at least understand that export prices cannot be above domestic prices.

In fact, I would have thought that the Productivity Commission would be more interested in local price impacts when the shoe is on the other foot – when it is our exporters who receive protections against foreign competition. For example, fruit and vegetable growers get massive protection from foreign food under the guise of pest and disease control. These producers get to sell to international markets at the global market price, but can sell to the local market at a higher price since there is no competition from imports. This explains why food in Australia and New Zealand can be so expensive even though we are massive food exporters.

The Productivity Commission’s second justification for anti-dumping has a lot more promise – that foreign governments have subsidised their own producers to give them an unfair advantage in global markets.

Yet I can’t help but feel that foreign producer subsidies have the same effect as foreign natural production advantages, such as mineral deposits, labour prices and skills, long term capital investments and so on. Such natural comparative advantages benefit all trading partners as well. So why are unnatural advantages, achieved through subsidisation of one sector by others in that country, not also beneficial?

These subsidies simply change the comparative advantages in production of different goods. Our response, rather than protect our now disadvantaged producers, should be to adapt to our new relative specialisation.

But what happens when the foreign country removes its subsidy?

The immediate impact is that the total level of production of both trading partners falls as the relative price of the subsidised good increases while the ability of other producers to increase production levels takes time to be realised through capital investments.

This shock will be felt in both countries but one must reason that such impacts will be mitigated by the subsidising country should they decide to cease their current policy. For example, they may phase out the subsidies over a long period of time to allow industry to adjust. The other trading partner also benefits from these decisions.

In this light we could argue that protection of industries from foreign subsidised competitors is a kind of economic insurance against foreign policy risk.

The final problem with anti-dumping laws is the enforcement. Don Boudreaux explains quite clearly how any subsidies other than a direct cash payment or tax break are almost impossible to define.

Another important reason springs from the fact that subsidies are surprisingly difficult to define and identify. The classic case of a government paying a producer a fixed amount of money per unit of output is straightforward. But beyond this blatant method of subsidizing producers, things quickly get fuzzy and foggy.

Does a government subsidize an industry if it cuts that industry's taxes?

How about if the government builds a first-rate system of highways, roads and bridges in proximity to the chief firms in an industry?

Is an industry subsidized if it benefits from a government-financed engineering school?

How about if some of the industry's firms are paid by government to build cutting-edge military equipment?

Are firms that depend upon export markets subsidized if their government provides a top-flight navy to ensure the safety of cargo ships sailing under that country's flag?

Do governments that use tax revenues to maintain law and order and ensure reliable enforcement of contracts subsidize businesses within their borders?

Answering such questions is surprisingly difficult. And, sad to say, if Uncle Sam commits himself to protecting American producers from foreign competition whenever that competition is subsidized, these producers will exploit the ambiguous nature of subsidies as they petition Washington for protection from competitive pressures. They will too freely and loosely allege that their foreign rivals are subsidized.


In the end, these laws and the controversy around milk prices are evidence that economic illiteracy is extremely common, and that political outcomes often override better economic outcomes. But we need to see these economic debates as they are – rent-seeking behaviour of existing producers trying to avoid real competition.

Saturday, April 23, 2011

The Generalized Agile Development and Implementation Process for Software and Hardware

Background
In 2000, I created a Rapid Application Development (RAD) process, based on eXtreme Programming (XP), that was CMMI Level 3 conformant. Conformance means an outside audit was performed and the process conforms to the requirements of the CMMI Level 3 key practices (note that CMMI Levels 4 and 5 are organizational practices rather than process practices, so Level 3 is as high as a process can get).

Advantages of the RAD Process
This RAD process, as I created it, has several advantages over traditional software development approaches, like the "waterfall" process. The following advantages derive from XP.
  1. Works in monthly to six-week cycles, enabling the customer to start using an Initial Operating Capability (IOC) version of the system while the system is under development. This meant that the customer was gaining some value even while the application was being implemented. In the case of one small but unusual asset management, accounting, and accounts payable system, the customer, in the process of loading data into the IOC version, found enough errors in the accounts to recover more than twice the cost of the implementation effort--that is, $100K+ of errors during the implementation of a $50K system.
  2. Enables the customer to get their highest-priority functions. The RAD process does this by allowing the customer to add requirements during each development cycle and to reprioritize the requirements between cycles. This produces much more satisfied customers than "Big Bang" processes like the "waterfall" process.
  3. The development process is agile because it allows the customer to reprioritize their requirements. In a book on requirements, Dr. Ralph Young cites a statistic that only 49% of the real customer requirements are known at the start of the development effort. This RAD process allows the other 51% to be incorporated, while keeping within budget, using this reprioritization function. Since the product always meets the customer's highest-priority requirements, they are much more satisfied with the result.
  4. Focuses the developer on developing rather than documenting--developing is what the software developer is good at, while documenting isn't. Consequently, the developers' morale improved (which meant their effectiveness and cost efficiency improved), so both the quantity and quality of the products improved.
  5. Minimizes the number of intermediate artifacts; that is, it eliminates the need to create documents like Program Management Review (PMR) reports, status reports, design documents, Engineering Change Orders, and so on. This greatly reduces the need for developers and the program manager to create much of the documentation of other formal development processes. In fact, as I created it, this RAD process eliminated the need for PMRs, because the customer was involved with the development team on a weekly basis. And since the process assumes that the program/project doesn't have all of the requirements at the start, and enables the customer to add and reprioritize the requirements throughout the process, the resource requirements are kept level. Therefore, from a programmatic perspective, this is a Level of Effort: the same amount of money is spent each period, which reduces management of cost to nearly zero. And since the implementation schedule is based on an unknown set of requirements, there is no way for a program manager to really create a schedule at the start of the effort and keep to it. Instead, there is a monthly agreed set of requirements that must be fulfilled in development and rolled out--which, by the way, works, but which makes transactional managers and finance engineers paranoid--because they have so little CONTROL. However, this process produced products of both higher quality and much greater quantity, and much more satisfied customers, than the previous CMMI Level 3 processes.
  6. Like XP, I used Use Cases to gather requirements. A Use Case is one way for one type of actor (either a person in a particular role, or another application) to use the application under development. In my experience, and in the experience of many of the 50+ systems engineers that I coached, mentored, or otherwise trained, Use Cases were the best way to gather requirements because they could be documented in ways that meant something to the customer (they can be documented in the customer's own words); most other ways, like "shall statements", communicate much less because they are a transliteration of the customer's requirements. Additionally, in general, Use Cases communicate as much, if not more, to the developer. Since the goal of requirements identification and management is to "communicate the customer's requirements to the developer and to ensure that these requirements are met," using Use Cases, as recommended by XP, proved both highly effective and cost efficient.
The net of all this is more satisfied customers, happier developers, and much more effective and cost efficient processes and products--in this case a real live "Win-Win" situation. One consequence was that by 2005, literally hundreds of small and medium (and even a few large) efforts had adopted this RAD process--and it proved highly successful within its domain (software development).

Limitations of Rapid Application Development (RAD) Processes
The key limitation of the RAD category of formal processes (processes that meet CMMI Level 3 practices) is that they are strictly for software development. Formally, it is not possible to use such a process for software or hardware integration, or for linkage to systems external to the organization--the activities that are most likely to occur in today's SOA and cloud-based environments. Since most organizations purchase software and integrate it into systems rather than develop their own, a short-cycle, agile implementation and transformation process is needed for such efforts. This process should be, for integration, the equivalent of the RAD process that I designed and documented for software development. In fact, it should have the same advantages as the RAD process, but applied to the integration and tailoring of systems.
The Generalized Agile Development and Implementation Process
Starting in about 2005, I gave thought to how this Generalized Agile Development and Implementation (GAID) process would work. What follows is a notional design, together with the challenges in designing such a process. The notional design consists of four loops.
  1. The Requirements Identification and Validation Loop (Approximately 3 Months). The purpose of this loop is to work with the customer to identify their requirements (the Systems Engineering role), to decompose and derive the functions, to structure the functions, to allocate functions to components, to determine the acquisition method for components through trade-off studies (make/buy/rent) (all in the System Architect role), and to validate that the product meets the customer's system requirements for this loop and all prior loops. Additionally, each three-month cycle allows for the acquisition and installation of any needed software, hardware, and networking components.
  2. The System Assembly/Implementation and Verification Loop (Approximately 1 Month).  The purpose of this loop is to assemble the various sub-assemblies (either those developed or those acquired from a supplier) and to verify that they meet their allocated functional and design constraint requirements.  Additionally, this loop enables the customer to see the overall progress of the product and suggest minor changes that will enhance the utility of the product for the customer. For example, this might include the placement of various data windows on a particular display.
  3. The Sub-system Design/Assembly and Verification Loop (Approximately 1 Week). The purpose of this loop is to ensure that components perform the functions specified in the component requirements to the level of dependability required, and that the interfaces among components match for integration. Additionally, this loop gives the customer a first look at the components as they are being implemented, which means they can communicate changes they would like to see to the designers before the product starts full-up assembly and testing. This greatly reduces the cost of the changes.
  4. The Component Construction and Verification Loop (Daily). This is the loop where all of the detailed design, development, and COTS product configuration takes place. While a daily loop might strike the reader as silly, all loops, including this one, consist of engineering functions with a single program management meeting at the start/completion of each loop. In this loop, it is a 15-minute "stand up" meeting. The reason for the meeting is to identify risks and issues, and to have the implementers set aside time to reduce the risk or close the issue before the next stand-up meeting.
If this works like the RAD process I developed (and it should), then after the initial period of learning the process, both the customer and the implementers will find it much more straightforward, simpler, and easier to use, while producing much more effective and cost efficient products.

Thursday, April 21, 2011

Enterprise Architecture, Chaos Theory, Governance, Policy Management, and Mission Alignment

Chaos and Organizational Process Friction
The first paragraph of the article on Chaos Theory found in Wikipedia sums up the theory quite well:
"Chaos theory is a field of study in applied mathematics, with applications in several disciplines including physics, economics, biology, and philosophy. Chaos theory studies the behavior of dynamical systems that are highly sensitive to initial conditions; an effect which is popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible in general.[1] This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved.[2] In other words, the deterministic nature of these systems does not make them predictable.[3][4] This behavior is known as deterministic chaos, or simply chaos."
Notice that the definition emphasizes formal, deterministic systems. These are systems that have a functional design (or system architecture). Unfortunately, most organizations (both private and public) do not have deterministic control functions or systems (see my post IDEF0 Model and the Organizational Economic Model for a description of the control function), so the amount of chaos is much greater than necessary. In this case, the chaos creates intra-organizational friction. This type of friction is akin to having junior officers misinterpret orders (in some cases to the advantage of the junior officer), march down the wrong road in the wrong direction, and so on. Often, the result is a lost battle, a friendly fire incident, or some other disaster.

However, even military organizations have their "peace time" control/process friction issues. History offers many examples. Here is one taken from mid-1941, just prior to the United States' entry into WWII. In his book, General of the Army: George C. Marshall, Soldier and Statesman, Ed Cray writes:
"For all his acumen, Marshall had not yet grasped the sheer complexity of the army he was resuscitating, nor how large it would grow if the United States declared war.  Another member of the secretariat, Omar Bradley, noted, 'I often shudder at some of the antiquated percepts that underlay our thinking.  The quaintest by far was the notion that after we had trained a force of four field armies (over a million soldiers), Marshall himself would lead it, perhaps to Europe, as Pershing had led the American Expeditionary Force to France in 1917.'  For eighteen months the General Headquarters of this putative expeditionary force duplicated the efforts of planners in the Munitions Building, snarling authority and communication into a grand bureaucratic tangle."  
The portions I've underlined of this quotation indicate four types of intra-organizational friction (there are many more) and the result--"a grand bureaucratic tangle." There are many more causes of organizational friction, and the results of this friction are decreased effectiveness of the organization's strategies in achieving the mission, decreased effectiveness of the processes in enabling and supporting the organization's strategies, and decreased cost efficiency of the processes and the enabling and supporting tooling--none of which is good from the perspective of any of the process stakeholders.

Among the worst cases of process friction are large, supposedly integrated, organizations created by mergers and acquisitions. The vast majority of the time, mergers and acquisitions of two large organizations end up with one of three results.
  1. The managers, culture, processes, and tooling of one organization come to dominate the entire organization. This has the unfortunate result of a major drop in the morale of the personnel of the dominated organization, which in itself causes a major increase in intra-organizational friction. Additionally, the transition to new processes and systems causes a great deal of additional process friction, and finally, the learning curve of the personnel and operations of the dominated organization causes significant process friction that, I've seen, results in many of the best and brightest personnel leaving--greatly reducing the value of the dominated organization's operations. These include the dominated organization's transformational managers and skilled personnel, the ones that produce future value.
  2. Two sets of managers, cultures, processes, and tooling, one from each organization, are used concurrently. With respect to process and tooling, this result keeps the process effectiveness of both organizations as it was prior to the merger. Consequently, the new organization will need to enable two potentially redundant sets of processes and support two redundant sets of tools. This may be fine in the case of a "holding company", that is, an organization that owns many divergent sub-organizations. The reason is that the processes and tooling are (or can be) tailored for the requirements of each industry--requirements that come from the standards and policies set within the particular industry (for example, the type of geometry used to depict components). However, if these organizations produce goods and services within the same industrial category, then, from the perspective of cost efficiency, a single process and tool set makes more sense. I'm familiar with one case where, through mergers and acquisitions, the merged organization ended up with 45 or more separate accounting and payroll processes and systems. Since accounting and payroll processes are not tied directly to the production of the good or service, and since each of these 45+ systems requires separate operations and maintenance, this really makes no sense at all; instead, consolidation of these systems makes sense. Still, this is what can happen when the result is maintaining multiple strategies, processes, and tooling within merged organizations in generally the same industry.
  3. Finance Engineers take over.  In this case, both sets of vision, mission, strategies, processes, and tooling are decimated and replaced by the vision and mission of "increasing shareholder value," which is shorthand for "taking as much value out of the organization while putting as little as possible in."  This puts the combined organization on the "going out of business curve."  There are two reasons.  First is "The Borg Syndrome," that is, "Resistance is futile; you, too, will be assimilated."  The "Finance Engineering Department," from the CFO on down--including all transactional managers in both organizations--sets a new vision and mission to increase cost efficiency.  This mission is the same one the farmer had in the story of the dairy farmer.  As the story goes, "A dairy farmer, who was a good financial engineer, noticed that feed was his highest variable cost.  Therefore, he decided to reduce the amount of feed and see what happened to milk production.  To his edification, he found very little change.  Consequently, he continued to reduce the amount of feed.  However, when he reduced it to zero, all the cows died."  In today's economy, "the traders" of Wall St., and all organizations based on the mission of "increasing shareholder value," are identical to the farmer.  When "the numbers are made," the transactional managers get paid large bonuses.  When it looks like the organization "will not make its numbers," the CFO declares war on the "cow feed" or the "seed corn."  Suddenly there is a great reduction in funding for all research and development--especially for apparently "risky" ideas (ideas that the CFO doesn't, or doesn't want to, understand)--and for training of personnel (it's not needed; the personnel only want to boondoggle at some hot spot when attending a standards committee or getting a certification); new equipment can wait until there is a better quarter, and so on.
Even in a single organization, external policies can lead to a significant amount of intra-organizational process friction.  For example, from the 1960s to the early 1980s, during the infancy of information technology, when computers had far less computing power and storage (by one or more orders of magnitude), programs for the US Federal Government required separate computers.  This policy made sense at the time: in addition to the fact that a single small program could use all of the power and storage of an entire computer, the finance engineers of the day had no way to allocate the costs of CPU cycles and storage across more than one program--it was just too hard.  Consequently, Federal contract program and project managers, and other finance engineers, got used to having one or more separate computers for each effort.  This policy of a separate computer for each program or project became institutionalized in the organization's financial thinking, processes, and systems.  As a result of Moore's Law, by the 1990s the minimum computing power a program or project could purchase was many times its needs, so that up to 95 percent of a typical computer's CPU cycles sat idle and less than 10 percent of its storage was used.  However, because of the low initial costs, every program and project still insisted on having its own systems and its own software.  This left the organization with thousands of sparsely used computers.

The problem was, and is, that each system requires installation, hardware and software maintenance, operations management, and IT and physical security.  From the perspective of both IT and the organization's finance engineering, this is highly cost inefficient; from the perspective of the program or project, this is the way the budget, accounting, and bonus systems were set up, so everything is right with the world.  Any change was unacceptable, even though it would significantly reduce the program's or project's overhead costs.

To show how entrenched this thinking was, many of these efforts would underrun their expense budgets throughout the year, then in the last month look for ways to spend the underrun.  One way they would do this was through the purchase of licensed software--software with a significant initial cost and a significant yearly maintenance license fee.  Many programs and projects purchased this type of software based on potential engineering or programmatic needs.  In the following year or more, the program did not use the software because of intervening challenges and opportunities.  Still, it continued to pay the license fee.  When another effort approached the program with an immediate need for the software, the nearly universal answer was no, the program could not part with it "because it might be needed in the future" (and there was no accounting method to allow the program manager to participate in the other effort's bonus program).  Consequently, there were licensed software packages costing tens of thousands of dollars, with thousands of dollars in licensing fees per year, simply sitting on the shelf.

When the corporate IT engineering and architecture team proposed consolidating all of this standard engineering software, the universal answer from the program and project managers was a resounding "No way!"--even though consolidation would have reduced the cost per engineering seat by an order of magnitude while increasing the agility of all participating efforts; that is, they could have used the software on short notice and increased or decreased the number of seats on nearly an hourly basis, paying only for what they used.  The attitude continued to be "We need our own, just in case, and the accounting system will not accept the change."  This inflexibility of thinking and accounting led to much intra-organizational friction, cost, and chaos.

Another cause of friction is any change in vision, mission, or strategies.  This always results in an increase in intra-organizational friction.  Even in the most absurdly informal organization on the "going out of business" curve, there will always be remnants of the successful organization's standards and policies--though they may be difficult to discern.  Optimizing the investments, policies, and standards of a new organization always takes time--much more than most finance engineers expect.

However, when the objective is increased "shareholder value," detailed cost efficiency takes precedence over everything else.  This means the organization will purchase the lowest-cost tools and software--tools that may almost work in its processes (I've seen that on many occasions).  At times, the operating costs and the reduced effectiveness of "the least expensive solution" are such that the organization does, or should, back "the solution" out.  I know of two instances where this cost the organization a great deal: one that cost millions and helped create a situation where they almost lost a billion-dollar contract, and one that cost tens of millions to implement and tens of millions more to back out.  In both instances, management did not want to acknowledge they were wrong, so they threw good money after bad.  And this is what Wall St. traders and institutional investors force on management when they insist on "increased shareholder value," which is a synonym for "more money in my pocket this quarter."
In summary, chaos occurs in at least three key categories of process friction: 1) the alignment of the processes (with their associated activities, functions, and procedures) and tooling, 2) the policies and standards set, and 3) changes in the vision and mission.

Enterprise Architecture and Chaos Management
Enter the Enterprise Architect and his or her processes to help minimize the organization's process friction in achieving its vision and mission.  If the overt or covert mission of the organization is to "create shareholder value," then there is little an Enterprise Architect can do; it is likely that this type of organization will not allow good Enterprise Architecture (Mission Alignment and Governance and Policy Management processes [which enable and support the elimination or amelioration of the three categories of process friction]).

These Enterprise Architecture Processes are:

Sunday, April 17, 2011

Housing supply follow up – more evidence (UPDATED)

I promised to search around for some more evidence that local councils approve far more dwellings than are built. This would go some way to addressing the argument that planning is restrictive, particularly zoning controls and approvals processes.

This report, by the Queensland Office of Economic and Statistical Research, adds to the previous evidence of development approvals for subdivisions greatly exceeding the ability of the market to absorb the new land.  It outlines the number of development approvals for infill multiple-unit dwellings in the pipeline at various stages of approval for South East Queensland.

The telling figure is that there are 48,152 approved new infill dwellings in SEQ, with another 29,014 at earlier stages of approval.  Remembering that there are also 30,566 approved subdivided housing blocks, we have a total approved supply in this region of 88,718 dwellings! Even at its recent peak, population growth in SEQ was only 88,000 people per year.  That makes about 2.5 years' supply of dwellings already approved.
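To make the arithmetic behind that "2.5 years" figure explicit, here is a rough back-of-the-envelope sketch of my own, in Python. The average household size of about 2.5 persons per dwelling is my assumption for illustration, not a figure taken from the report:

# Rough check of the "2.5 years of supply" claim (illustrative sketch only).
approved_dwellings = 88718        # total approved supply quoted above
peak_population_growth = 88000    # people per year at the recent peak
persons_per_dwelling = 2.5        # assumed average household size (my assumption)

dwellings_needed_per_year = peak_population_growth / persons_per_dwelling  # ~35,200
years_of_supply = approved_dwellings / dwellings_needed_per_year           # ~2.5

print(f"Implied demand: {dwellings_needed_per_year:,.0f} dwellings per year")
print(f"Approved supply: roughly {years_of_supply:.1f} years")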

Other government reports have compiled useful information on the potential housing supply available under current planning regimes.

This report notes the following:

“...there appears to be a very low risk of the current broadhectare land not providing at least 15 years supply, particularly when the increased density and infill targets set by the SEQ Regional Plan are considered. Based on the SEQ Regional Plan assumptions for infill, then only 244,000 lots would be required over a fifteen year period”

Moreover, it explains that the stock of approved lots represents 3.3 years' supply.



We can take a look at a national level here and see that current planning schemes have the potential to yield 131,000 new homes per year for a decade from 2008! This excludes the increase in housing stock from developments with fewer than 10 dwellings.  The above-linked OESR report states that smaller developments of 10 or fewer dwellings accounted for 69.5 per cent of projects at June 2010. This suggests the estimate of 131,000 new homes accounts for only about 40% of the actual supply available under current planning schemes.

Even so, they sum up their analysis of land supply by stating that there was approximately 7–8 years' supply of zoned broadhectare land in 2007.

Wednesday, April 13, 2011

If this doesn't blow your mind...

I stumbled across this video on the web.  It's about growing organs... from scratch.  Absolutely amazing! 12 minutes very well spent.

Tuesday, April 12, 2011

Risk homeostasis, Munich Taxi-cabs and the Nanny State


There is an odd coexistence between two conflicting safety policies that may well be pursued by the same accident prevention agency. The first seeks to improve safety by alleviating the consequences of risky behaviour. It may take the form of seat belt installation and wearing, airbags, crashworthy vehicle design, or forgiving roads (collapsible lamp posts and barriers). This policy offers forgiveness for a moment of inattention or carelessness. The second policy seeks to improve safety by making the consequences of imprudent behaviour more severe and includes things such as speed bumps, narrow street passages, and fines for violations. Here, people are threatened into adopting a safe behaviour; a moment of inattention or carelessness may have a dire outcome. 

While these two policies seem logically contradictory, neither is likely to reduce the injury rate, because people adapt their behaviour to changes in environmental conditions. Both theory and data indicate that safety and lifestyle dependent health is unlikely to improve unless the amount of risk people are willing to take is reduced. (here - my emphasis) 

The above passage points out a common logical absurdity, and contains an important lesson for Australians with an overeager obsession with controlling personal choices through ‘nanny state’ regulations. More on the nanny state a little later.

First, it is important to examine the hypothesis of risk homeostasis to properly understand the implication of the opening quote, since it claims that neither of the two contradictory policies aimed at improving safety is effective.

The essential argument of risk homeostasis is that humans have an inbuilt level of risk that they gravitate towards in response to their external environment. If we reduce the risk of an activity, people will compensate by finding other risky activities as a replacement, or by undertaking the activity in a more extreme manner. For example, if we ban smoking tobacco, which doesn’t seem like such a remote possibility, do we really expect smokers to replace their habit with fruit snacks and yoga? Or might they compensate by increasing their alcohol consumption, or perhaps by smoking dope instead?

Risk homeostasis is not to be confused with risk compensation, which suggests that individuals will behave less cautiously in situations where they feel "safer" or more protected, but not that we necessarily return to a predetermined risk equilibrium point.

Improving transport safety is an area where there is strong evidence of risk compensation, and indeed of risk homeostasis.
One reason I am against mandatory bicycle helmets is the increased danger posed by this perception of safety. Not only do helmeted cyclists tend to behave more aggressively, but helmet wearing also changes the behaviour of other road users. It decreases the rate of cycling, and having fewer cyclists on the road increases the danger for the cyclists who remain; it also makes vehicle drivers act more carelessly around cyclists.

This study fitted a number of bicycles with video cameras and ultrasonic sensors to detect the proximity of vehicles as they passed cyclists on the road. It found that vehicles passed helmeted riders about 8.5cm closer than riders without helmets. The suggestion is that drivers perceive there is less risk from clipping a helmeted rider, and that helmeted riders are more experienced and less likely to ride erratically. The study also found that drivers give female cyclists much more room!

Of course, this behaviour is all based on perceptions. Helmets themselves provide minimal protection in a limited range of head collisions, and can exacerbate brain injury in some other types of collision by increasing rotational acceleration.

The best experimental evidence of risk homeostasis is the famous Munich Taxi-cab experiment, where for three years half the taxi cabs in a fleet had ABS brakes and the other half didn’t, and various monitoring and testing took place including fitting all vehicles with accelerometers. 

Among a total of 747 accidents incurred by the company's taxis during that period, the involvement rate of the ABS vehicles was not lower, but slightly higher, although not significantly so in a statistical sense. These vehicles were somewhat under-represented in the sub-category of accidents in which the cab driver was judged to be culpable, but clearly over-represented in accidents in which the driver was not at fault. Accident severity was independent of the presence or absence of ABS.
...
Subsequent analysis of the rating scales showed that drivers of cabs with ABS made sharper turns in curves, were less accurate in their lane-holding behaviour, proceeded at a shorter forward sight distance, made more poorly adjusted merging manoeuvres and created more "traffic conflicts". This is a technical term for a situation in which one or more traffic participants have to take swift action to avoid a collision with another road user. Finally, as compared with the non-ABS cabs, the ABS cabs were driven faster at one of the four measuring points along the route. All these differences were significant. 

To put this experiment in the context of our original two options for reducing risk, ABS brakes are an example of an action that reduces the consequences of risky behaviour. Yet, as the accident figures show, such actions did not decrease total risk.

But the study did not end there; it also found some evidence that the opposite type of strategy, increasing the consequences of risk taking, has quite an effect.

In a further extension of their study, the researchers analysed the accidents recorded by the same taxi company during an additional year. No difference in accident or severity rate between ABS and non-ABS vehicles was observed, but ABS taxis had more accidents under slippery driving conditions than the comparison vehicles. A major drop, however, in the overall accident rate occurred in the fourth year as compared with the earlier three-year period. The researchers attributed this to the fact that the taxi company, in an effort to reduce the accident rate, had made the drivers responsible for paying part of the costs of vehicle repairs, and threatened them with dismissal if they accumulated a particularly bad accident record. 

My favourite example of the different effects of increasing the consequences of risk taking versus decreasing them is here:

Sometimes, students would deny that they drive more recklessly when wearing a seatbelt. Tullock liked to illustrate the idea of offsetting behaviour for them by asking what they'd do if a large spike extended from the steering wheel and pointed directly at their heart. Wearing a seatbelt is a mild form of that effect, but in reverse. Tullock's students came to call the thing the "Tullock Spike"

Australian policy makers could learn some lessons from risk homeostasis. For any Aussie returning from time abroad, the degree of over-regulation can be a shock. One friend recently returned from three years in Paris and said that it was the one thing that enraged them the most about coming home.

Think about it. We can’t buy alcohol at the supermarket, nor drink it in a public place, nor smoke in a building even if the owner is trying to run a cigar-smoking cafe. In fact, the body corporate of an apartment building recently passed a by-law to stop people smoking in their own homes!

With the recent surge in anti-smoking opinion, not only will smokers pay ridiculous taxes, but cigarette producers will need to adorn their prized products with pictures of diseased organs rather than their own brand labels. Luckily, alcohol is exempt from such measures, which might make the Chinese Communist Party blush, yet arguably alcohol is a far greater public health concern.

We can’t buy fireworks or ride without a helmet. There are now calls to ban topless bathing on NSW beaches, and there is the infamous internet filter proposed by communications minister Conroy. Oh, I almost forgot the proposal to ban teachers from using red pens when marking because red is an aggressive colour!

We can’t be expected to navigate construction zones on the street without 17 different warning signs, stop-go lollipop ladies, flashing lights and orange fencing, nor, it seems, are we expected to navigate over treacherous cracks in footpaths, with councils often paying compensation to ‘victims’ of such treachery. We have the slowest speed limits yet many would say the worst drivers. 

Policy makers must believe the average Australian has both multiple personality disorder and signs of schizophrenia: someone who can perfectly judge their own financial risk when taking on massive debt, choose their own career, raise their own children, and run their own business with all its associated risks, yet who, when it comes to the basics of life, like having a beer, tanning your breasts, or navigating the street, becomes a complete moron incapable of rational behaviour.

I have no problem with governments intervening to protect people from the actions of others, but people should be left to make their own choices about their personal safety. The strongest argument in favour of this position is the theory of risk homeostasis. It seems we really can’t save people from themselves. In fact, the parent in me suggests that all this molly-coddling decreases our ability to judge real risks when they arise.

Rant over.