The People’s Economy

There is an enormous amount of energy. Given that the amount of energy in the universe is, for all practical intents and purposes, limitless, it can be said that the real cost of achieving most things is a triviality. The cost of heating the world, or running the world’s transport, or feeding the world might be measured in kilojoules, and if there is an overwhelming abundance of kilojoules, then the cost in terms of energy is comparatively trivial.

Without even having to consider something as huge and remote as a galaxy or the universe itself, the amount of energy available from the Earth and from the Sun is staggering.

We live in an extremely thin layer of organic life, protected above by ozone gas and below by continental crust. This sandwich is, when properly scaled, more delicate than an eggshell. It is more akin to the floating skin on a cup of hot milk.

On the underside is the mantle, a raging ball of hellfire almost three thousand kilometers deep, surrounding a huge iron and nickel ball, the core, a further three and a half thousand kilometers deep, with a temperature of about five and a half thousand degrees Celsius at the middle. Most of the heat contained there (about eighty percent) is actually generated by the decay of various radioactive isotopes. Only the remaining twenty percent comes from the original formation of the Earth. The total energy content of our planet from all this heat is estimated at about 3×10³¹ joules, the equivalent of approximately three thousand billion (3×10¹²) years’ worth of electricity consumed by the USA. At the current world total annual energy consumption of 5×10²⁰ joules, that energy would take some sixty thousand million years to deplete if it were available to us.
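As a quick back-of-envelope check on those figures, here is a minimal sketch in Python using only the numbers quoted above:

```python
# Back-of-envelope check of the figures quoted above.
earth_internal_heat = 3e31        # estimated internal heat energy of the Earth, in joules
world_annual_consumption = 5e20   # approximate world annual energy consumption, in joules

years_to_deplete = earth_internal_heat / world_annual_consumption
print(f"{years_to_deplete:.1e} years")  # -> 6.0e+10, i.e. sixty thousand million years
```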

On the other side, above our organic shell, is a great cosmic microwave oven threatening to incinerate us with everything from ultraviolet to gamma radiation. Arriving from the Sun at the edge of the Earth’s atmosphere is a huge 174 petawatts of solar energy. That is 174 million million kilowatts, arriving in near space alone. If we were ever to harvest the Sun directly, using collectors placed in space close to it, the amount of energy available would be unimaginable to us.

Anything we want can be expressed as requiring a certain amount of energy. To mine silver requires energy. To make the machines that do the mining requires energy. Even if the silver has to be mined by robots that travel to different moons or planets in our solar system, or even if the silver has to be directly synthesized by nuclear fusion, if the method exists then the only price to pay is the energy required to produce and deliver it. However, as we have seen, there is enough energy there to fuel a god-like lifestyle for everyone on our planet.

So what is stopping us?

One immediate objection to the above might be that the energy exists but is not available to us. Perhaps not immediately, but whether this is absolutely true depends on whether the process of striving to make that energy available consumes more energy than it returns. For example, if the process of making a photovoltaic cell consumes more energy than the cell produces, then people will only make photovoltaic cells while our society operates on ‘free’ reserves of energy from stores of fossil fuels. If there is no technology that can make solar, geothermal, tidal or other energy sources available at an energy return higher than the energy consumed in their production, or if no such technology can ever be achieved by us, then we are doomed. Our population will be forced to dwindle as fossil fuels dwindle, until it reaches numbers that can be sustained by agriculture.

As it happens, with current technology, the energy returned on energy invested (EROEI) of photovoltaic solar cells is about thirty. This means that over its lifetime a photovoltaic cell typically produces enough energy, after deductions for its maintenance and upkeep, to make about thirty more photovoltaic cells. What this should mean is that if some group decided to switch to solar power now, devoting half of the power (or half of the cells) to general consumption and the other half to making new cells, then within thirty years one half of the initial investment would have multiplied itself thirty times over whilst the other half provided usable energy. After thirty years there would be fifteen times as many solar cells, and everyone would have been kept warm and lit in the process. After sixty years, repeating the same split, there would be some two hundred and twenty-five times as many photovoltaic cells in that group. In short, all that energy mentioned earlier should be increasingly available to us, energy affordability should be increasing, and our quality of life should be improving dramatically.
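A toy calculation makes the compounding explicit. This is a minimal sketch, assuming a thirty-year cell lifetime, that old cells are retired at end of life, and that half of all capacity is always devoted to making new cells:

```python
# Toy model of the compounding argument above. Assumptions, from the text:
# EROEI of 30 over a thirty-year cell lifetime, with half of all capacity
# permanently devoted to manufacturing new cells.
EROEI = 30
REINVESTED_FRACTION = 0.5

fleet = 1.0  # initial fleet size, in arbitrary units
for years in (30, 60, 90):
    # Each generation, the reinvested half makes EROEI times its own capacity;
    # the old cells reach end of life and are retired.
    fleet = fleet * REINVESTED_FRACTION * EROEI
    print(f"After {years} years: {fleet:.0f}x the original fleet")
# -> 15x, 225x, 3375x
```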

Again, so what is stopping us?

The way we measure cost is not in terms of kilojoules or ergs. Most of the time cost is expressed in terms of a particular currency, as a ‘price’. Complex social interactions and structures result in products and services advertised at certain prices. Those prices help to shape the decisions that people make. Conversely, however, the decisions that people make help to shape those prices. This bidirectional effect holds regardless of whether the system is a free price system or not.

In a planned economy, prices are set by central administration according to both planned changes in production and previous buying behaviour. That information about buyer behaviour in response to previous prices is the feedback into creation of new prices. In this sense there is no fundamental difference between a planned economy with fixed prices and a free price system. The difference is in the multitude, size and involvement of those signaling feedback loops.

Those pricing feedback loops serve to amplify or dampen investment in the things to which they apply: infrastructure develops to support a particular type of product or service, which affects pricing, peripheral products, marketing and so on, attracting further consumption, further speculation and further development of the same type of infrastructure.

A topical example of such a feedback loop is “debt deflation.” This is where people become increasingly motivated to rush to pay off debt in response to a decreasing ability to repay it. The result is a reduction in the circulation of money, which means that less gets spent on goods, which means nominal prices drop and businesses earn less, which means less gets paid in wages and the incentive to clear debt increases still further. It is a particularly nasty condition, because while nominal prices and interest rates drop, real rates rise as wages drop, sales slump and unemployment rises. It is an unregulated decay that describes the Great Depression, and arguably one effect of the Global Financial Crisis in the West.
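The self-reinforcing character of the loop can be seen in a toy simulation. This is a minimal sketch with entirely made-up coefficients; it shows only the direction of the spiral, not its magnitude:

```python
# Toy illustration of the debt-deflation spiral described above -- not a real
# economic model, just the direction of the feedback loop: less spending
# lowers prices and wages, which raises the real burden of fixed nominal
# debt, which diverts still more money from spending to repayment.
nominal_debt = 100.0
wages = 100.0
prices = 100.0

for year in range(1, 6):
    real_burden = nominal_debt / wages   # rises as wages fall
    spending_cut = 0.1 * real_burden     # stronger drive to repay -> less spent on goods
    prices *= 1 - spending_cut           # less spending -> lower nominal prices
    wages *= 1 - spending_cut            # lower revenues -> lower wages
    print(f"Year {year}: prices {prices:5.1f}, wages {wages:5.1f}, "
          f"real debt burden {nominal_debt / wages:.2f}")
```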

Price itself has little meaning other than as something that can be loosely correlated with trends in human behaviour. Price is part of a signaling mechanism. Essentially it is marketing: information about something that hopes to change behaviour in relation to it. A high price may increase demand or it may decrease demand; it depends on the type of product. There is nothing absolute or intrinsic in a Euro or a Dollar, or even a gram of gold. The potential exchange of one Euro is not the same as the potential exchange of ten kilojoules. The latter occurs according to immutable laws of physics; the former is merely a signal that helps to modify individual behaviour, according to sociological, biological and neurological patterns that are not yet well understood at all (yes, both ultimately occur according to immutable laws of physics, but one occurs on a low level where the other does not. For now, physics does not explain buyer behaviour, though no doubt one day it will).

Whether they be empires, territories, mass production infrastructures, energy networks, roads or railways, it may be more useful to think of human society as constantly developing new and tearing down old structures depending on which way the winds are blowing, where price systems supply the underlying but undirected mechanism of those winds, allowing them here and there to billow in gusts of investment amplification and stall into calms of stagnation and decay.

With this bird’s-eye view, it is clear that there is no reason at all why anyone should enjoy a god-like lifestyle. There is no overall regulation, no overall architecture and no general direction. There is nothing to say that those feedback loops should not all switch to dampening and decay at the drop of a hat. In fact, the evidence, and several famed economists, suggest that the system we have had in place for the recent past is not a stable one, and often it is the centrally planned, state regulated, tax funded bailout, war or convenient trip to the moon that drags society out of the inevitable collapses whenever they happen.

Price almost never increases as a product becomes more abundant. The price system is set up in such a way that infrastructure must wane if the output becomes abundant. If something becomes as available as the air, the price drops to nothing. Nobody is interested in selling or buying air. The unfortunate consequence is that the price system becomes self-limiting. Before things become ‘too’ abundant, artificial scarcity is introduced to help maintain sufficient competition between people to justify production. If it could be done, respectable entrepreneurs would find ways to make air scarce in order to sell it. Given that they have succeeded in decorating simple water with nicely distinguishing bottles, and people are willing to spend their earnings on this, it is a wonder that bottled air is not more popular. Another good example of artificial scarcity is the diamond: diamonds are maintained at artificially high prices by restricting the quantities sold. Yet another example is digital music. Now that music can be distributed over the internet, without the physical movement of plastic, the old business model that relied on a monopoly of physical distribution has responded by introducing artificial scarcity in the form of ‘digital rights’ enforcement, rather than proactively adapting to and endorsing new, faster, more efficient business models.

As technology and automation advance, things become more abundant. The only problem is that while investment in automation is made according to price systems, the output product or service cannot be allowed to tend towards a zero price over time. The investor makes his decisions about where to stimulate development by examining expected returns, measured in a currency according to prices. The stimulation in a certain direction – the amount of machinery that can be built or bought – depends on the amount fronted by the investor, expressed in a currency and relative to current price indexes (a factory may have cost ten thousand pounds some time ago, but might cost ten million today). If, for example, the business plan is to provide a cheap, abundant form of energy with a projected exponential decline in price over the next ten years, then given that there may be a competitor, there is no way either could expect to recoup their investment.

While abundance is linked to automation and technology, it will be hamstrung as long as investment in automation is made according to unsophisticated calculations of private ‘profit.’ Further, the output of these automated supply processes must be bought using the fruits of remuneration, yet automation reduces the number of employees. While ideally this would allow people to move on to new areas, in actuality it results in unemployment, inequality, and an increasing number of “non-jobs”: sub-optimal shuffling and time-wasting with little fulfilment or purpose aside from the monthly pay packet.

So, in conclusion, what I hoped to present was the notion that our ‘economy,’ while based on a price system, is a system or structure that relies on scarcity and competition between ourselves. It is a self-perpetuating thing. We delude ourselves into accepting scarcity because when faced with abundance our paradigm breaks down. When told of alternatives to the petrodollar, despite the obvious overwhelming abundance of energy, we are given to understand that other sources of energy make no ‘economic sense.’ It is directly analogous to the establishment silencing Galileo after he described celestial arrangements that were not compatible with the social order of the time. The price system entices us with the promise of conflict-free rationing and distribution of scarce resources, but it ensures that for as long as we adhere to its rules scarcity must continue, and so we will be doomed to inequality, suffering and war. Like any doctrine it offers structure at the expense of flexibility and potential.


Flexibility In the Modern Organisation (part 2)


Automated roles and flexibility

If on an abstract level there is not a great difference between the traditional manual process and the modern automated one, why should there be a difference in flexibility? Is it actually the case that effecting organisational change is getting more difficult?

Compare the previous, traditional process with today’s computerised equivalent:

[Figure: the warehouse delivery process involving the use of a stock control system.]

Looking at the ‘Update stock records with loaded goods’ process more closely, we see:

[Figure: specifications of the stock level update, manual and automated, in pseudo-code.]

To reiterate and drive the point home, the processes of updating the stock records with loaded goods are in each case fundamentally the same. For the non-technical reader, those specifications of the stock level update in ‘pseudo-code’ (a slightly more readable, informal rendition of a computer program) might be hard to understand, but that does not affect the execution of the process, because they need only be understood by the computers themselves.
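The original diagrams are not reproduced here. As a purely illustrative stand-in, with hypothetical names and fields, such a stock level update specification might look roughly like this in executable form:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Optional

# Purely illustrative stand-in for the pseudo-code specifications mentioned
# above; all names and fields are hypothetical.
@dataclass
class StockRecord:
    quantity_on_hand: int
    last_movement: Optional[datetime] = None

@dataclass
class LoadedItem:
    product_code: str
    quantity: int
    loaded_at: datetime

def update_stock_records_with_loaded_goods(
    stock_records: Dict[str, StockRecord], loaded_goods: List[LoadedItem]
) -> Dict[str, StockRecord]:
    # For each item loaded onto the delivery vehicle, reduce the recorded
    # stock level -- the same rule a clerk once applied to a paper ledger.
    for item in loaded_goods:
        record = stock_records[item.product_code]
        record.quantity_on_hand -= item.quantity
        record.last_movement = item.loaded_at
    return stock_records
```

Whether carried out by a clerk with a ledger or by the function above, the rule is the same; only the executor has changed.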

The key difference between the two processes is in the impact on the processes needed to change those processes themselves. In the past, changing processes involved telling people the new rules. Now they involve telling people to tell computers the new rules.

Changing processes where tasks have been automated requires the construction of specifications for software developers. Depending on the specific software development methods employed (and there are many), new tasks must be created for analysts, architects, developers and whoever else needs to be involved. This becomes increasingly prohibitive. The very act of automation results in a loss of control over the processes themselves. The processes get carried out, allowing high throughput, but they cannot easily be stopped and changed. Not only that, but over time the areas of the business that the processes cover become lost and forgotten to the business itself. The IT specialists become the domain experts, but effecting change in the domain is beyond their remit.

Let’s assume that our warehouse only ever stocked items in units. A new requirement to receive liquids into tanks then arrives. Tanks and vats are installed in the warehouse, each with its own label. The tanks and vats are general purpose, but it is not permitted for liquids to be mixed, and each container requires a proper cleaning when it becomes empty or its contents change. The process of changing the processes themselves, before automation, might look like this:

1) Plan with warehouse manager for the change.

2) Warehouse manager introduces a new stock ledger called ‘liquid products’, which describes products in terms of containing vat or tank and volume in litres.

3) Test the receipt of stock and reporting from the new ledgers and approve the new ledgers.

4) Update standard procedure documentation, if it exists.

5) Warehouse manager simply adapts his behaviour in response to the change, updating the correct ledgers on receipt of either liquid or unit-based goods.

After automation, the process of introducing such a change can be so elaborate and complex that an entire book could be dedicated to it. The software development processes chosen by an organisation, or in operation in an organisation – the various approval processes, coordination with networking infrastructure, liaison with suppliers, datacenter and storage capacity planning, software design, business analysis and so on – can be mind-bogglingly complicated.

How the experience of age can make the organisation set in its ways

As changes that focus on growth at the expense of flexibility are made, the organisation gets more rigid. Perhaps this is symptomatic of all aging, living things. However, as the fitness trainer says, it is never too late to start exercising.

Here I look in more detail at how companies age, with the aim of helping to regain some of that lost agility.

Overwhelmed by increasing volumes, frightened by mounting costs, attracted by the need for monitoring and higher information processing capacity, and perhaps motivated to keep technical pace with competitors, organisations rushed to automate and deploy systems. As business objectives emerged, in a keenness to fulfil these objectives, systems were extended, customised and complemented with yet more automated systems.

It has been said that this early phase of rapid growth is a hallmark of youth, and that eventually a successful organisation is bound to reach an equilibrium where it can no longer expand organically, its internal complexity reaching the limits of its earning potential. While this may characterise the evolution of many companies, it is somewhat defeatist, in that it makes no attempt to identify an underlying mechanism of that organisational aging process that might be resisted.

Indeed, rapid growth with unmanaged increase in complexity can lead to confusion and quality reduction at any stage in a company’s history. This is clearly illustrated in the recent example of Toyota’s overemphasis on expansion, followed by the unfortunate recall of several million vehicles.

Viewing the organisation as a set of processes, involving activities assigned to roles, we have seen that change itself often leads to a decreasing ability to effect further change, primarily because the detailed process of effecting change is not itself included in the process model under change, or is simply ignored. In other words, change is made without really understanding the consequences. Standard techniques like the “gap analysis” show you the difference between the current and the expected state, but do not include future gap analyses, or the ease and effectiveness of future gap analyses, as a subject of change. Immediate savings or growth opportunities from automating or ‘improving’ something can be perceived with some clarity, but the cost of future changes to the deployed result is uncertain and often simply ignored.

The consequences of ignoring the impact on detailed change processes are manifold:

Automating an area introduces communication barriers

As soon as something is removed from a department to the world of software, change to that process involves at least two departments instead of one.

For example, when the process of updating the record of stocks was done manually and managed by a single department, it was not too difficult to add a new attribute to the stock books describing some new feature, should it have been required. If it had become necessary to record in the stock control books when shelves were last cleaned, it could have been managed by the employees responsible for the stock room or warehouse themselves. Once the process is automated, it is no longer trivial to make procedural changes. Changes to the stock processes now involve a more elaborate software development process.

This software development procedure involves a prohibitive communication overhead, such that small changes are often simply abandoned.

Another consequence is that the details of the processes become lost to the business. Those who really ought to know how, when and why the stock levels are recorded often only do so for a period around the time the software is being designed and deployed. As months and years go by, people leave, software changes, and the precise mechanisms may be forgotten. Although not frequent, it is common across companies to see investments in system rediscovery: essentially re-learning how their business really works.

Automating an area can cause dependencies on specialised technical knowledge

Now that the warehouse information process has been automated, the company is the proud owner of “Commercial Third Party System X.” Although “System X” met ninety-five percent of functional requirements out of the box, it required some customisation by one or more external consultants. The external consultancy was selected on the basis of competency and the likelihood that it would remain in business for the next five or ten years. It is now not possible to make changes to the system, or to its integration with other systems, without involving these consultants. There is also Mr. Woodworm, an in-house technician with knowledge of “System X,” who is the last remaining employee in the company who understands key areas of the installation.

This is a typical scenario for many companies. It has become the norm to perceive line-of-business systems as fragile, indispensable and critical, while those who keep these systems propped up are treated in precisely the same way.

Further, as more changes are introduced, the stovepipe system can evolve. Smaller systems are developed to compensate for deficiencies in the larger ones. Processes evolve simply to move data from one system to the next, because for some reason they cannot talk to one another. These in turn require automation. Efforts are made to consolidate things and cut down complexity (such as attempts at SOA integration), only for the budget to dry up half way through the project or the obstacles to become insurmountable, leaving a mixture of systems aligned with either the new or the old paradigm. This evolution of complexity results in ever more granular domains of specialisation, each with its own experts, and each a stakeholder in any future project for change.

Both internal and external specialists monopolise their knowledge as a job security strategy, and make sure that changes come at a high price.

Costs and risks associated with staff turnover increase

Processes have been changed without regard for the impact on the process of achieving further change. This manifests itself in yet another way: staff turnover, a type of change that is not always given proper consideration, may incur additional costs. This may be because of a need for specialised knowledge, or because the environment has increased in complexity or technicality and requires a steeper learning curve. It may be that the knowledge of how and why things are the way they are is simply no longer there, or is so disparate between teams that it takes longer to compile and learn.

I have seen the almost humorous situation of technology being selected for its capability, without regard for the software’s popularity, only to find that after deployment the internal staff trained in its upkeep and use left the company to set up external consultancies for multiples of their original income. On occasion the consultants sold their services back to the same former employer. Needless to say, change that produces specific solutions to specific problems results in artefacts such as Mr. Woodworm above, who represent their own type of risk and cost in loss of expert knowledge.

Personnel or human resources departments often cannot cope with hiring or firing. In the past, recruitment for a particular field meant finding personable candidates with appropriate education and experience in the business area. Today the business processes are automated, and hiring is more about looking for familiarity with software applications and the various techniques required to customise or operate them. HR departments are not usually conversant with the plethora of technologies in use. This encourages them to push the responsibility of hiring and firing onto the relevant managers, washing their hands of it to some extent. However, this is not the main reason for their detachment.

Prior to computerisation, when those responsible for induction, training, hiring and firing needed most importantly an understanding of the business area, costs associated with candidate performance, and hence HR department performance, were reasonably easy to measure. HR, recruitment or personnel departments could be held accountable, at least partially, for costs associated with training, learning curves and so on. In contrast, today it is practically unheard of for companies to maintain correlations and studies of learning curve, training, hiring or other HR related costs and the impact of technology choices on them. Today, managers insist they need some rare skill, are prepared to pay through the nose for it, and make technology decisions without any involvement of the HR department. The end result is an apathetic HR team that leaves management to deal with training, hiring and firing, while becoming little more than paper-pushers in an organisation that seems more dominated by machines than people.

Use of third party systems means more business critical processes are outsourced

The most accessible example of this is perhaps the small online shop. Many who would like to participate in the trade of goods and services, or already do, find themselves in the situation that traditional knowledge of commerce no longer suffices. In the past one learnt about bookkeeping, stock keeping, buying, selling and the laws and lore of one’s particular business domain. Today it is somewhat different. There are three options for the simple shop owner who wants to sell online: learn to program, hire programmers (external or internal), or use a third party software package with some limited customisability.

Historically, businesses have always been dependent on third parties for the provision of infrastructure, but the actual orchestration of internal operations was very often the responsibility of the business’s key decision makers. Today, however, it seems that more and more are willing to let software run their business. Online marketing, ordering, accounting, stock keeping, bookkeeping, tax calculation and more can all be done by machines. Simply purchase the system, host it somewhere, add liberal sprinklings of creative design and personal connections, then spend the next few years running around at the behest of the system, making sure that the physical order and delivery processes keep up with what the system demands.

When a change becomes necessary, such as adding an entirely new product category, suddenly the business “owner” must either become an adept programmer and web designer, manage programmers, or pay through the nose for application customisations. I can clearly imagine the multitude of modern-day shopkeepers lamenting these very same problems, and I wonder why there is not more of a backlash against the hijacking of their profession.

In larger corporations the same kind of difficulty applies. Entire sections of critical business processes become effectively owned by a third party supplier. In my experience the costs of internally developed systems versus externally supplied and customised systems often vary wildly from expectation. Sometimes a company invests so much into a system, only to discover that it lacks the customisability needed, that the entire business folds. Sometimes huge systems that would take years for large teams to develop can be substituted by smarter systems developed in house at orders-of-magnitude cost reductions (and vice versa). The main problem with outsourcing business processes, particularly by purchasing applications, is that while they meet a need, flexibility in those areas becomes minimal. Further, these systems can quickly become obsolete and/or require frequent upgrades.

Technical solutions to problems of organisational complexity miss the point

To reiterate: the business is a set of processes; businesses change and adapt through processes for change; the business is a set of processes including those processes of realising change. A change to the business results in a change to those processes for change – a change to the business means a change to how it can change. As changes occur, how flexible or how rigid the organisation becomes is determined by the impact on those processes for change. In order to maintain flexibility, the impact on change management and realisation processes must be considered.

The problem is methodological and cultural. Business processes must first be recognised as real artefacts: identified, described and maintained. They must be as descriptive as possible, as opposed to prescribed dogmas, so that the business can be well understood and remain understood. Without this knowledge there is no governance, only the illusion of governance. Once this knowledge is available, it must be made to include the identified processes for effecting change. Any proposed change to the organisation will have an effect on the steps necessary to realise yet further change, and so this total impact must be grasped holistically. With the necessary approach and mindset, it should become possible to take control of organisational flexibility, anticipating which changes lead down the path of ossification and age, and which paths lead away from it.
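As a minimal sketch of this idea (hypothetical types, not a prescription), one might catalogue processes such that the procedure for changing a process is itself a process in the same catalogue:

```python
from dataclasses import dataclass
from typing import List, Optional

# Minimal sketch: a process description whose change procedure is itself a
# process, so the impact of any change on future changes stays visible.
# All names are hypothetical.
@dataclass
class Process:
    name: str
    steps: List[str]                            # human-readable activities
    change_process: Optional["Process"] = None  # how this process itself gets changed

stock_update = Process(
    name="Update stock records with loaded goods",
    steps=["Match items to product codes", "Reduce stock levels", "Record movement"],
    change_process=Process(
        name="Change the stock update process",
        steps=["Draft new rule with warehouse manager", "Test against sample data",
               "Update documentation", "Brief staff"],
    ),
)
```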
Consequently, any attempt to address organisational flexibility through technical ideals, such as enterprise SOA integration, homogenised platforms, monolithic packages (including even our online shop software above) and so on, cannot come with any guarantee of success, because they completely miss the point. These company-wide architectural panaceas are attempts at serving technical solutions to problems of structural complexity, but with no obvious connection to organisational flexibility. The assumption that one must follow from the other is naïve, and without this connection it is difficult to understand why there should be any return on investment at all.

For example, the decisions involved in centralising a system of redundant sub-units and ad-hoc ‘glue layer’ data-flows into a SOA workflow engine do not usually address the costs associated with specialised consultancy in the specific workflow engine; do not calculate the risks involved in centralisation into a hub-and-spoke model; do not properly account for the resistance to be expected from domain fiefdoms and expertise monopolies; and, most importantly, do not address the cultural aspects of the organisation that led to the inflexible solutions in the first place.

Loss of quality

Both cause and effect of rigidity is loss of knowledge about systems and processes. People become tentative and hesitant to make changes, and when they do they cannot effectively test the systems. For various reasons the test processes become lax, circumvented, or perceived as too costly to maintain. In some companies experiencing growth, it is the necessity for expedience that causes systems to be deployed without retention of testability, and so quality suffers.

A climate of uncertainty is established when quality and testability are secondary. This uncertainty is accompanied by a resistance to change. This resistance can also result in ‘patches’ and supplementary systems, put in place out of fear of upgrading or replacing the original, resulting in an increasingly complex and less understood stovepipe of an organisation.

De-emphasis of quality results in failure, and organisational rigidity results in poor quality. For some it may seem counterintuitive that stringent Quality Assurance could result in flexibility, but that very emphasis was the key ingredient in the rise of Japanese manufacturing processes:

“Ford Motor Company was simultaneously manufacturing a car model with transmissions made in Japan and the United States. Soon after the car model was on the market, Ford customers were requesting the model with the Japanese transmission over the USA-made transmission, and they were willing to wait for the Japanese model. As both transmissions were made to the same specifications, Ford engineers could not understand the customer preference for the model with the Japanese transmission. Finally, Ford engineers decided to take apart the two different transmissions. The American-made car parts were all within specified tolerance levels. On the other hand, the Japanese car parts had much closer tolerances than the USA-made parts – e.g., if a part were supposed to be one foot long, plus or minus 1/8 of an inch, then the Japanese parts were within 1/16 of an inch. This made the Japanese cars run more smoothly and customers experienced fewer problems.”

The worst organisation in the world

Here I present a fictional case study of a large organisation, demonstrating all the problems of rigidity identified above. This also serves as a summary of the problems I have identified to be addressed in solutions presented later.

For the sake of familiarity, let our stricken company be a shop. So that the example is non-trivial, it has a central, shared warehouse, some branches around town, and each branch has its own small warehouse. All ordering for replenishment of the central warehouse is done from a central department at HQ, and all local ordering is placed on the central warehouse. The shop currently sells a narrow range of products from a small selection of suppliers. All sales are brick and mortar. No online ordering is supported.

Up until recently the branch cash tills generated a spreadsheet of daily sales, which was sent by email as soon as available to HQ for processing. It was decided however that this process could be automated to save some branch staff time. The end result was that the HQ accounts package was integrated with a web service, and custom software was created by in-house technicians to send till data via the web service to the accounts system. The custom software was not completely trivial: there was some cleaning of the data to be done, some recorded sales were not actual sales but goods returns or item cancellations, the till data was expressed in terms of bar codes and had to be matched with a file of product descriptions, and sometimes the data was not available at the expected time and had to be merged with data from the previous day. These are just some of the complications previously handled manually that had to be automated.
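To make the description concrete, a hypothetical fragment of that custom software might have looked something like this (illustrative field names and file formats only):

```python
import csv
from datetime import date

# Illustrative fragment of the custom till-data software described above;
# the file formats and field names are hypothetical.
def prepare_daily_sales(till_rows, product_file="products.csv"):
    # Match bar codes against the file of product descriptions.
    with open(product_file, newline="") as f:
        products = {row["barcode"]: row["description"] for row in csv.DictReader(f)}

    cleaned = []
    for row in till_rows:
        # Returns and cancellations are recorded by the till but are not sales.
        if row["type"] in ("RETURN", "CANCELLED"):
            continue
        cleaned.append({
            "date": row.get("date") or date.today().isoformat(),  # fill missing dates
            "description": products.get(row["barcode"], "UNKNOWN PRODUCT"),
            "amount": float(row["amount"]),
        })
    return cleaned  # ready to send to the HQ accounts web service
```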

Time passed and the technicians who originally wrote that software left the organisation. The software ran with few hiccups and was usually left alone by newer programmers. On the rare occasion that something needed changing, it was tentatively patched with little in-depth understanding and left to continue sending data to HQ.

Much later, as sales volumes picked up, two things became apparent: the selection of products needed widening, and predictive ability would be improved if the sales data could be available to HQ hourly. Suddenly it was realised that the task of simply identifying what work would be involved in realising this change was complicated. No clear answers were immediately available, and it became obvious that investments would need to be made to discover how the business actually operated. Adding insult to injury, putting the change into effect involved much more than just telling someone what to do. It became clear that an elaborate and expensive set of methods would need to be followed in order to successfully document, change, test and deploy a system that was now critical to the operations of the business.

To the dismay of the business owners, it slowly dawned on them that this kind of piecemeal automation of business tasks had been going on for some time in various other parts of the business too. It was even discovered that one enthusiastic technician had implemented a simple stock control system to help manage the small stock room at one of the branches, while a store manager with some knowledge of macro programming in spreadsheet programs had created yet another system to do the same thing for another branch. As these hand-crafted solutions migrated into the care of the technicians, knowledge of the business operations slowly drifted away.

In a desperate attempt to regain some semblance of control of the business, the bosses decided that all stock control for the central warehouse and stock rooms would be dealt with using a single third party system. Cutting a long story short, the implementation of this third party stock control system ended up costing much more than anticipated, because of customisations necessary, integrations necessary, and all the reverse engineering that had to be done just to discover what needed replacing at all. This was further complicated by the resistance put up by the technicians who had grown attached to these home-baked systems their jobs depended on.

Pleased with the results, the shop management, after a significant investment of time and money, became the proud owners of a reasonably obscure system that they did not understand, on which their company was dependent, and which cost consultant rates for changes or advice. Granted, they had a system that was ‘configurable,’ but those who did the configuration were trained experts in that third party system. They took the place of the previous technicians as expertise monopolies. In short, for an illusory sense of control, the management had just overspent and solved nothing.

The HR department is now perplexed. Interviewing for new hires over the last six months has been complicated. Those doing the interviewing no longer understand the skills being advertised. In the past it was simple: you had to have common sense, industry experience, a personable approach and knowledge of the basic administrative systems and processes. Suddenly that is all by the by. The processes are automated, the systems are obscure, the skills are technical, and it is hard to gauge what kind of soft skills are really needed. More importantly, some staff members with a negative influence are hard to replace – they command higher salaries and more important areas of expertise. Further, the expected learning curves are much longer. New hires typically spend three months before even grasping the basics. There is training in these new systems to be scheduled and completed, and then all the local customisations need to be learnt. Trial periods become longer. The HR department in essence washes its hands of hiring and firing and becomes the paper-pushing team it is today typically recognised as. Nobody in management seems to be officially tracking and correlating costs associated with hiring and retention in relation to technology choices, so HR cannot offer real metrics about itself and becomes somewhat apathetic.

Finally, through inadequacies in the understanding of the systems and processes, the approach to introducing change becomes tentative and hesitant. This is a self-reinforcing situation, because in such circumstances it is difficult to test that changes really work, and difficult to see the unintended consequences. This leads to yet further hesitance. Glitches in business operations appear frequently. Quality suffers for both the staff and the customers.

The company exhibits all of the problems identified in the previous sections. It is now a paralysed entity. At worst, no real long term growth is possible and the only way for it to go is down. At best, it will get lucky, generate sales, and in the long term learn to change somehow.

Small organisations

It no longer makes sense to talk about large and small organisations in terms of numbers of employees. An online shop processing thousands of transactions per day can, in theory, operate with only a handful of employees. The uncomplicated business with a small number of staff, running its operations by word of mouth, can be seen as flexible and unburdened by management overhead. But this type of company is either a seminal venture or a long-established firm that occupies a niche and has peaked in terms of growth. In the former case it can follow only three paths: to become another old niche firm; to collapse for one reason or another; or to undergo the kind of fundamental transformations described earlier in order to allow it to scale.

Basically this type of company must either remain a fixture in a small niche, fold, or reform itself entirely in order to scale. This is not a picture of flexibility.

The best organisation in the world

Here I attempt to present a situation in which every problem of flexibility above has been eliminated. This serves to exemplify possible solutions.

First, let us revisit what went wrong with the ‘worst organisation in the world’:

  • Automation of consolidating sales data took an understanding of the sales data and consolidation process away from the general staff awareness.
  • The knowledge of the software itself diminished, and so in effect the whole company forgot about how this area really worked.
  • Introducing change to that area became expensive and uncertain, as it always involved rediscovery.
  • Introducing change required complex software systems development processes, which are expensive and risky.
  • Similar automations had gone on unnoticed, with these ad-hoc results being understood and manageable only by a handful of people.
  • Ad-hoc automated solutions drifted to the technicians for supervision, resulting in an overall drain on the knowledge of processes and some repetition of the problems above.
  • The consolidation of ad-hoc systems into third party systems resulted in expensive dependencies on third party suppliers for business critical processes, and increased the costs related to hiring, training and so on.
  • The HR department became apathetic, their responsibilities were diminished, and they were held accountable for cost increases that were not their doing, because management were not correlating technology choices with personnel costs.
  • Lack of system quality and lack of knowledge of those systems resulted in further lack of quality. Internal processes became fragile, the atmosphere at the office worsened and customer service suffered.

Now let us look at the same picture in negative to start to get some idea of the “Best organisation in the world”:

WORST: Automation of consolidating sales data took an understanding of the sales data and consolidation process away from the general staff awareness.

BEST: It was possible to reduce long term headcount and increase transactional throughput by improving the way sales data was consolidated from tills to HQ, while at the same time keeping this process under the supervision of and fresh in the minds of the relevant business staff.

WORST: The knowledge of the software itself diminished, and so in effect the whole company forgot about how this area really worked.

BEST: The knowledge of the processes was always fresh in the minds of those business line managers responsible for their oversight. The overall picture of the business operations was readily available to anyone who needed to know.

WORST: Introducing change to that area became expensive and uncertain, as it always involved rediscovery.

BEST: Nothing more than trivial rediscovery was ever needed to implement any kind of procedural or structural change.

WORST: Introducing change required complex software systems development processes, which are expensive and risky.

BEST: Introducing change involved lightweight processes that minimised costs.

WORST: Similar automations had gone on unnoticed, with these ad-hoc results being understood and manageable only by a handful of people.

BEST: Either processes were entirely maintainable by anyone with knowledge of the business area, or the process was not automated.

WORST: Ad-hoc automated solutions drifted to the technicians for supervision, resulting in an overall drain on the knowledge of processes and some repetition of the problems above.

BEST: While technicians may have remained involved, at least for infrastructure, no knowledge of processes was lost from the business.

WORST: The consolidation of ad-hoc systems into third party systems resulted in expensive dependencies on third party suppliers for business critical processes, and increased the costs related to hiring, training and so on.

BEST: All systems purchased had to meet the criteria of being well understood, popular platforms.

WORST: The HR department became apathetic, their responsibilities were diminished, and they were held accountable for cost increases that were not their doing, because management were not correlating technology choices with personnel costs.

BEST: Any technology choice was carefully monitored for its long term impact on the HR processes, including hiring, firing, training, etc. A scientific approach was adopted to help ascribe costs appropriately, and to help the business learn from its decisions. The HR department was made a stakeholder in key technology decisions, and was asked for estimates in any technology purchasing decision concerning long term costs.

WORST: Lack of system quality and lack of knowledge of those systems resulted in further lack of quality. Internal processes became fragile, the atmosphere at the office worsened and customer service suffered.

BEST: Change processes always involved necessary testing processes. Processes were transparent, which helped towards a culture of flexibility.

Each one of these possibilities in the ‘Best’ column raises the question, “How?” I will now address each item and present a general answer, and it will become apparent where these solutions are heading.

It was possible to reduce long term headcount and increase transactional throughput by improving the way sales data was consolidated from tills to HQ, while at the same time keeping this process under the supervision of and fresh in the minds of the relevant business staff.

How? The business processes automated and coordinated by machines must be clearly readable and understandable by business process participants and their execution must be clearly visible, just as it was before automation.

The knowledge of the processes was always fresh in the minds of those business line managers responsible for their oversight. The overall picture of the business operations was readily available to anyone who needed to know.

How? The business processes automated and coordinated by machines must be clearly readable and understandable by managers. They must be able to change the automated processes themselves. When processes are changed, the changes must remain clearly readable and understandable by process participants.

Nothing more than trivial rediscovery was ever needed to implement any kind of procedural or structural change.

How? The business processes automated and coordinated by machines must be clearly readable and understandable.

Introducing change involved lightweight processes that minimised costs.

How? Minimisation of costs means understanding what all the costs would be, for each possible change or outcome of change. This is infeasible. What is necessary is to be able to test the impact of change easily, to permit some freedom to experiment. The processes should be clearly readable and understandable, as above, and changes to them should be as simple as directly altering them, just as prior to automation; but testing is necessary, because there will always be unforeseen consequences. Testing should always be part of the change process, making the change process itself bulkier but reducing the overall cost and cultivating an atmosphere of safety and freedom to play.
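A minimal sketch of such a change-with-test, reusing the hypothetical stock update types sketched in the earlier section:

```python
from datetime import datetime

# Minimal sketch of a test that travels with every process change, reusing
# the hypothetical StockRecord/LoadedItem types and update function from the
# earlier sketch. The point is not the test itself, but that it is cheap
# enough to run before and after any alteration to the process.
def test_loading_reduces_stock():
    records = {"A1": StockRecord(quantity_on_hand=10)}
    loaded = [LoadedItem(product_code="A1", quantity=3, loaded_at=datetime.now())]
    update_stock_records_with_loaded_goods(records, loaded)
    assert records["A1"].quantity_on_hand == 7

test_loading_reduces_stock()  # run as part of the change process itself
```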

Either processes were entirely maintainable by anyone with knowledge of the business area, or the process was not automated.

How? First, the incentive structures need to be such that people always incorporate company-wide flexibility into their decision making. When someone decides to create a spreadsheet macro to save time on a daily task, and most of the rest of the organisation is not familiar with spreadsheet macros, they should be aware of the cost to the company of their action, not just the personal benefit to themselves. The next response is in the same vein as above: any automated business process must be clearly readable, understandable and changeable by all relevant parties.

While technicians may have remained involved, at least for infrastructure, no knowledge of processes was lost from the business.

How? The business processes must be clearly readable and understandable by all relevant parties, including those managing the data storage and transmission facilities. Introducing changes to processes that are clearly readable by everyone might be fine, unless the volumes of data simply exceed the capabilities of the network and databases. Planning for changes needs quick assessments by those who manage IT infrastructure. Indeed, IT infrastructure would have its own internal processes subject to the same requirements.

All systems purchased had to meet the criteria of being well understood, popular platforms.

How? The platform must be very popular with an abundance of open documentation, like Windows or Unix for example. The processes automated on these platforms must be readable and understandable by everyone relevant, without creating dependencies on third party suppliers or internal knowledge monopolies.

Any technology choice was carefully monitored for its long term impact on the HR processes, including hiring, firing, training, etc. A scientific approach was adopted to help ascribe costs appropriately, and to help the business learn from its decisions. The HR department was made a stakeholder in key technology decisions, and was asked for estimates in any technology purchasing decision concerning long term costs.

How? The comment is self explanatory, but I feel it is important to emphasise this point: most businesses simply do not understand the personnel related costs that automation or technology choices have imposed on them. Most executives fail miserably to put these two aspects of the business together, and the end result is poor decision making. It is all too common to hear of ROI for an IT choice or ‘strategic technology investment,’ and yet even more common to see a business fold, a department close or an IT project fail for the simple reason that organisations do not and cannot understand the actual cost.

Change processes always involved necessary testing processes. Processes were transparent, which helped towards a culture of flexibility.

This is covered above.

Conclusions

All the above can be summarised:

  • Flexibility is strength and health, the broad ability to respond to changes and adapt, to withstand shocks and survive.
  • Automation of business processes permitted a continuation of organisational growth by allowing increases in transactional throughput.
  • Automation of business processes has resulted in a separation between the business domains and specialised technical domains, where knowledge critical to the decision making of both lies across a difficult division.
  • Any organisation can be described purely in terms of business processes.
  • Business processes have not dramatically changed. What has changed is that the roles assigned to the execution of tasks are increasingly filled by machines.
  • The main change in response to automation is the process of introducing further change. In response to automation, this increasingly involves the processes found in the acquisition or development of software, and is often poorly executed by organisations whose core skill is not software acquisition and development.
  • Automation of business processes very often has all kinds of unwanted side effects – including dependencies on external suppliers, expertise monopolies, loss of business knowledge, communication barriers, aggravated costs and risks in personnel management, decision makers baffled by technical concerns, and degraded quality of services and products.
  • A major issue is that costs cannot be ascribed to personnel or personnel management. Accountability of individuals and consequently HR departments is practically impossible to achieve. HR departments suffer apathy and delegate responsibilities away to departmental managers.
  • Almost all of the above problems are caused by describing processes in ways that only specialists can understand. Process-oriented thinkers have been pushed into IT, and processes have been pushed to IT.

If the basic functions of management are staffing, planning, organising, leading and monitoring, then in most cases automation of business processes has helped with monitoring at the expense of all other functions.

Finally, the solution to all these problems is a cultural emphasis and understanding of flexibility and its benefits, coupled with the right technologies that allow processes to be described and prescribed in ways that all can understand.

Such technologies and initiatives are coming into maturity. It is not my aim here to promote specific instances of them, so I will not mention them. The aim is neither to kill the business architects nor the software architects, but to kill the distinction between the two. The ‘two cultures’ divide between business and information technology needs to end, so that we can all reap the rewards of a flexible workplace, a flexible supply chain, and a working life that is happier and more creative.

Flexibility in the Modern Organisation (part 1)

Frank Szendzielarz, 2010

First draft, April 2010

Introduction

Any organisation can be viewed as a set of processes, where, loosely, a process is composed of smaller processes or activities, each working on inputs and producing outputs for successive activities, and each involving the participation of machines or individuals. Sometimes it is necessary to implement change in those processes, for example because of the addition of new products or services, or as a result of the need to optimise. In particular, computerisation has led to dramatic increases in transactional throughput by automating activities. This increase has come at the expense of organisational flexibility. Here I present my thoughts on how that flexibility has been lost and what can be changed to improve it.
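In code, that loose definition might be sketched as follows (hypothetical activities; a process as a chain of activities, each consuming the output of its predecessor):

```python
# Minimal sketch of the process model above: activities working on inputs
# and producing outputs for successive activities. Names are hypothetical.
def record_request(request):   # performed by a clerk or a machine
    return {"request": request, "recorded": True}

def update_ledger(recorded):   # successive activity, consuming the previous output
    return {**recorded, "ledger_updated": True}

process = [record_request, update_ledger]

output = "customer withdrawal"
for activity in process:
    output = activity(output)
print(output)  # {'request': 'customer withdrawal', 'recorded': True, 'ledger_updated': True}
```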

Is flexibility desirable in an organisation?

Flexibility is a very broad term. It is having choices. It is the ability to steer. It is the capacity to remember and learn. It is a structural property: the ability to survive unexpected shocks without shattering. A plane’s wings, though exhibiting great strength, are designed to retain their shape through flexibility; they remain in the form of wings as a consequence of it. Flexibility is the ability to adapt to a wider range of conditions while remaining a coherent body.

If we look at nature we do not see highly optimised, economical structures. What we do see is redundancy and inefficiency. A typical microcomputer, while able to count to a trillion in no time, will fail catastrophically if just one component is compromised. A brain, by contrast, employs redundancy: it degrades gradually over years, withstands losses, can repair and rewire itself, and operates effectively despite the apparent inefficiency of carrying a great number of normally unused parts.

Some of the great debates on social orders revolve around the issues of centralisation and decentralisation, and in essence these debates are about systems, redundancy and rigidity. A highly centralised system carries the risk of fragility through an absence of redundancy: a critical lynchpin may come unstuck and the whole system may come toppling down. A highly decentralised system carries the risk of breaking apart in response to unexpected events, through simply being unable to remain a coherent body. The self organising flock of birds can suddenly become two, or evaporate into a cloud of lost individuals. The highly centralised system may suffer the cost of long communication and planning cycles, and may find it hard to respond to necessary changes. (Large dinosaurs were once thought to have evolved secondary nerve centres to cope with nervous system signal delays.) On the other hand, the highly decentralised system may be unable to pull in the right direction, or more often may simply be unable to avoid implosion and becoming centralised.

In either case, the system carries the weaknesses of inflexibility. Flexibility is having options and being able to survive. Flexibility is strength.

Then and now

In the past, information management involved recording details of events on paper, filing them in cabinets, manually transcribing from template to template, duplicating documents by hand and physically moving them from department to department. In place of the database management system, there were books of indexes, folders and cupboards. In place of the web user interface showing outstanding daily tasks, there were racks of forms to process. In place of message queues, there were scribes who sat at desks with ink and blotter. Where today there are globally unique identifiers and electronic random numbers, in the past mechanical devices generated codes on the pull of a handle.

For many companies the paper trail was less critical. Organisations were often small and agile enough to conduct their operations by word of mouth. While operations by and large followed typical daily routines, there was sufficient flexibility to deal with the unexpected by ad-hoc adaptation through a series of quick communications between all relevant parties. The paper trail lagged behind in a primarily descriptive role for the benefit of the bean counters and regulators. For many smaller firms this is still the case today.

As companies became larger, either by organic growth or through acquisitions, or as they became more concerned with accuracy of accounting, the informal, word-of-mouth method of management ceased to suffice. Transactional throughput increased and the quality of the paper trail suffered. More information and more calculations than could be humanly managed led to an inability to monitor, and to a sense of lost control. Soon management were compelled to sacrifice flexibility and began to insist on stricter adherence to formal processes, where participants had to produce records as they went, and where the records became as much prescriptive as descriptive, in that messages generated as the result of one task became the instructions to start another.

As time went by, mass production, mass telecommunication or simple company expansion resulted in even greater transactional throughput, and computerisation began to automate the management and production of that ‘paper trail’. The quills, blotters and cabinets vanished, to be replaced by data centres, database clusters, web farms, intranets, and ‘enterprise resource planning’ (ERP) systems.

For most organisations though, one thing has remained comparatively constant: the basic processes themselves. A withdrawal of funds is still a withdrawal of funds to a large extent. The customer arrives at the business boundary, whether that is a physical desk at a branch or an internet banking web server, makes a request, which is first recorded somewhere and then sets off a whole chain of events. Accounts in ledgers get updated. Funds get reinvested. Portfolios change. Audit trails are kept. The actors in these chains may no longer be scribes and cabinets, but in essence the processes remain recognisable and familiar.

Before computerisation, each departmental head was not only responsible for making sure that the physical resources were available for this information processing, but was there to make sure that people carried it out efficiently. This was a comprehensive type of line management that involved understanding and communicating detailed business rules to orchestrate those scribes and cabinets (or stackers and shelves, or what have you) in such a way that the processes made best use of the resources available. The departmental managers had to lay down the information processing procedures and make sure that subordinates put them into effect. Procedural improvements may have meant the literal, physical rearrangement of people and files, while specifying which boxes and batches had to be shipped to which other departments and when.

Today, however, the great majority of establishments find that those processes are conducted by computers, almost entirely automated with only the occasional semi-automated or manual activity. The detailed knowledge of how the business operates no longer lies in the domain of the executives, but in the domain of software specialists. It is also not uncommon to find this knowledge outsourced. It is no longer the departmental manager who lays down these information processing procedures and rules; it is the technical analyst, software architect and software developer. While it may seem that those rules and procedures are dictated by business operatives and management, as a general rule this is not the case. The actual behaviour of line-of-business systems, and thus the behaviour of large parts of the organisation, is usually gathered from various sources by analysts into a uniquely assimilated, integrated view committed to software code and configuration. If an organisation can be described as a set of business processes, that description is now most often readable only by programmers, technical analysts or others from the world of IT.

While IT departments may nominate a domain expert as representative and facilitator of communication, that expert or analyst is rarely capable of coordinating changes in the software itself. Similarly, those IT experts who are very often also business domain experts (the architects, the lead developers, the technical analysts) are rarely authorised to make business process changes.

The end result, after many decades of computerisation and IT infrastructure development, is that companies are now capable of high transactional throughput, but with lower flexibility than the firms of days gone by, and always much less than the informal, ‘word-of-mouth’ small company. Putting change into effect can be a daunting and demoralising undertaking that involves many stakeholders and is often unsuccessful. IT is commonly perceived as expensive and unreliable in its delivery track record. Even the small business of today, with its online distribution, often finds itself in the stifling position of having to learn to program web applications itself, outsource the majority of its operations to a busy and expensive team of software developers, or tie itself down to a limited and rigid software package.

Specifications upon specifications upon specifications

It could be said, after a skim through the above, that the obvious solution to inflexibility would be to authorise the IT experts, analysts and architects to run the business processes. Let them decide what gets automated and what gets replaced, and let them make key decisions in areas like product development. After all, the understanding of overall operations lies within the software and systems realm.

This is infeasible. These systems are nearly always implemented using programming languages and other computing artefacts that require highly specialised knowledge. Those technologies are subject to constant change and improvement and they require individuals dedicated to their field.

Conversely, that same requirement of special technical knowledge is a barrier to the business departments putting those changes into effect themselves. Managers overseeing life insurance processes are usually not programmers. The sad truth is that those very same managers are also denied up-to-date knowledge of the pertinent business processes and rules they should be familiar with. The big picture and the small details are often lost in the software code, and it is not uncommon to find IT departments trying to reverse engineer existing systems to learn and describe how the business actually operates. These exercises in rediscovery are a painful consequence of some of the problems presented later here, including staff turnover where there is inadequate documentation or quality control.

With these seemingly unbreakable barriers, changes are made painstakingly through an elaborate process of specification. Change originates from a primary business goal and proceeds as sequences of delegations, fanning out like a tree, with each boundary involving ever more remote descriptions of what needs to be done. Each different role attempts to translate incoming specifications into outgoing specifications, sometimes expressed in entirely different terms.

The illustration shows how a primary business goal fans out into further goals with each specification becoming more remote in terms of language and skills.

Let us consider an example. A large retailer has identified a problem: locally administered promotional activities, such as the advertising of certain products at specific stores by local store management, are causing unexpected demand on warehouses and resulting in blips of stock shortage. This causes ‘demand noise’ that amplifies back up the chain. The retailer wants to solve this problem, improving the supply chain without having to curtail the local promotional activities.

This primary goal leads to the creation of a project, and usually some kind of governing body, a steering committee, is assembled involving stakeholders from various departments. The project produces a gap analysis of processes as they currently stand and processes as they should be, after long consultation by analysts with various employees. Each business unit is made responsible for putting the necessary changes into effect: the local store manager needs better reports on stock levels in his local warehouse and on lead times for centrally placed orders; the local warehouse manager needs to know about upcoming promotions, central stock levels and planned ordering; the central buyers need better information about upcoming promotions and expected demand increases, and will be notified daily of such things; and the central buyers need to know current stock levels from local up to central warehouses, with expected depletion times, and should have reports available.

From the perspective of the warehouse manager at the local store, a change needs to be made to the software system used to record deliveries, losses and shipments to track overall stock levels. The system now needs to know about planned promotions and the manager needs to be given automated alerts to help him manage his orders placed on central warehouses. He does not know how those planned promotions will get recorded in the system and does not particularly care. It is the store manager who is concerned with making sure that the local marketing boss records all upcoming promotions into the system via some newly provided web application that IT are supplying. Let us say that in fact IT are merely modifying the local warehouse application so that it exposes a web page that allows its database to be updated by the local marketing team. The warehouse manager need have no knowledge of this.

There are many types of promotion in a retail organisation. There are fliers, TV adverts, placing products on the end of a shopping aisle, reducing the price, moving products to near the entrance, and so on. In order to forecast changes in demand as a result of a promotion, some statistical sales analysis needs to be performed with information like the type, date and ‘scale’ of the promotion as input. It is decided that this task can be fully automated, but that this kind of statistical analysis should be done by a central, shared service available to various warehouses and marketing departments.
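
To make the idea concrete, a minimal sketch of such a shared forecasting service follows, in Python. Everything here is an assumption for illustration only: the names (PromotionInput, forecast_demand_uplift), the uplift factors, and the toy linear model; a real service would fit its parameters from historical sales data.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical input message to the central forecasting service.
    @dataclass
    class PromotionInput:
        promotion_type: str   # e.g. 'aisle', 'TV', 'flier'
        start_date: date
        duration_days: int
        scale: float          # 0.0 (one store, minor) to 1.0 (national, major)

    # Illustrative uplift factors per promotion type; assumed, not measured.
    UPLIFT_BY_TYPE = {'aisle': 1.15, 'TV': 1.60, 'flier': 1.25}

    def forecast_demand_uplift(promo: PromotionInput,
                               baseline_daily_demand: float) -> float:
        """Return forecast extra units of demand over the promotion period."""
        factor = UPLIFT_BY_TYPE.get(promo.promotion_type, 1.10)
        uplift_per_day = baseline_daily_demand * (factor - 1.0) * promo.scale
        return uplift_per_day * promo.duration_days

Because the service is shared, any warehouse or marketing department can submit the same kind of message and receive a comparable forecast.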

At a high level there are changes in interdepartmental communication, which place requirements on messaging and reporting on the respective departmental systems. At that level architects are responsible, with nominated business representatives, for integrating the overall picture. Within the scope of that overall project, changes to each department’s systems (e.g. the warehouse system) can be gathered together as specific subprojects and kicked off in parallel.

Each subproject involves the work of analysts, who study the business domain, discover terms like ‘aisle promotion’, ‘TV promotion’, ‘promotion start date’, ‘promotion duration’, ‘discount’ and so on, discover specific activities in business processes like ‘plan promotion’, ‘cancel promotion’ and ‘commit promotion’, and assemble these to create specifications for how the software system should appear to the end user. These include a description somewhere of the web page that allows the local store manager to select from a single drop-down list “Select Promotion Type:” (aisle, TV, flier, etc). There will be a description in the warehouse system specification that the “Reports” page should now include a new “Automatic Alerts” subsection and that one of these alerts should be the “Promotion Demand Analysis Alert”.

Usually the outputs of this analysis are documents in the language of the business representative or system end user. They portray the business artefacts, the roles, entities and processes, in natural language using the terminology of the business domain. This output is often called the “business analysis” or merely “requirements.”

In our example scenario, this is taken as input by a more technical analyst, who is tasked with specifying requirements in ways that can be understood by programmers. Sometimes this is done using languages like the Unified Modelling Language (UML), or other dedicated modelling languages with formal semantics. The output is often called a “technical analysis” or “requirements specification.” It is a very detailed description of the concepts and artefacts described in the earlier analysis, avoiding ambiguity and showing the relationships between concepts (“does a promotion apply to many products or only one?”, “does a promotion of many products have a single end-date, or different end-dates per product?”), and sometimes interface details (“should the drop-down list of promotion types be on the left of the page or on the right?”).
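
The answers to such questions fix the shape of the eventual data model. As a purely illustrative sketch in Python, here is one possible resolution, assuming the analysts decided that a promotion covers many products and that each product carries its own end-date; every name here is hypothetical.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List

    @dataclass
    class PromotedProduct:
        product_code: str
        discount_percent: float
        end_date: date           # per-product end-date, per the analysts' answer

    @dataclass
    class Promotion:
        promotion_type: str      # 'aisle', 'TV', 'flier', ...
        start_date: date
        products: List[PromotedProduct] = field(default_factory=list)

Had the analysts answered differently (a single end-date for the whole promotion, say), the model, and all the software built upon it, would differ accordingly, which is why such questions must be settled at this stage.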

Software designers then take these specifications and write software, design web screens, install network connections, buy hardware, and put all the changes into effect.

So, let us zoom back out. In summary, from that original business goal of improving the supply chain so that unexpected local promotions do not cause shortages, a specification was put in place by a project team involving business representatives and IT architects to change the high-level interdepartmental processes. This changed the specification of the input messages and output reports (the general operations) between the departments. The collections of changes to each department and departmental system, and the introduction of new or decommissioning of old systems, were run as parallel subprojects. Each subproject accepted that set of changes to the inputs and outputs of the department as the basic specification of departmental goals. Departmental teams of analysts and local representatives identified changes to processes, described those processes, and passed them as specifications to IT analysts. (While some manual tasks were changed by the departmental bosses, the majority of changes were in software.) Finally, those analysts interpreted the requirements and passed them as specifications to software teams.

Of course, in reality it is not as smooth as that, and many typical challenges are omitted. In general, most of the problems can be attributed to the discovery of unexpected changes in the status quo, which impact the original plans and cause negotiations between the various parties, and to problems during the transcription of specifications from stage to stage.

It’s all a big process model

The key point to recognise, though, is that the outputs of the software teams are yet further specifications. These are not specifications for people to follow, as in the old days, but for computers to follow. Software is just a set of processes, the same in essence as those business processes that were once conducted manually, but written in programming code. Where in the past a heavy batch of copied documents was carried in a box by an assistant from the claims department to the accounting department, today a line of code runs a bulk-insert SQL command to load data into the accounting database’s general ledger table, or sends an XML file to a web service hosted on the accounting department’s server.
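
As a sketch of that modern equivalent, here is what such a line of code might look like in Python, using the standard sqlite3 module in place of a real accounting database; the general_ledger schema and the postings are invented for illustration.

    import sqlite3

    # Hypothetical schema for the accounting department's general ledger.
    conn = sqlite3.connect('accounting.db')
    conn.execute("""CREATE TABLE IF NOT EXISTS general_ledger
                    (account TEXT, amount REAL, description TEXT)""")

    # The batch of copied documents an assistant once carried in a box.
    claim_postings = [
        ('claims-payable', -250.00, 'claim 1001 settlement'),
        ('cash',            250.00, 'claim 1001 settlement'),
    ]

    # One bulk insert replaces the walk between departments.
    conn.executemany('INSERT INTO general_ledger VALUES (?, ?, ?)',
                     claim_postings)
    conn.commit()
    conn.close()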

In the example above, from top to bottom, at each level and stage of the project, what was passed as input to each successive activity was an updated procedural specification. The project steering committee commissioned a gap analysis, which recommended that the local warehouses be notified of expected promotions by local marketing departments: a change in macroscopic-level business process. This change was included amongst other procedural changes for that department, and at a departmental level, in a dedicated subproject, the business analysts and IT architects recommended that the marketing operative be responsible for entering data into a modified warehouse stock control system. The technical analyst interpreted this into software change specifications specific to that system. The software change specifications went to the respective owners of all the processes embodied in the warehouse system as software, who merely changed those processes.

(The only process that did not change, if it was identified at all, was the overall process for implementing change itself. It is this one process that seems to be neglected most of all, and that very neglect results in the ossification and inflexibility this document seeks to address. We will return to this later.)

Changes to nested processes, illustrating a part of the case above. The boxes illustrate processes. The red processes are actual computer systems, and the red arrows represent changes to software systems.

It is important to recognise that on an abstract level, a business is a set of processes, where each process is a set of sub-processes. Each process, or sub-process, is executed by or involves the participation of a role, which is an abstraction from a concrete employee or physical resource. A process begins with some message, or combination of messages, and results in the generation of messages. Here is an illustration of a simple warehouse process, the receipt of a delivery of goods:

Illustration of a business process for handling the receipt of goods to a warehouse.

The process is initiated by the message or event that a delivery truck has arrived with a package of goods. It involves the participation of roles such as the warehouse stock register, warehouse manager and so on. Depending on whether the delivery note is matched with an order, the process ends with the shipment either being rejected or being accepted, the stock updated, and the record of the stock levels updated. (In essence the record of stock levels is an undirected message, useful for whoever needs to know the state of the warehouse. By altering the stock records, other processes may be invoked by those who subscribe to stock record changes. In this way, processes may interlink.)
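
This message-driven view can be sketched directly in code. The following toy Python model, with all names invented for illustration, shows processes as handlers that consume one message and emit others, interlinking through subscription just as described above.

    from typing import Callable, Dict, List

    Message = Dict[str, object]                   # a named payload
    Handler = Callable[[Message], List[Message]]  # a process: consumes one message, emits others

    class ProcessMap:
        """Toy message bus: processes interlink by subscribing to message types."""

        def __init__(self) -> None:
            self.subscribers: Dict[str, List[Handler]] = {}

        def subscribe(self, message_type: str, handler: Handler) -> None:
            self.subscribers.setdefault(message_type, []).append(handler)

        def publish(self, message_type: str, message: Message) -> None:
            for handler in self.subscribers.get(message_type, []):
                for emitted in handler(message):
                    # Emitted messages may trigger further processes,
                    # which is how processes interlink and nest.
                    self.publish(str(emitted['type']), emitted)

For example, a reordering process might subscribe to a hypothetical 'stock-records-changed' message and emit a 'reorder-requested' message whenever a stock level falls to zero.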

The activities in the process may also be treated as nested sub-processes. When the warehouse manager updates the stock levels, a message is generated describing the products and their quantities, and the act of recording the stock levels is carried out according to whatever the process is for updating the stock records. Clearly, this may be a manual or an automatic activity. Processes on this level, as mentioned earlier, have not changed much since before the days of computerised automation.

The process of updating the stock level records as conducted and understood by the warehouse manager. The product delivered and loaded is first looked up in the stock ledger; if it is there, the stock levels are adjusted, and if not, a new product line is recorded in the ledger. In either case the quantity damaged during loading is also recorded. The process is repeated for each item delivered.
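
That manual procedure translates almost line for line into code. A minimal Python sketch, with the ledger represented as a dictionary and all field names assumed for illustration:

    def update_stock_levels(ledger, delivered_items):
        """Apply the warehouse manager's procedure to each delivered item.

        ledger: dict mapping product code -> {'on_hand': int, 'damaged': int}
        delivered_items: iterable of (product_code, quantity, quantity_damaged)
        """
        for product_code, quantity, quantity_damaged in delivered_items:
            if product_code in ledger:
                # Product found in the stock ledger: adjust its level.
                ledger[product_code]['on_hand'] += quantity
            else:
                # Not found: record a new product line in the ledger.
                ledger[product_code] = {'on_hand': quantity, 'damaged': 0}
            # In either case, record the quantity damaged during loading.
            ledger[product_code]['damaged'] += quantity_damaged
        return ledger

Whether this activity is performed by a clerk with a paper ledger or by such a function against a database table, the process it embodies is the same.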

The following illustration shows the process of updating the stock levels record nested within the warehouse goods receipt process:

What should be clear now is that the whole business can be seen as a set of processes. It can be described in terms of processes, and change to the business can be prescribed in terms of processes.

Going back to our historical business of papers and filing cabinets, if we were to map that business as a complete process map, using illustrations such as those above, and if we were to contrast it with a similar process map for the same organisation after computerisation, we would find that not much has changed in terms of activities. The main change is in the assignment of roles. Where once there were desks, ink, quills and scribes, now there is electronic messaging. Likewise, cabinets of paper documents have given way to databases and to Windows and UNIX file systems.