Flexibility in the Modern Organisation (part 1)

Frank Szendzielarz, 2010

First draft, April 2010


Any organisation can be viewed as a set of processes, where, loosely, a process consists of smaller processes or activities, each working on inputs and producing outputs for successive activities, and each involving the participation of machines or individuals. Sometimes it is necessary to change those processes, for example because of the addition of new products or services, or because of the need to optimise. In particular, computerisation has led to dramatic increases in transactional throughput by automating activities. This increase has come at the expense of organisational flexibility. Here I present my thoughts on how that flexibility has been lost and what can be changed to improve it.

Is flexibility desirable in an organisation?

Flexibility is a very broad term. It is having choices. It is the ability to steer. It is the capacity to remember and learn. It is a structural property: the ability to survive unexpected shocks without shattering. A plane’s wings, though exhibiting great strength, are designed to retain their shape through flexibility; they remain wings precisely because they flex. Flexibility is the ability to adapt to a wider range of conditions while remaining a coherent body.

If we look at nature we do not see highly optimised, economical structures. What we do see is redundancy and inefficiency. A typical microcomputer, while able to count to a trillion in no time, will fail catastrophically if just one component is compromised. A brain, by contrast, employs redundancy: it degrades only gradually over the years, withstands losses, can repair and rewire itself, and operates effectively despite the apparent inefficiency of carrying a great number of normally unused parts.

Some of the great debates on social orders revolve around the issues of centralisation and decentralisation, and in essence these debates are about systems, redundancy and rigidity. A highly centralised system carries the risk of fragility through absence of redundancy: a critical lynchpin may come unstuck and the whole system may come toppling down. A highly decentralised system carries the risk of breaking in response to unexpected events through simply being unable to remain a coherent body: the self-organising flock of birds can suddenly become two, or evaporate into a cloud of lost individuals. The highly centralised system may suffer the cost of long communication and planning cycles, and may find it hard to respond to necessary changes; large dinosaurs were once thought to have developed secondary nerve centres to cope with nervous system signal delays. The highly decentralised system, on the other hand, may be unable to pull in one direction, or more often may simply be unable to avoid imploding and becoming centralised.

In either case, the system carries the weaknesses of inflexibility. Flexibility is having options and being able to survive. Flexibility is strength.

Then and now

In the past, information management involved recording details of events on paper, filing them in cabinets, manually transcribing from template to template, duplicating documents by hand and physically moving them from department to department. In place of the database management system, there were books of indexes, folders and cupboards. In place of the web user interface showing outstanding daily tasks, there were racks of forms to process. In place of message queues, there were scribes who sat at desks with ink and blotter. Where today there are global unique identifiers and electronic random numbers, in the past mechanical devices generated codes on the pull of a handle.

For many companies the paper trail was less critical. Organisations were often small and agile enough to conduct their operations by word of mouth. While operations by and large followed typical daily routines, there was sufficient flexibility to deal with the unexpected by ad-hoc adaptation through a series of quick communications between all relevant parties. The paper trail lagged behind in a primarily descriptive role for the benefit of the bean counters and regulators. For many smaller firms this is still the case today.

As companies became larger, either through organic growth or through acquisitions, or as they became more concerned with the accuracy of their accounting, the informal, word-of-mouth method of management ceased to suffice. Transactional throughput increased and the quality of the paper trail suffered. More information and calculation than could be humanly managed led to an inability to monitor and a sense of lost control. Soon management were compelled to sacrifice flexibility and began to insist on stricter adherence to formal processes, in which participants had to produce records as they went, and in which the records became as much prescriptive as descriptive: messages generated as the result of one task became the instructions to start another.

As time went by, mass production, mass telecommunication or simple company expansion resulted in even greater transactional throughput, and computerisation began to automate the management and production of that ‘paper trail’. The quills, blotters and cabinets vanished, to be replaced by data centres, database clusters, webfarms, intranets, and ‘enterprise resource planning’ (ERP) systems.

For most organisations though, one thing has remained comparatively constant: the basic processes themselves. A withdrawal of funds is still a withdrawal of funds to a large extent. The customer arrives at the business boundary, whether that is a physical desk at a branch or an internet banking web server, makes a request, which is first recorded somewhere and then sets off a whole chain of events. Accounts in ledgers get updated. Funds get reinvested. Portfolios change. Audit trails are kept. The actors in these chains may no longer be scribes and cabinets, but in essence the processes remain recognisable and familiar.

Before computerisation, each departmental head was not only responsible for making sure that the physical resources were available for this information processing, but was there to make sure that people carried it out efficiently. This was a comprehensive type of line management that involved understanding and communicating detailed business rules to orchestrate those scribes and cabinets (or stackers and shelves, or what have you) and in such a way that the processes made best use of the resources available. The departmental managers had to lay down the information processing procedures and make sure that subordinates put them into effect. Procedural improvements may have meant literal, physical rearrangement of those people and files, while specifying which boxes and batches had to be shipped to which other departments and when.

Today, however, a great majority of establishments find themselves in the situation that those processes are conducted by computers, almost entirely automated with only the occasional semi-automated or manual activity. The detailed knowledge of how the business operates no longer lies in the domain of the executives, but in the domain of software specialists. It is also not uncommon to find this knowledge outsourced. It is no longer the departmental manager who lays down these information processing procedures and rules; it is the technical analyst, the software architect and the software developer. While it may seem that those rules and procedures are dictated by business operatives and management, as a general rule this is not the case. The actual behaviour of line-of-business systems, and thus the behaviour of large parts of the organisation, is usually gathered from various sources by analysts into a uniquely assimilated, integrated view committed to software code and configuration. If an organisation can be described as a set of business processes, that description is now most often understandable and readable only by programmers, technical analysts and others from the world of IT.

While IT departments may nominate a domain expert as representative and facilitator of communication, that expert or analyst is rarely capable of coordinating changes in software itself. Similarly, those IT experts who are very often also business domain experts, the architects, the lead developers, the technical analysts, are rarely authorised to make business process changes.

The end result, after many decades of computerisation and IT infrastructure development, is that companies are now capable of high transactional throughput, but with lower flexibility than the firms of days gone by, and always much less than the informal, ‘word-of-mouth’ small company. Putting change into effect can be a daunting and demoralising undertaking that involves many stakeholders and is often unsuccessful. IT is commonly perceived as expensive and unreliable in its delivery track record. Even the small business of today, with its online distribution, often finds itself in the stifling position of having to learn to program web applications, or to outsource the majority of its operations to a busy and expensive team of software developers, or to tie itself to a limited and rigid software package.

Specifications upon specifications upon specifications

It could be said after a skim through the above that the obvious solution to flexibility would be to authorise the IT experts, analysts and architects to run the business processes. Let them decide what gets automated, what gets replaced, and let them make key decisions in areas like product development and so on. After all, the understanding of overall operations lies within the software and systems realm.

This is infeasible. These systems are nearly always implemented using programming languages and other computing artefacts that require highly specialised knowledge. Those technologies are subject to constant change and improvement and they require individuals dedicated to their field.

Conversely, that same requirement of specialised technical knowledge is a barrier to the business departments putting those changes into effect themselves. Managers overseeing life insurance processes are usually not programmers. The sad truth is that those very same managers are also denied up-to-date knowledge of the pertinent business processes and rules they should be familiar with. The big picture and the small details are often lost in the software code, and it is not uncommon to find IT departments trying to reverse engineer existing systems to learn and describe how the business actually operates. These exercises of rediscovery are a painful consequence of some of the problems presented later here, including staff turnover where there is inadequate documentation or quality control.

With these seemingly unbreakable barriers, changes are made painstakingly through an elaborate process of specification. Change originates from a primary business goal and proceeds as sequences of delegations, fanning out like a tree, with each boundary involving ever more remote descriptions of what needs to be done. Each different role attempts to translate incoming specifications into outgoing specifications, sometimes expressed in entirely different terms.

The illustration shows how a primary business goal fans out into further goals with each specification becoming more remote in terms of language and skills.

Let us consider an example. A large retailer has identified a problem: locally administered promotional activities, such as the advertising of certain products at specific stores by local store management, are causing unexpected demand on warehouses and resulting in blips of stock shortage. This causes ‘demand noise’ that amplifies back up the supply chain. The retailer wants to solve this problem, improving the supply chain without having to curtail the local promotional activities.

This primary goal leads to the creation of a project, and usually some kind of governing body, a steering committee, is assembled involving stakeholders from various departments. After long consultation between analysts and various employees, the project produces a gap analysis: processes as they currently stand versus processes as they should be. Each business unit is made responsible for putting the necessary changes into effect: the local store manager needs better reports about stock levels in his local warehouse and about lead times on central orders; the local warehouse manager needs to know about upcoming promotions, central stock levels and planned ordering; the central buyers need better information about upcoming promotions and expected demand increases, and will be notified daily of such things; and the central buyers need to know current stock levels from local up to central warehouses, along with expected depletion times, and should have reports available.

From the perspective of the warehouse manager at the local store, a change needs to be made to the software system used to record deliveries, losses and shipments to track overall stock levels. The system now needs to know about planned promotions and the manager needs to be given automated alerts to help him manage his orders placed on central warehouses. He does not know how those planned promotions will get recorded in the system and does not particularly care. It is the store manager who is concerned with making sure that the local marketing boss records all upcoming promotions into the system via some newly provided web application that IT are supplying. Let us say that in fact IT are merely modifying the local warehouse application so that it exposes a web page that allows its database to be updated by the local marketing team. The warehouse manager need have no knowledge of this.

There are many types of promotion in a retail organisation. There are fliers, TV adverts, placing products on the end of a shopping aisle, reducing the price, moving products to near the entrance, and so on. In order to forecast changes in demand as a result of a promotion, some statistical sales analysis needs to be performed with information like the type, date and ‘scale’ of the promotion as input. It is decided that this task can be fully automated, but that this kind of statistical analysis should be done by a central, shared service available to various warehouses and marketing departments.
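As a toy sketch of the kind of calculation such a shared forecasting service might perform, consider the following; the uplift factors, names and numbers are invented placeholders for illustration, not real retail statistics:

```python
# A toy sketch of a central demand-forecasting service: given a promotion's
# type and 'scale', estimate the extra daily demand over a baseline.
# The uplift factors below are invented placeholders, not real statistics.

# Hypothetical uplift per promotion type, as a fraction of baseline demand.
UPLIFT_BY_TYPE = {"aisle": 0.15, "tv": 0.40, "flier": 0.10, "price-cut": 0.25}

def forecast_uplift(promo_type, scale, baseline_daily_demand):
    """Return the expected extra units per day during the promotion."""
    factor = UPLIFT_BY_TYPE.get(promo_type, 0.05)  # default for unknown types
    return baseline_daily_demand * factor * scale

# A TV promotion at full scale against a baseline of 1000 units per day:
print(forecast_uplift("tv", 1.0, 1000))
```

In a real organisation this would of course be a statistical model fed by sales history and exposed as a shared service to the warehouses and marketing departments, rather than a lookup table.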

At a high level there are changes in interdepartmental communication, which place requirements on messaging and reporting on the respective departmental systems. At that level architects are responsible, with nominated business representatives, for integrating the overall picture. Within the scope of that overall project, changes to each department’s systems (eg: the warehouse system) can be gathered together as specific subprojects and kicked off in parallel.

Each subproject involves the work of analysts, who study the business domain, discover terms like ‘aisle promotion’, ‘TV promotion’, ‘promotion start date’, ‘promotion duration’, ‘discount’ and so on, discover specific activities in business processes like ‘plan promotion,’ ‘cancel promotion,’ ‘commit promotion,’ and assemble these to create specifications for how the software system should appear to the end user. Those include a description somewhere of the web page that allows the local store manager to select from a single drop-down list “Select Promotion Type:” (aisle, TV, flier, etc). There will be a description in the warehouse system that the “Reports” page should now include a new “Automatic Alerts” subsection and that one of these should be the “Promotion Demand Analysis Alert”.

Usually the outputs of this analysis are documents in the language of the business representative or system end user. They portray the business artefacts, the roles, entities and processes, in natural language using the terminology of the business domain. This output is often called the “business analysis” or merely “requirements.”

In our example scenario, this is taken as input by a more technical analyst, who is tasked with specifying requirements in ways that programmers can understand. Sometimes this is done using languages like the Unified Modelling Language (UML), or other dedicated modelling languages with formal semantics. The output is often called a “technical analysis” or “requirements specification.” It is a very detailed description of the concepts and artefacts described in the earlier analysis, avoiding ambiguity and showing the relationships between concepts (“does a promotion apply to many products or only one?”, “does a promotion of many products have one end-date, or different end-dates per product?”, “should the drop-down list of promotion types be on the left of the page or on the right?”).

Software designers then take these specifications and write software, design web screens, install network connections, buy hardware, and put all the changes into effect.

So, let us zoom back out. In summary, from that original business goal of improving the supply chain so that unexpected local promotions don’t cause shortages, a specification was put in place by a project team involving business representatives and IT architects to change the high-level interdepartmental processes. This changed the specification of the input messages and output reports (the general operations) between the departments. The collections of changes to each department and departmental system, and the introduction of new or decommissioning of old systems, were run as parallel subprojects. Each subproject accepted that set of changes to inputs and outputs of the department as the basic specification of departmental goals. Departmental teams of analysts and local representatives identified changes to processes, described those processes, and passed them as specifications to IT analysts. (While some manual tasks were changed by the departmental bosses, the majority of changes are in software.) Finally, those analysts interpreted the requirements and passed them as specifications to software teams.

Of course, in reality it is not as smooth as that, and many typical challenges have been omitted. In general, most of the problems can be attributed to the discovery of unexpected facts about the status quo, which impact the original plans and force negotiations between the various parties, and to problems during the transcription of specifications from stage to stage.

It’s all a big process model

The key point to recognise, though, is that the outputs of the software teams are yet further specifications. These are not for people to follow as in the old days, but for computers to follow. Software is just a set of processes, the same in essence as those business processes that were once conducted manually, but written in programming code. Where in the past a heavy batch of copied documents was carried in a box by an assistant from the claims department to the accounting department, today a line of code runs a bulk insert SQL command to stuff data into an accounting database’s general ledger table, or sends an XML file via a web service hosted on the accounting department’s server.
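To make that concrete, here is a minimal sketch of such a bulk insert; the table, columns and figures are hypothetical, and a production system would talk to a real database server rather than an in-memory SQLite database:

```python
import sqlite3

# A minimal sketch of the modern 'box of documents': posting a batch of
# claims entries into an accounting database's general ledger table.
# Table and column names here are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE general_ledger (
        entry_id INTEGER PRIMARY KEY,
        account  TEXT,
        amount   REAL,
        source   TEXT
    )
""")

# The 'batch of copied documents' is now a list of tuples...
claims_batch = [
    ("claims-payable", -1200.00, "claims-dept"),
    ("cash", 1200.00, "claims-dept"),
]

# ...and the assistant's trip between departments is one bulk insert.
conn.executemany(
    "INSERT INTO general_ledger (account, amount, source) VALUES (?, ?, ?)",
    claims_batch,
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM general_ledger").fetchone()[0]
print(count)  # number of ledger entries posted
```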

In the example above, from top to bottom, at each level and stage of the project, what was being passed as input to the successive activity was an updated procedural specification. The project steering committee commissioned a gap analysis, which recommended that the local warehouses be notified of expected promotions by local marketing departments – a change in macroscopic level business process. This change was included amongst other procedural changes for that department, and at a departmental level, in a dedicated subproject, the business analysts and IT architects recommended that the marketing operative be responsible for entering data into a modified warehouse stock control system. The technical analyst interpreted this into software change specifications specific to that system. The software change specifications went to the respective owners of all the processes embodied in the warehouse system as software, who merely changed those processes.

(The only process that did not change, if it was identified at all, was the overall process for implementing change itself. It is this one process that seems to be neglected most of all, and that very neglect results in the ossification and inflexibility this document seeks to address. We will return to this later.)

Changes to nested processes, illustrating a part of the case above. The boxes illustrate processes. The red processes are actual computer systems, and the red arrows represent changes to software systems.

It is important to recognise that on an abstract level, a business is a set of processes, where each process is a set of sub-processes. Each process, or sub-process, is executed by or involves the participation of a role, which is an abstraction from a concrete employee or physical resource. A process begins with some message, or combination of messages, and results in the generation of messages. Here is an illustration of a simple warehouse process, the receipt of a delivery of goods:

Illustration of a business process for handling the receipt of goods to a warehouse.

The process was initiated by the message or event that a delivery truck has arrived with a package of goods. It involved the participation of roles such as the warehouse stock register, the warehouse manager and so on. Depending on whether the delivery note can be matched with an order, the process ends either with the shipment being rejected, or with it being accepted, the stock put away and the record of stock levels updated. (In essence the record of stock levels is an undirected message, useful to whoever needs to know the state of the warehouse. By altering the stock records, other processes may be invoked by those who subscribe to stock record changes. In this way, processes may interlink.)
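The branching of that receipt process can be sketched as code. The function, message fields and names below are illustrative assumptions, not taken from any real warehouse system:

```python
# A sketch of the goods-receipt process described above: a delivery event
# arrives, the delivery note is matched against outstanding orders, and the
# shipment is either rejected or accepted with the stock records updated.
# All names and message shapes are hypothetical, for illustration only.

def receive_delivery(delivery_note, outstanding_orders, stock_levels):
    """Process the 'delivery arrived' message; return the resulting message."""
    order = outstanding_orders.get(delivery_note["order_ref"])
    if order is None:
        # No matching order: the process ends with the shipment rejected.
        return {"event": "shipment-rejected",
                "order_ref": delivery_note["order_ref"]}

    # Matching order found: accept the shipment and update the stock records.
    for product, qty in delivery_note["items"].items():
        stock_levels[product] = stock_levels.get(product, 0) + qty

    # The updated stock record is itself an undirected message: other
    # processes that subscribe to stock changes may now be invoked.
    return {"event": "shipment-accepted", "stock": dict(stock_levels)}

stock = {"widgets": 10}
orders = {"PO-123": {"items": {"widgets": 5}}}
note = {"order_ref": "PO-123", "items": {"widgets": 5}}
print(receive_delivery(note, orders, stock))
```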

The activities in the process may also be treated as nested sub-processes. When the warehouse manager updates the stock levels, a message is generated describing the products and their quantities, and the recording of the stock levels is carried out according to whatever the process of updating the stock records happens to be. Clearly, this may be a manual or an automatic activity. Processes at this level, as mentioned earlier, have not changed much since before the days of computerised automation.

The process of updating the stock levels record as conducted and understood by the warehouse manager. Each product delivered and loaded is first looked up in the stock ledger: if it is there, its stock level is adjusted; if not, a new product line is recorded in the ledger. In either case the quantity damaged during loading is also recorded. The process is repeated for each item delivered.
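That nested sub-process can be sketched directly from the warehouse manager’s description; the ledger structure and field names are illustrative assumptions:

```python
# A sketch of the nested 'update stock levels' sub-process: for each delivered
# item, find the product in the ledger, adjust its level if present or add a
# new product line if not, and in either case record the quantity damaged
# during loading. Structure and field names are illustrative assumptions.

def update_stock_ledger(ledger, delivered_items):
    for item in delivered_items:
        product = item["product"]
        if product in ledger:
            # Product already has a line in the ledger: adjust the level.
            ledger[product]["level"] += item["quantity"]
        else:
            # Unknown product: record a new product line in the ledger.
            ledger[product] = {"level": item["quantity"], "damaged": 0}
        # In either case, record the quantity damaged during load.
        ledger[product]["damaged"] += item["damaged"]
    return ledger

ledger = {"bolts": {"level": 100, "damaged": 2}}
delivery = [
    {"product": "bolts", "quantity": 50, "damaged": 1},
    {"product": "nuts", "quantity": 200, "damaged": 0},
]
print(update_stock_ledger(ledger, delivery))
```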

The following illustration shows the process of updating the stock levels record nested within the warehouse goods receipt process:

What should be clear now is that the whole business can be seen as a set of processes. It can be described in terms of processes, and change to the business can be prescribed in terms of processes.

Going back to our historical business of papers and filing cabinets, if we were to map that business as a complete process map, using illustrations such as those above, and if we were to contrast it with a similar process map for the same organisation after computerisation, we would find that not much has changed in terms of activities. The main change is in the assignment of roles. Where once there were desks, ink, quills and scribes, now there is electronic messaging; likewise, cabinets of paper documents have given way to databases and to Windows and UNIX file systems.

