Digital Juggling Between Simplicity And Simplism


 
“Make things as simple as possible, but not simpler.” – Albert Einstein
Gone are the days when automation could be neglected. Just as manual labor has been mechanized and factory halls gradually robotized, so have many mental tasks and administrative work processes gradually been computerized and absorbed into the digitized enterprise fabric.
Not only has this left many a CxO trying to drive down the cost of ICT with the effectiveness of an EU austerity package; many companies have also cultivated a deep divide between business and IT, as the following story shows:
A man is flying in a hot air balloon and realizes he is lost. He reduces height and spots a man down below. He lowers the balloon further and shouts: “Excuse me, can you tell me where I am?”
Says the man below: “Yes, you’re in a hot air balloon, hovering 30 feet above a field.” “Aha. You must work in Information Technology,” says the balloonist.
“I do” replies the man. “How did you know?”

“Well” says the balloonist, “everything you have told me is technically correct, but it’s no use to anyone.”
The man below says “Aha, so you must work in business.”
“I do”, replies the balloonist, “but how did you know?”
“Well”, says the man, “you don’t know where you are, or where you’re going, but you expect me to be able to help. You’re in the same position you were before we met, but now it’s my fault.”
Although not an often-discussed topic, systems design has been evolving in significant steps, and design patterns are now common tools for the modern IT architect.
The use of patterns originates in the 1960s with the work of Christopher Alexander, whose Notes on the Synthesis of Form became required reading for students of computing science; since the publication of A Pattern Language in the late 1970s, the use of design patterns has become increasingly common, and they apply just as readily outside the ICT domain. For example, the larger a company grows, the more time is spent on coordinating its activities, involving all sorts of delegation and specialist functions. Increases in the volume, size, scale, and scope of factories, warehouses, distribution centers, or the software application landscape all add to the overall complexity of closely-knit mutual dependencies. Here we focus on those dependencies, to make clear that quantity has a quality all its own.
When a small company scales up, there is a surprisingly high chance it will simply collapse under the weight of its own unregulated business if it doesn’t take a step back. This is the so-called ‘big ball of mud’ design pattern, or rather, design anti-pattern. Global companies have learned this lesson and maintain worldwide consolidation programs, not only to continuously reduce the cost of doing business but also to reduce overall complexity, moving from perhaps ten thousand different applications to just several hundred. This simplification is recognized as the greatest challenge of the coming decades; the most dramatic reports indicate that some two-thirds of CIOs expect a fatal crash within the next three years, “fatal” meaning a loss of several business days or more.
Not only do they have to deal with a great many different applications, but these applications are also entangled in a variety of ways, sharing the same customer details or taking part in one or several business processes, so that processing runs need to be coordinated in serial and parallel patterns to ensure that the right task happens at the right moment. If the average system has ten to fifteen connections to other systems, a landscape of ten thousand applications has on the order of 50,000 to 100,000 connections in total to deal with.
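A quick back-of-the-envelope sketch (in Python, with purely illustrative figures) shows where such numbers come from: summing ten to fifteen links across ten thousand applications counts every connection twice, giving 50,000 to 75,000 unique links, while counting each direction separately doubles that.

```python
# Back-of-the-envelope estimate of connection counts in an application
# landscape. The application counts and link averages are illustrative
# assumptions, not measurements from any real environment.

def estimated_connections(num_apps: int, avg_links_per_app: float) -> int:
    """Summing per-app links counts every connection twice (once per
    endpoint), so divide by two for the unique total."""
    return int(num_apps * avg_links_per_app / 2)

for apps, links in [(10_000, 10), (10_000, 15)]:
    print(f"{apps:,} apps x {links} links/app -> "
          f"~{estimated_connections(apps, links):,} unique connections")
# 10,000 apps x 10 links/app -> ~50,000 unique connections
# 10,000 apps x 15 links/app -> ~75,000 unique connections
```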
While this is already a daunting task, increased globalization raises the demand for zero downtime, as well as for functionality that spans and unifies different applications into a composite whole. This “dependency hell” is for many companies reason enough to apply a Schumpeterian “creative destruction” to their tools: let them continue as-is, run aground, and then replace and revamp the lot, which is much cheaper than trying to create some order and maintain it. Yet even if a temporary simplification occurs, after five or ten years the company is likely to be drowning in complexity again. This applies to all sorts of automation, both the computerization of mental tasks and the mechanization of manual labor, as most companies fail to strike the balance between optimization and adaptivity and organize their business according to a mass-production ‘economy of scale’ model: a highly optimized and entangled production line. When a new product is introduced, instead of re-using parts of the existing production line (economy of scope), a whole new production line has to be built. The same happens to companies as a whole, and even to entire markets.
This is why manufacturing, during its mega-shift from manual labor to robotics, has been hopping all over the world towards greenfield regions, where the cost of setting up a business does not have to carry legacy costs and all the investment can go into the newest equipment: there is no cost of breaking down an old plant to make room for the new one. This usually happens in group formation, as relocating much of the supply chain together yields enormous logistical and communication advantages. These geo-economic shifts will continue until the competitive advantage diminishes, as manufacturing itself reaches a stage where the majority of tools have become interchangeable ‘general purpose’ devices, much like a computer that can emulate just about any other tool.
As design patterns, the two differ fundamentally: economies of scale focus on sheer optimization, so the resulting system converges towards maximal complexity, whereas economies of scope focus on adaptivity and converge towards a collective of parts, each at a minimal level of irreducible simplicity. It is important not to reduce one’s ICT to the assembly-line model of a factory hall just because both are ‘technical’.
With the increased digitization of work processes, a top-down approach such as IT governance is vital to establish transparent accountability and to ensure the traceability of decisions to assigned responsibilities. Governance is not just a strategic tool but a tactical means that allows for a high level of flexibility, and flexibility in turn brings agility, scalability, reliability, reduced costs, improved security, and smoother operations.
Moreover, with the increased possibilities offered by digitization, companies increasingly reflect their actual value networks in the way their information systems are organized, and vice versa. Company formats are also moving towards being small marketplaces in their own right: once the yellow pages become interactive enough to form a marketplace themselves, companies will not only be able to register a set of functional offerings, such as bookkeeping, distribution, warehousing, industrial design, or repair services, but they can also choose to join forces in a collaborative distribution network or a cooperative virtual warehouse, while others can choose to spin off a successful department and, say, offer collective purchasing services to the rest.
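As a toy illustration of such a marketplace registry, the following sketch uses hypothetical company names and offerings; a real interactive directory would obviously carry far richer metadata.

```python
# A minimal sketch of a marketplace registry of functional offerings.
# Company names and capabilities are illustrative assumptions.
registry: dict[str, set[str]] = {
    "Acme Logistics": {"distribution", "warehousing"},
    "Ledger & Co":    {"bookkeeping"},
    "FormWorks":      {"industrial design", "repair services"},
}

def providers_of(capability: str) -> list[str]:
    """Look up every registered company offering a given capability."""
    return [name for name, offers in registry.items() if capability in offers]

print(providers_of("warehousing"))  # ['Acme Logistics']

# Joining forces, e.g. for a cooperative virtual warehouse, then starts
# with grouping the providers that share the relevant capabilities:
coop = set(providers_of("distribution")) | set(providers_of("warehousing"))
```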
IT governance, albeit a top-down approach, can very well work with parts that manage themselves. Instead of a static blueprint meant to prescribe how things ‘should be’, one can approach governance in a much more dynamic and involved way, in three phases. The first leans on a class of system-management solutions that has matured over the last fifteen years and is now available as Application Discovery and Dependency Mapping. Such tools create a real-time interactive map of applications, interfaces, functions, scripts, descriptors, services, and infrastructure components, together with the dependencies among them. Combined with simple search features and recommendation engines, this makes it easy to turn the situation in the data center around: a reconciliation program can help clean up the mess and create some order. After an initial quick-win bulk load, a repository describing all functionality continues to fill up, while the recommendation engine doles out bits of work according to an overall improvement program designed to maximize impact, so that order and improvements are gradually re-introduced into the IT environment. With meta-programming tools doing the reverse translation, a developer or operator needs to know just one or a few programming languages; the rest is translated by the tool to the target language-system combination, so that even relatively unknown systems can be re-injected with new functionality.
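To make this first phase concrete, here is a minimal sketch of the kind of dependency map such discovery tools build, with a naive stand-in for a recommendation engine that ranks components by how many others depend on them. All component names and the scoring rule are illustrative assumptions, not features of any particular product.

```python
# A toy dependency map: each application lists the components it relies on.
from collections import defaultdict

dependencies = {
    "billing-app":   ["customer-db", "payment-service"],
    "crm-app":       ["customer-db", "mail-gateway"],
    "reporting-app": ["customer-db", "billing-app"],
}

# Invert the map: which systems does everything else lean on?
dependents = defaultdict(list)
for app, deps in dependencies.items():
    for dep in deps:
        dependents[dep].append(app)

# Naive "recommendation engine": tackle the most-depended-on parts first,
# since cleaning those up has the widest impact.
backlog = sorted(dependents.items(), key=lambda kv: len(kv[1]), reverse=True)
for component, users in backlog:
    print(f"{component}: {len(users)} dependents -> {users}")
# customer-db comes out on top, with three dependents.
```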
The second phase involves query-based monitoring and management. Using a centralized repository and dependency injection, it is possible to encapsulate a system with a ‘tag cloud’ of descriptors and control points. This simple mechanism enables a federated approach, linking systems together as if they were a virtualized whole and allowing for consolidation, rationalization, and virtualization. The result can be coordinated, synchronized processing (faster time-to-market), smart provisioning, downtime optimization, improved corporate scheduling, easier maintenance and upgrades, and easier matching of system interdependencies and interoperability. Furthermore, it allows for a well-integrated approach that combines Business Process Management of work streams with the underlying Business Service Management of activity streams. Minimizing operations overhead while introducing smarter usage leads both to cost reduction and to a competitive advantage in how systems are used.
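The principle can be sketched as follows, assuming invented system names and tags: each system is wrapped in a tag cloud of descriptors, and federated queries then address the landscape as one virtualized whole.

```python
# A minimal sketch of query-based management over tagged systems.
# All system names and tags are illustrative assumptions.
systems = [
    {"name": "billing-app",
     "tags": {"env": "prod", "process": "order-to-cash",
              "maintenance-window": "sun-02:00"}},
    {"name": "crm-app",
     "tags": {"env": "prod", "process": "lead-to-order",
              "maintenance-window": "sat-23:00"}},
    {"name": "test-harness",
     "tags": {"env": "test", "process": "order-to-cash"}},
]

def query(repo, **criteria):
    """Return every system whose tag cloud matches all given key/value pairs."""
    return [s["name"] for s in repo
            if all(s["tags"].get(k) == v for k, v in criteria.items())]

# e.g. find everything in production belonging to one business process,
# to coordinate a shared downtime window:
print(query(systems, env="prod", process="order-to-cash"))  # ['billing-app']
```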
The third phase involves an emerging approach in IT Service Management: a mix of Collaborative Service Lifecycle Management and machine-learning capabilities. Rapid, uninterrupted improvement cycles can be realized via a closed-loop, end-to-end issue-management approach. Typical improvements claimed are a 60% reduction in mean time to resolution, a similar improvement in problems resolved without manual intervention, and a large reduction in manual effort overall. Where issues previously bounced between representatives and experts from separate departments, embedding everyone in such goal-oriented, rapid communication cycles creates the right setting for a faster, iterative, and incremental agile software delivery model, one where everyone becomes part of the same team.
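The closed loop itself can be pictured with a toy triage sketch: known symptoms are matched against an accumulated knowledge base and resolved without manual intervention, while only unknowns escalate to a human. The hard-coded rules below are an illustrative stand-in for what a machine-learning model would supply.

```python
# A toy closed-loop triage step: auto-resolve known symptoms, escalate
# the rest. Symptoms and runbook names are illustrative assumptions.
KNOWN_FIXES = {
    "disk full":       "purge-temp-files",
    "connection pool": "recycle-connection-pool",
    "certificate":     "renew-certificate",
}

def triage(issue_text: str) -> str:
    """Match an incoming issue against the knowledge base."""
    for symptom, runbook in KNOWN_FIXES.items():
        if symptom in issue_text.lower():
            return f"auto-resolved via runbook '{runbook}'"
    return "escalated to on-call engineer"

for issue in ["Disk full on node 7", "TLS certificate expiring", "Odd latency"]:
    print(f"{issue!r}: {triage(issue)}")
# Each resolved or escalated issue feeds back into the knowledge base,
# closing the loop and shortening the next mean time to resolution.
```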
Bringing digitization in line with corporate strategy can be an overly complex task, especially when outsourcing deals have broken the chain of experience and too much critical expertise has been externalized. Yet there is a fairly straightforward way to regain control over one’s IT environment without facing a dauntingly large program: automate automation. By using several modern artificial-intelligence applications, much of the effort can be cut by simply automating manual tasks while at the same time serving both IT and business requirements. If a chasm ever opened between these two worlds because the complexity had become unmanageable, this can revert the situation. There is no “IT vs Business” or “Business vs IT”; there is a communication gap, and it is one that can be closed, bringing the two sides closer rather than further apart.
