Two trends in software engineering converging on the PMDA Framework

In the early days of computing, software was not thought of as an independent category of activity or product. In fact, the word software itself only acquired widespread usage in the 1960s. Until the mid-1960s, there were simply programs, delivered to the buyer by the manufacturer along with the computing machinery (hardware).

The more adventurous users slowly undertook to write their own programs in FORTRAN, Algol or, later, COBOL. The first companies dedicated to developing software were founded in 1955 (Computer Usage Company) and 1959 (Computer Sciences Corporation); their plan was to offer programming services.

Typically, the early applications of computers were military or scientific (mainly in fields of interest to the military or government, e.g., census, weather forecasting or mechanical translation). The first business-oriented languages were introduced in the late 1950s (FLOW-MATIC, COMTRAN, FACT, then COBOL).

The introduction of symbolic, compilable languages was the first step of abstraction in the separation of software (yet to be openly so named) from the underlying machinery. This step gave rise to two development trends leading to an independent software industry.

The first trend was motivated by the commercial imperatives of fledgling software companies. Writing bespoke programs under large contracts for public or private organisations represented a narrow, constricted market compared with the broad field of general business. There was a strong incentive to develop “pre-written” products which would be usable on any machine of a certain type or, more ambitiously still, on a variety of hardware platforms (compilable, “high-level” languages made this conceivable).

Given IBM’s domination of the hardware market, it is not surprising that many of the more successful software companies were founded by ex-IBM engineers.

The initial strategy was to identify a part of business activity which was similar, if not identical, across the range of businesses, and to automate it. Not surprisingly, the first successes were in the domain of business accounting.

The approach was extended to all repeatable aspects of business. By the early 1970s, the concept of Enterprise Resource Planning (ERP) was taking root, promoted in particular by SAP. ERP software is typically made up of modules for each business function; adopting modules from the same vendor virtually guarantees seamless integration. However, not all of one vendor’s modules are necessarily “best of breed”; some vendors specialise in particular areas (e.g., supply chain or customer relationship) and produce superior products in their specialty. This in turn raises the issue of integrating offerings from different vendors.

At this point, the attractiveness of “packaged software” brought to the fore a new issue in business automation strategy: the “make or buy” decision. Wholly integrated solutions were typically expensive, while combining individual best-of-breed modules or developing in-house raised architecture, design and cost issues.

Software vendors responded to this new challenge in two ways. On the one hand, some vendors kept the concept of a unified business solution, but specialised to particular industry “verticals”, such as banking or telecommunications. This approach enabled them to address the idiosyncratic needs of a given industry in a more precise way. Other vendors developed middleware products targeting the integration problem (transaction monitors, messaging platforms, business hubs). Middleware is positioned between system software, which addresses issues of machinery (hardware drivers and operating systems) and application software, which addresses issues of business needs.

The whole domain of Enterprise Application Integration (EAI) has proved to be a major step in the evolution of business software. Initially, EAI approaches were individual and ad hoc. Progressively, a body of wisdom and best practices emerged, and vendors followed by offering tools and products to implement them. As a result, there is a new balance in the enterprise software industry: while adopting integrated solutions from a single vendor still offers some advantages, the availability of integration resources makes best-of-breed approaches less risky.

The second trend concerns the engineering of software.

The term software engineering appeared almost simultaneously with the word software itself. Programmers, inspired by existing branches of engineering, were investigating how the engineering stance might apply to their work.

This concern with engineering was heightened by the software crisis of the 1960s. The lack of an established discipline for software construction had disastrous effects on software project costs and on the quality of results. The profession of software engineer was acknowledged as a result of two NATO conferences in the late 1960s.

The objective of software engineering is to apply a systematic, disciplined, quantifiable approach to the design, development, operation, and maintenance of software. This trend is independent of subject matter. The notion of reuse had flourished much earlier: since early software was free, it was shared without restraint, and some groups (e.g., IBM’s SHARE user group) offered catalogues of reusable components. But there were no principles to guide reuse.

The first successful, systematic form of reuse was based on the concept of libraries. A library offers a collection of implementations of common functions or algorithms, which can be invoked from any client program; the programmer remains responsible for designing and implementing the overarching logic flow of the program. Libraries were very successful in boosting the efficiency of programming, beginning with FORTRAN mathematical and scientific libraries.
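
To make the division of responsibility concrete, here is a trivial Java sketch: the standard Math library supplies ready-made functions and constants, while the client program owns the overall flow.

```java
// A minimal illustration of the library idea: common algorithms (here the
// standard java.lang.Math library) are invoked from a client program, while
// the programmer remains responsible for the overarching logic flow.
public class LibraryClient {
    public static void main(String[] args) {
        double[] readings = {3.0, 4.0, 12.0};
        double total = 0.0;
        for (double r : readings) {
            total += Math.sqrt(r) * Math.PI;   // reused library function and constant
        }
        System.out.println("total = " + total);
    }
}
```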

Another approach inverted the problem: how to address a family of software tasks which share an overall logic flow, but in which small individual parts need to be customised. This gave rise to frameworks. In a typical framework, the core software can run “out of the box” and provide a useful function, because every variable option has been given a “default” value. For each such point in the software, a mechanism is available to override the default and introduce newly defined behaviour. An interesting form of framework architecture is the “plug-in”, introduced in Apache and systematised in Eclipse.
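
As a minimal sketch of this idea (all names hypothetical), the following Java fragment runs “out of the box” with default behaviour and exposes a customisation point that the user may override:

```java
// A minimal sketch of a framework with default behaviour and an override point.
// The framework owns the overall logic flow; the user only supplies variations.
public class ReportFramework {

    // Overall logic flow is owned by the framework ("template method").
    public final void generateReport() {
        String data = loadData();
        System.out.println(formatLine(data));   // hook applied to the data
    }

    // Default behaviour: the framework works "out of the box".
    protected String loadData() {
        return "default data set";
    }

    // Customisation point: users override this to introduce new behaviour.
    protected String formatLine(String data) {
        return "REPORT: " + data;
    }

    public static void main(String[] args) {
        // Default behaviour, no customisation.
        new ReportFramework().generateReport();

        // Overriding the customisation point.
        new ReportFramework() {
            @Override
            protected String formatLine(String data) {
                return "*** " + data.toUpperCase() + " ***";
            }
        }.generateReport();
    }
}
```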

The concept of framework has been generalised to more complex arrangements, while maintaining the original characteristic of customisation points. In each case, the effect is to facilitate software development by allowing designers and programmers to devote their time to meeting requirements, rather than dealing with the details of providing a working system. Software frameworks include support programs, application programming interfaces (APIs) and tool sets that bring together all the different components needed to develop a solution. This approach is particularly suited to business software development, once the fundamental features of a given business problem are understood.

Another development in software engineering relates to the relationship between algorithms and data. In Niklaus Wirth’s formula, “Algorithms + Data Structures = Programs”; there is therefore a necessary coupling between certain algorithms and certain data structures. Identifying and investigating the nature of such couplings led to the formulation of Abstract Data Types (ADTs). An ADT packages a data type (typically complex) with the functions (algorithms) which can operate on it. Expressing a software design in terms of ADTs eliminates the need to manage the coupling separately, and supports the factoring of the software into self-contained units which are more amenable to reuse. It also supports the principle of information hiding: data structures are accessible only to the functions which must operate on them, which protects against integrity violations and increases the potential for software quality.
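
A minimal sketch of an ADT in Java (hypothetical names): the representation is hidden, and only the operations belonging to the type can touch it.

```java
import java.util.NoSuchElementException;

// A minimal Abstract Data Type: the data structure (an array and a counter)
// is hidden, and only the operations of the type can access it, which
// protects its integrity (information hiding).
public class IntStack {
    private final int[] items;   // hidden representation
    private int size = 0;

    public IntStack(int capacity) {
        this.items = new int[capacity];
    }

    public void push(int value) {
        if (size == items.length) {
            throw new IllegalStateException("stack is full");
        }
        items[size++] = value;
    }

    public int pop() {
        if (size == 0) {
            throw new NoSuchElementException("stack is empty");
        }
        return items[--size];
    }

    public boolean isEmpty() {
        return size == 0;
    }

    public static void main(String[] args) {
        IntStack stack = new IntStack(4);
        stack.push(1);
        stack.push(2);
        System.out.println(stack.pop());   // 2 — clients never see the array
    }
}
```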

The natural progression from ADTs led to Objects. While ADTs are defined individually, it quickly became apparent that they in fact form “families” of similar types, in which one type is derived from another by specialisation, or, conversely, several types are grouped together by abstraction. Several forms of inheritance have been defined and codified in various languages. Object-oriented programming has enjoyed overwhelming success, among other reasons because it appeals to ordinary human cognition. In particular, objects defined in software engineering have often been interpreted as reflections of objects in the “real world”, partly because much of object orientation arose from simulation.
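
A small, hypothetical Java sketch of these two movements: an abstraction groups what a family of types shares, and a specialisation derives a new type from it.

```java
// Abstraction: Account captures what the whole family of account types shares.
abstract class Account {
    protected double balance;

    void deposit(double amount) { balance += amount; }

    abstract double monthlyFee();              // each specialisation decides
}

class CurrentAccount extends Account {
    @Override
    double monthlyFee() { return 5.0; }
}

// Specialisation: a SavingsAccount is derived from Account and refines it.
class SavingsAccount extends Account {
    @Override
    double monthlyFee() { return 0.0; }

    double interest(double rate) { return balance * rate; }
}

public class AccountDemo {
    public static void main(String[] args) {
        Account a = new SavingsAccount();      // a savings account "is an" account
        a.deposit(100.0);
        System.out.println(a.monthlyFee());    // 0.0 — specialised behaviour
    }
}
```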

On the other hand, objects tend to partition a domain of interest and fail to capture concerns that cut across different domains. Thus arose another perspective, around the concept of Aspects. Tooling has been developed to weave Aspect-oriented elements of code into Object-oriented software, thus taking advantage of both kinds of factoring.
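
As an illustration, here is a sketch of a cross-cutting audit concern written in AspectJ’s annotation style (it assumes the AspectJ runtime and weaver are available on the build path; the package and class names are hypothetical). The audit behaviour is defined once, here, and woven into every matching object method.

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// A cross-cutting concern factored out as an Aspect rather than repeated
// inside every object of the (hypothetical) banking package.
@Aspect
public class AuditAspect {

    // Applies before every public method of every type in banking and its subpackages.
    @Before("execution(public * banking..*.*(..))")
    public void audit(JoinPoint joinPoint) {
        System.out.println("AUDIT: entering " + joinPoint.getSignature());
    }
}
```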

All the above inventions amounted to various ways of raising the level of abstraction in the representation of what the software is meant to accomplish. In effect, the larger, more complex units identified in these efforts (library routines, framework structures, ADTs, Objects, Aspects) were formed by clustering the low-level commands to the hardware (instructions) into aggregates associated with concepts more amenable to human cognition.

The next step in this progression was the concept of a software engine, that is, a construct which had a defined, parameterised behaviour, such that the parameters could be declared in an appropriate notation (“high-level language”, “4GL”, etc.). In this group of innovations can be found interpreters, virtual machines, rules engines, process engines, reasoning engines, etc.

Of special importance to Enterprise applications are process engines, which mechanise the execution of a (business) process. The desired behaviour of the process is declared using an appropriate language, e.g., BPMN (Business Process Model and Notation). Changes in the process can now be implemented without explicitly writing new code (with some process engines the high-level notation must be translated by a tool similar to a compiler; others are interpretive, and execute the notation directly).
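
The following is a deliberately tiny, hypothetical Java sketch of an interpretive engine: the process is declared as data, and the engine executes the declaration directly, so changing the process means changing the declaration rather than the control-flow code. (Real engines interpret a far richer notation such as BPMN.)

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal interpretive "process engine": the process is declared as an
// ordered list of named steps, and the engine executes the declaration.
public class TinyProcessEngine {

    public static void main(String[] args) {
        // The "declared" process: step name -> behaviour.
        Map<String, Runnable> orderProcess = new LinkedHashMap<>();
        orderProcess.put("receive order", () -> System.out.println("order received"));
        orderProcess.put("check stock",   () -> System.out.println("stock checked"));
        orderProcess.put("ship goods",    () -> System.out.println("goods shipped"));

        run(orderProcess);
    }

    // The engine: parameterised behaviour driven by the declaration.
    static void run(Map<String, Runnable> process) {
        for (Map.Entry<String, Runnable> step : process.entrySet()) {
            System.out.println("engine: executing step '" + step.getKey() + "'");
            step.getValue().run();
        }
    }
}
```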

Similarly, a rules engine can execute rules declared in a rule notation (e.g., RuleSpeak), either interpretively or after compilation. This approach to software blurs the distinction between modelling and execution, and leads to the concept of an executable model.
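
In the same spirit, a rules engine can be sketched in a few lines of Java (hypothetical names; real engines read their rules from a declarative notation such as RuleSpeak rather than from code):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// A minimal rules engine: each rule pairs a condition with an action, and
// the engine fires every rule whose condition holds for the current facts.
public class TinyRulesEngine {

    record Rule(String name, Predicate<Map<String, Object>> condition, Runnable action) {}

    public static void main(String[] args) {
        Map<String, Object> facts = Map.of("orderTotal", 1500.0, "customerType", "new");

        List<Rule> rules = List.of(
            new Rule("large order needs approval",
                     f -> ((Double) f.get("orderTotal")) > 1000.0,
                     () -> System.out.println("route to manager for approval")),
            new Rule("welcome new customer",
                     f -> "new".equals(f.get("customerType")),
                     () -> System.out.println("send welcome pack"))
        );

        // The engine: evaluate each declared rule against the facts.
        for (Rule rule : rules) {
            if (rule.condition().test(facts)) {
                System.out.println("firing rule: " + rule.name());
                rule.action().run();
            }
        }
    }
}
```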

If the granularity of the units identified and enacted by engines is such that they map to well-understood business concepts, the resulting software architecture is that known as Service-oriented.

Many commercial packages are architected as combinations of engine and framework: like frameworks, they provide for specific locations where the user can insert code additions to override defaults (software extensions). Like engines, they offer a rich set of parameters which can be declared by the user (configuration). However, this flexibility has strict limits: if the user introduces customisations, i.e., modifications which do not fit the prescribed extensions or configurations, the vendor offers no guarantee as to compatibility with future versions. Also, because extensions and customisations are specific to a given package, the associated concepts are not portable across packages, and cannot support an enterprise-wide approach.
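
A small, hypothetical Java sketch of this distinction: configuration is a declared parameter the package reads, an extension is code supplied at a prescribed insertion point, and anything outside these two channels would be a customisation the vendor does not undertake to preserve.

```java
import java.util.Locale;
import java.util.function.UnaryOperator;

// A sketch of the engine-plus-framework pattern: configuration (declared
// parameters) plus extension (code at a prescribed insertion point).
public class InvoicePackage {

    private final String currency;                           // configuration parameter
    private final UnaryOperator<Double> discountExtension;   // prescribed extension point

    public InvoicePackage(String currency, UnaryOperator<Double> discountExtension) {
        this.currency = currency;
        this.discountExtension = discountExtension;
    }

    public String invoice(double amount) {
        double discounted = discountExtension.apply(amount);
        return String.format(Locale.ROOT, "%.2f %s", discounted, currency);
    }

    public static void main(String[] args) {
        // Configured for euros, extended with a ten per cent discount rule.
        InvoicePackage pkg = new InvoicePackage("EUR", amount -> amount * 0.9);
        System.out.println(pkg.invoice(200.0));   // prints "180.00 EUR"
    }
}
```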

Service Oriented Architecture (SOA) is an approach to addressing the resulting challenge. While an Enterprise’s IT resources must serve the changing needs of the business, they must also be free to evolve as efficiently as possible along with technology trends. The concept of service supports the decoupling of Business dynamics (associated with the first trend) from Technology dynamics (associated with the second trend).

The dynamics of the business and of the information systems that support it are quite different. Business is driven internally by the enterprise mission and its strategy; and externally by the nature of the markets and by statutory obligations. Information systems are driven internally by the automation strategy (if any); and externally by technology trends and available vendors.

SOA is the first architectural approach which gives “first-class” status to the business view of services. While ITIL and other accepted norms of Information Technology refer to services, and may provide guidance on their management, they offer no precise definition of a service as such. SOA declares a service to be a unit of technology deployment which provides a definite business function. In simple terms, like a word in a language, a service within SOA has a “form” – its technical implementation and deployment – and a “meaning” – the function it provides to the business endeavour.

The analogy with language goes further. If services are like words in a language, there must be a way to combine them into meaningful “sentences” (read: service invocations must be combinable to form business-valuable scenarios). The “syntax” of a service language is called orchestration, and draws on the concepts of process engine and process notation.
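
As a sketch of this “syntax” (all service names hypothetical), the following Java fragment composes three business-meaningful service invocations into one business-valuable scenario, much as a process engine would do from a declared orchestration:

```java
// A sketch of service orchestration: each service exposes a business-meaningful
// operation, and the orchestration combines their invocations into a scenario.
public class OrderOrchestration {

    interface InventoryService { boolean reserve(String item, int quantity); }
    interface PaymentService   { boolean charge(String customer, double amount); }
    interface ShippingService  { void ship(String item, String customer); }

    static void placeOrder(InventoryService inventory, PaymentService payment,
                           ShippingService shipping,
                           String customer, String item, int quantity, double price) {
        if (!inventory.reserve(item, quantity)) {
            System.out.println("order rejected: no stock");
            return;
        }
        if (!payment.charge(customer, quantity * price)) {
            System.out.println("order rejected: payment failed");
            return;
        }
        shipping.ship(item, customer);
        System.out.println("order completed for " + customer);
    }

    public static void main(String[] args) {
        // Trivial in-memory implementations stand in for deployed services.
        placeOrder((item, qty) -> true,
                   (customer, amount) -> true,
                   (item, customer) -> System.out.println("shipping " + item + " to " + customer),
                   "ACME Ltd", "widget", 3, 19.99);
    }
}
```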

The SOA approach has a major implication for IT professionals: a service’s meaning is defined in the context of business operations. This means that any SOA effort presupposes a precise definition of the structure and behaviour of the business endeavour. IT Architects have long realised that they cannot devise sensible solutions without first having defined the business problem. However, in the past this definition was done piecemeal, addressed a limited scope, and took the form of a collection of requirements often lacking any sort of internal structure. SOA requires that services should be defined in a properly articulated business context, and brings to the fore the need for a Business Architecture as an essential part of the architecture of the enterprise.

With SOA, the level of abstraction has finally been raised all the way to “business value”. In principle, services defined by their business value can be reused as long as the business meanings remain stable. Introduction of a new line of business might reuse some services and specify a few new ones. Innovations in technology might warrant the design of a new implementation; new deployment techniques (e.g., cloud deployment) may warrant changes in physical detail. All this can be accomplished economically and flexibly provided that the Business architecture exists to provide the base context.

The PMDA Framework is positioned at the convergence of the two trends discussed above. It supports defining a Business Architecture in terms of the activity types targeted in the first (commercial) trend, thereby leveraging the experience acquired in domain specification. It supports a fine, recursive decomposition of these domains into granular services (a necessity of structure, as will be shown in the following section on complex systems). Finally it supports the articulation of the business “meaning” of the services with a variety of implementation and deployment “forms”, leveraging the engineering resources discussed in the second trend, which make it possible to anchor the conceptual business model in practical realisation.

The challenge of IT Architecture arises from the fact that Business Information Systems are complex systems, which embody several kinds of regularities at once. Any notation designed to document one kind of regularity (e.g., process) may be incompatible with notations for other kinds. A major contribution of PMDA is the devising of a meta-grammar which makes it possible to define grammars suitable for each regularity and yet to connect them as compatible formal devices. The detail of this development is the topic of the Treating the Enterprise as a Complex System post.
