2011-12-18

Enterprise pattern: #Cloud-Ready Estimation and Evaluation Procedure (CREEP)

The aim of this post is to propose a systematic procedure for estimating and evaluating the cloudability of an IT service – what type of cloud zone (GOLD, ORANGE, GREEN, BLUE, VIOLET) is acceptable for a particular IT service, and at what cost. A quick reminder of the zone types (from http://improving-bpm-systems.blogspot.com/2011/07/1-relationships-between-enterprise.html ):
  • classic within enterprise computing centre – zone type GOLD; 
  • within enterprise private cloud – zone type ORANGE; 
  • outside enterprise and enterprise-managed private cloud – zone type GREEN; 
  • outside enterprise and service-provider-managed private cloud – zone type BLUE; 
  • public cloud (outside enterprise and service-provider-managed by definition) – zone type VIOLET. 
The evaluation consists of two parts:
1. ranking of an IT service by several characteristics,



2. decision table for acceptability of cloud solution (actually, what ZONEs are acceptable for this IT service). 
Note: “maybe” means that further investigation is necessary.
Note: more than one column for GREEN, BLUE, and VIOLET is possible if you work with several providers.

Example: SharePoint Extranet


Ranking applied


Decision table applied



Rules for the recommendation:
  1. If at least one cell is “NO”, then the recommendation is “NO”.
  2. If there is no “OK” but some “maybe” (with “IaaS/PaaS/SaaS”), then “maybe” + “IaaS/PaaS/SaaS”.
  3. If there is some “OK” and some “maybe” (with “IaaS/PaaS/SaaS”), then “OK” + “IaaS/PaaS/SaaS”.
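The three rules above can be sketched as a small evaluator. The cell and column representations below are illustrative assumptions, not part of the original decision table:

```python
# A minimal sketch of the CREEP recommendation rules; the verdicts and
# the "IaaS/PaaS/SaaS" tagging follow the post, but the data structures
# are illustrative assumptions.

def recommend(cells):
    """Combine per-characteristic verdicts for one zone column.

    Each cell is a pair (verdict, model) where verdict is "NO", "maybe"
    or "OK", and model is e.g. "SaaS" or None.
    """
    verdicts = [v for v, _ in cells]
    models = {m for _, m in cells if m}
    model = "/".join(sorted(models)) if models else None

    if "NO" in verdicts:                                  # Rule 1
        return ("NO", None)
    if "OK" not in verdicts and "maybe" in verdicts:      # Rule 2
        return ("maybe", model)
    return ("OK", model)                                  # Rule 3

# Example: a BLUE-zone column with mixed verdicts
column = [("OK", None), ("maybe", "SaaS"), ("OK", "SaaS")]
print(recommend(column))  # -> ('OK', 'SaaS')
```

Running one such evaluation per zone column yields the bottom "recommendation" row of the decision table.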

Note: CAPEX and OPEX are functions of a service and a zone type.

Résumé: 3 zones are not recommended, 2 are accepted as SaaS:
1st preference: SaaS in the BLUE zone; CAPEX = ... OPEX = ... Lead time = ...
2nd preference: SaaS in the GREEN zone; CAPEX = ... OPEX = ... Lead time = ...

Thanks,
AS

2011-10-15

Enterprise pattern: Structuring IT Organisation (SITO)

How to decompose an IT organisation into smaller units?

Approach

  1. Collect the major IT-related functions (approx. 30-50) to be carried out by an IT organisation; potential sources: COBIT, ITIL, PMBOK, PRINCE2, HERMES, etc. 
  2. Draw a matrix of mutual relationships between those functions or groups of functions (about 10). 
  3. A relationship may be a “synergy” (functions to be carried out together). 
  4. A relationship may be a “prohibition” (functions to be carried out by different units because of the SoD principle, good practices, etc.). 
  5. Each particular relationship has to be justified. 
  6. Find clusters in that matrix. 
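Step 6 can be sketched as follows; the function names and relationship edges are hypothetical, and any clustering technique that groups synergy-linked functions would do:

```python
# An illustrative sketch of "find clusters in that matrix": functions
# linked by "synergy" are grouped together, and each resulting group is
# checked against the "prohibition" relationships. The edges below are
# assumptions for the example only.

from itertools import combinations

synergy = {("GOVERN", "ARCH"), ("BUILD", "OPER")}      # assumed edges
prohibition = {("BUILD", "EVAL"), ("ARCH", "BUILD")}   # assumed edges

def clusters(functions, synergy):
    """Union-find grouping of functions connected by synergy links."""
    parent = {f: f for f in functions}
    def find(f):
        while parent[f] != f:
            f = parent[f]
        return f
    for a, b in synergy:
        parent[find(a)] = find(b)
    groups = {}
    for f in functions:
        groups.setdefault(find(f), set()).add(f)
    return list(groups.values())

funcs = ["GOVERN", "ARCH", "BUILD", "OPER", "EVAL"]
for group in clusters(funcs, synergy):
    # a prohibition inside one unit violates SoD and must be re-justified
    violations = [(a, b) for a, b in combinations(group, 2)
                  if (a, b) in prohibition or (b, a) in prohibition]
    print(sorted(group), "violations:", violations)
```

Each cluster becomes a candidate organisational unit; any prohibition edge that ends up inside a cluster flags a decomposition that needs rework.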

Example of  the relationship matrix:

Potential functional groups

  • GOVERN – administrative coordination of the whole – set and maintain internal policies, controls and processes
  • ARCHitect – technical coordination – define structural changes (of core capabilities and services) in response to business and technology changes
  • Make SAFE (added to be conceptually complete) – define policies concerning the confidentiality, integrity and availability of information services 
  • Supervise building of core services and capabilities – project management (PM)
  • Supervise operating of core services and capabilities – operations monitoring (OM)
  • BUILD core capabilities and services: application services, information services and infrastructure services 
  • OPERate core capabilities and services – integration, pilotage and service desk
  • EVALuate (as an independent control) capabilities 
  • INTERNal support capabilities 

Define YOUR rules for decomposition

Depending on your current needs and concerns, define two groups of rules.

Prohibition rules:
  • P1 Separate doing and supervising/controlling – SoD 
  • P2 Separate architecture/design and implementation – SoD, specialisation and quality at entry 
  • P3 Separate implementation and operation – SoD, specialisation and quality at entry 
  • P4 Policy vs applying it – legislation vs executive separation 
  • P5 Specialisation

Synergy rules:
  • S1 Close work (e.g. there is a primary / single client for services of that function) 
  • S2 Architecture role to guide (an architect is a person who translates a customer’s requirements into a viable plan and guides others in its execution) 
  • S3 Synergy between technical and administrative activities (how you do something may be more important than what you do)

Example of relationship matrix

 

Arrangement of functions into smaller units (divisions)

BUILD-related functions are decomposed into three (process-centric, knowledge and infrastructure) due to specialisation.


So, the structure should be as shown below.


Thanks,
AS

2011-10-09

EA view on Enterprise Risk Management (ERM) platform

In many cases, it is impossible to find a single ERM product which spans all the business areas to be covered by ERM. So, it is necessary to build an internal ERM platform on top of which different ERM-related applications can be built (following the PEAS enterprise pattern – see http://improving-bpm-systems.blogspot.com/2011/04/enterprise-patterns-peas.html ).

Business architecture view

Risk must be carefully monitored (through data collection), evaluated and acted upon. This means (see also the illustration below):
  1. Enterprise business functions should be enriched to generate the risk-related data.
  2. Those risk-related data need to be collected at the enterprise data warehouse together with other business data.
  3. Some business processes need to be updated to embed risk-related activities.
  4. A set of risk-related rules, logic and risk-related knowledge should be able to use the risk-related and other business data to detect acceptable limits of risk as well as interdependencies and correlations between different risks.
  5. Some business processes for risk mitigation may be automatically activated.
  6. Risk-related indicators and alerts should be available in the form of dashboards and reports for different staff members.
  7. Staff members should be able to initiate business processes based on the observed risk-related information.



Business-generic capabilities involved

The following business-generic capabilities are involved in the ERM platform:
  • Management by processes
  • Efficient data gathering channels
  • Single version of truth for data
  • Ingesting (into the data warehouse) of external information
  • Efficient dissemination channels
  • Effortless collaboration within groups / communities of practice
  • Formalized business logic

Supremacy of management by processes

Managing any work by processes is the key business capability which allows the enterprise to address risk-related issues in a proactive manner. Risk is strongly related to how the business processes are carried out. By understanding a process (i.e. through being able to simulate it), the business may predict how the risk changes during the execution of that process. The explicit description of processes makes it possible to add a few “check-points” within any process to examine its risk-related “health”.

Business processes act as a skeleton to which the enterprise adds risk management (as shown in the picture below) – each usual activity is enriched by risk-related monitoring and evaluation.

The risk evaluation may initiate some risk mitigation processes. The risk evaluation may be as complex as necessary, and it may include simulations (e.g. value at risk and stress testing), and the conduct of statistical and scenario analysis.
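As a sketch, such a check-point can be implemented as a wrapper around a usual activity; the risk function, threshold and names below are illustrative assumptions:

```python
# An illustrative check-point wrapper: each usual activity is enriched
# with risk-related monitoring and evaluation, and an out-of-limit
# evaluation initiates a mitigation process. All names, the risk
# formula and the threshold are assumptions for the example.

def with_risk_checkpoint(activity, evaluate_risk, mitigate, limit):
    """Wrap a process activity with a risk-related check-point."""
    def enriched(context):
        result = activity(context)               # the usual activity
        risk = evaluate_risk(context, result)    # monitoring + evaluation
        if risk > limit:
            mitigate(context, risk)              # start a mitigation process
        return result
    return enriched

events = []
step = with_risk_checkpoint(
    activity=lambda ctx: ctx["exposure"] * 1.1,
    evaluate_risk=lambda ctx, r: r / ctx["capital"],
    mitigate=lambda ctx, risk: events.append(("mitigate", round(risk, 2))),
    limit=0.5,
)
step({"exposure": 80, "capital": 100})   # risk above the limit -> mitigation
```

The evaluation function can be arbitrarily complex (simulations, statistical analysis, etc.) without changing the shape of the check-point.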

IT-generic capabilities involved

The following IT-generic capabilities are involved in the ERM platform:
  • Enterprise resource planning platform
  • Data analytics
  • Business process management platform
  • Business intelligence platform
  • Business rules management platform
  • Document management platform
  • Corporate portal

Thanks,
AS

2011-09-22

Writing IT strategy


The IT strategy development logic is the following:
Business strategy -> business architecture TO-BE -> application + information architectures TO-BE -> technical architecture TO-BE -> IT (or EA) AS-IS -> IT roadmap

Main topics in the IT strategy document are the following:
  1. Executive summary (½ page – summary for senior management, e.g. the Board members)
  2. Business context (½ page – WHY the major directions in the IT strategy have been taken)
    • Strategic directions for the IT as an enabler for the company’s mission and vision
    • Potentials of the IT as a strategic driver for the company’s business
  3. IT contribution to business success (1 page – HOW IT capabilities and plans will contribute value to the business, or how a delta in the business is reflected by a delta in IT)
  4. IT guiding principles (½ page – rationales to guide IT decision making)
  5. Assessment of the current state of the IT environment
  6. IT Roadmap (WHEN and WHAT to reinforce and build)
  7. Etc.

Item 3 is illustrated by the following image:


Ideally, such dependencies can be generated from business processes and applications (i.e. from your EA repository).

In a real situation this illustration may become rather complex, so some techniques for better understanding may be necessary. For example, selecting a rectangle highlights all connected rectangles and links, as shown below:


And the final advice: be careful with arrows – people may interpret them differently.

Thanks,
AS


2011-08-17

Relationship between #BPM and #SDLC and Software Engineering (SE)

In my experience, BPM and SE are very natural friends (with the help from SOA, EA and BA) which work well together within a proper architecture.

Some basics: Any complex system is a dynamic set of artefacts (or building blocks?), e.g. in case of a typical business system those artefacts are: processes, services, events, data structures, documents, rules, roles, activities, audit trails, KPIs. Artefacts are interconnected and interdependent. We have to anticipate potential changes: policies, priorities, compliance, technology, etc. Implementation of such changes necessitates the evolution of some artefacts and the relationships between them. It must be easy to modify all artefacts and relationships without causing any negative effects.

My main architectural principles for creating flexible systems:
  • All artefacts must be evolved to become digital, external, virtual and components of clouds
  • All artefacts must be versionable throughout their lifecycle
  • All relationships between these artefacts are modelled explicitly
  • All models are made to be executable
(See http://www.improving-bpm-systems.com/pubs/AS-AW08-keynote.pdf)

So, BPM (with the help from BA, see http://improving-bpm-systems.blogspot.com/2011/02/explaining-ea-business-architecture.html) can derive the artefacts. And SE is responsible for creating good services for all artefacts.

Then BPM, EA and SE have to work together to make explicit and executable models. The best example of executable models is executable business processes. Any business process is a relationship between many artefacts: who (roles) is doing what (business objects), when (business events), why (business rules), how (business activities or other processes) and with which results (KPIs). At the same time, such a process is an explicitly-defined coordination of services to create a particular result. So, there is a recursive relationship between services and processes:
  • all our processes are services,
  • some operations of a service can be implemented as a process,
  • a process includes services in its implementation.
This is the basis of a modelling procedure (the core of SE) whose purpose is to analyse a building block (a process or just an activity – what is it supposed to do, and should it be considered as a whole?) and to synthesize its implementation (how does it carry out its function, and should it be considered as a composite?) as the explicit coordination of other building blocks (processes or just activities).

It is an iterative procedure – it can be applied until only indivisible building blocks (i.e. activities) are left. During modelling it is necessary to collect and refine the different artefacts. To avoid getting bogged down in detail, it is useful to construct building blocks recursively, like Russian dolls.
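The recursive “Russian dolls” structure can be sketched as follows; the claim-handling example and all names are hypothetical:

```python
# A minimal sketch of the modelling procedure: each building block is
# either an indivisible activity or an explicit coordination (process)
# of other building blocks. The example process is an assumption.

from dataclasses import dataclass, field

@dataclass
class BuildingBlock:
    name: str
    parts: list = field(default_factory=list)  # empty => indivisible activity

    def is_activity(self):
        return not self.parts

    def flatten(self, depth=0):
        """Walk the recursive structure, yielding (depth, name)."""
        yield depth, self.name
        for part in self.parts:
            yield from part.flatten(depth + 1)

# Synthesis: a process is a composition of services, some of which are
# themselves implemented as processes (the recursive relationship).
handle_claim = BuildingBlock("handle-claim", [
    BuildingBlock("register-claim"),                       # activity
    BuildingBlock("assess-claim", [                        # sub-process
        BuildingBlock("check-policy"),
        BuildingBlock("estimate-damage"),
    ]),
    BuildingBlock("pay-out"),
])

for depth, name in handle_claim.flatten():
    print("  " * depth + name)
```

The analyse/synthesize iteration stops exactly when every leaf of such a tree is an indivisible activity.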

Of course, owing to the very nature of modelling as a creative problem-solving human activity, each person does it in his/her own way, and for the same subject two different people may produce two different models. The proposed modelling procedure can’t change this, but it does help to uncover the same artefacts.

More details are in my book http://www.improving-bpm-systems.com/book

Thanks,
AS

2011-07-30

E-Tunisia / e-consultations: solution architecture

General


 Continuation from the previous post - E-Tunisia / e-consultations: overview


Definition


E-consultations constitute interactive “tell-us-what-you-think” on-line services where ordinary citizens, civic actors, experts, and politicians purposively assemble to provide input, deliberate, inform, and influence policy and decision making.

Privacy considerations

  1. Only authorized persons can actively contribute (i.e. add some text) to e-consultation services.
  2. The identity of a person may be hidden.
  3. Enrollment will include identity verification.
  4. Furthermore, a person can hide his/her identity behind an avatar.
  5. The correspondence between an avatar and the identity is secret, but may be disclosed in case of misbehavior. Example: Facebook.
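A minimal sketch of the avatar scheme, assuming an opaque token and a separate, access-controlled store for the avatar-to-identity correspondence (a real escrow mechanism would need a proper security design):

```python
# An illustrative sketch of points 3-5: after identity verification a
# person receives an opaque avatar; contributions carry only the avatar;
# the correspondence is secret unless a justified misbehaviour case
# arises. The token format and storage are assumptions.

import secrets

class AvatarRegistry:
    def __init__(self):
        self._identity_of = {}   # avatar -> verified identity (kept secret)

    def enroll(self, verified_identity):
        """After identity verification, issue an opaque avatar."""
        avatar = "citizen-" + secrets.token_hex(4)
        self._identity_of[avatar] = verified_identity
        return avatar

    def disclose(self, avatar, justified_misbehaviour_case):
        """Disclosure only for a justified misbehaviour case."""
        if not justified_misbehaviour_case:
            raise PermissionError("correspondence is secret")
        return self._identity_of[avatar]

registry = AvatarRegistry()
avatar = registry.enroll("Jane Citizen, ID 12345")
# Contributions are published under `avatar`; the identity stays hidden.
```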

Nomenclature of e-consultation services

 

This nomenclature may not be complete yet.

 

Question and answer discussion forums

It is a free-form, on-going thematic discussion initiated within a community of interest. Each contribution is named. A discussion may be closed (only within the community) or open to everyone (or even to the Internet). Examples: discussions on LinkedIn.



On-line polls

A time-bounded questionnaire.



E-petitions or on-line testimonies

A person (or an association) initiates a formal demand to public services. Such a demand should start a process which should lead to a meaningful response. People, other than the initiator, can express their opinion (support or not) about the demand. Example: Reporting Damaged Roads and Paths https://www.contact.act.gov.au/app/answers/detail/a_id/22/~/reporting-damaged-roads-and-paths

E-panels

A time-bounded nominee-only group open discussion on issues of public interest.

Editorial consultations

A time-bounded multi-authoring of a document or a set of documents. There are several options about who can edit the text. A possible option is that many people can contribute to the content via comments and small (editorial) group can modify the text to reflect those comments.



Use of the e-government platform

E-consultation services are applications implemented on top of the e-government platform (see chapter 5). The latter provides different common services for facilitating the implementation of such applications and keeping the same look and feel for a better user experience. Each application is self-contained, developed in accordance with the platform’s rules, and may evolve without causing negative effects to others. The number of applications within the platform is not limited.


Enabling the public-private partnership

A systematic approach to critical IT issues such as authorization, data security, access control, etc. clears the way for re-use of available data. It will be possible to estimate how the disclosure of some of those data will affect the level of protection of the remaining data.

The ability to open some data makes it possible to employ private investments for improving some applications (it is assumed that all applications are developed mainly by centralized capital investments).

Such investment attraction should be estimated for each application.

The big picture of e-government platform

E-consultation services are e-government services. The e-government (as one of its functions) provides (via ICT) governmental services to partners (citizens, enterprises, associations, etc.). To introduce the e-government without disturbing existing governmental applications, it is proposed to position the e-government as a layer between the partners and the existing governmental applications (as shown in the figure below).


The partners-facing part of the e-government is a collaborative extranet which is similar to popular social networking tools (Facebook, LinkedIn, etc.) and e-banking. Its main functionalities are the following:
  • secure repository for short messages, documents, and video;
  • dedicated (including role-based) information and functionality;
  • diverse services as small pluggable applications;
  • direct channel to the governmental business processes; and
  • unified view of central, regional and local governments.

The government-facing part of the e-government is the integration and coordination capabilities which are necessary to fulfill needs of partners.

It is important that the whole e-government is separated from the existing governmental environment. This separation means operational and evolutionary independence.
The common functionality of the e-government platform is presented below.


Expected advantages:
  • Quick implementation
  • Easier maintenance
  • Explicit security
  • Uniformity for the users


Implementation principles

  • Keep the conceptual integrity.
  • Take into account socio-technical aspects, because how you do something is sometimes more important than what you do.
  • Unify the infrastructure and reach different mobile tools.
  • Systematically use open source software.
  • Provide security at the level of private banking.
  • Ruthlessly validate the implementation with international experts, hacker groups and political parties.
  • Develop in an agile manner and deploy step-by-step within the common architecture.
  • Guarantee total traceability and records management.
  • Exchange electronic documents.

Infrastructure implications

To cover the population, it is necessary to establish a network of social computing centres. The latter may be located at local community premises (e.g. a public library, “hotel de ville”, etc.). Those centres may also provide wireless access points.

Possible next actions

  1. Validate this architecture
  2. Organise wide consultations with all involved partners
  3. Solicit the feedback from international experts
  4. Launch the feasibility study

Thanks,
AS

2011-07-29

E-Tunisia / e-consultations: overview


Continuation from the previous post - E-government for Tunisia (E-Tunisia) : Help to move forwards

We, the people, are the government



E-consultations constitute interactive “tell-us-what-you-think” on-line services where ordinary citizens, civic actors, experts, and politicians purposively assemble to provide input, deliberate, inform, and influence policy and decision making. E-consultation is complementary to existing practices.

E-consultations are also more formal and structured than discussions in the informal virtual public sphere. They tend to have a set duration and agenda, employ moderators, and have the topics for discussion pre-defined by the host.

There are five common types of e-consultations:
  1. question and answer discussion forums
  2. on-line polls
  3. e-petitions or on-line testimonies
  4. e-panels
  5. editorial consultations
Usual challenges:
  1. population coverage
  2. integrity
  3. visibility
  4. transparency and disclosure obligations are vital (with confidentiality only applying on matters of a personal nature)
  5. usual distrust towards new electronic applications
  6. balance of central and local power
Local challenges:
  1. 10 million people
  2. huge diversity in education and income
  3. political instability
  4. lack of a comprehensive Internet infrastructure
Local opportunities:
  1. the top African country in the UN e-government study 2010
  2. high level of the IT local resources

The solution architecture will be covered in another post.

Thanks,
AS

E-government for Tunisia (E-Tunisia) : Help to move forwards

It has been proven that the deployment of e-government [E-government is the use of information and communication technologies (ICTs) to improve the activities of public sector organisations] brings the following advantages:
  • streamlining of the interactions of the citizens and business with the central, regional and local governments;
  • increase in the performance of workers at governmental agencies;
  • reduction in the possibilities for corruption.
How can an e-government implementation help Tunisia (which is already the top African country according to the UN e-government survey) to move forwards at this moment in its history?

Which e-government services (out of about 1000 items in e-government catalogues) should be the first priority?

Today, it appears that Tunisia urgently needs a much improved handling of political rights, provisioning of social security and, in general, establishment of trust between the population and the public sector organisations. Bearing this in mind, a list of potential e-government capabilities by domain could be the following.

Political domain:
  • E-consultation as an umbrella for different means of expressing the voice of the people (similar to direct democracy): e-polls, e-voting, etc.  -- E-Tunisia / e-consultations: overview
  • Transparency of the legislative powers (e.g. parliament, deputies, decisions)
  • Easy access to the legal knowledge base
  • Authorization of demonstrations
  • Voting rights (different levels, as well as expatriates)
Social domain:
  • Management of who is eligible for different types of social support (“welfare”, social housing, etc.)
  • Management of correct provision of social support
Citizen-to-government and business-to-government communication domain:
  • Transparency of the governmental business processes with respect to promised deadlines, use of objective rules, and traceability of internal work
  • Use of electronic means for exchange through secured documents
Potential benefits for African countries:
  • Many elements of this e-government project will be applicable in many African countries
Potential benefits for donor countries:
  • Test some e-government solutions in a green-field project
Potential contribution from the development community:
  • Coordination and help with overall architecture / technical solutions

Thanks,
AS

2011-07-18

Linkedin: In one word, what is the single largest problem facing #entarch?

A quick statistical analysis of the responses (with some categorization, e.g. “cacophony” -> “chaos” -> “confusion”) in the LinkedIn discussion "In one word, what is the single largest problem facing Enterprise Architecture?" http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=43593317&gid=36781&commentID=45630743&trk=view_disc
  • 373 unique replies from unique people (the first reply is used if not explicitly specified)
  • signal to noise ratio is almost 1 to 4


So, a possible résumé of the EA status is “immature (discipline, practice, etc.) with problems in many aspects (acceptance, people, tools, definition, etc.) which is requested to deliver results with higher speed”. Maybe executability will help?

Disclosure: executability is my choice.

Thanks,
AS

2011-07-13

Enterprise patterns: Petit Informaticien (PI) for "What is the key to getting business value from IT?"

As a reply to the EBIZQ.net question "What is the key to getting business value from IT?" http://www.ebizq.net/blogs/ebizq_forum/2011/07/what-is-the-key-to-delivering-business-value-with-it.php

I use the algorithm or enterprise pattern “petit informaticien” (PI):

-1) learn the big picture
0) prepare an initial set of tools to implement incrementally business solutions within that big picture
1) systematically visit the users (similar to “milking tour”)
2) listen to their needs
3) map their needs into the big picture
4) quickly deliver a maybe-not-perfect-but-useful business solution
5) get the feedback
6) improve business solutions
7) sharpen tools

Thanks,
AS

2011-07-08

Enterprise patterns: CAPS

A quote from my previous blogpost http://improving-bpm-systems.blogspot.com/2011/07/1-relationships-between-enterprise.html about Enterprise Architecture (EA) and cloud computing: Ideally, a cloud-optimised solution is a set of interrelated and interconnected services which are good cloud citizens (or highly cloudable).

This blogpost describes a pattern Cloud-Aware Processes and Services (CAPS) to deliver cloud-friendly solutions instead of monolithic applications. The importance of this pattern is demonstrated by the recent post “All cloud roads lead to applications” http://news.cnet.com/8301-19413_3-20075526-240/all-cloud-roads-lead-to-applications/. This blogpost is based on the materials from my book http://www.improving-BPM-systems.com/book.


BPM helps to deliver cloud-friendly distributed systems


Enterprise BPM systems/solutions (see http://improving-bpm-systems.blogspot.com/2009/04/should-we-consider-third-forgotten-bpm.html) architected with a multi-layer implementation model (see the figure below) can serve as an example of cloud-friendly distributed systems.



In this model, each layer is a set of services, each of which addresses particular concerns. The services are cloudable.

The business data layer comprises many pieces of information – names, dates, files, etc. – which are stored in existing repositories (e.g. databases, document management systems, web portals, directories, e-mail servers, etc.). Services at this layer are stateless, contain no business logic (although they may contain some access logic) and, usually, co-locate with their underlying databases. They are highly cloudable.

The business objects layer comprises the many objects specific to a particular business, e.g. a business partner, a product, etc. This layer hides the complexity for manipulating the objects, which are actually collections of data together with any dependencies between them. Again, services at this layer are stateless, contain no business logic (although they may contain some technical transformation logic), and are implemented as simple compositions. They too are highly cloudable.

The business routines (or regulations) layer comprises the actions which must be carried out on the business objects to perform the business activities. Services at this layer are stateless and implemented as complex compositions. The latter are defined in a normal programming language (e.g. Java, Python), an interpretive language (e.g. Jython) or, even, in BPEL. A specialised environment (actually a service called a “robot”) may be needed to execute these services, but this “robot” is rather cloudable.

The business execution layer carries out the business processes. The principal service at this layer is a business process execution engine. It executes business processes which are rather explicit compositions (e.g. SAP uses a BPMN subset as a language for executable business processes – see http://improving-bpm-systems.blogspot.com/2011/06/first-impression-sap-netweaver-bpm-tool.html). Any business process execution engine is a stateful service, but each business process instance can be executed independently. Hence, this service is cloudable. Also, business processes are modelled taking into consideration the concept of idempotency (see http://improving-bpm-systems.blogspot.com/2011/07/1-relationships-between-enterprise.html).
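Idempotency at this layer can be sketched as follows: if the engine re-delivers a request (e.g. after a failure), the step replays the stored result instead of repeating the side effect. The request-id scheme is an assumption:

```python
# A minimal sketch of idempotency for a process step: re-delivery of the
# same request (identified here by "process instance id / step id", an
# assumed scheme) does not repeat the side effect.

class IdempotentStep:
    def __init__(self, action):
        self.action = action
        self._done = {}   # request id -> previously computed result

    def execute(self, request_id, payload):
        if request_id in self._done:          # duplicate delivery
            return self._done[request_id]     # replay the stored result
        result = self.action(payload)
        self._done[request_id] = result
        return result

charges = []
charge = IdempotentStep(lambda amount: charges.append(amount) or len(charges))

charge.execute("instance-42/step-3", 100)
charge.execute("instance-42/step-3", 100)   # retried by the engine
print(charges)  # -> [100] : the side effect happened exactly once
```

Modelling steps this way is what lets the engine safely re-execute work after a partial failure in a cloud zone.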

The business monitoring layer analyses the execution of the business processes. A large amount of data (events and audit trails produced as the result of execution) is treated to extract any correlations and meaningful information. Services at this layer are both stateful and stateless, but they mainly operate in the “background” and thus are rather cloudable.

The business intelligence layer implements enterprise-wide planning, performance evaluation and control actions applied to the business processes. Services at this layer are cloudable.

A tip to remember the layers is the following: Data, Objects, Routines, Execution, Monitoring, Intelligence – DO-RE-MI.
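The DO-RE-MI tip can be captured in a small table; the annotations below merely summarise the layer descriptions above:

```python
# The DO-RE-MI layers as a small data sketch: each layer is a set of
# services addressing particular concerns; the stateless layers are the
# most cloudable. The annotations restate the text above, not a standard.

LAYERS = [
    ("Data",         "stateless, no business logic",           "highly cloudable"),
    ("Objects",      "stateless, simple compositions",         "highly cloudable"),
    ("Routines",     "stateless, complex compositions",        "rather cloudable"),
    ("Execution",    "stateful engine, independent instances", "cloudable"),
    ("Monitoring",   "stateful and stateless, background",     "rather cloudable"),
    ("Intelligence", "planning, evaluation, control",          "cloudable"),
]

assert "".join(name[0] for name, _, _ in LAYERS) == "DOREMI"
for name, nature, cloudability in LAYERS:
    print(f"{name:<12} {nature:<40} {cloudability}")
```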

The multi-layer implementation model and some technologies


The figure below shows the relation of the multi-layer implementation model to some other technologies. Normally, some services are accessible from a portal or workplace. They “float” in an Enterprise Service Bus (ESB). The latter is used only for service-to-service connections at the technical level. Ideally, an ESB should be based on a solid computing basis which can be provided by a grid, modern virtualisation infrastructure or cloud computing.




Extra considerations about the composition of services (or integration)


Some of the services mentioned above can be qualified as compositions of other services. In addition, interactive services from a portal or workspace are also, in general, compositions. For example, a user may invoke different services whilst staying on the same page (thanks to AJAX).
Any composition of services may manipulate business data and thus may disclose them. The following considerations may help to reduce the risks related to data disclosure:

 

Towards greater agility


The multi-layer implementation model is a tool which helps an enterprise to design process-enabled solutions
  • in business terms, but not in terms of IT tools,
  • via the combination of services,
  • in a structured way, and
  • with a high level of built-in flexibility.
In my experience, the multi-layer implementation model and the modelling of executable business processes (see http://www.slideshare.net/samarin/towards-executable-models-within-bpm-presentation) are pillars for agility. Today they can be reinforced by cloud computing, and so it is now possible to achieve synergy (and thus even greater agility) between
  • Business Architecture (via BPM and executable business processes),
  • Application Architecture (via SOA and the multi-layer implementation model) and
  • Technical Architecture (via cloud computing).
And of course, it is the responsibility of EA to ensure that all of the above-mentioned architectures work together.

Thanks,
AS

2011-07-01

Relationships between Enterprise Architecture (EA, #entarch) and #cloud computing

Big picture


The effective use of cloud computing at the enterprise level is a two-way street:
  1. the use of cloud should be architected for the needs and realities of a particular enterprise and
  2. the application portfolio, technologies, etc. used in an enterprise should be adapted to achieve the full potential of cloud computing.
In general, EA deals with a system of systems. In general, those systems are distributed – each of them is an interrelated and interconnected set of business artefacts [events, rules, processes, documents, etc.] and technical artefacts [servers, OSes, databases, storage, applications, etc.] spread over the network. With cloud computing, the network becomes rather versatile (many zones with different characteristics) and transparent (it is easy to move some artefacts from one zone of the network to another).

Considering that EA knows all the artefacts and (ideally) all the relationships between them, EA should also know the impact (implementation time, risks, cost, performance, etc.) of allocating particular artefacts to particular zones, in order to optimise (easy to create, easy to operate, easy to maintain and easy to evolve) the allocation of all artefacts.

A simple allocation model


Let us consider cloud as a set of the following zone types (they are named using different colours):
  1. classic on-premises computing centre – zone type GOLD;
  2. on-premises private cloud – zone type ORANGE;
  3. off-premises and enterprise-managed private cloud – zone type GREEN;
  4. off-premises and service-provider-managed private cloud – zone type BLUE;
  5. public cloud zone type (off-premises and service-provider-managed by definition) – zone type VIOLET.
Although some of these zone types (e.g. the VIOLET one) may never exist in a particular enterprise, all of them are listed for completeness. The BLUE and VIOLET zone types are built with a set of trusted service providers. The term “zone types” is used because an enterprise may have several zones of the same type (e.g. more than one provider for VIOLET zones).

The allocation of artefacts to zone types is governed through a decision framework which provides a set of rules for putting a particular artefact into a particular zone type. See http://improving-bpm-systems.blogspot.com/2011/12/enterprise-pattern-cloud-ready.html

Practically all artefacts may reside in any of these zone types. Through the continuous virtualisation of technical artefacts, almost all of them may be moved from the GOLD to the ORANGE and GREEN zone types, and then to the BLUE and VIOLET zone types. Some artefacts, such as applications, may need to be transformed before the move.

The decision framework takes into account factors such as
  • data sensitivity,
  • security of data,
  • network latency,
  • the intensity of use,
  • artefact architecture,
  • the technologies involved,
  • dependencies between services,
  • SLAs,
  • BCDR requirements,
  • the existing zone (including its operating cost and risks),
  • the target zone (including its operating cost and risks),
  • the cost of moving,
  • etc.
Also, the decision framework reflects the business strategy, e.g. an organisation which anticipates a rather aggressive decentralisation shouldn’t promote the use of the ORANGE zone type.
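As a rough sketch, the combination rules of such a decision framework (following the rules from the CREEP post referenced above) can be expressed in a few lines of Python; the zone types are from the post, while the factors and the example ranking values are hypothetical:

```python
# A minimal sketch of the allocation decision framework. The per-factor
# verdicts and the service ranking below are invented for illustration.

ZONE_TYPES = ["GOLD", "ORANGE", "GREEN", "BLUE", "VIOLET"]

def recommend(verdicts):
    """Combine per-factor verdicts ('OK' | 'maybe' | 'NO') for one zone type
    into a single recommendation, following the CREEP rules:
    1. at least one 'NO'      -> 'NO'
    2. no 'OK', some 'maybe'  -> 'maybe' (further investigation needed)
    3. some 'OK', rest 'maybe'-> 'OK'
    """
    if "NO" in verdicts:
        return "NO"
    if "OK" not in verdicts:
        return "maybe"
    return "OK"

# Hypothetical ranking of one IT service against three factors per zone type.
service_ranking = {
    "GOLD":   ["OK", "OK", "OK"],
    "ORANGE": ["OK", "maybe", "OK"],
    "GREEN":  ["maybe", "maybe", "maybe"],
    "BLUE":   ["OK", "maybe", "NO"],    # e.g. data sensitivity rules it out
    "VIOLET": ["NO", "NO", "maybe"],
}

for zone in ZONE_TYPES:
    print(zone, "->", recommend(service_ranking[zone]))
```

The rules are applied per zone type, so adding another provider (i.e. another GREEN or BLUE column) just means ranking the same service once more.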

The artefacts (business and technical) mentioned above are actually services which implement related artefacts, and sometimes there is a non-trivial dependency chain between business and technical artefacts. For example, business documents (business artefact) are implemented by Enterprise Content Management (ECM) services (technical artefacts) which require a database, file storage, application server, backup, monitoring, etc. (technical artefacts).

Services are useful for building cloud-optimised solutions


Ideally, a cloud-optimised solution is a set of interrelated and interconnected services which are good cloud citizens (or highly cloudable). In reality, however, there are still many classic monolithic applications which are actually conglomerates of many potential services, and therefore it is not easy to evaluate how cloudable they are. For this reason, any approach for replacing monolithic applications (existing and/or new) by coordinated sets of services is very welcome. Some of the related concepts are mentioned below.

Services (defined as explicitly-defined and operationally-independent repeatable units of functionality that create a particular result) and, especially, stateless services are the best candidates for clouds (i.e. they are highly cloudable) – just add more instances but be careful about dependencies.

SOA (defined as an architectural approach for constructing software-intensive systems from a set of universally interconnected and interdependent services) is a way of thinking in terms of services (e.g. large, more functional, services are assembled from small, less functional, ones).

Enterprise Service Bus (ESB) is the best way to provide the universal connectivity mentioned in the previous paragraph. However, it is necessary to avoid trying to solve all integration problems with an ESB – see http://www.slideshare.net/samarin/example-use-of-bpm-to-monitor-an-esbcentric-integration

Idempotency (defined by Wikipedia as the property of certain operations to be applied multiple times without changing the result) applied to services helps to build reliable compositions of services – see the IRIS pattern from my book. Recently, the power of idempotency was demonstrated during the April 2011 AWS outage – see http://www.twilio.com/engineering/2011/04/22/why-twilio-wasnt-affected-by-todays-aws-issues/. Also, note that the SAP BIT420 training course has an example of idempotent services.
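As an illustration of the idea (not tied to any particular product), an idempotent service operation can be sketched as follows; the deduplication store and the client-supplied request id are assumptions for the example:

```python
# A minimal sketch of an idempotent service operation: a retry with the
# same request id returns the stored result instead of re-applying the
# change. All names here are illustrative, not from any specific product.

_processed = {}  # request_id -> result (the deduplication store)

def credit_account(request_id, account, amount, balances):
    """Credit `amount` to `account`. Calling this again with the same
    request_id has no further effect - the stored result is returned."""
    if request_id in _processed:
        return _processed[request_id]
    balances[account] = balances.get(account, 0) + amount
    result = balances[account]
    _processed[request_id] = result
    return result

balances = {}
credit_account("req-1", "acc-42", 100, balances)
credit_account("req-1", "acc-42", 100, balances)  # retry: no double credit
print(balances["acc-42"])  # 100, not 200
```

This is why idempotent services compose reliably: a coordinator may safely re-invoke a service after a timeout or failure without risking a duplicated effect.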

The functioning of any enterprise is the coordination of many activities (human and automated) – see http://improving-bpm-systems.blogspot.com/2011/02/explaining-ea-business-architecture.html. Considering that the majority of those activities are actually invocations of services, it is possible to say that the functioning of the enterprise is the coordination of many services via different techniques: token-based, rule-based, event-based, data-based, manual-based, etc. If all coordination is made explicit (via BPM), this will provide the necessary information about all static and some dynamic relationships between services.

Typical questions about cloud computing


The following provides a typical list of enterprise-wide concerns relative to cloud computing.
  • How can we know if cloud computing is even appropriate for our company?
  • Which systems, applications and business processes are the best candidates for cloud outsourcing?
  • How can we effectively manage the interrelationships between systems, business processes and data we want to outsource with those that will remain in-house?
  • What would be the most effective cloud configuration for our company (private, public, hybrid, community, etc.)?
  • How do we protect sensitive agency data in the cloud?
  • How do we comply with industry records management requirements in a cloud environment?
  • What’s the best way to assess and manage our company’s risk profile in a cloud environment?
  • What are the actual costs of our IT operations today? What cost savings can be expected by transitioning to cloud computing?
  • How well are our current corporate IT investments performing? What performance improvements are possible in a cloud environment?
  • Where do we start? What are the steps to get from where we are today to a cloud environment?

It is clear that the simple allocation model (in which the decision framework is filled in with your rules) and the ability to deliver solutions as a set of services will be of considerable help in addressing those concerns systematically.

Thanks,
AS

2011-06-25

Practical process patterns: DIP


Decompose Into Patterns (DIP)


A friend of mine asked me to have a look at his first attempt at business process modelling in BPMN. The modelled process is well-known – “gestion de sinistres” or “claim processing”.

An apartment owner/leaseholder who has suffered an accident informs the property managing company (régie); the managing company calls a repair service and validates the repair cost with the insurance company. The managing company then controls the work done by the repair service and asks the insurance company to reimburse the cost. The latter transfers the money to the former, which pays the invoice.

The following picture is an attempt to model this process.

This diagram does not show the structure of the process and is thus not easy to understand. Actually, there are four big steps in this process:
  1. Submission of a claim to the managing company
  2. Selection of an acceptable repair service by the managing company
  3. Repair and control of the repair
  4. Submission of the invoice from the managing company to the insurance company and the subsequent payment

For each of these steps there is a practical process pattern to follow.
  1. Submission interface (SI) – http://www.slideshare.net/samarin/process-practical-patterns-si
  2. Proposal, Action, Reaction (PAR) – see my book
  3. Initial Process Skeleton (IPS) – see my book
  4. Submission interface (SI) – http://www.slideshare.net/samarin/process-practical-patterns-si

So, decompose your process and try to apply practical process patterns – maybe not exactly as they are, but slightly modified for a particular use.

Thanks,
AS

2011-06-17

EBIZQ.NET: Should the language of #BPM be the language of business?

<discussion ref="http://www.ebizq.net/blogs/ebizq_forum/2011/06/should-the-language-of-bpm-be-the-language-of-business.php" />

In my experience, it is the best option so far. BPM (how to use processes to manage the enterprise) is good as the language of business for the following reasons:
  1. BPM main “tool” -- process (an explicitly-defined coordination of activities to create a particular result) – makes the business EXPLICIT.
  2. BPM makes its processes EXECUTABLE (what you model is what you run) – thus predictable (if you want).
  3. Processes in BPM can be rather flexible (see http://improving-bpm-systems.blogspot.com/2010/12/illustrations-for-bpm-acm-case.html ).
  4. BPM uses the business artefacts: events, rules, roles, data, documents, KPIs, audit trails, activities, etc. – practically everything from business architecture (see http://improving-bpm-systems.blogspot.com/2011/02/explaining-ea-business-architecture.html ).
  5. With BPMN, BPM may express different practical patterns which are applicable in different business areas (those patterns are easier than well-known workflow patterns).
  6. Properly implemented BPM can considerably speed up the evolution of the business.
  7. If the business wants to share its language with IT, then BPM works well with EA, PMO, SDLC, SOA, etc.

Certainly, BPM favours control-based coordination, which is not sufficient in all cases. Nevertheless, BPMN also allows some event-based coordination (see http://improving-bpm-systems.blogspot.com/2011/01/explicit-event-processing-agents-in.html ).

Although BPM has no terminology commonly agreed between BPM gurus, those differences are not dramatic.

Thanks,
AS

2011-06-11

First impression -- #SAP #NetWeaver #BPM tool

SAP NetWeaver BPM 7.2 looks rather good (after the 4-day TZBPM training course):
  • Eclipse-based design environment or "composition environment"
  • Good naming conventions by default
  • Business view and technical view
  • Explicit definition of events in addition to several other artefacts
  • One pool is one process
  • Direct interpretation of BPMN without compiling it into BPEL
  • A small and reasonable subset of BPMN shapes
  • Three ways to implement the UI (and SAP is working on another one)
  • Integration with the existing run-time environment (not easy to handle but doable)
Potential improvements:
  1. Inter-process communication – no explicit way to say that this event is consumed by that process
  2. More comprehensive environment for human tasks (e.g. for processing of escalations without BPMN)
  3. Active and non-active pools
  4. Link the events to SAP PI
Thanks,
AS

2011-04-17

Enterprise patterns: PEAS

Related blogposts are available at http://improving-bpm-systems.blogspot.ch/search/label/PEAS

I noticed an enterprise pattern: Platform-Enabled Agile Solutions (PEAS). It is applicable to situations in which it is highly desirable to advance with a new enterprise-wide initiative in an incremental way. This means that developing the final user requirements is virtually impossible, because the users just do not know exactly what should be built and prefer to try the new things in real life. Moreover, different departments (or target communities) advance at their (obviously different) speeds. The classic approach to IT project management – define everything up-front – just does not work.

From the systemic point of view, it is necessary to provide many solutions (SOLs) which have a lot of similar functionality. The provisioning of SOLs should be carried out at the pace of the target community of practice. At any moment in time, each community may have a different pace and may need different functionality.


The proposed architecture (see the illustration above) is based on the following considerations:
  • The platform must standardise and simplify core elements of the future enterprise-wide system. For any elements outside the platform, new opportunities should be explored using agile principles. These twin approaches should be mutually reinforcing: the platform frees up resources to focus on new opportunities, while successful agile innovations are rapidly scaled up when incorporated into the platform.
  • An agile approach requires coordination at a system level.
  • To minimise duplication of effort in solving the same problems, there needs to be system-wide transparency of agile initiatives.
  • Existing elements of the platform also need periodic challenge. Transparency, publishing feedback and the results of experiments openly, will help to keep the pressure on the platform for continual improvement as well as short-term cost savings.

In this pattern, technical concerns are decoupled from business concerns. All of those concerns are addressed TOGETHER by the enterprise architecture.

Added later: the following illustration shows that the amount of effort for the implementation of solutions (which is proportional to "Functionality" x "Scope span") is reduced by the platform. Of course, the latter is a "common good" and a decision to build a platform should be taken strategically.



Thanks,
AS

2011-02-19

Explaining EA: business architecture basics 1


Note: a revised version of these three posts is available at http://www.improving-bpm-systems.com/pubs/Explaining-EA-BA-basics_v7.pdf

The purpose of this post is to provide an explanation about Business Architecture (BA). Informally speaking, BA defines how work gets done within an enterprise. How work gets done is, of course, not completely unknown, but the knowledge is diffused throughout different instructions, strategic papers, reports, e-mails and in peoples’ heads. The aim of BA is to make this knowledge explicit, i.e. formal, externalized and operational, so it can be used for decision making, operating control, daily work, knowledge transfer, etc.

First, it is necessary to achieve a common understanding about certain concepts (and the relationships between them) used for constructing BA. Examples of such concepts are: function, process, service, capability, etc. These concepts are used to provide different views of the enterprise. It is important that these views are coherent and that interdependencies between them are explicit.

BA is a part of Enterprise Architecture (EA), and usually BA is the least understood / developed / implemented part of EA.

1 General


An enterprise creates a result which has value to a customer who pays for this result. The enterprise acts as a provider (supply-side) and the customer acts as a consumer (demand-side).

There is a (business) transaction between the provider and the consumer. From the point of view of the consumer (the outside-in view) the transaction is bounded by the pair “request and result”, e.g. from making an order to receiving goods. From the point of view of the provider (the inside-out view) the transaction is a set of several distinct activities (or units of work) which function together in a logical and coordinated manner to satisfy / delight the consumer. These activities are carried out in response to the consumer’s request which is an external business event for the provider.

2 Business functions


The collection of an enterprise’s activities serves as the foundation for the discovery of business functions (functions deliver identifiable changes to assets). Each function is an abstract and self-contained grouping of activities that collectively satisfy a specific operational purpose (e.g. management of relationships with partners). Functions are unique within the enterprise and should not be repeated. Some functions can be decomposed into smaller groups of activities, and thus the function architecture has a hierarchical structure. The structure of functions is not always the same as that of the organisation chart; in many cases, some organisational units can span several functions. Furthermore, organization charts may change while the function architecture does not.

A business function typically has the suffix "management" in its name (e.g. "Customer Relationship Management"), but its name can also be a simple noun (e.g. "Marketing"); usually, a function's name denotes something that is performed continuously. Some examples of business functions (from http://www-935.ibm.com/services/us/imc/pdf/g510-6163-component-business-models.pdf) are given below.



The functional view emphasizes WHAT the whole enterprise does to deliver value to the customer (without the organizational, application, and process constraints). Usually, the hierarchical structure of business functions is very static (with a low rate of change). Meanwhile, business processes can change more frequently as a result of business process improvement or re-engineering initiatives.
The function architecture can be used in a number of ways:
  1. to understand how organisational units are supporting each function and to identify instances where a function is supported by several organisational units (or is not supported by any organisational unit);
  2. to reveal how functions are currently automated, including occurrences of where there is an overly complex use of applications (e.g. multiple applications) and when there is no automation of functions in place;
  3. to understand how assets (information) flow between functions, and to map out which functions produce information, which function(s) consume information and where there is no clear understanding of information movement and ownership;
  4. to clarify how business processes can be constructed;
  5. to determine which business performance metrics should be used.
In some senses, functions are the players in a team (i.e. the enterprise), but it is not clear how they are going to play together.

3 Value-streams


The collective use of activities to satisfy a customer’s request leads to the notion of a value-stream which is an end-to-end collection of those activities (both value-added and non-value-added) currently required by an enterprise to create a result for a customer. Value-streams are named according to an initiating event and its result. A few examples of value streams are provided below (mainly from www.enterprisebusinessarchitecture.com):
  • Prospect-to-Customer
  • Order-to-Cash (order fulfilment process)
  • Manufacturing-to-Distribution (manufacturing process)
  • Request-to-Service
  • Design-to-Build
  • Build-to-Order
  • Build-to-Stock
  • Insight-to-Strategy
  • Idea-to-Concept
  • Concept-to-Product
  • Product-to-Launch
  • Initiative-to-Results
  • Relationship-to-Partnership
  • Forecast-to-Plan
  • Requisition-to-Payables (procurement process)
  • Resource availability-to-Consumption
  • Acquisition-to-Obsolescence
  • Financial close-to-Reporting
  • Recruitment-to-Retirement
  • Awareness-to-Prevention

Value-streams are directly linked to the enterprise’s aspirations – its vision and related “ends” chain (see http://www.omg.org/spec/BMM/): desired results, goals and objectives. Ideally, each value-stream should align with at least one long-range objective and its business performance metrics [key performance indicators (KPIs)]. For example, the success of the “Order-to-Cash” value-stream may be measured by an objective such as “96% of orders delivered within 3 days”. If this value-stream’s actual performance is delivering only “90% of orders within 3 days” then a corrective action should be taken (e.g. a new strategic initiative is developed and its priority determined).
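The KPI check described above can be sketched in a few lines; the sample delivery data below is invented for illustration:

```python
# An illustrative check of the value-stream objective mentioned above
# ("96% of orders delivered within 3 days"); the order data is made up.

delivery_days = [1, 2, 4, 3, 2, 5, 1, 3, 2, 2]  # hypothetical sample

target_share, target_days = 0.96, 3
actual_share = sum(d <= target_days for d in delivery_days) / len(delivery_days)

if actual_share < target_share:
    # In the terms of the text: trigger a corrective action, e.g. develop
    # a new strategic initiative and determine its priority.
    print(f"KPI missed: only {actual_share:.0%} of orders "
          f"within {target_days} days; corrective action needed")
```

The point is that the objective is measurable per value-stream, so the comparison of actual against target performance can be automated.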

In addition to the reason WHY a value-stream exists, each value-stream has an explicit HOW by which the desired results are achieved. Looking inside a value-stream reveals that there may be a few “integrated components” (or business cases – business transactions between a consumer and a provider). Usually, one of the “integrated components” is the main transaction which does the job, and the others are collections of supporting/housekeeping activities. For example, “Order-to-Cash” includes “Fulfil order” (main), “Change order” (for cancellation and modification of an order by the customer), and “Review order” (for the consultation of an order by the customer).

Each “integrated component” is an ordered sequence of acts that apply functions to assets. Such a sequence is the explicit flow of assets (called inputs and outputs, or I/O). KPIs and timelines associated with the sequence provide additional execution details (e.g. the duration of the process from one point of I/O hand-off to another). Thus the value-stream view provides the context (without the organisational and application constraints) for its constituent activities, e.g. what timing, level of performance, etc. are necessary to reach the objective of the complete value-stream.


An enterprise consists of a collection of value-streams. Most large enterprises can be broken down into a dozen or more value-streams. The nomenclature of value-streams differs somewhat from one enterprise to another. Within an enterprise, its value-streams are interdependent; a value-stream may rely on the results of other value-streams. An example of this interdependency is the value-chain of an enterprise, i.e. a network of strategically relevant integrated components of value-streams of the enterprise.

Two further posts will cover "Linking WHY, WHAT and HOW" and "Managing the complexity of VEB".

Continue at Explaining EA: business architecture basics 2

Explaining EA: business architecture basics 2


Note: a revised version of these three posts is available at http://www.improving-bpm-systems.com/pubs/Explaining-EA-BA-basics_v7.pdf

Continued from Explaining EA: business architecture basics 1

4 Linking WHY, WHAT and HOW


So, an enterprise’s value-chain and value-streams are the high-level decomposition of the work of the (whole) enterprise into the work of many different activities. In such a decomposition, WHY + WHAT of the whole enterprise should be used to define WHY + WHAT of each activity. The glue between them is HOW. Let’s look at a fictitious scenario.

Stakeholders:
OK, your business model looks good. Now tell us about the operating model.

Future CEO:
Our business model is the WHY for our operating model. The latter starts by showing the relationships between the enterprise and its partners (suppliers, providers, customers, etc.) from the economic ecosystem. Within the enterprise we have identified 4 aggregations of value-streams: customer-centric (green), strategic-visioning (blue), people-caring (yellow) and business enabling (red), as well as the relationships between them.

The enterprise and related external partners
The enterprise as a set of aggregations of selected value-streams





Example from www.enterprisebusinessarchitecture.com.


We know all our value-streams and their integrated components. Each value-stream is connected to a particular objective. Also, we know our value-chain.


Value-streams
Value-chain


Example from www.enterprisebusinessarchitecture.com.


So, for each value-stream (FUNC1), we know its input WHAT0, its output WHAT1 as well as its operating requirements WHY0.

Stakeholders:
Sounds great. And, can you assure us that FUNC1 is capable of operating as required?

Architect:
The desired performance of FUNC1 is guaranteed by its implementation (HOW1) as the explicit coordination of “smaller” functions. In some way, WHAT1 is decomposed into a set of WHAT2x. WHY0 is decomposed into a set of WHY1x, and FUNC1 is decomposed into a set of FUNC2x. They are all coordinated together. In the illustration below, the coordination is trivial, but in real cases it may be rather complex (e.g. an interaction of activities carried out by several interdependent functional roles).


Stakeholders:
Please continue until all FUNC# become “manageable” activities so that they can be bought, rented, outsourced and easily implemented.

Architect:

This will involve the explicit decomposition of each value-stream to reveal the horizontal (peers) and vertical (subordinated) structure.

... Some time later ...

Architect:
As a result of this decomposition, a directed graph can be obtained (see the figure below). This directed graph is represented as a river basin; it could also be represented as an iceberg in which the value-stream is the tip of the iceberg.


In this graph, nodes (i.e. activities) are connected by edges to show the dependencies between results (i.e. the result of activity C depends on the results of activities I, K, L and B). This means that the result of a particular activity contributes to the result of another activity (which is probably more valuable and thus more expensive). The timing of result generation may be different: some results can be produced in advance and stored for later, some results can be produced on demand and some results can be acquired just before they are needed.
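Such a dependency graph can be sketched in a few lines; the dependency of C on I, K, L and B is from the text, while the other edges and the expense figures are invented for illustration:

```python
# A minimal sketch of a "value & expenses basin" as a directed graph.
# Only the dependency C -> {I, K, L, B} comes from the text; the edge
# B -> A and all expense figures are hypothetical.

dependencies = {          # activity -> activities whose results it needs
    "C": ["I", "K", "L", "B"],
    "B": ["A"],
    "I": [], "K": [], "L": [], "A": [],
}
expense = {"C": 5, "I": 2, "K": 1, "L": 3, "B": 2, "A": 4}

def total_expense(activity, seen=None):
    """Accumulate the expenses of an activity and of everything upstream
    of it in the basin; a shared activity is counted only once."""
    seen = set() if seen is None else seen
    if activity in seen:
        return 0
    seen.add(activity)
    return expense[activity] + sum(total_expense(d, seen)
                                   for d in dependencies[activity])

print(total_expense("C"))  # 5 + 2 + 1 + 3 + 2 + 4 = 17
```

The same traversal can aggregate value contributions instead of expenses, which is exactly the "green arrows vs red arrows" comparison discussed below.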

The primary importance of such a graph (called a “value & expenses basin” or “VEB”) is to represent business performance – the business wants to delight the customers (by giving them what they want to pay for) and the shareholders (by creating a profit). As shown in the figure below, different activities contribute differently to the generation of the value (green arrows) and the associated expenses (red arrows). The width of the arrows signifies the relative amount of value or expense.


The VEB should help in the management of an enterprise. It represents a dynamic, actual and contextual contribution of different activities to the value and expenses associated with a particular result. The business can be attentive to different “tributaries” which are

a) the most value-adding,
b) the most wasteful,
c) doing worse than defined by WHY, and
d) doing better than defined by WHY.

Depending on the business needs, such a representation can display a particular instance of value creation or a set of instances (usually over a given period of time).

So, how is a VEB constructed?

A VEB is not a flow of control, an event processing network (EPN) or a PERT diagram. It can be considered as a flow of assets (or a data flow diagram), but this will be just an externally-visible representation of internal mechanisms. Such a representation is good enough for the reactive analysis of behaviour, but is not sufficient for active control and pro-active (predictive) analytics. It is necessary to have a dynamic model which can be used for execution (e.g. simulation) and from which the VEB can be generated.

The set of “internal mechanisms” (as mentioned above) is a superposition of different coordination techniques (token-based, rule-based, event-based, data-based, etc.) as illustrated in the following.
  1. An activity from one value-stream (or business process) can obtain some assets (business objects) which belong to another value-stream (or business process). This is pull-like communication, e.g. the “Order-to-Cash” value-stream should know the customer’s address which is maintained by the “Prospect-to-Customer” value-stream.
  2. An activity from one value-stream (or business process) can send some assets to another value-stream (or business process). The latter interprets the appearance of the assets as an event to be treated. This is push-like communication. Usually, there are three ways in which this treatment can occur:
    • a new instance should be started (e.g. for the manufacture of something) – initiating event;
    • an existing instance, which is waiting for this event, consumes the event and continues its work (e.g. the confirmation of a payment) – solicited event;
    • an existing instance, which does not expect this event, has to react to it – unsolicited event.
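The three treatments above can be sketched as a small dispatcher; the instance bookkeeping below is an assumption made for the example:

```python
# A minimal sketch of the three event treatments described above:
# initiating, solicited and unsolicited events. The bookkeeping of
# instances and expected events is invented for illustration.

instances = {}  # instance_id -> set of event names the instance waits for

def treat(event, instance_id=None):
    """Route an incoming event following the three cases from the text."""
    if instance_id is None or instance_id not in instances:
        # initiating event: a new instance is started for it
        instances[f"new-{event}"] = set()
        return "initiating: new instance started"
    if event in instances[instance_id]:
        # solicited event: a waiting instance consumes it and continues
        instances[instance_id].discard(event)
        return "solicited: instance continues"
    # unsolicited event: the instance did not expect it but must react
    return "unsolicited: instance must react"

instances["inst-1"] = {"payment-confirmed"}
print(treat("manufacture-order"))             # initiating
print(treat("payment-confirmed", "inst-1"))   # solicited
print(treat("order-cancelled", "inst-1"))     # unsolicited
```

A real BPM engine performs this routing via event correlation; the sketch only makes the three-way distinction explicit.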

In reality, the situation is rather complicated. An enterprise may have several value-streams running in parallel. Some activities can be shared between different value-streams and some value-streams may compete for limited resources. Some activities may be outsourced or insourced, etc. All of these complexities need to be taken into account.

Furthermore, in addition to the activities, there are several other artefacts (see chapter 6) which should be defined explicitly in the model.

Continue at Explaining EA: business architecture basics 3

Explaining EA: business architecture basics 3


Note: a revised version of these three posts is available at http://www.improving-bpm-systems.com/pubs/Explaining-EA-BA-basics_v7.pdf

Continued from Explaining EA: business architecture basics 2

5 Managing the complexity of VEB


The interactions between activities reveal the different relationships between them. In order to manage the complexity, the primary interest of any architecture is to bring structure to those activities and their relationships. There are several techniques (services, capabilities, and processes) which are discussed below.

Activities which are used by a number of other activities (i.e. commonly-used functions which are the result of specialisation) are wrapped as services (which function as some kind of independent building blocks). A service is a consumer-facing formal representation of a self-contained provider’s repeatable set of activities which creates a result for the consumer. (It is considered that there are internal [even within an enterprise] providers and consumers.) It is important that the internal functioning of a service is hidden from its consumers, so that some parts of the enterprise can be changed independently. For example, a “proper” service can be relatively easily outsourced. Services are expressed in terms of expected products, characteristics and delivery options (cost, quality, speed, capacity, geographic location, etc.) – this is the Service Level Agreement (SLA).

Complex services are created by coordinating simpler services and/or activities (in the same way that an orchestra is a coordination of individuals and their actions). In this sense, an enterprise is a mega-service composed of a network of nano-services. Each service is associated with an owner who is responsible for delivering the promised results in all instances in which that service has been requested. That owner has
  1. to know/estimate the demand-side needs (the service may have many different consumers who will be using it with different frequencies), and
  2. to design/organise/create in advance the supply-side capabilities to ensure those needs are satisfied.

A capability is the proven possession of the characteristics required to perform a particular service (to produce a particular result, which may include the required performance). A capability needs to “understand” the mechanics of delivering that service. The mechanics include the resources, skills, policies, powers/authorities, systems, information, other services, etc., as well as the coordination of work within the service.

So, how can one ensure that a service has the required characteristics? There are three options:
  1. by contract (“re-active” approach) – acquire a service with the required characteristics, use it, check that its performance is acceptable and replace it if something is wrong with it;
  2. by measurement (“active” approach) – implement a service, use it, measure it, improve or re-build it, etc.;
  3. by design (“pro-active” approach) – build a service model, run a simulation test, improve the model, build the service, use it, measure it, improve it, etc.
The first option works with some support services, the second option can work satisfactorily with lead services and the third option should be used for core business services. The core business services can’t be outsourced, can’t be bought and must not be “damaged” (otherwise the enterprise may no longer function).

One of the models of the mechanics of delivering a service is a business process – an explicitly-defined coordination of services and/or activities to produce a particular result. The explicit coordination brings several advantages.
  • It allows planning and simulation of the behaviour of a service to evaluate its performance. If that service uses other services, then the demand-side needs for those services can also be evaluated.
  • It can be made to be executable, thus guiding how work is done.
  • It allows checking that the actual behaviour of the service matches its intended behaviour, thus pro-actively detecting potential problematic situations.
  • It allows measuring, within a service, the dynamics of different characteristics, e.g. value, cost, risk, etc.

So, there is a structure of services in which some services are composed from others via explicit processes. The use of explicit processes allows the objective definition of the capabilities of composed services.




6 Typology of business architecture artefacts


6.1 Motivation artefacts (why to do what)

Vision and related “ends” chain – desired result, goals, objectives

Mission and related “means” chain – course of action, strategy, tactic/projects

6.2 Value and profit proposition artefacts (what to do)

Value, value-streams, value-chain, value creation, value system, TOM?

Products or assets (tangible and intangible)

6.3 Organisation artefacts (who is doing)

Organisation structure

Governance structure

Supplier, providers, customers, and other partners

6.4 Execution artefacts (how to do what)

Process, Services, Functions

6.5 Knowledge/information artefacts (with what resources)

Terms, facts, rules, policies, etc.

6.6 Performance artefacts (how well to do what)

Capabilities, KPIs

2011-02-11

Illustration to ebizq.net "How big is a process?"

An illustration to http://www.ebizq.net/blogs/ebizq_forum/2011/02/how-big-is-a-process.php

From my book "Improving enterprise business process management systems":

We recommend introducing control-oriented coordination using a step-by-step approach via the “eclipse” pattern (see figure 5.6). At first, we “cover” only a tiny area of the whole process. Usually we start with intra-application coordination, because this part of IT is considered boring and not very rewarding. The first fragment of explicit coordination may be quite primitive; it is a duplication of some existing functionality, which is simply eclipsed by this process. Then we introduce more and more fragments. With time, we cover bigger and bigger areas with explicit coordination of existing fragments.


Figure 5.6 Use of the “eclipse” pattern for making coordination explicit

Thanks,
AS

2011-02-10

Practical Process Patterns: FRAP


Functional roles are pools (FRAP)

A BPMN pool is normally associated with a participant. Often such a participant is associated with an organisational role, e.g. CFO. Obviously, an organisational role may include more than one functional role. As a result, within the same business process an organisational role may participate with different functional roles to carry out different activities. This looks like a typical use of swimlanes, but the question is: are those activities part of the same process instance?

Consider the following process:
  • periodically (e.g. monthly), a manager orders several service-engineers to visit several clients to carry out some work
  • a service-engineer contacts the assigned client, plans a visit and reports the visit details back to the manager
  • the service-engineer pays a visit to the client
  • after the visit, the service-engineer submits a report to the manager about the work done at the client's site

How many pools and instances?
  1. Manager as a work planner – 1 instance (as quick as possible)
  2. Manager as a report validator – N instances (usual duration is a few days) 
  3. Service-engineer (actually, per visit) – N instances (usual duration is a few weeks)
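The bookkeeping above can be sketched in a few lines: each functional role gets its own pool, and the number of instances per pool depends on the work, not on the organisational role. The pool names and the function are illustrative.

```python
# Sketch of the FRAP pattern: one pool per functional role. The same
# organisational role ("manager") appears as two distinct functional-role
# pools with different instance counts. Names are illustrative.

def frap_instances(visits: list) -> dict:
    """Number of process instances per functional-role pool
    for one planning round covering the given client visits."""
    return {
        "manager-as-work-planner": 1,                # one planning instance per round
        "manager-as-report-validator": len(visits),  # one instance per submitted report
        "service-engineer-per-visit": len(visits),   # one instance per assigned visit
    }
```

For a round with three client visits, the manager's planner pool has one instance while the manager's validator pool has three, which is exactly why a single "manager" pool would conflate different process instances.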


So, pools should be associated with functional roles.

Thanks,
AS

2011-01-26

From EPN to BPMN

The book "Event Processing in Action" contains the "Fast Flowers Delivery" use case. Below I have tried to reproduce this use case in BPMN to see the internal behaviour of each participant.


I think I will have to switch to BPMN 2.0 to handle exceptions better.

Thanks,
AS


2011-01-23

Contribution to: ACM: Feature or Paradigm

This is a contribution to a very interesting discussion "ACM: Feature or Paradigm" at http://social-biz.org/2011/01/22/acm-feature-or-paradigm/ and http://mainthing.ru/item/401/

Some of Keith’s arguments do not correspond to my experience with collaborative and process-based applications. Attention, please – those applications were designed for clients (including international ones) based in Switzerland; similar applications for US-based clients may need to be different.

First, as usual, it is necessary to emphasise that BPM is a process-oriented management methodology, while BPMS and ACM are technologies. So, it is not correct to compare BPM vs. ACM. My point of view about their relationships was expressed in http://improving-bpm-systems.blogspot.com/2010/12/illustrations-for-bpm-acm-case.html

<quote>BPM needs process architecture, ACM has no such need </quote>

The work of a social worker is based on existing rules, procedures and laws. Some of them are expressed as processes. So, the process architecture is necessary; it must exist even if it is not visible (similar to the 90% of an iceberg that is under water), and preferably it should be explicit.

For example, an application for automating an “Office de faillite” (a governmental body that implements bankruptcies) is a mixture of ACM features and a classic BPMS, because the bankruptcy process template is defined in the law with many slight variations. Although each bankruptcy case (process instance) is different, they all use the same process architecture, which is the proof that each case follows the law.

<quote>In BPM the person who designs the process needs to be a data architect, but in ACM these are different roles.  The person who designes the “process” does not need to be a data architect. </quote>

Although many BPMS vendors provide data modelling capabilities, a BPMS-based implementation of a process-managed application does not always force the process architect to be a data architect. Some process-oriented applications just move existing data from one place to another, or collect process metrics.

<quote>BPM needs strong capabilities for integration, but in ACM there is little or no need for field-level integration. ACM can work well with documents, reports, and links to other application user interface.</quote>

At the beginning, the users of collaborative applications are very happy with just access to documents, reports and links. Then those users ask for more case-related information, which is usually “mastered” in central resources. For example, a Word document should contain several attributes extracted from SAP.

The “Office de faillite” application mentioned above is integrated with a corporate finance system, a corporate electronic publishing system, a corporate document management system, a country-wide postal-address system, etc.

In conclusion: considering that “knowledge workers” and “workers who do repeatable work” work TOGETHER, the capabilities of both ACM and BPMS should work together. As a first step towards this synergy, it is necessary to provide commonly-agreed reference models and reference architectures (independent from the tools).

Thanks,
AS

2011-01-20

Explicit event processing agents in BPMN?

Sometimes we need to process, in one instance, a group of events collected from different instances. For example, incoming orders are collected and then treated all together each hour. I call this pattern CPP:


Anatoly Belychook uses the “inter-process communication via data” pattern (see http://mainthing.ru/item/332/) – something like this:


One of the building blocks of an Event Processing Network (EPN), presented in “Event Processing in Action” (see http://epthinking.blogspot.com/), is the event processing agent. It can, in particular, aggregate many events from a stream. The use of such an agent (between pools, of course) looks like this:
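The aggregation such an agent performs can be sketched as a small windowed buffer: events from many producer instances are collected, and once the window (e.g. one hour) closes, the whole batch is emitted as a single message to the consuming instance. The class and its API are illustrative, not taken from the book.

```python
from datetime import datetime, timedelta

class AggregatingAgent:
    """Sketch of an event processing agent that collects events from many
    producer instances and emits them as one batch per time window
    (e.g. all orders received during the last hour). Illustrative API."""

    def __init__(self, window: timedelta):
        self.window = window
        self.buffer = []
        self.window_start = None

    def receive(self, event, now: datetime):
        """Buffer an incoming event; return a batch when the window closes."""
        if self.window_start is None:
            self.window_start = now
        if now - self.window_start >= self.window:
            batch, self.buffer = self.buffer, [event]  # start a new window
            self.window_start = now
            return batch          # one message carrying the whole batch
        self.buffer.append(event)
        return None               # still collecting
```

In BPMN terms, the `receive` calls correspond to message flows from the producer pools into the agent's pool, and the returned batch corresponds to the single message flow from the agent to the hourly-treatment pool.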


I find this rather explicit. Maybe a future version of BPMN should consider including some building blocks of EPN?

Thanks,
AS

2011-01-17

Relationship between EA, PMO, an SDLC methodology and ITIL

This post continues the post “Relationships between EA and PMO”.

For the moment, I don’t discuss the “local” SDLC methodology. It is assumed that it translates (as a project) a request for a business solution into a set of interdependent services. Some of those services are new; some are new versions of existing services. The main steps of such a translation are:
  • Architect the solution as a set of services (BPM, SOA, etc. are used for quick prototyping to understand the WHY and WHAT for each service, as well as the effect on the whole enterprise environment)
  • Design each service (supply the HOW for that service – buy, build, rent, outsource)
  • Deploy each service (of course, provide the ruthless monitoring of each service before deployment)
So, it is necessary to guarantee that newly created services, or new versions of services, will be good ITIL citizens. For this reason, many ITIL processes have to be “invoked” during projects, as shown in the figure below.


Thanks,
AS