2017-09-28

Beauty of #blockchain - game of intermediaries

Any blockchain-based solution is a virtual (invisible but real) intermediary (with its data, computing resources, workers, miners and decision makers) between the people using this solution (the users). In effect, the users agree to trust the goodwill of networked miners (not the technology, as claimed by many blockchain enthusiasts) in order to work between themselves (e.g. carry out some transactions) without trusting each other. Some blockchain-based financial solutions have already formed real infrastructures for their world-wide operations, but they still lack the legal endorsement of financial and governmental authorities.



Obviously, blockchain or no blockchain, following financial and governmental regulations and the usual good business practices is even more important in the digital era than before, because the damage can be huge and done quickly (see http://taxilife.ru/nationalnews/7124/ and http://www.independent.co.uk/voices/uber-tfl-london-taxi-black-cabs-regulation-a7964066.html ).

It seems that blockchain technology and cryptocurrencies are a way of replacing the existing intermediaries with new ones. The majority of blockchain enthusiasts claim that their technology removes intermediaries, although this is not true.

However, it is possible to do such replacements differently.

In any such replacement, it is mandatory to:
  • avoid the sudden creation of a powerful intermediary like Uber, Airbnb, Facebook, Amazon, Alibaba, etc.;
  • understand what services are provided by the new intermediaries;
  • understand what the contractual agreements (including SLAs) for these services are;
  • impose transparency on the new intermediaries, and
  • exercise the necessary control via explicit ownership (in various forms) or external testability.
See also https://www.linkedin.com/feed/update/activity:6320376284153282561/ 

Thanks,
AS

See the whole collection of blogposts about blockchain - http://improving-bpm-systems.blogspot.ch/search/label/%23blockchain

2017-09-15

Relationships between AS-IS, TO-BE and transition architectures

Just an illustration



BTW, the idea was "stolen" from the agile development methodology.

Thanks,
AS

2017-08-31

Beauty of #blockchain – separating the wheat from the tares

As we know, the blockchain technology is actually a multi-user, centralised (logically) and distributed (physically) archive with excellent availability and integrity characteristics. Such an archive collects various records and packs them into chained and (practically) immutable blocks.

Why is it centralised? Because of the single uniform code base and the consensus process, i.e. a combination of “administrative” means and technology.
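
Just to illustrate “chained and (practically) immutable” blocks, here is a minimal sketch in Python (this is NOT the real bitcoin block format; the record structure and function names are assumptions for illustration only): tampering with an already archived record breaks every later link in the chain.

import hashlib
import json
from dataclasses import dataclass
from typing import List


@dataclass
class Block:
    records: List[dict]   # the records packed into this block
    prev_hash: str        # hash of the previous block: this is the "chain"

    def hash(self) -> str:
        payload = json.dumps({"records": self.records, "prev": self.prev_hash},
                             sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


def append_block(chain: List[Block], records: List[dict]) -> None:
    # Each new block is linked to the hash of the previous block.
    prev_hash = chain[-1].hash() if chain else "0" * 64
    chain.append(Block(records=records, prev_hash=prev_hash))


def is_intact(chain: List[Block]) -> bool:
    # Recompute every link: a change in any old record breaks all later links.
    return all(chain[i].prev_hash == chain[i - 1].hash() for i in range(1, len(chain)))


chain: List[Block] = []
append_block(chain, [{"from": "A", "to": "B", "amount": 1}])
append_block(chain, [{"from": "B", "to": "C", "amount": 1}])
assert is_intact(chain)
chain[0].records[0]["amount"] = 100   # tamper with an already archived record
assert not is_intact(chain)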

Numerous applications (e.g. bitcoin) use the blockchain technology to resolve the problem of “double spending”: if a record (a transaction, in this case) spends the same “piece” of cryptocurrency more than once, then that doubly-spent “piece” of cryptocurrency will be detected as “untrusted” and, finally, such a record (i.e. transaction) will be rejected by the blockchain-as-an-archive of records (or ledger).
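
A minimal sketch of such a double-spending check (the ledger and the record structure are illustrative assumptions; real bitcoin tracks unspent transaction outputs, which is more elaborate):

class Ledger:
    def __init__(self) -> None:
        self.spent = set()            # identifiers of already-spent "pieces"

    def accept(self, transaction: dict) -> bool:
        pieces = set(transaction["spends"])
        if pieces & self.spent:       # some "piece" was already spent: double spending
            return False
        self.spent |= pieces          # mark the "pieces" as spent
        return True


ledger = Ledger()
assert ledger.accept({"spends": ["coin-1"], "to": "Bob"})
assert not ledger.accept({"spends": ["coin-1"], "to": "Carol"})   # rejected as double spending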

Thus, in such applications, blockchain plays two roles (see also http://improving-bpm-systems.blogspot.ch/2016/06/disassembling-blockchain-concept.html ):
  1. validating that a used “piece” of cryptocurrency is “trusted” (logical integrity, i.e. the counterparty risk is acceptable), and 
  2. guaranteeing that a record is safely stored (physical integrity).
Because these roles are not explicitly separated, the average time to store a transaction in the bitcoin application is about 10 minutes: each transaction must be packed into a block, and the “block time” is 10 minutes. Moreover, the “wait time” for a proof-of-work (POW) blockchain is approximately 60 minutes (about 6 confirmations) to reduce to a minimum the risk that a transaction is rejected. Obviously, this is not practical at the point of sale to buy a cup of coffee.

Actually, at the point-of-sale, a buyer and a seller need only the logical integrity, i.e. validating that a “piece” of cryptocurrency to be used in the transaction is “trusted”. The physical integrity is an “internal business” of the blockchain-as-an-archive.

So, the blockchain-as-an-archive has to have a validating function that confirms that a particular “piece” of cryptocurrency is “trusted” by checking three conditions:
  1. it is based on existing transactions which are stored in the blockchain-as-an-archive, 
  2. those transactions used “trusted” “pieces” of cryptocurrency, and 
  3. those transactions are “old” enough (e.g. they were included in the blockchain-as-an-archive at least 60 minutes ago). 
There can be several simple algorithms for implementing such a validating function, e.g. asking a random collection of miners to vote.

Of course, not all “pieces” of cryptocurrency are always “trusted”. Their normal life cycle is “under validation” and then “trusted” or “untrusted”. This means that an owner of some “pieces” of cryptocurrency will have to use only “trusted” “pieces” of cryptocurrency in his/her transactions.

Both sides of any transaction (the seller and the buyer) may independently check the level of trust of the involved “pieces” of cryptocurrency. For example, the seller may define his/her own level of “trust”, thus rejecting some “untrusted” (from his/her point of view) “pieces” of cryptocurrency. This is very similar to the good old practice of a cashier (or booking-clerk) checking certain banknotes.

Also, the blockchain-as-an-archive has to have an adding function that sends a transaction to the blockchain-as-an-archive.
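
A minimal sketch of these two functions, assuming a hypothetical blockchain-as-an-archive with the three validating conditions above and a seller-configurable minimum age (the names, the record structure and the 60-minute default are illustrative assumptions, not an existing API):

import time
from enum import Enum
from typing import Dict


class Trust(Enum):
    UNDER_VALIDATION = "under validation"
    TRUSTED = "trusted"
    UNTRUSTED = "untrusted"


class BlockchainArchive:
    def __init__(self) -> None:
        # piece id -> {"stored_at": time its creating transaction was archived,
        #              "spent": the pieces that transaction itself spent}
        self.archive: Dict[str, dict] = {}

    def add(self, transaction: dict) -> None:
        # Adding function: send a record to the archive (physical integrity).
        now = time.time()
        for piece in transaction["creates"]:
            self.archive[piece] = {"stored_at": now, "spent": list(transaction["spends"])}

    def validate(self, piece: str, min_age_s: float = 60 * 60) -> Trust:
        # Validating function: logical integrity only.
        entry = self.archive.get(piece)
        if entry is None:                                  # condition 1 fails: not based on stored transactions
            return Trust.UNTRUSTED
        if time.time() - entry["stored_at"] < min_age_s:   # condition 3: not "old" enough yet
            return Trust.UNDER_VALIDATION
        for spent in entry["spent"]:                       # condition 2: the spent pieces must themselves be "trusted"
            if self.validate(spent, min_age_s) is not Trust.TRUSTED:
                return Trust.UNTRUSTED
        return Trust.TRUSTED

In this sketch, a seller who wants a stricter level of “trust” simply calls validate() with a larger min_age_s, which corresponds to the cashier checking certain banknotes more carefully.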



Thus, by separating the logical integrity from the physical integrity, it would be possible to improve the performance of some applications which are based on the blockchain-as-an-archive.


Thanks,
AS

And thanks to Charles Moore for reviewing this blogpost.

2017-08-23

Towards Software-Defined Organisations

My presentation for BrightTalk


And some feedback from the organisers.

You had 362 pre-registered users and 122 views so far. 36 people downloaded your slides and you got 4.2/5 rating for your webinar.

All users said it was a very useful presentation, but I would like to highlight one in particular about acronyms:
"I missed the 1st minute or so of the presentation. There may have been a table of acronyms presented then because I didn't see any explanation later during the presentation. It would be very helpful if acronym definitions were provided. otherwise it was a very broad presentation encompassing a host of smaller topics worthy of discussion in themselves. It was well done and the presenter was fully competent concerning the subject matter. Thank you"

Thanks,
AS

2017-08-14

Beauty of #microservices - from #DevOps to #BizDevOps via #microservices first

As we all know, the usage of the MicroService Architecture (MSA) requires very comprehensive operational practices and infrastructure. A microservice is a unit-of-functionality (or “class” in informal IT terminology) within its own unit-of-deployment (or “component” in informal IT terminology) acting as a unit-of-execution (or “computing process” in informal IT terminology). Some applications may comprise a few hundred microservices. This is certainly a serious barrier to exploiting MSA benefits such as being easy to update and easy to scale to absorb heavy workloads.

Fortunately, as we know, various performance characteristics (e.g. easy to update, easy to scale) are not spread uniformly within applications. For example, 95% of CPU consumption is located in 5% of program code. Thus, it is not necessary to implement the whole application via microservices.

Let us ask a simple question: if a microservice is, actually, a service, then can we use microservices and services together? Yes, and some functionality from platforms or monoliths may be used (via APIs) as well.

Now, let us reformulate the problem. Let us consider that any application is built from many units-of-functionality which must be deployed and then executed. What is the optimal arrangement of units-of-functionality into units-of-deployment and then units-of-execution? In other words,
  • which units-of-functionality have to be implemented as microservices (microservices are agile and easy to update, but have some execution and management overhead);
  • which units-of-functionality have to be implemented as monoliths (monoliths are not agile and not easy to update, but have no execution and management overhead);
  • which units-of-functionality have to be implemented as services (classic services are something in between microservices and monoliths)?
Thus, a few recommendations may be formulated (a simple decision sketch follows the list).
  • Units-of-functionality which are “often” updated must be implemented as microservices (so BizDevOps will be happy).
  • Units-of-functionality which need to absorb heavy workloads must be implemented as microservices (so DevOps will be happy).
  • Units-of-functionality which are “rarely” updated may be packed into a few units-of-deployment (different “packing” criteria may be used), each unit-of-deployment having its own computing process (so DevOps will be happy). Another option is dynamic loading of those units-of-functionality.
  • Units-of-functionality which are “never” updated may be packed as a monolith or platform, i.e. one unit-of-deployment and one unit-of-execution (so DevOps will be extremely happy).
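
As mentioned above, these recommendations can be expressed as a simple decision rule; the sketch below is illustrative only (the thresholds for “often” updated and heavy workloads are assumptions, not figures from this blogpost).

from dataclasses import dataclass


@dataclass
class UnitOfFunctionality:
    name: str
    updates_per_year: int      # how often BizDevOps changes it
    peak_rps: int              # peak workload (requests per second) it must absorb


def deployment_style(u: UnitOfFunctionality) -> str:
    if u.updates_per_year >= 12 or u.peak_rps >= 1000:
        return "microservice"          # easy to update and/or easy to scale
    if u.updates_per_year >= 1:
        return "service"               # "rarely" updated: pack a few together
    return "monolith/platform"         # "never" updated: one unit-of-deployment


for unit in [UnitOfFunctionality("pricing-rules", 24, 50),
             UnitOfFunctionality("payment-gateway", 2, 5000),
             UnitOfFunctionality("pdf-rendering", 1, 20),
             UnitOfFunctionality("legacy-ledger", 0, 10)]:
    print(unit.name, "->", deployment_style(unit))
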
Applying these recommendations to the phases of the whole application life cycle (conception, development, deployment, production, support, retirement and destruction), the following guidance may be formulated:
  • At the beginning of the application life cycle (conception, i.e. prototyping, and initial development), the majority of the units-of-functionality must be implemented as microservices, because the easy-to-update characteristic is very important (especially for the business people) and, fortunately, performance is not yet an issue. 
  • Closer to the end of the development phase, it becomes clear which units-of-functionality have to be changed more often than others; the others may then be considered as services and even monoliths or platforms.
  • Also, the load tests (during the development and deployment phases) must show which units-of-functionality will need to absorb heavy workloads and thus be implemented as microservices.
  • Other criteria, such as risk, security, etc., may also be considered. 

Obviously, “moving” a unit-of-functionality from a microservice-like implementation to a service-like implementation and then to a platform-like implementation is much easier than “moving” a unit-of-functionality from a monolith-like implementation to a service-like implementation and then to a microservice-like implementation.

This confirms the primacy of the “microservices first” approach. This approach, actually, provides support for BizDevOps practices (see http://improving-bpm-systems.blogspot.ch/2017/05/beauty-of-microservices-ebanliing.html ). Additionally, this approach enables interesting transformations, such as the automatic reconfiguration of applications to absorb heavy workloads by temporarily moving some units-of-functionality from a service-like implementation to a microservice-like implementation.

Remember Prof. Knuth's warning: "Premature optimisation is the root of all evil".

Thanks,
AS

The collection of posts about microservices - http://improving-bpm-systems.blogspot.ch/search/label/%23microservice 

2017-07-27

Better Architecting With – systems approach

All blogposts on this topic are at the URL http://improving-bpm-systems.blogspot.ch/search/label/%23BAW 


1 The systems approach basics


The systems approach is a holistic approach to understanding a system and its elements in the context of their behaviour and their relationships to one another and to their environment. Use of the systems approach makes explicit the structure of a system and the rules governing the behaviour of the system.

The systems approach is based on the consideration that functional and structural engineering, system-wide interfaces and compositional system properties become more and more important due to the increasing complexity, convergence and interrelationship of technologies.

The goal of the systems approach is to walk people and organisations working on complex systems through various stages and steps of analysis and synthesis in order to build a comprehensive understanding of the system-of-interest and, ultimately, be able to architect and engineer that system at any desired level of detail.

The systems approach helps to produce the following digital work products:
  • artefacts (entities made by creative human work) which are used to implement the system-of-interest;
  • system-of-interest terminology to explain various concepts of the system-of-interest and relationships between them;
  • nomenclatures (or classifications) of artefacts of the same type;
  • models to formally codify some relationships between some artefacts;
  • views (collections of models) to address some concerns of some stakeholders, and
  • architecture descriptions which consist of several views.

To facilitate the production of those digital work products, the systems approach provides:
  • systems approach terminology to explain various concepts of the systems approach and relationships between them;
  • several templates to define various artefacts;
  • several nomenclatures with artefacts related to the systems approach;
  • several model kinds which formally define views;
  • several architecture viewpoints, i.e. conventions which can include languages, notations, model kinds, design rules and/or modelling methods, analysis techniques and other operations on architecture views (architecture views are system-of-interest-dependent whereas architecture viewpoints are system-of-interest-independent), and
  • several patterns with techniques for transforming (not necessarily fully automatically) some model kinds into other model kinds.

Many viewpoints and views are possible.

   


Different stakeholders see the same system differently and recognise different artefacts. 


2 Four levels of architecting


If the system-of-interest is rather complex, then it is recommended to use the following four levels of architecting:
  1. reference model is an abstract framework for understanding concepts and relationships between them in a particular problem space (actually, this is terminology)
  2. reference architecture is a template for solution architectures which realize a predefined set of requirements
    Note: A reference architecture uses its subject field reference model (as the next higher level of abstraction) and provides a common (architectural) vision, a modularization and the logic behind the architectural decisions taken 
  3. solution architecture is an architecture of the system-of-interest
    Note: A solution architecture (also known as a blueprint) can be a tailored version of a particular reference architecture (which is the next higher level of abstraction)
  4. implementation is a realisation of a system-of-interest

The dependencies between these four levels are shown in the illustration below.
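
In addition to the illustration, the same dependencies may be sketched as code (hypothetical class and field names; each level refers to the next higher level of abstraction):

from dataclasses import dataclass
from typing import List


@dataclass
class ReferenceModel:                    # level 1: terminology of the problem space
    concepts: List[str]


@dataclass
class ReferenceArchitecture:             # level 2: template for solution architectures
    reference_model: ReferenceModel      # uses the next higher level of abstraction
    architecture_principles: List[str]


@dataclass
class SolutionArchitecture:              # level 3: architecture of the system-of-interest
    reference_architecture: ReferenceArchitecture
    views: List[str]


@dataclass
class Implementation:                    # level 4: realisation of the system-of-interest
    solution_architecture: SolutionArchitecture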


The purpose of the reference architecture is the following:
  • Explain to any stakeholder how future implementations (which are based on the reference architecture) can address his/her requirements and change his/her personal, professional and social life for the better; for example, via an explicit link between stakeholders’ high-level requirements and the principles of the reference architecture.
  • Provide a common methodology for architecting the system-of-interest in the particular problem space, so that different people in similar situations find similar solutions or propose innovations.

In the case of a very complex system that is to be implemented in several projects, with the necessity to collaborate and coordinate between those projects, it is recommended to develop a reference solution architecture and, if required, a reference implementation (see the illustration below). This helps to identify smaller system elements (e.g. services, data, etc.) and the relationships between them (e.g. interfaces) so that they can be shared between projects.


The reference solution architecture and the reference implementation are often experimental prototypes which are not production quality.

3 An example of digital work products


The digital work products below are listed in an approximate order because some modifications of a digital work product may necessitate some modifications in some other digital work products. The patterns to transform some digital work products into some other digital work products are not mentioned below.

3.1 Value viewpoint

The value viewpoint comprises several digital work products which describe the problem space, and provides some ideas about the future solution and its expected value for the stakeholders. The digital work products of this viewpoint are:
  • problem space description;
  • system-of-interest terminology (as an initial version of the system-of-interest ontology);
  • business drivers;
  • problem space high-level requirements (or some kind of guiding principles);
  • dependencies between viewpoints, stakeholders and stakeholders’ roles;
  • dependencies between viewpoints, stakeholders, stakeholders’ roles, stakeholders’ concerns and categories of concerns;
  • beneficiaries, i.e. stakeholders who/which benefit from the system-of-interest;
  • beneficiaries’ high-level requirements;
  • scope of the future solution space;
  • mission statement and vision statement, and
  • goals (if the vision statement must be further detailed).

3.2 Big picture viewpoint

The big picture viewpoint comprises several digital work products which describe the future solution as a whole:
  • system-of-interest ontology as a reference model;
  • some classifications which are specific for this solution space;
  • illustrative model;
  • essential characteristics of the future solution;
  • dependency matrix: high-level requirements vs. essential characteristics;
  • architecture principles model kind, and
  • dependency matrix: essential characteristics vs. architecture principles.

3.3 Capability viewpoint

The capability viewpoint comprises several digital work products which describe the future solution as a set of capabilities:
  • level 1 capability map;
  • level 2 capability map;
  • level 3 capability map (if necessary), and
  • heat maps (if necessary).

3.4 TOM engineering viewpoint

The TOM engineering viewpoint comprises several digital work products which describe the future solution as sets of artefacts:
  • data model
  • process map
  • function map
  • service map
  • information flow map
  • document/content classification
  • etc.

3.5 Some other viewpoints

  • Organisational viewpoint
  • Operational viewpoint
  • Implementation viewpoint
  • Compliance framework
  • Regulations framework
  • Security, safety, privacy, reliability and resilience framework
  • Evolution viewpoint
  • etc.

4 Some definitions


1. reference model

abstract framework for understanding concepts and relationships between them in a particular problem space or subject field
  • Note 1 to entry: A reference model is independent of the technologies, protocols and products, and other concrete implementation details.
  • Note 2 to entry: A reference model uses a concept system for a particular problem space or subject field.
  • Note 3 to entry: A reference model is often used for the comparison of different approaches in a particular problem space or subject field.
  • Note 4 to entry: A reference model is usually a commonly agreed document, such as an International Standard or industry standard.

2. reference architecture
template for solution architectures which realize a predefined set of high-level requirements (or needs)
  • Note 1 to entry: A reference model is the next higher level of abstraction to the reference architecture.
  • Note 2 to entry: A reference architecture uses its subject field reference model and provides a common (architectural) vision, a modularization and the logic behind the architectural decisions taken. 
  • Note 3 to entry: There may be several reference architectures for a single reference model.
  • Note 4 to entry: A reference architecture is universally valid within a particular problem space (or subject field).
  • Note 5 to entry: An important driving factor for the creation of a reference architecture is to improve the effectiveness of creating products, product lines and product portfolios by
    • managing synergy,
    • providing guidance, e.g. architecture principles and good practices,
    • providing an architecture baseline and an architecture blueprint, and
    • capturing and sharing (architectural) patterns.

3. solution architecture
system architecture (or solution blueprint)
architecture of the system-of-interest
  • Note 1: A solution architecture can be a tailored version of a particular reference architecture which is the next higher level of abstraction.
  • Note 2: For experimentation and validation purposes, a reference solution architecture may be created. It helps in the creation of other solution architectures and implementations.

4. implementation
realisation of the system-of-interest in accordance with its solution architecture
  • Note 1: A reference implementation is a realisation of the system-of-interest in accordance with its reference solution architecture. It can be production quality or not.

Thanks,
AS

2017-06-20

Smart Cities from the systems point of view

Thanks,
AS