The Eclipse Arrowhead project consists of systems and services needed to design, implement and deploy Arrowhead-compliant Systems of Systems. The Arrowhead Framework is based on the concept of Service-Oriented Architecture and aims to enable all of its users to work in a common, unified approach, leading towards high levels of interoperability.
The Arrowhead Framework addresses IoT-based automation and digitalisation. The approach taken is to abstract the information exchange between Internet of Things elements into services, enabling interoperability between almost any IoT elements. The creation of automation is based on the idea of self-contained Local Clouds. Compared to the well-known concept of global clouds, an Arrowhead local cloud can provide improvements and guarantees regarding:
- Real time data handling
- Data and system security
- Automation system engineering
- Scalability of automation systems
The Arrowhead Framework provides support for building systems of systems (SoS) based on service-oriented architecture patterns. Each SoS consists of various Application systems, whether already existing or under development. These Application systems utilize the Core Systems developed as part of the Arrowhead project and their Services, which provide support in addressing fundamental issues related to governance, operational management and security, for example:
- How does a service provider system make its possible consumers aware of its available service instance(s)?
- How can a service consumer find (discover) what service instance(s) it might be interested in and allowed to consume?
- How do we remotely control (i.e. orchestrate) which service instances a consumer shall consume?
- How does a service provider determine what consumer(s) to accept?
The smallest unit of governance within the Arrowhead Framework is hence the Local Cloud (LC), which in general is a closed, local industrial network. Each Local Cloud must at least host the mandatory core systems within its network, which provide the minimal functionality needed to enable collaboration and information exchange between the various systems within the local cloud. The three mandatory systems for each Local Cloud are:
- Service Registry,
- Authorization,
- Orchestrator.
In addition to the mandatory core systems, a number of additional, supporting core systems and services are provided to enable the design, engineering, operation and maintenance of IoT-based automation system of systems. Such supporting core systems are:
- Gatekeeper and Gateway systems
- Event Handler system
- SystemRegistry system
- DeviceRegistry system
- Data Manager system
- Quality of Service (QoS) Manager and Monitor systems
- Translation system
- Plant Description system
- System Configuration system
- ...and many more.
Inter-cloud information exchange is supported by the Gatekeeper (control plane) and Gateway (data plane) systems together with Arrowhead Relays, whereas security issues are covered through various measures, including AAA functions (Authentication, Authorization, Accounting), certificate handling, and data encryption.
This section briefly describes the capabilities and functions of each core system.
Service Registry: Enables Service Discovery. Application systems that offer services can publish them, and service consumer application systems can register themselves in order to discover services. The system manages the removal of stale services, i.e. those whose "time-to-live" has expired. Its extensive management functionality enables cloud administrators to manage every detail of each system, service, interface, and their instances.
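As an illustration, a provider's registration request might be built as below. This is a sketch in the style of the Arrowhead 4.x Service Registry REST API; the exact field names (`serviceDefinition`, `providerSystem`, `endOfValidity`, and so on) are assumptions and should be checked against the deployed version.

```python
import json

def build_registration(service_definition, system_name, address, port, service_uri):
    """Build a service-registration payload in the style of the Arrowhead 4.x
    Service Registry API (field names are assumptions, not a verified schema)."""
    return {
        "serviceDefinition": service_definition,
        "providerSystem": {
            "systemName": system_name,
            "address": address,
            "port": port,
        },
        "serviceUri": service_uri,
        "interfaces": ["HTTP-SECURE-JSON"],
        # An end-of-validity timestamp lets the registry expire stale entries
        # (the "time-to-live" behaviour described above).
        "endOfValidity": "2030-01-01T00:00:00Z",
    }

payload = build_registration(
    "indoor-temperature", "temp-provider", "192.168.1.10", 8443, "/temperature")
print(json.dumps(payload, indent=2))
```

In a real deployment this payload would be POSTed to the Service Registry's registration endpoint over TLS.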
Authorization: Stores the authorization rules, whether intracloud or intercloud. An intracloud rule describes an access policy between a consumer system and a provider system in a local cloud, for a given service and interface pair. Without a rule in place, communication is impossible. An intercloud rule describes an access policy between a consumer system and a provider system in different local clouds. Security is based on X.509 client certificates and JSON Web Tokens.
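The shape of an intracloud rule can be sketched as follows. The identifiers and field names here are hypothetical; the point is that a rule binds one consumer to one or more providers for a specific (service, interface) pair, and no communication is permitted without it.

```python
def build_intracloud_rule(consumer_id, provider_ids, service_definition_id, interface_ids):
    # Hypothetical shape of an intracloud authorization rule: it ties one
    # consumer system to provider systems for a given service and interface pair.
    return {
        "consumerId": consumer_id,
        "providerIds": provider_ids,
        "serviceDefinitionIds": [service_definition_id],
        "interfaceIds": interface_ids,
    }

# Allow consumer system 4 to consume service definition 2 from provider 7
# over interface 1.
rule = build_intracloud_rule(consumer_id=4, provider_ids=[7],
                             service_definition_id=2, interface_ids=[1])
```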
Orchestrator: Enables Orchestration: consumer application systems can consume services via orchestration, provided the proper authorization rules are in place for the consumer to access the provider. The Orchestrator supports three types of orchestration. First, store orchestration allows pre-determined orchestration rules to be set up for specific consumer-provider pairs. Second, dynamic orchestration enables on-the-fly service matching based on the currently available provider systems. Lastly, flexible store orchestration enables orchestration rules based on partial data and metadata.
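A consumer's orchestration request might look like the sketch below. The flag and field names follow Arrowhead 4.x conventions but should be treated as assumptions; the key idea is that flags select between store-based and dynamic orchestration.

```python
def build_orchestration_request(system_name, address, port, service_definition,
                                dynamic=True):
    # Sketch of an orchestration request (field names are assumptions).
    return {
        "requesterSystem": {
            "systemName": system_name,
            "address": address,
            "port": port,
        },
        "requestedService": {
            "serviceDefinitionRequirement": service_definition,
        },
        # overrideStore=True asks for dynamic, on-the-fly matching instead of
        # pre-configured store rules.
        "orchestrationFlags": {"overrideStore": dynamic, "enableInterCloud": False},
    }

request = build_orchestration_request(
    "hvac-controller", "192.168.1.20", 8080, "indoor-temperature")
```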
Event Handler: The purpose of the Event Handler supporting core system is to provide an authorized publish-subscribe messaging system within the Arrowhead Framework. Application systems are allowed to publish and subscribe to events. When the necessary authorization rules are in place, the subscriber will receive the published event with all its data.
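The publish-subscribe relationship can be modelled minimally as below. The record shapes are hypothetical; what matters is that a subscription names an event type and a notification endpoint, and matching events are delivered there once authorization permits it.

```python
def build_subscription(subscriber_system, event_type, notify_uri):
    # Hypothetical subscription record: the Event Handler delivers matching
    # events to notify_uri, subject to authorization rules.
    return {
        "subscriberSystem": subscriber_system,
        "eventType": event_type,
        "notifyUri": notify_uri,
    }

def build_event(event_type, payload, source_system):
    # Hypothetical published event envelope.
    return {"eventType": event_type, "payload": payload, "source": source_system}

def matches(subscription, event):
    # Delivery happens when the event type matches the subscription.
    return subscription["eventType"] == event["eventType"]

sub = build_subscription("hvac-controller", "TEMPERATURE_CHANGED", "/notify")
evt = build_event("TEMPERATURE_CHANGED", {"value": 23.5}, "temp-provider")
```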
Gatekeeper: Provides inter-cloud servicing capabilities in the Arrowhead Framework through Global Service Discovery (GSD) and Inter-Cloud Negotiation (ICN). These Services are part of the inter-cloud orchestration process, but the Gatekeeper is only available to the other core systems. The Gatekeeper is the only core system able to discover other Clouds via Relay systems. Neighbor Clouds and Relay systems are stored in this core system's database.
During inter-cloud orchestration, Global Service Discovery is the first step: it collects the known clouds that have providers serving the specified service. After GSD, the Inter-Cloud Negotiation process steps in to establish the way of collaboration. Working together with the Orchestrators of both Clouds, a servicing instance can finally be created.
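The two-step flow can be sketched with a toy model of neighbor clouds. This is a simplification, not the Gatekeeper's actual API: GSD filters the known clouds down to those advertising the requested service, and ICN then produces a servicing-instance descriptor for one of them.

```python
def global_service_discovery(neighbor_clouds, service):
    # GSD step: collect the known clouds that report a provider for `service`.
    return [cloud for cloud in neighbor_clouds if service in cloud["services"]]

def inter_cloud_negotiation(cloud, service):
    # ICN step (simplified): agree on the way of collaboration and return
    # a servicing-instance descriptor; real traffic goes through a Relay.
    return {"cloud": cloud["name"], "service": service, "viaRelay": True}

neighbor_clouds = [
    {"name": "plant-a", "services": {"temperature"}},
    {"name": "plant-b", "services": {"pressure"}},
]
hits = global_service_discovery(neighbor_clouds, "pressure")
instance = inter_cloud_negotiation(hits[0], "pressure")
```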
Gateway: This core system establishes a secured data path between a consumer and a provider located in two different clouds, through its Connect-to-Consumer and Connect-to-Provider services.
These Services are part of the Inter-Cloud Negotiation (ICN) process initiated by the requester cloud's Gatekeeper. During the ICN process, when a Gateway is required by one of the clouds, the Gatekeepers in both clouds establish a new data path to their application systems and ensure the data exchange via a Relay system.
Data Manager: The purpose of the Data Manager supporting core system is to provide storage of sensor data. It provides features for providers and consumers to store SenML sensor and actuator data, fetch cached data, and perform database queries.
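SenML (RFC 8428) encodes measurements as a JSON "pack": an optional base record (base name, base time) followed by one record per measurement. A pack like the one below is the kind of payload a provider would store via the Data Manager; the device URN and values are invented for illustration.

```python
import json

# A SenML pack (RFC 8428): the first record carries base fields ("bn" = base
# name, "bt" = base time) that apply to the records after it.
pack = [
    {"bn": "urn:dev:mac:0024befffe804ff1:", "bt": 1700000000},
    {"n": "temperature", "u": "Cel", "v": 23.1},
    {"n": "humidity", "u": "%RH", "v": 41.0},
]
print(json.dumps(pack))
```

Each measurement's full name is the base name concatenated with its own `n` field, and its time is the base time plus any per-record offset.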
Time Manager: The purpose of the Time Manager supporting core system is to provide time- and location-based services. It enables local cloud systems to fetch accurate and trusted time and location information.
Certificate Authority: The main purpose of the Certificate Authority supporting core system is issuing signed certificates to be used in the local cloud. Issued certificates may be revoked from the Management Interface. Systems may check whether a certificate has been revoked, and refuse to accept it.
Onboarding Controller: The purpose of this system is to be the entry point for the onboarding procedure. The Onboarding Controller sits at the edge of the Arrowhead local cloud; it is reachable not only from within the cloud by authorized systems, but also from the public through its "accept all" interfaces. Any client may authenticate itself through an Arrowhead certificate, an authorized manufacturer certificate, or simply a shared secret.
Device Registry: This system provides a database that stores information related to the Devices within the Local Cloud. Its purpose is therefore to allow Devices to: register themselves, making this announcement available to other Application Systems on the network; remove or update their entries when necessary; and obtain a generated client certificate which the Device can use to register its systems.
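A device's registry entry might be built as sketched below. The field names (`deviceName`, `macAddress`, `metadata`) are assumptions chosen for illustration; the essential point is that the entry identifies the physical device so other Application Systems can find it.

```python
def build_device_registration(device_name, mac_address, metadata=None):
    # Hypothetical shape of a Device Registry entry: the device announces
    # itself to the Local Cloud, identified by name and hardware address.
    return {
        "device": {
            "deviceName": device_name,
            "macAddress": mac_address,
        },
        "metadata": metadata or {},
    }

entry = build_device_registration("plc-17", "00:24:be:80:4f:f1",
                                  {"vendor": "acme"})
```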
Choreographer: This supporting core system makes it possible to execute pre-defined workflows through orchestration and service consumption. Each workflow is divided into three levels: Plans, Actions, and Steps. A Plan defines the whole workflow by name and contains Actions, which group coherent Steps together for greater transparency and enable sequentialization of these Step groups. In this generation, a workflow can only be executed if the requested providers in each Step are all available (i.e. registered in the Service Registry under the same name as in the plan description) and the requested services notify the Choreographer, through the Choreography service, that the execution on their end is done. Only then can the Choreographer continue the execution of the Plan.
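The Plan/Action/Step hierarchy and the availability precondition can be sketched as follows. The plan contents and field names are invented for illustration; the check mirrors the rule above that every Step's provider must be registered under the expected name before execution can start.

```python
# A hypothetical Plan: Actions group Steps, and each Step names the
# provider and service it needs.
plan = {
    "name": "fill-and-cap",
    "actions": [
        {"name": "fill", "steps": [
            {"name": "open-valve", "provider": "valve-controller",
             "service": "actuate"},
        ]},
        {"name": "cap", "steps": [
            {"name": "press-cap", "provider": "capper", "service": "actuate"},
        ]},
    ],
}

def executable(plan, registered_providers):
    # The Choreographer can only run a Plan if every Step's provider is
    # registered (by the expected name) in the Service Registry.
    return all(step["provider"] in registered_providers
               for action in plan["actions"]
               for step in action["steps"])
```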
Plant Description Engine:
This supporting core system has the purpose of choreographing the consumers and producers in the plant (System of Systems / Local Cloud). An abstract view of which systems the plant contains and how they are connected as consumers and producers is used to populate the Orchestrator with store rules for each of the consumers. The abstract view does not contain any instance-specific information; instead, metadata about each system is used to identify the service producers.
The Plant Description Engine (PDE) can be configured with several variants of the plant description, of which at most one can be active. The active plant description is used to populate the Orchestrator; if no plant description is active, the Orchestrator contains no store rules populated by the PDE. This can be used to establish alternative plants (plan A, plan B, etc.).
The PDE gathers information about the presence of all systems specified in the active plant description. If a system is not present, it raises an alarm. If it detects that an unknown system has registered a service in the Service Registry, it also raises an alarm. For a consumer system to be monitored, the system must produce the Monitorable service and hence also register in the Service Registry.
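The PDE's two behaviours, turning the abstract plant view into store rules and raising presence alarms, can be modelled minimally as below. The data shapes are invented for illustration; they capture the idea that connections (not instances) drive the generated rules, and that both missing expected systems and unknown registered systems trigger alarms.

```python
# A hypothetical abstract plant description: systems identified by
# metadata, plus consumer->producer connections.
plant = {
    "systems": [
        {"name": "sensor", "metadata": {"role": "temperature"}},
        {"name": "controller", "metadata": {"role": "control"}},
    ],
    "connections": [
        {"consumer": "controller", "producer": "sensor",
         "service": "temperature"},
    ],
}

def store_rules(plant):
    # Each connection in the active plant description becomes one
    # Orchestrator store rule for the consumer.
    return [{"consumer": c["consumer"], "provider": c["producer"],
             "service": c["service"]}
            for c in plant["connections"]]

def alarms(plant, present_systems):
    # Missing expected systems and unknown registered systems both
    # raise alarms.
    expected = {s["name"] for s in plant["systems"]}
    return {"missing": expected - present_systems,
            "unknown": present_systems - expected}
```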