This proposal has been approved and the Eclipse Service Lifecycle Management project has been created.
Visit the project page for the latest information and development.

Eclipse Service Lifecycle Management

Monday, November 21, 2022 - 15:15 by Benjamin Goetz
This proposal is in the Project Proposal Phase (as defined in the Eclipse Development Process) and is written to declare its intent and scope. We solicit additional participation and input from the community. Please log in and add your feedback in the comments section.
Parent Project
Proposal State
Created
Background

The development of the "Service Lifecycle Management" (SLM) was initiated in the German research project "FabOS - open, distributed, real-time capable and secure operating system for production". FabOS is a joint research project of industry and research institutions, funded by the Federal Ministry for Economic Affairs and Climate Action (BMWK) on the basis of a decision by the German Bundestag. The Fraunhofer Institute for Manufacturing Engineering and Automation leads the project and has initiated the development of SLM.

Scope

Eclipse Service Lifecycle Management (SLM) is an application that enables users to manage the lifecycle of AI services in production/factory environments. It utilizes the concept of the Asset Administration Shell (AAS) to provide a semantic description of all managed entities.

Description

Eclipse Service Lifecycle Management (SLM) provides a set of applications to manage the lifecycle of AI (artificial intelligence) services in production environments. The service lifecycle consists of the release, deploy, and operate phases and ends with the decommissioning of the service. It connects to the software development lifecycle (idea, design, code, build, test) and the AI model development lifecycle (idea, data acquisition, data analysis, data preparation, model training, model evaluation).
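
Purely as an illustration (this enum is not part of the proposal; the names and the simple linear progression are assumptions), the phases described above could be modeled as follows:

    /** Hypothetical sketch of the service lifecycle phases managed by the SLM. */
    public enum ServiceLifecyclePhase {
        RELEASE,        // a service developer publishes a service offering
        DEPLOY,         // a service consumer deploys an instance on a matched resource
        OPERATE,        // the instance runs and is monitored on the target resource
        DECOMMISSION;   // the instance is removed, ending the lifecycle

        /** Returns the next phase, or empty once the lifecycle has ended. */
        public java.util.Optional<ServiceLifecyclePhase> next() {
            int i = ordinal() + 1;
            return i < values().length
                    ? java.util.Optional.of(values()[i])
                    : java.util.Optional.empty();
        }
    }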

In the context of the service lifecycle three different user groups were identified:

  • service developer: a person who is responsible for creating services and providing deployable executables
  • service consumer: a person who wants to use services provided by a service developer
  • system administrator: a person who takes care of the infrastructure the services are running in and makes sure the services are working properly

For the system administrator user group, it is important to manage IT resources (bare metal / virtual machines, virtual resource providers / hypervisors) and to be able to roll out basic resource configurations (for monitoring purposes, service runtime environments, etc.) - so-called capabilities. In order to develop and test the scripts that (un)install and use capabilities, the SLM provides a test environment as a sandbox. The test environment allows the provisioning of virtual machines with different operating systems and operating system versions to simulate real-world conditions. This makes it possible to handle heterogeneous IT landscapes and to ensure the functionality of capabilities without interfering with production systems.
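
As a purely illustrative sketch (the class names, fields, and deployment types below are assumptions made for this example, not the project's actual data model), a capability could bundle the deployment definition types it can handle with the scripts used to install and uninstall it on a resource:

    import java.nio.file.Path;
    import java.util.Set;

    /** Hypothetical model of a capability a system administrator can roll out to an IT resource. */
    public record Capability(
            String name,                          // e.g. "docker", "kubernetes", "monitoring-agent"
            Set<DeploymentType> supportedTypes,   // deployment definition types this capability can handle
            Path installScript,                   // script executed to install the capability on a resource
            Path uninstallScript) {               // script executed to remove the capability again

        /** Deployment definition types a capability may support (illustrative values). */
        public enum DeploymentType { DOCKER_COMPOSE, KUBERNETES }
    }

In the sandbox described above, such install and uninstall scripts would be exercised against freshly provisioned virtual machines before the capability is released.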

When the system administrator releases capabilities, service developers can develop services building on top of those capabilities. Therefore, system administrators and service developers have to agree on common deployment definition types (e.g. Docker Compose or Kubernetes definitions) that the capabilities are able to handle and that the service developers will provide for their services. When service developers want to release a service, they can publish it as a service offering in the SLM. A service offering provides basic meta-information about the service as well as the requirements the service has regarding its runtime environment (e.g. access to specific hardware, network connections, etc.).
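
To make the notion of a service offering more concrete, the following sketch shows the kind of meta-information and runtime requirements an offering might carry; all names and fields are assumptions for this example and not the actual SLM data model:

    import java.util.List;

    /** Hypothetical model of a service offering published by a service developer. */
    public record ServiceOffering(
            String name,                       // human-readable service name
            String version,                    // version of the released, deployable executable
            String deploymentType,             // e.g. "docker-compose" or "kubernetes"
            List<Requirement> requirements) {  // requirements towards the runtime environment

        /** A single runtime requirement, e.g. access to specific hardware or a network. */
        public record Requirement(String kind, String value) { }
    }

    // Example: an AI inference service that needs a GPU and access to a shop-floor network.
    // new ServiceOffering("defect-detection", "1.2.0", "docker-compose",
    //         List.of(new ServiceOffering.Requirement("hardware", "gpu"),
    //                 new ServiceOffering.Requirement("network", "shopfloor")));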

Based on the capabilities and the service offerings, service consumers can apply configurations to their IT resources and deploy service instances in a self-service manner. During service deployment, the SLM checks which requirements a service offering has and which capabilities and hardware specifications the consumer's IT resources provide. After this match-making process, the SLM presents a filtered list of resources the service can potentially run on. To finish the deployment, the user selects a resource and the SLM deploys the service.
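
A minimal sketch of the match-making step, under the assumption that a resource exposes the capabilities and hardware features it provides while an offering lists what it requires; the types and the simple containment check below are illustrative and not the actual SLM algorithm:

    import java.util.List;
    import java.util.Set;

    /** Illustrative match-making: keep only resources that satisfy all requirements of an offering. */
    public class MatchMaker {

        /** A managed IT resource with the capabilities and hardware features it provides. */
        record Resource(String id, Set<String> capabilities, Set<String> hardwareFeatures) { }

        /** The requirements a service offering has towards its runtime environment. */
        record Requirements(Set<String> requiredCapabilities, Set<String> requiredHardware) { }

        /** Returns the resources the service could potentially be deployed on. */
        static List<Resource> match(List<Resource> resources, Requirements req) {
            return resources.stream()
                    .filter(r -> r.capabilities().containsAll(req.requiredCapabilities()))
                    .filter(r -> r.hardwareFeatures().containsAll(req.requiredHardware()))
                    .toList();
        }

        public static void main(String[] args) {
            var resources = List.of(
                    new Resource("edge-pc-01", Set.of("docker"), Set.of("gpu")),
                    new Resource("vm-07", Set.of("docker", "kubernetes"), Set.of()));
            var req = new Requirements(Set.of("docker"), Set.of("gpu"));
            // Only "edge-pc-01" provides both the required capability and the GPU.
            System.out.println(match(resources, req));
        }
    }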

In order to semantically describe all entities in this context (like resources, capabilities, service offerings/instances, requirements, ...) in a consistent way, the SLM utilizes the concept of the Asset Administration Shell (AAS). AAS are used to provide a digital representation of entities and pave the way for interoperability in production environments. An AAS can hold different submodels describing specific aspects of an entity, e.g. one submodel containing mostly static information about a PC (such as hardware specification and operating system) and another submodel exposing the current load state/metrics of that PC (such as CPU and RAM utilization). To implement this concept, the SLM will use the Eclipse BaSyx framework to provide an AAS Registry (a discovery server for AAS) and an AAS Server, which hosts the AAS of entities and their submodels. Finally, the SLM will use the AAS to describe IT (information technology) and OT (operational technology) components in the same way.
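
To illustrate the idea of one shell aggregating several submodels (one mostly static, one dynamic), here is a deliberately simplified sketch; it does not use the Eclipse BaSyx API, and all class, identifier, and submodel names are assumptions made for this example:

    import java.util.LinkedHashMap;
    import java.util.Map;

    /** Simplified illustration of an Asset Administration Shell holding several submodels. */
    public class AasExample {

        /** A submodel is a named set of properties describing one aspect of an entity. */
        record Submodel(String idShort, Map<String, Object> properties) { }

        /** A shell identifies an asset and aggregates its submodels. */
        record AssetAdministrationShell(String assetId, Map<String, Submodel> submodels) { }

        public static void main(String[] args) {
            // Mostly static description of a PC (hardware specification, operating system).
            var technicalData = new Submodel("TechnicalData",
                    Map.of("cpu", "8 cores", "ram", "32 GiB", "os", "Ubuntu 22.04"));

            // Dynamic load state of the same PC (would be refreshed periodically in practice).
            var metrics = new Submodel("Metrics",
                    Map.of("cpuUtilization", 0.42, "ramUtilization", 0.63));

            var submodels = new LinkedHashMap<String, Submodel>();
            submodels.put(technicalData.idShort(), technicalData);
            submodels.put(metrics.idShort(), metrics);

            var shell = new AssetAdministrationShell("urn:example:edge-pc-01", submodels);
            System.out.println(shell);
        }
    }

In the actual SLM, such shells and submodels would be registered in the AAS Registry and hosted on the AAS Server provided via Eclipse BaSyx.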

Source Repository Type