Proposals

Eclipse SEALMAN

Tuesday, January 14, 2025 - 09:26 by Jos Zenner

Eclipse SEALMAN is an open-source project born from the collaboration of machine builders, offering a comprehensive suite of building blocks for intelligent machines. At its core, Eclipse SEALMAN empowers machine builders to seamlessly integrate edge computing, robust device management, cloud connectivity, and efficient remote maintenance capabilities into their solutions. With a strong emphasis on cyber security, Eclipse SEALMAN provides a secure architecture that safeguards machines from the edge to the cloud, ensuring the integrity and confidentiality of data throughout the entire lifecycle.

Eclipse AASPortal

Tuesday, December 3, 2024 - 09:46 by Florian Pethig

Eclipse AASPortal is written in TypeScript and based on a distributed software architecture that includes a Node.js-based “aas-node” and an Angular-based web application “aas-portal” (using Bootstrap 5 and NgRx). Via the standardized AAS REST API, Eclipse AASPortal connects to AAS endpoints, e.g. based on Eclipse AASX Package Explorer and Server, Eclipse BaSyx, or Eclipse FA³ST. These endpoints can be configured during operation by authorized users. Additionally, operational data can be integrated via OPC UA (based on node-opcua) and visualized in corresponding plots. AASPortal enables full-text search and filtering of AAS, as well as basic modelling capabilities such as adding Submodel templates or editing values of properties.
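The full-text search over AAS could look like the following sketch; the sample shell descriptors and the search-by-idShort behaviour are illustrative assumptions, not the actual AASPortal implementation:

```python
# Hypothetical sketch: the standardized AAS REST API serves shell descriptors
# (e.g. via GET <endpoint>/shells); here we filter a hard-coded sample list
# the way a full-text search over the idShort attribute might work.
SAMPLE_SHELLS = [
    {"idShort": "MotorController", "id": "urn:example:aas:1"},
    {"idShort": "TemperatureSensor", "id": "urn:example:aas:2"},
]

def search_shells(shells: list[dict], query: str) -> list[dict]:
    """Case-insensitive substring search over the idShort attribute."""
    q = query.lower()
    return [s for s in shells if q in s["idShort"].lower()]

print([s["id"] for s in search_shells(SAMPLE_SHELLS, "sensor")])
# → ['urn:example:aas:2']
```

In a real deployment the list would come from a configured AAS endpoint rather than a literal.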

Cyber Resilience Practices

Wednesday, November 13, 2024 - 10:15 by Tobie Langel

The Cyber Resilience Practices Project develops specifications designed to help improve the cyber resilience of open source projects and of the products that incorporate them, and to facilitate compliance with related regulations worldwide.

The first specification to be developed by this project is the Vulnerability Handling Specification.

The Vulnerability Handling Specification focuses on vulnerability management for products with digital elements, as outlined by the Essential Requirements of the CRA. It details the necessary components of a vulnerability handling policy, including procedures for receiving reports, resolving issues, and disclosing vulnerabilities. Additionally, it specifies the requirements for managing vulnerable dependencies.

Eclipse zserio

Monday, November 11, 2024 - 06:55 by Fabian Klebert

Eclipse zserio enables automatic code generation for supported languages like C++, Java, and Python, allowing developers to focus on application logic rather than low-level data handling. With its emphasis on efficiency and compatibility, Eclipse zserio is especially valuable in industries with stringent performance requirements and interoperability needs, such as automotive, telecommunications, and financial services.

The Eclipse zserio toolchain includes a schema compiler, runtime libraries, and various utilities to support schema evolution, compression, and seamless integration into CI/CD pipelines. It is designed to be scalable, making it suitable for both simple data models and highly complex, large-scale schemas. As part of the Eclipse Foundation, Eclipse zserio will foster an open community to drive innovation and collaboration in serialization and data interoperability across diverse applications and systems.
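As a concept illustration only (this is not the zserio toolchain or its generated API), the following Python sketch mimics what schema-generated code does for a hypothetical schema `struct Point { int32 x; int32 y; };` — a fixed, compact byte-level layout derived from a schema:

```python
import struct

# Concept sketch: schema-driven serialization, as zserio-generated code
# provides it. The format string below stands in for a hypothetical schema
# `struct Point { int32 x; int32 y; };` -- it is NOT the zserio Python API.
POINT_FORMAT = ">ii"  # two big-endian 32-bit signed integers

def serialize_point(x: int, y: int) -> bytes:
    """Pack a point into its fixed, schema-defined binary layout."""
    return struct.pack(POINT_FORMAT, x, y)

def deserialize_point(data: bytes) -> tuple[int, int]:
    """Recover the point from the binary layout."""
    return struct.unpack(POINT_FORMAT, data)

blob = serialize_point(3, -7)
assert deserialize_point(blob) == (3, -7)
print(len(blob))  # → 8  (compact: exactly two 4-byte fields)
```

The point of a schema compiler is that such packing/unpacking code is generated for C++, Java, and Python from one schema, so all sides agree on the layout.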

Eclipse LMOS

Thursday, October 31, 2024 - 10:14 by Kai Kreuzer

The Eclipse LMOS project (Language Model Operating System) is essentially a platform for building and running AI systems that can handle complex tasks. Imagine it like an operating system for your computer, but instead of managing applications, it manages AI agents. These agents are like smaller, specialized AI programs that each handle a specific part of a larger problem.

The key idea behind Eclipse LMOS is to break down complex tasks into smaller parts that can be handled by different AI agents. For instance, if you're building a customer service chatbot, one agent might handle basic greetings, another might answer questions about billing, and another might deal with technical support issues. This way, each agent can be really good at its specific job, leading to better overall performance.

Eclipse LMOS helps these agents work together by providing a common platform where they can communicate and share information. It's like a central hub that keeps everything organized and running smoothly. This platform also makes it easier to manage and scale the system as needed. If you need to add more agents or handle more traffic, LMOS can handle it without breaking a sweat.

Eclipse LMOS was designed to be very flexible and user-friendly. You don't need to be an AI expert to use it. The platform provides tools and features that make it easy to build, deploy, and manage AI agents.
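The decomposition idea can be sketched in a few lines; the agents and the keyword-based routing below are illustrative assumptions, not the actual LMOS API:

```python
# Hypothetical sketch of the multi-agent idea behind Eclipse LMOS: a router
# dispatches each request to the specialized agent best suited to handle it.
# Agent names and the keyword triggers are illustrative assumptions.

def greeting_agent(msg: str) -> str:
    return "Hello! How can I help you today?"

def billing_agent(msg: str) -> str:
    return "Let me look up your invoice."

def support_agent(msg: str) -> str:
    return "Let's troubleshoot your device."

AGENTS = {
    "hello": greeting_agent,
    "invoice": billing_agent,
    "error": support_agent,
}

def route(message: str) -> str:
    """Pick the first agent whose trigger keyword appears in the message."""
    text = message.lower()
    for keyword, agent in AGENTS.items():
        if keyword in text:
            return agent(message)
    return "Sorry, no agent can handle this request yet."

print(route("Hello there"))
print(route("There is a mistake on my invoice"))
```

A production platform would route on intent classification rather than keywords, but the shape is the same: many small specialists behind one shared hub.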

Eclipse Fennec

Wednesday, October 9, 2024 - 05:03 by Mark Hoffmann

We see Eclipse Fennec as an incubator and space for any EMF- and OSGi-related projects. Over the years we have used these two technologies extensively in our projects. 

We have added extensions to EMF to make it work optimally in an OSGi environment. Based on that, we build additional components for existing OSGi specifications, such as the Whiteboard for Jakarta RESTful Web Services, that are able to serialize and de-serialize EMF instances. 

We have built a number of other frameworks on top of that, such as a serializer/de-serializer framework that can save and load EMF with the same configuration in different formats and stores, including MongoDB, JPA, Lucene, and JSON/YAML.

We needed all of these components to use EMF end-to-end in an application.

  • EMF OSGi - EMF Framework as OSGi Service, Code Generator for these components

  • EMF-Util - Extensions based on EMF OSGi to customize serialization, Jakarta RS extension to work with EMF

  • Model-Atlas - Distributed EMF Model Registry

  • EMF-Codec - Serializer/de-serializer framework for EMF in an OSGi-based way

  • EMF Persistence - Persistence extension for EMF in an OSGi environment

  • Mapping Layer - QVT transformation in an OSGi way

  • Privacy layer - Framework to analyze model and/or model instances for privacy related information

Eclipse Safe Open Vehicle Core

Tuesday, October 1, 2024 - 03:41 by Thilo Schmitt

The Eclipse Safe Open Vehicle Core project aims to develop an open-source core stack for Software Defined Vehicles (SDVs), specifically targeting embedded high-performance Electronic Control Units (ECUs).

As these ECUs carry multiple processors, the project also targets interoperability between these processors.

To ensure applicability in the automotive domain, the project ensures compliance with relevant safety standards, such as ISO 26262 for functional safety, providing a reliable foundation for safety-critical applications, as well as adherence to stringent security standards, implementing robust cybersecurity measures in accordance with ISO/SAE 21434 and UNECE WP.29.

A key aspect of the project is the design of a modular and extensible architecture, allowing easy integration and customization for various automotive applications, ensuring flexibility and scalability. Additionally, the project focuses on end-to-end optimization throughout the stack to achieve maximum efficiency and performance.

The project is guided by several key principles:

Common Stack & Industry-Wide Collaboration

The Safe Open Vehicle Core project aims to create a common full stack solution of a software runtime that serves as the best possible solution for shared industry problems. By achieving efficiencies through a single, joint solution instead of multiple specific ones, the project addresses non-differentiating scopes and ensures that the scope is significant for multiple parties, rather than catering to singular interests.

Speed

The project accelerates development by working in open source, focusing on code-centric and iterative methods rather than primarily on textual specifications.

Abstraction and Extensibility

The project emphasizes the decoupling of hardware (HW) and software (SW), ensuring that applications do not depend on specific hardware characteristics. It establishes predetermined breaking points to enable the exchange of implementations of individual layers, aspects, and components, such as ECU communication protocols. Additionally, it focuses on enabling project-specific extensions of the stack, providing a flexible framework that can be customized and extended to meet the specific requirements of different projects.

Quality & Efficiency

The Safe Open Vehicle Core project aims for a lean, no-frills solution to lower complexity and increase efficiency. The project strives for support of modern implementation paradigms and languages like Rust or C++, uses human-readable specification languages that are domain and target-driven, and avoids complex exchange data formats. It seeks the optimal balance between modularity and resource consumption and follows state-of-the-art processes to develop safe and secure software in an open-source environment.

By achieving these goals and adhering to these key principles, the Eclipse Safe Open Vehicle Core project aims to deliver a versatile and secure core stack that supports the evolving needs of the automotive industry and accelerates the adoption of software-defined vehicle technologies.

Eclipse TRAICE (Tracking Real-time AI Carbon Emission)

Tuesday, September 17, 2024 - 11:20 by Matthew Khouzam

Horizontal Federated Machine Learning (H-FML)

Federated learning enables AI/ML model training at the network nodes by exploiting large-scale distributed data and compute resources. Federated learning also restricts explicit data sharing so that the confidentiality and privacy associated with the use case are preserved. FL differs from classical AI/ML in four main domains: data privacy (no end-user data leaves the device, worker, node, or client), data distribution (data may be IID or non-IID), continual learning (the communication time between client and central server may be too long to provide a satisfactory user experience), and aggregation of data (some privacy notions and rules are violated when user data aggregation occurs in the central server) [4,5].

Federated learning requires $\mathcal{K}$ devices to upload and aggregate parameters iteratively to train the global model [6,7]. In such a scenario, distributed devices (mobile devices, workers) collaborate to train a common AI/ML model under the coordination of an access point (AP) or parameter server.

H-FML occurs over multiple communication rounds (whose cost splits into upload cost and download cost) and computation rounds. In each training round, a five-stage process is repeated until the model converges.

-   In step 1, the FML process starts when a training task is created by the server (coordinator), which initializes the parameters of the global model and sends them to each worker (client or participant), incurring the first download cost. 

-   In step 2, each worker $k \in \mathcal{K}$ (the participants) independently completes training on its local dataset to minimize the loss on its local data distribution $D_k$ of size $n_k$.

-   In step 3, each worker submits its local model to the server (coordinator), incurring the upload cost.

-   In step 4, the server consolidates the global model by aggregating the local models received from the workers.

-   In step 5, the global model is dispatched back to the workers, incurring the second download cost. Each worker uses this updated global model in the next training round.

To achieve the goal in step 2, FML trains a global model that minimizes the average loss across parties.
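Using the symbols introduced above ($\mathcal{K}$ workers, local datasets $D_k$ of size $n_k$), this is the standard federated-averaging objective; the per-sample loss $\ell$ is notation introduced here:

```latex
\min_{w} F(w) = \sum_{k \in \mathcal{K}} \frac{n_k}{n} F_k(w),
\qquad
F_k(w) = \frac{1}{n_k} \sum_{(x_i, y_i) \in D_k} \ell(w; x_i, y_i),
\qquad
n = \sum_{k \in \mathcal{K}} n_k .
```

Each worker minimizes its local loss $F_k$ in step 2, and the size-weighted sum in step 4 makes the aggregate model minimize the average loss over all parties' data.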

In subsequent training iterations (steps 2-5 form a single round of FL), the process is repeated until the training loss converges, the model converges, a time limit is exceeded, or the maximum number of iterations is reached.
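The five-stage round can be sketched as a minimal federated-averaging (FedAvg) loop; the toy linear-regression task, dataset sizes, learning rate, and round count below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each of K workers holds a local dataset D_k of size n_k (toy regression data).
workers = []
for n_k in (40, 60, 100):
    X = rng.normal(size=(n_k, 2))
    y = X @ true_w
    workers.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """Step 2: a worker minimizes its local squared loss by gradient descent."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)  # step 1: the server initializes the global model
for _round in range(20):
    # steps 1-3: broadcast w_global, train locally, collect local models
    local_models = [local_update(w_global, X, y) for X, y in workers]
    n_total = sum(len(y) for _, y in workers)
    # step 4: aggregate local models, weighted by local dataset size n_k
    w_global = sum(len(y) / n_total * w_k
                   for (_, y), w_k in zip(workers, local_models))
    # step 5: w_global is redistributed at the top of the next round

print(np.round(w_global, 2))  # converges toward true_w
```

Real deployments add the upload/download costs, stragglers, and privacy mechanisms that the surrounding text discusses; the loop structure stays the same.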

ECLIPSE TRAICE

Eclipse TRAICE is proposed as interactive visualization software designed for real-time monitoring of the carbon emissions of AI/ML systems. It is designed to handle horizontal federated machine learning (H-FML) algorithms and enables users to track their carbon footprint. TRAICE allows users to simultaneously observe the environmental impact and the effectiveness of model training.

TRAICE is built from three main components:

-   A Python library

-   A Server and

-   A client application

The Python library binds to the code run by the nodes (workers) of an H-FML system via the TRAICE package and collects real-time data generated by the nodes. The library then sends those metrics to the server. It relies on the codecarbon library [3] to measure the energy consumed and the carbon emitted by the nodes.

The server aggregates the generated data and computes several metrics on the real-time training sessions of the participating workers.

The client is the user's visualization tool, allowing users to monitor and interact with the framework.

The components communicate via WebSockets, ensuring real-time bidirectional communication between the workers and the server, as well as between the server and the client.
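As a rough sketch of the server's aggregation role, the following groups per-worker emission reports into per-round totals; the message fields are a simplifying assumption, not the actual TRAICE wire format:

```python
from collections import defaultdict

def aggregate(messages):
    """Sum reported CO2 per training round.

    messages: iterable of dicts shaped like
    {"worker": <id>, "round": <int>, "co2_kg": <float>} -- an assumed,
    illustrative schema, not TRAICE's real message format.
    """
    totals = defaultdict(float)
    for m in messages:
        totals[m["round"]] += m["co2_kg"]
    return dict(totals)

reports = [
    {"worker": 0, "round": 1, "co2_kg": 0.004},
    {"worker": 1, "round": 1, "co2_kg": 0.006},
    {"worker": 0, "round": 2, "co2_kg": 0.005},
]
print(aggregate(reports))  # per-round totals across all workers
```

In the real system these reports arrive over the WebSocket connections and the totals are pushed on to the client for plotting.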

System Requirement

Eclipse TRAICE requires a Docker engine installed and running on the host machine. Python must be installed to run scripts and components; version 3.10.x is recommended. The following requirements are also mandatory:

-   The library must be installed on all worker nodes of the H-FML system. The workers can run anywhere (as per user preference) and only need to be able to communicate with the server.

-   The server can be deployed anywhere but must expose port 3000 and be accessible by all services. Appropriate access should be ensured through proper routing techniques and firewall configuration.

-   The database can be deployed anywhere and only needs to communicate with the server. Ideally, it should be on the same subnet as the server to minimize latency.

-   The client can be installed anywhere but must communicate with the server. In a typical client-server architecture, the client sends requests to the server and receives responses in return; the exchange is carried over socket connections.

ECLIPSE TRAICE System Components

The Eclipse TRAICE system is deployed in three parts, each running as a separate Docker container.

1)  TRAICE-frontend: the TRAICE client application for visualizing the graphs of emissions in real time.

2)  TRAICE-backend: the server responsible for receiving, aggregating, and sending the carbon-emission data of the workers participating in the training to the TRAICE client.

3)  TRAICE-database: an SQL database used for storing training data.

Docker Compose facilitates the communication and network setup between the containers, ensuring they operate seamlessly as a unified system. The system components each expose different ports for communication.

Here are the exposed ports:

-   Server: port :3000

-   Client: port :4200

-   Database: port :8001 (the Docker container exposes port 8001 and redirects it to port 5432 inside the container, where the PostgreSQL service listens).
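The port layout above could be wired together with a Docker Compose file along these lines; the service names, build paths, and image tag are assumptions for illustration, not the project's actual configuration:

```yaml
# Hypothetical docker-compose.yml mirroring the exposed ports listed above.
services:
  traice-backend:
    build: ./backend          # assumed build context
    ports:
      - "3000:3000"           # server port
  traice-frontend:
    build: ./frontend         # assumed build context
    ports:
      - "4200:4200"           # client port
    depends_on:
      - traice-backend
  traice-database:
    image: postgres:15        # assumed PostgreSQL image
    ports:
      - "8001:5432"           # host 8001 -> container 5432 (PostgreSQL)
```

Compose puts all three services on one network, which is how the containers reach each other by service name.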

Step-by-step installation of Eclipse TRAICE with Docker Compose

Three distinct bidirectional communications have been identified: the TRAICE library communicates bidirectionally with the TRAICE server; the TRAICE server communicates bidirectionally with the client; and finally, the TRAICE server communicates bidirectionally with the database.

The steps of the installations are as follows:

-   Clone the repository and navigate to the project root.

-   Build the application images with Docker Compose: docker compose build

-   Run the application: docker compose up -d

Access the frontend by navigating to http://localhost:4200 in your local web browser. To stop all containers related to TRAICE: docker compose down

Installing Eclipse TRAICE library

To build the library from sources, follow the steps below to obtain the ".whl" file:

cd library/traice

pip install --upgrade setuptools

pip install --upgrade build

python -m build

The ".whl" file will be created in the "dist" folder. You can then install the package with "pip install <filename>.whl" (e.g. "pip install Traice-0.0.1-py3-none-any.whl").

Library Usage

The "example" folder contains examples of usage. The library exposes a class "TraiceClient" that handles all the tracking and communication logic. Once the client is integrated into the training code, you should be able to see information about the training and its energy usage in the frontend!

-   Typical example: cifar10

This example shows the federated training on CIFAR-10 dataset with 3 workers and carbon emissions tracking using TRAICE. This code is based on the Flower Federated Learning library example available here:

[Flower Quickstart PyTorch](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch).

To use it, first install the dependencies with "pip install -r requirements.txt" (if you don't have TRAICE installed, please follow the instructions above to build and install it).

# 1. Start federated learning server

python server.py

# 2. Start TRAICE server (follow instructions in section c))

# 3. Start workers

python worker.py --node-id 0

python worker.py --node-id 1

python worker.py --node-id 2

You can now open the TRAICE client and access the visualization.

References

1.  K. Ahmad, A. Jafar, and K. Aljoumaa, "Customer churn prediction in telecom using machine learning in big data platform," Journal of Big Data, vol. 6, no. 1, pp. 1-24, 2019.

2.  L. Bariah, H. Zou, Q. Zhao, B. Mouhouche, F. Bader, and M. Debbah, "Understanding telecom language through large language models," in IEEE Global Communications Conference (GLOBECOM), 2023, pp. 6542--6547.

3.  S. Luccioni, "Code carbon: Track and reduce CO2 emissions from your computing," https://github.com/mlco2/codecarbon, 2013.

4.  Y. Chen et al, "Federated learning for privacy preserving IA", Communications of ACM, vol. 63, no. 12, pp. 33-36, 2020.

5.  Q. Yang et al, "Federated machine learning: concept and applications", ACM Transaction on Intelligent Systems and Technology (TIST), vol. 20, no. 2, pp. 12-19, 2019.

6.  D. Ye et al, "Federated Learning in Vehicular edge computing: A selective model aggregation approach", IEEE Access, vol. 8, pp. 23920-23935, 2020.

7.  Zhilu Chen and Xinming Huang, "E2E learning for lane keeping of self-driving cars", IEEE Intelligent Vehicles Symposium (IV), 2017,  pp. 1856-1860
 

Eclipse TMLL (Trace Server Machine Learning Library)

Monday, September 16, 2024 - 13:49 by Matthew Khouzam

Eclipse TMLL provides users with pre-built, automated solutions that integrate general trace server analyses (e.g., CPU usage, memory, and interrupts) with machine learning models. This allows for more precise, efficient analysis without requiring deep knowledge in either trace server operations or ML. By streamlining the workflow, TMLL empowers users to identify anomalies, trends, and other performance insights without extensive technical expertise, significantly improving the usability of trace server data in real-world applications. 

Capabilities of TMLL 

  • Anomaly Detection: TMLL employs unsupervised machine learning techniques, such as clustering and density-based methods, alongside traditional statistical approaches like Z-score and IQR analysis, to automatically detect outliers and irregular patterns in system behavior. This helps users quickly identify potential anomalies, such as unexpected spikes in CPU usage or memory leaks.
  • Predictive Maintenance: Using time-series analysis, TMLL can forecast potential system failures or performance degradation. By analyzing historical data, the tool can predict when maintenance or adjustments will be necessary, helping users avoid costly downtime and improve system reliability.
  • Root Cause Analysis: TMLL leverages supervised learning techniques to identify the underlying causes of performance issues. By training models on labelled trace data, users can determine which factors contribute to problems such as bottlenecks or system crashes, leading to faster resolution and more effective troubleshooting.
  • Resource Optimization: Through a combination of classical optimization techniques and Reinforcement Learning (RL), TMLL helps users optimize system resources like CPU, memory, and disk I/O. This ensures efficient use of system resources and helps avoid unnecessary waste, while also adapting to changing workloads for better overall performance.
  • Performance Trend Analysis: TMLL provides comprehensive tools to analyze long-term performance trends. By evaluating historical data and identifying patterns, users can detect performance shifts, regressions, or improvements over time, providing valuable insights for ongoing system optimization and future planning. 
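The Z-score technique mentioned under Anomaly Detection can be sketched in a few lines; the CPU-usage samples and the threshold below are illustrative, not TMLL's actual API:

```python
import statistics

def zscore_outliers(samples, threshold=3.0):
    """Return the indices of samples whose Z-score exceeds the threshold."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    return [i for i, x in enumerate(samples)
            if stdev > 0 and abs(x - mean) / stdev > threshold]

# Toy CPU-usage trace (percent) with one obvious spike at index 4.
cpu_usage = [12.0, 13.5, 11.8, 12.2, 95.0, 12.9, 13.1]
print(zscore_outliers(cpu_usage, threshold=2.0))  # → [4]
```

IQR-based detection works the same way with quartiles instead of mean and standard deviation; TMLL layers clustering and density-based ML methods on top of such baselines.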

Eclipse Wheel Speed Sensor Signal Packer

Wednesday, September 11, 2024 - 07:42 by Daniel Fischer

Making the Eclipse Wheel Speed Sensor Signal Packer, a lossless packing SW module, available as FOSS will avoid a multitude of proprietary solutions. Instead, a generic packer SW module shall be made available which, for example, brake system suppliers can integrate by default into their main-path brake SW.
A key purpose of the project is to establish an open industry standard for losslessly packed WSS signals for subsequent spectrum analysis.
It shall avoid the competition restrictions that could otherwise arise if a brake system supplier chose a particular proprietary packer solution, making it incompatible with applications from other sources selected by the vehicle manufacturer ("lock-in"). In the best case, this interoperability enables an (admittedly small) ecosystem of WSS-spectrum-based functions from various, fairly competing players and promotes the creation of new and innovative solutions.