Proposals

Eclipse Trustable Software Framework

Wednesday, January 22, 2025 - 17:31 by John Ellis

The Eclipse Trustable Software Framework project focuses on practical approaches to understanding risks in software engineering.

Describing and quantifying these risks requires a scalable framework that applies to systems of varying complexity. Complex systems involving software, hardware, safety, and security properties can benefit greatly from continuous quantitative analysis to inform decision-making.

A single generalised system cannot address all domains effectively, so Eclipse Trustable Software Framework aims to evolve into an ecosystem built around a unified methodology in the longer term. The first step in this journey is the Trustable Software Framework (TSF) and its set of Tenets and Assertions: short statements (propositions that can be either true or false) identifying which objectives software projects must address in order to be considered Trustable.

TSF specifies how metadata about a software project is stored and managed in a git repository, alongside the software's source code and documentation. This involves systematically tracking a set of statements that specify the software project, which form a directed acyclic graph (DAG) and provide metrics for continuous analysis and iteration.

These statements document claims about the software or the project, identify evidence that supports these, or specify requests that must be satisfied by another claim - or by something or someone external to the project. The graph describes how high-level or external goals for the software project (Expectations) are supported by more specific objectives (Assertions) and ultimately Evidence. The latter must necessarily reference artifacts: files in a git repository, or the result of automated processes operating on such files.
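The statement graph described above can be sketched in a few lines of Python. This is an illustrative model only, not the actual TSF storage format; the statement identifiers and the supporting-edge representation are assumptions:

```python
# Illustrative sketch of a TSF-style statement graph (not the actual TSF format).
# Each statement is an Expectation (EXP), Assertion (ASR), or Evidence (EVD);
# edges point from a statement to the more specific statements that support it.

from collections import defaultdict

graph = {
    # statement id                  -> ids of statements that support it
    "EXP-1 software is releasable": ["ASR-1 all tests pass", "ASR-2 licenses reviewed"],
    "ASR-1 all tests pass":         ["EVD-1 ci-results.json"],
    "ASR-2 licenses reviewed":      ["EVD-2 license-scan.txt"],
    "EVD-1 ci-results.json":        [],   # Evidence: references an artifact in git
    "EVD-2 license-scan.txt":       [],
}

def unsupported(graph):
    """Return non-Evidence statements that have no supporting statements."""
    return [s for s, deps in graph.items()
            if not deps and not s.startswith("EVD")]

def is_acyclic(graph):
    """Verify the statements form a DAG via depth-first search."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def visit(node):
        if color[node] == GREY:
            return False          # back edge found: cycle
        if color[node] == BLACK:
            return True
        color[node] = GREY
        if not all(visit(d) for d in graph.get(node, [])):
            return False
        color[node] = BLACK
        return True
    return all(visit(n) for n in list(graph))

print(is_acyclic(graph))   # True
print(unsupported(graph))  # [] -- every claim here is ultimately backed by evidence
```

Continuous analysis over such a graph is what yields the metrics mentioned above: any claim left in the `unsupported` list, or any cycle, signals a gap in the project's trustability argument.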

This approach is intended to replace traditional techniques that rely on office tools such as MS Word or Excel to author documentation, and on specialised commercial tooling to manage system specifications and requirements and trace them to implemented software and tests. Such approaches frequently fail to maintain integrity in complex, continuously evolving systems, because they are not closely integrated with the software that they describe.

Eclipse OSCAT

Thursday, January 16, 2025 - 05:02 by Franz Höpfinger

Advantages of Transferring OSCAT to the Eclipse Foundation

Introduction to OSCAT

The Open Source Community for Automation Technology (OSCAT) is a well-established initiative that provides a suite of software packages designed to enhance automation technology. OSCAT consists of three main packages: OSCAT-Basic, OSCAT-Network, and OSCAT-Building. These packages are compatible with various platforms, including CODESYS 2.3, CODESYS 3.5, Siemens S7, and PC Worx (All other brand names and trademarks are the property of their respective owners and are used for descriptive purposes only). OSCAT's mission is to offer open-source solutions that facilitate automation in industrial and building environments, promoting innovation and accessibility in the field.

Benefits to OSCAT

  1. Enhanced Governance and Structure:
    • Vendor-Neutral Governance: The Eclipse Foundation offers a vendor-neutral governance model that ensures fair and transparent decision-making processes. This structure can help OSCAT maintain its integrity and independence, avoiding potential conflicts of interest.
    • Intellectual Property Management: The Eclipse Foundation has established processes for managing intellectual property, which can protect OSCAT's assets and ensure compliance with legal standards.
  2. Increased Visibility and Adoption:
    • Global Recognition: Being part of the Eclipse Foundation can significantly boost OSCAT's visibility within the global open-source community. The Eclipse brand is well-respected and widely recognized, which can attract more users and contributors.
    • Community Engagement: The Eclipse Foundation's extensive network can facilitate greater community engagement, fostering collaboration and innovation. This can lead to more rapid development and improvement of OSCAT.
  3. Access to Resources and Infrastructure:
    • Technical Infrastructure: OSCAT can benefit from the Eclipse Foundation's advanced technical infrastructure, including build systems, code repositories, and continuous integration tools. This can streamline development processes and improve software quality.
    • Funding and Support: The Eclipse Foundation can provide financial support and resources for marketing, events, and other activities that promote OSCAT. This can help sustain and grow the project over time.

Benefits to the Eclipse Foundation

  1. Expansion of Project Portfolio:
    • Diverse Ecosystem: Integrating OSCAT into the Eclipse Foundation's portfolio can diversify its ecosystem, attracting new contributors and users from different domains. This can enhance the Foundation's overall impact and reach.
    • Innovation and Collaboration: OSCAT's unique features and capabilities can inspire new projects and collaborations within the Eclipse community, driving innovation and technological advancement.
  2. Strengthening Open-Source Standards:
    • Standardization Efforts: The Eclipse Foundation's collaboration with standards organizations can benefit from OSCAT's inclusion. OSCAT can contribute to the development of open-source standards, promoting interoperability and best practices.
    • Compliance and Security: By adhering to the Eclipse Foundation's rigorous development processes, OSCAT can enhance its compliance with industry standards and improve its security posture.
  3. Community and Ecosystem Growth:
    • Broader Community: The addition of OSCAT can attract a broader community of developers, users, and stakeholders to the Eclipse Foundation. This can lead to a more vibrant and diverse ecosystem, fostering greater collaboration and knowledge sharing.
    • Ecosystem Synergies: OSCAT can create synergies with other Eclipse projects, enabling the development of integrated solutions and expanding the Foundation's capabilities in various technological areas.

Special Focus on Home and Building Automation

The Eclipse Foundation has ongoing activities and projects focused on home and building automation. By integrating OSCAT, which includes the OSCAT-Building package, the Eclipse Foundation can strengthen its position in this domain. OSCAT's expertise and solutions can complement existing Eclipse projects, leading to more comprehensive and innovative automation solutions for smart homes and buildings.

Industrial Automation

OSCAT is widely used in industrial automation, which widens the scope of the above-mentioned activities into a broader field of applications, yielding multiple synergy effects in the reusability and stability of control software.

Conclusion

The transfer of OSCAT to the Eclipse Foundation as Eclipse OSCAT offers substantial advantages for both parties. OSCAT can benefit from enhanced governance, increased visibility, and access to resources, while the Eclipse Foundation can expand its project portfolio, strengthen open-source standards, and grow its community. This strategic move can ultimately lead to a more robust, innovative, and collaborative open-source ecosystem, particularly in the field of home and building automation.

Eclipse SEALMAN

Tuesday, January 14, 2025 - 09:26 by Jos Zenner

Eclipse SEALMAN is an open-source project born from the collaboration of machine builders, offering a comprehensive suite of building blocks for intelligent machines. At its core, Eclipse SEALMAN empowers machine builders to seamlessly integrate edge computing, robust device management, cloud connectivity, and efficient remote maintenance capabilities into their solutions. With a strong emphasis on cyber security, Eclipse SEALMAN provides a secure architecture that safeguards machines from the edge to the cloud, ensuring the integrity and confidentiality of data throughout the entire lifecycle.

Eclipse AASPortal

Tuesday, December 3, 2024 - 09:46 by Florian Pethig

Eclipse AASPortal is written in TypeScript and based on a distributed software architecture that includes a Node.js based “aas-node” and an Angular based web application “aas-portal” (using Bootstrap 5 and NgRx). Via the standardized AAS REST API, Eclipse AASPortal connects to AAS endpoints, e.g. based on Eclipse AASX Package Explorer and Server, Eclipse BaSyx, or Eclipse FA³ST. These endpoints can be configured during operation by authorized users. Additionally, operational data can be integrated via OPC UA (based on node-opcua) and visualized in corresponding plots. AASPortal enables full-text search and filtering of AAS, as well as basic modelling capabilities like adding Submodel templates or editing values of properties.

Cyber Resilience Practices

Wednesday, November 13, 2024 - 10:15 by Tobie Langel

The Cyber Resilience Practices Project develops specifications designed to help improve the cyber resilience of open source projects and of the products that incorporate them, and to facilitate compliance with related regulation worldwide.

The first specification to be developed by this project is the Vulnerability Handling Specification.

The Vulnerability Handling Specification focuses on vulnerability management for products with digital elements, as outlined by the Essential Requirements of the CRA. It details the necessary components of a vulnerability handling policy, including procedures for receiving reports, resolving issues, and disclosing vulnerabilities. Additionally, it specifies the requirements for managing vulnerable dependencies.

Eclipse zserio

Monday, November 11, 2024 - 06:55 by Fabian Klebert

Eclipse zserio enables automatic code generation for supported languages like C++, Java, and Python, allowing developers to focus on application logic rather than low-level data handling. With its emphasis on efficiency and compatibility, Eclipse zserio is especially valuable in industries with stringent performance requirements and interoperability needs, such as automotive, telecommunications, and financial services.

The Eclipse zserio toolchain includes a schema compiler, runtime libraries, and various utilities to support schema evolution, compression, and seamless integration into CI/CD pipelines. It is designed to be scalable, making it suitable for both simple data models and highly complex, large-scale schemas. As part of the Eclipse Foundation, Eclipse zserio will foster an open community to drive innovation and collaboration in serialization and data interoperability across diverse applications and systems.

Eclipse LMOS

Thursday, October 31, 2024 - 10:14 by Kai Kreuzer

The Eclipse LMOS project (Language Model Operating System) is essentially a platform for building and running AI systems that can handle complex tasks. Imagine it like an operating system for your computer, but instead of managing applications, it manages AI agents. These agents are smaller, specialized AI programs that each handle a specific part of a larger problem.

The key idea behind Eclipse LMOS is to break down complex tasks into smaller parts that can be handled by different AI agents. For instance, if you're building a customer service chatbot, one agent might handle basic greetings, another might answer questions about billing, and another might deal with technical support issues. This way, each agent can be really good at its specific job, leading to better overall performance.

Eclipse LMOS helps these agents work together by providing a common platform where they can communicate and share information. It's like a central hub that keeps everything organized and running smoothly. This platform also makes it easier to manage and scale the system as needed. If you need to add more agents or handle more traffic, LMOS can handle it without breaking a sweat.

Eclipse LMOS was designed to be very flexible and user-friendly. You don't need to be an AI expert to use it. The platform provides tools and features that make it easy to build, deploy, and manage AI agents.

Eclipse Fennec

Wednesday, October 9, 2024 - 05:03 by Mark Hoffmann

We see Eclipse Fennec as an incubator and space for any EMF and OSGi related projects. Over the years we have used these two technologies extensively in our projects.

We added extensions to EMF to make it work in an optimal OSGi manner. Based on these, we built additional components for existing OSGi specifications, such as the Whiteboard for Jakarta RESTful Web Services, that are able to serialize and de-serialize EMF instances.

We built a number of other frameworks on top of that, such as a serializer/de-serializer framework capable of saving and loading EMF with the same configuration in different formats such as MongoDB, JPA, Lucene, and JSON/YAML.

We needed all these components to use EMF end-to-end in an application.

  • EMF OSGi - EMF Framework as OSGi Service, Code Generator for these components

  • EMF-Util - Extensions based on EMF OSGi to customize serializing, and a Jakarta RS extension to work with EMF

  • Model-Atlas - Distributed EMF Model Registry

  • EMF-Codec - Serializer/de-serializer framework for EMF in an OSGi based way

  • EMF Persistence - Persistence extension for EMF in an OSGi environment

  • Mapping Layer - QVT transformation in an OSGi way

  • Privacy layer - Framework to analyze model and/or model instances for privacy related information

Eclipse Safe Open Vehicle Core

Tuesday, October 1, 2024 - 03:41 by Thilo Schmitt

The Eclipse Safe Open Vehicle Core project aims to develop an open-source core stack for Software Defined Vehicles (SDVs), specifically targeting embedded high-performance Electronic Control Units (ECUs).

As these ECUs carry multiple processors, the project also targets interoperability between these processors.

To ensure applicability in the automotive domain, we ensure compliance with relevant safety standards, such as ISO 26262 for functional safety, providing a reliable foundation for safety-critical applications, and adherence to stringent security standards, implementing robust cybersecurity measures in accordance with ISO/SAE 21434 and UNECE WP.29.

A key aspect of the project is the design of a modular and extensible architecture, allowing easy integration and customization for various automotive applications, ensuring flexibility and scalability. Additionally, the project focuses on end-to-end optimization throughout the stack to achieve maximum efficiency and performance.

The project is guided by several key principles:

Common Stack & Industry-Wide Collaboration

The Safe Open Vehicle Core project aims to create a common full-stack software runtime that serves as the best possible solution for shared industry problems. By achieving efficiencies through a single, joint solution instead of multiple specific ones, the project addresses non-differentiating scopes and ensures that the scope is significant for multiple parties, rather than catering to singular interests.

Speed

The project accelerates development by working in open source, focusing on code-centric and iterative methods rather than primarily on textual specifications.

Abstraction and Extensibility

The project emphasizes the decoupling of hardware (HW) and software (SW), ensuring that applications do not depend on specific hardware characteristics. It establishes predetermined breaking points to enable the exchange of implementations of individual layers, aspects, and components, such as ECU communication protocols. Additionally, it focuses on enabling project-specific extensions of the stack, providing a flexible framework that can be customized and extended to meet the specific requirements of different projects.

Quality & Efficiency

The Safe Open Vehicle Core project aims for a lean, no-frills solution to lower complexity and increase efficiency. The project strives for support of modern implementation paradigms and languages like Rust or C++, uses human-readable specification languages that are domain and target-driven, and avoids complex exchange data formats. It seeks the optimal balance between modularity and resource consumption and follows state-of-the-art processes to develop safe and secure software in an open-source environment.

By achieving these goals and adhering to these key principles, the Eclipse Safe Open Vehicle Core project aims to deliver a versatile and secure core stack that supports the evolving needs of the automotive industry and accelerates the adoption of software-defined vehicle technologies.

Eclipse TRAICE (Tracking Real-time AI Carbon Emission)

Tuesday, September 17, 2024 - 11:20 by Matthew Khouzam

Horizontal Federated Machine Learning (H-FML)

Federated learning enables AI/ML model training at the network nodes by exploiting large-scale distributed data and compute resources. Federated learning also restricts explicit data sharing so that the confidentiality and privacy associated with the use case are preserved. FL differs from classical AI/ML in four main domains: data privacy (no end-user data leaves the device, worker, node, or client), data distribution (data may be IID or non-IID), continual learning (the communication time between client and central server may be too long to provide a satisfactory user experience), and aggregation of data (some privacy notions and rules are violated when user data aggregation occurs in the central server) [4,5].

Federated learning requires $\mathcal{K}$ devices to iteratively upload and aggregate parameters to train the global model [6, 7]. In such a scenario, distributed devices (mobile devices, workers) collaborate to train a common AI/ML model under the coordination of an access point (AP) or parameter server.

H-FML proceeds over multiple communication rounds (encapsulated in an upload cost and a download cost) and computation rounds. In each training round, a five-stage process is repeated until model convergence.

-   In step 1, FML starts when a training task is created by the server (coordinator), which initializes the parameters of the global model and sends them to each worker (client or participant), incurring the first download cost.

-   In step 2, each worker k in K (participants) independently trains on its local dataset to minimize the loss on its local data distribution D_k of size n_k.

-   In step 3, each worker submits its local model to the server (coordinator), incurring the upload cost.

-   In step 4, the server consolidates the global model by aggregating the local models received from the workers.

-   In step 5, the global model is dispatched back to the workers, incurring the second download cost. This updated global model is used by each worker in the next training round.

To achieve the goal in step 2, FML trains a global model that minimizes the average loss across parties.
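This objective is commonly written in standard FedAvg-style notation, consistent with the $D_k$ and $n_k$ defined above (the loss symbol $\ell$ is an assumption of this sketch):

```latex
\min_{w} \; F(w) \;=\; \sum_{k=1}^{K} \frac{n_k}{n} \, F_k(w),
\qquad
F_k(w) \;=\; \frac{1}{n_k} \sum_{x_i \in D_k} \ell(w; x_i),
\qquad
n \;=\; \sum_{k=1}^{K} n_k
```

That is, the global loss $F(w)$ is the average of the workers' local losses $F_k(w)$, weighted by each worker's share of the total data.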

In subsequent training iterations (steps 2 through 5 constitute a single round of FL), the process is repeated until the training loss or the model converges, a time limit is exceeded, or the maximum number of iterations is reached.
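The server-side consolidation in step 4 can be sketched as a weighted parameter average. This is a plain-Python illustration of FedAvg-style aggregation under the weighting described above, not TRAICE or Flower code:

```python
# Sketch of the server-side aggregation of step 4: a weighted average of
# worker models (FedAvg-style). Plain Python, illustrative only.

def aggregate(local_models, sample_counts):
    """Weight each worker's parameters by its local dataset size n_k."""
    total = sum(sample_counts)
    dim = len(local_models[0])
    global_model = [0.0] * dim
    for params, n_k in zip(local_models, sample_counts):
        for i, p in enumerate(params):
            global_model[i] += (n_k / total) * p
    return global_model

# Steps 2/3: three workers return locally trained parameters and their n_k.
local_models  = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sample_counts = [10, 10, 20]

# Step 4: the server consolidates the global model.
print(aggregate(local_models, sample_counts))  # [3.5, 4.5]
```

The worker holding 20 samples contributes half the weight, which is why the result is pulled toward its parameters; in step 5 this aggregate would be dispatched back to all workers for the next round.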

ECLIPSE TRAICE

Eclipse TRAICE is proposed as interactive visualization software designed for real-time monitoring of the carbon emissions of AI/ML systems. It is designed to handle horizontal federated machine learning (H-FML) algorithms and enables users to track the carbon footprint. TRAICE allows users to simultaneously observe the environmental impact and the effectiveness of model training.

TRAICE is built from three main components:

-   A Python library,

-   A server, and

-   A client application

The Python library binds to the code run by the nodes (workers) of an H-FML system via the TRAICE package and collects the real-time data generated by the nodes. The library then sends those metrics to the server. The library uses the codecarbon library [3] to measure the energy consumed and the carbon emitted by the nodes.

The server aggregates the generated data and computes several metrics on the real-time training session of participant workers.

The client is a user's visualization tool, allowing the users to monitor and interact with the framework.

The components communicate via WebSockets, ensuring a real-time bidirectional communication between the workers and the servers, as well as between the server and client.
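As a sketch of the worker-to-server hop, the payload a worker's library might push over its WebSocket could look like the following. The field names are illustrative assumptions, not the actual TRAICE wire format:

```python
# Sketch of a worker -> server metrics message for the WebSocket channel
# described above. The field names are assumptions, not the TRAICE protocol.

import json
import time

def build_metrics_message(worker_id, round_num, energy_kwh, co2_kg):
    """Serialize one training-round measurement for transmission."""
    return json.dumps({
        "worker_id": worker_id,
        "round": round_num,
        "energy_kwh": energy_kwh,   # e.g. as reported by codecarbon
        "co2_kg": co2_kg,
        "timestamp": time.time(),
    })

msg = build_metrics_message("worker-0", 3, 0.042, 0.017)
decoded = json.loads(msg)
print(decoded["worker_id"], decoded["round"])  # worker-0 3
```

On the wire, each side would exchange such payloads over a persistent WebSocket connection (for example with a library such as `websockets`), giving the bidirectional, real-time channel the proposal describes.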

System Requirement

Eclipse TRAICE requires a Docker engine installed and running on the host machine. Python must be installed to run the scripts and components; Python 3.10.x is recommended. The following requirements are also mandatory:

-   The library must be installed on all worker nodes of the H-FML. The workers can run anywhere (as per user preference) but must be able to communicate with the server.

-   The server can be deployed anywhere but must expose port 3000 and be accessible by all services. Appropriate access should be enabled through proper routing and firewall configuration.

-   The database can be deployed anywhere and only needs to communicate with the server. Ideally it should be on the same subnet as the server to minimize latency.

-   The client can be installed anywhere but must communicate with the server. In a typical client-server architecture, the client sends requests to the server and receives responses in return. The process is activated via socket connections.

ECLIPSE TRAICE System Components

The Eclipse TRAICE system is deployed in three parts. Each part runs as a separate Docker container.

1)  TRAICE-frontend: the TRAICE client application for visualizing the emission graphs in real time.

2)  TRAICE-backend: the server responsible for receiving, aggregating, and sending the carbon emission data of the workers participating in the training to the TRAICE client.

3)  TRAICE-database: an SQL database used for storing training data.

Docker Compose facilitates the communication and network setup between the containers, ensuring they operate seamlessly as a unified system. The system components each expose different ports for communication.

Here are the exposed ports:

-   Server: port :3000

-   Client: port :4200

-   Database: port :8001 (the Docker container exposes port 8001 and redirects it to port 5432 inside the container, where the PostgreSQL service listens).

Step-by-step installation of Eclipse TRAICE with Docker Compose

Three distinct bidirectional communications have been identified: the TRAICE library communicates bidirectionally with the TRAICE server; the TRAICE server communicates bidirectionally with the client; and the TRAICE server communicates bidirectionally with the database.

The installation steps are as follows:

-   Clone the repository and navigate to the project root.

-   Build the application images with Docker Compose: docker compose build

-   Run the application: docker compose up -d

Access the frontend by navigating to http://localhost:4200 in your local web browser. To stop all containers related to TRAICE, run: docker compose down

Installing Eclipse TRAICE library

To build the library from sources, follow the steps below to obtain the ".whl" file:

cd library/traice

pip install --upgrade setuptools

pip install --upgrade build

python -m build

The ".whl" file will be created in the "dist" folder; you can then install the package with "pip install <filename>.whl" (e.g. "pip install Traice-0.0.1-py3-none-any.whl").

Library Usage

The "example" folder contains usage examples. The library exposes a class "TraiceClient" that handles all the tracking and communication logic. Once this is set up, you should be able to see information about the training and its energy usage in the frontend!
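The TraiceClient API is not reproduced here; as a purely hypothetical illustration of the pattern such a class could follow (record per-round metrics, then flush them to the server), consider this self-contained stand-in. None of these method names are the real TRAICE API:

```python
# Hypothetical stand-in illustrating the tracking pattern a TraiceClient-style
# class could follow. None of these names are the real TRAICE API.

class TrackingClientSketch:
    def __init__(self, worker_id):
        self.worker_id = worker_id
        self.buffer = []

    def record(self, round_num, co2_kg):
        """Queue one measurement for this worker."""
        self.buffer.append({"worker": self.worker_id,
                            "round": round_num,
                            "co2_kg": co2_kg})

    def flush(self):
        """Hand queued measurements to the server and clear the buffer."""
        sent, self.buffer = self.buffer, []
        return sent

client = TrackingClientSketch("worker-0")
for rnd in range(3):
    # ... local training for one round would happen here ...
    client.record(rnd, co2_kg=0.01 * (rnd + 1))

print(len(client.flush()))  # 3
print(client.buffer)        # [] -- buffer cleared after flushing
```

In the real library, the flush step corresponds to the WebSocket transmission to the TRAICE backend described earlier.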

-   Typical example: cifar10

This example shows federated training on the CIFAR-10 dataset with 3 workers and carbon emission tracking using TRAICE. This code is based on the Flower federated learning library example available here:

[Flower Quickstart PyTorch](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch)

To use it, first install the dependencies with "pip install -r requirements.txt" (if you don't have TRAICE installed, please follow the instructions given above to build and install it).

# 1. Start federated learning server

python server.py

# 2. Start the TRAICE server (follow the Docker Compose instructions above)

# 3. Start workers

python worker.py --node-id 0

python worker.py --node-id 1

python worker.py --node-id 2

You can now open the TRAICE client and access the visualization.

References

1.  K. Ahmad, A. Jafar, and K. Aljoumaa, "Customer churn prediction in telecom using machine learning in big data platform," Journal of Big Data, vol. 6, no. 1, pp. 1-24, 2019.

2.  L. Bariah, H. Zou, Q. Zhao, B. Mouhouche, F. Bader, and M. Debbah, "Understanding telecom language through large language models," in IEEE Global Communications Conference (GLOBECOM), 2023, pp. 6542-6547.

3.  S. Luccioni, "CodeCarbon: Track and reduce CO2 emissions from your computing," https://github.com/mlco2/codecarbon.

4.  Y. Chen et al., "Federated learning for privacy-preserving AI," Communications of the ACM, vol. 63, no. 12, pp. 33-36, 2020.

5.  Q. Yang et al., "Federated machine learning: concept and applications," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 20, no. 2, pp. 12-19, 2019.

6.  D. Ye et al., "Federated learning in vehicular edge computing: A selective model aggregation approach," IEEE Access, vol. 8, pp. 23920-23935, 2020.

7.  Zhilu Chen and Xinming Huang, "End-to-end learning for lane keeping of self-driving cars," IEEE Intelligent Vehicles Symposium (IV), 2017, pp. 1856-1860.