Proposals

Eclipse AutoSD Automotive Integration Project

Wednesday, July 30, 2025 - 06:31 by Leonardo Rossetti

Eclipse AutoSD Automotive Integration Project uses an AutoSD image, built and tailored for its community, to run and test both Eclipse SDV projects and blueprints.

Eclipse AutoSD Automotive Integration Project offers a foundation for projects to build, run and test their stack, components and services, including blueprints.

The image setup will encourage projects to consider how their components or blueprints would work in a Mixed-Criticality Orchestration[0] architecture.

Several upstream tools from the CentOS Automotive SIG can be used, from building images[1] to performance testing[2]. This integration project could also provide its own set of specialized tools for Eclipse SDV, for example to ease the process of deploying blueprints into this reference AutoSD image.

This integration project inherits support for several platforms (i.e., boards)[3], including virtual ones such as AWS and Azure, which makes it possible to run blueprint tests on those cloud providers.

[0] - https://sigs.centos.org/automotive/features-and-concepts/con_mixed-criticality/
[1] - https://gitlab.com/CentOS/automotive/src/automotive-image-builder
[2] - https://sigs.centos.org/automotive/performance_monitoring_with_pcp/#arcaflow-workflow
[3] - https://sigs.centos.org/automotive/provisioning/

Eclipse Open Vehicle API

Friday, July 25, 2025 - 07:13 by Thomas Pfleiderer

The Eclipse Open Vehicle API contains tools and a runtime to create a vehicle abstraction interface for signal- and event-driven functions. 

  • Component-based
  • Transfer existing signal-based ECUs to HPC
  • Implement new signal- and event-based vehicle functions
  • Vehicle independent implementation (vehicle abstraction)
  • Multi-vendor – open for play-store approach
  • Standardized interface for functions
  • Automate as much as possible – reduce coding
  • Allow HIL and SIL
  • Safety aspects for use with chassis and ADAS functions

PoC implementations for demonstration purposes

Eclipse stratOS

Friday, July 25, 2025 - 03:01 by Ignacio Lacalle

In the ongoing quest to develop a comprehensive Meta-OS for the continuum, the landscape is marked by existing concepts and frameworks aimed at unifying edge, cloud, and IoT resources.

Compatibility with diverse container management frameworks

In a computing world where virtualization of workloads is imperative, some companies still rely on Docker-based management, while others have shifted to cloud-native environments, mostly relying on Kubernetes-only setups. Eclipse stratOS, however, stands out by leveraging existing concepts to significantly extend the current state of the art. It advances orchestration and management capabilities beyond what is currently available, offering a more flexible, robust, scalable, and efficient solution for the modern Cloud-Edge-IoT landscape. It can execute workloads in both kinds of environment, and it remains open to future runtime integrations (e.g., containerd, Wasm-based).

Seem centralized, act distributed 

Eclipse stratOS delivers genuine vendor agnosticism. Using a single interface, all resources are presented to the user in the same way. They can be monitored and manipulated even though they reside in different networks, are owned by different companies, or have disparate characteristics. Nonetheless, Eclipse stratOS avoids centralization: access to the information is ubiquitous, and orchestration decisions are taken in a decentralized manner. Thanks to balancing algorithms, requests for workload commissioning (and other key processes) are handled at different spots, avoiding single-point-of-failure effects.

Standard-based communication and data management

The Meta-OS relies on de-facto standard communication technologies, such as HTTP REST APIs and OpenAPI documentation, established data formats such as NGSI-LD, and tunnelling based on Fully Qualified Domain Names for the required networking (employing WireGuard VPNs). Also, having grown out of a research project, it aligns with impactful ongoing initiatives, such as the TF3 Architecture from EUCEI.
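To make the NGSI-LD approach concrete, here is a minimal sketch, using only the JDK's HTTP client, of the kind of entity a stratOS domain might publish to an Orion-LD Context Broker. The entity attributes (cpuCores), the entity type name, and the broker address are illustrative assumptions, not the project's actual data model; only the /ngsi-ld/v1/entities path follows the NGSI-LD API specification.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class NgsiLdExample {
    // Build a minimal NGSI-LD entity describing a computing resource.
    // The attribute names here (cpuCores) are illustrative only.
    static String infrastructureElementEntity(String id, int cpuCores) {
        return """
            {
              "id": "urn:ngsi-ld:InfrastructureElement:%s",
              "type": "InfrastructureElement",
              "cpuCores": { "type": "Property", "value": %d },
              "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
            }""".formatted(id, cpuCores);
    }

    // Build (but do not send) the POST that would register the entity
    // with a Context Broker at a hypothetical local address.
    static HttpRequest registrationRequest(String brokerBase, String entityJson) {
        return HttpRequest.newBuilder()
                .uri(URI.create(brokerBase + "/ngsi-ld/v1/entities"))
                .header("Content-Type", "application/ld+json")
                .POST(HttpRequest.BodyPublishers.ofString(entityJson))
                .build();
    }

    public static void main(String[] args) {
        String entity = infrastructureElementEntity("edge-node-01", 8);
        HttpRequest req = registrationRequest("http://localhost:1026", entity);
        System.out.println(req.uri());
    }
}
```

The broker never needs to know the Java types involved: the NGSI-LD JSON payload is the shared contract between domains.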

What does Eclipse stratOS provide?

It provides a series of components that are installed over a baseline infrastructure to be provided by the adopter (typically Kubernetes clusters, virtual machines, native systems or physical machines). 

Federation of domains

Infrastructure Elements (IEs) are the fundamental computing units of Eclipse stratOS, providing a unified runtime environment. A group of IEs forms a domain, the smallest administrative entity, sharing core services. Domains connect to form an Eclipse stratOS continuum, supporting a federated orchestration model facilitated by the Orion-LD Context Broker, which implements the NGSI-LD interface. This structure ensures peer-to-peer collaboration among domains, enabling autonomous, decentralized decision-making and fine-grained resource control.

As the Meta-OS builds on these concepts of Domains and Infrastructure Elements, the components to be installed differ depending on the topology (which is decided by the IT administrator of the adopting entity).

Deployment of containerized workloads

Eclipse stratOS introduces a two-layer orchestration model separating decision-making from execution. The High-Level Orchestrator (HLO), a custom Python-based framework that employs Redpanda as its messaging bus, acts as the decision engine, using global resource awareness to optimize workload placement across domains. It comprises several modules: the HLO Frontend, HLO Data Aggregator, HLO Allocator, HLO Engine, and HLO Local Allocation Manager. The Low-Level Orchestrator (LLO), which materializes as Go-based Kubernetes operators, handles local enforcement, translating HLO decisions into actionable commands on specific resources. This hierarchical design ensures scalability and adaptability without disrupting other domains.

Continuous observability of computing elements

To unify diverse and distributed resources, Eclipse stratOS uses a Smart Data Model from project aerOS to abstract and pool them for dynamic workload execution. It gathers real-time data on resource capabilities and availability using Prometheus or customized scripts running on all Infrastructure Elements. This comprehensive view enables the Meta-OS to make efficient placement decisions and support proactive workload migration.

Unified management portal

Eclipse stratOS is accessed via a modern web-based GUI that connects to the Meta-OS backend. It allows users to observe the computing resources and the deployed services, to commission workloads (specifying their requirements), and to compare the performance of those elements against relevant (edge, IoT) benchmarks.

Move decision and intelligence to the edge

Nodes are no longer passive elements that take orders and push monitoring/logging data. In Eclipse stratOS, nodes are (depending on their capabilities) able to trigger local orchestration notifications (e.g., to offload workloads when saturated), scale horizontally, detect anomalies in data or in their own behaviour, and adapt their sampling frequency to the circumstances.
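A toy sketch of this node-local decision-making: the saturation threshold, the notification format, and the method names below are illustrative assumptions, not part of the actual stratOS protocol. The point is only that the decision originates on the node itself rather than at a central orchestrator.

```java
import java.util.Optional;

public class EdgeNodeSketch {
    // Hypothetical saturation threshold above which a node asks to offload.
    static final double SATURATION_THRESHOLD = 0.85;

    // A node inspects its own CPU load and, if saturated, emits a local
    // orchestration notification instead of waiting for a central order.
    static Optional<String> localOrchestrationCheck(String nodeId, double cpuLoad) {
        if (cpuLoad > SATURATION_THRESHOLD) {
            return Optional.of("OFFLOAD_REQUEST:" + nodeId);
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(localOrchestrationCheck("edge-01", 0.92)); // saturated: emits request
        System.out.println(localOrchestrationCheck("edge-02", 0.40)); // healthy: stays quiet
    }
}
```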

Trustworthiness in the continuum

Eclipse stratOS integrates cybersecurity services for robust authentication, authorization, and access control, ensuring secure access to domain resources through role-based policies and validated identity registries (using LDAP). Its trust management framework assesses the reliability of Infrastructure Elements via a Trust Agent and Trust Manager, using metrics like behavior, health, and reputation. Trust and reputation are key, with mechanisms to ensure message immutability and trust-based resource selection (via the usage of IOTA Tangle). Security is handled holistically, combining centralized IAM (Keycloak, KrakenD) with encrypted communications (TLS, VPN) and fine-grained data access control. 

In addition, it includes the option of using Shapley weights to reinforce explainability of compatible ML models used over the Meta-OS.

Custom functions definition and deployments

Eclipse stratOS embeds a customized version of OpenFaaS to allow the execution of one-shot, serverless applications on demand. Once installed (typically in the most cloud-like infrastructure of the continuum), the adopter may code and upload their own applications based on pre-defined templates, which can asynchronously trigger processes such as ETA, data curation, or statistics visualization in the natively equipped Grafana.

Eclipse SOKRATES

Wednesday, July 2, 2025 - 11:18 by Thierry Fraudet

Eclipse Sokrates is an open source project providing modular, high-quality training material about Open Source and InnerSource, from basic to advanced concepts. Eclipse Sokrates stands for Sharing OSS Knowledge Resources And Training for Education in Software. It is built with a "training-as-Code" philosophy, using AsciiDoc for ease of maintenance, modularity, and collaboration.

This modular training is provided as a webinar series that aims to foster a strong foundation in open source best practices, legal compliance, security considerations, and community engagement, all within the collaborative spirit of the Eclipse Foundation. Each module includes learning objectives, training materials and comprehensive and extensive speaker notes, so anyone can use the material out-of-the-box to give a training session or record a webinar.

The training modules can be forked by any organisation to tailor them for their particular environment.

Eclipse OpenSOVD

Monday, May 26, 2025 - 03:16 by Thilo Schmitt

Eclipse OpenSOVD provides an open source implementation of the Service-Oriented Vehicle Diagnostics (SOVD) standard, as defined in ISO 17978. The project delivers a modular, standards-compliant software stack that enables secure and efficient access to vehicle diagnostics over service-oriented architectures. By offering an open and community-driven implementation, Eclipse OpenSOVD serves as a foundation for developers, OEMs, and tool vendors to build, test, and integrate SOVD-based solutions. The project will hence facilitate adoption and ensure industry coherence with the standard.

Eclipse OpenSOVD complements and integrates with the Eclipse S-CORE project by providing an open implementation of the SOVD protocol that can be used for diagnostics and service orchestration within the S-CORE environment. This integration ensures that diagnostic capabilities are natively supported in SDV architectures, enabling developers and OEMs to build more robust, maintainable, and standards-compliant vehicle software stacks.

Key components include:

  • SOVD Gateway: REST/HTTP API endpoints for diagnostics, logging, and software updates.
  • Protocol Adapters: Bridging modern HPCs (e.g., AUTOSAR Adaptive) and legacy ECUs (e.g., UDS-based systems).
  • Diagnostic Manager: Service orchestration for fault reset, parameter adjustments, and bulk data transfers.
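
Since SOVD exposes diagnostics over REST/HTTP, a client can be sketched with only the JDK's HTTP client. The exact resource paths are defined by ISO 17978; the ones below follow the commonly described SOVD pattern of component-scoped `data` and `faults` collections, and the gateway address is a hypothetical placeholder.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class SovdClientSketch {
    private final String base; // e.g. "https://gateway.local/sovd/v1" (hypothetical)

    SovdClientSketch(String base) { this.base = base; }

    // GET request for one of a component's data resources, following the
    // SOVD-style /components/{id}/data/{dataId} pattern (illustrative).
    HttpRequest readData(String componentId, String dataId) {
        return HttpRequest.newBuilder()
                .uri(URI.create("%s/components/%s/data/%s".formatted(base, componentId, dataId)))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    // GET request listing a component's currently stored faults.
    HttpRequest readFaults(String componentId) {
        return HttpRequest.newBuilder()
                .uri(URI.create("%s/components/%s/faults".formatted(base, componentId)))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        SovdClientSketch client = new SovdClientSketch("https://gateway.local/sovd/v1");
        System.out.println(client.readData("engine", "coolant-temperature").uri());
        System.out.println(client.readFaults("engine").uri());
    }
}
```

Note how the same plain-HTTP style reaches both modern HPCs and, through the protocol adapters, legacy UDS-based ECUs.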

Future-Proofing:

  • Semantic Interoperability: JSON schema extensions for machine-readable diagnostics, enabling AI-driven analysis and cross-domain workflows.
  • Edge AI/ML Readiness: Modular design to support lightweight ML models (e.g., predictive fault detection) via collaboration with projects like Eclipse Edge Native.
  • Support for Extended Vehicle logging and publish/subscribe mechanisms.

Eclipse DataGrid

Wednesday, May 21, 2025 - 11:20 by Florian Habermann

The Eclipse DataGrid project delivers a high-performance, distributed, in-memory data processing layer for Java applications. Built upon the robust foundation of EclipseStore, Eclipse DataGrid extends its capabilities to enable seamless replication and distribution of native Java object graphs across multiple JVMs in a cluster. This innovative approach empowers developers to leverage the full potential of the Java language and JVM, eliminating the impedance mismatch and performance bottlenecks associated with traditional data solutions.

Eclipse DataGrid can also be seamlessly integrated into existing database application infrastructures, acting as an in-memory caching, searching and data processing layer to significantly improve performance, reduce the load on primary databases, and lower overall database infrastructure costs, including potential savings on database license fees. The target group includes both Java enterprise and cloud-native application builders.

 

1. Project Goals

  • Provide Java-Native In-Memory Data Processing: To offer a distributed Java in-memory data processing layer that is deeply integrated with the Java language, utilizing core Java features and the native Java object model.
  • Eliminate Impedance Mismatch: To remove the need for complex and inefficient mappings between Java objects and external data formats or structures.
  • Use JVM Performance for In-Memory Data Processing: To enable applications to achieve microsecond-level look-up, response, and query times by leveraging the performance of the JVM’s runtime, memory management, and JIT compiler.
  • Simplify Distributed Java Development: To provide a straightforward way for Java developers to work with distributed Java object graphs, data and clusters, using familiar Java concepts and tools.
  • Offer ACID Compliance in a Distributed Environment: To ensure data consistency and reliability in clustered deployments by using EclipseStore's ACID properties.
  • Optimize Database Performance and Costs: To enable the use of Eclipse DataGrid as a caching, searching, and processing layer, reducing the load on underlying databases and lowering infrastructure expenses.
  • Support for any Programming Language: A REST interface enables access to Eclipse DataGrid from any programming language.

 

 

2. How the Project Works

Eclipse DataGrid comprises several key components that work together to provide a distributed Java in-memory data processing solution:

  • Java Object Graph Model: Unlike traditional key-value-based data grids, Eclipse DataGrid preserves Java’s object-oriented paradigm, enabling developers to work with complex object graphs without sacrificing performance or simplicity. Eclipse DataGrid replicates this graph across the cluster, allowing distributed access to the data. The Java object graph is used as an in-memory data storage system at runtime that enables execution of CRUD operations and a rollback mechanism.
  • Java Streams API and Lucene: Eclipse DataGrid leverages Java's Streams API for efficient data searching, filtering, and manipulation. It will also integrate Lucene for advanced full-text search capabilities.
  • Indexing: A special HashMap enables indexing and fully-automated lazy-loading to minimize I/O traffic.
  • EclipseStore Integration: The integration of EclipseStore provides ACID-compliant persistence to all Eclipse DataGrid nodes.
  • Replicate Java Object Graphs: Eclipse DataGrid extends EclipseStore by providing a specific storage function that can distribute the storage process of an object graph across multiple JVMs within a cluster via event streaming. The standard consistency model is eventual consistency. In a later version, a configurable strong consistency model will be provided.
  • Kubernetes Integration: A dedicated Helm chart will be provided to facilitate the creation, setup, and provisioning of a cluster environment on Kubernetes. This streamlines the deployment and management of Eclipse DataGrid in modern, cloud-native environments.
  • Management GUI: A Java application with a graphical user interface will be developed to simplify cluster operations. This GUI will enable users to:
    • Provision and set up Eclipse DataGrid clusters.
    • Perform ongoing maintenance of the cluster.
    • Monitor cluster health and performance using observability tools (e.g. Grafana, Prometheus)
    • Troubleshoot issues that may arise.

 

3. Project Components and Features

Eclipse DataGrid will provide the following key components and features:

  • Distributed Store Function: A core extension to EclipseStore that enables the distribution of data across multiple JVMs in a cluster.
  • Eventual Consistency: The standard consistency model is eventual consistency.
  • Kubernetes Cluster Management: Helm chart for automated cluster provisioning and management on Kubernetes.
  • Graphical Management Interface: A user-friendly Java application for cluster setup, maintenance, monitoring, and troubleshooting.
  • Native Java Object Graph Replication: The ability to replicate and distribute native Java object graphs across a cluster.
  • ACID Compliance: Distributed transactions and data consistency, building upon EclipseStore's ACID properties.
  • High-Performance Data Access: Microsecond-level read and write access to distributed data.
  • Java Streams API Integration: Seamless integration with Java's Streams API for efficient data manipulation.
  • Lucene Integration: Full-text search capabilities for complex data querying.
  • Secure Serialization: Protection against deserialization attacks through the use of Eclipse Serializer.
  • Flexible Data Modeling: Users can define their data structures using any Java class, allowing for a fully customized and domain-driven approach.
  • In-Memory Performance: Leveraging JVM memory management for optimal speed.
  • JIT Compiler Optimization: Benefiting from the JVM's JIT compiler for runtime performance enhancements.
  • Database Optimization: Ability to serve as an in-memory caching, searching and processing layer for existing database applications.

 

 

4. Core Java Features Utilized

Eclipse DataGrid is designed to exploit the full power of the Java language and the JVM. It leverages these core Java features:

  • Java Object Model: The project works directly with Java's native object model, eliminating the need for object-relational mapping (ORM) or other impedance-matching techniques.
  • Java Memory Management: Eclipse DataGrid relies on the JVM's efficient memory management, including garbage collection, to handle large volumes of in-memory data.
  • Java Streams API: The project utilizes the Java Streams API for efficient and expressive data manipulation, including filtering, mapping, and aggregation.
  • Concurrency Utilities: Java's concurrency utilities will be used to manage distributed operations, ensuring thread safety and optimal performance.
  • JVM Internals: The project is designed to work efficiently with the JVM, taking advantage of its architecture and optimizations such as Virtual Threads.
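
As a small, self-contained illustration of the Streams-based manipulation described above (independent of DataGrid itself, with hypothetical types), here is the style of filtering and aggregation that would run directly over an in-memory object graph:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamsAggregation {
    record Order(String customer, double amount) {}

    // Aggregate total order value per customer with the Streams API:
    // the same style of query a grid node would run over its local graph.
    static Map<String, Double> totalsByCustomer(List<Order> orders) {
        return orders.stream()
                .collect(Collectors.groupingBy(
                        Order::customer,
                        Collectors.summingDouble(Order::amount)));
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(
                new Order("ACME", 120.0),
                new Order("ACME", 80.0),
                new Order("Globex", 42.0));
        System.out.println(totalsByCustomer(orders));
    }
}
```

No serialization or result-set mapping is involved; the aggregation runs over live objects in JVM memory.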

 

5. Use Cases

Eclipse DataGrid is ideal for a wide range of use cases where high-performance, low-latency, and scalable data access is critical:

  • High-Performance Caching: Dramatically improve application performance by caching frequently accessed data in a distributed in-memory grid, reducing the load on the primary database.
  • Real-Time Analytics: Enable real-time data analysis and decision-making by providing microsecond-level access to data for complex queries and aggregations.
  • Scalable Web Applications: Build highly scalable and responsive web applications by distributing session data and application state across a cluster.
  • Microservices Architectures: Facilitate the development of microservices-based applications by providing a shared, distributed data layer that can be accessed by multiple services.
  • Complex Event Processing: Process and analyze high-velocity data streams in real-time for applications such as fraud detection, algorithmic trading, and IoT data analysis.
  • Distributed Graph Processing: Efficiently store and process graph data for applications such as social network analysis, recommendation engines, and knowledge graphs.
  • Online Gaming: Power real-time, multiplayer online games with low-latency data access and distributed state management.
  • E-commerce Applications: Handle high-volume transactions, personalize shopping experiences, and manage product catalogs with extreme speed and scalability.
  • Financial Services: Support high-frequency trading, risk management, and fraud detection with real-time data access and processing.
  • Healthcare Applications: Enable fast access to patient data, support real-time monitoring, and facilitate data-intensive research.
  • Database Optimization and Cost Reduction: Offload data processing from primary databases, reducing their workload and enabling the consolidation of multiple database types, leading to lower infrastructure costs and license fees.

  

6. Benefits

Eclipse DataGrid offers numerous benefits to Java developers:

  • Unparalleled Performance: Microsecond-level data access for demanding, high-performance applications.
  • Simplified Development: Develop distributed applications using familiar Java concepts and the native Java object model.
  • Reduced Complexity: Eliminate the need for complex data mapping and integration with external data stores.
  • Increased Scalability: Easily scale applications horizontally by adding more nodes to the cluster.
  • Improved Reliability: Ensure data consistency and availability with ACID-compliant distributed transactions.
  • Lower Infrastructure Costs: Optimize resource utilization and potentially reduce the need for multiple specialized databases.
  • Faster Time to Market: Accelerate application development by providing a ready-to-use, high-performance data grid solution.
  • Full Java Power: Ability to implement any complex business logic.
  • Unified Data Layer: Handle various data needs (key-value, documents, graph-like structures) within a single, consistent system.
  • Database Efficiency: Improved performance and reduced load on primary databases.

 

7. Conclusion

Eclipse DataGrid represents a significant advancement in Java application development, providing a powerful and intuitive way to build high-performance, distributed, in-memory data solutions. By leveraging the core strengths of Java and the JVM, Eclipse DataGrid empowers developers to create a new generation of data-intensive applications with unparalleled performance, scalability, and reliability. We believe that Eclipse DataGrid will become a valuable asset within the Eclipse ecosystem, driving innovation and growth in the Java community, and invite the Eclipse community to collaborate on shaping Eclipse DataGrid into a cornerstone of modern data processing with Java.

Eclipse Piranha Cloud

Monday, May 19, 2025 - 16:28 by Arjan Tijms

Traditional Jakarta EE implementations are application servers: software products mostly intended to be installed. They have the ability to deploy and undeploy one or more applications, and typically contain support for monitoring, clustering, and many additional features. GlassFish is one example of such an implementation.

Eclipse Piranha Cloud intends to provide an implementation of Jakarta EE without being an application server. It focuses on being highly composable, both with regard to the Jakarta EE implementation components being used and with regard to the features that are provided.

A special focus is on embedded use, where a runtime composition can be used that omits functionality not needed in an embedded setting. For example, Eclipse Piranha Cloud could be used to just render a Jakarta Faces page into a String, so such a composition would not need an HTTP server. The code embedding such an Eclipse Piranha Cloud instance would only include the Mojarra library, and the embedding code would programmatically create and pass in an object representing an HttpServletRequest.

 

Eclipse Plug-in for Copilot

Thursday, May 15, 2025 - 00:42 by Rome Li

Eclipse Plug-in for Copilot provides code completion suggestions, chats, and deals with coding tasks using agents. It's a client app that talks to the GitHub online services. The focus of the app is UI integrations to enable GitHub Copilot features in Eclipse IDE. GitHub online services are not part of the scope.

Jakarta Portlet Bridge

Monday, March 24, 2025 - 16:06 by Neil Griffin

The Jakarta Portlet Bridge project is responsible for defining the Specification and API, which enables the development of Jakarta Faces web applications that can be deployed within a portlet container running inside a Jakarta EE servlet container or application server. The Portlet Bridge Specification defines requirements for mapping the portlet lifecycle (HEADER_PHASE, EVENT_PHASE, ACTION_PHASE, RESOURCE_PHASE, and RENDER_PHASE) to the Faces lifecycle (RESTORE_VIEW, APPLY_REQUEST_VALUES, PROCESS_VALIDATIONS, UPDATE_MODEL_VALUES, INVOKE_APPLICATION, and RENDER_RESPONSE). The goal is to provide Jakarta Faces developers with the ability to deploy their web applications in a portlet container with little to no modification.

Jakarta Portlet

Monday, March 24, 2025 - 12:46 by Neil Griffin

The Jakarta Portlet project is responsible for defining the Specification and API, which enables the development of modular, server-side components that can be deployed within a portlet container running inside a Jakarta EE servlet container or application server. 

The Portlet Specification defines requirements for the portlet container, including an execution lifecycle and phases such as HEADER_PHASE, EVENT_PHASE, ACTION_PHASE, RESOURCE_PHASE, and RENDER_PHASE. The portlet lifecycle follows a request/response design similar to the servlet lifecycle, where each portlet has the opportunity to participate in the lifecycle and contribute output to the response. The Specification also specifies requirements for a client-side JavaScript API, including a client-side facility called the Portlet Hub. The Portlet Java API, Javadoc, and JSDoc define the contract for a portlet application to interact with the portlet container. A portlet application consists of one or more portlets that are packaged within a single Jakarta EE Web Application Archive (WAR) artifact.

The Specification also defines minimal requirements for a portal, which is a related technology responsible for aggregating output from a portlet container into a cohesive view, typically a complete HTML document rendered in a browser. Portals typically provide other features, such as page definition, portlet location on pages, and user management.