Proposals

Eclipse KuDECO

Monday, July 21, 2025 - 20:05 by Rute C. Sofia

Eclipse KuDECO is an open-source Kubernetes-oriented framework designed to enable intelligent, adaptive orchestration of containerised applications across highly dynamic and distributed edge-cloud environments. Unlike traditional cloud-centric orchestrators, Eclipse KuDECO introduces cognitive, decentralised decision-making capabilities that align with the operational demands of modern industrial and cyber-physical systems.

As edge computing becomes foundational to digital transformation across competitiveness domains such as Manufacturing and Smart Cities, Eclipse KuDECO addresses the limitations of centralised orchestration platforms like Kubernetes when operating at the network edge and across a heterogeneous, multi-tenant IoT-Edge-Cloud continuum. KuDECO assumes that nodes may disconnect, that network conditions vary, and that latency-sensitive services require rapid, localized decision-making. KuDECO overcomes these challenges by embedding reasoning and context-awareness directly into the orchestration layer.

Key Features:

  • Cognitive Orchestration at the node/cluster Level
    KuDECO augments Kubernetes with decentralized intelligence at each node and at a cluster level, enabling real-time container scheduling based on live context (e.g., CPU usage, network usage, energy consumption, and data freshness).
  • Cross-layer Context Awareness
    KuDECO integrates monitoring of resources, network conditions, and data lifecycle to inform orchestration decisions that meet application-specific goals.
  • Decentralized Architecture
    KuDECO components operate with minimal reliance on central control, promoting scalability, resilience, and autonomy across the edge-cloud continuum.
  • Unified Management via a common operator
    The KuDECO Automated Configuration Management (ACM) component offers a cohesive user interface for developers and cluster managers, managing deployments, configuration policies, and integration with non-Kubernetes systems.
  • Data-Centric Observability
    The Metadata Manager (MDM) provides observability into the full data workflow, treating data as a first-class entity and improving orchestration decisions.
  • Seamless Workload Migration
    While KuDECO can be used with different Kubernetes schedulers, it integrates the Seamless Workload Migration (SWM) component, which uses a solver-based approach to match containerized workloads with compute, data, and network resources via a min-max graph model (a simplified, generic placement sketch follows this list).
  • Network Awareness by design
    The Network Management and Adaptability component (NetMA) enables secure, adaptive connectivity and exposes metrics for optimized workload distribution.
  • Privacy-Preserving Learning and Context-Awareness via PDLC
    PDLC is the cognitive “brain” of KuDECO, performing node cost estimation and system stability analysis using decentralized, privacy-preserving learning algorithms.
  • End-to-End Monitoring
    KuDECO offers multi-layered observability, covering system resources (ACM), data workflows (MDM), and network state (NetMA), enabling more intelligent, fault-tolerant orchestration.
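To make the placement idea behind SWM and PDLC more concrete, the following sketch scores candidate nodes from live context metrics (CPU, network, energy) and places a workload on the cheapest one. It is a generic, greedy illustration only; the class names, metric weights, and cost function are invented for this example and do not represent KuDECO's actual solver or APIs.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Purely illustrative: a generic cost-based placement sketch, not the actual
// KuDECO/SWM solver. The node metrics stand in for the live context (CPU,
// network, energy) that KuDECO's components monitor and estimate.
public class PlacementSketch {

    record NodeContext(String name, double cpuLoad, double networkLoad, double energyCost) {

        // Hypothetical weighted cost; KuDECO's real model (min-max graph solving)
        // is considerably richer than a single scalar.
        double cost() {
            return 0.5 * cpuLoad + 0.3 * networkLoad + 0.2 * energyCost;
        }
    }

    // Pick the candidate node with the lowest estimated cost.
    static Optional<NodeContext> place(List<NodeContext> candidates) {
        return candidates.stream().min(Comparator.comparingDouble(NodeContext::cost));
    }

    public static void main(String[] args) {
        List<NodeContext> nodes = List.of(
                new NodeContext("edge-node-a", 0.80, 0.20, 0.10),
                new NodeContext("edge-node-b", 0.30, 0.40, 0.30),
                new NodeContext("cloud-node-c", 0.10, 0.90, 0.60));

        place(nodes).ifPresent(n -> System.out.println("Schedule workload on " + n.name()));
    }
}
```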

Validation and Relevance

KuDECO has been validated in real-world experimental setups, including:

  • Edge clusters using Raspberry Pi and k3s
  • Cloud deployments using scalable virtual machines
  • Test automation via the Horizon Europe CODECO Experimentation Framework (CODEF)

It has demonstrated applicability in several industrial contexts:

  • Real-time orchestration in factory automation
  • Adaptive workloads in smart city scenarios
  • Resilient infrastructure for critical systems

Open Ecosystem

KuDECO is designed for extensibility and collaboration. Its open-source codebase and modular design invite contributions from both academia and industry. Integration with standard Kubernetes environments ensures compatibility and ease of adoption, while its decentralized AI-driven architecture makes it a future-ready alternative for Edge-Cloud orchestration.

Eclipse OSILK

Wednesday, July 2, 2025 - 11:18 by Thierry Fraudet

Eclipse OSILK is an open source project providing modular, high-quality training material about Open Source and InnerSource, from basic to advanced concepts. Eclipse OSILK stands for Sharing OSS Knowledge Resources And Training for Education in Software. It is built with a "training-as-Code" philosophy, using AsciiDoc for ease of maintenance, modularity, and collaboration.

This modular training is provided as a webinar series that aims to foster a strong foundation in open source best practices, legal compliance, security considerations, and community engagement, all within the collaborative spirit of the Eclipse Foundation. Each module includes learning objectives, training materials, and comprehensive speaker notes, so anyone can use the material out of the box to give a training session or record a webinar.

The training modules can be forked by any organisation to tailor them for their particular environment.

Eclipse openK Niederspannungscockpit

Wednesday, June 11, 2025 - 12:23 by Mathias Schoen…

Eclipse openK Niederspannungscockpit visualizes technical data of an electrical distribution network, such as current, voltage, and power, imported as time series. This data is used to evaluate the state of the grid and derive measures to avoid overloading the equipment. The measurements from network equipment, such as transformers and feeders, are integrated using connectors; the connectors themselves are out of scope of the project. The measurements from prosumers are imported from a measurement management system, which is also out of scope of this project.

“Niederspannungscockpit” is the German term for low voltage cockpit, which summarizes the goal of this project: software to monitor and, to a limited extent, control assets of the low voltage electric network. This means the focus is on the distribution network with a nominal voltage of 400 V, which supplies electricity to consumers.

The Eclipse openK Niederspannungscockpit addresses two main aspects of network control systems: 

  • Transparency for better short-term decision making and planning: the Niederspannungscockpit visualizes measurement data of the low voltage network, such as voltage, current, and active and reactive power.
  • State assessment and control of assets for continuous grid operation and accelerated electrification: the Niederspannungscockpit can assess the state of the low voltage grid. When technical constraints are reached, a low voltage controller identifies measures to resolve the constraints and sends instructions for power reduction to controllable loads in the network.

The regulatory framework was adapted in 2023, making it mandatory for German distribution system operators (DSOs) to prepare a system able to control specific loads such as heat pumps, EV chargers, cooling devices, and electric storage systems. The main goal is to accelerate the electrification of consumers in order to reduce carbon emissions. As electrification gains traction, the limited capacity of the low voltage grid is seen as a bottleneck. The Niederspannungscockpit is one part of the solution to this bottleneck, through digitalisation of (low voltage) grid operation.
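For illustration, the sketch below mimics such a state assessment: it checks voltage measurements against a tolerance band around the 400 V nominal level and, when the band is violated, issues power reduction instructions to controllable loads. The band, class names, and curtailment strategy are simplified assumptions for this example, not the project's actual logic.

```java
import java.util.List;

// Simplified illustration of a low voltage state assessment: compare measured
// voltages against a tolerance band and curtail controllable loads on violation.
// The limits and the curtailment strategy are illustrative assumptions only.
public class LowVoltageAssessmentSketch {

    static final double NOMINAL_VOLTAGE = 400.0;  // nominal low voltage grid level
    static final double TOLERANCE = 0.10;          // illustrative +/-10 % band

    record Measurement(String feeder, double voltage) {}
    record ControllableLoad(String id, double currentPowerKw) {}

    static boolean violatesBand(Measurement m) {
        double deviation = Math.abs(m.voltage() - NOMINAL_VOLTAGE) / NOMINAL_VOLTAGE;
        return deviation > TOLERANCE;
    }

    // If any feeder violates the band, request a power reduction on controllable loads.
    static void assess(List<Measurement> measurements, List<ControllableLoad> loads) {
        measurements.stream().filter(LowVoltageAssessmentSketch::violatesBand).forEach(m -> {
            System.out.printf("Constraint violated on %s: %.1f V%n", m.feeder(), m.voltage());
            loads.forEach(l -> System.out.printf(
                    "  -> instruct %s to reduce power from %.1f kW%n", l.id(), l.currentPowerKw()));
        });
    }

    public static void main(String[] args) {
        assess(List.of(new Measurement("feeder-1", 355.0), new Measurement("feeder-2", 401.5)),
               List.of(new ControllableLoad("ev-charger-7", 11.0),
                       new ControllableLoad("heat-pump-3", 4.5)));
    }
}
```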

The quantity structure of the low voltage grid requires a scalable system, able to process millions of data points, for which typical network control systems where not designed.

Eclipse TModeler

Monday, June 9, 2025 - 02:32 by Luc Olivier FO…

The Eclipse TModeler project delivers an open-source suite of three interoperable components designed to support secure, decentralized, and model-driven software development:

- TModeler: a multi-language modeling and ORM engine that allows developers to define, validate, and bind complex data structures across platforms (C++, Java, Python), including spatial and secure fields.
- TSM: a synchronization engine that ensures real-time consistency between client and server model instances, eliminating the need for manually written APIs or bindings.
- THC: a cryptographic layer integrated at the model level, offering encryption, digital signatures, and identity management directly within the development workflow.

In scope:
- Declarative model-driven development tools
- Automatic synchronization and code binding between frontend and backend
- Field-level cryptographic protection (encryption, signatures)
- Cross-platform compatibility (C++, Java, Python)
- Developer empowerment in under-resourced environments

Out of scope:
- Development of a full IDE or general-purpose cloud platform
- Standardization of formal APIs beyond this project’s context
- Proprietary deployment models or integration with closed-source ecosystems

The project complements existing development tools by automating and securing core architectural layers, without seeking to replace them.
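To give a feel for the declarative, model-driven approach with field-level protection, a hypothetical model class is sketched below. All annotation and class names are invented placeholders for illustration; the actual TModeler, TSM, and THC APIs are defined by the project.

```java
// Hypothetical sketch only: the annotations below are invented placeholders to
// illustrate declarative models with field-level protection. They are NOT the
// actual Eclipse TModeler / TSM / THC APIs.
import java.util.UUID;

public class TModelerSketch {

    // Hypothetical marker annotations for field-level protection.
    @interface Encrypted {}
    @interface Signed {}

    // A declarative model: plain fields plus declaratively protected fields.
    static class PatientRecord {
        UUID id;
        String name;

        @Encrypted            // THC-style idea: encrypted at the model level before sync/storage
        String diagnosis;

        @Signed               // THC-style idea: integrity-protected via a digital signature
        String attendingPhysician;
    }

    public static void main(String[] args) {
        PatientRecord record = new PatientRecord();
        record.id = UUID.randomUUID();
        record.name = "Jane Doe";
        record.diagnosis = "confidential";
        record.attendingPhysician = "Dr. Example";

        // In the envisioned workflow, a synchronization engine (TSM) would keep this
        // instance consistent between client and server without hand-written APIs,
        // while the cryptographic layer (THC) handles the annotated fields.
        System.out.println("Model instance prepared for synchronization: " + record.id);
    }
}
```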

Eclipse OpenSOVD

Monday, May 26, 2025 - 03:16 by Thilo Schmitt

Eclipse OpenSOVD provides an open source implementation of the Service-Oriented Vehicle Diagnostics (SOVD) standard, as defined in ISO 17978. The project delivers a modular, standards-compliant software stack that enables secure and efficient access to vehicle diagnostics over service-oriented architectures. By offering an open and community-driven implementation, Eclipse OpenSOVD serves as a foundation for developers, OEMs, and tool vendors to build, test, and integrate SOVD-based solutions. The project will hence facilitate adoption and ensure industry coherence with the standard.

Eclipse OpenSOVD complements and integrates the Eclipse S-CORE project by providing an open implementation of the SOVD protocol that can be used for diagnostics and service orchestration within the S-CORE environment. This integration ensures that diagnostic capabilities are natively supported in SDV architectures, enabling developers and OEMs to build more robust, maintainable, and standards-compliant vehicle software stacks.

Key components include:

  • SOVD Gateway: REST/HTTP API endpoints for diagnostics, logging, and software updates (see the client sketch after this list).
  • Protocol Adapters: Bridging modern HPCs (e.g., AUTOSAR Adaptive) and legacy ECUs (e.g., UDS-based systems).
  • Diagnostic Manager: Service orchestration for fault reset, parameter adjustments, and bulk data transfers.
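As an indication of how a diagnostic client might talk to the SOVD Gateway, the sketch below issues a plain HTTP GET for the fault entries of a component using only the JDK HTTP client. The host, port, and resource path are illustrative assumptions; the actual resource layout is defined by ISO 17978 and the OpenSOVD gateway configuration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative SOVD client sketch using only the JDK HTTP client. The base URL
// and resource path are assumptions for demonstration purposes.
public class SovdClientSketch {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical gateway address and fault resource for an engine component.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/vehicle/v1/components/engine/faults"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // SOVD responses are JSON documents; here we simply print the raw body.
        System.out.println("Status: " + response.statusCode());
        System.out.println(response.body());
    }
}
```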

Future-Proofing:

  • Semantic Interoperability: JSON schema extensions for machine-readable diagnostics, enabling AI-driven analysis and cross-domain workflows.
  • Edge AI/ML Readiness: Modular design to support lightweight ML models (e.g., predictive fault detection) via collaboration with projects like Eclipse Edge Native.
  • Support for Extended Vehicle logging and publish/subscribe mechanisms.

Eclipse DataGrid

Wednesday, May 21, 2025 - 11:20 by Florian Habermann

The Eclipse DataGrid project delivers a high-performance, distributed, in-memory data processing layer for Java applications. Built upon the robust foundation of EclipseStore, Eclipse DataGrid extends its capabilities to enable seamless replication and distribution of native Java object graphs across multiple JVMs in a cluster. This innovative approach empowers developers to leverage the full potential of the Java language and JVM, eliminating the impedance mismatch and performance bottlenecks associated with traditional data solutions.

Eclipse DataGrid can also be seamlessly integrated into existing database application infrastructures, acting as an in-memory caching, searching, and data processing layer to significantly improve performance, reduce the load on primary databases, and lower overall database infrastructure costs, including potential savings on database license fees. The target group comprises both Java enterprise and cloud-native application builders.

 

1. Project Goals

  • Provide Java-Native In-Memory Data Processing: To offer a distributed Java in-memory data processing layer that is deeply integrated with the Java language, utilizing core Java features and the native Java object model.
  • Eliminate Impedance Mismatch: To remove the need for complex and inefficient mappings between Java objects and external data formats or structures.
  • Use JVM Performance for In-Memory Data Processing: To enable applications to achieve microsecond-level look-up, response, and query times by leveraging the performance of the JVM’s runtime, memory management, and JIT compiler.
  • Simplify Distributed Java Development: To provide a straightforward way for Java developers to work with distributed Java object graphs, data and clusters, using familiar Java concepts and tools.
  • Offer ACID Compliance in a Distributed Environment: To ensure data consistency and reliability in clustered deployments by using EclipseStore's ACID properties.
  • Optimize Database Performance and Costs: To enable the use of Eclipse DataGrid as a caching, searching, and processing layer, reducing the load on underlying databases and lowering infrastructure expenses.
  • Support for Any Programming Language: A REST interface enables access to Eclipse DataGrid from any programming language.

 

 

2. How the Project Works

Eclipse DataGrid comprises several key components that work together to provide a distributed Java in-memory data processing solution:

  • Java Object Graph Model: Unlike traditional key-value-based data grids, Eclipse DataGrid preserves Java’s object-oriented paradigm, enabling developers to work with complex object graphs without sacrificing performance or simplicity. Eclipse DataGrid replicates this graph across the cluster, allowing distributed access to the data. The Java object graph serves as the in-memory data store at runtime, supporting CRUD operations and a rollback mechanism (see the sketch after this list).
  • Java Streams API and Lucene: Eclipse DataGrid leverages Java's Streams API for efficient data searching, filtering, and manipulation. It will also integrate Lucene for advanced full-text search capabilities.
  • Indexing: A specialized HashMap enables indexing and fully automated lazy loading to minimize I/O traffic.
  • EclipseStore Integration: The integration of EclipseStore provides ACID-compliant persistence to all Eclipse DataGrid nodes.
  • Replicate Java Object Graphs: Eclipse DataGrid extends EclipseStore by providing a specific storage function that can distribute the storage process of an object graph across multiple JVMs within a cluster via event streaming. The standard consistency model is eventual consistency; a configurable strong consistency model will be provided in a later version.
  • Kubernetes Integration: A dedicated Helm chart will be provided to facilitate the creation, setup, and provisioning of a cluster environment on Kubernetes. This streamlines the deployment and management of Eclipse DataGrid in modern, cloud-native environments.
  • Management GUI: A Java application with a graphical user interface will be developed to simplify cluster operations. This GUI will enable users to:
    • Provision and set up Eclipse DataGrid clusters.
    • Perform ongoing maintenance of the cluster.
    • Monitor cluster health and performance using observability tools (e.g. Grafana, Prometheus)
    • Troubleshoot issues that may arise.

 

3. Project Components and Features

Eclipse DataGrid will provide the following key components and features:

  • Distributed Store Function: A core extension to EclipseStore that enables the distribution of data across multiple JVMs in a cluster.
  • Eventual Consistency: The standard consistency model is eventual consistency.
  • Kubernetes Cluster Management: Helm chart for automated cluster provisioning and management on Kubernetes.
  • Graphical Management Interface: A user-friendly Java application for cluster setup, maintenance, monitoring, and troubleshooting.
  • Native Java Object Graph Replication: The ability to replicate and distribute native Java object graphs across a cluster.
  • ACID Compliance: Distributed transactions and data consistency, building upon EclipseStore's ACID properties.
  • High-Performance Data Access: Microsecond-level read and write access to distributed data.
  • Java Streams API Integration: Seamless integration with Java's Streams API for efficient data manipulation.
  • Lucene Integration: Full-text search capabilities for complex data querying.
  • Secure Serialization: Protection against deserialization attacks through the use of Eclipse Serializer.
  • Flexible Data Modeling: Users can define their data structures using any Java class, allowing for a fully customized and domain-driven approach.
  • In-Memory Performance: Leveraging JVM memory management for optimal speed.
  • JIT Compiler Optimization: Benefiting from the JVM's JIT compiler for runtime performance enhancements.
  • Database Optimization: Ability to serve as an in-memory caching, searching and processing layer for existing database applications.

 

 

4. Core Java Features Utilized

Eclipse DataGrid is designed to exploit the full power of the Java language and the JVM. It leverages these core Java features:

  • Java Object Model: The project works directly with Java's native object model, eliminating the need for object-relational mapping (ORM) or other impedance-matching techniques.
  • Java Memory Management: Eclipse DataGrid relies on the JVM's efficient memory management, including garbage collection, to handle large volumes of in-memory data.
  • Java Streams API: The project utilizes the Java Streams API for efficient and expressive data manipulation, including filtering, mapping, and aggregation.
  • Concurrency Utilities: Java's concurrency utilities will be used to manage distributed operations, ensuring thread safety and optimal performance.
  • JVM Internals: The project is designed to work efficiently with the JVM, taking advantage of its architecture and optimizations such as Virtual Threads.

 

5. Use Cases

Eclipse DataGrid is ideal for a wide range of use cases where high-performance, low-latency, and scalable data access is critical:

  • High-Performance Caching: Dramatically improve application performance by caching frequently accessed data in a distributed in-memory grid, reducing the load on the primary database.
  • Real-Time Analytics: Enable real-time data analysis and decision-making by providing microsecond-level access to data for complex queries and aggregations.
  • Scalable Web Applications: Build highly scalable and responsive web applications by distributing session data and application state across a cluster.
  • Microservices Architectures: Facilitate the development of microservices-based applications by providing a shared, distributed data layer that can be accessed by multiple services.
  • Complex Event Processing: Process and analyze high-velocity data streams in real-time for applications such as fraud detection, algorithmic trading, and IoT data analysis.
  • Distributed Graph Processing: Efficiently store and process graph data for applications such as social network analysis, recommendation engines, and knowledge graphs.
  • Online Gaming: Power real-time, multiplayer online games with low-latency data access and distributed state management.
  • E-commerce Applications: Handle high-volume transactions, personalize shopping experiences, and manage product catalogs with extreme speed and scalability.
  • Financial Services: Support high-frequency trading, risk management, and fraud detection with real-time data access and processing.
  • Healthcare Applications: Enable fast access to patient data, support real-time monitoring, and facilitate data-intensive research.
  • Database Optimization and Cost Reduction: Offload data processing from primary databases, reducing their workload and enabling the consolidation of multiple database types, leading to lower infrastructure costs and license fees.

  

6. Benefits

Eclipse DataGrid offers numerous benefits to Java developers:

  • Unparalleled Performance: Microsecond-level data access for demanding, high-performance applications.
  • Simplified Development: Develop distributed applications using familiar Java concepts and the native Java object model.
  • Reduced Complexity: Eliminate the need for complex data mapping and integration with external data stores.
  • Increased Scalability: Easily scale applications horizontally by adding more nodes to the cluster.
  • Improved Reliability: Ensure data consistency and availability with ACID-compliant distributed transactions.
  • Lower Infrastructure Costs: Optimize resource utilization and potentially reduce the need for multiple specialized databases.
  • Faster Time to Market: Accelerate application development by providing a ready-to-use, high-performance data grid solution.
  • Full Java Power: Ability to implement any complex business logic.
  • Unified Data Layer: Handle various data needs (key-value, documents, graph-like structures) within a single, consistent system.
  • Database Efficiency: Improved performance and reduced load on primary databases.

 

7. Conclusion

Eclipse DataGrid represents a significant advancement in Java application development, providing a powerful and intuitive way to build high-performance, distributed, in-memory data solutions. By leveraging the core strengths of Java and the JVM, Eclipse DataGrid empowers developers to create a new generation of data-intensive applications with unparalleled performance, scalability, and reliability. We believe that Eclipse DataGrid will become a valuable asset within the Eclipse ecosystem, driving innovation and growth in the Java community, and invite the Eclipse community to collaborate on shaping Eclipse DataGrid into a cornerstone of modern data processing with Java.

Eclipse Piranha Cloud

Monday, May 19, 2025 - 16:28 by Arjan Tijms

Traditional Jakarta EE implementations are application servers: software products mostly intended to be installed. They can deploy and undeploy one or more applications and typically include support for monitoring, clustering, and many additional features. GlassFish is one example of such an implementation.

Eclipse Piranha Cloud intends to provide an implementation of Jakarta EE without being an application server. It focuses on being highly composable, both with regard to the Jakarta EE implementation components being used and with regard to the features that are provided.

A special focus is on embedded use, where a runtime composition can be used that omits functionality not needed for embedded use. For example, Eclipse Piranha Cloud could be used just to render a Jakarta Faces page into a String, in which case the composition would not need an HTTP server. The code embedding such an Eclipse Piranha Cloud instance would only include the Mojarra library and would programmatically create and pass in an object representing an HttpServletRequest.
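A rough sketch of such an embedded composition is shown below. The class and method names approximate Piranha's embedded API and the Faces wiring is simplified, so treat this as an outline of the idea rather than verified integration code.

```java
import cloud.piranha.embedded.EmbeddedPiranha;
import cloud.piranha.embedded.EmbeddedPiranhaBuilder;
import cloud.piranha.embedded.EmbeddedRequest;
import cloud.piranha.embedded.EmbeddedRequestBuilder;
import cloud.piranha.embedded.EmbeddedResponse;
import jakarta.faces.webapp.FacesServlet;

// Hedged sketch: class and method names approximate Piranha's embedded API, and the
// Mojarra/Faces setup is simplified; consult the project documentation for specifics.
public class EmbeddedFacesSketch {

    public static void main(String[] args) throws Exception {
        // Compose a minimal Piranha instance: a Faces servlet, no HTTP server.
        EmbeddedPiranha piranha = new EmbeddedPiranhaBuilder()
                .servlet("Faces Servlet", FacesServlet.class.getName())
                .servletMapping("Faces Servlet", "*.xhtml")
                .buildAndStart();

        // Programmatically create a request object instead of accepting a socket.
        EmbeddedRequest request = new EmbeddedRequestBuilder()
                .servletPath("/hello.xhtml")
                .build();
        EmbeddedResponse response = new EmbeddedResponse();

        piranha.service(request, response);

        // The rendered Faces page is now available as a plain String.
        System.out.println(response.getResponseAsString());

        piranha.stop();
    }
}
```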

 

Eclipse Plug-in for Copilot

Thursday, May 15, 2025 - 00:42 by Rome Li

Eclipse Plug-in for Copilot provides code completion suggestions, chat, and agent-based handling of coding tasks. It is a client application that talks to the GitHub online services. The focus of the plug-in is UI integration to enable GitHub Copilot features in the Eclipse IDE. GitHub online services are not part of the scope.

Jakarta Portlet Bridge

Monday, March 24, 2025 - 16:06 by Neil Griffin

The Jakarta Portlet Bridge project is responsible for defining the Specification and API, which enables the development of Jakarta Faces web applications that can be deployed within a portlet container running inside a Jakarta EE servlet container or application server. The Portlet Bridge Specification defines requirements for mapping the portlet lifecycle (HEADER_PHASE, EVENT_PHASE, ACTION_PHASE, RESOURCE_PHASE, and RENDER_PHASE) to the Faces lifecycle (RESTORE_VIEW, APPLY_REQUEST_VALUES, PROCESS_VALIDATIONS, UPDATE_MODEL_VALUES, INVOKE_APPLICATION, and RENDER_RESPONSE). The goal is to provide Jakarta Faces developers with the ability to deploy their web applications in a portlet container with little to no modification.

Jakarta Portlet

Monday, March 24, 2025 - 12:46 by Neil Griffin

The Jakarta Portlet project is responsible for defining the Specification and API, which enables the development of modular, server-side components that can be deployed within a portlet container running inside a Jakarta EE servlet container or application server. 

The Portlet Specification defines requirements for the portlet container, including an execution lifecycle and phases such as HEADER_PHASE, EVENT_PHASE, ACTION_PHASE, RESOURCE_PHASE, and RENDER_PHASE. The portlet lifecycle follows a request/response design similar to the servlet lifecycle, where each portlet has the opportunity to participate in the lifecycle and contribute output to the response. The Specification also defines requirements for a client-side JavaScript API, including a client-side facility called the Portlet Hub. The Portlet Java API, Javadoc, and JSDoc define the contract for a portlet application to interact with the portlet container. A portlet application consists of one or more portlets that are packaged within a single Jakarta EE Web Application Archive (WAR) artifact.
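For orientation, a minimal portlet that contributes markup during the RENDER_PHASE looks roughly like the example below. It assumes the jakarta.portlet namespace expected for this project (earlier Specification versions use javax.portlet with the same class names); the published Specification and Javadoc remain the authoritative reference.

```java
import java.io.IOException;
import java.io.PrintWriter;

// Assumes the jakarta.portlet namespace expected for this project; earlier
// Portlet Specification versions use javax.portlet with the same class names.
import jakarta.portlet.GenericPortlet;
import jakarta.portlet.PortletException;
import jakarta.portlet.RenderRequest;
import jakarta.portlet.RenderResponse;

// Minimal portlet contributing an HTML fragment during the RENDER_PHASE.
public class HelloPortlet extends GenericPortlet {

    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter writer = response.getWriter();
        writer.println("<p>Hello from a Jakarta Portlet!</p>");
    }
}
```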

The Specification also defines minimal requirements for a portal, which is a related technology responsible for aggregating output from a portlet container into a cohesive view, typically a complete HTML document rendered in a browser. Portals typically provide other features, such as page definition, portlet location on pages, and user management.