Eclipse Mnestix AAS Browser

Friday, June 14, 2024 - 02:15 by Alwin Hoffmann

AAS made easy! Eclipse Mnestix AAS Browser is an easy-to-use entry point into the world of Asset Administration Shells (AAS).

The Eclipse Mnestix AAS Browser is a single-page application written in React that visualizes Asset Administration Shells and their submodels. It supports version 3 of the AAS metamodel and API.

You configure the endpoint of an AAS repository and search for AAS IDs; if a Discovery Service is available, you can also search for Asset IDs and visualize the corresponding AAS.
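Looking up a shell in a configured repository boils down to building the right API URL. The sketch below shows how a client might construct the request path for the v3 HTTP API, which expects AAS identifiers to be base64url-encoded without padding; the repository URL and identifier are hypothetical placeholders.

```python
import base64

def encode_aas_id(identifier: str) -> str:
    # Base64url-encode an AAS identifier and strip the padding, as the
    # AAS HTTP API expects for identifier path parameters.
    return base64.urlsafe_b64encode(identifier.encode("utf-8")).decode("ascii").rstrip("=")

def shell_url(repository: str, aas_id: str) -> str:
    # GET <repository>/shells/<encoded-id> returns the shell resource.
    return f"{repository.rstrip('/')}/shells/{encode_aas_id(aas_id)}"

# Hypothetical repository endpoint and AAS identifier:
print(shell_url("https://repo.example.com/api/v3.0", "https://example.com/aas/4711"))
```

A browser front end like Mnestix issues such requests against the configured repository and renders the returned shell and submodel JSON.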

Eclipse Mnestix AAS Browser is also optimized for mobile view, providing a good user experience on mobile phones.

Eclipse Mnestix AAS Browser can visualize every submodel, even those not standardized by the IDTA. Some submodels are visualized in an extra user-friendly manner:

  • Digital Nameplate
  • Handover Documentation
  • Hierarchical Structures enabling BoM
  • Carbon Footprint

Moreover, dedicated visualizations for further submodels can be added as a feature.

Eclipse OpenBSW

Saturday, May 25, 2024 - 05:12 by Marco Langerwisch

The project provides an embedded basic software (BSW) stack for microcontrollers written in C++ (language features up to C++14 are used). 
It provides libraries for

  • lifecycle (startup/shutdown)
  • communication (CAN, DoCAN, UDS)
  • simple OS abstraction layer (reduced to scheduling and events, FreeRTOS implementation provided)
  • STL-like container classes and a lock-free SPSC queue for inter-task/inter-core communication
  • basic I/O
  • development tools (serial console, logging framework)
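The single-producer/single-consumer (SPSC) queue mentioned above is a classic lock-free structure: one task writes, one task reads, and each index is modified by only one side. The Python sketch below illustrates the ring-buffer idea only; the names are illustrative, not the actual OpenBSW C++ API, where the head and tail would be atomics with acquire/release ordering.

```python
class SpscQueue:
    """Sketch of a single-producer/single-consumer ring buffer."""

    def __init__(self, capacity: int):
        # A power-of-two capacity lets us wrap indices with a cheap mask.
        assert capacity & (capacity - 1) == 0, "capacity must be a power of two"
        self._buf = [None] * capacity
        self._mask = capacity - 1
        self._head = 0  # advanced only by the consumer
        self._tail = 0  # advanced only by the producer

    def push(self, item) -> bool:
        # Full when the tail is a whole lap ahead of the head.
        if self._tail - self._head == len(self._buf):
            return False
        self._buf[self._tail & self._mask] = item
        self._tail += 1  # publish only after the slot is written
        return True

    def pop(self):
        if self._head == self._tail:
            return None  # empty
        item = self._buf[self._head & self._mask]
        self._head += 1
        return item

q = SpscQueue(4)
for i in range(5):
    q.push(i)  # the fifth push fails: the queue holds at most 4 items
print([q.pop() for _ in range(4)])  # -> [0, 1, 2, 3]
```

Because each index has exactly one writer, no lock is needed; the C++ version relies on memory ordering instead of Python's interpreter semantics.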

A demo implementation supporting two platforms, POSIX as a virtualized environment for rapid prototyping and NXP S32K148EVB with an example BSP, is provided.

Eclipse Data Rights Policies Profile (DRP)

Wednesday, May 15, 2024 - 12:16 by Rajiv Rajani

Trust forms the basis for any data exchange transaction. However, the control, transfer, and claims protocols are trust agnostic and are expected to connect to one or more trust frameworks.

Since each dataspace is free to choose its own topology as well as the standards and trust frameworks it uses or requires, participants must be able to establish trust and fulfill the requirements of the trust framework in order to perform trusted data exchange.

The proposal aims to establish requirements in the form of profiles, enabling organizations to use these profiles with any of the underlying protocols and complete the connection to the trust framework.

In a data exchange transaction, various parties may be involved in different roles as defined in the trust framework; mapping those roles to a transaction is necessary to understand how the compliance and other requirements of the trust framework apply.



  1. A Connector that requests data from another connector acts on behalf of a participant, and the policies of the trust framework as well as of the dataspace apply to it.


The role model is used to determine the appropriate functions and requirements for those components in a given data exchange context.


Eclipse PICCOLO

Thursday, April 25, 2024 - 04:50 by Chulhee Lee

The main goal of the Eclipse PICCOLO project is to develop an efficient vehicle service orchestrator framework that realizes the potential benefits of cloud-native technologies for in-vehicle services and applications. To this end, Eclipse PICCOLO shall ensure the activation of pre-defined use case scenarios or policies in a well-organized and streamlined fashion, depending on the various contexts of vehicle status, environment, connected devices, and service requirements. Eclipse PICCOLO shall enable the deployment of vehicle scenarios and policies in short development cycles by reducing development lead time. In addition, it provides the management framework needed to deploy microservices according to the requirements of vehicle applications, saving integration cost, time, and effort.

Eclipse SWTImageJ

Wednesday, March 20, 2024 - 09:31 by Marcel Austenfeld


ImageJ is a versatile platform for image analysis, offering a broad range of functionalities through its scripting capabilities and extensive plugin ecosystem in the scientific domain. However, its reliance on AWT for the GUI presents limitations in terms of graphical user interface design, platform compatibility, and user experience. Porting ImageJ to SWT, an efficient modern Java GUI toolkit, can address these limitations and unlock new opportunities for plugin developers and users of scientific image applications.


The primary objective of this proposal is to port ImageJ to SWT to enhance its performance, usability, and integration within the Eclipse IDE (a plugin using SWT/AWT already exists).

By leveraging the native capabilities of SWT, we aim to provide users with a more responsive and intuitive interface while ensuring compatibility with Eclipse's development environment.

The original software file structure (organization of the class files, source code in packages and plugins, macros) should also be preserved, to ensure that existing ImageJ developers feel comfortable and motivated to improve and use the ported application.


1. SWT Integration: Polishing the rewrite of the graphical user interface of ImageJ using SWT, leveraging its native components and rendering capabilities. This will ensure better performance and a more consistent user experience across different operating systems.

2. Native Look and Feel: Utilize SWT's support for native widgets to ensure that ImageJ maintains the look and feel of the host operating system, enhancing its integration with the overall desktop environment.

3. Future Improved Performance: Take advantage of SWT's architecture and optimized event handling to improve the performance of ImageJ, especially when dealing with large datasets and complex image processing tasks.

4. Eclipse Integration: Ensure seamless integration of the SWT-based ImageJ with the Eclipse IDE, allowing users to leverage Eclipse's features for project management, version control, and collaborative development.

5. Compatibility and Accessibility: Ensure backward compatibility where possible with existing ImageJ macros, plugins, and functionalities, while providing accessibility features to accommodate users with diverse needs and preferences.

6. User Experience Enhancements: Implement usability improvements and user interface enhancements to streamline common workflows and make ImageJ more intuitive and user-friendly.

7. Community Engagement: Engage with the ImageJ community to gather feedback, address concerns, and ensure that the ported version meets the needs and expectations of users and developers.

Expected Outcomes:

1. Enhanced performance and responsiveness of the ImageJ GUI through SWT integration.

2. Improved usability and user experience, leading to increased adoption and satisfaction among users.

3. Seamless integration with Eclipse, facilitating a more productive development workflow for researchers and practitioners.

4. Continued compatibility with existing macros, plugins, and functionalities where possible, ensuring a smooth transition for current ImageJ users.


Porting ImageJ to SWT offers significant benefits in terms of performance, usability, and integration within the Eclipse IDE. By undertaking this porting effort, we aim to modernize ImageJ's graphical interface, improve its performance, and enhance its usability, ultimately empowering users to achieve more in their image analysis tasks.


The porting budget has already been covered by Lablicate GmbH for the development effort required to port ImageJ to SWT, including software development, testing, documentation, and community engagement activities. Additional resources may be allocated for ongoing maintenance and support to ensure the long-term sustainability of the ported version.



Rasband, W.S., ImageJ, U.S. National Institutes of Health, Bethesda, Maryland, USA, 1997-2018.

Schneider, C.A., Rasband, W.S., Eliceiri, K.W. "NIH Image to ImageJ: 25 years of image analysis". Nature Methods 9, 671-675, 2012. (This article is available online.)

Abramoff, M.D., Magalhaes, P.J., Ram, S.J. "Image Processing with ImageJ". Biophotonics International, volume 11, issue 7, pp. 36-42, 2004. (This article is available as a PDF.)


Eclipse Dataspace Decentralized Claims Protocol

Thursday, March 7, 2024 - 16:00 by James Marino

Technical Details

DCP defines the following protocol flows.

1. Base Identity Protocol (BIP)

The *Base Identity Protocol* defines how to obtain and communicate participant identities and claims using self-issued security tokens. BIP defines:

- A format for self-issued tokens based on the Decentralized Identifiers (DIDs) v1.0, did:web Method, and Self-Issued OpenID Provider v2 specifications.

- Endpoints and a flow to obtain self-issued security tokens.
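The defining trait of a self-issued token is that issuer and subject are the same party, identified by its DID. The sketch below builds a JWT-shaped token to illustrate that structure; the DIDs are placeholders, and real DCP/SIOP profiles sign with an asymmetric key resolvable via the DID document rather than the HMAC used here to keep the example self-contained.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # Base64url without padding, as used for JWT segments.
    return base64.urlsafe_b64encode(data).decode("ascii").rstrip("=")

def self_issued_token(did: str, audience: str, key: bytes) -> str:
    # In a self-issued token, "iss" and "sub" are the participant's own DID.
    header = {"alg": "HS256", "typ": "JWT"}  # illustrative; real profiles use asymmetric algs
    now = int(time.time())
    claims = {"iss": did, "sub": did, "aud": audience, "iat": now, "exp": now + 300}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    signature = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

token = self_issued_token("did:web:consumer.example", "did:web:provider.example", b"demo-key")
print(token.count("."))  # a JWT has three dot-separated segments -> prints 2
```

The relying party resolves the DID, obtains the corresponding public key, and verifies the signature, so no central token issuer is needed.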

2. Verifiable Presentation Protocol (VPP)

The *Verifiable Presentation Protocol* (VPP) defines a protocol for storing and presenting Verifiable Credentials (VCs) and other identity-related resources. It covers the following aspects:

- Endpoints and message types for storing identity resources belonging to a holder
- Endpoints and message types for resolving identity resources
- Secure token exchange for restricting access to identity resource endpoints

The VPP makes use of the following standards (among others):

- Verifiable Credentials Data Model v1.1
- DIF Presentation Exchange
- JSON Web Token (JWT)

The VPP is designed to make integrating existing software wallets and identity systems easy. 
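A presentation wraps one or more credentials that the holder hands to a verifier. The skeleton below follows the VC Data Model v1.1 structure; the DIDs and the credential type are illustrative placeholders, and a real presentation would also carry a proof produced by the holder's wallet.

```python
import json

# Minimal Verifiable Presentation skeleton per the VC Data Model v1.1.
presentation = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiablePresentation"],
    "holder": "did:web:consumer.example",          # placeholder holder DID
    "verifiableCredential": [
        {
            "@context": ["https://www.w3.org/2018/credentials/v1"],
            "type": ["VerifiableCredential", "MembershipCredential"],  # illustrative type
            "issuer": "did:web:issuer.example",
            "credentialSubject": {"id": "did:web:consumer.example"},
        }
    ],
}
print(json.dumps(presentation, indent=2))
```

The VPP's endpoints move documents of this shape between the holder's credential store and the verifier, with token-based access control in front.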

3. Credential Issuance Protocol (CIP)

Verifiable Credentials enable a holder to present claims directly to a Relying Party (RP) without
the involvement or knowledge of the `Credential Issuer`. The *Credential Issuance Protocol* (CIP) provides an interoperable mechanism for parties (potential holders) to request credentials from a `Credential Issuer`. Specifically:

- It defines formats and profiles for verifiable credentials.
- It defines the endpoints and message types for requesting credentials to be issued by a `Credential Issuer`.
- It is designed to handle both use cases where credentials can be issued automatically and those where a manual approval workflow is required.
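To make the flow concrete, a holder's request to an issuer might look like the message below. This is a hypothetical sketch only: the field names and the endpoint are illustrative and do not reproduce the normative CIP vocabulary.

```python
import json

# Hypothetical shape of a credential request sent to a Credential Issuer.
# All names below are illustrative, not the actual CIP message schema.
request = {
    "type": "CredentialRequestMessage",            # illustrative message type
    "issuerEndpoint": "https://issuer.example/credentials",  # placeholder URL
    "holder": "did:web:consumer.example",          # the requesting (potential) holder
    "requestedCredentials": ["MembershipCredential"],
}
print(json.dumps(request))
```

On receipt, the issuer either issues the credential immediately or parks the request in a manual approval workflow, matching the two use cases above.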

Use of Relevant Standards

DCP is based on relevant existing standards where possible. See the above description for specific examples. 

Integration with Identity-related projects

DCP is designed to integrate with existing wallet and identity systems. For example, Catena-X has integrated Keycloak with the Verifiable Presentation Protocol as a token issuer. Existing wallets and credential issuance systems can support VPP and CIP endpoints.

Open Source Implementations and Industry Adoption

Two open-source Eclipse projects are currently implementing the specifications: the EDC Identity Hub and the Tractus-X Managed Identity Wallet.

A TCK is currently planned and will be based on the Dataspace TCK Framework.

Dataspaces that use DCP

Eona-X | EONA-X is a dataspace in the domain of Mobility, Transport and Tourism. It leverages EDC capabilities to power data exchanges between its participants.

Contact: phebant[@]amadeus[.]com

Catena-X | Catena-X is offering the first open and collaborative data space for the automotive industry to boost business processes using data-driven value chains.

Contact: info[@]catena-x[.]net

Eclipse Zenoh-Flow

Tuesday, February 20, 2024 - 08:50 by Julien Loudet

Eclipse Zenoh-Flow aims at simplifying and structuring (i) the declaration, (ii) the deployment and (iii) the writing of complex and, potentially, safety-critical applications that can span from the Cloud to the Microcontroller.

To these ends, Eclipse Zenoh-Flow leverages the data flow programming model, where applications are viewed as a directed graph of computing units, and Eclipse Zenoh, an Edge-native, data-centric, location-transparent communication middleware. This makes for a powerful combination: Zenoh offers flexibility and extensibility, while data flow programming structures computations. The main benefit of this approach is that it separates applications from the underlying infrastructure: data are published and subscribed to (automatically with Zenoh-Flow) without the need to know where they are actually located.

Eclipse Zenoh-Flow further separates the business logic contained within each computing unit from the constraints of the overall application. An application is described in a descriptor file that groups together the different blocks that compose it as well as (i) how these blocks are connected, (ii) how they should be deployed on the infrastructure and (iii) possible execution constraints (e.g. time requirements or execution order).
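The data flow model described above can be illustrated with a toy pipeline: independent computing units connected into a graph, where each unit only knows its inputs and outputs. Zenoh-Flow itself is written in Rust and driven by descriptor files; this Python sketch merely mirrors the concept, and the stage names are illustrative.

```python
# A toy data flow: source -> operator -> sink. Each unit contains only
# business logic; how units connect is described separately (the "descriptor").

def source():
    return [1, 2, 3]                 # produces data

def double(xs):
    return [2 * x for x in xs]       # an intermediate operator

def sink(xs):
    return sum(xs)                   # consumes data

# The "descriptor": an ordered chain of connected units.
pipeline = [source, double, sink]

data = None
for stage in pipeline:
    data = stage() if data is None else stage(data)
print(data)  # -> 12
```

In Zenoh-Flow the links between units are Zenoh publications and subscriptions, so the same graph can be deployed across the Cloud-to-Microcontroller continuum without changing the units' code.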

As Eclipse Zenoh-Flow is also targeting safety-critical applications, it leverages the Rust programming language for its core. It also offers bindings in other languages to facilitate integration with existing software.

Eclipse Dataspace Protocol

Friday, February 16, 2024 - 03:05 by Sebastian Steinbuss

The Eclipse Dataspace Protocol is used in the context of data spaces as described and defined in the subsequent sections, with the purpose of supporting interoperability. The specification provides fundamental technical interoperability for participants in data spaces; the protocol specified here is therefore required to join any data space as specified here. Beyond the technical interoperability measures described in this specification, semantic interoperability should also be addressed by the participants. From the perspective of the data space, interoperability also needs to be addressed on the levels of trust, organization, and law. Cross-data-space communication is not the subject of this document, as it is addressed by the data spaces' organizational and legal agreements.

The interaction of participants in a data space is conducted by participant agents, so-called Connectors, which implement the protocols described above. While most interactions take place between Connectors, some interactions with other systems are required. The figure below provides an overview of the context of this specification.

An Identity Provider realizes the required interfaces and provides the information needed to implement the Trust Framework of a data space. Validating the identity of a given participant agent and validating additional claims are the fundamental mechanisms. The structure and content of such claims and identities may vary between data spaces, as may the structure of the Identity Provider itself, e.g. a centralized, decentralized, or federated system.

A connector will implement additional internal functionalities, like monitoring or policy engines, as appropriate. Whether and how a connector implements them is not covered by this specification.

The same applies to the data transferred between the systems. While this document defines neither the transport protocol nor the structure, syntax, and semantics of the data, a specification for those aspects is required and is subject to the agreements of the participants or the data space.

Eclipse Conformity Assessment Policy and Credential Profile

Monday, February 12, 2024 - 09:16 by Pierre Gronlier

This proposal contributes to implementing an end-to-end, holistic approach to seamlessly integrate, anchor, and enforce negotiated data exchange agreements throughout the underlying infrastructure, encompassing processes related to data processing, storage, and transfer.

Conformity assessment is one of the industry's answers to risk management, yet there is no commonly accepted specification for interoperable conformity assessment.

The goal of this specification is to combine the existing world of conformity assessment with the needs of a Provider and a Consumer to reach an agreement based on a common understanding, using existing claims and evidence.


  • How can a Provider express that an unknown Consumer shall only process data on ISO 27001 certified services?
  • How does the Consumer demonstrate that it meets such a requirement?


To demonstrate the viability of the current proposal, Gaia-X used this specification to build the Gaia-X Compliance service, and an open-source implementation of the elements described in the Scope section has been completed.

The Gaia-X Compliance service and its implementation based on the current specification proposal are being used by several dataspaces, such as Catena-X, Eona-X, Prometheus-X, Agdatahub, and others.

The implementation used to validate the current specification is based on:

  • W3C Verifiable Credential, W3C SHACL and W3C SPARQL for the validation and verification of the models mentioned in the Scope section.
  • W3C DID, X509 and JOSE for the cryptographic chains of trust.
  • ETSI TS 119 312 and EBSI APIs for the presentations of the trust anchors.
  • OIDC4VCi and OIDC4VP for the exchange of W3C Verifiable Credentials.
  • TRAIN to discover the ecosystem rules and trust anchors.

Eclipse Kanto-Auto

Friday, January 12, 2024 - 16:23 by Martin Brown

Eclipse Kanto-Auto offers automotive-specific capabilities that enhance the Eclipse Kanto project into a comprehensive solution for the automotive industry. Kanto-Auto provides extensions to Eclipse Kanto focused on automotive market segment capabilities by standardizing and securing CAN communications.

It is critical that Automotive IoT applications have a standardized and secure solution for CAN communications with embedded controllers. The Eclipse Kanto-Auto extensions will make Eclipse Kanto more appropriate for automotive-focused implementations.