Dirigible

This proposal is in the Project Proposal Phase (as defined in the Eclipse Development Process) and is written to declare its intent and scope. We solicit additional participation and input from the community.
Proposal State
Created
Background

The Dirigible project came out of an internal SAP initiative to address the extension and adaptation use cases related to SOA and Enterprise Services. On the one hand, the project drew on the lessons learned from the standard tools and approaches used so far; on the other hand, it added features aligned with the most recent technologies and architectural patterns related to Web 2.0 and HTML5. This made it complete enough to be used as the only tool and environment needed for building and running on-demand applications in the cloud.

Scope

Full development lifecycle of on-demand applications, leveraging the In-System Programming Model and Rapid Application Development techniques:

  • Database modeling and management;
  • RESTful services authoring using various dynamic languages;
  • User interface – pattern-based generation as well as custom forms and reports;
  • Injected services;
  • Role-based security;
  • External services integration;
  • Testing;
  • Debugging;
  • Documentation;
  • Extensions management;
  • Operations and monitoring;
Description

Overview

Dirigible is an open source project that provides an Integrated Development Environment as a Service (IDEaaS), as well as integration with runtime containers for the running applications. The environment itself runs directly in the browser and therefore requires no additional downloads or installations. It packs all the needed components into a self-contained, well-integrated software bundle that can be deployed on any Java-based web server.

Target applications built with Dirigible are atomic yet self-contained, cloud-based modules that cover end-to-end vertical scenarios. These applications comply with the Dynamic Applications concepts and structure.

Dynamic Applications

The overall process of building Dynamic Applications rests on well-known and proven principles:

In-System Development - known from microcontrollers, applied here to business software systems. The major benefit is working on a live system, where all the changes made by a developer take effect immediately, hence the impact and side effects can be seen at a very early stage of the development process.

Content Centric - known from networking; in the context of Dynamic Applications it proclaims that all the artifacts are text-based models and executable scripts stored in a generic repository (along with the related binaries, such as images). This makes the life-cycle management of the application and its transport between IT landscapes (Dev/Test/Prod) simple and straightforward. In addition, a desired effect is the ability to set up the whole system simply by pulling the content from a remote source code repository, such as Git.

Scripting Languages - programming languages written for a special runtime environment that interprets (rather than compiles) the execution of tasks. Today's dynamic languages, and their smooth integration into web servers, make in-system development in the cloud possible.

Shortest turnaround time - instant access and instant value have become the most important requirements for developers, which is why they are the major goal of our tooling.

In general, a Dynamic Application consists of components, which can be separated into the following categories:

Data Structures - the artifacts representing the domain model of the application. There is no intermediate adaptation layer in this case; hence all the entities directly represent the database artifacts - tables and views.

Entity Services - domain model entities exposed as web services, following the modern Web 2.0 patterns and scripting language capabilities.

Scripting Services - general purpose services for utilities, custom modules and extensions.

User Interface - pattern-based, generated user interfaces, as well as custom forms and reports based on popular AJAX frameworks or any other framework chosen by the developer.

Integration Services - in the real world there are always external services that have to be integrated into your application - for data transfer, triggering external processes, lookups in external sources, etc. For this purpose, Dirigible provides capabilities to create simple and dynamic routing services following the Enterprise Integration Patterns guidance.

Documentation - an integral part of any application is its documentation. Dirigible supports authoring documentation content in the widespread wiki (Confluence) format.

Architecture

Dirigible's architecture follows the well-proven principles of simplicity and scalability in classical service-oriented architecture.

The design-time and runtime components are well separated. They interact with each other through a dedicated repository where the only linking point is the content itself. At design time, programmers and designers use the Web-based integrated development environment built on the Eclipse Remote Application Platform (RAP). Leveraging this robust and powerful framework, the tooling can easily be enhanced using well-known APIs and concepts - SWT, JFace, OSGi, extension points, etc.

The Repository is the container of the project artifacts. It is a generic, file-system-like content repository built on a relational database (accessed via JDBC).
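
To make this concrete, the following is a purely hypothetical Java sketch of what such a file-system-like repository API might look like; the interface and method names are invented for illustration and are not Dirigible's published API.

// Hypothetical sketch -- illustrative names only, not Dirigible's actual API.
// It conveys the idea of a file-system-like content repository whose
// "folders" and "files" are ultimately rows in a relational database.
import java.util.List;

interface Resource {
    String getPath();      // e.g. "/myproject/services/orders.js" (hypothetical path)
    byte[] getContent();   // script/model text, or binary data such as images
}

interface ResourceCollection {
    List<Resource> getResources();   // the "files" within this "folder"
}

interface Repository {
    Resource getResource(String path);
    Resource createResource(String path, byte[] content);
    void removeResource(String path);
    ResourceCollection getCollection(String path);
}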

On top sit Dirigible's containers for service execution, chosen depending on the scripting language and purpose - Rhino, jRuby, Groovy, Camel, CXF, etc.

The runtime can scale independently of the design-time part, and can even be deployed without the design time at all (for productive landscapes).

Depending on the target cloud platform, Dirigible can be integrated to use the services provided by the underlying platform.

Why Here?

The Eclipse Foundation is the natural choice for a project like Dirigible, which targets development scenarios in favor of the open-source community while at the same time aiming to be respected by enterprises.

Recent initiatives and projects related to cloud development scenarios, such as Orion, Flux and Che, make Eclipse the most viable organization in this context.

The Web IDE part of Dirigible is built on the strong foundation of Eclipse projects such as Equinox, SWT, JFace, the Remote Application Platform (RAP), Mylyn, etc.

Project Scheduling

The full set of features is planned for contribution by Q2 of 2015.

Future Work

Planned functionality:

  1. Improvement of JavaScript debugger using Web Sockets
  2. Continuous improvements on WYSIWYG HTML Editor
  3. Integration of Orion/Flux/Che Java Editor
  4. Focus on Java for In-System Programming
  5. Externalize pluggability for Repository implementations (NoSQL, etc.)
  6. Form-based editors for the project's artifacts (data structures, extensions, service definitions, etc.)
  7. Build-your-own-Dirigible - update site, target platforms, features separation, etc.

Community efforts:

  1. EclipseCon Talks
  2. Blogs, Twitter, YouTube, etc.
Initial Contribution

The initial contribution is to be completed by the end of 2014.


RISE V2G

Proposal State
Created
Background

A growing interest in technologies aiming at the integration and control of producers of (renewable) energy, energy storage devices, consumer loads, and network operating equipment in a so-called “smart grid” can be observed worldwide. This integration can be achieved through the use of intelligent information and communication technology (ICT).

At the same time, the renaissance of the electric vehicle (EV) as an enabling technology for a more sustainable and resource-saving means of transport, as well as a mobile energy storage device, is very much linked to the smart grid discussion. The breakthrough of electromobility can, however, only be achieved if the technology and communication flow related to the charging process of an EV are standardised.

The ISO/IEC 15118 standard, entitled "Road vehicles – Vehicle-to-Grid Communication Interface", is a digital, IP-based communication protocol which defines the communication between an EV and a charging station, also known as Electric Vehicle Supply Equipment (EVSE). The communication mechanisms are defined with regard to both the conductive and the inductive charging process, and allow for automated authentication, authorisation, charge control and billing based on a single contract installed in the EV, without the need for further user interaction.

The source code of this project originates from the electromobility research project iZEUS (izeus.kit.edu) at the Karlsruhe Institute of Technology (KIT). The research was funded by the German Federal Ministry of Economics and Technology in the context of the ICT for Electromobility II initiative. The project lead of this Eclipse project is also an active member of the ISO/IEC 15118 standardisation body.

Scope

RISE V2G is a Reference Implementation Supporting the Evolution of the Vehicle-2-Grid communication interface ISO/IEC 15118 which provides an interoperable communication interface between an EV and an EVSE. A rise in the wide application of this standard is essential for reaching the goal of integrating EVs as flexible energy storage devices into a smart grid.

Description

RISE V2G allows you to create an EVCC instance acting as the client, sending request messages related to the respective charging scenario, as well as an SECC instance acting as the server that responds to those requests. EVCC stands for Electric Vehicle Communication Controller (inside the EV), whereas SECC is short for Supply Equipment Communication Controller (inside the EVSE).

This project currently focuses on the implementation of part 2 (ISO/IEC 15118-2) of the standard [1], which defines the protocol requirements from the network layer up to the application layer (layers 3 to 7 of the ISO/OSI reference model) for the conductive charging scenario. As the standard describes a client/server-based protocol with the EV being the client and the EVSE being the server, this reference implementation covers both entities. The charging process according to [1] can be authenticated and authorised via a so-called plug-and-charge mechanism (PnC) or via external identification means (EIM) such as an RFID card. Furthermore, there are several message sets defined for AC (alternating current) and DC (direct current) charging. This project covers all defined message sets and identification means.
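
To ground the client/server roles described above, the following is a minimal, JDK-only Java sketch of the transport underneath an SECC: a TLS server socket awaiting a connection from an EVCC. It is an illustration only, not RISE V2G's API; the port number is arbitrary, and a real setup must still provide a keystore and the actual ISO/IEC 15118 message handling.

import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;
import javax.net.ssl.SSLSocket;

// Sketch only: the SECC acts as the (TLS) server, the EVCC as the client.
// Port 15118 is an arbitrary placeholder, and the default SSL context is
// assumed to be configured with a keystore (e.g. via javax.net.ssl.keyStore).
public class SeccTransportSketch {
    public static void main(String[] args) throws Exception {
        SSLServerSocketFactory factory =
                (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        try (SSLServerSocket secc = (SSLServerSocket) factory.createServerSocket(15118)) {
            try (SSLSocket evcc = (SSLSocket) secc.accept()) {
                // ISO/IEC 15118-2 request/response messages would be exchanged
                // over evcc.getInputStream()/evcc.getOutputStream() here.
            }
        }
    }
}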

The current status of the project consists of three subprojects which implement the conductive charging scenario:

  • the EVCC project covering its state machine and request messages
  • the SECC project covering its state machine and response messages
  • a shared project with common classes used by both entities

The overall aim of this Eclipse project is to offer a reference implementation for all parts of the ISO/IEC 15118 standard.

There are several interfaces available through which an actual EVCC or SECC instance can be realised:

  • An interface for the information exchange between the EVCC and the internal communication bus of the EV (e.g. CAN), in order to request the relevant charging parameters from the EV as well as to communicate e.g. charging profiles to the EV
  • An interface for the information exchange between the SECC and the internal controller of the EVSE, in order to request status information (e.g. about the RCD and smart meter values) or, for example, to open/close the contactors
  • An interface for the communication with a backend (e.g. for further communication via the Open Charge Point Protocol (OCPP)), to request a charging profile for the respective EV

Extensive logging through log4j is available and can be adjusted from debugging level to error level.

Certain properties regarding the EV and the EVSE can be configured in the .properties files (EV.properties and EVSE.properties, respectively) in each subproject.
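
These files use the standard Java properties format, so they can be read with the JDK's Properties API, as the sketch below illustrates; the key name shown is hypothetical and only stands in for the real configuration keys.

import java.io.FileInputStream;
import java.util.Properties;

// Sketch of reading such a configuration file with the standard JDK API.
// The key "voltage.max" is purely hypothetical; consult the actual
// EV.properties/EVSE.properties files for the real keys.
public class LoadEvConfig {
    public static void main(String[] args) throws Exception {
        Properties evConfig = new Properties();
        try (FileInputStream in = new FileInputStream("EV.properties")) {
            evConfig.load(in);   // plain key=value format
        }
        System.out.println(evConfig.getProperty("voltage.max", "400"));
    }
}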

Note that this project relies on a Java 8 runtime environment.

[1] http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=55366

Why Here?

This project strives to serve as a reference implementation with open access in order to promote the wide adoption of this international standard and to enhance the electromobility experience in terms of ease of use for end customers charging their EVs at any charging station. With the Eclipse Foundation being a respectable and trustworthy organisation with regard to its hosted projects, we encourage interested companies, research institutions and individuals to join this project and contribute to our goal.

Project Scheduling

A first release of the implementation of ISO/IEC 15118-2 regarding AC and DC charging in EIM and PnC mode is planned by February/March 2015.

AC and DC charging in EIM mode is implemented and has been tested against a number of implementations from other companies at various testivals (interoperability test events).

The implementation of secure communication via a TLS channel, digital signatures and certificates is currently under development (update from January 14, 2015). It is the project owner's intention to have a complete and fully compatible implementation of ISO/IEC 15118-2 with its first release.

Implementations regarding part 3 will be available by Q1/Q2 2015.

As the parts of the standard which define inductive charging are not yet available as an international standard - or at least as a final draft version - it cannot be stated exactly when a first implementation of those parts will be available.

Future Work

Planned functionality: see Project Scheduling. JUnit tests need to be created as well.

 

Community efforts:

  1. Building up the community by advertising this project at various companies and institutions which are interested in smart charging
  2. Blog, Twitter, e-mobility newsletters, professional articles and news on the company website
Project Leads
Committers
Interested Parties

All companies and institutes which are part of, or cooperating with, the eNterop (http://www.enterop.net/cms/index.php?page=home-en) research project. Furthermore, any company which is interested in enabling its charging stations and electric vehicles to communicate via this smart charging standard, and in testing its implementation against a reference implementation. Last but not least, any experienced software developer who wants to be part of this e-mobility project by inspecting the code and providing helpful suggestions with regard to code footprint reduction, stability and security.


Leshan

Proposal State
Created
Background

Most IoT (Internet of Things) developers these days build their solutions with an approach where Internet connectivity is taken for granted. That is, they focus very much on the application level without taking the network infrastructure into consideration. In the context of the Internet of Things, we’re talking about very constrained computing devices that very often do not offer any sort of human-machine interface for operating purposes besides the fact that they are – hopefully – connected to the Internet. In order to manage and operate large fleets of such devices, and be able to track their overall “health” (battery level, quality of the cellular signal, etc.) or upgrade them over the air, specific protocols as well as server-side infrastructure must be deployed.

 

The Open Mobile Alliance (OMA) is a standards body which develops open standards for the mobile phone industry like Multimedia Messaging Service (MMS), OMA Data Synchronization (OMA-DS) and OMA Device Management (OMA-DM).

OMA Lightweight M2M is a protocol for device and service management. The main purpose of this technology is to address service and management needs for constrained IoT devices, over a number of transports and bearers. 

 

A device which is LWM2M-ready can be managed remotely from an LWM2M server, and can therefore be e.g. rebooted, upgraded, etc.

Scope

The Eclipse Leshan project provides a complete infrastructure for building IoT solutions using OMA LWM2M and the Java programming language:

 

  • A device management server library.

  • A device management client library.

  • A device management server with a web user interface.

  • A bootstrapping server (the server in charge of the initial security configuration, or keying, of the devices).
Description

Leshan is an OMA Lightweight M2M (LWM2M) implementation in Java.

Eclipse Leshan relies on the Eclipse IoT Californium project for the CoAP and DTLS implementation. It is tested against the LWM2M C client provided by the Eclipse IoT Wakaama project.

Why Here?

Eclipse Leshan is a natural fit for the Eclipse IoT ecosystem since it builds on top of existing standard IoT protocols and implementations that already live at Eclipse. Moreover, in order to facilitate the adoption of open standards like LWM2M, it is important to leverage the IP policies of the Eclipse Foundation to provide open source implementations that have clean IP and can safely be used in commercial products.

Project Scheduling

The plan is to implement the whole OMA Lightweight M2M specification incrementally.


The main addition to the features already available on GitHub would be the implementation of the device-initiated bootstrap specification.

 

A first 0.2 release is expected in Q4 of 2014.

 

Project Leads
Committers
Julien Vermillard
Simon Bernard
Kai Hudalla
Mentors
Interested Parties

Sierra Wireless

Bosch Software Innovations

Zebra Technologies Corporation, Zatar

Intel

Vodafone

Initial Contribution

The initial code contribution consists of several Java-based components built using Apache Maven:

  • leshan-core is a Java library implementing a Lightweight M2M server. It uses Eclipse Californium for implementing the CoAP device API and the DTLS security layer.

  • leshan-standalone is a standalone Java server that uses “leshan-core” for running a Lightweight M2M server, with a web user interface.

  • The Web UI is built using AngularJS and Twitter Bootstrap. A REST API, exposed using Eclipse Jetty and Google Gson, is used by the web UI for communicating with the server.


A Dockerfile is also provided in order to facilitate the deployment of Leshan in Docker containers.
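
As a rough illustration of how the leshan-core server library might be embedded in an application, consider the minimal sketch below. Note that the builder-style class and package names used here are taken from later Leshan releases and are assumptions relative to the initial contribution described above.

import org.eclipse.leshan.server.californium.LeshanServer;
import org.eclipse.leshan.server.californium.LeshanServerBuilder;

// Minimal embedding sketch. Class/package names follow a later Leshan
// release (an assumption); the initial contribution's API may differ.
public class MinimalLwm2mServer {
    public static void main(String[] args) {
        LeshanServer server = new LeshanServerBuilder().build(); // default CoAP endpoints
        server.start(); // begins accepting LWM2M client registrations
    }
}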



Esprima

Background

Advanced ECMAScript (popularly known as JavaScript) language tooling relies on composable building blocks, among which a syntax parser is very important. Esprima was created to serve as a standard-compliant, high-performance ECMAScript parser written in ECMAScript.

Scope

Esprima offers two very simple API functions with the same input: a string that represents ECMAScript code. The first is the tokenizer function, whose return value is an array of tokens. The second is the parser function, whose return value is a tree representing the syntax (an AST). Esprima does not provide a tree traversal function, a code generator, or any other tools not related to parsing.
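
Because Esprima is itself written in ECMAScript, both API functions can even be exercised from Java 8's built-in Nashorn script engine. The sketch below is a hedged illustration: it assumes a local copy of esprima.js and that loading it defines a global esprima object.

import java.nio.file.Files;
import java.nio.file.Paths;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

// Hedged sketch: the path "esprima.js" is an assumption. The two calls
// below exercise exactly the two API functions described above.
public class EsprimaSketch {
    public static void main(String[] args) throws Exception {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("nashorn");
        js.eval(new String(Files.readAllBytes(Paths.get("esprima.js")), "UTF-8"));
        // Tokenizer function: returns an array of tokens.
        System.out.println(js.eval(
            "JSON.stringify(esprima.tokenize('var answer = 42;'))"));
        // Parser function: returns an abstract syntax tree (a Program node).
        System.out.println(js.eval(
            "JSON.stringify(esprima.parse('var answer = 42;'))"));
    }
}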

Description

Esprima is a standard-compliant, high performance ECMAScript parser written in ECMAScript.

Why Here?

Eclipse Orion has been using Esprima as the back end for its content assist. It makes sense to join the efforts and make Eclipse Orion the ultimate home for Esprima.

Project Scheduling

The current master branch is quite stable and can be released at any time. It is currently targeted to carry the version number 2.0.

Future Work

In the next 12 months, the majority of the work involves adding support for ECMAScript 6, starting with the features that are more stable and less likely to change dramatically while the ECMAScript 6 specification is being finalized.

Project Leads
Committers
Mentors
Interested Parties

Eclipse Orion.

Initial Contribution

The current code is hosted on GitHub: https://github.com/ariya/esprima. According to GitHub statistics, there are over 30 contributors. However, the majority of the code is still from me (as the maintainer).


Eclipse Titan

Proposal State
Created
Background

Titan development started in Ericsson as a research project on IP (Internet Protocol) performance testing around 2000. The functional test tool used by the company at that time implemented the earlier version of the language, TTCN-2 (Tree and Tabular Combined Notation version 2), and was not capable of performance testing. At the same time, ETSI (European Telecommunications Standards Institute) was specifying the TTCN-3 (Test and Test Control Notation version 3 - note that the name has been changed in a way leaving the abbreviation intact) test language, and it was a natural choice to use it for the new tool's prototype. In 2003 it was decided to replace the earlier TTCN-2 based test toolset with a more modern and powerful one. After a year of investigation, Titan was further developed (TTCN-3 semantic analysis, GUI, ASN.1 support, etc. were added) and TTCN-2 test suites were completely replaced by TTCN-3. The Titan toolset has been evolving continuously during the last decade, and is now a proven, industrial-strength product with over 4000 active licenses.

Titan

This project proposal will use the current name of the toolset, which is well known within Ericsson.

What Titan is:

Titan is the TTCN-3-based test toolset widely used within Ericsson, providing Eclipse-based and command line user interfaces, and multi-platform support.

It is intensively used for functional testing as well as for non-functional performance testing. It can also be used for unit testing.

Titan is an integration and execution environment for test cases generated from models.

Titan supports attended and unattended (nightly) automatic test execution as well as exploratory testing.

Titan can be used efficiently for small testing tasks as well as for huge and complex test scenarios, where the tester has to communicate with the tested entity via many interfaces at the same time within a test case. A test suite (including a test framework) of about 1M TTCN-3 LOC exists and is in use.

Titan Eclipse components contain approximately 300,000 LOC in Java, the other components contain about 1,600,000 LOC in C++ and other languages (including tests).

What Titan is not:

Titan is not a GUI testing tool.

Titan is not a test tool for low-level high capacity traffic like (Giga)Ethernet, SDH, ATM etc.

TTCN-3

TTCN-3 (Test and Test Control Notation version 3) is the standard test specification language, developed and maintained by the European Telecommunications Standards Institute (ETSI). It is also a worldwide standard, as it has been endorsed by the ITU-T without technical changes. ETSI standards and ITU-T's TTCN-3 Recommendations are available free of charge to everyone.

The portal of the TTCN-3 community is at http://www.ttcn-3.org.

A rich bibliography of TTCN-3 related papers can be found at Fraunhofer Fokus's TTCN-3 site and Bernard Stepien's website.

What TTCN-3 is:

"The standardized testing language has the look and feel of a regular programming language but without the complexity that such languages often introduce, as it concentrates exclusively on testing. There are many tutorial and courses to learn TTCN-3, as well as a certification program. The standard itself provides examples that demonstrate the usage of the specific features of the language. The aim of TTCN-3 is to provide a well-defined syntax for the definition of tests independent of any application domain. The abstract nature of a TTCN-3 test system makes it possible to adapt a test system to any test environment. This separation significantly reduces the effort required for maintenance allows experts to concentrate on what to test and not on how."

Source: http://www.ttcn-3.org/index.php/about/why-use-ttcn3

See more details in the Description section.

What TTCN-3 is not:

TTCN-3 is not a telecom-specific language. It is a general-purpose test language suited to a large variety of application domains:

  • Mobile and fixed-line communications, telecommunication networks (LTE, WiMAX, 3G, TETRA, GSM, ISDN, SS7 etc.)
  • Broadband technologies (ATM, DSL)
  • Middleware platforms (WebServices, CORBA, CCM, EJB)
  • Internet protocols, IP-based networks and applications (SIP, IMS, IPv6, SIGTRAN, XMPP, SOAP and REST based web services and many more)
  • Smart Cards, ePassport
  • Automotive (AUTOSAR, MOST, CAN)

TTCN-3 is not object-oriented; it is a procedural language. One of the design principles was to specify an easy-to-learn and easy-to-use language.

TTCN-3 is not designed for developing applications; it is a testing language. Nevertheless, several protocols have been implemented in TTCN-3, with reasonable limitations, to bridge the actually tested layer(s) and the transport layers truly available within the test tool.

Standard test suites and libraries

Several standard test suites are available. We have information that TTCN-3 is used e.g. in the automotive industry; however, our knowledge is limited to the test suites available from 3GPP and ETSI. Test suites for mobile and fixed-line communication, Intelligent Transport Services (ITS), ePassport, etc. are available from these organizations. These are produced for conformance, network integration/end-to-end and interoperability testing. See more information at http://www.ttcn-3.org/index.php/downloads/publicts/publicts-etsi.

Besides test suites, ETSI also publishes a number of TTCN-3 libraries for the IP version 6, SIP and Diameter protocols and for ITS. See the available libraries at http://www.ttcn-3.org/index.php/development/devlibraries

 

Scope

The project aims to provide an Eclipse-based IDE for a TTCN-3 based test design and execution environment. The following are within the project's scope:

  • Provide a complete test design, execution and log analysis environment for TTCN-3 within the Eclipse IDE.
  • Provide a command line test execution and result reporting interface.
  • Utilize the TTCN-3 standard.
  • Analyze TTCN-3 code quality and report metrics, code smells, code structure - all kinds of information that help users maintain robust and high-quality code.
  • Assist the users in refactoring their TTCN-3 code.
  • Allow testing of XML interfaces and applications.
  • Allow testing of JSON interfaces and developing JSON schemas.
  • Permit the ingestion of ASN.1 and IDL specifications, describing the messaging and signal structures at the tested interfaces.
  • Utilize the capabilities of other programming languages in TTCN-3 and allow other programming languages to utilize TTCN-3 and/or Titan's advantages.
  • Provide message and signal encoding/decoding functionality within the tool, to keep test cases concentrating on the test behaviour at the higher abstraction level.
  • Provide runtime features for distributed, multi-platform and load-balanced multi-process test execution on POSIX-based operating systems such as Linux, Solaris and Cygwin on Windows.
  • Provide the features to specify test execution logic, like conditional, looped, repeated execution with different sets of test data etc. 
  • Collect local test verdicts from the distributed processes (test components) of the system and calculate the overall test case verdict.
  • Provide statistics of attended or unattended test execution sessions.
  • Generate logs in different possible formats and verbosity during test execution.
  • Support continuous integration (CI), for example by providing test results for CI tools like Jenkins.
  • Provide Eclipse-based and command line log collecting and post-processing utilities.
  • Provide means for high-level test result- and detailed (low-level) log analysis.
  • Allow viewing logs in different presentation formats, like graphical, tabular/textual, etc.

To help users start with the toolset, a few popular, existing IP-based transports and protocols are also being open-sourced.

The following test ports (see the description section) are included into this proposal:

  • TCP, UDP, TELNET, SQL, PIPE (creating and executing command line shells from TTCN-3), SCTP, HTTP, PCAP (reading Wireshark traces into TTCN-3), LANL2 (handling Ethernet frames), SIP, and Abstract Socket (not a test port on its own, but a library handling the TCP sockets of the Linux kernel; it is used in our test ports and makes it easy to develop any new test port that uses TCP).

See more information on the test ports' capabilities in the Description section.

The following protocol modules are also included in this proposal:

  • DHCP, DHCPv6, DNS, DIAMETER, ICMP, ICMPv6, XMPP, RTP, RTSP, SMTP, SNMP.

The useful functions library contains non-domain-specific utilities, like reading/writing files, accessing operating system variables, system time, etc.

Any further test ports, protocol modules or generic, non-domain-specific libraries developed by contributors will be part of this project, as they are technically closely related to the tool (due to using the test port API and the message/signal encoding control of Titan).

Titan can be used for automated testing by developing test frameworks and test cases manually. But when it is integrated with modeling tools - providing a complete model-based-design AND model-based-testing environment - testing efficiency can be increased: test cases are generated instead of being developed manually, and the same environment can be used from the requirements engineering phase of system design up to testing of the system's functionality.

Test frameworks are domain-specific and often specific to the system under test (SUT), e.g. through managing the SUT via its management interface or accessing the SUT's internal status and data using embedded components. Sharing the libraries and frameworks developed by (Titan) users and contributors is important to the success of the test system, but due to their domain-specific nature they should not be part of this project; they should instead form other projects related to this one.

Description

Titan provides an Eclipse-based IDE for TTCN-3. The user of the tool can develop test cases, test execution logic and build the executable test suite for one or more platforms.

Titan

 

Titan compiles and executes test cases. It has four major roles:

* The TTCN-3 design environment provides an efficient way to create the TTCN-3 test suites (called the abstract test suite, ATS). The IDE is Eclipse‑based and is called Titan Designer.

It has editors that provide the usual IDE features for TTCN-3, ASN.1 and the Titan runtime configuration file: creating and configuring Titan projects, syntax highlighting, naming convention checking, jump to definition, on-the-fly syntax and semantic analysis, simple content assist, module outline, mark occurrences, etc. ASN.1 and XSD sources are typically not created by testers, but come from specifications. Nevertheless, Titan contains a full-featured ASN.1 editor. XSD sources can be viewed, edited and validated using any Eclipse XML editor capable of validating XSD.

The Designer also allows building the project. It has a built-in Makefile generator for GNU and GCC makes and allows setting Titan compiler and GCC compiler and linker flags. The source code is first compiled by the Titan compiler into C++ code, then an external C++ compiler and linker are called to build the executable file (which is called the host controller - HC). The whole build process is invoked and controlled by the Designer and runs in a command line shell in the background.

* The Titan compiler builds an executable test suite (ETS) from the ATS, the test port code (see below) and the Titan runtime library. The ETS is not always directly executable, as the TTCN-3 language and Titan allow very flexible runtime parameterization of the test cases (e.g. IP addresses, port numbers, etc. in the lab); the values of runtime parameters need not be defined at development time - though default values can be specified - but can be provided just before the test execution session. In this way flexible execution scenarios can be created without re-building the ETS;

* Titan runtime control has several tasks:

- the Titan Main Controller (MC) reads the runtime configuration parameters and distributes them to all test components;

- it controls the execution of test cases in a distributed multiplatform environment;

- it keeps the test system up and running: in case of a runtime error, Titan runtime control cleans up the test system, assigns an "error" verdict to the given test case and starts execution of the next test case;

- it produces logs in different formats; Titan has a logging API and logging is done via plugins. Currently we have plugins for Titan's own textual log format and for Jenkins (JUnit format). Thus, developing a new logging plugin, e.g. for LTTng, doesn't require much knowledge of Titan's code. Several logging mechanisms may be activated at the same time. Logging can be configured by verbosity and by event types. Logs can be written into files, or be sent to another process or, via a network connection, to a 3rd party tool.

Titan contains two MC implementations: a command line MC and an MC implemented in the Titan Executor Eclipse plugin.

* Command line log post-processing utilities and the Eclipse-based Titan LogViewer help in analyzing the test results. These tools currently process Titan's own log format only.

In a model-based testing (MBT) scenario, the test cases generated from the model are necessarily abstract, as the model itself does not contain low-level information. Therefore the abstract test cases have to be completed by a test harness to become executable test cases. TTCN-3 and Titan have proved in several projects to be an ideal platform for developing the test harness, for integrating the generated abstract test cases with the test harness and the test environment, and for test execution.

Though Titan is said to be "a TTCN-3 test tool", in fact it is able to use TTCN-3, ASN.1, XSD and IDL specifications describing the message and signal structures at the tested interfaces. ASN.1 is imported directly, while XSD and IDL are first converted to TTCN-3 and the generated TTCN-3 modules are then used in the projects. In the case of XSD, the TTCN-3 module is decorated with XML encoding instructions. Titan also supports codec control decorators in TTCN-3 files for binary and textual protocol encodings. Test configurations and the dynamic behaviour of the tests are written in TTCN-3. Functions written in C/C++ can also be called from the TTCN-3 code.

Titan consists of the following components, all of which are subject of this project:

 

  • Titan Designer (Java) - Eclipse plugin; TTCN-3 & ASN.1 design (advanced editing, on-the-fly syntax & semantic checking) and building the executable.
  • Titan Executor (Java) - Eclipse plugin; test execution and result reporting.
  • Titan LogViewer (Java) - Eclipse plugin; offline log representation in tabular and graphical (UML SD-like) formats.
  • Titanium (Java) - Eclipse plugin for code quality analysis (identifies code smells, draws module dependency graphs).
  • TTCN-3 and ASN.1 compiler (C++) - Command line parser, semantic analyzer and C++ code generator; input files can be TTCN-3 and ASN.1.
  • xsd2ttcn (C++) - Command line tool converting XSD documents to TTCN-3 modules, according to part 9 of the TTCN-3 standard.
  • Runtime library (C++) - Implementation of TTCN-3 language elements (types, statements, operations, predefined functions, etc.) and the ETS side of runtime control.
  • mctr_cli (C++) - Command line main controller: runtime control of component distribution and central runtime control of test execution.
  • makefilegen (C++) - Command line Makefile generator.
  • logmerge (C++) - Command line utility to merge log events, based on their timestamps, from the set of textual log files produced independently by the different test components.
  • logformat (C++) - Command line utility to pretty-format the textual log files.
  • logfilter (C++) - Command line utility for post-filtering large log files based on the kind of logged events.
  • repgen (C++) - Command line utility to present not only the formatted log files but also the description and TTCN-3 source code of test cases, as well as the output of other network monitoring programs (like tcpdump), in HTML format.
  • tcov2lcov (C++) - Titan can instrument the generated C++ code and output code coverage data in XML at runtime; this command line utility collects and merges these output files into the LCOV input format.
  • Documentation (Microsoft Word) - Installation, user and programmer's reference guides, plus the API specification.

 

Test Ports

The TTCN-3 code is generic: the interfaces between the tester and the tested entity (SUT, AUT, etc.) are specified at the level of the exchanged abstract data messages and signals. Setting up and maintaining the transport connections, and sending/receiving "real" messages and signals, are the tasks of interface adaptors. Adaptors are called test ports (TPs) and are plugins written in C++. Titan has a C++ API for adaptors that complete the ATS with the connectivity layer(s) between the test system and the SUT.

The Titan test ports (adaptors) included in this project proposal provide the following capabilities:

  • TCP (C++) - Provides communication over TCP-type connections in IP networks. It uses the Abstract Socket library and therefore supports the features of the Abstract Socket described below. Connections can be static for the duration of the test case (opened/starting to listen at the TTCN-3 map operation and closed at unmap) or can be opened/closed dynamically from the TTCN-3 code. Multiple TCP connections are supported by one port instance.
  • UDP (C++) - Provides UDP-type connections in IP networks. It maps between Titan's test port API and the Linux kernel's UDP socket services; supports opening and closing sockets and sending and receiving data. One test port can handle multiple UDP connections.
  • TELNET (C++) - Allows remote telnet login from TTCN-3 via the TCP layer of the operating system. The test port supports client and server mode operation; in server mode it can handle one connection at a time. It supports the capabilities of the Network Virtual Terminal. Telnet connection parameters (login credentials, terminal type, prompt format with or without wildcards, etc.) can be configured in Titan's runtime configuration file.
  • SQL (C++) - Executes SQL statements against an SQL database. The test port is able to handle different SQL engines and databases and provides a unified interface towards them. Currently the MySQL and SQLite C APIs are supported (and required from the database).
  • PIPE (C++) - Developed to execute shell commands from the TTCN-3 test suite. It provides abstract service primitives to communicate with the user; the stdin, stdout and stderr of the process are returned to the user.
  • SCTP (C++) - Provides SCTP-type connections in IP networks. It maps between Titan's test port API and the Linux kernel's SCTP socket services; supports opening and closing sockets and sending and receiving data. One test port instance can handle multiple SCTP connections in either server or client mode.
  • HTTP (C++) - Allows sending and receiving HTTP messages between the test suite and the SUT via a TCP/IP connection. It uses the Abstract Socket library. Both IP versions 4 and 6 are supported, and it can handle multiple connections.
  • PCAP (C++) - Has basically two operating modes. In reading mode, it can process recorded network traffic saved in a file in libpcap format (used e.g. by the open source Wireshark tool) and decode various application layer protocol messages, which are then delivered to TTCN-3. In capturing mode, the test port can capture Ethernet packets to a file, controlled from the TTCN-3 environment. Message filtering can be set in both modes.
  • LANL2 (C++) - Allows communicating with the SUT over low-level Ethernet connectivity. The test port translates the LANL2 service primitives (ASPs) and messages (PDUs) sent from the TTCN-3 code into Ethernet II frames when sending, and translates received packets into LANL2 ASPs at receipt. It uses the Packet Socket on Linux and the DLPI interface on Solaris (in DLIOCRAW mode).
  • SIP (C++) - Handles SIP connections. The test port can handle SIP request and response messages and can use both UDP and TCP connections to send and receive messages. The built-in encoder can encode SIP request and response messages, and raw and fragmented messages can be sent through the test port. Header names can be encoded in short or long format. In basic mode the test port can handle only one TCP connection or one UDP socket; it is not possible to send and receive messages using both protocols at the same time, but the test port can switch between protocols and remote hosts. In advanced mode the test port can handle several TCP connections and listen on both UDP and TCP ports at the same time; each connection is distinguished by the protocol id, remote host name and remote port number. The test port used to be compatible with ETSI's SIP type definitions, so the ETSI SIP library could be used with it; ETSI has recently made a slight change to one of its type definitions, so the test port needs to be updated to be compatible with ETSI's LibSip v2.0.
  • Abstract Socket (C++) - Not a test port, but a library that can be used to build test ports that use TCP internet protocol connections; it uses the services provided by the UNIX socket interface and implements basic sending, receiving and socket handling routines. It supports both IPv4 and IPv6, client and server modes, and also SSLv2, SSLv3 and TLSv1 secure connections (the secure protocol used is selected automatically during the SSL handshake).

 

Protocol Modules

Protocol modules implement the static data structures and message encoding/decoding defined in the specification of the given protocol. Protocols are most often specified by standards bodies for interfaces where equipment of different vendors may or must communicate with each other, but interface specifications may of course also be vendor-specific. Protocol modules can support both cases and shall be written in TTCN-3, ASN.1, XSD, or IDL (CORBA); JSON support is currently being developed. The protocol modules made available in this project proposal are all defined by the IETF, and all these specifications are publicly available.

TTCN-3

TTCN-3 is a high-level, abstract language. The code itself is platform-independent (e.g. there are no integer value range or float precision requirements in the language - these are defined by tool implementations; memory allocation is handled entirely by the tools).

TTCN-3 is test environment independent. In TTCN-3, only the abstract messages/signals exchanged between the test system and the tested entity (SUT, AUT, etc.) are defined; the transport layers and connections are provided and handled by the tools. Message/signal encoding (serialization) and decoding (deserialization) are done completely by the tool in the background. The user can, if desired, access the encoded data in TTCN-3, and the encoded data is also logged for debugging purposes. TTCN-3 also allows leaving data open in the source code and providing the actual values at execution time (typically IP and other addresses, IDs and passwords, SUT/AUT parameters for automatic test case selection, etc.). All this allows executing the test cases, unchanged, on different platforms and in different test environments.

Its main purpose is functional testing (conformance, function, integration verification, end-to-end and network integration testing), but it can also be used for performance testing. It supports testing of message-based (asynchronous), API-based (synchronous) and analog (continuous signals) interfaces and systems.

The TTCN-3 language has four major parts:

A rich data/type system

It allows describing the messages/signals of practically all possible protocols and APIs. 

Besides defining data structures in TTCN-3, the language specifies ways of importing other data type/schema languages - ASN.1, IDL and XSD, with a JSON mapping currently being defined - i.e. using them in TTCN-3 test suites without the need for manual conversion. TTCN-3 can also be used as a schema language for XML and JSON (ongoing).

As an extension, Titan also allows adding encoding instructions to TTCN-3 types to automatically encode them in binary (bit-oriented) or (simple) textual forms, XML or JSON (for ASN.1 data, the BER/CER/DER encodings are supported).

Test configuration

TTCN-3 allows defining multi-process, distributed test cases and test execution logic easily (the processes are called “test components”). It is the tools' responsibility to deploy and control the test components on the available pool of machines, possibly running different OSes. The user need not bother with the details (see also the example below). The types of the test components and their interfaces (ports) to other test components and to the tested entity are defined in TTCN-3. The number of test component instances and their port connections are controlled dynamically from the test case code, using built-in language statements.

Test dynamic behaviour

TTCN-3 is a procedural programming language specialized for testing. This means that it has the usual programming language features and, in addition, constructs needed for testing are built into the language: sending/receiving messages, checking the content of incoming messages, alternative test behaviours depending on the response of the tested entity (including no answer), handling timers and timeouts, logging of events, verdict assignment and collection from the different components of the test system, and the like.

Test execution control

Test case execution logic and dynamic test selection can be controlled from the TTCN-3 code itself.

TTCN-3 is a dynamically developing language. In 2003, at the publication of its first implemented version, it consisted of two documents (core language and operational semantics), 316 pages in total. In 2005, in TTCN-3 edition 3, tool implementation parts and interworking with ASN.1 were added, and the language specification was published in 7 documents, 846 pages in total. The first version of TTCN-3 edition 4, in 2009, added interworking with IDL and XSD, which increased the number of published pages to 948 in 8 documents (two earlier parts had been made historical in edition 4). From 2010, language extensions for real-time and performance testing, configuration and deployment, testing of interfaces with continuous signals, and some advanced language features have been published in separate language extension documents. Up to now 6 language extension packages have been published, and today, in June 2014, the whole language specification consists of 1422 pages in 14 documents. ETSI is securing the maintenance and development of the language by establishing a project team from ETSI members. The project is financed from the ETSI budget.

Example

The following TTCN-3 code shows a Hello World! example, but it doesn't simply print out the string.

The control part of the module calls the parameterized test case TC with a string parameter. TC is executed in a loop, once for each string defined in the variable vl_inputs. Each test case uses two threads (called test components): one is created implicitly and automatically when the control part of the module is started, and the testcase TC is called on this test component (during test case execution this will be the main test component - MTC). The other one is created by TC explicitly, by executing the create statement. This test component is called a parallel test component - PTC. TC then connects the ports of the two test components, commands the PTC to start executing the function f_PTC(), and sends the string received as parameter to it. The MTC then waits until the PTC finishes (the done statement) and returns to the control part.

The PTC first matches the received message against the pattern specified by the TTCN-3 template t_expected: "hello world!" in all lowercase, all capitals, or lowercase with capitalized first letters is allowed. Whitespace is allowed (also at the beginning and end of the string), but there must be at least one space between "hello" and "world". The exclamation mark is optional. If the match is successful the pass verdict, and if unsuccessful the fail verdict, is assigned to the test case and the reason is logged (the expected/not the expected message has been received, plus the message itself).

module hello_world
{
//====================== Port Types ======================
type port MYPORT message {
  inout charstring
} with {extension "internal"}

//====================== Component Types ======================
type component MYCOMP {
  port MYPORT myport
}

//====================== Constants ======================
const charstring ws0 := "(\n| )#(0,)"

//====================== Templates ======================
//Strings containing the words "hello" and "world", in all small, or
//all capital letters, or small letters with first letter capital;
//exclamation mark is optional; whitespaces allowed
template charstring t_expected :=
  pattern "{ws0}(((h|H)ello {ws0}(w|W)orld)|HELLO {ws0}WORLD){ws0}!#(0,1){ws0}"

//====================== Functions ======================
function f_PTC() runs on MYCOMP {
  alt {
    [] myport.receive(t_expected){
      setverdict(pass,"expected message received: ",t_expected)
    }
    [] myport.receive {
      setverdict(fail,"not the expected message received: ",t_expected)
    }
  }
}

//====================== Testcases ======================
testcase TC(charstring pl_toSend) runs on MYCOMP {
  var MYCOMP vl_PTC := MYCOMP.create;
  connect (self:myport, vl_PTC:myport);
  vl_PTC.start(f_PTC());
  myport.send(pl_toSend);
  vl_PTC.done
}

//=========================================================================
// Control Part
//=========================================================================
control {
  var charstring vl_inputs [6] :=
    {"HELLO WORLD!","hello world","Hello World!","hello WORD!","hELLO wORLD!","helloworld!"}
  var integer i, vl_noStrings := sizeof(vl_inputs);
  for (i:=0; i< vl_noStrings; i:=i+1){
    execute (TC(vl_inputs[i]))
  }
} // end of control

}  // end of module

 

 

A screenshot of Titan showing the results of the control part's execution is given below. The test case TC has been executed 6 times with different parameters; 3 executions resulted in a pass verdict and 3 in a fail verdict. In the left window of the middle row, the generated log file is shown. Test case results are extracted from it automatically, and by clicking on a test case its log is opened in either a graphical or a textual/tabular form. In the middle window of the upper row, the graphical view of a fragment of the last execution of TC can be seen, where the string is received by the PTC and the fail verdict is assigned. By clicking on "charstring", its content opens in a value window, shown below the graphical log window; at the same time, the source code line producing the log event is highlighted in the TTCN-3 editor (right window of the middle row). The left window of the top row allows selecting the test cases/control part to be executed and shows the actual status of the test components during test case runs.

 

Why Here?

Eclipse Foundation is a perfect place for the Titan project for the following reasons:

  • existing Eclipse IDE, code QA, test execution control and result analysis tools.
  • the PolarSys project targets robust, industry-grade toolsets; Titan is such a tool, and it targets PolarSys
  • TTCN-3 and Titan are an ideal target language and tool for model-based testing (MBT). Modeling support and tools exist in Eclipse, but there is no support for MBT yet. Titan can complement the existing modeling tools to create a complete MBT environment, from modeling to test execution and result (log) evaluation. Such a project proposal is being submitted by CEA List, with our agreement and co-operation.
Project Scheduling

The initial contribution is already available. It will be contributed as soon as the project creation process is finished successfully.

Future Work

Titan will be further developed in Ericsson, and the changed and new code will be submitted to this project. Other future developments depend on further contributors to the project and their needs.

In parallel with this project proposal, a related project is being proposed by CEA List on the topic "Model based formal verification and validation". These two proposals complement each other, with the aim of creating a complete model-based-testing environment, from model specification to test generation, test execution and test result analysis. At the current stage, the test cases generated from the model can be integrated with TTCN-3 based test harnesses and executed using Titan. This will require manually aligning the code generator's output with the test harness interfaces and/or APIs. The two tools are currently seen by the user as two separate tools. We plan closer integration later, but at this point in time the details are not yet known, as the technical discussions are still ahead of us.

We are planning to take several actions to build the community. We plan to announce the open-sourcing of Titan and promote it at the UCAAT and HUSTEF conferences in September-October 2014. We plan to create a flyer, a public website, a newsletter and an introduction video. We are also planning to disseminate information about open source Titan in the traditional TTCN-3 community places such as the ttcn-3.org website, the TTCN-3 article on Wikipedia, LinkedIn, etc. Of course, Eclipse conferences, like EclipseCon Europe 2014, are the natural places to share information and build community. The Eclipse Foundation's advice and involvement are welcome.

Project Leads
Mentors
Interested Parties

- Concordia University, Montréal: interested as users, adaptors and most probably contributors.

- CEA is interested in using Titan in a model-based testing scenario. We plan to start discussions on possible roadmaps to create an integrated toolchain for model-based testing using Papyrus, Titan and other open source tools.

- Thales is interested as a user/adaptor.

- Eötvös Loránd University, Budapest: they are already contributors today, through their work on the development of the Titanium plugin.

- Budapest University of Technology and Economics: they are interested as users and adaptors; Titan has been used for several years in their university courses (TTCN-3 is a lab measurement topic).

Discussions with further potentially interested parties are ongoing. Further interested parties will be added during the review process.

Initial Contribution

The initial contribution exists today. It will include all tool components listed in the description section, as well as the test ports, protocol modules and the useful functions library listed in the scope section. All these components are ready and widely used within Ericsson today.

Source Repository Type

Cloud Application Management Framework

Date Review
9 years 9 months ago
Date Trademark
9 years 8 months ago
Date Provisioned
9 years 7 months ago
Date Initial Contribution
9 years 6 months ago
Date Initial Contribution Approved
8 years ago
This proposal is in the Project Proposal Phase (as defined in the Eclipse Development Process) and is written to declare its intent and scope. We solicit additional participation and input from the community. Please login and add your feedback in the comments section.
Parent Project
Proposal State
Created
Background

Application deployment and management in Infrastructure as a Service (IaaS) Clouds can be a complex and time-consuming endeavor, typically requiring manual effort on the users' behalf and relying on vendor-specific, proprietary tools. Existing IaaS tools do not provide users with vendor-neutral mechanisms for describing application configuration, deployment and runtime application preferences, as well as policies that govern automatic resource adaptation. Consequently, the migration of applications between different IaaS providers requires significant re-configuration and re-deployment effort and time, leading to vendor lock-in. With the growing number of IaaS-provider service offerings and the increasing complexity of applications deployed on Clouds, the selection of the most appropriate provider to host an application becomes challenging. While seeking to identify the deployments that best suit their needs, IaaS clients need to overcome vendor lock-in in order to test and/or deploy their applications on multiple IaaS providers. Therefore, it becomes evident that there is a need for application management tools that facilitate the description of applications in a vendor-neutral manner, enabling easy application deployment, management, and migration across different providers, thereby preventing vendor lock-in. Many application management frameworks have been developed lately to support Cloud Computing. The majority are proprietary, locking their users to specific providers. Other frameworks, which are generic enough and allow management of applications on different infrastructures, are predominantly web-based, therefore lacking tight integration with unified application development and team collaboration environments such as Eclipse.

We propose a new Eclipse project, called "Cloud Application Management Framework (CAMF)", that leverages the reliable Eclipse platform for offering extensible graphical tools that enable interoperable description of Cloud applications and facilitate lifecycle management operations in a transparent and vendor-neutral manner. CAMF focuses on three distinct management operations, particularly application description, application deployment and application monitoring. To this end, it adopts the OASIS TOSCA open specification for blueprinting and packaging Cloud applications. In addition, it utilizes open-source toolkits such as Apache® jclouds for portable cross-Cloud application deployment, as well as Chef for writing "recipes" that orchestrate application configuration processes upon deployment. Furthermore, CAMF provides the necessary programming interfaces that enable Cloud developers to specify resource adaptation policies and desired actions, as well as various monitoring operations at different levels of an application's structure.

Scope

The Cloud Application Management Framework can be promoted/adopted by any Cloud-related party as a tool for configuring, deploying and managing applications on different infrastructures in a vendor-neutral manner. This is beneficial both for resource vendors and end users; the former are encouraged to enhance their existing Cloud operations with additional open standardization, with significant potential for customer-base growth; the latter are able to describe the deployment and management lifecycle of their applications with minimal effort (GUI-based), in a way that promotes smoother migration to the Cloud while avoiding vendor lock-in.

Specifically, the project aims to deliver a Cloud application management framework that provides the necessary tooling to assist with Cloud application lifecycle management operations. The following are within the project scope:

  • Develop the framework within the Eclipse RCP platform.
  • Provide graphical tools to facilitate the description of a Cloud application structure (blueprint), its principal components and inter-relationships.
  • Utilize the OASIS TOSCA open specification for encoding such graphical interactions/input into vendor-neutral instructions that define and drive the operational behavior of these applications.
  • Provide editing tools for creating open-source Chef "recipes" that automate and streamline the configuration process during the application deployment phase.
  • Integrate an abstraction library (Apache jclouds) to hide the complexity of interacting with the Cloud provider.
  • Provide the necessary programming interfaces for specifying resource adaptation policies and actions, as well as monitoring operations at different levels of an application's structure during runtime phase.
  • Provide testing and debugging information in situations where application deployments fail.

The following are out of scope for the project:

  • Develop a TOSCA processing environment.
  • Debug the Cloud application itself.
  • Develop an IDE for Chef Ruby recipes.
Description

The project aims to develop and sustain the necessary tooling that will assist Cloud application lifecycle management operations, using open standards and languages where appropriate. As mentioned above, these operations are classified into three distinct categories: (1) application description, (2) application deployment and (3) application monitoring. CAMF will follow the Eclipse OSGi plug-in-based software architecture for each of these operations and will inherit the same look-and-feel that Eclipse users are accustomed to. To guarantee the quality of the resulting product, the project will follow designated development cycles with rigorous code reviews, unit tests and release cycles.

The Cloud Application Project

Figure 1 - The Cloud Application Project view.

 

Similar to other Eclipse frameworks, CAMF organizes all the files related to a Cloud application in a structured hierarchy that utilizes the Eclipse file system (see Figure 1). A Cloud Application project acts as a placeholder for a single application and any of its runtime dependencies. To this end, it provides containers (folders) for:

  • Application Descriptions; containing application structure blueprints described using the TOSCA open specification.
  • Application Deployments; containing important historical details (past and current) about application deployments to various IaaS providers. Among others, these include date/time, the particular IaaS, a reference to a particular version of the application description, operational costs, etc.
  • Artifacts; containing artifacts required for the deployment and correct operation of the Cloud Application such as executable files and/or third-party libraries, custom virtual machine images, Chef configuration and deployment scripts, SSH keys etc.
  • Monitoring Data; containing users' custom monitoring probes. These probes interface with the underlying monitoring system and obtain information reporting the runtime health and performance of the application.

In addition, each project is associated with one or more IaaS-provider profiles, which contain important information for interfacing with and querying the offered resources of each provider. Such information includes communication endpoints, authentication credentials, granted permissions and rights, etc., which are provided manually by the user or imported via a wizard from a standardized-format input file.

Cloud Application Description

Figure 2 - The Cloud Application Management Framework User Interface while describing a 3-Tier Web-based video streaming service.

 

The OASIS TOSCA open standard provides a language for describing the structure of Cloud applications, along with their management operations. The structure of an application presents its principal components and defines any existing relationships among them. Such components are described in TOSCA by means of Nodes and are used to represent specific types of executing entities in a deployment (e.g., an application component can be a Tomcat application server in a 3-tier web application). Each Node can have certain semantics, such as: (i) Requirements against its hosting environment, (ii) the Capabilities it offers, and (iii) the Policies that govern its execution, such as resource security or elasticity. Similarly, as the name suggests, Relationships represent the associations among Nodes and have their semantics based on their individual type. TOSCA provides the necessary grammar to describe management aspects of an application by means of lifecycle operations or by more complex Management Plans. For each Node, its lifecycle operations can be defined (e.g., deploy or start an instance of the corresponding Node Type).

Nevertheless, compiling TOSCA documents (XML content) manually is a cumbersome and error-prone endeavor. CAMF's Application Modeling Tool (see Figure 2) enables the compilation of large and complex TOSCA-based application descriptions simply by following the user's graphical input. The Application Modeling Tool associates all TOSCA elements with visual elements (available via a Palette view) that are consequently used to model an application schematically. All graphical interactions with the available visual elements (drag'n'drop, move, delete, etc.) are translated on the fly into TOSCA XML content by considering the semantics of each underlying element. Inherent compilation complexities are hidden from the user, and possible errors that could be introduced via invalid graphical actions are avoided or minimized using various visual cues and textual warnings. Moreover, continuous checks are performed on the generated TOSCA XML content to ensure conformance with the standard. To facilitate the above, CAMF utilizes and improves upon the capabilities provided by the Graphiti graphics framework for generating state-of-the-art diagrams for particular domain models.

The Palette View (see Figure 2, right-hand side) acts as a front-end to a Cloud Information System for visually representing all the resources made available by one or more IaaS. The Cloud Information System provides a vendor-agnostic model for storing fundamental resource metadata that can be utilized during the application description and deployment processes. Among others, these resource metadata include handles to: available public/private VM images and corresponding flavors, available software packages, Chef recipes, application-level and resource-level monitoring probes, etc. Using the jclouds toolkit, metadata are retrieved from each IaaS "marketplace" once, during the creation of an IaaS-specific profile (endpoint and credential provision), and are subsequently refreshed at intelligently adapted time intervals. In addition, the Palette View can display information about local resources included/added by the user in the Cloud Application Project hierarchy. Finally, to minimize the information displayed and swiftly identify any required component, the Palette includes standard searching and filtering mechanisms.
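
To make the jclouds-based retrieval concrete, the following is a minimal sketch of how such marketplace metadata might be fetched (the provider ID, credentials, and surrounding class are illustrative placeholders; the actual CAMF integration may differ):

    import org.jclouds.ContextBuilder;
    import org.jclouds.compute.ComputeService;
    import org.jclouds.compute.ComputeServiceContext;
    import org.jclouds.compute.domain.Hardware;
    import org.jclouds.compute.domain.Image;

    public class MarketplaceFetcher {
        public static void main(String[] args) {
            // Build a vendor-neutral compute context; "aws-ec2" could be any
            // jclouds provider ID (e.g. "openstack-nova").
            ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
                    .credentials("accessKeyId", "secretAccessKey") // placeholders
                    .buildView(ComputeServiceContext.class);
            try {
                ComputeService compute = context.getComputeService();
                // Retrieve the provider's VM images and hardware flavors once;
                // CAMF would cache these and refresh them periodically.
                for (Image image : compute.listImages()) {
                    System.out.println("Image: " + image.getId());
                }
                for (Hardware flavor : compute.listHardwareProfiles()) {
                    System.out.println("Flavor: " + flavor.getId());
                }
            } finally {
                context.close();
            }
        }
    }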

Cloud Application Deployment

Figure 3 - The Application deployment view, showing the status of the application on Amazon AWS EC2 and an OpenStack-compliant IaaS.

 

The TOSCA description, along with the artifacts realizing all management operations of a particular application, is packaged into a single, fully self-contained archive called a Cloud Service Archive (CSAR). In the case of a TOSCA-compliant provider, CAMF enables the submission of CSARs to a dedicated endpoint, to be processed and interpreted accordingly by a TOSCA runtime environment such as the one implemented in OpenTOSCA. According to the specification, Cloud providers that wish to become TOSCA-compliant should provide a Container entity as part of their Cloud architecture (see Figure 4). This entity is responsible for communicating with an IaaS orchestrator to perform the necessary IaaS-specific operations that satisfy the respective TOSCA description.

 

Figure 4 - Exemplary Application Deployment Process at TOSCA-compliant Cloud providers.

 

In case a Cloud provider is not yet TOSCA-compliant, CAMF adheres to the standard by providing the necessary abstractions and extension points for the implementation of TOSCA containers at the tool's side. These are jclouds-based connectors for specific IaaS, which can be easily developed following other exemplary implementations, or installed (if available) from a public p2 repository. In such scenarios, the application description generated by the Application Modeling Tool is parsed locally by the implemented connector, which is responsible for executing the IaaS-specific API calls for deployment and runtime configuration of the particular Cloud application.
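
As an illustration of such an extension point, a connector contract might look roughly like the following (the interface and all of its members are hypothetical; the actual CAMF extension points may differ):

    import java.nio.file.Path;

    /**
     * Hypothetical contract for an IaaS-specific connector acting as a local
     * TOSCA container. An implementation would translate the parsed application
     * description into provider API calls, typically via jclouds.
     */
    public interface CloudConnector {

        /** Provider this connector targets, e.g. "aws-ec2" or "openstack-nova". */
        String providerId();

        /**
         * Parse the CSAR produced by the Application Modeling Tool and issue the
         * IaaS-specific calls needed to deploy and configure the application.
         * Returns an opaque deployment identifier for later status queries.
         */
        String deploy(Path csarArchive) throws DeploymentException;

        /** Query the provider for the current status of a deployment. */
        DeploymentStatus status(String deploymentId);

        /** Hypothetical status values reported back to the Deployment View. */
        enum DeploymentStatus { PENDING, RUNNING, FAILED, TERMINATED }

        /** Hypothetical checked exception for failed deployments. */
        class DeploymentException extends Exception {
            public DeploymentException(String message, Throwable cause) {
                super(message, cause);
            }
        }
    }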

 

Cloud Application Monitoring

After deployment, the application developer can interact with the Deployment View (see the lower part of Figure 3) to instantly obtain the deployment status without leaving the Eclipse environment. The Deployment View provides a snapshot of all application deployments grouped per target IaaS. Each deployment is accompanied by provider-specific properties such as component IP addresses, instance IDs, running times, etc. A background polling mechanism refreshes the view and always tries to provide the latest information from each IaaS. Finally, c-Eclipse provides interfaces so that it can be integrated with existing monitoring systems, thus enabling its users to monitor the performance of their deployed applications from a single environment. Currently, it is fully integrated with the JCatascopia Cloud monitoring system; nevertheless, JCatascopia is not a CAMF dependency and can thus be swapped with another Cloud monitoring system.
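
A minimal sketch of such a background polling mechanism, reusing the hypothetical CloudConnector interface from the deployment sketch above:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    /** Periodically refreshes deployment status without blocking the UI. */
    public class DeploymentPoller {
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        /** Poll the connector at a fixed interval (30 s here, as an example). */
        public void start(CloudConnector connector, String deploymentId) {
            scheduler.scheduleAtFixedRate(() -> {
                CloudConnector.DeploymentStatus status = connector.status(deploymentId);
                // In CAMF this would update the Deployment View on the UI thread.
                System.out.println(deploymentId + " -> " + status);
            }, 0, 30, TimeUnit.SECONDS);
        }

        public void stop() {
            scheduler.shutdownNow();
        }
    }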

Why Here?

The proposed Cloud Application Management Framework project is a natural fit for the Eclipse Foundation. In the Cloud computing era, CAMF gives application developers who currently use Eclipse as an IDE the capability to smoothly migrate their applications to the Cloud. More specifically, it provides them with the capability to deploy their application not only on one, but possibly on many candidate IaaS in the quest of seeking the infrastructure and settings that best suit their needs. The well-integrated workbench environment of Eclipse and its accompanying tooling facilitate the inclusion of various, programming-language-specific, compiled artifacts (e.g., JAR, WAR) in a Cloud Application description with just a few clicks. Furthermore, developers are able to seamlessly describe, deploy and monitor the runtime behavior of their applications without leaving the Eclipse environment, to which they are well accustomed.

Furthermore, the Eclipse platform and its satellite projects provide huge tooling advantages in the quest of building a Cloud Application Management Framework. For instance, the g-Eclipse project (now archived) provides an excellent model for abstracting resources and operations when interacting with large-scale remote distributed systems. The Graphiti and EMF projects provide extensive graphics and modeling frameworks, respectively, that enable rapid development of state-of-the-art diagram editors for domain models such as the OASIS TOSCA. The SOA platform Winery project allows modeling application structure using TOSCA. Although Winery provides a Web-based environment without a tight connection to the Eclipse workbench, collaboration with the Winery project community, and integration between the two tools and their components, is deemed very valuable.

Finally, the Eclipse Foundation has a proven record of creating and sustaining healthy environments for open source projects. It has a significantly large user base that faithfully utilizes the Eclipse platform and its satellite projects on a daily basis, while at the same time actively engaging and contributing in the Eclipse forums. In line with the efforts of the Flux and Orion projects, we aim to foster and establish a solid and experienced Cloud computing community in the Eclipse eco-system.

Future Work

We plan to grow our community in the following manner:

  • Promoting collaboration among existing open-source Cloud projects. This involves projects under the Eclipse foundation, and US, EU and/or nationally funded research projects.
  • Initiating open discussions with key Eclipse foundation projects like Flux and/or Orion.
  • Fostering a Cloud application management tools community within the Eclipse eco-system.
  • Providing rich user and developer documentation for CAMF.
  • Publishing full tutorials on the project site.
  • Establishing forums and mailing lists.
  • Organizing and attending local Eclipse Demo camps.
  • Submitting articles for EclipseCon where possible.

Some of our longer-term plans include:

  • Alignment with the upcoming TOSCA version 1.1.
  • Successful review of CAMF and exit from the incubation phase.
  • Attracting more committers.
  • Establishing CAMF and aligning with a future Eclipse release train.
Initial Contribution

The initial contribution of the Cloud Application Management Framework is available in GitHub, as work performed in the context of an EU FP7 funded research project called CELAR. We note that the project is currently referred to as "c-Eclipse" in GitHub. In the context of CELAR, c-Eclipse will be utilized in the management lifecycle of two large Cloud applications, namely a) an online game from PlayGen and b) a cancer gene detection workflow from the University of Manchester. The current implementation has a fully functional Application Modeling Tool that supports the complete OASIS TOSCA v1.0 specification. It employs descriptive resource metadata from the "marketplace" of Amazon AWS EC2 and any OpenStack-compliant IaaS provider to create a Cloud application blueprint using TOSCA. Since the two systems above do not currently support TOSCA, two exemplary connectors that act as local Containers were implemented and are provided in GitHub. Some parts of the initial design and implementation were adapted from the g-Eclipse project (now archived). Also, the exemplary implementations of the Amazon and OpenStack IaaS connectors make direct references to several third-party libraries.

Finally, each of the components in the initial contribution has a Maven nature and thus is configured to be built using Eclipse Tycho. A number of open-source Tycho plugins are used to fetch third-party dependencies from Maven and Eclipse p2 repositories in order to build a complete Eclipse feature.

Source Repository Type

Che

Date Review
9 years 10 months ago
Date Trademark
9 years 11 months ago
Date Provisioned
9 years 9 months ago
Date Initial Contribution
9 years 1 month ago
Date Initial Contribution Approved
8 years 3 months ago
This proposal is in the Project Proposal Phase (as defined in the Eclipse Development Process) and is written to declare its intent and scope. We solicit additional participation and input from the community. Please login and add your feedback in the comments section.
Project
Parent Project
Proposal State
Created
Background

Che is a project to create a platform for SAAS developer environments.  A SAAS developer environment is one that contains all of the tools, infrastructure, and processes necessary for a developer to edit, build, test, and debug an application.  Since a SAAS developer environment is fully hosted, whether it is running on your desktop, in your datacenter, or in the cloud, that environment can be cloned, embedded, and shared in ways that desktop-installed environments cannot be.  SAAS developer environments also contain the tools to orchestrate developer workflows, exposed in a variety of interfaces either through REST APIs, a browser-based client providing IDE functions, or a CLI.  The Che project contains a structured way to create server- and client-side extensions that are authored in Java, but generated as JavaScript, a set of standard developer-related REST APIs for interacting with development workflows, a large set of language & tooling extensions (Java, git, etc.), a default cloud IDE, and a developer environment cloud for scaling environments with large populations.

The success of the platform depends on how well it enables a wide range of tool builders to build best-of-breed integrated tools that run on a distributed system and are accessed by a variety of clients. But the real vision of an industry platform is only realized if these tools from different tool builders can be combined by users to suit their unique requirements, in ways that the tool builders never imagined.

Scope

The Che project provides a commercial-grade platform for building and managing SAAS developer environments, following the Eclipse RCP model of modularity.

This project includes:

  1. A structured format for extensions, authored in Java and using dependency injection, that defines both the client-side and server-side logic of the generated Web application.
  2. The Che Web Client Platform (WCP), a kernel for loading, managing, and running extensions authored in Java that get translated into client-side JavaScript and server-side Java.
  3. Tools to simplify and automate the packaging of Java extensions for deployment into the Che WCP. This is called the SDK. Also included in the Che SDK is a large set of helper APIs for creating developer-oriented applications, similar to the APIs provided by the Eclipse RCP project.
  4. A large set of language and tooling extensions covering the range of developer needs and expectations, such as tools for Java, git, deployment, and subversion.  We also envision many external extensions to be created, some of which may become sub-projects or projects of their own.
  5. A default cloud IDE authored as a Che WCP application including a large set of pre-packaged extensions, with an emphasis on Java, database, SCM, and deployment tooling.
  6. A set of synchronous REST APIs for representing the key workflows of development activity, such as Project, Builder, Debugger, and Instance management. Additionally, these APIs will ship with implementations that provide behavior for running on a desktop, and can offer different provider implementations when deployed in an enterprise cloud development platform. This will also include any desktop IDE plug-ins designed to interact with the SAAS developer environment over these APIs. The initial contribution will include an Eclipse plug-in. (A hypothetical client sketch over such APIs follows this list.)
  7. A developer environment cloud (PAAS) for the management of large populations of concurrent developers and developer environments. For the sake of simplification, we call this Che Enterprise. Che Enterprise provides a multi-server architecture for scaling developer workflows in a virtualized way. Che Enterprise is a PAAS system that will run on any public or private IAAS implementation, which will provide hardware node elasticity. Provided in Che Enterprise are cloud administration, administrator dashboards, logic to separate editing, code assistant, builder, and debugger logic onto decoupled clusters, structured access to external resources such as secure access to GitHub or an internal data warehouse required for use during development, a repository for managing users, accounts, and subscription configurations, a system for applying behavior and access policies to development workflows, and an embedded analytics system for measuring development activity and engagement.
  8. The development of a compiler and translator that takes certain EMF / GMF models and creates Che WCP editors authored in Java and GWT that are ready for deployment. This tooling can be used to create extensions that have editors that can be used in a drag-and-drop, drawing, modeling context.
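
As an illustration of item 6, a client could drive a development workflow over such REST APIs roughly as follows (the endpoint path and response shape are hypothetical illustrations, not the actual Che API):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class BuilderApiClient {
        public static void main(String[] args) throws IOException {
            // Hypothetical endpoint: trigger a build for a project over REST.
            URL url = new URL("http://localhost:8080/api/builder/my-project/build");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Accept", "application/json");

            int status = conn.getResponseCode();
            try (InputStream in = conn.getInputStream()) {
                String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                System.out.println("HTTP " + status + ": " + body);
            }
        }
    }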

This project is a sibling to the Orion project, and the two teams have discussed many areas of alignment and collaboration moving forward. We see Orion as complementary and necessary relative to Che, as each solution is optimized for different scenarios. Che and Orion will initially collaborate on APIs for communication between browser clients and server systems; finding ways to standardize plug-in formats to work within both systems; making editors interoperable (Che will use Orion editor); and standardizing on a workspace description format so that temporary environments with fully provisioned code, project, builder, runner, and plug-ins can be automatically generated in a decoupled, command-line or URL format.

Description

Many development tools that are deployed on the desktop create two challenges for large organizations: configuration and compliance. When a developer or system administrator needs to install a tool individually on each machine, or each tool needs a reconfiguration for each branch / project / issue, there are a lot of manual, step-by-step configuration tasks that a developer or admin must perform (repeatedly) to achieve a proper configuration.

Centralized cloud development systems, which orchestrate all of a developer's activity such as project creation, builders, debuggers, editors, and code assistants, offer a way for environment configuration to be automated by the centralized system or offloaded to third parties such as system administrators. The second class of issues is compliance-, audit-, and visibility-related. With development taking place on desktops, those computers become threat vectors with all of the software that gets installed; managers are unable to track usage to improve productivity; and there are limited ways to audit development activity.

Again, centralized development systems offer a way to organize this activity with a higher level of efficiency for an organization. Additionally, a cloud development system that targets the enterprise needs to exist. Enterprises depend upon an open infrastructure combined with a set of tools (plug-ins) that represent the workflows, processes, and tool integrations that they require. The existing Eclipse project has developed a broad and diverse ecosystem of tooling plug-ins that are bound to the desktop.


Architecturally, Che is an SDK, an IDE, a set of APIs, a set of plug-ins, and an enterprise cloud system for running many environments at scale with security, high availability, and structured identity management. Che is a system not only for providing a SAAS developer environment that runs locally for a single individual, but also for creating solutions that run many developer environments accessed concurrently by large developer populations. Essentially, Che is both a Web application that contains client- and server-side systems and a specialized PAAS infrastructure optimized for developer-specific workflows.

The Che kernel applies the concepts, principles and best practices of the Eclipse RCP architecture to a Web-based architecture, running in a servlet runtime. The Che kernel provides a Web Client Platform for building Web applications written in Java. These Web applications are simultaneously three parts: a browser-based Web client, a hosted server-side logic repository, and a structured REST API for allowing clients and servers to communicate with each other. Applications deployed into the Che kernel (running in the SDK) have their clients translated to optimized and cross-browser-portable JavaScript, and their server logic packaged into a servlet-style deployment. The architecture of applications is modular, with extensions structured similarly to how Eclipse extensions are built, packaged, and managed. Che Extensions, authored in Java & GWT, married with the Che Web Client Platform, create a JavaScript-optimized Web application. Extensions are both server- and client-side: a single Java-authored extension is transparently generated into both client- and server-side logic that operates seamlessly within the Che kernel.
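
A minimal sketch of what a single Java-authored extension might look like under this model (the @Extension annotation and ActionManager interface are hypothetical stand-ins for SDK types, declared inline only to illustrate the dependency-injection style):

    import javax.inject.Inject;
    import javax.inject.Singleton;

    // Hypothetical SDK types, declared here for illustration only.
    @interface Extension { String title(); }

    interface ActionManager { void registerAction(String id, Runnable action); }

    /**
     * One Java class defines the extension; the Che WCP would translate the
     * client-side parts to JavaScript via GWT while keeping server-side logic
     * in Java, so the author never leaves one language.
     */
    @Singleton
    @Extension(title = "Hello Extension")
    public class HelloExtension {

        @Inject
        public HelloExtension(ActionManager actionManager) {
            // Dependency injection wires SDK services into the extension.
            actionManager.registerAction("hello.sayHello",
                    () -> System.out.println("Hello from a Che-style extension"));
        }
    }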

Also provided with this project are a wide set of extensions for various programming languages, tools (such as git & subversion), deployment (such as Google App Engine, Heroku, IBM, and Cloud Foundry), and enterprise needs (like datasource / SQL editing). Additionally, a default cloud IDE is provided, designed as a workbench for the enterprise developer, similar to the Eclipse project, with a default packaging of extensions and developer workflow optimized around a cloud IDE environment.

The Che system is intended to run on any servlet container, and can be deployed either as a stand-alone packaging (run as a desktop application with a browser interface) or as part of an enterprise cloud platform, which turns the packaged application into a multi-tenant, secure, and scalable distributed development system. When deployed within an enterprise cloud development platform, the Che packaging (kernel + SDK + IDE + extensions) operates in a central, orchestrated fashion on a system designed to ensure HA and scalability for large developer populations, including providing the potential for large organizations to apply sophisticated behavioral / access permissions, derive intelligence from analytics of activity, and meet compliance obligations with audit / access logs.

The Che kernel generates JavaScript that is optimized for all major browsers and also certain mobile devices. The benefit of such a system is a lightweight approach that can be used to build any type of Eclipse-style extension, accessible via a browser but authored entirely in Java. With this architectural approach, the effort of migrating existing Eclipse plug-ins to a browser IDE format is reduced, as they can stay in Java and use many of the same API / SDK practices.

Additionally, this system provides a way to quickly create new distributions of IDEs that can be run on the desktop and then graduate to a full enterprise cloud platform that is centrally provisioned and managed.

Why Here?

With our target being both individual developers and large enterprises, Che needs an organization and set of processes that allow it to achieve maturity and exposure to a global community of developers and contributors who can build the extensions necessary to drive adoption within enterprises.

Che started in 2009 at eXo Platform, and eventually spun out to Codenvy at the beginning of 2013. Che has attracted developers steadily, with up to 38 people having contributed to the project so far. Che has been deployed at Codenvy.com for a while; 125,000 developers have used the system, and thousands of them have encouraged openness and provided input into its future direction.

Eclipse provides a well-defined governance structure and development process. Eclipse also provides a very rich ecosystem of developers that Che wants to attract and retain. Becoming an Eclipse project is the next logical step for Che to mature, increase its visibility, and attract other contributors and companies to an open source environment.

Che could especially contribute to the Orion and Flux projects, which are solving problems in the same space. Che is strongly based on the Eclipse platform, using several of its technologies (e.g., the JDT libraries). Furthermore, existing projects like Orion and Flux can benefit from the collaboration, as Che provides infrastructure components both could use, or which could be developed jointly, reducing the effort for all.

 

Project Scheduling

Che plans to finish the initial contribution by the end of Q3 2014 using the 1.1 release.  The contributions could take many quarters to complete.  The Che SDK + WCP + IDE + plug-ins will be the first contribution done throughout 2014.  The Che Enterprise system will be done throughout 2015.

 

Future Work


 

Committers
Alexander Garagatyi
Anatoliy Bazko
Anna Shumilova
Dmitry Kuleshov
Dmytro Nochevnov
Evgen Vidolob
Max Shaposhnik
Maxim Musienko
Oleksii Orel
Roman Iuvshin
Roman Nikitenko
Sergey Kabashnyuk
Sergey Leschenko
Sun Seng David Tan
Valeriy Svydenko
Vitalii Parfonov
Vitaliy Guliy
Vladyslav Zhukovskii
Artem Zatsarynnyy
Stephane Tournie
Mentors
Interested Parties

The following individuals, organisations, companies and projects have expressed interest in this project:  SERLI, WSO2, Nuxeo, Eclipse Orion, Eclipse Flux, Jeremy Whitlock (Apigee), TaskTop, RedHat, SAP Dirigible Team

 

Initial Contribution

The initial contribution for Che will consist of all current open source code available at the Codenvy SDK repositories. This includes the Codenvy SDK (runtime for managing plug-ins and tools to create packaged Web client applications), Codenvy IDE (a default application with extensions for developers and many predefined developer workflows), Codenvy Plug-Ins (a wide gamut of extensions supporting programming languages, tools, and developer workflow extensions), and Codenvy Platform API (the REST interfaces and implementation of those interfaces within server-side components for stand alone implementations). The components of Codenvy Enterprise will be open sourced and contributed starting in 2015 after a number of proprietary IP restrictions are resolved.

 

Source Repository Type

score

Date Review
9 years 9 months ago
Date Trademark
9 years 9 months ago
Date Provisioned
9 years 6 months ago
Date Initial Contribution
9 years 5 months ago
This proposal is in the Project Proposal Phase (as defined in the Eclipse Development Process) and is written to declare its intent and scope. We solicit additional participation and input from the community. Please login and add your feedback in the comments section.
Project
Parent Project
Proposal State
Created
Background

Process orchestration and automation of tasks require a solid and scalable engine.

With score, we aim to provide an open-source, generic orchestration engine that can be used in multiple environments and scenarios such as: cloud setup and maintenance, build systems, QA, and many more.

score's code base comes from HP's Operations Orchestration product.

Scope

score is a generic workflow engine based on Java. It supports multiple common orchestration languages. The following are within the project's scope:

 

  • A scalable core engine that can run execution plans; these may be created either in code or by the provided compilers.
  • Provide compilers that can convert a given workflow description format (including XML/JSON) to an execution plan that the engine can execute.
  • Provide example AFL (Advanced Flow Language) content. These are workflows that are compiled using the AFL orchestration language.
  • Provide out-of-the-box (OOTB) content that can be run in the engine. This includes common actions written in Java, score's native language.
  • Support multiple common orchestration languages. AFL is already included, while a BPMN compiler is on the project's roadmap.
  • Provide a standard compiler interface that orchestration languages will use.
  • To encourage the adoption of the engine, score provides comprehensive documentation and code samples using the engine to showcase its versatility.

Content management and visual history reporting capabilities are out of scope.

Description

score is a generic engine that is able to execute workflows. A workflow has a logical structure that resembles a flow-chart.

Workflows contain both logical operations and actions. A workflow must be compiled using one of the available orchestration languages before use.

A compiled workflow is known as content and can take several forms such as jar files or a binary object stored in a database.

 

The fundamental architecture of the project consists of:

  • Worker - The unit that actually executes the steps from the execution plan. Execution logic is optimized for high throughput and is horizontally scalable.
  • Orchestrator - A queue-based work distribution mechanism. Highly available and horizontally scalable.
  • Persistency for cluster management - Allowing a cluster of orchestrator and worker nodes. Optional for simple single-node deployments.
  • Compiler - Compiles a given flow format into an execution plan that can be executed by the workers. The introduction of a new flow-format language is achieved by plugging in the right compiler (a sketch of such an interface follows this list). Currently, an AFL compiler is implemented out of the box.
  • Remote worker - A worker with remote communication abilities. Can be used to execute work across firewalls.
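
A minimal sketch of what that standard compiler interface might look like (all names are hypothetical; the actual score API may differ):

    /**
     * Hypothetical contract for score language compilers. Each supported
     * orchestration language (AFL today, BPMN later) would ship one
     * implementation that turns its source format into an execution plan
     * the workers can run.
     */
    public interface FlowCompiler {

        /** Language handled by this compiler, e.g. "afl" or "bpmn". */
        String language();

        /**
         * Compile a workflow source (XML/JSON/...) into an execution plan.
         * ExecutionPlan stands in for score's internal representation.
         */
        ExecutionPlan compile(String workflowSource) throws CompilationException;

        /** Placeholder for score's internal execution-plan representation. */
        interface ExecutionPlan { }

        /** Hypothetical checked exception for invalid workflow sources. */
        class CompilationException extends Exception {
            public CompilationException(String message) { super(message); }
        }
    }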

 

Note that score deployment has two flavours:

  • Simple flavour - Consists of a single-node deployment of orchestrator and worker in the same runtime container. No external DB is required.
  • Distributed flavour - A highly available and scalable deployment that requires an external DB schema and a servlet container for hosting the orchestrator node(s).
Why Here?

score, being an engine, is most useful when integrated into other tools and/or frameworks based on different use-cases.

We believe that Eclipse Foundation is the right place for score to be exposed to developers seeking an orchestration engine.

This will also offer an opportunity for score to be inherently used as part of core projects within Eclipse.

For example, the Eclipse IDE could be used to author score flows. When the BPMN compiler is implemented, it will process BPMN-standard XML into an execution plan that score can execute.

This will allow flows modeled in the BPMN2 project to be utilized. Stardust uses XPDL, which extends the BPMN language, so if a compiler supporting XPDL were developed, users would be able to create a flow in the Stardust modeler and then execute it in score.

 

Project Scheduling

We aim to release a beta version towards the end of 2014.

Future Work

In our roadmap, over the next 1 to 1.5 years we plan to provide:

  • A workflow engine that can execute content in multiple languages.
  • OOTB content that will provide value to users and support single-node deployment.
  • OOTB flows that will enable users to orchestrate Docker capabilities.
  • Authoring of score workflows within the Eclipse IDE.

 

Community Building

In order to grow the community, we plan to:

  1. Provide examples of project usage.
  2. Create a website to promote score.
  3. Integrate with other software solutions.
Initial Contribution

The code is currently part of HP's software products, under the name "HP Operations Orchestration". Copyright is held by HP Software, and the code has been approved for open-sourcing.

Currently, the community around the code consists only of HP Software employees who work on the HP OO product.

Source Repository Type

Moquette MQTT

Date Review
10 years 1 month ago
Date Trademark
10 years 1 month ago
Date Provisioned
9 years 8 months ago
This proposal is in the Project Proposal Phase (as defined in the Eclipse Development Process) and is written to declare its intent and scope. We solicit additional participation and input from the community. Please login and add your feedback in the comments section.
Project
Parent Project
Working Group
Proposal State
Created
Background

In the last couple of years, machine-to-machine communication and the monitoring needs of remotely controlled devices have raised interest among hobbyists and attracted more developers to the Internet of Things world. Moquette MQTT is positioned in this scenario and proposes to create a simple, small, self-contained Java implementation of an MQTT broker.



The current version is 0.5; it's gaining interest in the MQTT community.

Scope

Moquette is a fully compliant, lightweight, Java-based MQTT message broker that can be easily configured. The project maintains compliance with the evolution of the MQTT protocol specification.

Description

Moquette is a Java implementation of an MQTT 3.1 broker. Its code base is small. At its core, Moquette is an event processor; this keeps the code base simple and avoids thread-sharing issues.

The Moquette broker is lightweight and easy to understand, so it can be embedded in other projects. By default it runs standalone, but it can be integrated into an OSGi container to create more significant integrations, for example running inside an embedded OSGi framework like Concierge.
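
A minimal sketch of embedding the broker in a host application, assuming an API along the lines of later Moquette releases (the class name, method signatures and property keys are assumptions and may differ in the version described here):

    import java.util.Properties;
    import io.moquette.broker.Server;

    public class EmbeddedBroker {
        public static void main(String[] args) throws Exception {
            // Minimal in-memory configuration; the property keys shown here
            // follow later Moquette releases and are an assumption.
            Properties config = new Properties();
            config.setProperty("host", "0.0.0.0");
            config.setProperty("port", "1883");

            Server broker = new Server();
            broker.startServer(config);
            System.out.println("Moquette broker listening on port 1883");

            // Stop the broker cleanly on JVM shutdown.
            Runtime.getRuntime().addShutdownHook(new Thread(broker::stopServer));
        }
    }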

Why Here?

The Eclipse Foundation focuses on building great communities around great projects, and is also becoming the home of many IoT-related projects, for example Paho and Mosquitto, to name the prominent examples. Having a freely available, pure MQTT broker written in the Java language with a commercial-friendly license could be a great win for the growing IoT community, and for the Moquette broker in sustaining its growth and adoption in IoT-related projects.

Project Leads
Committers
Interested Parties

 

  •  Kai Kreuzer, Eclipse SmartHome Project Lead
  • openHAB UG
Initial Contribution

The code is written mainly in Java, with some Groovy-scripted integration utilities/use cases, in the form of a multi-module Maven project. The code has been written mainly by a single committer and project lead, Andrea Selva, with some suggested bugfixes in the form of issue notifications. The source code is freely available at http://code.google.com/p/moquette-mqtt

Source Repository Type

DAWNSci

Date Review
10 years 1 month ago
Date Trademark
10 years ago
Date Provisioned
9 years 11 months ago
Date Initial Contribution
9 years 6 months ago
Date Initial Contribution Approved
7 years 5 months ago
This proposal is in the Project Proposal Phase (as defined in the Eclipse Development Process) and is written to declare its intent and scope. We solicit additional participation and input from the community. Please login and add your feedback in the comments section.
Project
Parent Project
Proposal State
Created
Background

Scientific software on the Eclipse Platform is common and widespread. Because the Eclipse Rich Client Platform (RCP) is a feature-rich and productive environment, many science organizations have picked it up and run with it; today many functional applications exist in this sector. The architecture of RCP means that much of the user interface can be interchanged and is, to some extent, inter-operable. So, for instance, a science project wanting to support Python can add the PyDev feature to its product and extend functionality quickly at little cost. When it comes to inter-operable algorithms and inter-operable plotting, however, this is not the case: each science project has its own definitions. This means that projects cannot profit from each other's work or make serendipitous discoveries with an unexpectedly useful tool. The DAWNSci project is one option for solving these issues.

Scope

The DAWNSci project defines Java interfaces for data description, plotting and plot tools, data slicing and file loading. It defines an architecture oriented around OSGi services to do this. It provides a reference implementation and examples for the interfaces.

Description

This project provides features that allow scientific software to be inter-operable. Algorithms exist today that could be shared between existing Eclipse-based applications; however, in practice they describe data and do plotting using specific APIs that are neither inter-operable nor interchangeable. This project defines a structure to make this a thing of the past. It defines an inter-operable layer upon which to build RCP applications, such that your user interface, data or plotting can be reused by others.
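
As a flavor of what programming against such inter-operable interfaces could look like, here is a minimal sketch (both interfaces are hypothetical simplifications, loosely modeled on the plugin names listed under Initial Contribution; the real DAWNSci APIs may differ):

    /** Hypothetical numpy-like dataset interface (cf. the data description layer). */
    interface IDataset {
        int[] getShape();
        double getDouble(int... position);
    }

    /** Hypothetical plotting service (cf. the org.dawnsci.plotting.api plugin). */
    interface IPlottingService {
        void plotLine(String title, IDataset data);
    }

    public class MeanAndPlot {
        /**
         * An algorithm written only against the interfaces: it can run in any
         * RCP application that supplies IDataset and IPlottingService
         * implementations via OSGi services.
         */
        public static double meanAndPlot(IDataset data, IPlottingService plotting) {
            int n = data.getShape()[0];
            double sum = 0;
            for (int i = 0; i < n; i++) {
                sum += data.getDouble(i);
            }
            plotting.plotLine("Input data", data);
            return sum / n;
        }
    }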

Why Here?

The Eclipse Foundation is the right place for DAWNSci to collaborate because of the Science Working Group. This group has attracted several universities and software companies. The Eclipse Foundation offers the greatest opportunity for discovering new projects that DAWNSci can make use of, adding value for the scientists working at or visiting Diamond and the ESRF.

Project Scheduling

The DAWN software will have releases scheduled around the shutdown phases of the Diamond synchrotron. The sub-components which are part of the DAWN Eclipse project, DAWNSci, will go through the same cycle. The releases are maintained by Diamond Light Source and are not hosted on the Eclipse web site.

Future Work

Implementations of:

1. Data description (numpy-like layer)

2. Plotting (based on Nebula and in-house code)

3. Plot tools

4. Slicing

5. File loading, including HDF5

Project Leads
Committers
Mark Basham
Baha El-Kassaby
Irakli Sikharulidze
Mentors
Interested Parties

Members of the science working group.

Initial Contribution

The copyright of the initial contribution is held ~100% by Diamond Light Source Ltd. There may be some sections where copyright is held jointly between the European Synchrotron Radiation Facility and Diamond Light Source Ltd. No individual people or other companies own copyright of the initial contribution. Expected future contributions, such as the implementations of various interfaces, will have to be dealt with as they arrive. Currently none are planned where the copyright is not held by the European Synchrotron Radiation Facility and/or Diamond Light Source Ltd.

Plugins:
org.dawb.common.services
org.dawb.hdf5
org.dawb.hdf5.test
org.dawnsci.doe
org.dawnsci.plotting.api
org.dawnsci.plotting.examples
org.dawnsci.slicing.api
uk.ac.diamond.scisoft.analysis.api
uk.ac.diamond.scisoft.analysis.dataset

Third party plugin which is a dependency:
ncsa.hdf

Source Repository Type

The initial contribution is actually in the repo dawn-eclipse and the plugins start with org.eclipse.dawnsci.