Eclipse Trace Compass 2.0.0

Trace Compass 2.0 focuses on new features, bug fixes, and API additions and clean-up.

Some of the new features are:

  • New extension points

    In this release, two new extension points have been added:
    • org.eclipse.tracecompass.tmf.core.analysis.ondemand

      This extension point is used to provide on-demand analyses. Unlike regular analyses, on-demand analyses are executed only when the user explicitly requests them.
    • org.eclipse.tracecompass.tmf.ui.symbolProvider

      This extension point can be used to transform symbol addresses found inside a TmfTrace into human-readable text, for example function names (used, for instance, by the Call Stack view). A minimal sketch of such a provider appears after the feature list below.
  • Integration with LTTng-Analyses
    • Project View additions
    • Running an analysis
    • Result reports
    • Creating custom charts
    • Importing custom analyses
  • Data-driven pattern detection

    The data-driven analyses and views of Trace Compass have been augmented to support data-driven pattern analysis. With this, it is possible to define a pattern-matching algorithm and populate the Timing Analysis views. Other types of latency or timing analyses can now be defined using this approach.
  • Time graph view improvements
    • Grid lines in time graph views

      Grid lines have been added to time graph views such as the Control Flow view.
    • Persist settings (e.g. filters) for open traces
    • Support for searching within time graph views (e.g. searching for a process in the Control Flow view)
    • Support of vertical zooming in time graph views
    • Bookmarks and custom markers in time graph views
    • Horizontal scrolling using Shift+MouseWheel
  • System call latency analysis
    • System call latency table
    • System call latency scatter graph
    • System call latency statistics
    • System call latency density
  • Critical flow view

    A critical flow analysis and view have been added to show dependency chains for a given process.
  • Virtual CPU view (analysis of virtual machines)
  • Bookmarks and custom markers
    • Support for user bookmarks in time graph views
    • Lost event markers in time graph views
    • Navigation for trace markers in time graph views
    • API for trace specific markers
  • Resources view improvements
    • Display of soft IRQ names in the Resources view
    • Execution context and state aggregation
    • Following a single CPU across views (per CPU filtering)
  • Control Flow view Improvements
    • Support for sorting of processes based on columns in the Control Flow view
    • Following single thread across views (thread filtering)
    • Grouping threads under a trace parent
    • Selecting flat or hierarchical thread presentation
  • CPU Usage view improvements
    • Per CPU filtering in CPU Usage view
  • Kernel memory usage analysis and view
  • Linux Input/Output analysis and I/O Activity view
  • Manage XML analysis files
  • Display of analysis properties
  • Importing traces as experiment
  • Importing LTTng traces as experiment from Control view
  • Events Table filtering UI improvement
  • Symbol provider for Call stack view
  • Per CPU thread 0
  • Updated analysis requirement API
    • Clean-up of the analysis requirement API
    • Added API to support requirements on event types and event fields. Using event field requirements, it is possible to define analysis requirements on LTTng event contexts (a conceptual sketch appears after the feature list below)
    • Updated LTTng UST Call Stack and Kernel CPU Usage analysis requirement implementation
  • Pie charts in Statistics view
  • Support for LTTng 2.8+ Debug Info
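
As an illustration of the symbolProvider extension point listed above, the following minimal Java sketch resolves raw symbol addresses to function names. The class and method names are assumptions made for illustration only; the actual Trace Compass 2.0 interfaces behind the extension point should be consulted when writing a real provider.

  import java.util.HashMap;
  import java.util.Map;

  /*
   * Minimal sketch of a symbol provider that could back the
   * org.eclipse.tracecompass.tmf.ui.symbolProvider extension point.
   * The interface it would implement (an ISymbolProvider-like contract)
   * is assumed; only the address-to-name resolution logic is shown.
   */
  public class SimpleFunctionNameProvider {

      /* Hypothetical address-to-name table, e.g. loaded from a symbol file. */
      private final Map<Long, String> symbols = new HashMap<>();

      public SimpleFunctionNameProvider() {
          symbols.put(0x400526L, "main");
          symbols.put(0x4005a0L, "compute_checksum");
      }

      /*
       * Resolve a raw address found in a TmfTrace event to a human-readable
       * name, falling back to the hexadecimal address when the address is
       * not known.
       */
      public String getSymbolText(long address) {
          String name = symbols.get(address);
          return (name != null) ? name : String.format("0x%x", address);
      }
  }

With such a provider registered, a Call Stack view would display "main" or "compute_checksum" instead of the corresponding raw addresses.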
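
The updated analysis requirement API can be pictured with the following self-contained sketch, which models a requirement on an event type and its fields, as mentioned in the analysis requirement item above. The class name and structure are illustrative assumptions, not the actual Trace Compass 2.0 requirement classes; the sketch only conveys the idea that an analysis declares which events and fields (for example LTTng event contexts) it needs before it can run on a trace.

  import java.util.Map;
  import java.util.Set;

  /*
   * Conceptual model of an analysis requirement: an analysis declares an
   * event name and the event fields it needs, and a trace is checked
   * against that declaration before the analysis is scheduled. All names
   * below are hypothetical and for illustration only.
   */
  public class EventFieldRequirement {

      private final String eventName;
      private final Set<String> requiredFields;

      public EventFieldRequirement(String eventName, Set<String> requiredFields) {
          this.eventName = eventName;
          this.requiredFields = requiredFields;
      }

      /*
       * The requirement is fulfilled if the trace provides the event and
       * that event carries all of the required fields.
       */
      public boolean isFulfilled(Map<String, Set<String>> eventsInTrace) {
          Set<String> provided = eventsInTrace.get(eventName);
          return provided != null && provided.containsAll(requiredFields);
      }
  }

For example, a CPU usage analysis could declare a requirement on the sched_switch event with the prev_tid and next_tid fields, and would be reported as unavailable on traces that do not provide them.
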
Compatibility

New APIs were added in Trace Compass 2.0, and existing APIs were cleaned up for this release.

The feature "LTTng Live trace reading" has been disabled for Trace Compass 2.0. It was decided to disable the LTTng Live trace support because of the shortcomings documented in bug 486728. We look forward to addressing the shortcomings in the future releases in order to re-enable the feature.

This release is part of the Eclipse Neon simultaneous release.