
Providing Feedback on Performance Sessions

The HeadSpin AI-based issue detection system is designed to work with humans to debug issues in the actual and perceived performance of distributed applications. We use AI to convert heuristic expert systems into learning systems that can be improved with user feedback. If you see an area where we can improve, you can provide feedback directly in the Waterfall UI or via the Session Annotation API. This document outlines how to provide useful feedback on HeadSpin Performance Session data.

Description of a Feedback Annotation

Feedback on HeadSpin Performance Sessions leverages our Session Annotation feature to surface suggestions, areas for improvement, or errors to the HeadSpin team. As described in the Session Annotation API documentation, we currently support the following types of feedback (a sketch of submitting one programmatically follows the list):

  • <code class="dcode">Suggestion</code>: A tip, hint, or friendly request regarding ways HeadSpin could better meet your needs.
  • <code class="dcode">Needs Improvement</code>: Feedback that an existing solution is insufficient or inadequate and needs to be improved to be useful.
  • <code class="dcode">Report an Error</code>: Feedback that the existing solution is broken or unusable.
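Feedback annotations can also be attached programmatically through the Session Annotation API. The sketch below is only illustrative: the endpoint path, field names, and time format are assumptions made here for the example, so consult the Session Annotation API documentation for the exact request schema.

    import requests

    API_TOKEN = "<your HeadSpin API token>"
    SESSION_ID = "<performance session id>"

    # NOTE: the endpoint path and field names are assumptions for illustration;
    # see the Session Annotation API documentation for the exact request format.
    response = requests.post(
        f"https://api-dev.headspin.io/v0/sessions/{SESSION_ID}/annotations",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "type": "Suggestion",              # or "Needs Improvement" / "Report an Error"
            "name": "short summary of the feedback",
            "category": "affected analysis or data",
            "start_time": 12.5,                # seconds into the session (assumed format)
            "end_time": 18.0,
            "data": "Any additional free-form context for the HeadSpin team.",
        },
    )
    response.raise_for_status()
    print(response.json())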

Providing Feedback Directly in the Waterfall UI

On the timeline, click and drag your mouse across the region on which you would like to provide feedback. This opens the Create Label popup with the Start Time and End Time pre-filled to match the highlighted region. From the Type drop-down menu, select the feedback label type that best describes the feedback you wish to provide. Note that the User label type is reserved for custom annotations not associated with feedback on Session data.

[Image: label types available in the Type drop-down of the Create Label popup]

Fill out the required <code class="dcode">Name</code> field as well as any of the optional fields that may be useful, and then click the <code class="dcode">Submit</code> button to create the annotation.

Guidelines for Providing the Most Useful Feedback

Generally, feedback will be most useful if it is limited in scope and specific to a single issue. For example, if an existing analysis such as the <code class="dcode">Loading Animation</code> issue card fails to detect an animated loading icon, the most useful feedback would limit the feedback time interval to the region of the timeline where the animation appears. Feedback should indicate the affected analysis or data in the label <code class="dcode">Category</code> (<code class="dcode">loading animation</code> in this example) and summarize the nature of the issue in the label <code class="dcode">Name</code>. In this example the loading animation card failed to identify an issue and thus produced a <code class="dcode">false negative</code> in the issue classification. Finally, feedback should provide any necessary additional context in the <code class="dcode">Data</code> field. An example of this feedback is shown below.

[Image: example feedback annotation for the loading animation false negative]
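For reference, the annotation fields for the loading-animation example above might look roughly like the following. The field names and time values are assumptions used for illustration; the authoritative schema is in the Session Annotation API documentation.

    # Illustrative annotation fields for the loading-animation false negative.
    feedback = {
        "type": "Needs Improvement",
        "name": "false negative: loading animation not flagged",
        "category": "loading animation",   # the affected analysis
        "start_time": 42.0,                # limit the interval to where the spinner is visible
        "end_time": 47.5,
        "data": "Animated spinner is visible in the capture, but no Loading Animation issue card was generated.",
    }

Whether the annotation is created through the Create Label popup or submitted with a request like the sketch shown earlier, the same <code class="dcode">Category</code>, <code class="dcode">Name</code>, and <code class="dcode">Data</code> conventions apply.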