While artificial intelligence (AI) concepts and implementations have been around for decades, only in recent years have breakthroughs begun to make an impact across a wide range of industries and domains. For more than five years, the data science team at HeadSpin has been helping customers harness the massive potential of AI, and today HeadSpin customers can reap the rewards of that effort. In this post, we offer a Q&A with Brian Perea, our Director of Data Science, who shares insights into the past, present, and future of AI in HeadSpin solutions.
What is the core set of problems HeadSpin is tackling with AI?
Fundamentally, we’re focused on the problem of understanding the user experience. To do this, we’re applying AI to analyze the results of performance tests run by HeadSpin-enabled devices. Any time a test case is run on a device connected to the HeadSpin Digital Intelligence Platform™, our AI-based issue detection engine runs a suite of analytics to identify, quantify, and prioritize issues observed during the test session. Our analytics suite provides insight into end-user experience issues, such as long loading times or drops in video quality, as well as issues associated with server misconfiguration and networking errors, client-side app performance, and device performance. These insights and supporting metrics are surfaced on a common timeline in our performance session reports. With these reports, customers can correlate high-level issues (for example, an image failing to load) with root causes (such as an HTTP error code indicating that the image was not available). In combination with our global device cloud, the HeadSpin platform’s AI can deliver unprecedented 24/7 insight into issues affecting the quality of experience across applications, devices, networks, and locations.
To maximize the value AI delivers, we’re focused on three key objectives:
- Ensuring the way we quantify user experience and performance is closely aligned with the perceptions and expectations of end users.
- Continually tuning our AI systems so they can accurately model new apps and functionality.
- Providing developers with actionable insights into the root causes of issues and how they can be addressed and preempted. To do so, we need to establish effective baselines for user experience and app performance, and track how performance compares to these baselines over time.
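To make the third objective concrete, here is a minimal sketch of the baseline idea under simple assumptions: summarize a test's historical results into a baseline and flag runs that fall well outside it. The metric, data, and threshold here are hypothetical illustrations, not HeadSpin's actual implementation.

```python
from statistics import mean, stdev

def build_baseline(historical_load_times_ms):
    """Summarize past runs of a test into a (mean, std dev) baseline."""
    return mean(historical_load_times_ms), stdev(historical_load_times_ms)

def is_regression(current_ms, baseline, k=3.0):
    """Flag a run whose load time exceeds the baseline mean by k std devs."""
    mu, sigma = baseline
    return current_ms > mu + k * sigma

history = [1210, 1185, 1250, 1198, 1232]  # hypothetical past load times (ms)
print(is_regression(1900, build_baseline(history)))  # True: far outside the norm
```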
Why use AI instead of traditional methods?
Performance issues that are obvious to end users are usually not trivial to identify using heuristics or rule-based approaches. Loading animations offer a great example of the kind of significant benefits AI can deliver. These animations are a common way an app indicates that content is loading, for example, when a video stream is buffering. They are context-specific and vary widely by app and device, ranging from the simple spinning graphics common in iOS and Android mobile apps to highly complex animations such as a visualization of a growing tree or an animated airplane.
Visual load time is a critical KPI for many of our customers. To measure visual load time as perceived by an end user, customers would need to manually view and annotate each session or perform a complicated visual match between curated loading animations for an application and a screen recording. Since loading animations are by definition constantly changing, either approach is tremendously difficult and time-consuming. In contrast, based on recorded animations from thousands of tests, our AI-based issue detection engine has learned to identify a wide variety of loading animations. Every time a test is run on a HeadSpin-enabled device, our issue detection engine automatically classifies these animations and surfaces the associated metrics.
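For intuition only, here is one simple heuristic for spotting a loading animation in a screen recording: a small patch of the screen keeps changing while the rest stays still. The thresholds and synthetic frames below are assumptions for illustration; HeadSpin's production detector is a model learned from recordings of thousands of real tests, not this heuristic.

```python
import numpy as np

def looks_like_loading(frames, active_thresh=0.001, static_thresh=0.10):
    """Heuristic: most consecutive frame pairs differ in a small region only.

    frames: list of equal-sized grayscale frames (2-D uint8 numpy arrays).
    """
    hits = 0
    for prev, curr in zip(frames, frames[1:]):
        diff = np.abs(prev.astype(int) - curr.astype(int))
        changed = (diff > 25).mean()  # fraction of pixels that changed
        # A spinner animates a small patch, not the whole frame.
        if active_thresh < changed < static_thresh:
            hits += 1
    return hits > 0.8 * max(len(frames) - 1, 1)

# Synthetic demo: a static screen with a 40x40 "spinner" patch that
# changes on every frame.
rng = np.random.default_rng(0)
frames = []
for _ in range(30):
    frame = np.zeros((360, 640), dtype=np.uint8)
    frame[160:200, 300:340] = rng.integers(0, 256, (40, 40), dtype=np.uint8)
    frames.append(frame)
print(looks_like_loading(frames))  # True
```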
Video quality is another example of how our AI models can provide significant advantages. HeadSpin offers capabilities for measuring the mean opinion score (MOS) of video content. A MOS is a holistic score that represents the perceptual quality of a video as perceived by an end user. With these capabilities, the AI engine can measure how streaming video quality evolves over the course of a test and flag regions of poor user experience without any reference to the source video content. Without AI, a team would either have to show a video to a pool of users and aggregate their feedback or curate high-quality reference videos for each video to be evaluated. Not only are full-reference video quality metrics expensive and difficult to maintain, but many rich media applications, such as live video and game streaming, have no reference to compare against.
Our reference-free MOS model is backed by the largest video quality data set of its kind, collected from real devices on local networks in the HeadSpin device cloud. These videos were shown to real users, whose feedback was used to calibrate an AI model against real user experiences. By pooling feedback from thousands of subjective quality scores on diverse video content, our model can estimate the subjective quality score of video content it has never seen before. In combination with our suite of reference-free video quality metrics, including blockiness, blurriness, contrast, and more, our video quality MOS is a transformative capability for any customer who values the quality of their streaming video content.
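As a rough illustration of the reference-free idea, two of the metrics named above have standard computer-vision formulations: variance of the Laplacian as a blurriness signal and grayscale standard deviation as a contrast signal. The calibrated MOS model is far richer than this sketch, and the synthetic frame below is only a stand-in for a real decoded video frame.

```python
import cv2
import numpy as np

def blurriness(gray_frame):
    """Low variance of the Laplacian indicates a blurry frame."""
    return cv2.Laplacian(gray_frame, cv2.CV_64F).var()

def contrast(gray_frame):
    """Standard deviation of pixel intensities as a simple contrast score."""
    return float(gray_frame.std())

# Synthetic stand-in for a video frame: noise is sharp and high-contrast.
frame = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
print(blurriness(frame), contrast(frame))
```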
How is HeadSpin’s approach to AI different?
The HeadSpin AI is designed to work with humans to debug issues in the actual and perceived performance of distributed applications. Recent advances in AI techniques have made it possible to use computing power and volumes of raw data to produce a model that outperforms humans at quantification and labeling tasks. Over more than five years, we’ve worked with user experience and performance experts and analyzed millions of data points to develop AI models that can leverage these advances and data from our distributed device cloud to monitor performance and quality at scale.
We use AI to convert heuristic expert systems into learning systems that can be improved with user feedback. At HeadSpin, we receive feedback directly from end users and tune our AI models to support customer use cases based on that feedback. We also work closely with our customers to ensure that our AI can be easily integrated with their existing DevOps processes.
Insights from HeadSpin AI are designed to make identifying and diagnosing issues easy. Each insight derived from our AI engine is surfaced on a common timeline alongside any custom notes, events, or KPIs added by customers. Framing insights on a common timeline allows customers to quickly identify and diagnose issues in a context that’s relevant to their use cases. These insights may be aggregated by test to continuously monitor for regressions in a test suite. When a regression is identified, customers can dig into the performance session report associated with the test to diagnose both the symptoms and the causes of the issue.
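Conceptually, the common timeline is just a merge of AI-derived insights and customer-supplied events, ordered by timestamp so a symptom can be read next to its likely cause. The field names and events below are illustrative assumptions, not the platform's schema.

```python
ai_insights = [
    {"t": 12.4, "source": "ai", "label": "loading animation (3.1 s)"},
    {"t": 15.5, "source": "ai", "label": "drop in video MOS"},
]
custom_events = [
    {"t": 12.3, "source": "user", "label": "tapped Play"},
    {"t": 15.4, "source": "network", "label": "HTTP 404 on a segment request"},
]

# Merge and sort onto one timeline.
for event in sorted(ai_insights + custom_events, key=lambda e: e["t"]):
    print(f'{event["t"]:6.1f}s  [{event["source"]:7}]  {event["label"]}')
```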
Another distinct advantage we provide is in the area of privacy. HeadSpin’s AI models identify regions of poor user experience and performance. As a result, customers can continuously gain insight into the quality of the user experience before exposing potentially sensitive new features or products to users. Additionally, customers gain these insights without needing to track or monitor end-user data. Customers can keep sensitive releases and end-user information private, while benefiting from detailed insight into the performance of their applications.
What’s possible today and what can we expect in the future?
To measure user experience and app performance, each HeadSpin test session includes a 40-plus-point analysis produced by more than 40 separate AI models. Our AI models use computer vision techniques to quantify blank screens, time to interact, loading time, loading or buffering animations, and content quality. Our platform can automatically diagnose server-side issues that arise from infrastructure deployment, poor performance, or API errors. Client-side issues, including device bottlenecks and code hotspots, are also quantified. This analysis runs 24/7 for thousands of daily tests on devices around the world.
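As a taste of the simpler end of that analysis, a blank-screen check can be sketched as "flag any frame with almost no intensity variation." The threshold and synthetic frames are assumptions, not HeadSpin's tuned values.

```python
import numpy as np

def is_blank(gray_frame, std_thresh=5.0):
    """A near-uniform frame (very low intensity std dev) is likely blank."""
    return float(gray_frame.std()) < std_thresh

blank = np.full((720, 1280), 17, dtype=np.uint8)               # uniform dark frame
busy = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)  # normal content
print(is_blank(blank), is_blank(busy))  # True False
```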
In the future, HeadSpin's AI models will continue to analyze more data for more users and support more use cases. IoT and 5G are new areas where performance-focused AI will help teams deliver better products. We'll also be able to embed these models in devices that render a screen raster, such as smart TVs. Since our AI models are completely reference-free and do not require any content identification, we'll be able to run tests continuously in the background to identify issues while keeping the source material private.
FAQs
1. What are AI-powered issue cards by HeadSpin?
HeadSpin uses AI to generate "issue cards," each of which contains the issue itself, a root cause analysis, and actionable insight for fixing your application. For example, an issue card may inform you that a page in your application is responding slowly, provide the root cause analysis, and recommend how to fix the issue.
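Purely as a hypothetical illustration of those three elements, an issue card's contents might be modeled like this; the actual card format is HeadSpin's own and is not shown here.

```python
# Hypothetical structure only -- not HeadSpin's real issue card schema.
issue_card = {
    "issue": "Page response is slow (4.2 s to interactive)",
    "root_cause": "Large uncompressed hero image blocks first render",
    "recommendation": "Compress and resize the image, or lazy-load it",
}
```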
2. What is HeadSpin's AI-generated reference-free video MOS, and how is it different from traditional MOS?
A mean opinion score (MOS) quantifies how an individual perceives video quality. Traditional approaches require a reference video, which means they cannot accurately represent quality in scenarios that lack one, such as gaming, conferencing, or live content. A reference-free MOS is based on AI and ML models calibrated against the experience of real users and provides a better understanding of video quality.
3. How is AI being used in testing?
AI is helping QA teams to:
- Build test scripts: AI helps teams build scripts that provide complete coverage of their applications.
- Learn from historical bug patterns: AI can identify places in code that are likely to impact the application. For example, AI might point out that excessive scripts or large images on a page could slow the application down.
- Protect customer satisfaction: AI can continuously monitor applications for issues that may hurt customer satisfaction, such as sparse page content or blank screens in a video.
4. What are some advantages of using AI to write test cases?
AI is being used to write test cases with greater speed and accuracy. An AI engine can crawl through pages and user stories of a website or application, covering paths a human might miss. The AI will also continuously improve itself by using the data generated from tests.