The mobile performance testing space is gaining attention, which raises the question: why does performance matter?
We’ve all had bad user experiences on screen: lagging (or even blank) displays that chip away at our affinity for certain apps. When it comes to mobile apps, performance describes how efficiently your app works and how smooth it is to use.
Recommended Post: Best Practices for Application Performance Testing
There are three considerations to keep in mind when thinking about performance:
1. User engagement
Human engagement studies dating back to the 1960s have shown that actions taking under 100 milliseconds are perceived as instant, while actions taking a second or longer cause the human mind to become distracted. So even the perception of slowness in your application can be a big killer of app engagement.
2. Sales and in-app purchases
Take, for example, an app with an e-commerce component. Company analytics show that the average shopping session is five minutes long, that each product in the item UI view takes 10 seconds to load, and that it takes 30 views on average to complete a sale.
Reducing the load time for each product UI view by just one second allows roughly three additional product views in an average session (300 seconds ÷ 9 seconds per view ≈ 33 views instead of 30), letting customers add more items to their cart, or complete the entire transaction 30 seconds faster altogether. Performance has a significant impact on potential transactions.
3. Cost-saving on infrastructure
Mobile applications download lots of content from remote servers, so lowering the number of requests or reducing the size of each request can yield huge improvements in your application’s speed. Taking these two steps will also yield huge reductions in traffic on your backend, allowing you to grow your infrastructure less expensively.
HeadSpin App Performance Sessions
At HeadSpin, we’ve developed a tool called Performance Sessions, which allows you to explore and understand your app’s performance characteristics. Performance Sessions are useful for identifying where to make improvements within your app, enhancing the overall user experience.
The most powerful part? You can conduct HeadSpin Performance Sessions via a remote control (manual) session on the platform, using a real Android or iOS device from the comfort of your web browser, or trigger them from automation tests written with Appium, Espresso, or XCUITest.
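For example, here’s a minimal sketch of starting a session from an Appium test written in Kotlin with the Appium Java client. The endpoint URL placeholders and the headspin:capture capability are assumptions for illustration; the exact values come from your HeadSpin account and documentation.

```kotlin
import io.appium.java_client.android.AndroidDriver
import io.appium.java_client.android.options.UiAutomator2Options
import java.net.URL

fun main() {
    // Standard UiAutomator2 options from the Appium Java client.
    val options = UiAutomator2Options().setApp("/path/to/app.apk")

    // Hypothetical vendor capability asking HeadSpin to record a
    // Performance Session alongside this automated test.
    options.setCapability("headspin:capture", true)

    // Placeholder HeadSpin-hosted Appium endpoint.
    val driver = AndroidDriver(
        URL("https://<headspin-appium-host>/v0/<api-token>/wd/hub"),
        options
    )
    try {
        // ... drive the app under test here ...
    } finally {
        driver.quit()
    }
}
```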
During these Sessions, we capture four components:
- Network traffic from the cell interface
- The video of the test session
- Client-server data
- Client data
These inputs are then fed into our AI-based issue detection engine, which generates the dashboard below, showcasing our Waterfall UI.
This dashboard presents a high-level view of how you can improve app performance. It comprises the:
- Project Info panel, which conveys the location in which the test was conducted, as well as the underlying data from the test session.
- Metrics Graph panel, which displays a video of the test session alongside the captured metrics. As you scroll through the waterfall, you’ll notice that the data correlates with the video of the test execution.
- Issues palette, which generates an Issue Card for each issue detected, coupled with suggestions on how to combat the issue. For example, our platform can detect which servers were slow during the session, which downloads took longer than expected, and what made those downloads slow.
- Network timeline, which provides an overlay of time-series data and network transactions.
- Detail palette, which helps you understand request and response headers, which are crucial for debugging and comprehending the performance of your app on a real network.
Let’s take a deeper dive into the capabilities of the Metrics panel.
Think of it as your heads-up display before you jump into the data. Metrics Graphs are useful because they provide the sums and averages of the different types of network data captured during the session, such as the total number of HTTP requests and the average download speed.
We calculate the metrics on four groups of requests.
- The first is session-wide: consider this your bird’s-eye view of every network message in a session.
- Second is the domain view, which captures every message sent to or received from a specific domain.
- The third group concerns host metrics, or every message sent to or received from a particular host.
- Finally, the fourth group is burst metrics, a set of measurements we designed at HeadSpin. This captures every message within a window of continuous network activity.
The Metrics panel captures data like the average time spent waiting for a response from the server. From these metrics, you can discern what is causing the traffic, the average rate of data transfer, and the number of requests made.
In fact, the Metrics panel can also display visualizations of the traffic content and help you identify the source of any unexpected content received by your app.
When people test or develop applications, they typically do so on emulators and simulators, on closely monitored real devices, and over office broadband. Unfortunately, these conditions don’t reflect the entropy and chaos that can occur on a real network.
Because of this, the Waterfall UI gives you powerful tools to answer questions that you wouldn’t otherwise be able to answer.
Some common use cases of HeadSpin Performance Sessions are:
- Improving slow downloads
- Preventing duplicate downloads
- Detecting duplicate messages
If your app contains lots of images that download over the network, consider adding server-side support for configuring the height and width of these images through URL query string parameters. With query string parameters, resizing images on the server side instead of the client side will improve the user experience.
You can also reduce the time it takes to download images by passing query string parameters for width and height that match the device’s screen resolution and pixel density. That said, try not to download images larger than the device resolution, or the burden of resizing falls onto the client side.
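As a rough illustration, here’s how a client might request a right-sized image on Android. The w and h query parameter names are hypothetical; use whatever your image server actually expects.

```kotlin
import android.content.Context
import android.net.Uri

// Build an image URL whose (hypothetical) "w" and "h" query parameters
// match the device's screen, so the server returns an image no larger
// than the display can show.
fun sizedImageUrl(context: Context, baseUrl: String): String {
    val metrics = context.resources.displayMetrics
    return Uri.parse(baseUrl)
        .buildUpon()
        .appendQueryParameter("w", metrics.widthPixels.toString())
        .appendQueryParameter("h", metrics.heightPixels.toString())
        .build()
        .toString()
}
```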
Additionally, by performing lossless image compression, whether to PNG, JPEG, or, even better, the WebP image format, you can reduce file size (meaning shorter download times, too) while preserving image quality.
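Re-encoding normally happens in your server-side asset pipeline, but the same idea on Android looks roughly like this (a sketch assuming API 30+ for lossless WebP):

```kotlin
import android.graphics.Bitmap
import android.os.Build
import java.io.OutputStream

// Re-encode a bitmap as lossless WebP where supported (API 30+),
// falling back to PNG on older devices. WebP typically yields
// noticeably smaller files than PNG for the same image quality.
fun compressLossless(bitmap: Bitmap, out: OutputStream) {
    val format = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
        Bitmap.CompressFormat.WEBP_LOSSLESS
    } else {
        Bitmap.CompressFormat.PNG
    }
    bitmap.compress(format, 100, out)
}
```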
And, when you have video content, make the downloads for that content resumable by supporting HTTP range requests (the Range and Content-Range headers), which prevents duplicate copies from being downloaded.
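Here’s a minimal client-side sketch of a resumable download, assuming the server supports range requests and responds with 206 Partial Content:

```kotlin
import java.io.File
import java.io.FileOutputStream
import java.net.HttpURLConnection
import java.net.URL

// Resume a download: if a partial file exists, request only the
// remaining bytes instead of re-downloading the whole file.
fun resumeDownload(url: String, dest: File) {
    val existing = if (dest.exists()) dest.length() else 0L
    val conn = URL(url).openConnection() as HttpURLConnection
    if (existing > 0) conn.setRequestProperty("Range", "bytes=$existing-")

    // 206 Partial Content means the server honored the range request;
    // anything else means we must start over from byte zero.
    val append = conn.responseCode == HttpURLConnection.HTTP_PARTIAL
    FileOutputStream(dest, append).use { out ->
        conn.inputStream.use { it.copyTo(out) }
    }
    conn.disconnect()
}
```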
Check out: Mobile Application Performance Testing Guide
Why do network effects matter for app performance testing?
The network on which customers receive data considerably impacts app user experience.
Being able to test your app across different devices on different networks is crucial. At HeadSpin, because we have carrier-activated devices all over the world (real Android and iOS devices on carrier-activated SIMs), developers can understand how their app performs under different network conditions. Download speed and latency depend on the network subtype.
For example, HeadSpin’s platform automatically detects both low frame rates and frame freezes, or the perception of the user experience freezing. This is important because if you have an interactive, rich user experience, you don’t want the network to be the bottleneck between you and the content being delivered to clients.
Also check: Client-Side Performance Testing - Metrics to Consider
So, how can you optimize your network?
- Make as few HTTP requests as possible
- Use a content delivery network (CDN)
- Reduce the number of DNS lookups
- Avoid redirects, because each one can involve a new DNS lookup, a new TCP connection, and a TLS handshake
And, to optimize files and download data faster over the network, consider:
- Lowering the number of requests
- Reducing the size of those responses using gzip
With HeadSpin Performance Sessions, you’re able to see a breakdown of the requests made, separated into new and reused connections. When you’re delivering text content, like JSON, HTML, CSS, and JavaScript, you can compress it with gzip on the server and deliver the smaller payload to the application. Smaller file size means fewer round trips and faster delivery.
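Text compresses extremely well; JSON and HTML routinely shrink severalfold. A minimal server-side sketch of the idea (the response would also carry a Content-Encoding: gzip header):

```kotlin
import java.io.ByteArrayOutputStream
import java.util.zip.GZIPOutputStream

// Gzip a text payload (JSON, HTML, CSS, JavaScript) before sending it
// to clients that advertise "Accept-Encoding: gzip".
fun gzip(payload: String): ByteArray {
    val buffer = ByteArrayOutputStream()
    GZIPOutputStream(buffer).use { it.write(payload.toByteArray(Charsets.UTF_8)) }
    return buffer.toByteArray()
}
```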
If you have a hybrid application or a mobile-responsive website, consider minifying JavaScript and CSS, just as you would on a desktop site.
File caching
Another way to optimize from a file perspective is implementing file caching, which helps save mobile data usage. Great for users and their batteries!
On the client side, download frequently used files and store them locally for reuse. The mantra here is: download once, use many times. And, on the server side, by setting a content caching policy, you can not only ensure that customers receive up-to-date data, but also limit the number of duplicate files downloaded.
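On Android, an HTTP client with a disk cache gets you most of this behavior for free, honoring whatever Cache-Control policy the server sets. A minimal sketch assuming OkHttp (the directory name and 50 MB size are illustrative):

```kotlin
import java.io.File
import okhttp3.Cache
import okhttp3.OkHttpClient

// An OkHttp client with a disk cache: responses the server marks
// cacheable are stored locally and reused instead of re-downloaded.
fun cachingClient(cacheDir: File): OkHttpClient =
    OkHttpClient.Builder()
        .cache(Cache(File(cacheDir, "http_cache"), 50L * 1024 * 1024))
        .build()
```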
Performance Strategies
The benefit of performance testing mobile and browser apps on real devices on a real network is that you learn what your performance is like under real-world conditions. To achieve optimal performance, make your app aware of the network it’s on.
On Android and iOS, you can query the platform’s connectivity APIs to see whether the user is on a Wi-Fi or cellular connection. If the user is on a cellular connection, you can defer non-urgent communication until they are on Wi-Fi.
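On Android, for instance, that check is a few lines with ConnectivityManager (a minimal sketch; it requires the ACCESS_NETWORK_STATE permission):

```kotlin
import android.content.Context
import android.net.ConnectivityManager
import android.net.NetworkCapabilities

// Returns true when the active network is Wi-Fi, so non-urgent
// transfers can be deferred while the user is on cellular.
fun isOnWifi(context: Context): Boolean {
    val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
    val caps = cm.getNetworkCapabilities(cm.activeNetwork) ?: return false
    return caps.hasTransport(NetworkCapabilities.TRANSPORT_WIFI)
}
```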
You can also make your app network-aware by delivering content and features based on the user’s connection. For example, if the user is on a cellular (rather than Wi-Fi) network, deliver smaller images. If you have a query string parameter that allows images to be resized server-side, simply check whether the user is on Wi-Fi or cellular and provide image and video content that is optimal for their network speed.
Additionally, by pre-fetching content, especially in the case of list, image, or table views, you can mask network latency. This way, when users arrive at the pre-fetched view, they immediately see content, resulting in a fuller, richer user experience. Pre-fetching also lets you spread requests out in a manner that still keeps the experience fluid for users.
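A minimal sketch of the pattern: as the user scrolls, fetch the next few items in the background so they’re already local on arrival. fetchItem here is a hypothetical function that downloads and caches a single item.

```kotlin
import java.util.concurrent.Executors

// A small pool so prefetching never floods the network.
val prefetchPool = Executors.newFixedThreadPool(2)

// Prefetch the few items just past the user's current scroll position.
fun prefetchAround(position: Int, lookahead: Int = 3, fetchItem: (Int) -> Unit) {
    for (i in position + 1..position + lookahead) {
        prefetchPool.execute { fetchItem(i) }
    }
}
```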
As mentioned earlier, by making downloads for video content resumable with range headers, and by compressing images, you can improve app performance considerably.
About HeadSpin
HeadSpin is a complete solution for mobile app performance and testing.
The beauty of the HeadSpin platform is that you can use it for both the pre- and post-release phases of your app. Because there isn’t an SDK integration requirement to use HeadSpin software, you can use HeadSpin’s Remote Control capabilities to remotely access thousands of devices and endpoints (across 1000+ networks in 50+ locations around the world) from the comfort of your own browser.
HeadSpin’s Performance Management tool offers detailed diagnostics through performance reports that provide network captures and video captures of your app. Our AI engine analyzes the data from these captures and points out issues in those tests.
After conducting performance tests, it’s useful to monitor how those test cases behave across different regions, networks, and device types, which you can do post-launch with HeadSpin’s Continuous Monitoring tool. HeadSpin has experienced huge market adoption, serving some of the world’s most reputable companies.
We’ve built custom pin-lock-enabled boxes, each accommodating three servers. Devices connect through our custom USB hubs, each of which accommodates a maximum of eight devices. Coupled with three servers per box, that’s 24 devices (both Android and iOS) per box.
And because we’ve built our software and hardware from the ground up and have a deep understanding of the space, we’re able to provide support for any new iOS or Android device the day it enters the market, resulting in 100% device uptime.
Read: A Mobile Application Testing Guide for Optimizing Apps
How HeadSpin enables the transition from conventional to power testing
To better distinguish between conventional and power testing, take the following example: out of 100 test cases run, 90 pass and 10 fail. With conventional testing, a QA manager would only receive a report pointing out which cases failed, leaving the manager to identify why they failed by painstakingly navigating test logs and debugging each case.
HeadSpin’s solution enables developers to instead engage in power testing by providing QA managers and development teams with network captures and diagnostics from AI-based analyses of network traffic, offering teams complete visibility into each of these 100 test cases.
Now, when trying to decipher what went wrong with those 10 failed cases, teams have more data to work with and can much more quickly identify whether the issues stemmed from the code, the automation framework, a certain device, or something else altogether.
The end result is an abundance of evidence for revising those test cases, and a significantly shorter development life cycle.
FAQs
1. What is the crash rate?
Ans: The crash rate is the percentage calculated by dividing the number of times an application crashed by the number of times users opened it during the same period. For example, 5 crashes across 1,000 app opens is a crash rate of 0.5%. Generally, this calculation is done over a 24-hour period, showing how well or poorly your app is performing in terms of crashes in a day.
2. What is scalability testing?
Ans: Scalability testing is a type of performance testing that measures an app’s ability to scale up or down as the number of users increases or decreases.
3. What is soak testing?
Ans: Soak testing is a load test in which you hold the load over an extended period to check for long-term effects, like memory leaks and disk space filling up. The duration of a soak test depends on the situation; usually, it runs for several hours.
4. How does the HeadSpin Platform help testers in performance testing?
Ans: The HeadSpin Platform uses its advanced AI capabilities to identify performance issues during testing before they impact users. Some of the crucial features of the Platform include root-cause analysis of user-impacting performance issues, recommendations to improve performance proactively, and issue predictions based on historical data.