Digital disruption is real.
It is transforming not only industries but also how products are developed. The days of long development cycles are over; standalone testing cycles are shrinking from weeks to hours.
Customer reviews, once offline, have become online and instantaneous with the social media boom. A tweet or live update about a product or service from a social media influencer can significantly sway its near-term fortunes. This creates the need for much more agile software development and a faster release framework.
Nobody likes sluggish mobile apps. In today’s hyper-competitive mobile app landscape, users demand fast, frictionless experiences; if an app doesn’t deliver, they simply take their attention (and dollars) elsewhere. People expect information to be available at the speed of light, regardless of which device or app they’re using, and through the power of social media their experiences, both good and bad, are instantly shared with the world. Many of those tweets, comments, and reviews are about apps being slow. With specific and actionable information, developers can squeeze every possible ounce of performance out of their product and optimize the user experience.
As app developers looking at this feedback, we must ask ourselves:
- What do they mean by slow?
- Where do they notice the issue in the app?
- Do other users or other devices experience the same?
- Is it performing better or worse for them over time?
To answer these questions, performance metrics are essential: they translate subjective impressions into quantitative measurements that app developers can continuously monitor and improve with each iteration.
To reflect the most frequent user actions and users’ perception of the app’s responsiveness, developers need to pick a set of performance metrics that capture the user experience: how long the app takes to launch, how long specific actions take, and the frame rates as users navigate the app. Formerly distinct performance indicators on each client app must be transformed into a unified set of metrics. This is why developers need cross-platform metrics, along with a clear plan for how to implement their collection, and these metrics have to be collected in each release cycle.
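As a concrete illustration of what a unified cross-platform metric might look like, here is a minimal sketch in Python. The event schema and field names are hypothetical assumptions for illustration, not an actual HeadSpin or platform API; each client app would emit events in this shared shape so they can be aggregated in one pipeline.

```python
from dataclasses import dataclass, field
from statistics import quantiles

# Hypothetical unified event schema; the field names are illustrative.
@dataclass
class PerfEvent:
    metric: str        # e.g. "app_launch" or "action_duration"
    platform: str      # "ios", "android", or "web"
    value_ms: float    # measured duration in milliseconds
    metadata: dict = field(default_factory=dict)

def summarize(events, metric):
    """Aggregate one metric across all platforms into p50/p90 milliseconds."""
    values = sorted(e.value_ms for e in events if e.metric == metric)
    cuts = quantiles(values, n=10)  # nine decile cut points
    return {"count": len(values), "p50": cuts[4], "p90": cuts[8]}
```

Because every platform reports the same schema, the same `summarize` function can serve a shared dashboard instead of three platform-specific ones.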
A faster release cycle helps fix issues pointed out by customers in the shortest possible time, mitigating damage and loss of revenue. As customers grow increasingly digital-savvy, enterprises need to scale up to match customer expectations, not only to stay profitable but also to stay relevant.
Some of the questions that face Digital Engineering teams working hard to enhance customer engagement are:
- How can an enterprise get to the root cause of an issue before it starts to hurt?
- How can an enterprise learn from past releases and predict a future slowdown or bug?
- How can an enterprise detect every bottleneck in customer experience and iron it out?
Quality-Driven Design enables enterprises to continuously monitor applications in production and derive insights about user patterns, feature consumption, load patterns, customer feedback and even about features that are ignored by customers. These insights help enterprises quickly update applications in line with customer sentiments and expectations.
Quality-Driven Design – Code-level Insights
Today’s smartphones rival laptops in hardware and processing capability, and mobile operating systems are continuously evolving to ensure a “better than ever” experience for customers.
These two continuous transformations have made it inevitable for enterprises to work harder on improving their mobile applications’ “performance”. Studies comparing mobile application performance to revenue suggest that the faster the app, the better the revenue prospects. This puts immense pressure on enterprise digital leaders to do everything possible to keep their apps among the best performing out there.
One of the key factors that define a “best performing application” is application start time; it is one of the critical benchmarks for any app. Instagram starts in 1 second, WhatsApp in 1.2 seconds, and LinkedIn and Netflix each take 2.8 seconds. Enterprises need to compete with the standards set by the most popular mobile applications; it is no longer an apples-to-apples comparison in the digital land. Where does your application stand? What can help your application improve its performance significantly?
Application start time is heavily influenced by application programming. A tool that can inspect application executables and identify the factors slowing the application down is extremely useful for developers.
The recent acquisition of Nimble Droid allows HeadSpin to evaluate application performance on both the client and server side with high accuracy. HeadSpin can give developers accurate and detailed insights into the bottlenecks in application execution that hurt performance, and thereby user experience and revenue potential. Nimble Droid pinpoints the elements hurting your application’s performance and surfaces what should improve on the application side (leaving aside network and other factors) to deliver highly competitive (remember Instagram and WhatsApp) performance.
The Nimble Droid dashboard also measures the improvement or deterioration in performance build over build, which helps eliminate performance regression as features get added and enhanced. With Nimble Droid integrated into the development environment, developers can evaluate the application’s performance before going to production, instead of waiting for feedback from customers: every time a build is made, developers get deeper insights into what should be fixed or modified at the code level to achieve the desired performance.
There are three key advantages to using HeadSpin:
- Meet performance expectation consistently with shared performance goals
- Improve communication internally and externally with common metrics definition and implementation
- Have a single data pipeline for data aggregation and analytics
Additionally, there are four key lessons we have learned about improving performance:
- Defining a user-centric metric ensured we actually improved the experience.
- Preventing regressions is the No. 1 way to keep an app fast.
- Developing best practices in performance optimization pays off.
- A faster app encourages more engagement.
Encouraged by customer results, we’re beginning to focus our efforts on exploring the relationship between performance improvements and engagement. We strongly recommend that customers continue to monitor every critical user journey (CUJ) on every pull request to detect performance regressions.
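A per-pull-request regression check like the one recommended above can be sketched as a simple gate that compares a CUJ metric on the candidate build against the baseline build. The function name and the 5% tolerance are illustrative assumptions, not a HeadSpin API:

```python
# Hypothetical regression gate for one CUJ metric (e.g. median launch time).
# The tolerance value is an illustrative assumption.
def check_regression(baseline_ms, candidate_ms, tolerance=0.05):
    """Fail the pull request if the candidate build is more than
    `tolerance` (here 5%) slower than the baseline build."""
    if baseline_ms <= 0:
        raise ValueError("baseline must be positive")
    slowdown = (candidate_ms - baseline_ms) / baseline_ms
    return {"slowdown_pct": round(slowdown * 100, 1),
            "passed": slowdown <= tolerance}
```

Running such a gate in CI for every CUJ turns “prevent regressions” from a policy statement into an automated, per-change check.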
Quality-Driven Design - Production Insights
Today, enterprises are becoming increasingly aware of the need for quality-driven design to deliver a supreme customer experience and 100% uptime. Design inputs from production, gathered via quality engineering, can significantly transform an application’s customer experience and performance. Enterprises that leverage such inputs have a clear edge over competing products that rely on business alone for design inputs.
Quality-driven design provides enterprises with detailed insights about what’s happening behind the scenes, identifying factors that could be hampering customer experience in real time.
Quality-driven design helps improve the application experience continuously and roll back releases that degrade customer experience while adding new functionality. These insights, derived from real-time customer interaction with production releases, also translate into design requirements for upcoming releases.
HeadSpin’s AI-powered application performance monitoring platform allows enterprises to continuously monitor applications from multiple geographies and understand potential bottlenecks, issues, and areas for improvement. The platform observes application transactions from the cities, networks, and devices of interest and builds a holistic view of application performance, clearly calling out the factors contributing to application slowness and poor user experience. HeadSpin has a presence in 150 cities across 100 countries globally, which allows enterprises to run tests from any city of interest based on their customer base.
A leading US retailer leveraged the application-monitoring platform to analyze its application in production from three different cities, assessing its effectiveness and areas for improvement. During this exercise, they made numerous observations, such as duplicate image downloads, repeated pings by third-party SDK calls, and UX blocking content from rendering on time, all of which had an adverse impact on customer experience. The platform also identified an improvement area in image management that could potentially save 50,000 USD per month by improving the payment experience.
Challenges and lessons learned
Cross-platform metrics measurement requires extensive collaboration among developers from multiple client, backend, and data teams. Numerous challenges arise during the process:
- Platform constraints
- Different implementations on each platform (e.g. how messages are cached and rendered)
- Standardizing the metrics definition and deployments
- Managing data updates and versioning
- Collaboration among multiple platform teams
Here are some lessons we have learned so far from tackling these challenges.
Business objective-driven workflow
Working backward from mock analytics and a mock dashboard is super helpful to define the data format and to decide upon a reasonable range of stats. Have an expectation of what is normal and what is not. For example: What range should the metrics stats fall in? What possible values should metadata have? What should the distribution of metadata values be? If anything falls out of range, it could indicate either a tracking error or an actual performance problem to be addressed.
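The sanity checks described above can be expressed as a small validation routine. The expected ranges and metadata values below are made-up examples, assumed purely for illustration; real thresholds would come from your own mock dashboard exercise:

```python
# Illustrative expected ranges per metric; real values would come from
# the "what is normal" exercise described in the text.
EXPECTED = {
    "app_launch_ms": (50, 30_000),   # plausible launch times
    "frame_rate_fps": (1, 120),      # plausible frame rates
}
VALID_PLATFORMS = {"ios", "android", "web"}

def validate(metric, value, platform):
    """Return a list of anomalies; an empty list means the sample looks sane.
    Out-of-range values may indicate a tracking bug or a real regression."""
    problems = []
    lo, hi = EXPECTED.get(metric, (float("-inf"), float("inf")))
    if not (lo <= value <= hi):
        problems.append(f"{metric}={value} outside expected [{lo}, {hi}]")
    if platform not in VALID_PLATFORMS:
        problems.append(f"unknown platform metadata: {platform!r}")
    return problems
```

Wiring checks like this into the ingestion path surfaces tracking errors as soon as a release ships, rather than weeks later in a dashboard.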
Identify leads to follow through and make decisions
Since this work involves multiple teams working on several metrics over a relatively extended period, it’s important to have directly responsible individuals leading the decision-making process. Try to pair up developers from each platform to work on the same metrics at the same time, and minimize developers joining or leaving the project, as this introduces friction.
Invest in processes and tooling
It’s essential to track metrics updates as soon as changes are deployed, to tighten the feedback loop. This process is much smoother with investment in real-time debugging tools on all platforms; it’s impractical to wait 2–4 weeks for production data to verify data validity, especially on the mobile release cycle. You need to be able to detect tracking errors and performance trends both locally and on dogfood or beta builds.
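One lightweight way to detect performance trends on dogfood or beta builds, sketched below under the assumption that each build reports a median value for a metric, is to flag any build that drifts well above the average of the few builds before it. The window size and drift threshold are illustrative assumptions:

```python
# A minimal local trend check; names and thresholds are illustrative.
def detect_trend(build_medians_ms, window=3, drift=0.10):
    """Flag builds whose median exceeds the mean of the previous
    `window` builds by more than `drift` (here 10%)."""
    flagged = []
    for i in range(window, len(build_medians_ms)):
        baseline = sum(build_medians_ms[i - window:i]) / window
        if build_medians_ms[i] > baseline * (1 + drift):
            flagged.append(i)  # index of the suspicious build
    return flagged
```

A check this simple can run locally or in CI on every dogfood build, catching both real regressions and tracking errors long before 2–4 weeks of production data accumulate.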
Early knowledge sharing and training
There will be knowledge gaps when client developers try to understand how other platforms work: how data is sent, formatted, and stored in the data warehouse, and the most efficient way to organize and query data and set up dashboards. It’s beneficial to encourage and coordinate knowledge sharing to get everyone on the same page and avoid surprises down the road.
With unified cross-platform performance metrics, application developers can set shared goals for a consistent end-user experience on all client apps.
This isn’t the end of the story for metrics measurement improvements, though; we’re working on automated performance testing and regression detection, along with adding more granularity to metrics during app sessions and on specific user actions. This is just the beginning for cross-platform performance metrics that mirror our users’ experience and help us make every app faster.
Digital requires continuous testing and monitoring to ensure customer loyalty. It’s no more a scenario of design, develop, test and release. The advent of new technologies and tools has made continuous monitoring easier and more comprehensive. Modern digital testing tools can gather deeper insights about the application experience and performance from production. Enterprises with a focus on digital revenue shall leverage appropriate technology and tools that allow them to ensure best-in-class experience to their customers.
HeadSpin, a Silicon Valley Technology Start-up based out of Mountain View, California, is a technology partner for Wipro’s Quality Engineering & Testing Practice. Wipro Ventures has also made an investment in HeadSpin.