
Insights-Driven Quality Engineering for Digital Channels

December 22, 2020
by Rajeev Ranjan and Sudheer Mohan

The global pandemic has had an unexpected and profound impact on social and work behaviors. With their mobility curbed, consumers and businesses now accomplish everyday tasks using digital channels, a change that may be permanent. Enterprise digital channels must therefore be more effective and user-centric than ever if companies hope to stay in business. Indeed, with data showing 28% of customers plan to switch brands permanently in light of the digital rush, the onus is on enterprises to ensure a best-in-class experience for their customers.

Customers expect high-quality, lightning-fast digital interactions, and they’re most likely to walk away when those expectations aren’t met – sometimes after just one bad experience. Delivering optimized digital experiences is paramount, but the teams tasked with managing those experiences have an exceedingly difficult job. IT environments and device ecosystems continue to grow more complex, interrelated, and distributed, and that’s before considering the growing diversity of devices, applications, and networks.


In addition, the disjointed landscape of testing, app monitoring, app performance, and user analytics tools has forced teams into a state of inefficiency – one in which they struggle to keep up with demands or gather the insights necessary to optimize customers’ digital experience. Without the testing coverage they need, businesses discover critical flaws in production and are left exposed to a range of risks, including shopping cart abandonment, lost revenues, customer defections, and more. Clearly a new approach is needed: insights-driven Quality Engineering.


Transforming Automation

Automation began as a record-and-playback capability and gradually expanded beyond UI automation. Enterprise automation software was eventually replaced by open-source alternatives, but automated testing remained a parallel initiative, with little synergy with manual-testing teams. Product managers often had low confidence in automation’s reliability and accuracy, so it went unused and was deemed to deliver no value to business outcomes.

That changed as mobile channels proliferated. The need to validate a build on numerous devices forced Quality Managers to rethink their automation strategy. Many adopted an automation-first approach, with some percentage of regression testing automated. But manual testing often continued, with automation providing an additional check for build quality rather than serving as the primary tool.

As organizations adopted agile approaches, Quality Engineering shifted left significantly. Business requirements (and pressures) to shrink quality cycles and accelerate speed to market brought huge changes in how software testing happens. Almost all projects started shifting to an agile model, and testing moved closer to development. The division that once existed between manual and automated testing broke down, and in most enterprises disappeared entirely.

Automation engineers in sprint teams tried to automate everything possible; around 60% of scenarios were automated in-sprint. Scenarios that required special setup and logic were passed to the business regression automation team, which focused on executing the automated suites downstream and expanding automation coverage. Automation matured further with CI tool integration, as companies encountered a need for unattended testing to achieve the velocity and ROI expected from automation investments.

Cloud-based test infrastructure allowed teams to run tests anytime from anywhere without setting up test devices, making automation execution much more efficient and effective. Quality engineering teams could also run automation for longer hours, thereby reducing test execution times and helping enterprises accelerate release certification and get to market sooner. Finally viewed as reliable and scalable, automation began helping enterprises and quality engineering teams focus on other aspects – items such as result analysis, defect analysis and classification, and defect logging – thereby improving their speed and efficiency even further.
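As a rough illustration, the snippet below sketches how a single automated check might target a cloud-hosted device rather than a locally attached one, using the Appium Python client. The endpoint URL, device name, app location, and element identifier are all placeholders for whatever a given device-cloud provider and application would actually require.

```python
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

# All values below are placeholders; a real device cloud supplies its own
# endpoint, credentials, device names, and app upload mechanism.
options = UiAutomator2Options()
options.set_capability("appium:deviceName", "Pixel 7")
options.set_capability("appium:app", "https://devicecloud.example.com/builds/app-release.apk")

driver = webdriver.Remote("https://devicecloud.example.com/wd/hub", options=options)
try:
    # One step of an end-to-end user journey, executed on the remote device.
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button").click()
finally:
    driver.quit()
```

Because the session runs against a remote endpoint, the same script can be launched from a CI job at any hour, across as many devices as the provider exposes.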


As automation found a foothold, enterprises were able to:

  1. Accelerate testing cycles
  2. Reduce the time required to certify a regression build (from weeks to hours)
  3. Test more for the same investment
  4. Get more value from their automation investments

Where Automation and Quality Engineering are Going Next

Quality Engineering leadership at every enterprise is focused on enhancing the value delivery of digital transformation programs. Earlier, the focus was on introducing automation and expediting its adoption. Today, the focus is on further enhancing the value of enterprise offerings through more comprehensive testing. Now that automation has become the de facto way to test, two trends in quality engineering have emerged:

  1. How to automate automation script development?
  2. How to gather insights about app experience and performance while running end-to-end user journey tests?

Automating automation is becoming an interesting area, with many AI-powered solutions in play. Some of these solutions can crawl an entire app and build end-to-end user journeys in an automated fashion, while others help testers build test scripts faster. Ultimately, automating automation helps enterprises reduce their overall testing spend and further increase their testing velocity.


Most enterprises leveraging automation at scale maintain their test infrastructure in the cloud. While the initial purpose of this transition was to ensure a stable, 24/7 test infrastructure, companies are realizing additional benefits, especially by leveraging AI.

Since tests run entirely in the cloud, the platform provider gains a lot of test-execution data and, with it, an opportunity to extract value. Some device cloud providers are deploying machine-learning models that analyze test executions to surface meaningful insights about the user journey and experience, app launch/wait/refresh times, variations across builds and content, and so on. This in turn helps engineering teams learn more about experience bottlenecks, which they can remove on a continuous basis to make applications faster and more engaging.
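What that analysis consumes is, at bottom, timing and outcome data emitted during test runs. As a minimal, framework-agnostic sketch (the step names, build label, and output file are illustrative), a suite can record how long each journey step takes and persist it for later comparison across builds:

```python
import json
import time
from contextlib import contextmanager

BUILD = "1.4.2"          # illustrative build label
metrics = []

@contextmanager
def timed_step(name: str):
    """Record how long one user-journey step takes so durations can be compared across builds."""
    start = time.monotonic()
    try:
        yield
    finally:
        metrics.append({"build": BUILD, "step": name,
                        "seconds": round(time.monotonic() - start, 3)})

# The sleeps stand in for real driver actions (launch, search, refresh, ...).
with timed_step("app_launch"):
    time.sleep(0.2)
with timed_step("search_results_refresh"):
    time.sleep(0.1)

with open("experience_metrics.json", "w") as fh:
    json.dump(metrics, fh, indent=2)
```

Collected over many runs and builds, even this crude signal is enough to flag a step whose duration is trending upward before users notice.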


Although these benefits sound purely technical, acting on them can be business-critical. Around 70% of users tend to abandon an app (and brand) that’s slow, and almost 43% discard apps that take more than 3 seconds to load. Cloud-based device infrastructure platforms that provide additional insights about application experience are therefore gaining traction.

Classic spectres of poor software performance (e.g., memory leaks or spinning CPUs) can plague a digital experience, so there is good reason to measure and profile such metrics. However, a bad user experience can also stem from network latency, device incompatibility, or even human perception. Modern user experience goes beyond input/output functionality: it must be considered more broadly, with businesses ensuring their app responds to user attributes, behaviors, and directions across all devices and networks.
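For instance, a test run can sample an Android app’s memory footprint while a journey executes and flag a steadily climbing trend. The sketch below uses adb’s `dumpsys meminfo` and assumes a connected device or emulator; the package name is a placeholder, and because the report format varies across Android versions the parsing is deliberately loose.

```python
import re
import subprocess
import time

def sample_memory_kb(package: str) -> int:
    """Return the app's total PSS in KB as reported by `adb shell dumpsys meminfo`."""
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "meminfo", package],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"TOTAL(?:\s+PSS:)?\s+(\d+)", out)  # report layout differs by Android version
    return int(match.group(1)) if match else -1

# Poll the (hypothetical) app every few seconds while an automated journey runs;
# a reading that only ever grows is a classic memory-leak symptom.
samples = []
for _ in range(5):
    samples.append(sample_memory_kb("com.example.app"))
    time.sleep(3)
print(samples)
```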


Delivering a superior mobile experience requires extensive testing, both locally and in the field. A dizzying number of factors must come together seamlessly and instantaneously to deliver a positive user experience: from optimized code to app performance under realistic network conditions, everything around the software, device, backend, and network needs to be considered.

Quality Engineering Teams cannot afford to get mired in an endless cycle of post-production fire drills, reacting to problems with production systems rather than identifying and fixing issues proactively. If they do, problem resolution can turn into a battle between teams rather than a collaborative effort, hindering the organization’s ability to focus on more strategic long-term priorities and innovation.

In today’s mobile and experience-driven economy, testing, monitoring, and analytics must not be managed in isolation. Comprehensive testing (including functional, performance, and load testing) needs to be employed alongside monitoring and analytics, with a unified approach that provides actionable insights about the user experience. All gathered intelligence needs to be seamlessly integrated with advanced analytics, including predictive analytics powered by artificial intelligence. And enterprises need to break down the invisible walls that still exist between IT and business, positioning themselves to improve their digital experience and realize sustained success.

Quality engineering teams, if empowered with the right platforms and strategy, can play a key role in these digital efforts and meaningfully enhance the enterprise’s monetization potential.

FAQs

1. Why do we use data-driven testing?

Automated scripts can run thousands of tests every day. However, the data sets in these tests are usually hardcoded, and the script uses the same data set each time. A data-driven test extracts the inputs from a different source, usually a data file, and tests with various possible inputs.
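As a small illustration, a pytest suite (one common way to do this; the file name, column names, and `attempt_login` function below are hypothetical) can pull its inputs from a CSV file rather than hardcoding them:

```python
import csv
import pytest

def attempt_login(username: str, password: str) -> bool:
    """Stand-in for the real application call under test."""
    return username == "alice" and password == "s3cret"

def load_cases(path: str = "login_cases.csv"):
    """Read (username, password, expected) rows from a data file instead of hardcoding them."""
    with open(path, newline="") as fh:
        return [(row["username"], row["password"], row["expected"] == "ok")
                for row in csv.DictReader(fh)]

@pytest.mark.parametrize("username,password,should_succeed", load_cases())
def test_login(username, password, should_succeed):
    assert attempt_login(username, password) is should_succeed
```

Adding a scenario then means adding a row to the data file, not writing another test.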

2. Can AI be used for software testing?

Software quality assurance teams are increasingly using AI and ML tools. These tools can design, create, and execute test scripts with little intervention from human testers or developers. For example, Nimbledroid can run automated crash detection scripts and identify the scenarios where your app might break.

3. Why should we do regression testing?

Regression testing ensures that new code or functionality does not break the existing software. It verifies that the overall application remains functional and stable after a new piece of code is added.

4. What are positive and negative testing?

Positive testing tests an application with valid data sets to check if the application is working as intended. On the other hand, negative testing tests the application with invalid user inputs to check if the application handles the input errors gracefully.
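A toy example (the `parse_age` function and pytest usage are illustrative, not from the article): the positive case feeds valid data and expects the right result, while the negative cases feed invalid data and expect a graceful rejection rather than a crash.

```python
import pytest

def parse_age(value: str) -> int:
    """Toy function under test: accepts a string holding a plausible human age."""
    age = int(value)                      # raises ValueError for non-numeric input
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

def test_positive_valid_age():
    # Positive test: valid input should produce the expected result.
    assert parse_age("42") == 42

@pytest.mark.parametrize("bad_input", ["", "abc", "-5", "999"])
def test_negative_invalid_age(bad_input):
    # Negative tests: invalid input should be rejected gracefully, not crash the app.
    with pytest.raises(ValueError):
        parse_age(bad_input)
```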
