Food Delivery Apps and Low-Hanging Fruit

Using HeadSpin’s mobile experience platform, we did some high-level comparisons of three of the top food delivery apps.

When we’re hungry, we tend to be grumpy, irritable, and impatient (or is it just me?). These are not the people you want to keep waiting, and these are exactly the people food delivery app vendors serve daily. How are they doing in terms of satisfying their hungry users? A few hungry folks at HeadSpin recently analyzed the technical performance of three of the top food delivery apps. Read on to see how these apps stack up, and where they’re falling down.

Introduction: The Criticality of Responsiveness in the Food Delivery App Space

The food delivery app market is huge and expanding, expected to grow to $16.61 billion by 2023. According to eMarketer research, food delivery app usage in the US will surpass 44 million people by 2020 and reach nearly 60 million by 2023. However, the market is also growing more competitive, especially now that consumers can use multiple apps to order from the same restaurant. It is also very fluid, with low barriers to entry and exit, which means customer retention is proving very tough. In fact, a CleverTap report found that 86 percent of new users stop using an app within two weeks of first launch.

The challenge is that, for many dev and QA teams, the performance of mobile applications is still something of a black box. Prior to releasing an app, developers have to contend with many variables, including different devices, operating systems, and network carriers across a range of locations. Further, if a user reports an issue, it can be difficult for teams to re-create the circumstances that led to the problem, identify the root cause, and figure out how to fix it. This is the problem we at HeadSpin help our customers solve.

Food Delivery Apps Put to the Test

Using our mobile performance platform, we did some high-level comparisons of three of the top food delivery apps available in our area (the Bay Area, California). For a number of reasons, we’ve chosen to mask the vendors’ identities. In the sections below, we look at some of the key takeaways from our analysis and then recap a few of the specific findings.

For one app (app A, referenced below), we conducted more than 300 tests, enough to ensure the derived metrics are statistically significant. These tests were conducted with three Android-based mobile devices: a budget, a mid-range, and a high-end device, all located at our headquarters. (Note: these were just three of the more than 32,000 SIM-enabled devices we have around the world across 1,000+ network carriers. Additionally, we equip our customers with capabilities for ensuring pre-release readiness of apps over a wide range of carrier networks on an array of Android/iOS devices, including mobile phones, tablets, Apple TV, Amazon Fire TV, and more.)

Key Takeaways

In our testing, we found that app A delivered a consistently superior experience over apps B and C. Launch time, login time, and search time were all demonstrably faster. While the team responsible for app A can take a pause to pat themselves on the back for beating out their competitors, they should then consider some potential enhancements and options for addressing the needs of an expanding user base. In our detailed testing of app A, we spotted several opportunities for improvement. For vendors looking to optimize the user experience, these tactics represent low-hanging fruit. Ultimately, digital experience impacts social ratings. Averaging a 4.5 out of 5 is good, but 4.6 is better; you always want to be moving the needle in a positive direction.

While these takeaways are focused on customer experience in the food delivery segment, performance is also critical for drivers. Food delivery app vendors could perform drive tests on select routes. One of our customers recently conducted such a test: their team used our proprietary physical boxes for on-premises deployments, running them in the trunks of their delivery vehicles.

Findings

Following are a few of the findings from our detailed testing of app A:

  • Slow servers: App A was affected by a slow server. The average delay associated with a problematic server was 1,848 ms. These servers could be lagging for any number of reasons, but this would be an important area to dig into. By clicking on the message block for the impacted host, we were able to see what resources were being served to the app and validate that the physical location of the destination IP was where we expected it to be. (A simple way to approximate this server-wait measurement is sketched after this list.)
 
  • Slow TLS: App A was plagued by lagging TLS response times. We saw an average wait of 439 ms to complete handshakes with hosts. The network hop between distributed devices and destination IP addresses appears to have been the cause of these delays. In one instance, we found that a third-party SDK for an application performance management tool the vendor was using was actually a contributor to the slow TLS. (A sketch for timing TLS handshakes follows this list.)

  • HTTP redirects: The app displayed issues with HTTP redirects. These can be very costly from a performance standpoint, requiring additional DNS, TCP, TLS, and request-response round trips. We observed several redirects, including a couple of 301 (Moved Permanently) responses. (A sketch for auditing redirect chains follows this list.)

  • HTTP errors: We observed client-error HTTP responses (401 Unauthorized) for certain requests, which had a negative impact on the end-user experience.

  • Duplicate messages: In most cases, an app shouldn’t need to download the same resource multiple times. For a specific host in app A, we saw multiple instances in which the app received identical copies of resources. The average performance hit of these duplicate messages was 547 ms. One potential fix for this issue is caching HTTP responses on the client so duplicate requests never reach the server (a minimal caching sketch follows this list).

  • Connection reuse: In several cases, the app created new TCP sessions rather than reusing existing ones, which introduced an average delay of 1,406 ms. It appears that by adjusting timeout settings, the team could avoid having connections dropped prematurely, which would mitigate the performance hit (see the connection-reuse sketch after this list).

  • Low page content: On average, the application displayed a virtually blank screen for 4,000 ms. This is a clear user experience issue. To address it, the team would need to correlate a number of potential root causes, including long network requests, network saturation, usage levels, and so on.
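
To make the findings above easier to act on, here are a few illustrative sketches. First, server wait: a rough way to approximate server-side delay without a full platform trace is to time the gap between sending a request and receiving the first byte of the response. A minimal Python sketch, using only the standard library and a placeholder hostname:

    import http.client
    import time

    def server_wait_ms(host, path="/"):
        """Approximate server processing delay: time from request sent to
        first response byte (includes one network round trip)."""
        conn = http.client.HTTPSConnection(host, timeout=10)
        conn.connect()  # exclude DNS/TCP/TLS setup from the measurement
        t0 = time.perf_counter()
        conn.request("GET", path)
        resp = conn.getresponse()  # blocks until the status line arrives
        wait = (time.perf_counter() - t0) * 1000
        resp.read()
        conn.close()
        return wait

    # Placeholder host; average many runs before drawing conclusions.
    print(f"{server_wait_ms('api.example.com'):.0f} ms server wait")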
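
Similarly, the TLS handshake can be timed separately from the TCP connect to see which phase is lagging. Another minimal sketch with a placeholder host; the handshake happens inside wrap_socket:

    import socket
    import ssl
    import time

    def tls_handshake_ms(host, port=443):
        """Time the TCP connect and TLS handshake phases separately."""
        ctx = ssl.create_default_context()
        t0 = time.perf_counter()
        raw = socket.create_connection((host, port), timeout=10)
        t1 = time.perf_counter()
        tls = ctx.wrap_socket(raw, server_hostname=host)  # handshake here
        t2 = time.perf_counter()
        tls.close()
        return {"tcp_connect_ms": (t1 - t0) * 1000,
                "tls_handshake_ms": (t2 - t1) * 1000}

    print(tls_handshake_ms("api.example.com"))  # placeholder host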
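
Redirect chains are straightforward to audit from a test script. With the requests library, every followed hop is recorded in response.history; the URL below is a placeholder:

    import requests

    # Each hop below costs extra DNS/TCP/TLS and request-response round trips.
    resp = requests.get("https://api.example.com/menu",  # placeholder URL
                        allow_redirects=True, timeout=10)
    for hop in resp.history:
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))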
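
For the duplicate-message finding, one way to keep repeated requests off the network entirely is a client-side cache. The class below is a deliberately minimal, illustrative sketch; a production client should honor Cache-Control headers (or use an off-the-shelf caching layer) rather than caching unconditionally:

    import requests

    class CachedGetSession(requests.Session):
        """Memoize GET responses by URL so identical requests are served
        locally instead of re-downloading the same resource."""
        def __init__(self):
            super().__init__()
            self._cache = {}

        def get(self, url, **kwargs):
            if url not in self._cache:
                self._cache[url] = super().get(url, **kwargs)
            return self._cache[url]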
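
Finally, connection reuse is often a one-line fix on the client: issue requests through a single session so the underlying TCP/TLS connection is pooled and kept alive instead of being re-established per request. The endpoints below are hypothetical:

    import requests

    session = requests.Session()  # pools and reuses TCP/TLS connections
    for path in ("/restaurants", "/menu", "/cart"):  # hypothetical endpoints
        session.get(f"https://api.example.com{path}", timeout=10)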

Whether you’re in banking, retail, or any other segment, the performance of your mobile apps matters. When you’re in a segment that’s as competitive and prone to churn as food delivery, it’s especially important to ensure an optimal experience every time. That’s why having a tool that can give you visibility into end users’ experience and deliver insights into exactly where performance hits are occurring is so critical.

Request a demo to learn more about how HeadSpin can deliver the insights you need to improve your customers’ digital experience.

Finally, be on the lookout for an upcoming post detailing how we’ve enabled a customer to perform end-to-end testing of their driver apps.