1.3

Speeding Up Appium Tests In Distributed Environments

When running Appium tests at scale on devices scattered around the world, you can run into test performance issues due to test command latency. In this unit, we take a look at two ways of reducing the impact of latency: Execute Driver Script (on the Appium server side) and Direct Connect (on the Appium client side).

Batching Appium Commands Using Execute Driver Script

Both Appium and Selenium are based on a client/server architecture, where test commands are triggered by a client that could be very far away from the server which actually performs them. This architecture brings important advantages: it allows Appium tests to be written in any language, and it makes it possible to build large private or public Appium device clouds.

There is one significant downside to this architecture, however, that can dramatically decrease test performance. Because every test command has to travel over the network between the client and server, every command is subject to the latency of that network, as well as any other "atmospheric" conditions. Networks are not perfectly reliable in general, and unanticipated slowness or request failure can occur at any point. This means that Appium scripts developed locally might behave significantly differently when run on an Appium cloud somewhere across the world, leading to heightened flakiness or test failures.

On top of this, tests are often written not by invoking Appium commands directly, but through framework-level functionality, which might encapsulate the use (and overuse) of many Appium commands, all of which add to test execution time. In some cases, I have seen test frameworks make 5-10 requests to the server for every found element, in order to retrieve element metadata just in case it is useful later on. Apart from being a bad idea in general, this kind of approach can lead to dramatic differences in execution time when tests run in cloud environments.

Beyond general slowness, latency is also a killer for real-time automation. If you need to be certain that command B happens a very short time after command A, then sending command B across the global Internet is not going to deliver that guarantee. This is one reason the W3C WebDriver spec team designed the new Actions API so that an entire action chain is encoded as a single API call, even though the chain might take seconds or minutes to execute once the action begins.
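
To make this concrete, here is a rough sketch of that model using the Selenium interactions API available through the Java client (the coordinates and durations are arbitrary examples): the entire swipe below is encoded locally and sent to the server as a single request, even though it takes 1.5 seconds to play back.

import java.time.Duration;
import java.util.Arrays;
import org.openqa.selenium.interactions.PointerInput;
import org.openqa.selenium.interactions.Sequence;

// ...inside a test with an active driver session:
PointerInput finger = new PointerInput(PointerInput.Kind.TOUCH, "finger");
Sequence swipe = new Sequence(finger, 0);
// move to (100, 800) and press down
swipe.addAction(finger.createPointerMove(Duration.ZERO, PointerInput.Origin.viewport(), 100, 800));
swipe.addAction(finger.createPointerDown(PointerInput.MouseButton.LEFT.asArg()));
// drag slowly up to (100, 200) over 1.5 seconds
swipe.addAction(finger.createPointerMove(Duration.ofMillis(1500), PointerInput.Origin.viewport(), 100, 200));
swipe.addAction(finger.createPointerUp(PointerInput.MouseButton.LEFT.asArg()));
// one HTTP request carries the whole chain to the server
driver.perform(Arrays.asList(swipe));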

Execute Driver Script

The Appium team has now done the same thing, not just for actions, but for any Appium commands at all. Essentially, we have created a single Appium command that allows you to pack as many other Appium commands inside it as you want. All these commands are executed on the Appium server itself, and so are not subject to network latency between client and server. How does this work? It's magic, obviously! Imagine we have this test script written with the Java client:

@Test
public void testLoginNormally() {
   driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
   driver.findElement(MobileBy.AccessibilityId("Login Screen")).click();
   driver.findElement(MobileBy.AccessibilityId("username")).sendKeys("alice");
   driver.findElement(MobileBy.AccessibilityId("password")).sendKeys("mypassword");
   driver.findElement(MobileBy.AccessibilityId("loginBtn")).click();
   driver.findElement(By.xpath("//*[@text='Logout']")).click();
}

As you can probably tell, it's a very straightforward login/logout set of commands. If necessary, we could run all these commands in one go, as a batch, using the new executeDriverScript command:

@Test
public void testLoginWithExecute() {
   driver.executeDriverScript(
       "await driver.setImplicitTimeout(10000);\n" +
       "await (await driver.$('~Login Screen')).click();\n" +
       "await (await driver.$('~username')).setValue('alice');\n" +
       "await (await driver.$('~password')).setValue('mypassword');\n" +
       "await (await driver.$('~loginBtn')).click();\n" +
       "await (await driver.$('//*[@text=\"Logout\"]')).click();\n"
   );
}

What on earth is going on here? It looks like we've got some kind of Appium client code wrapped up in a string, somehow? That's right! The Appium team debated many ways of implementing this "batch command" feature, but at the end of the day decided that giving users complete flexibility in what runs within the batch was of utmost importance. So, we implemented the Execute Driver Script command, whose argument is a string of JavaScript code to be executed in the context of the currently running Appium session. The Appium server will attempt to execute whatever you put in that string.

Uh oh! Isn't that the definition of a remote code execution vulnerability? Yes! So, we need to say a couple of words about security. First, because there is no way to know what kind of junk a user might send in with this command, the server must be started in a special mode that explicitly allows this feature:

appium --allow-insecure=execute_driver_script

Secondly, all code is run within a Node.js VM, which means it does not share an execution context with the main Appium process. In fact, we can tightly control what the executing code has access to, and we give it access to basically nothing except a driver object. What is this driver object? It's an instance of a WebdriverIO session object. So you can use the entire WebdriverIO API, and all the JavaScript syntax your heart desires! This explains the interesting bits of the code above, like the driver.$ method (WebdriverIO's equivalent of findElement), or the fact that accessibility ID locators are denoted by a ~ prefix. You can also return text, data, or even elements from your code string, and the result will be fully usable from within the parent script.
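
To illustrate that last point, here's a minimal sketch of returning data from a batch (hedged: the exact Java-side return type depends on your java-client version, which may wrap the value in a dedicated result object; it is treated as a plain Object here):

@Test
public void testReadTextViaBatch() {
   // whatever the JavaScript returns comes back in the command response
   Object usernameText = driver.executeDriverScript(
       "const el = await driver.$('~username');\n" +
       "await el.setValue('alice');\n" +
       "return await el.getText();\n"
   );
   // usernameText now holds the value returned by the script
}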

Execute Driver Script In Action

I wanted to get a good idea of the impact of the Execute Driver Script on test execution times, so I ran a bunch of experiments on the only Appium cloud provider which currently supports this feature: HeadSpin. My test methodology is detailed below, but here are the results (in all cases, the client location is Vancouver, Canada):

Server             Using Execute Driver?   Avg Test Time   Avg Command Time   Avg Speedup
Localhost          No                      49.12s          0.55s              n/a
Localhost          Yes                     48.71s          0.54s              0.8%
Mountain View, CA  No                      72.53s          0.81s              n/a
Mountain View, CA  Yes                     43.15s          0.48s              40.5%
Tokyo, Japan       No                      102.03s         1.13s              n/a
Tokyo, Japan       Yes                     42.10s          0.47s              58.74%

Analysis

In the case of local execution, use of Execute Driver Script does not deliver much of an improvement, and this is expected. When client and server are already located on the same network interface, there is basically no time lost to latency. What we see in the examples where the Appium server is located somewhere else in the world is much more drastic. Mountain View, CA is much closer to my office in Vancouver than Tokyo is, and that is reflected in the ~30% difference between the control cases for the two locations. This difference is basically entirely due to latency, and highlights exactly the problem with the client/server model when deployed this way: about 30 extra seconds per test when the command count is high (here, 90 commands per test).

When I adjust my script to use Execute Driver Script entirely, so that all 90 commands are contained within one batch, test time becomes a low, roughly constant number across all environments. Since I'm making just one network call, latency due to geographic distribution becomes a negligible factor, reducing test execution time by 40-60%! Of course, your results with this feature will vary greatly depending on any number of factors, including the number of commands you put into the batch call. I am also not recommending that every command be stuffed into one of these Execute Driver Script calls; I'm merely demonstrating performance improvements which might be relevant for a use case you encounter.

Test Methodology

  • These tests were run on real Android devices hosted by HeadSpin around the world on real networks, in Mountain View, CA and Tokyo, Japan. (Locally, the tests were run on an emulator and an Appium server running on my busy laptop, and thus should not be compared in absolute terms to the real devices.)
  • For each test condition (location and use of Execute Driver Script), 15 distinct tests were run.
  • Each test consisted of a login and logout flow repeated 5 times.
  • The total number of Appium commands, not counting session start and quit, was 90 per test, meaning 1,350 overall for each test condition.
  • The numbers in the table discard session start and quit time, counting only in-session test time (this means of course that if your tests consist primarily of session start time and contain very few commands, then you will get a proportionally small benefit from optimizing using this new feature).

Conclusion

Execute Driver Script is a new Appium feature that is especially useful when running your tests in a distributed context. If a cloud server or device is located across the world from you, each command will take longer than it would if the server were close. The farther away the device, the longer your command will take. The administrator of such a distributed cloud can opt to turn on the Execute Driver Script feature in Appium, to allow their users to batch commands as a way of avoiding tons of unnecessary, latency-filled back-and-forth with the server. This gives users the advantage of a geographically distributed cloud (whether the user wants geographic distribution for its own sake or because that is simply where the devices and servers happen to be located), without the typical latency cost associated with it. Of course, this is an advanced feature that you should only use to solve specific problems!

If you want to see that Java code in the context of the full project, you can check it out on GitHub. Other Appium clients also support this new command, including WebdriverIO (so you can have WebdriverIO-ception!).

Connecting Directly to Appium Hosts in Distributed Environments

This edition of Appium Pro is in many ways the sequel to the earlier article on how to batch Appium commands together using Execute Driver Script. In that article, we saw one way of getting around network latency, by combining many Appium commands into one network request to the Appium server.

When using a cloud service, however, there might be other network-related issues to worry about. Many cloud services adopt the standard WebDriver/Appium client/server model for running Appium tests. But because they host hundreds or thousands of devices, they'll be running a very high number of Appium servers. To reduce complexity for their users, they often provide a single entry point for starting sessions. Users' requests all come to this single entry point, and are proxied on to the appropriate Appium server based on the user's authentication details and the session ID. In these scenarios, the single entry point acts as a kind of Appium load balancer.

This model is great for making it easy for users to connect to the service. But it's not necessarily so great from a test performance perspective, because it puts an additional HTTP request/response in between your test client and the Appium server which is ultimately handling your client's commands. How big of a deal this is depends on the physical arrangement of the cloud service. Some clouds keep their load balancers and devices all together within one physical datacenter. In that case, the extra HTTP call is not expensive, because it's local to a datacenter. Other cloud providers emphasize geographical and network distribution, with real devices on real networks scattered all over the world. That latter scenario implies Appium servers also scattered around the world (since Appium servers must be running on hosts physically connected to devices). So, if you want both the convenience of a single Appium endpoint for your test script plus the benefit of a highly distributed device cloud, you'll be paying for it with a bunch of extra latency.

Well, the Appium team really doesn't like unnecessary latency, so we thought of a way to fix this little problem, in the form of what we call direct connect capabilities. Whenever an Appium server finishes starting up a session, it sends a response back to your Appium client, with a JSON object containing the capabilities the server provides (usually it's just a copy of whatever capabilities you sent in with your session request). If a cloud service implements direct connect, it will add four new capabilities to that list:

  • directConnectProtocol
  • directConnectHost
  • directConnectPort
  • directConnectPath

These capabilities encode the location and access information of a non-intermediary Appium server: the one actually handling your test. Your client connected to the Appium load balancer, so it doesn't know anything about the host and port of that non-intermediary server. These capabilities give your client that information, and if your client also supports direct connect, it will parse them automatically and ensure that each subsequent command gets sent not to the load balancer but directly to the Appium server handling your session. At this point in time, the official Appium Ruby and Python libraries support direct connect, as well as WebdriverIO, with support for other clients coming soon.
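
For illustration, a new session response from a direct-connect-enabled cloud might look something like this (the host and values here are entirely hypothetical):

{
  "value": {
    "sessionId": "f1ab7b80-2a94-4a3e-9f2b-0d1c6e8a7b55",
    "capabilities": {
      "platformName": "iOS",
      "directConnectProtocol": "https",
      "directConnectHost": "device-host-42.example-cloud.io",
      "directConnectPort": 443,
      "directConnectPath": "/wd/hub"
    }
  }
}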

The upshot is that for every command after session initialization, HTTP requests are made directly to the final Appium server, not to the load balancer.

The most beautiful thing about this whole feature is that you barely need to know about direct connect for it to work! It's a passive client feature that works as long as the Appium cloud service you use has implemented it on its end. That said, because it's a new feature all around, you may have to turn on a flag in your client to signal that you want to use it when available. (For example, in WebdriverIO, you'll need to add the enableDirectConnect option to your WebdriverIO config file or object, as in the sketch below.) Beyond this, it's all automatic!
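
Here's what that might look like in a WebdriverIO config (a sketch; the hostname and path are placeholders for your cloud's actual endpoint):

// wdio.conf.js
exports.config = {
    protocol: 'https',
    hostname: 'appium-lb.example-cloud.io',
    port: 443,
    path: '/wd/hub',
    // opt in to following the directConnect* capabilities when the server returns them
    enableDirectConnect: true,
    // ...capabilities, specs, and the rest of your config
};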

The only other thing you might need to worry about is your corporate firewall. If your security team has allowed connections through the firewall explicitly to the load balancer, but not to other hosts, then you may run into commands being blocked. In that case, either have your security team update the firewall rules, or turn off direct connect so your commands don't fail.

Direct Connect in Action

To figure out the actual, practical benefit of direct connect, I again engaged in some experimentation using HeadSpin's device cloud (HeadSpin helped with implementing direct connect, and their cloud currently supports it).

Here's what I found when, from my office in Vancouver, I ran a bunch of tests, with a bunch of commands, with and without direct connect, on devices sitting in California and Japan (in all cases, the load balancer was also located in California):

Devices      Using Direct Connect?   Avg Test Time   Avg Command Time   Avg Speedup
California   No                      72.53s          0.81s              n/a
California   Yes                     71.62s          0.80s              1.2%
Japan        No                      102.03s         1.13s              n/a
Japan        Yes                     70.83s          0.79s              30.6%

Analysis

What we see here is that, for tests I ran on devices in California, direct connect added only a marginal benefit. There was no downside, so it's still a nice little bump, but because Vancouver and California are pretty close, and because the load balancer was geographically quite close to the remote devices, we don't gain very much.

Looking at the effects when the devices (and therefore Appium server) are located much further away, we see that direct connect provides a very significant speedup of about 30%. This is because, without direct connect, each command must travel from Vancouver to California and then on to Japan. With direct connect, we not only cut out the middleman in California, but we also avoid constructing another whole HTTP request along the way.

Test Methodology

(The way I ran these tests was essentially the same as the way I ran the tests for the earlier article on Execute Driver Script.)

  • These tests were run on real Android devices hosted by HeadSpin around the world on real networks, in Mountain View, CA and Tokyo, Japan.
  • For each test condition (location and use of direct connect), 15 distinct tests were run.
  • Each test consisted of a login and logout flow repeated 5 times.
  • The total number of Appium commands, not counting session start and quit, was 90 per test, meaning 1,350 overall for each test condition.
  • The numbers in the table discard session start and quit time, counting only in-session test time (this means of course that if your tests consist primarily of session start time and contain very few commands, then you will get a proportionally small benefit from optimizing using this new feature).

Conclusion

You may not find yourself in a position where you need to use direct connect, but if you're a regular user of an Appium cloud provider, check in with them to ask whether they support the feature and whether your test situation might benefit from it. Because the feature needs to be implemented in the load balancer itself, it's not something you can take advantage of by using open source Appium directly (although it would be great if someone built support for direct connect as a Selenium Grid plugin!). Still, as use of devices located around the world becomes more common, I'm happy that we have at least a partial solution for eliminating unnecessary latency.

Optimizing WebDriverAgent Startup Time

Some Appium users have asked me how to speed up their iOS tests, citing the length of time it takes to start tests which use the WebDriverAgent library (all tests using the XCUITest driver).

Most of an Appium test's running time can't be reduced, since it is dominated by the base speed of booting simulators and performing the UI actions themselves. The slowest part users were asking me how to avoid is the initial startup of a test: the time between sending the first POST /session command and receiving the response indicating that your test script can begin sending commands. We'll call this period the "session creation" time.
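
If you want to measure session creation time in your own suite, a simple sketch is to time the driver constructor, since it blocks until the POST /session response arrives:

// rough measurement: the constructor returns only once the session exists
long start = System.currentTimeMillis();
IOSDriver<MobileElement> driver = new IOSDriver<>(new URL("http://localhost:4723/wd/hub"), caps);
System.out.println("Session creation took " + (System.currentTimeMillis() - start) + " ms");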

Let's start with the most basic iOS test:

private String APP = "https://github.com/cloudgrey-io/the-app/releases/download/v1.9.0/TheApp-v1.9.0.app.zip";

private IOSDriver<MobileElement> driver;

@Before
public void setUp() throws IOException {
   DesiredCapabilities caps = new DesiredCapabilities();
   caps.setCapability("platformName", "iOS");
   caps.setCapability("platformVersion", "12.2");
   caps.setCapability("deviceName", "iPhone Xs");
   caps.setCapability("automationName", "XCUITest");

   caps.setCapability("app", APP);

   driver = new IOSDriver<MobileElement>(new URL("http://localhost:4723/wd/hub"), caps);
}

@After
public void tearDown() {
   try {
       driver.quit();
   } catch (Exception ign) {}
}

@Test
public void testA() {
   assertEquals(1,1);
}

@Test
public void testB() {
   assertEquals(1,1);
}

@Test
public void testC() {
   assertEquals(1,1);
}

@Test
public void testD() {
   assertEquals(1,1);
}

@Test
public void testE() {
   assertEquals(1,1);
}

@Test
public void testF() {
   assertEquals(1,1);
}

This is basically the default template we use for an iOS test in the AppiumPro sample code repository. We have our sample app hosted at a web URL for convenience, a @Before step which creates a fresh session for each test, and an @After step which deletes the session at the end of each test, followed by six tests which do nothing.

Each of the six tests takes 12.8 seconds on average. We can cut this down by two thirds!

There are desired capabilities we can specify to greatly reduce the time it takes to create a session. Appium is built to cater to a large number of devices in many different situations, while also making it easy to get started automating your first test. So, for every desired capability you don't specify, Appium analyzes the state of your system and chooses a sensible default value. By being more specific, we can have Appium skip the work it does to choose those defaults.

Our first improvement is to set the app location to a file already on the machine running Appium. Downloading the app from a remote source adds 3.9 seconds to each test in the suite.

private String APP = "/Users/jonahss/Workspace/TheApp-v1.9.0.app.zip";

Your test suite probably already does things this way, but for our demo repository this really speeds things up.

Running the tests, it's easy to notice that the app gets reinstalled on the simulator for each test. This takes a lot of time, and can be skipped. You may have certain tests which require a fresh install, or need all the app data cleaned, but those tests could be put into a separate suite, leaving the majority of tests to run faster by reusing the same app. Most users should be familiar with the noReset desired capability.

caps.setCapability("noReset", true);

This saves an additional 2.9 seconds per test.

That was the easy stuff, giving us an average startup time of 6 seconds per test, but we can shave off another 2.1 seconds, which is a 35% improvement.

Appium uses the simctl command-line tool provided by Apple to match the deviceName desired capability to the udid of a simulator. We can skip this step by specifying the simulator udid ourselves. I looked through the logs of a previously run test and took the udid from there:

caps.setCapability("udid", "009D8025-28AB-4A1B-A7C8-85A9F6FDBE95");

This saves 0.4 seconds per test.
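
If you'd rather not dig through old logs, you can also list the udids of available simulators with Apple's simctl tool:

xcrun simctl list devices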

Appium also gets the bundle ID for your app by parsing the app's plist. We can supply the known bundle ID, again taking it from the logs of the last successful test. This ends up saving us another 0.1 seconds:

caps.setCapability("bundleId", "io.cloudgrey.the-app");

When loading the WebDriverAgent server, Appium reads the files from wherever Xcode saved them after compilation. This location is called the "Derived Data Directory," and Appium executes an xcodebuild command in order to find it. Again, we can look through the logs of the last test run and supply this value ourselves, allowing Appium to skip the work of calculating it:

caps.setCapability("derivedDataPath", "/Users/jonahss/Library/Developer/Xcode/DerivedData/WebDriverAgent-apridxpigtzdjdecthgzpygcmdkp");

This saves a whopping 1.4 seconds. While xcodebuild is useful, it hasn't been optimized to supply this bit of information.

The last optimization is to specify the webDriverAgentUrl desired capability. If specified, Appium skips a step where it checks to make sure that there are no obsolete or abandoned WebDriverAgent processes still running. The WebDriverAgent server needs to already be running at this location, so we can only use this desired capability after the first test starts the server.

caps.setCapability("webDriverAgentUrl", "http://localhost:8100");

So what have we done? Using a local file and noReset reduced the base test time from 12.8 seconds to 6 seconds, making test startup 53.3% faster.

On top of that, we performed some more obscure optimizations to improve test times another 23.9%, shaving off another 1.3 seconds.

Looking at a breakdown of what Appium spends its time doing during one of our tests, "remaining test time" and "session shutdown" are all that remain. Here's the fully optimized test:

private String APP = "/Users/jonahss/Workspace/TheApp-v1.9.0.app.zip";

private IOSDriver driver;
private static Boolean firstTest = true;

@Before
public void setUp() throws IOException {
   DesiredCapabilities caps = new DesiredCapabilities();
   caps.setCapability("platformName", "iOS");
   caps.setCapability("platformVersion", "12.2");
   caps.setCapability("deviceName", "iPhone Xs");
   caps.setCapability("automationName", "XCUITest");

   caps.setCapability("app", APP);

   caps.setCapability("noReset", true);
   caps.setCapability("udid", "009D8025-28AB-4A1B-A7C8-85A9F6FDBE95");
   caps.setCapability("bundleId", "io.cloudgrey.the-app");
   caps.setCapability("derivedDataPath", "/Users/jonahss/Library/Developer/Xcode/DerivedData/WebDriverAgent-apridxpigtzdjdecthgzpygcmdkp");
   if (!firstTest) {
       caps.setCapability("webDriverAgentUrl", "http://localhost:8100");
   }

   driver = new IOSDriver<MobileElement>(new URL("http://localhost:4723/wd/hub"), caps);
}

@After
public void tearDown() {
   firstTest = false;
   try {
       driver.quit();
   } catch (Exception ign) {}
}

@Test
public void testA() {
   assertEquals(1,1);
}

//... 5 more tests

For all these tests, I kept the same simulator running. Starting up a simulator for the first test takes some extra time. I also noticed that the first test of every suite always takes a little bit longer even when the simulator is already running. For the timing statistics in this article, I omitted the first test from each run. None of them were that much longer than the average test, and the goal is to reduce total test suite time. If you have 100 tests, an extra second on the first one doesn't impact the total suite time much.

That said, all this is splitting hairs, since as soon as your tests contain more than a few commands, the amount of time spent on startup is insignificant compared to the amount of time spent finding elements, waiting for animations, and loading assets in your app.

Full example code located here.

For situations where a CI server is running Appium tests for the first time, further optimizations can be made to reduce the time spent compiling WebDriverAgent for the first test.
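
For example, a CI step can compile WebDriverAgent once, and each session can then point at the prebuilt agent. A hedged sketch (usePrebuiltWDA is an XCUITest driver capability; the path below is a placeholder for wherever your CI build step put the compiled agent):

// skip xcodebuild compilation by reusing a prebuilt WebDriverAgent
caps.setCapability("usePrebuiltWDA", true);
caps.setCapability("derivedDataPath", "/path/to/ci/prebuilt/DerivedData");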