Original post from AppiumPro
Both Appium and Selenium are based on a client/server architecture, where test commands are triggered by a client that could be very far away from the server which actually performs the commands. This client/server architecture has important advantages: it allows Appium tests to be written in any language, and it makes it possible to build large private or public Appium device clouds.
There is one significant downside to this architecture, however, that can dramatically decrease test performance. Because every test command has to travel over the network between the client and server, every command is subject to the latency of that network, as well as any other “atmospheric” conditions. Networks are not perfectly reliable in general, and unanticipated slowness or request failure can occur at any point. This means that Appium scripts developed locally might behave significantly differently when run on an Appium cloud somewhere across the world, leading to heightened flakiness or test failures.
On top of this, tests are often written not by calling Appium commands directly, but by using framework-level functionality, which might encapsulate the use (and overuse) of many Appium commands, all of which add to test execution time. In some cases, I have seen test frameworks that make 5-10 requests to the server for every found element, in order to retrieve element metadata just in case it is useful later on. Apart from being a bad idea in general, this kind of approach can lead to dramatic differences in execution time when run in cloud environments.
More than general slowness, latency is also a killer for real-time automation. If you need to be certain that command B happens a very short time after command A, then sending command B across the global Internet is not going to deliver that guarantee. This is one reason the W3C WebDriver spec team decided to build the new Actions API in a form where the entire action chain is encoded as a single API call, even though the chain might take seconds or minutes to actually execute once the action begins.
Execute Driver Script
The Appium team has now done the same thing, not just for actions, but for any Appium commands at all. Essentially, we have created a single Appium command that allows you to pack as many other Appium commands inside it as you want. All these commands will be executed on the Appium server itself, so will not be subject to network latency. How does this work? It’s magic, obviously! Imagine we have this test script written in the Java client:
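Such a script might look like the following sketch (the locator names are hypothetical, and `driver` is assumed to be an already-created AndroidDriver session, per the java-client API):

```java
import io.appium.java_client.MobileBy;
import io.appium.java_client.android.AndroidDriver;

public class LoginLogoutFlow {
    // Hypothetical login/logout flow; every line below is a separate
    // HTTP round-trip from the test client to the Appium server.
    static void loginAndLogout(AndroidDriver<?> driver) {
        driver.findElement(MobileBy.AccessibilityId("username")).sendKeys("alice");
        driver.findElement(MobileBy.AccessibilityId("password")).sendKeys("mypassword");
        driver.findElement(MobileBy.AccessibilityId("loginBtn")).click();
        driver.findElement(MobileBy.AccessibilityId("logoutBtn")).click();
    }
}
```

Run against a far-away server, each of those four commands (plus the element finds behind them) pays the full network round-trip cost.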
As you can probably tell, it's a very straightforward login/logout set of commands. If necessary, we could run all these commands in one go, as a batch, using the new Execute Driver Script command.
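Here is a sketch of what that batch call could look like from Java, assuming the `executeDriverScript` method exposed by recent versions of Appium's java-client (element locators are hypothetical). The snippet just builds the script string and prints a summary rather than contacting a live server:

```java
public class BatchLoginExample {
    public static void main(String[] args) {
        // The batch is plain WebdriverIO code sent as a single string. Inside
        // the server-side VM, `driver` is a WebdriverIO session object, and
        // '~foo' is an accessibility ID locator. (Locator names are hypothetical.)
        String script = String.join("\n",
            "await driver.$('~username').setValue('alice');",
            "await driver.$('~password').setValue('mypassword');",
            "await driver.$('~loginBtn').click();",
            "await driver.$('~logoutBtn').click();",
            "// anything returned here comes back to the Java test",
            "return await driver.$('~username').getText();");

        // With a live Appium session, this single call replaces the whole
        // command-by-command flow (assumption: java-client 7.x or newer):
        //   Object result = driver.executeDriverScript(script);

        System.out.println(script.split("\n").length + " lines in one request");
    }
}
```

However many commands the script contains, the client pays for exactly one network round-trip.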
Uh oh! Isn't that the definition of a remote code execution vulnerability? Yes! So we need to say a couple of words about security. First, because there is no way to know what kind of junk a user might send in with this command, the server must be started in a special mode that allows this feature explicitly:
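With Appium 1.x, that means opting in to the corresponding insecure feature when launching the server:

```shell
# Explicitly enable the Execute Driver Script insecure feature
appium --allow-insecure=execute_driver_script
```

Without this flag, calls to the command are rejected by the server.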
Secondly, all code is run within a NodeJS VM, which means it does not share an execution context with the main Appium process. In fact, we can tightly control what methods the executing code has access to, and we give access to basically nothing except a `driver` object. What is this `driver` object? It is a WebdriverIO session object attached to the currently-running Appium session, which means the code inside your batch is written as WebdriverIO code. Notice, for example, the `driver.$` method (which is WebdriverIO's equivalent of `findElement`), or the fact that accessibility ID locators are defined by putting `~` at the front. You can also return text, data, or even elements inside your code string, and the result will be fully usable from within the parent script.
Execute Driver Script In Action
I wanted to get a good idea of the impact of the Execute Driver Script on test execution times, so I ran a bunch of experiments on the only Appium cloud provider which currently supports this feature: HeadSpin. My test methodology is detailed below, but here are the results (in all cases, the client location is Vancouver, Canada):
| Server | Using Execute Driver? | Avg Test Time | Avg Command Time | Avg Speedup |
|---|---|---|---|---|
| Mountain View, CA | No | 72.53s | 0.81s | |
| Mountain View, CA | Yes | 43.15s | 0.48s | 40.5% |
In the case of local execution, use of Execute Driver Script does not deliver much of an improvement, and this is expected: when client and server are already on the same network, there is basically no time lost to latency. What we see when the Appium server is located somewhere else in the world is much more drastic. Mountain View, CA is much closer to my office in Vancouver than Tokyo is, and that is reflected in the ~30% difference in the control case for each location. This difference is almost entirely due to latency, and highlights exactly the problem with the client/server model when deployed this way: about 30 seconds lost per test when the command count is high (in this case, 90 commands per test).
When I adjusted my script to use Execute Driver Script exclusively, so that all 90 commands were contained within one batch, test time became a low, roughly constant number across all environments. Since I'm making just one network call, latency due to geographic distribution becomes a negligible factor, reducing test time by 40-60%! Of course, your results with this feature will vary greatly based on any number of factors, including the number of commands you put into the batch call. I am also not recommending that every command be stuffed into one of these Execute Driver Script calls, merely demonstrating the performance improvements, which might be relevant for a use case you encounter.
- These tests were run on real Android devices hosted by HeadSpin around the world on real networks, in Mountain View, CA and Tokyo, Japan. (Locally, the tests were run on an emulator and an Appium server running on my busy laptop, and thus should not be compared in absolute terms to the real devices.)
- For each test condition (location and use of Execute Driver Script), 15 distinct tests were run.
- Each test consisted of a login and logout flow repeated 5 times.
- The total number of Appium commands, not counting session start and quit, was 90 per test, meaning 1,350 overall for each test condition.
- The numbers in the table discard session start and quit time, counting only in-session test time (this means of course that if your tests consist primarily of session start time and contain very few commands, then you will get a proportionally small benefit from optimizing using this new feature).
Execute Driver Script is a new Appium feature that is especially useful when running your tests in a distributed context. If a cloud server or device is located across the world from you, each command will take longer than it would if the server were close. The farther away the device, the longer your command will take. The administrator of such a distributed cloud can opt to turn on the Execute Driver Script feature in Appium, to allow their users to batch commands as a way of avoiding tons of unnecessary, latency-filled back-and-forth with the server. This gives users the advantage of a geographically distributed cloud (whether the user wants geographic distribution for its own sake or because that is simply where the devices and servers happen to be located), without the typical latency cost associated with it. Of course, this is an advanced feature that you should only use to solve specific problems!
If you want to see that Java code in the context of the full project, you can check it out on GitHub. Other Appium clients also support this new command, including WebdriverIO (so you can have WebdriverIO-ception!).