
Making Your Appium Tests Fast, Repeatable, and Reliable

Appium tests often get a bad rap for being flakey. It's true that functional tests have a more complex job than unit tests, and are subject to greater environmental difficulty. However, it's not true that you can't do anything about it! In this unit we look at a huge number of techniques to make your Appium test suite stable and robust.

Making your Appium tests fast and reliable

Part 1

Let's face it, Appium tests have sometimes been accused of being slow and unreliable. In some ways the accusation is true: there are fundamental speed limits to the automation technologies Appium relies on, and in the world of full-fledged functional testing there are a host of environmental problems which can contribute to test instability. In other ways, the accusation is misplaced, because there are strategies we can use to make sure our tests don't run into common pitfalls.

"Flakiness"

No discussion of functional test reliability would be complete without addressing the concept of "flakiness". According to common usage, "flakey" is synonymous with "unreliable"---the test passes some times and fails other times. The blame here is often put on Appium---if a test passes once when run locally, surely any future failures are due to a problem with the automation technology? It's a tempting position to take, especially because it takes us (the test authors) and our apps out of the crosshairs of blame, and allows us to place responsibility on something external that we don't control (scapegoat much?).

In reality, the situation is much more complex. It may indeed be the case that Appium is responsible for unreliable behavior, and regrettably this does happen in reality. But without an investigation that proves this to be the case for your particular test, the problem may with equal probability lie in any number of other areas, for example:

  • Unwarranted assumptions made by the test author about app or device speed, app state, screen size, or dynamic content
  • App instability (maybe the app itself exhibits erratic behavior, even when used by a human!)
  • Lack of compute or memory resources on the machine hosting a simulator/emulator
  • The network (sometimes HTTP requests to your backend just fail, due to load or issues outside of your team's control)
  • The device itself (as we all know, sometimes real devices just do odd things)

Furthermore, even if it can be proved that none of these areas are problematic, and therefore "Appium" is responsible, what does that mean? Appium is not one monolithic beast, even though to the user of Appium it might look that way. It is in fact a whole stack of technologies, and the erratic behavior could exist at any layer. To illustrate this, have a look at this diagram, showing the various bits of the stack that come into play during an iOS test:

The part of this stack that the Appium team is responsible for is really not that deep. In many cases, the problem lies deeper, potentially with the automation tools provided by the mobile vendors (XCUITest and UiAutomator2 for example), or some other automation library.

Why go into all this explanation? My main point isn't to take the blame away from Appium. I want us to understand that when we say a test is "flakey", what we really mean is "this test sometimes passes and sometimes fails, and I don't know why". Some testers are OK stopping there and allowing the build to be flakey. And it's true that some measure of instability is a fact of life for functional tests. But I want to encourage us not to stick our heads in the sand---the instability we can't do anything about is relatively small compared to the flakiness we often settle for out of an avoidance of a difficult investigation.

My rule of thumb is this: only allow flakey tests whose flakiness is well understood and cannot be addressed. This means, of course, that you may need to get your hands dirty to figure out what exactly is going on, including coming to the Appium team and asking questions when it looks like you've pinned the problem down to something in the automation stack. And it might mean ringing some alarm bells for your app dev team or your backend team, if as a result of your investigation you discover problems in those areas. (One common problem when running many tests in parallel, for example, is that a build-only backend service might be underpowered for the number of requests it receives during testing, leading to random instability all over. The solution here is either to run fewer tests at a time, or better yet, get the backend team to beef up the resources available to the service!)

Like any kind of debugging, investigations into flakey tests can be daunting, and are led as much by intuition as by method. If you keep your eyes open, however, you will probably make the critical observations that move your investigation forward. For example, you might notice that a certain kind of flakiness is not isolated to one test, but rather pops up across the whole build, seemingly randomly. When you examine the logs, you discover that this kind of flakiness always happens at a certain time of day. This is great information to take to another team, who might be able to interpret it for you. "Oh, that's when this really expensive cron job is running on all the machines that host our Android emulators!", for example.

We'll dig into the topic of debugging failed tests in a future part in this series. For now, my concrete recommendation for handling flakiness in a CI environment in general is as follows:

  1. Determine whether a test is flakey before permanently adding it to your build. In a perfect world, this would look like automatically catching any functional test that's being committed, and running it many times (maybe 100?) to build a reliability profile for it. If it passes 100% of the time, great! Merge that commit to master and off you go.
  2. If the test doesn't always pass, it's unreliable or flakey. But we can't stop there, because "flakey" is a codeword for ignorance. Time to dig in and find out why it's unreliable. Usually with a little investigation it's possible to see what went wrong, and perhaps adjust an element locator or an explicit wait to handle the problem. At this stage, Appium logs and step-by-step screenshots are essential.
  3. Once you discover the cause of flakiness, you'll either be able to resolve the flakiness or not. If you can do something to resolve it, it is incumbent on you to do so! So get that test passing 100% of the time. If you determine that there's nothing you can do about it (and no, filing a bug report to Appium or to Apple is not "nothing"), you have two options: either forfeit the test if it's not going to provide more value than headache in the long run, or annotate it such that your CI system will retry the test once or twice before considering it failed. (Or only run it before a release, when you have time to manually verify whether a failure is a "flake").
  4. If you take the approach of keeping the test in your build and allowing the build to retry it on failure, you must track statistics about how much each test is retried, and have some reliability threshold above which a new investigation is triggered. You don't want tests creeping up in flakiness over time, because that could be a sign of a real problem with your app.
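For readers who want to see the shape of the retry-on-failure idea from step 3 outside any particular CI system, here is a minimal, framework-agnostic sketch. It is an illustration only; `RetryExample` and `retry` are hypothetical names, and in practice you would lean on your test runner's or CI system's own retry mechanism rather than rolling your own.

```java
import java.util.concurrent.Callable;

// Minimal, framework-agnostic sketch of the "retry a flakey test" idea.
// RetryExample and retry are hypothetical names; a real suite would use its
// test runner's or CI system's retry annotation instead.
public class RetryExample {

    // Run the action up to maxAttempts times, returning its result on the
    // first success; if every attempt fails, rethrow the last failure.
    public static <T> T retry(Callable<T> action, int maxAttempts) {
        RuntimeException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                // a real runner would also log/track this for flakiness stats
                lastFailure = (e instanceof RuntimeException)
                        ? (RuntimeException) e : new RuntimeException(e);
            }
        }
        throw lastFailure;
    }

    public static void main(String[] args) {
        // Simulate a test that fails twice before passing
        int[] calls = {0};
        String result = retry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new RuntimeException("flakey failure");
            return "passed";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Note how this connects to step 4: anything retried like this should also be counted, so that a rising retry rate triggers a fresh investigation.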

Keep in mind that Appium tests are functional tests, and not unit tests. Unit tests are hermetically sealed off from anything else, whereas functional tests live in the real world, and the real world is much more messy. We should not aim for complete code coverage via functional testing. Start small, by covering critical user flows and getting value out of catching bugs with a few tests. Meanwhile, make sure those few tests are as rock solid as possible. You will learn a lot about your app and your whole environment by hardening even a few tests. Then you'll be able to invest that learning into new tests from the outset, rather than having to fix the same kinds of flakiness over and over again down the road.

Part 2: Finding Elements

One of the biggest surface indicators of instability or flakiness is an element not being found. Finding elements is a natural place for problems to arise, because it is when we try to find an element that our assumption about an app's state and its actual state are brought together (whether in harmony or in conflict). We certainly want to avoid potential problems in finding elements which we can do something about, for example using selectors that are not unique, or trying to find elements by some dynamic attribute which cannot be relied on. This means that knowledge of your app and its design is essential. What is likely to change? What isn't? Which elements have accessibility IDs?

Locator Strategies

Before going further, it's worth saying what we mean by "finding elements", and "accessibility ID", for example. In Appium (as with Selenium), actions can be taken on specific objects in the app UI. These objects (corresponding to elements in a webpage, hence the name of the findElement API command) must be "found" before it is possible to interact with them. There are different ways of finding elements. Take a look at the example call below:
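A minimal version of that call, using the Selenium Java client (and assuming a driver session has already been configured), would look something like this:

```java
WebElement el = driver.findElement(By.className("Button"));
```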

In this example, By.className represents a so-called "locator strategy" called "class name", and Button represents a "selector" which the strategy uses to find one or more elements. The result of this call is (if all goes well) an object of type WebElement, which comes with the rich set of interaction APIs you rely on for your testing.

"Class name" is just one of a number of locator strategies available in Appium, and refers to a platform-specific UI object class name, for example XCUIElementTypeButton or android.widget.Button. Already you can see that perhaps this locator strategy isn't always ideal; what if you are testing a cross-platform app? Would you need to have a different set of code to find an iOS button or an Android button? If you rely on the "class name" locator strategy, the answer is yes.
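To make that drawback concrete, here is a sketch of the kind of branching the "class name" strategy forces on a cross-platform suite; the `isIOS` flag and `driver` object are assumptions, standing in for whatever platform detection your framework provides:

```java
// Hypothetical branching for a cross-platform suite; isIOS is an assumed
// flag your framework would set based on the platform under test
By buttonLocator = isIOS
    ? By.className("XCUIElementTypeButton")
    : By.className("android.widget.Button");
WebElement button = driver.findElement(buttonLocator);
```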

There's another problem with this strategy: it's often the case that there is more than one element of any given type in the hierarchy. Thus you might very well find a button with this locator strategy, but will it be the button you want? So we could say that the "class name" locator strategy is not a good choice because it is platform-specific (leads to branched iOS and Android code), and too general (hard to uniquely identify an element with). What other options are there? Have a look at this table of the full set:

As you can see, many of the locator strategies were carried over from Selenium, though not all are supported or even make sense in Appium (at least when automating a native app). Appium has also introduced a number of its own strategies, such as "accessibility id", to reflect the fact (and take advantage of the fact) that we're dealing with mobile app UIs and an entirely different automation stack.

XPath

In another Appium Pro article, I go into detail about the XPath locator strategy, and why it should be avoided. To summarize here, many people find the XPath strategy attractive, because it guarantees that any element in the UI can be found. The problem is that some elements can only be found using selectors that are "brittle", meaning they are liable to find no element, or a different element, if anything changes in your app's design. XPath can also be slow with Appium, because it sometimes entails multiple recursive renderings of the UI hierarchy.

Accessibility ID

What should we use instead? When possible, I recommend using the "accessibility ID" locator strategy, because it is (a) cross-platform, (b) unique, and (c) fast. Both iOS and Android have the concept of an accessibility label, though on iOS it's called "accessibility ID" and on Android it's called "content description" (or "content-desc"). Since the accessibility label is a string set by developers, it can be a unique identifier (if the developers use it that way, which I recommend they do unless it interferes with actual accessibility considerations). In the Appium Java client, finding elements by accessibility ID involves using the MobileBy strategy:

WebElement el = driver.findElement(MobileBy.AccessibilityId("foo"));

Since testers don't always have the ability to influence the app's development, sometimes accessibility labels are not available, or are not unique. What else could we use?

iOS-specific Locator Strategies

In the same Appium Pro article I referenced earlier, I went into detail on some iOS-specific locator strategies that could be used as a substitute for XPath, because they are hierarchical query-based strategies. The most robust is the "-ios class chain" strategy, which allows you to use a "lite" version of something like XPath, mixed together with iOS predicate format strings.

The benefit of this locator strategy is that it allows for complex queries while remaining in most cases much speedier than XPath. The drawback, of course, is that it is platform-specific, so requires branching your code (or adding further distinctions to your object models). As an example of what you can do, check out this command:

String selector = "**/XCUIElementTypeCell[`name BEGINSWITH \"C\"`]/XCUIElementTypeButton[10]";
driver.findElement(MobileBy.iOSClassChain(selector));

What we're doing here is finding the 10th button which is a child of a table cell anywhere in the UI hierarchy which has a name beginning with the character "C". That's quite the query! Because of the more rigid form of class chain queries, the performance guarantees are better than those of XPath.

Android-specific Locator Strategies

A similar trick is available for Android, in the guise of a special parser the Appium team implemented which supports most of the UiSelector API. We make this parser available via the "-android uiautomator" locator strategy, and the selectors should be strings which are valid bits of Java code beginning with new UiSelector(). Let's have a look at an example:

String selector = "new UiSelector().className(\"ScrollView\").getChildByText(new UiSelector().className(\"android.widget.TextView\"), \"Tabs\")";
driver.findElement(MobileBy.AndroidUIAutomator(selector));

Once again, we make use of a MobileBy strategy since this strategy is available only for Appium. What's going on here is that we have constructed a string which could be used as valid UiAutomator test code, but in fact will be parsed and interpreted by Appium when the command is sent. According to the semantics of the UiSelector API, we're saying that we want the first TextView element we find with the text "Tabs", which is also a child of the first ScrollView in the hierarchy. It's a bit clunkier than XPath, but it can be used in similar ways, and again with a better performance profile in most cases.

As with the iOS class chain strategy, the main downside here is that selectors are going to be platform-specific. (Additionally, we can't support arbitrary Java and there are limits to what we can provide from the UiSelector API).

Determining Which Selectors to Use

So far we've seen some good recommendations on which strategies to use to find elements reliably. But how do you know which selectors to use in conjunction with those strategies? I said before that knowledge of your app is required in order to do this correctly. How do you get that knowledge of your app? If you're one of the app developers, you can simply have a look at the code, or maybe you remember that you gave a certain element a certain accessibility label. If you don't have access to the code, or if you want a method that will show you exactly what Appium sees in your app, then it's best to use Appium Desktop.

Appium Desktop is a GUI tool for running Appium and inspecting apps. You can use it to launch "inspector sessions" with arbitrary desired capabilities. Inspector sessions show you a screenshot of your app, its UI hierarchy (as XML), and lots of metadata about any element you select. It looks like this:

One of the great things about the Inspector is that, when you click on an element in the hierarchy, it will intelligently suggest locator strategies and selectors for you. In the image above, you can see that the top suggestion for the selected element is the "accessibility id" locator strategy, used in conjunction with the selector "Login Screen".

Things can get a bit more complex, certainly, but the Appium Desktop Inspector is always a great place to start when figuring out what's going on with your app hierarchy. It's especially useful if you run into issues where you think an element should exist on a certain view: just fire up the Inspector and manually look through the XML tree to see if in fact the element exists. If it doesn't, that means Appium (read: the underlying automation frameworks) can't see it, and you'll need to ask your app developer why.
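Incidentally, the same XML tree the Inspector displays can also be dumped from within a test script, which is handy for a quick programmatic check. This sketch assumes an active driver session, and "Login Screen" is just a stand-in for whatever identifier you expect to find:

```java
// Dump the UI hierarchy Appium sees and check for the element's identifier
String xml = driver.getPageSource();
System.out.println(xml.contains("Login Screen"));
```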

And that concludes our discussion of finding elements reliably in Appium---or at least one aspect of it. Just because you can find an element with the correct locator strategy doesn't mean it will always be there when you look.

Part 3: Waiting for App States

Using bad locator strategies is one cause of elements in your app not being found, contributing to flakiness and unreliability. But even when you're using the right locator strategies, and are guaranteed to find the same element every time you look, that doesn't mean the element will be there right when you look! What happens when you try to find an element which just isn't there? This is a remarkably common issue, and it's part of a more general phenomenon in functional tests: race conditions due to poor assumptions about app state.

Race Conditions

A race condition exists when two processes or procedures are operating simultaneously, and either one could finish before the other. This is a problem in automated test scripts because our code usually handles only one of the possible orderings. So in the alternative world, where the race resolves in the order we didn't anticipate, our test will fail.

To use our example of finding elements: our code usually implicitly assumes that the element will be present before we try to find it. In reality, the request to find an element and the app's own process of working to display the element are in a race. As human users of the app, we know how to gracefully lose the race: we simply wait! If we want to tap a button and it's not yet on the screen, we wait for it to show up (for a few seconds, anyway, then we get bored and open Twitter instead).

Appium is less graceful, and does exactly what the client tells it to do, even if that means trying to find an element before it has been properly rendered on the screen. This is just one example of many possible examples in the category of test code assuming the app is in a certain state, but being proven wrong. It can be a particularly vexing problem because it might only show up infrequently, or only in CI environments. When we develop test code locally, we can often be tricked into making all kinds of assumptions about how races will resolve. Just because the app always wins a race (like we expect) when testing locally does not mean the app will behave the same in other environments.

Waiting for App States

The solution is to teach our test code to gain a bit of the grace of the human loser, and not blow up with exceptions just because the state wasn't what it expected. There are three basic ways to wait in your Appium scripts.

Static Waits

Waiting "statically" just means applying a lot of good old Thread.sleep all over the place. This is the brute force solution to a race condition. Is your test script trying to find an element before it is present? Force your test to slow down by adding a static sleep! Taking the login test which is very familiar to Appium Pro readers, this is what it would look like using static waits:

@Test
public void testLogin_StaticWait() throws InterruptedException {
   Thread.sleep(3000);
   driver.findElement(loginScreen).click();

   Thread.sleep(3000);
   driver.findElement(username).sendKeys(AUTH_USER);
   driver.findElement(password).sendKeys(AUTH_PASS);
   driver.findElement(loginBtn).click();

   Thread.sleep(3000);
   driver.findElement(getLoggedInBy(AUTH_USER));
}

I chose 3 seconds as my static wait amount. Why did I choose that value? I'm not sure. It worked for me locally, and solved my race condition problems. Good enough, right? Not exactly. There are some major problems with this approach:

  • My test is now much longer than it needs to be (up to 9 seconds longer!), wasting my time, my build's time, and therefore my team's and my company's time. And time is money, friend!
  • I have staved off the chaos of a race condition... for now. Who's to say I won't wind up in some other scenario where 3 seconds won't be enough? What if one of the elements shows up based on a network request, and every so often the network is just a bit slower? My only recourse would be to keep increasing the static wait, thereby making the problem above even worse. And meanwhile I still can't sleep at night.
  • Unless I'm diligent at experimentation and commenting, no one will know why I picked the precise values that I did. Was it random or was there a reason?

So what else can we do?

Implicit Waits

Because the designers of Selenium were well aware of element-finding race conditions, they long ago added the ability for a client to set an "implicit wait timeout" on the Selenium server (a feature the Appium server copied). This timeout is remembered by the server and used in any instance of element finding. If an element can't be found instantly, the server will keep trying to find it up to the specified timeout. The same test implemented with implicit waits would look like:

@Test
public void testLogin_ImplicitWait() {
   driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);

   driver.findElement(loginScreen).click();
   driver.findElement(username).sendKeys(AUTH_USER);
   driver.findElement(password).sendKeys(AUTH_PASS);
   driver.findElement(loginBtn).click();
   driver.findElement(getLoggedInBy(AUTH_USER));
}

Wow! That code is a lot nicer, for one. We've also completely solved the problem of the ever-increasing waste of time we ran into with static waits. Because the server-side element-finding retry is on a pretty tight loop, we're guaranteed to find the element within (say) a second of when it actually shows up, meaning we waste very little time while simultaneously making our test much more robust.

There's still a problem or two with this approach, however:

  • Using implicit wait, we tend to set one timeout and forget about it. Inevitably, this timeout becomes pretty high because it has to be high enough to account for the slowest element we could validly wait for. This means that for other elements, which we know would never take as long to show up, we still end up wasting time waiting for them. In other words, we still want our find element command to fail relatively quickly in the case where an element truly never makes an appearance. We don't want to wait for a whole minute to decide when a few seconds would have done.
  • We've been focusing on waiting for elements, which is what implicit waits are designed around. But an element's presence is just one example of an app state that we might want to wait for. What about an element's text, or visibility? Implicit wait won't help us there.

Thankfully, there's an even more general solution that gets us past these (admittedly more minor) issues as well.

Explicit Waits

Explicit waits are just that: they make explicit what you are waiting for and how long you are willing to wait for it. At the cost of a little more verbosity, we get much more fine-grained control, and are able to teach our test script how to wait for just the right condition in our app before moving on. For example:

@Test
public void testLogin_ExplicitWait() {
   WebDriverWait wait = new WebDriverWait(driver, 10);

   wait.until(ExpectedConditions.presenceOfElementLocated(loginScreen)).click();
   wait.until(ExpectedConditions.presenceOfElementLocated(username)).sendKeys(AUTH_USER);
   wait.until(ExpectedConditions.presenceOfElementLocated(password)).sendKeys(AUTH_PASS);
   wait.until(ExpectedConditions.presenceOfElementLocated(loginBtn)).click();
   wait.until(ExpectedConditions.presenceOfElementLocated(getLoggedInBy(AUTH_USER)));
}

Here we use the WebDriverWait constructor to initialize a wait object with a certain timeout. We can reuse this object anytime we want the same timeout. We can configure different wait objects with different timeouts and use them for different kinds of waiting or elements. Then, we use the until method on the wait object in conjunction with something called an "expected condition".

An expected condition is simply a special method which returns an instance of an anonymous inner class whose apply method will be called periodically until it returns a non-null, non-false value. The ExpectedConditions class has a number of useful, pre-made condition methods. What's great about explicit waits, though, is that we're not limited to what comes in the box. We can make our own!

Custom Explicit Waits

If the app state we want to wait for is particularly complex, we can always make our own expected condition. For example, let's say that the click() command is terribly unreliable, and often it fails, even when our element is found. So what we want is to keep retrying both the find and click actions until they both succeed one after the other. We could make a custom expected condition, like so:

private ExpectedCondition<Boolean> elementFoundAndClicked(By locator) {
   return new ExpectedCondition<Boolean>() {
       @Override
       public Boolean apply(WebDriver driver) {
           WebElement el = driver.findElement(locator);
           el.click();
           return true;
       }
   };
}

We simply return a new ExpectedCondition and override the apply method with our particular logic. Then we can use this in our test code, for example as in this revision of the previous test:

@Test
public void testLogin_CustomWait() {
   WebDriverWait wait = new WebDriverWait(driver, 10);

   wait.until(elementFoundAndClicked(loginScreen));
   wait.until(ExpectedConditions.presenceOfElementLocated(username)).sendKeys(AUTH_USER);
   wait.until(ExpectedConditions.presenceOfElementLocated(password)).sendKeys(AUTH_PASS);
   wait.until(elementFoundAndClicked(loginBtn));
   wait.until(ExpectedConditions.presenceOfElementLocated(getLoggedInBy(AUTH_USER)));
}
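One caveat worth flagging: out of the box, WebDriverWait only retries through "not found" exceptions, so if click() throws some other kind of WebDriverException, the custom condition above would abort the wait rather than retry. Since WebDriverWait extends FluentWait, you can broaden the list of ignored exceptions:

```java
// Also retry through general WebDriver errors thrown by click(), not just
// failures to find the element
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.ignoring(WebDriverException.class);
wait.until(elementFoundAndClicked(loginScreen));
```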

Granted, this is a bit of a useless example, but it demonstrates how easy it is to create useful and reusable waits that your whole team can use. And this completes our tour of strategies for waiting for app states with Appium. Don't forget to take a look at the full code for the examples shown here, including all the setup and teardown boilerplate!

Part 4: Dealing with Unfindable Elements

One of the sad truisms of mobile testing is that we can't always get what we want. The stack is so complex, the tools so nascent, that we are often in a position of having to try weird hacks just to get our tests to work. Appium does everything it can to canonize the most reliable weird hacks as official endpoints, so you can just go about your day writing straightforward Appium client code. But sometimes, the world doesn't cooperate.

One common occurrence in this category is when an element is clearly visible on the screen, and yet cannot be found. You've printed out the page source. You've gone hunting in Appium Desktop, and sure enough, there's no sign of the element! Why could this be? There are a number of reasons, but usually it has to do with how your app is developed. For iOS and Android, Appium is fundamentally at the mercy of the automation engines it builds on (XCUITest and UiAutomator2, respectively). These engines use the accessibility layer of the OS to determine which elements are available for automation. Oftentimes, apps which employ custom UI components have neglected to register these custom components with the accessibility service. This means that said components are essentially invisible from the perspective of XCUITest/UiAutomator2, and hence un-automatable from the perspective of Appium.

If this describes your problem, and if you have the ability to change the app source code (or can convince someone else to), the solution is actually pretty straightforward: simply turn on accessibility for your custom UI controls! Both Apple and Google mention how in their docs, but it's easy to understand why a developer might forget to do this, since it doesn't affect the functionality of the app. Anyway, you can check out the technique at these links:

If changing the app code isn't an option, or if there is some other reason that your element is not findable (for example, React Native doing something goofy when you set your element's position to absolute), you might need to pull out the big guns. (Sorry, that sounds violent: I should say, you might need to pull out the big keyboard! Because coding happens on keyboards!) Regardless of metaphor, what am I referring to? Tapping by coordinates. That's right: with Appium you can always tap the screen at any x/y position you like, and the tap will (/should) work. If the UI control you are targeting is underneath that point, the net result will be the same as having called a findElement command and then calling click() on the resulting object.

There's a problem, though, which is that tapping by coordinates is notoriously brittle---hence these suggestions on how to do it right, appearing here in our series on test speed and reliability. But first, why is tapping by coordinates brittle? For all the same reasons XPath is probably not a good locator strategy, and more:

  • Different versions of your app might put the element in a different place on screen
  • Different dynamic data on the screen might shove the element around even within the course of a single automation session
  • Different screen sizes might make the element fall under a different coordinate

One tantalizing solution would be to find the element visually, meaning by comparison with a reference image. In fact, Appium can do this! But it's pretty new and advanced, and will be the subject of a future Appium Pro edition once all the pieces are firmly in place.

In the meantime, a first stab at making tapping at a coordinate more reliable would be to ensure our coordinate itself is dynamically calculated based on conditions. For example, if we know that our app renders in completely the same proportions across different screen sizes, we can solve the last problem in the list above by storing coordinates as percentages of screen width and height in our code, rather than storing actual x and y values. Then we determine the specific coordinates for a given test run only after making a call to driver.manage().window().getSize(), so we can turn the percentages back into coordinates.
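As a sketch of that idea, the conversion itself is simple arithmetic. The class and sample values below are hypothetical; in a real test the width and height would come from driver.manage().window().getSize():

```java
// Sketch of storing tap targets as screen-relative percentages. This assumes
// (as described above) that the app lays out in the same proportions on every
// screen; RelativeCoords and the sample values are hypothetical.
public class RelativeCoords {

    // Convert a relative (0.0-1.0) position into absolute pixel coordinates
    public static int[] toAbsolute(double xPct, double yPct,
                                   int screenWidth, int screenHeight) {
        return new int[] {
            (int) Math.round(xPct * screenWidth),
            (int) Math.round(yPct * screenHeight)
        };
    }

    public static void main(String[] args) {
        // e.g. a button we always expect at 50% across, 80% down the screen
        int[] point = toAbsolute(0.5, 0.8, 1080, 1920);
        System.out.println(point[0] + "," + point[1]); // prints 540,1536
    }
}
```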

This is a good start, but it still doesn't help us in the case where the element might have moved around for a variety of reasons. Another idea (which I recently saw discussed as a full blog post by Wim Selles) is to find the coordinates of your unfindable element via reference to an element which is findable.

The key insight of this tactic is that it's often the case that the position of the desired element is always fixed in relationship to some other element, which may be found using the normal findElement methods. We can use the size and position of the reference element to calculate absolute coordinates for the desired element. Of course, you'll have to observe your app's design closely to determine which features of the reference element are useful. For example, you might decide that the desired element is always 20 pixels to the right of the reference element. Or, it could be halfway between the right edge of the reference element and the right edge of the screen. Using a combination of the reference element's bounds and the screen dimensions, pretty much any position for your desired element can be targeted dynamically and reliably.

Let's take a look at a concrete example using The App. Currently, the initial screen of The App looks like this:

In reality, I can find all the elements I care about via Accessibility ID. But let's pretend that the "Login Screen" list item is not accessible, and I can't see it in the hierarchy at all. What I'm going to do is gather information about the element right below it ("Clipboard Demo"), and use that to tap the Login Screen element dead center. I will rely on the facts I know about my app, which is that "Login Screen" is a list item of the same dimensions as "Clipboard Demo", and appears directly before it in the list. All I need to do, then, is find the midpoint of "Clipboard Demo", and then subtract the height of the list item, and I'll have the coordinate for the middle of "Login Screen". Basic math, right? So let's have a look at the code (which assumes we've got driver and wait objects all ready for us):

// first, find our reference element
WebElement ref = wait
   .until(ExpectedConditions.presenceOfElementLocated(referenceElement));

// get the location and dimensions of the reference element, and find its center point
Rectangle rect = ref.getRect();
int refElMidX = rect.getX() + rect.getWidth() / 2;
int refElMidY = rect.getY() + rect.getHeight() / 2;

// set the center point of our desired element; we know it is one row above the
// reference element so we simply have to subtract the height of the reference element
int desiredElMidX = refElMidX;
int desiredElMidY = refElMidY - rect.getHeight();

// perform the TouchAction that will tap the desired point
TouchAction action = new TouchAction<>(driver);
action.press(PointOption.point(desiredElMidX, desiredElMidY));
action.waitAction(WaitOptions.waitOptions(Duration.ofMillis(500)));
action.release();
action.perform();

// finally, verify we made it to the login screen (which means we did indeed tap on
// the desired element)
wait.until(ExpectedConditions.presenceOfElementLocated(username));

This is a direct implementation of what I phrased in words above. The important commands are of course the commands for getting the bounds of the reference element (element.getRect()), and the commands for setting up an Appium TouchAction to natively tap on a coordinate. There are a few lines responsible for this latter purpose, but they're straightforward: press on a coordinate, wait for 500ms, and release. That's it!

Happily, this technique is cross-platform, so the above code will run on both the iOS and Android versions of my app without modification. And it is relatively robust and reliable. It will never be as reliable as clicking on an element found by Accessibility ID, of course, because in that case we have a guarantee of calling an underlying tap method on a platform-specific element reference. In most cases, that element reference will exist even if the app is reorganized spatially. But if we are forced out into the cold dark, beyond all ability to find elements, then the technique discussed here will help you at least retain a modicum of sanity.

If you're interested in a runnable example showing the above code working on both iOS and Android, simply check out the code for this edition!

Part 5: Setting Up App State

One of the unfortunate realities of functional testing is the "speed limit": we can only go as fast as the underlying technologies let us go. Taps take time. Sending keystrokes takes time. Waiting for an application to load data from the network or to finish animations takes time too. And that's kind of the point: we want functional testing to be "high-fidelity", meaning as close to real life as possible.

That's all well and good, except when we have multiple (or many) tests that rely on the same starting point. It's a real pain to wait for a test script to type into half a dozen boxes and tap half a dozen buttons before the meat of a test begins. Probably the most common scenario is logging in: a lot of what we test in our apps is likely behind a user login verification, which typically means finding and entering text into a few text boxes and tapping a button, not to mention waiting for the verification to take place over the network. It's not crazy to imagine that logging into an app could take as long as 20 or 25 seconds, depending on a variety of factors. Let's imagine for a moment that we have a suite of 50 tests which require logging in before they can start verifying their particular requirements. If math doesn't lie, then:

50 tests x 25 seconds of login each = 1,250 seconds, or roughly 21 minutes

In this scenario, setting up the logged-in state in our application is costing us some 20 minutes of build time (which might be real time if we're running in a single thread). Ouch. More than that, we're opening ourselves up to the potential of flakiness in that particular set-up flow, by running it many more times than we need to.

Sadly, there might not be a way to speed up the login steps using Appium itself (once it's established that the most efficient locator strategies are already in use, etc...). That doesn't mean we can't take a shortcut, though! What if we could start our test with the app already in its logged-in state? With a little help from the app developers, we can!

In this article, we're going to take a look at 5 ways to set up app state so that you don't have to use Appium to get to the point where you need to start your test. (Of course, don't forget to leave at least one test that does use Appium to set up the state, so you're actually covering it. You don't want your login functionality to break because none of your tests actually walk through the flow anymore!)

Technique 1: Custom Android Activities (Android-only)

If you're developing an Android app, you might be aware of the concept of Activities. Android apps may contain multiple activities, corresponding to different parts of the application that may need to be accessed directly (say, from a link in an e-mail).

It's possible to design your app in such a way that starting a certain activity sets up the appropriate state in your app. Essentially, a given activity is simply a trigger for state set-up. Appium makes it possible to start arbitrary activities in your app using the startActivity command, and passing in the package of your app along with the desired activity as parameters, for example:

driver.startActivity(new Activity("com.example.app", ".ActivityName"));

You can even use the appActivity desired capability to dump your session immediately into the specified activity upon start, so that from the very beginning your code will be executing in the context of that activity. Just don't forget to strip out any activities that are used for testing only in the production build of your app.
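As a sketch (the package and activity names here are hypothetical), the relevant capabilities might look like:

```json
{
  "platformName": "Android",
  "appPackage": "com.example.app",
  "appActivity": ".TestSetupActivity"
}
```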

Technique 2: Custom Launch Arguments (iOS-only)

A similar technique is possible with iOS, even though iOS apps do not have "activities". Instead, iOS apps can be launched with what are essentially command-line flags called "launch arguments". These flags can be read by the running application. You could easily design your application to parse the flags for a variety of data, including usernames and passwords to be used for an initial UI-less login.

The way to access this feature using Appium is via the processArguments capability. For example, depending on how we'd set up our app, we could imagine a schema like this:

{
 "processArguments": "-view login -viewparams 'my_username,my_password'"
}

Of course, the app developer would have to build the app in such a way that it knows what to do with these flags, and be sure to make the production version of the app ignore them.

Technique 3: The Test Nexus

A more straightforward approach that still involves UI automation using Appium is what I call the "Test Nexus" technique. I saw a great example of this from Wim Selles, who put the technique to good use at his company.

What's a Test Nexus? It's essentially a "kitchen sink" view in your app, only available in the test build, which contains links to a variety of other locations in your app, that come with attached state. So you could have a link that says "Logged-in Home", or one that reads "Cart With 1 Item". Tapping either of those links would trigger the app to take you to the appropriate view, with the appropriate state. This does mean the app developer would need to hook the links up to any necessary logic.

The advantage of this approach is that it is cross-platform, and easy enough for the Appium test author to simply find the right button and click on it, saving a lot of time in the process. As an example of what this Test Nexus could look like, here's a screenshot of Wim's app:

Technique 4: Deep Linking

Another cross-platform option is so-called "Deep Linking". This is an OS-level feature whereby clicking on a URL with a custom scheme (i.e., a vendor-specific scheme which differs from "http" or "https") actually takes you to your application. The URL is passed to the app, and thus the app can parse it for all kinds of data which then direct the user experience.

Used for testing, it's a powerful way to set up a very flexible URL structure that can be used to direct your test to anywhere you want in your app. It does require the app developer to have registered a custom scheme with the OS, and to have created some kind of URL "controller" that parses the URL and actually navigates the app to the correct view with the appropriate state. For example, you could have a URL that looks like:

yourapp://test/login/:username/:password

Which, when clicked, would open up your application in a state where the login verification has already been performed with the supplied username and password. What's great is that, using Appium, you don't need to find some way to click a link to this URL: you can use Appium's driver.get() command to trigger the URL to launch directly:

driver.get("yourapp://test/login/my_username/my_password");

I won't say more about Deep Linking here, because I wrote a whole Appium Pro article on that topic already; check it out for more info!

Technique 5: Application Backdoors

I mention this last technique because it is interesting, even though there's not really an easy way to enable it within Appium today. The idea is basically that your application can expose application-internal methods to the outside world. These would be methods that are useful for testing and setting up state in your app.

Your test script would then access these "backdoors" whenever it needed to set up state. At the moment this would have to be some kind of custom RPC channel between your test script and app, though there is an interesting proof-of-concept proposal to make something like this available within Appium itself (for Android). The idea was brought forward to the Appium community by Rajdeep Varma in his AppiumConf 2018 talk, and it's certainly a promising one!

In the meantime, I recommend checking out one of the first 4 techniques. Anything you can do to save tests having to cover the same ground over and over again just to set up state is worth it, even if it involves a painful back-and-forth with your development team. Even if you don't have control over the source code yourself, I think it's a relatively easy argument to make that a little up-front effort to create these state set-up mechanisms will pay off for everyone in the long run, in terms of quicker and more reliable builds.

Part 6: Tuning Your Capabilities

Appium sessions are not just Appium sessions. Or at least, they don't have to be. When you start a session, you can tweak the way Appium operates by means of session initialization parameters known as "desired capabilities" (in Selenium parlance; in the language of the W3C WebDriver spec they are simply "capabilities"). We all know about capabilities (which we all love to abbreviate as "caps") because we have to get dirty with them just to start a test successfully, using capabilities like platformName or deviceName to let Appium know what kind of thing we're interested in automating. Appium actually supports a host of other capabilities---over 100 of them, actually.

All these optional caps basically tell Appium to behave differently. There are caps to tweak default timeouts, to set the language on a device before the session, to adjust the orientation, or work with a host of system-specific issues like paths to important binaries or which ports Appium should use for its internal mechanisms. Some of these capabilities are useful from the perspective of speed or reliability, and can help stabilize tests, especially if you find yourself in the specific scenario that they were designed to address. So let's take a look!

Cross-platform Capabilities

noReset

By default, Appium runs a reset process after each session, to ensure that a device and app are as fresh and clean as when they first arrived. This of course is important for ensuring determinism in your test suite: you don't want the aftereffects of one test to change the operation of a subsequent test! If your tests don't require this kind of clean state, and you're absolutely sure of it, you can set this capability to true to avoid some of the time-consuming cleaning subroutines. If you don't need the safety, take a shortcut for the sake of speed!

fullReset

The opposite of the previous cap is fullReset, which performs even more cleanup than the default. Set this cap to true to add even more time-consuming cleaning subroutines. This might be necessary if the default cleanup is not enough to make your build reliable, and you know that the instability is due to leftover state from previous tests.

isHeadless

Both iOS and Android have the concept of 'headless' virtual devices, and Appium can still work with devices that are running in this mode. Set this cap to true to launch a simulator or emulator in headless mode, potentially leading to a performance boost, especially if you're running in a virtualized or resource-constrained environment (such as a VM farm). Of course, in headless mode any tooling you have around taking videos or screenshots outside of Appium will not work.
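For instance, a capability set using these cross-platform caps might look like this (the device and app values are placeholders):

```json
{
  "platformName": "Android",
  "deviceName": "Android Emulator",
  "app": "/path/to/app.apk",
  "noReset": true,
  "isHeadless": true
}
```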

Android-specific Capabilities

disableAndroidWatchers

The only way to check for toast messages on Android is for the Appium UiAutomator2 driver to run a loop constantly checking the state of the device. Running a loop like this takes up valuable CPU cycles and has been observed to make scrolling less consistent, for example. If you don't need the features that require the watcher loop (like toast verification), then set this cap to true to turn it off entirely and save your device some cycles.

autoGrantPermissions

Set to true to have Appium attempt to automatically determine your app permissions and grant them, for example to avoid system popups asking for permission later on in the test.

skipUnlock

Appium doesn't assume that your device's screen is unlocked, even though it needs to be to run tests successfully. So it installs and runs a little helper app that tries to unlock the screen before a test. Sometimes this works, and sometimes it doesn't. But that's beside the point: either way, it takes time! If you know your screen is unlocked, because you're managing screen state with something other than Appium, tell Appium not to bother with this little startup routine and save yourself a second or three, by setting this cap to true.

appWaitPackage and appWaitActivity

Android activities can be kind of funny. In many apps, the activity used to launch the app is not the same as the activity which is active when the user initially interacts with the application. Typically it's this latter activity you care about when you run an Appium test. You want to make sure that Appium doesn't consider the session started until this activity is active, regardless of what happened to the launch activity.

In this scenario, you need to tell Appium to wait for the correct activity, since the one it automatically retrieves from your app manifest will be the launch activity. You can use the appWaitPackage and appWaitActivity capabilities to tell Appium to consider a session started (and hence return control to your test code) only when the specified package and activity have become active. This can greatly help the stability of session start, because your test code can assume your app is on the activity it expects when the session starts.
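As a sketch (all package and activity names are hypothetical), the capabilities might look like:

```json
{
  "appPackage": "com.example.app",
  "appActivity": ".SplashActivity",
  "appWaitPackage": "com.example.app",
  "appWaitActivity": ".MainActivity"
}
```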

ignoreUnimportantViews

Android has two modes for expressing its layout hierarchy: normal and "compressed". The compressed layout hierarchy is a subset of the hierarchy that the OS itself sees, restricted to elements which the OS thinks are more relevant for users, for example elements with accessibility information set on them. Because compressed mode generates a smaller XML file, and perhaps for other Android-internal reasons, it's often faster to get the hierarchy in compressed mode. If you're running into page source queries taking a very long time, you might try setting this cap to true.

Note that the XML returned in the different modes is ... different. Which means that XPath queries that worked in one mode will likely not work in the other. Make sure you don't change this back and forth if you rely on XPath!

iOS-specific Capabilities

usePrebuiltWDA and derivedDataPath

Typically, Appium uses xcodebuild under the hood to both build WebDriverAgent and kick off the XCUITest process that powers the test session. If you have a prebuilt WebDriverAgent binary and would like to save some time on startup, set the usePrebuiltWDA cap to true. This cap could be used in conjunction with derivedDataPath, which is the path to the derived data folder where your WebDriverAgent binary is dumped by Xcode.

useJSONSource

For large applications, it can be faster for Appium to deal with the app hierarchy internally as JSON, rather than XML, and convert it to XML at the "edge", so to speak---in the Appium driver itself, rather than lower in the stack. Basically, give this a try if getting the iOS app source is taking forever.

iosInstallPause

Sometimes, large iOS applications can take a while to launch, but there's no way for Appium to automatically detect when an app is ready for use or not. If you have such an app, set this cap to the number of milliseconds you'd like Appium to wait after WebDriverAgent thinks the app is online, before Appium hands back control to your test script. It might help make session startup a bit more stable.

maxTypingFrequency

If you notice errors during typing, for example the wrong keys being pressed or visual oddities you notice while watching a test, try slowing the typing down. Set this cap to an integer and play around with the value until things work. Lower is slower, higher is faster! The default is 60.

realDeviceScreenshotter

Appium has its own methods for capturing screenshots from simulators and devices, but especially on real devices this can be slow and/or flaky. If you're a fan of the libimobiledevice suite and happen to have idevicescreenshot on your system, you can use this cap to let Appium know you'd prefer to retrieve the screenshot via a call to that binary instead of using its own internal methods. To make it happen, simply set this cap to the string "idevicescreenshot"!

simpleIsVisibleCheck

Element visibility checks in XCUITest are fraught with flakiness and complexity. By default, the visibility checks available don't always do a great job. Appium implemented another type of visibility check inside of WebDriverAgent that might be more reliable for your app; set this cap to true to opt into it, though it comes with the downside that the checks could take longer for some apps. As with many things in life, we sometimes have to make trade-offs between speed and reliability.

Well, I hope you found something in the list above that was intriguing, or that perhaps you didn't already know about. Don't forget to check the Appium docs periodically, as many features or workarounds are made available through the ever-growing set of available capabilities!

Part 7: Disabling Animations

The best mobile apps are designed not only to function well, but also to look good. Beyond the importance of a truly "usable" UI, eye candy sprinkled judiciously throughout your app has the potential to make it stand out and give that extra bit of delight to your human users. This eye candy, tasty though it might be for us humans, is completely wasted on Appium. Appium is a completely non-aesthetically-inclined robot, and doesn't care whether your app is skeuomorphic, materialed-out, or chock full of whatever weird animations the kids are into these days.

The last point is especially salient: animations take time. And unless you're directly trying to test those animations, the time spent waiting for them to complete is completely useless. Actually, it's worse than that! Animations can add instability to your testing by creating race conditions (the kind you have to work around using explicit waits or similar). Finally, animations take up precious CPU, which can also end up reducing the reliability of your device under test. What would be great is if we could simply do away with them entirely when Appium is running its tests, since they serve no useful purpose. And we can!

Disabling Default or System Animations

If you do some spelunking on the internet, you'll find lots of articles on disabling animations for Android devices. They usually involve a series of manual steps involving turning on developer mode, and tapping through a bunch of menus. This is fine if you don't mind manually interacting with your phone or emulator, but often it's better to do things programmatically, if possible. Luckily, we can make the magic happen with a set of adb commands:

adb shell settings put global window_animation_scale 0
adb shell settings put global transition_animation_scale 0
adb shell settings put global animator_duration_scale 0

These commands work on emulators and real devices, without requiring root. Basically, they walk through each of the system animation properties and set their values to 0, meaning we want to turn them off completely. (To reset the settings to the default, we can run the same commands with a value of 1 in each case).

If you run these commands, then interact with your device, you'll notice that there are no more visible transitions between apps, etc... Congratulations, you sped up your Android experience!

On iOS, we're not in quite as fortunate a situation. We can reduce system animations somewhat, but we can't turn them off completely, and we can't even reduce them programmatically---at least not very quickly. The way to "disable" animations on iOS is as follows:

  1. Open Settings
  2. Tap "General"
  3. Tap "Accessibility"
  4. Tap "Reduce Motion"
  5. Flip the switch to "On"

As you can tell by the name of the setting we're dealing with, we're only "reducing" the motion involved in certain transitions on the device. Since this all takes place in the Settings app, we can actually use Appium to take these steps by writing an Appium script to find and tap the appropriate elements. However, doing so would obviate any benefit of time saved due to minimized animations, unless you have a lot of tests and can do this only one time in the device setup routine before testing begins.

Disabling App-Specific or Custom Animations

Disabling system-wide animations is a good idea, but this doesn't always catch every type of animation. Apps can define their own animations, too. To deal with your app specifically, we can make use of a tip (once again supplied by long-time Appium Pro reader Wim) that involves custom app logic based on whether your app is being built for testing. This does mean you'll need the ability to commit code to the app itself, or ask the app developers to do this.

Here's how it works: most animation libraries and methods take a timing parameter, which specifies how long the animation is meant to take. What we want to end up with is a build of the app where each of those timings is specified to be zero. I won't show code examples for how to do this, because it depends on your app platform and framework (iOS, Android, React Native, etc...), but for example have a look at Apple's documentation for the UIViewPropertyAnimator. The init method takes a TimeInterval.

Basically, the call to a method like this should be wrapped in another function which checks to see if the app is in testing mode (via some internal flag or environment variable; it's up to you). If it is, the function will return zero for any time interval. Otherwise, it will return the developer-specified value for that animation. The net result will be that all your animations will take no time at all, effectively disabling them completely. Depending on your app framework, you might be able to come up with an even simpler and more elegant solution than this sketch.
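Here's a minimal sketch of such a wrapper (the names and the test-mode flag are illustrative; a real Android app might consult BuildConfig, and an iOS app a launch argument, instead of a plain field):

```java
// Sketch: route every animation duration through a helper that returns
// zero when the app is running in test mode, effectively disabling
// animations without touching each call site's developer-specified value.
public class AnimationTiming {
    // In a real app this flag would come from a build variant, launch
    // argument, or environment variable set only for test builds.
    static boolean testMode = false;

    // Returns the developer-specified duration normally, or 0 under test
    static long durationMs(long developerSpecifiedMs) {
        return testMode ? 0L : developerSpecifiedMs;
    }
}
```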

By disabling all possible animations using these two types of technique, you'll save a lot of time and add a bit of reliability back into your build. How much time you save depends on your app and how it uses animations, but every second you shave off your tests is a second shaved off your developer cycle as a whole, which brings benefits to everyone that relies on your build.

Part 8: Mocking External Services

Appium tests are usually written as end-to-end tests. In other words, they exercise the app from top (the UI) to bottom (backend services, which most apps employ). There is certainly value in this kind of testing. After all, your users are doing the same thing! Sometimes, though, what we care about with an Appium test is not whether the backend service works, but whether the App Under Test itself works, on the assumption that any data retrieved from an external service is valid.

The sad truth is that the Internet is not always very reliable. Requests made by an app to an external service might fail for a number of reasons, many of which would have nothing to do with the app itself. It could be a "stormy day" when it comes to "Internet weather", and requests could time out. Or the version of the backend service used for testing might not be able to handle the load of so many requests if you're running a lot of tests at once. The list goes on.

Luckily, it's possible to ensure quality for both the backend service and the mobile app in a much smarter way. The very fact that the Internet lies between these two components is the key: most backend API calls are just HTTP requests. This means that the backend service can be tested "end-to-end" by generating HTTP requests from something other than the app. This is called API Testing, and it's a much faster way to test your backend service than using Appium! Similarly, all the app UI tests care about is the data that comes over the wire from the backend service: it doesn't actually matter whether it's "real" data or not, as long as it's an HTTP response of the appropriate form, with content appropriate to the test. So, rather than having our app make calls to a real running API service, we could have it make calls to a "mock server" instead!

What are the benefits of using a mock server?

  1. Speed. Because the mock server can live on the same network as your device, API calls become blazingly fast.
  2. Reliability. The mock server is a very simple piece of code which just replies with canned API responses. There is very little risk of it failing. Because there is also no longer an actual Internet route in between the app and the API service it relies on, a huge source of unreliability is taken out of the picture. If the backend service itself is unreliable, the benefits increase (though that service should obviously be tested independently, so any unreliability can be addressed by the owners of the service).
  3. Flexibility. A mock service can even be used in the development of an app before or during the development of the backend service, allowing for greater flexibility and independence between teams.

Using a mock server does have some downsides, however:

  1. Maintenance. A mock server and its canned responses have to be maintained separately from the service itself.
  2. API Drift. Using a mock server raises the danger that what your app is being tested against is not the same as the API the backend service is actually using. The mock server and the backend service must be kept strictly in step with one another. A good practice is to have the owner of the service also provide a mock of the service for use in testing by consumers of the service.
  3. App and Test Complexity. You will have to build a way for your app to select whether it makes its service calls to the real service or to the mock service, based on whether the app is being used for testing or not. You will also have to augment your tests with information about what the mock server should do, and coordinate with the app to make sure it's connecting with the mock service.

While these downsides are real, the speed and reliability gains are definitely worth it for any sizable build. So much for an introduction! Let's see how we would go about implementing this kind of strategy. Here's what we need to get it all working:

  1. A mock server package of some kind (for example, the aptly named Mock Server for Java and JS).
  2. Changes to our app code to allow the app to connect to the mock server and not the real API server. This can be accomplished in a variety of ways, for example by having a separate build for testing which has a host and port hardcoded for the mock server, by passing startup arguments to the app to tell it where to look for its API endpoints, or by adding a new view to the app that allows setting of server details via text fields in the app UI itself.
  3. The ability to start and stop the mock server before and after our test suite, or before and after certain test classes (depending on our needs). Of course, it needs to be running on the same host and port we specified to our app.
  4. The ability for the app to talk to the mock server (this might mean ensuring the app and the mock server are on the same network).
  5. Appropriate "expectations" or "scenarios" we can set on the mock server. If we roll our own mock server, we can simply hard-code these responses in request handlers. If we're using a mock server package, it often comes with its own idiosyncratic way of specifying how the server should behave. Most often, we tell the mock server how to match against requests from the app, and how to respond based on the shape of the request.
  6. (Maybe) Depending on the mock server project we choose to incorporate into our build, it might also come with features that allow the same requests to generate different responses based on the "scenario". For example, we could have two different login scenarios that involve the exact same requests, one of which responds with a successful response and one of which responds with an error about e-mail formatting. In this case, we'll also need a way to uniquely identify requests from a particular device, so that requests from a device running a different test at the same time don't result in the wrong response. The best candidate for this unique id is the device id, which must be available from both the app code (so it can be sent in a header to the API server in all requests), and in the test code (so that the appropriate "scenario" can be selected in advance of any given test). If you want a good description of how to implement this for React Native apps specifically, check out the slides from Wim Selles's SeleniumConf India 2018 talk.
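As a minimal sketch of point 2 in the list above (all names and URLs are illustrative), the app could choose its API endpoint based on a value set only during test runs, falling back to production otherwise:

```java
// Sketch: select the API base URL at runtime. A real mobile app would
// read a build flag or launch argument rather than a JVM system property;
// the property is used here only to keep the example self-contained.
public class ApiConfig {
    static String baseUrl() {
        String override = System.getProperty("api.baseUrl");
        // Fall back to the (hypothetical) production endpoint
        return (override != null) ? override : "https://api.example.com";
    }
}
```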

Putting all of these pieces together, the actual test flow would look like this (in pseudocode):

Before:
   - Start mock server at certain port
   - Start Appium session with app (connected to server)

Test:
   - Set mock server expectations
   - Use Appium to drive app, which makes calls to mock server under hood
   - Verify app behaves properly given mock server responses

After:
   - Quit Appium session
   - Shut down mock server

The MockServer library I linked above has the ability to do all of these tasks with ease. The server can be run from the command line, as part of a Maven plugin, via a Java API, etc... We can set expectations in our Java code (or code in another language) using a client that comes with the library. What does setting expectations look like? Here's an example that shows how we might mock a login request:

// assumes the static imports from the MockServer client:
//   import static org.mockserver.model.HttpRequest.request;
//   import static org.mockserver.model.HttpResponse.response;
new MockServerClient("localhost", 1080)
   .when(
       request()
           .withMethod("POST")
           .withPath("/login")
           .withBody("{\"username\": \"foo\", \"password\": \"bar\"}")
   )
   .respond(
       response()
           .withBody("{\"status\": \"success\"}")
   );

Running this command at the beginning of a test (or test suite) directs the mock server to respond with a success JSON response to any login attempt with username foo and password bar. With this kind of setup, all we need to do is direct our Appium driver to enter these values into the login prompt of our app; the mock server will receive the resulting request and reply with the response we've configured here. As we mentioned earlier, it's important that both the request and the response match the form of the real API service; otherwise we'll either run into failures or test the wrong conditions.

One difficulty in running mock API servers is encountered when utilizing a cloud service. One of the main benefits of mock servers is speed, and putting the Internet in between the app (running in the cloud) and your mock server (running locally) works against this benefit. Still, because the mock server doesn't actually have to do any work, it will be faster than a fully-fledged API server, and is worth considering even in cloud testing environments. All in all, the benefits of using mock servers are pretty compelling, and help you to reduce one of the most common types of instability in your test suite: relying on remote or third-party dependencies in the form of external services.

Part 9: When Things Go Wrong

This part is not about specific speed and reliability techniques; instead, it's about what to do when none of the previously-discussed suggestions have been helpful. There are no doubt lots of other useful tips hiding in the wings, and they'll make their appearance here in due time. But even if there's not an obvious solution to a particular problem, it's very useful to be able to pinpoint the problem so that you can look for a reasonable workaround or notify the appropriate authorities (Appium, Apple, Google, etc...).

In order to pinpoint any kind of problem with an Appium test (whether it's a straight-up error, or undesired slowness or instability), it's necessary to understand something about the complexity of the technology stack that comes into play during test execution. Take a look at this diagram of (most of) the pieces of Appium's XCUITest stack:

You've got your test code, the Appium client code, the Appium server code, WebDriverAgent, XCUITest itself, and who knows how much proprietary Apple code shoved in between XCUITest and the various bits of iOS. A problem on any one of these layers will eventually manifest itself on the level of test behavior. It's not impossible to figure out what's going on, and there are typically lots of helpful clues lying about, but my point is that a little work is required to get to a useful answer about your problem. So, here is what I recommend doing to diagnose issues, in roughly this order:

Test Code Exceptions

The first investigation should always be into your own test code. Oftentimes an exception arises that has nothing to do with Appium whatsoever, and is a simple programming error in your test code itself. Always read your test code exceptions in detail! Since your test code imports the Appium client in your language, this is also where you will get the first indication of any other errors. Whenever possible, Appium puts useful information in the error messages it returns to the client, and these messages should end up in your IDE or console.

In the case of exceptions generated by the Appium client, the exception will often belong to a certain class (NoSuchElementException, for example), which will give you a clue as to what has gone wrong. In the case of a generic exception, look for the message itself to see if the Appium server has conveyed anything about what could be wrong. Often, system configuration errors will be noticed by the Appium server and expressed in this manner, and a quick look at the message may even give you specific instructions as to how to fix the issue.
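
One cheap way to make test-code exceptions easier to read is to wrap each logical step of the test so that failures carry the step name. This is a hypothetical helper, not part of any Appium client:

```java
import java.util.function.Supplier;

public class Steps {
    // Run a command and, if it throws, rethrow with the step name attached,
    // so the stacktrace points straight at the failing test step
    public static <T> T step(String name, Supplier<T> command) {
        try {
            return command.get();
        } catch (RuntimeException e) {
            throw new RuntimeException(
                    "Step '" + name + "' failed: " + e.getMessage(), e);
        }
    }
}
```

For example, `Steps.step("tap login", () -> driver.findElement(loginLocator))` (where `driver` and `loginLocator` are whatever your test already uses) turns a bare NoSuchElementException into one that names the step that triggered it.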

Appium Logs

The Appium server writes a ton of messages to the console (or to a file if that's how you've set it up). In the case of normal test execution, this information is mostly superfluous. It tells you all about what Appium is doing, and gives you a very detailed picture of the flow of HTTP requests/responses, commands Appium is running under the hood, and much more. Check out the Appium Pro article on how to read the Appium logs (by Appium maintainer Isaac Murchie) for a more detailed introduction to the logs. Here are some of the things I usually look at as a first quick pass of the logs:

  1. The session start loglines, to double-check that all the capabilities were sent in as I expected.
  2. The very end of the log, to see the last things Appium did (especially in the case of an Appium crash).
  3. Any stacktraces Appium has written to the logs (since these will often have information directly related to an error).
  4. Any log lines rated "warn" or "error".

If nothing pops out after following those steps, I read through the logs in more detail to try and match up my test steps with loglines, and then look for the area in the log matching the problematic area in my test.
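
When the log has been saved to a file, steps 3 and 4 of that first pass are easy to script. The substrings matched here are assumptions about the log format, so adjust them for your Appium version's output:

```java
import java.util.List;
import java.util.stream.Collectors;

public class LogFirstPass {
    // Keep only lines that look like warnings, errors, or stacktrace frames
    public static List<String> interestingLines(List<String> logLines) {
        return logLines.stream()
                .filter(line -> line.contains("[WARN")
                        || line.contains("[ERROR")
                        || line.contains("Error:")
                        || line.trim().startsWith("at "))
                .collect(Collectors.toList());
    }
}
```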

Log Timestamps

In the case of slowness issues, it's essential to see how long different commands take. It's possible to determine this on the client side by adding time measurements before and after individual commands. Often, however, this doesn't reveal anything useful. We already know that a certain command is slow---that's why we're having trouble! But using the Appium logs, we can dig deeper and try to isolate the slowness to a particular Appium subroutine, which can be much more useful.
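
Measuring on the client side can be as simple as a wrapper like this (a generic helper of my own devising, not part of any Appium client):

```java
public class Timed {
    // Run a command and return how long it took, in milliseconds
    public static long elapsedMs(Runnable command) {
        long start = System.nanoTime();
        command.run();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

For example, `long ms = Timed.elapsedMs(() -> driver.findElement(loginLocator));` (with `driver` and `loginLocator` standing in for whatever your test already uses) lets you log or even assert on the duration of a single command.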

By default, the Appium log does not show timestamps, in order to keep the log lines short and uncluttered. But if you start the Appium server with --log-timestamp, each line will be prepended with a timestamp. You can use these to good effect by isolating the section of the log that matches the problematic area in your test, and start looking for large jumps in time in the log. You might find a single line or a small set of lines which are taking a long time. If it is surprising to find such a lag with those particular lines, you can always reach out to the Appium issue tracker and share your findings with the maintainers to see if there's a potential bug causing the slowness.
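
Scanning a timestamped log for big jumps can also be scripted. The sketch below assumes each line starts with a `yyyy-MM-dd HH:mm:ss:SSS` prefix; check the first line of your own log and adjust the pattern if your --log-timestamp output differs:

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.List;

public class LogGapFinder {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss:SSS");

    // Return the biggest time jump between adjacent log lines, plus the
    // line that follows the jump (the likely slow operation)
    public static String biggestGap(List<String> lines) {
        LocalDateTime prev = null;
        String culprit = null;
        long biggestMs = -1;
        for (String line : lines) {
            LocalDateTime ts = LocalDateTime.parse(line.substring(0, 23), FMT);
            if (prev != null) {
                long gapMs = Duration.between(prev, ts).toMillis();
                if (gapMs > biggestMs) {
                    biggestMs = gapMs;
                    culprit = line;
                }
            }
            prev = ts;
        }
        return biggestMs + "ms before: " + culprit;
    }
}
```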

Device Logs

When it comes to problems whose root cause lies with your app, the mobile OS, or vendor-owned frameworks (like XCUITest or UiAutomator2), Appium is typically unable to provide any insight. This doesn't mean we're out of options, however. On both iOS and Android, apps can write to system-level logs, and often the system logs will also capture any exceptions thrown in your app (which might have led to a crash, say).

So, if the Appium logs don't provide any help in your investigation, the next place to go is to these logs. On Android, you can run adb logcat to access this log stream, and on iOS you can simply tail the system log (for iOS simulators) or use a tool like idevicesyslog for real devices. In fact, you can also set the showIOSLog capability to true, and Appium will scrape data from these logs and interleave it with the regular Appium logs, which often helps to put device problems into the context of the Appium session.
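
If you want those device logs captured programmatically alongside your test output, a small wrapper over the relevant CLI tool does the trick. It's shown here with a generic command; in practice you'd pass something like `adb logcat -d` or `idevicesyslog`:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class DeviceLogCapture {
    // Run a log command and collect its output line by line,
    // e.g. capture("adb", "logcat", "-d") on Android
    public static List<String> capture(String... command)
            throws IOException, InterruptedException {
        Process process = new ProcessBuilder(command)
                .redirectErrorStream(true)
                .start();
        List<String> lines = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        }
        process.waitFor();
        return lines;
    }
}
```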

Use a Different Appium Client

If you don't notice anything untoward in the Appium server logs, and suspect that the issue might lie with the Appium client you are using, you could always try to rewrite your test script (or the problematic portion of it) using a different client (either in the same language or a different language). If after doing this the problem goes away, you have pretty strong evidence that the problem was with the Appium client.

Happily, the Appium team is often able to fix bugs with the Appium client very quickly (as long as the client is officially maintained by the project). Sometimes, though, a bug might exist in the underlying Selenium WebDriver client library which the Appium client is built on, and then the bug will need to be reported to the Selenium project. The Appium team can assist in that process, too.

Use the Underlying Automation Engine Directly

Appium is built on top of other automation technologies, like XCUITest as we saw in the diagram above. In many cases, issues that appear to be related to Appium are actually issues with the underlying automation technology. Before deciding that the Appium code is at fault for errors or slowness and creating a bug in the Appium issue tracker, it can be useful to write up a small test case using the underlying engine, expressing the same test steps as your Appium script. If you notice the same kind of problem (slowness, errors, etc...), then it's unlikely Appium will be able to do anything to directly fix the issue, and it should instead be reported to Apple or Google (and great---now you have a perfect minimal repro case for them!).

It's still useful to ask around the Appium forums in such a case, because there are often workarounds to these kinds of problems. If the workaround is useful enough, we can also build it into Appium as a first-class citizen, making life easier for all Appium users (at least until Apple or Google fixes the original problem).

Ask for Help

If none of the strategies above have resulted in clarity on the problem or a satisfactory resolution, it's time to reach out for help. A good first step is always searching the Appium issue tracker, StackOverflow, or the Internet more generally for any sign of your problem. Maybe someone else has already discussed a workaround for it.

If that's not the case, it's a good idea to create an issue at the Appium issue tracker. At this point, if you've followed all the steps above, prepare to be showered with love and praise by the Appium maintainers, who will appreciate the extremely detailed report you will be able to include in your issue.

Hopefully the Appium maintainers will be able to provide a resolution, especially if it's determined that your issue reflects an Appium bug. If not, then I suppose you could always put on your crazy hacker hat and start decompiling some XCUITest source code! Yeah---that's no fun. Anyway, hopefully you'll find resolution somewhere else along the way, using the methods we've discussed here.

I hope you've enjoyed this series on test speed and reliability!