Special thanks to William Maio, Abhishek Dankar, Rathna Govi and Sahil Kapur for contributing to this report.
In a typical week, our global teams at HeadSpin use just about every video conferencing tool available, across both web and app, depending on which ones our customers are most comfortable with. There is no doubt that dependency on these tools has grown lately, as everyone around the world tries to figure out the new normal: #WFH, #StudyFromHome, #WorkoutFromHome, or just #StayHome.
These platforms, like many others, are racing to launch new capabilities to meet the rising needs of their consumers. Many were not built to handle such massive demand, arriving in so short a time, from businesses of all shapes and sizes. Moreover, the actual experiences vary widely from one platform to another.
HeadSpin’s AV solution enables testing by capturing the user experience on real media devices, including the actual screen and audio output.
Here is what real users have shared over the last couple of months about their experiences with video conferencing apps. Many highlight concerns with core functionality:
“Terrible audio an video”
I use this app to do online dance classes, now that we are all quarantined, but I can’t make out what the teacher is saying or doing. The audio only works off and on and the video is always blurry.
“Can’t join the meeting from my phone. The app spins for 1-s then nothing happen”
“Last update is a fail.”
It usually cannot start, just displaying a big logo. You cannot switch back to the app screen without killing it. In video conference, you cannot see shared screen. In such time, Microsoft should really check the quality of its apps!
We’ve compiled a list of key performance indicators from a typical user journey that video conferencing tools can monitor to improve their user experience:
Key Performance Indicators
Join to First Frame
No one likes waiting to join a call, especially when they are already running late! Be sure to measure the time taken for a device to start a meeting, for both devices to join the meeting, and for screen share to be received.
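One way to capture this timing in an automated run is to record timestamps around the join action in an Appium script. The sketch below is illustrative only: the package name, element locators, and the use of the remote video view becoming visible as a stand-in for "first frame" are all assumptions, not HeadSpin's own instrumentation.

```python
import time

from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy
from selenium.webdriver.support.ui import WebDriverWait

# Hypothetical capabilities; real values depend on the device and app under test.
options = UiAutomator2Options()
options.app_package = "com.example.videoconf"   # assumed package name
options.app_activity = ".MainActivity"          # assumed launch activity

driver = webdriver.Remote("http://localhost:4723", options=options)
try:
    # Tap the join button and note the timestamp.
    join_button = driver.find_element(AppiumBy.ACCESSIBILITY_ID, "Join Meeting")  # assumed locator
    t_join = time.monotonic()
    join_button.click()

    # Wait until the remote participant's video view is displayed and treat
    # that as a rough proxy for "first frame received".
    WebDriverWait(driver, 30).until(
        lambda d: d.find_element(
            AppiumBy.ID, "com.example.videoconf:id/remote_video"  # assumed view id
        ).is_displayed()
    )
    t_first_frame = time.monotonic()
    print(f"Join to first frame: {t_first_frame - t_join:.2f} s")
finally:
    driver.quit()
```

In practice, frame-level video analysis gives a more precise first-frame timestamp than UI visibility alone, but a timing script like this is a useful first pass.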
Energy Impact
Most video calls include some form of screen sharing, so be sure to measure battery drain while one device shares its screen. A 6% battery drain for a 25-minute session is poor.
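Outside of a dedicated platform, a quick way to sanity-check battery drain on Android is to sample the reported battery level over adb before and after the session. This is a minimal sketch, assuming a single device connected over adb; it reads only the coarse battery percentage, not detailed energy counters.

```python
import re
import subprocess
import time

def battery_level() -> int:
    """Read the reported battery percentage via adb (Android only)."""
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "battery"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"level:\s*(\d+)", out)
    if match is None:
        raise RuntimeError("Could not parse battery level from dumpsys output")
    return int(match.group(1))

SESSION_MINUTES = 25  # matches the 25-minute session discussed above

start = battery_level()
time.sleep(SESSION_MINUTES * 60)   # run the screen-sharing session during this window
end = battery_level()

drain = start - end
print(f"Battery drain over {SESSION_MINUTES} min: {drain}%")
if drain >= 6:
    print("Warning: drain at or above the 6% threshold considered poor")
```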
MOS Score — Video Quality
The MOS or Mean Opinion Score is a holistic subjective quality score that represents the perceptual quality of a video as perceived by an end user. HeadSpin’s mean opinion score ranges from 1 to 5, with 1 being very poor and 5 being excellent. The HeadSpin AI engine can measure how streaming video quality evolves over the course of a test and flag regions of poor user experience without having any reference to the source video content.
Without AI, a team would either have to show a video to a pool of users and aggregate their feedback or curate high quality reference videos for each video to be evaluated. Not only are full reference video quality metrics expensive and difficult to maintain, but many rich media applications, such as live video and game streaming, have no reference to compare to. You can learn more about our reference-free MOS here.
Because the MOS is computed on a 1-to-5 scale for each frame of video, teams should analyze at least 30 seconds of video on multiple devices while one of them shares its screen.
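Once per-frame MOS values are available, a simple way to act on them is to flag contiguous stretches where the score stays below a threshold. The sketch below assumes the scores have already been exported as (timestamp, score) pairs; the threshold of 3.0 and the minimum region length are illustrative choices, not fixed recommendations.

```python
from typing import List, Tuple

def poor_quality_regions(
    frame_scores: List[Tuple[float, float]],  # (timestamp_seconds, mos_score) per frame
    threshold: float = 3.0,                   # illustrative cutoff between "fair" and "poor"
    min_duration: float = 1.0,                # ignore dips shorter than this many seconds
) -> List[Tuple[float, float]]:
    """Return (start, end) time ranges where the MOS stays below the threshold."""
    regions = []
    region_start = None
    last_ts = None
    for ts, score in frame_scores:
        if score < threshold:
            if region_start is None:
                region_start = ts
        else:
            if region_start is not None and ts - region_start >= min_duration:
                regions.append((region_start, ts))
            region_start = None
        last_ts = ts
    if region_start is not None and last_ts is not None and last_ts - region_start >= min_duration:
        regions.append((region_start, last_ts))
    return regions

# Example: 30 seconds of per-frame scores, sampled at 1 fps for brevity.
scores = [(t, 4.2 if t < 10 or t > 18 else 2.1) for t in range(31)]
print(poor_quality_regions(scores))  # -> [(10, 19)] with these illustrative values
```

Flagged regions can then be correlated with the network traffic and client-side metrics captured during the same run.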
Bytes In When Receiving Screen Share
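This KPI tracks the volume of incoming network traffic while a device is receiving a screen share. As a rough device-level illustration (not the platform's own capture), the received-byte counters in Android's /proc/net/dev can be sampled over adb before and after the share; the interface name is an assumption and may differ per device, and this counts all traffic on the interface, not just the conferencing app.

```python
import subprocess

def total_rx_bytes(interface: str = "wlan0") -> int:
    """Sum of received bytes for one network interface, read from /proc/net/dev over adb."""
    out = subprocess.run(
        ["adb", "shell", "cat", "/proc/net/dev"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.strip().startswith(interface + ":"):
            # Format: "<iface>: rx_bytes rx_packets ..."; the first field is received bytes.
            return int(line.split(":")[1].split()[0])
    raise RuntimeError(f"Interface {interface!r} not found in /proc/net/dev")

before = total_rx_bytes()
input("Start receiving the screen share, then press Enter when it ends...")
after = total_rx_bytes()
print(f"Bytes in while receiving screen share: {after - before}")
```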
How HeadSpin Can Help
The metrics above are just a few of the key performance indicators (KPIs) we looked at. There are other comprehensive video and audio streaming KPIs these apps care about, along with variables such as device model, OS version, network conditions, location, and server-side load testing. The HeadSpin platform can automatically diagnose server-side issues that arise due to infrastructure deployment, poor performance, or API errors. These tests can run 24/7, with thousands of daily runs on real devices around the world, to ensure your users continue to experience uninterrupted video conferencing.
No SDK is needed to run performance sessions. HeadSpin’s AI engine sifts through the network traffic, client-side metrics, and videos of the test execution to find areas of poor user experience and performance bottlenecks. On the HeadSpin platform, recommendations are provided for every issue that surfaces. You can collect performance data through the HeadSpin Remote Control or run automation tests.
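For teams that already have Appium tests, pointing them at a remote real device is usually just a matter of changing the server URL and credentials. The endpoint below is a placeholder for illustration; the actual connection details come from the platform's documentation.

```python
from appium import webdriver
from appium.options.android import UiAutomator2Options

# Placeholder endpoint; substitute the URL and credentials from your device cloud's docs.
REMOTE_APPIUM_ENDPOINT = "https://device-cloud.example.invalid/wd/hub"  # hypothetical URL

options = UiAutomator2Options()
options.app_package = "com.example.videoconf"   # assumed package name
options.app_activity = ".MainActivity"          # assumed launch activity

driver = webdriver.Remote(REMOTE_APPIUM_ENDPOINT, options=options)
try:
    # The same join-to-first-frame measurement from the earlier sketch can run here,
    # now against a remote real device instead of a local one.
    pass
finally:
    driver.quit()
```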
Reach out to us if you are interested in discussing this further with our team!
FAQs
1. What can be the reason for a low mean opinion score in video and voice calls?
Ans: Everything from a user’s health to the quality of their audio and video equipment and computer settings can degrade the quality of communication. However, network effects are the most readily apparent and measurable on these calls, and they directly impact perceived call quality.
2. Can the HeadSpin AV Platform test the quality of a video conferencing app?
Ans: Yes. With the help of the AV Box, testers can access the video conferencing app on a particular device remotely from anywhere in the world. Also, testers can perform functional and non-functional tests of the app and get insights from the HeadSpin AV Platform directly.
3. Why do we need to automate the audio and video testing?
Ans: The top reasons to automate the audio and video testing process are:
- To minimize human involvement
- To perform comprehensive regression testing
- To increase test coverage
- To conduct extensive performance testing
4. What are the major use cases of the HeadSpin AV Platform?
Ans: Media and entertainment apps, video conferencing, live streaming, camera-enabled apps, voice assistants, games, video calling, voice-activated apps, accessibility testing, video quality, audio match analysis, and smart speaker and OTT/set-top box testing.