A simple step-by-step guide to A/B testing

A/B testing is a game-changing technique in digital marketing, and a direct way for testers to add value to the product.

A/B testing, also known as split testing or bucket testing, is an experimentation technique used to assess how users respond to a certain feature. It is usually conducted by exposing two or more variants of the same application to different user bases. It helps identify what users are interested in, so the product can be improved to better fit their needs.

Why do you need A/B testing?

Before going into that, let’s discuss how a product becomes more popular and usable. How do you find new opportunities in your business? How can you make it more profitable? The answer lies in the effectiveness of the product and its matchless marketing; using your marketing strategies in the right way is what sets a product apart.

Now, the question is, how do we identify the most usable version of a product? That’s where the concept of A/B testing comes in. It’s simple – you can’t wild-guess the expected response. You need enough real user data to understand your audience’s perspective. You run different experiments on different user bases to find out which is the best way to go. Then you can evaluate your conversion funnel and marketing campaigns with data that comes directly from your customers.
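
As a small illustration of what “different experiments on different user bases” looks like in practice, variants are often assigned deterministically, e.g. by hashing a user ID so the same visitor always lands in the same bucket. A minimal Python sketch (the variant names and experiment salt here are hypothetical):

```python
import hashlib

def assign_variant(user_id: str, variants=("A", "B"), salt="signup-flow-test"):
    """Deterministically bucket a user into a variant.

    Hashing (salt + user_id) keeps the assignment stable across visits,
    so every user consistently sees the same variant of the experiment.
    """
    digest = hashlib.md5(f"{salt}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-42"))  # always returns the same variant for this user
```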

Benefits of A/B testing

How do you plan an A/B test?

A/B testing can be carried out on both web & mobile applications. For a given feature, screen, webpage, or app, two or more variants are created, each with a different experimental change. It could be a design-based experiment, a functional experiment, or an experiment for optimizing clicks & conversions.

Steps in planning an A/B test

Today, we are going to look at some easy steps to get you started with split testing on the right foot, even if you can’t hire a professional to help you out. 

  1. Identifying areas of visitor activity
    First, we need to identify the areas of visitor activity that need improvement. For instance, suppose an e-commerce website analyzes its user traffic and finds that users visit the app and add items to their cart, but a major chunk of them leaves the site when asked to sign up. The registration flow, then, is what we need to improve in order to gain more customers and lower the drop-out rate.
  2. Create variants on the basis of the hypothesis
    Continuing with the registration-flow hypothesis, we work on two possible solutions: one flow registers users via email, while the other uses phone numbers. That would be an experiment on ways to register. Another experiment could target the input fields of the registration form – one form could require the user to enter personal information, while the other takes only credit/debit card or COD information. These are two different approaches to creating two different variants; there can be multiple variants, each totally different in nature and objective.
  3. Create user bases for variants
    Once the variants are created, the next step is to create different user bases. A user base is a group of users, generally categorized by similar interests or behavior. For instance, one group could be the users who opt for COD (cash on delivery), and another could be those who prefer paying by credit/debit card. You can also group users by demographics, such as age, region, or profession.
  4. Testing variants with funnels
    In order to get statistical data, we need to create funnels before testing user activity. These funnels act like checkpoints in your application that let you monitor traffic.
  5. Choosing the right variant
    Analysis is the key. After all the data has been gathered, we need to analyze the results to make the right variant choice. The data is systematically organized, plotted, and turned into metrics, and a conclusion is drawn; the decision goes to the winning variant (see the sketch after this list).
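
To make steps 4 and 5 concrete, here is a minimal Python sketch of counting funnel checkpoints per variant and applying a two-proportion z-test to pick a winner. The counts are made-up illustration data, not results from a real experiment:

```python
import math

# Hypothetical funnel counts: users who reached each checkpoint, per variant.
funnel = {
    "A": {"visited": 1000, "added_to_cart": 430, "signed_up": 120},
    "B": {"visited": 1000, "added_to_cart": 445, "signed_up": 168},
}

for variant, steps in funnel.items():
    rate = steps["signed_up"] / steps["visited"]
    print(f"Variant {variant}: sign-up rate {rate:.1%}")

# Two-proportion z-test on the final (sign-up) conversion rates.
n_a, c_a = funnel["A"]["visited"], funnel["A"]["signed_up"]
n_b, c_b = funnel["B"]["visited"], funnel["B"]["signed_up"]
p_pool = (c_a + c_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (c_b / n_b - c_a / n_a) / se
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

If |z| clears the significance threshold, variant B’s higher sign-up rate is unlikely to be random noise, and it wins the experiment.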

Boost your bottom line through A/B testing

Accurate and well-planned A/B tests can greatly improve your bottom line. Controlled experiments and the resulting data can help you figure out exactly which marketing strategies work best for your business. When you continually see that one variant performs two, three, or even four times better than the other, you’ll want to run A/B tests before every ad campaign and promotion. Testing helps you decide what works best and what doesn’t, so you can optimize your products, applications, and content accordingly.

When you know what works well, decisions become easier, and you can craft more meaningful and impactful marketing collateral from the beginning. The key is to keep testing regularly so you stay up to date with what is effective for you. Remember, since the technology industry changes rapidly, the trends change as well. What works for you in one month may not work as well in the next. The effectiveness of a test changes over time, so it’s important to continue testing regularly.

Here are some tips to help make your A/B tests more effective and impactful:

  1. Test the right elements
    Designs & layouts, headlines & copy, forms, CTAs (calls to action), images, audio & video, subject lines, product descriptions, social proof, email marketing, media mentions, landing pages, navigation, etc.
  2. Achieve milestones
    Improved conversion rates, more user traffic, higher numbers of views or subscriptions, more downloads, improved sales, improved time on page, etc.
  3. Carry out multiple iterations
    A/B testing is not a one-time activity. Experimenting with a variant takes multiple iterations; over those iterations, the metrics will eventually converge on a clear winner or show that the variants make no real difference.
  4. Select the right tool
    Identify the tool that best fits your experiment. Different testing tools provide different kinds of testing expertise; you just have to choose the one that matches your requirements.

Which tools are the best for A/B testing?

It is essential to select a tool with features that suit your experiment’s requirements. There are many tools available on the internet, both free and paid. Some of the more popular ones are:

  • Optimizely
  • VWO
  • Convert Experiences
  • SiteSpect
  • AB Tasty
  • Sentient Ascend
  • Google Optimize
  • Apptimize (for mobile apps)

Let me give you a quick walk-through of one of these tools, Google Optimize (free), to help you better understand how to set it up to gather meaningful data and drive results that can boost your business.

How to set up Google Optimize for A/B testing?

  1. To start, you need to set up an account on Google Optimize. After you log in with your existing Google account (or create a new one), it will ask you to add a browser extension. This will set up a container.

  2. Start by creating an experiment. I have chosen a sample site, and my hypothesis is that the main page heading should be something catchy in order to get more conversions.

Creating an experiment in Google Optimize

  3. After adding the target URL, I’ll proceed to create variants. I made one variant where I changed the title and the text of a button, and here we go:

  4. Select the target audience. Google Optimize offers various options for this purpose:

  5. Google Optimize uses Google Analytics to gather the user data. After you link your changes to Analytics, the next step is to deploy your code:

  6. Setting the objective to ‘Improve conversion rate’, we’ll proceed to the remaining settings.

  7. After the experiment has run for a certain time span with the targeted user base, it’s time to find out the winner. The Reporting tab will help you view the results:

NOTE: The above demonstration is just a quick overview; Google Optimize is itself a very thorough tool to work with. To explore it further, here is a link to a short web series on the tool, which is really helpful.

Mistakes to avoid when performing an A/B test

  1. Use original and valid hypotheses & statistics
    Setting an invalid hypothesis, or deriving your app’s hypothesis from someone else’s statistics, weakens your experimental ground. The results might not be that helpful, wasting the time & effort spent carrying the test all the way through.
  2. Tackle one pain point at a time
    Avoid testing too many elements together in a single test; it can affect the accuracy of the results. Break the elements into smaller groups & pick one at a time.
  3. Consider internal & external factors
    Using unbalanced traffic, picking an incorrect duration, or ignoring external factors can cause your experiment to fail. These factors play a pivotal role in this activity and need to be taken care of.
  4. Use the right tool
    Not using the right tool is a risk too. Tools are customised for various purposes; proper research and selecting the best fit is what makes testing effective.

Wrapping up

Today, many industries use A/B testing as a powerful tool to increase audience viewership, subscriptions, and readership. The most talked about include Netflix, Amazon, Discovery, Booking.com, WallMonkeys, Electronic Arts (SimCity 5), Careem, etc. Numerous other companies across domains use the A/B testing technique to get the best possible results.

If you’re looking to get started with A/B testing, you can set up a call with us, and our experts will guide you on outsourcing A/B testing so you can reap the results without investing your own time or effort.

Pumped up to get started with your first A/B test? Awesome!

Don’t forget to share this blog and help spread the awesomeness!

How we did it: insight into our QA process for Pakistan telehealth initiative

After weeks of hard work, countless meetings, and a successful project delivery, we’ve decided to pull back the curtain on exactly what it takes to assure the quality of a high-availability healthcare app.

Back in April 2020, when the first wave of the novel coronavirus was at its peak, VentureDive reached out to the Government of Pakistan to help the country combat the virus through technology. As part of the ‘Digital Pakistan’ initiative, spearheaded by Tania Aidrus, an ex-Google executive, we collaborated with her team to build and launch the COVID-19 telehealth portal: a website designed specifically to help fight the pandemic. It allows Pakistani doctors and other healthcare professionals to register and volunteer to remotely help patients who may have COVID-19 symptoms.

“VentureDive team, I cannot thank you enough on behalf of the entire team for leaning in to help! There has been SO much interest in the platform and what’s interesting is how much interest we are seeing in other verticals. For example, today we had a call with the Law Ministry who are very eager to do something similar to sign up volunteer lawyers to provide free guidance to victims of domestic abuse. I hope this is just the beginning of our working relationship – excited to have started off on doing something that I hope can help thousands of Pakistanis during this time.” — Chief Digital Officer, Digital Pakistan

Healthcare is a very sensitive subject, and this was technology to be used by millions of people across Pakistan. It therefore demanded the highest quality, with zero downtime, zero bugs, and intuitive user journeys. In this blog, we’ve highlighted our experience of testing a portal that healthcare professionals would use to reach out to patients through our technology.

What follows is the story of the challenges we faced during the three-week project, and how we resolved them to successfully deliver a web and a mobile application.

The functional, security & scalability challenges of testing the telehealth portal

Before the project kicked off, the quality assurance team at VentureDive gathered the application requirements and shared them with the experts dedicated to the project. Thorough documentation and sample mockups helped the QA team begin working on the test plan, test design, and test cases during the development phase. We held daily stand-ups so the development & testing teams could stay in sync and brainstorm how to maneuver through the project smoothly and on time. We faced six major challenges along the way:

Time management

The main challenge was racing against time to meet the client’s expectations while ensuring the security of sensitive healthcare data and zero glitches within the app. This meant the QA team had to keep track of every requirement and maintain reporting templates for testing updates, which helped the development team fix defects and bugs in time, before each milestone was delivered to the client.

Tools used by the QA team to effectively deliver each milestone

3rd party integrations

The application had to be integrated with third-party software, such as a WhatsApp chatbot for doctor-patient communication, and with telecom operators to enable an anonymous calling mechanism and SMS OTPs. These integrations were essential for fetching data from official sources and keeping both our applications in sync with the overall system we were creating.
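
Because these integrations can’t be exercised against live operators on every test run, such calls are typically stubbed out so the suite stays fast and deterministic. As an illustration only (the endpoint and payload below are hypothetical, not the actual operator API), here is how an OTP call might be mocked in Python with the `responses` library:

```python
import requests
import responses  # pip install responses

OTP_URL = "https://api.example-operator.com/v1/otp/send"  # hypothetical endpoint

@responses.activate
def test_otp_is_requested_on_signup():
    # Stub the telecom operator's OTP endpoint so no real SMS is sent.
    responses.add(responses.POST, OTP_URL, json={"status": "sent"}, status=200)

    resp = requests.post(OTP_URL, json={"phone": "+920000000000"})

    assert resp.status_code == 200
    assert resp.json()["status"] == "sent"
    assert len(responses.calls) == 1  # exactly one OTP request was made

test_otp_is_requested_on_signup()
```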

Mobile responsiveness

The telehealth portal was to be a hybrid mobile application, which meant the testing team had to test it across various mobile devices and operating systems to make sure it was responsive and compatible.

Security

Cyber attacks and threats are a real-world problem today, with thousands of networks and websites being compromised each day. To help identify, classify, and address security risks, we performed vulnerability assessment and penetration testing, including server VA, API penetration testing, and web application penetration testing, to identify possible routes an attacker could use to break into the system.

System performance

Monitoring the performance of the application was an integral part of building the portal, since we anticipated a large number of users, including doctors and patients. The QA team planned to automate scripts in JMeter to determine how the system performs in terms of responsiveness and stability under heavy load and large volumes of data.

Standards

A big challenge for the QA team was keeping the testing practices as standardized as possible, even with little time to spare. VentureDive treats the quality of its deliverables as the utmost priority, regardless of the length, complexity, or intent of the project.

Adopting a smart testing strategy for successful project delivery

Alpha testing was done remotely. The QA team collaborated and focused all their efforts on detecting any major defects in data security. We carried out usability, performance, and security testing for the private and sensitive information handled in a healthcare setup.

It was also pertinent for the QA team to analyze business criticality, plan testing efforts within the minimum time, make the application usable for thousands of users, and ensure that testing complied with Open Web Application Security Project (OWASP) standards.

The test strategy called for having separate environments for development, staging, and production. We performed the following steps in the given order:

Functional testing

Keeping in mind the criticality & nature of health-related projects, the requirements had to be precise and their validation had to be flawless. We performed static analysis of the requirements, followed by actual test execution, to meet the requirements and the client’s expectations. Data-flow integrity and business rules were repeatedly tested via automated suites in our regression cycles (a small data-validation sketch follows the tool list below).

Tools:

  • Postman for API automation & integration testing
  • TestRail for test cases and test cycle reporting
  • PostgreSQL for data validations
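
As a small illustration of the data validations mentioned above, a typical check compares what an API returns against what is actually stored in PostgreSQL. The actual checks ran through Postman and SQL; the endpoint, table, and column names below are assumptions, so treat this purely as a sketch:

```python
import psycopg2  # pip install psycopg2-binary
import requests

# Hypothetical endpoint and schema, for illustration only.
api_doctor = requests.get("https://portal.example.com/api/doctors/42").json()

conn = psycopg2.connect(dbname="telehealth", user="qa",
                        password="***", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute("SELECT full_name, status FROM doctors WHERE id = %s", (42,))
    db_name, db_status = cur.fetchone()

# The API response must match the source of truth in the database.
assert api_doctor["full_name"] == db_name
assert api_doctor["status"] == db_status
```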

Cross browser testing

Browser compatibility testing mainly focused on Google Chrome version 80+ on Windows. Extended smoke and regression cycles were performed on Firefox and Safari, for Windows and Mac respectively (see the sketch below the configuration figure).

Tools:

  • crossbrowsertesting.com
  • Browserstack.com

Cross-browser testing configuration
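
Cloud grids like BrowserStack expose a standard Selenium endpoint, so one script can be pointed at many browser/OS combinations. A minimal sketch in the classic Selenium 3 style (the credentials, portal URL, and page title are placeholders):

```python
from selenium import webdriver

USERNAME, ACCESS_KEY = "your_username", "your_access_key"  # placeholders

caps = {
    "browser": "Chrome",
    "browser_version": "80.0",
    "os": "Windows",
    "os_version": "10",
    "name": "Telehealth portal smoke test",
}

driver = webdriver.Remote(
    command_executor=f"https://{USERNAME}:{ACCESS_KEY}@hub-cloud.browserstack.com/wd/hub",
    desired_capabilities=caps,
)
try:
    driver.get("https://portal.example.com")  # hypothetical URL
    assert "Telehealth" in driver.title       # assumed page title
finally:
    driver.quit()
```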

Responsiveness

Understanding the market trends of portable devices, we analyzed data on the target audience and performed UI/UX testing on mobile & other portable devices. The application was tested on six different Android and iOS devices with different screen sizes, resolutions, and OS versions.

Tools and devices used to check responsiveness
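
Such device checks can also be scripted. As an illustration only (we’re not claiming this was the exact tooling used; the app path and element ID are hypothetical), here is a minimal Appium sketch that verifies a key element renders on a given device:

```python
from appium import webdriver  # pip install Appium-Python-Client

# Hypothetical capabilities for one device in the matrix.
caps = {
    "platformName": "Android",
    "platformVersion": "10",
    "deviceName": "Pixel 3",
    "app": "/path/to/telehealth.apk",  # placeholder path
}

driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps)
try:
    # Check that a key control is actually visible at this screen size.
    button = driver.find_element_by_accessibility_id("sign_up")  # assumed ID
    assert button.is_displayed()
finally:
    driver.quit()
```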

Security testing

Thorough security testing was performed at the infrastructure, API, and application levels, keeping the OWASP Top 10 in mind (a scripted example follows the tool list below).

Tools:

  • Burp Suite
  • ZAP
  • Kali Linux operating system
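
ZAP in particular can be driven programmatically through its REST API via the official `python-owasp-zap-v2.4` client. A minimal sketch that spiders a target and then runs an active scan (the target URL and API key are placeholders, and a local ZAP daemon on port 8080 is assumed):

```python
import time
from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

target = "https://portal.example.com"  # placeholder target URL
zap = ZAPv2(apikey="your-api-key",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

# Crawl the site first, then actively scan what the spider found.
scan_id = zap.spider.scan(target)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(target)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], "-", alert["alert"])
```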

The QA team identified the following vulnerabilities during the security testing activity:

  • Broken access control
  • Broken session management
  • Disclosure of internal directories
  • Unrestricted file upload
  • Missing server validation
  • Sensitive data exposure
  • Susceptibility to brute force
  • No rate limiting
  • Missing WAF & ACL implementation

Our goal was to immediately address these issues and recommend further best practices that should be followed as pre-emptive measures against any potential cyber-attacks.

Performance testing

The system under test was required to have a load-balanced infrastructure supporting thousands of interactions between patients and doctors. The flows included the sign-up process (with image uploads), populating and fetching patient data lists, and one-to-one assignment of users.

Tools:

  • JMeter
  • BlazeMeter

Configurations for performance testing
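
The actual load scripts were built in JMeter, but the shape of such a test is easy to show in code. As an illustration, here is an equivalent sketch using Locust, a Python load-testing tool named here as a stand-in (the endpoint paths are hypothetical):

```python
from locust import HttpUser, between, task  # pip install locust

class DoctorUser(HttpUser):
    """Simulates a doctor signing up and then polling patient lists."""
    wait_time = between(1, 3)  # think time between requests, in seconds

    @task(1)
    def sign_up(self):
        # Hypothetical sign-up endpoint and payload.
        self.client.post("/api/signup", json={"phone": "+920000000000"})

    @task(3)
    def fetch_patients(self):
        self.client.get("/api/patients")  # hypothetical listing endpoint

# Run with: locust -f loadtest.py --host https://portal.example.com
```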

We analyzed all the results and generated an extensive report using SmartMeter, which was later shared with the stakeholders. The primary issues identified were load balancing, CPU utilization, and WAF configuration. These were addressed, and recommended configurations were provided for resolution.

Project delivery

A standard process was put in place to validate the requirements and meet the client’s expectations. After complete and thorough testing, we demonstrated and delivered the project to the client successfully.

Wrap up

Working on the telehealth portal as part of the ‘Digital Pakistan’ initiative was a short, knowledge-packed, and completely amazing journey that helped us learn and implement advanced quality assurance methodologies for a secure application. We adopted agile software quality practices to align software quality with product requirements and accelerate the software lifecycle. In addition, the continuous feedback we received from the project managers helped minimize retesting for verification and validation. Our iterative approach and short sprints enabled us to deliver quality products within a set deadline, successfully.

Thanks, team for all the amazing support. As I mentioned on slack, thanks to your hard work, we have 3000 doctors signed up and 1000 who submitted their documents. I had an amazing experience working with you all and truly admire your work ethic and efficiency. We couldn’t have done it without you. We will keep you updated on the stats and the launch event! — Project Coordinator, Digital Pakistan

Here’s where having a delivery model that is process-driven and designed around best-in-class software technologies helped us greatly. It enabled robust scalability while maintaining cost-efficiency within strict quality control measures.

Thanks for stepping up to swiftly contribute towards our nation in these challenging times. It’s been a privilege to watch such a well-oiled team in action. — CEO, Digital Pakistan
