Car Rental Services

Driving A Global Digital Experience: An Overview of AVIS Global Testing Strategy Implementation

We established a comprehensive global testing strategy for Avis Budget Group's modernisation initiative, building automated test frameworks from the ground up to cover 97% of backend service endpoints, reducing reliance on manual testing and ensuring high-quality releases for their new Global Web platform.

Published on: September 24, 2025 · Last Updated: January 19, 2026 · 13 min read

PISR: Problem, Impact, Solution, Result

  • Problem: Avis Budget Group, a large enterprise in the vehicle rental services sector, is undergoing a major transformation across several key capabilities and work streams, from modernising legacy systems to the continued rollout of a new multi-brand mobile experience, a new global web initiative, a new global backend and so on. The primary focus of this transformation is to move off legacy mainframe systems and re-architect their technology landscape to meet the demands of a modern digital business.

  • Business Impact: Avis are embracing cutting-edge technologies, but with such a complex mix of legacy and modern systems, they have lagged significantly in embedding quality engineering capabilities. Because of a lack of robust test automation and CI/CD capabilities, they are left carrying a large and expensive maintenance burden.

  • Our Solution: Over 12 months, ClearRoute's team of 2 QCE Pods partnered with the client to re-engineer their quality delivery pipeline. We implemented test automation frameworks, introduced reporting and best practices, and established quality engineering gates within a unified pipeline.

  • Tangible Result: The transformation drove a test automation coverage increase of 100% in services where none was previously in place. Developer experience improved: test results are now visible directly in Bitbucket Pipelines rather than requiring manual navigation to Concourse. Through automation we raised around 70 defects across the services, over 86% of which were resolved within the duration of the engagement.

The Challenge

Business & Client Context

  • Primary Business Goal: Improve quality gates and feedback through test automation to reduce defect cost and increase product confidence.

  • Pressures: Heavy reliance on manual testing and the lack of an automated feedback loop reduced confidence in the product, with many defects and a few P1/P2 incidents being found.

  • Technology Maturity: Siloed teams with manual handoffs. Testing was largely left to a separate vendor, with little ownership from product teams over the quality of their services.

Current State Assessment: Key Pain Points

  • Challenge Area 1, Lack of Test Automation: Manual testing cycles took 2–3 weeks, leading to long developer wait times.
  • Challenge Area 2, Test Anti-Patterns: The test automation put in place by a third-party vendor relied on Postman (a paid tool, or its limited free tier) with tests run locally. This reduced visibility and made issues hard to trace.
  • Challenge Area 3, Communication: API teams followed a "blind leading the blind" approach, with APIs constantly changing despite clear guidance and documentation from the Architecture team. This led to significant rework.

Baseline Metrics (Where Available)

Metric Category | Baseline | Notes
Lead Time for Changes | 36 Days | Estimated from average time to resolve defects raised by ClearRoute
Deployment Frequency | Ad hoc, quarterly | Estimated from conversations with platform teams, stakeholders, and developers, many of whom did not know when their last release was

Solution Overview

Engagement Strategy & Phases

  • Phase 1: Discovery & Solutions: Investigated current quality engineering practices, gained understanding of the services and started laying the initial frameworks for test automation.
  • Phase 2: Battle Testing Solutions: Put in place quality gates around test automation in the CI/CD pipelines, expanded test automation coverage across the services, and further refined the automation frameworks and strategy.
  • Time to First Value: Delivered first version of the Test Strategy within 4 weeks during the discovery phase leading to high stakeholder confidence and buy-in.
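As a sketch of the kind of quality gate introduced in Phase 2, a pipeline step can parse the automated test results and fail the build when the pass rate drops below an agreed threshold. The threshold value and JUnit-style input below are illustrative assumptions, not the client's actual configuration:

```python
import xml.etree.ElementTree as ET

PASS_RATE_THRESHOLD = 0.95  # illustrative gate value, not the client's actual threshold

def pass_rate(junit_xml: str) -> float:
    """Compute the pass rate from a JUnit-style <testsuite> results document."""
    suite = ET.fromstring(junit_xml)
    total = int(suite.attrib["tests"])
    failed = int(suite.attrib.get("failures", 0)) + int(suite.attrib.get("errors", 0))
    return (total - failed) / total if total else 0.0

def gate_exit_code(junit_xml: str) -> int:
    """Return the CI exit code: 0 lets the pipeline proceed, 1 fails the build."""
    return 0 if pass_rate(junit_xml) >= PASS_RATE_THRESHOLD else 1
```

In a pipeline step this would run after the test stage, with the exit code blocking deployment on failure; the same shape works for coverage or defect-count gates.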

Background

A global force in the car rental services sector, Avis Budget Group (ABG) is undertaking a major transformation effort to modernise its legacy systems and consolidate its service offerings worldwide. The effort is underpinned by the launch of their Global Web initiative.

Multiple third-party vendors were recruited to deliver the initiative, covering areas such as product management, software development, and testing. However, ensuring quality throughout remained a significant challenge. Given the tight delivery timeline, it was imperative that quality practices were quickly put in place while modernisation was underway. This is why our team was brought in.

Route To Live Findings

As a strategic partner in testing, we were initially brought in to help devise a comprehensive Global Testing Strategy for the initiative in the discovery phase of the engagement. A pod of three (engagement lead, QCA, and QCE) was deployed to create Route to Live maps for 4 different teams within ABG, with the aim of collating technical findings for the Testing Strategy.

Through the deep-dive sessions, we identified two main anti-patterns adopted by the teams in question. The first was the “Ice Cream Cone” anti-pattern, essentially an inversion of the widely recognised test pyramid: the teams had been carrying out a disproportionate amount of End-to-End testing, with additional manual testing at the top and minimal to no lower-level testing. This anti-pattern leads to a much longer feedback loop, especially when enterprise-wide integrated environments are required for testing; the team has to wait for the environment to spin up or for other teams to complete their respective testing.

We also found that the teams had been following the “Cup Cake” pattern, where the same types of testing are repeated throughout the Route to Live: the teams would conduct their own testing, then regression testing in UAT, and then test again in Production.

These anti-patterns resulted in a number of production defects that subsequently had to be fixed by the KTLO (Keep The Lights On) team. The overwhelming maintenance need for each component and the over-reliance on E2E tests slowed the debugging process.
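The inverted distribution can be stated concretely: a healthy pyramid has test counts that shrink as you move up the layers, while the "Ice Cream Cone" has them growing. A minimal sketch, with layer names and counts invented purely for illustration:

```python
# Layers ordered bottom-up: a healthy pyramid has counts that shrink as we
# move up; the "Ice Cream Cone" anti-pattern has them growing instead.
LAYERS = ["unit", "component", "e2e", "manual"]

def shape(counts: dict) -> str:
    """Classify a test distribution as 'pyramid', 'ice cream cone', or 'mixed'."""
    ordered = [counts.get(layer, 0) for layer in LAYERS]
    if all(a >= b for a, b in zip(ordered, ordered[1:])):
        return "pyramid"
    if all(a <= b for a, b in zip(ordered, ordered[1:])):
        return "ice cream cone"
    return "mixed"
```

A target distribution such as `{"unit": 400, "component": 120, "e2e": 30, "manual": 5}` classifies as a pyramid; what we observed looked more like `{"unit": 0, "component": 10, "e2e": 150, "manual": 200}`, the ice cream cone.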

Other Examples of Anti-Patterns:

  • From examining the source code, it was apparent that there was no coherent branching strategy: long-lived branches were used for adding new tests instead of merging to a designated main branch.
  • Test account credentials were found in CSV files, and auth tokens had been pushed to the code repo, exposing them.
  • Test Data Management did not align with industry best practices; test data was stored in CSV and Excel files.
  • READMEs were not provided in repos to facilitate onboarding.

How did we help?

In light of our Route to Live findings, we devised a comprehensive Global Test Strategy solution. The strategy document includes:

  • A vision for continuous delivery of individual components, such as the quality gates to be implemented.
  • Functional Tests Architecture: composed of unit tests, component tests, consumer-driven contract tests, End-to-End user journey tests, and manual exploratory tests.
  • A list of non-functional requirements and the relevant tests.
  • Tool choice recommendations for each test type.
  • Quality metrics to be captured for test coverage, CI/CD, and defects.
  • An RTL diagram that shows the target state for the release process, detailing all the quality gates to be implemented.
  • A detailed list of essential and desirable interventions – organised by process, QE capabilities, platform capabilities, metrics, standards & skills.
  • A test strategy delivery timeline.

The overarching challenge faced by the client was dependence on manual testing in the release process. To mitigate this, the core milestone of the engagement’s phase 1 was for the team to build a wide range of test frameworks, covering features implemented up to the end of the first phase. The intention was to subsequently integrate the frameworks into CI/CD pipelines to streamline the release process whilst safeguarding the quality of releases. A test portal was also planned to bring observability to the newly built test frameworks.

With a specific focus on the Global Backend Programme, we targeted the following “Digital Experience” work streams and their respective service endpoints:

  • Availability Digital Experience
  • Booking Digital Experience
  • Rental Digital Experience
  • Customer Profile Orchestration
  • Customer Loyalty
  • Customer Wallet
  • Customer Identity

Through embedding the expanded ClearRoute team in the above work streams, we successfully established the following testing mechanisms:

  • End-to-End tests
  • Performance Tests
  • Component Tests
  • Provider Contract Tests
  • Pact Broker Consumer Contract Tests
  • ReportPortal

Throughout the engagement, we maintained close communication with stakeholders at Avis Budget Group, as well as members of the work streams, which are typically composed of third-party vendors. Through regular sprint updates, technical demos, and discussions, our team was able to demonstrate tangible contributions and impact to the stakeholders and the wider team, gaining their trust and further willingness to cooperate.

Key Challenges

Although the team successfully established effective test practices across the work streams, many challenges arose in the process.

  • Notably, API documentation was often incomplete, inaccurate, or subject to constant change without proper prior communication. This hampered the team’s progress in writing automated tests, as expected payloads and API responses often changed and caused unexpected failures, leading to frequent reworking of tests. Furthermore, development of the services was often released in a big-bang style.
  • Coordinating with third-party vendors was difficult at times. For example, the platform team – led by third-party vendors – was at times uncooperative, causing delays in setting up Pact Broker and ReportPortal. PR reviews by the work streams were also occasionally slow.
  • The onboarding process for several team members was lengthy, delayed, and poorly documented. This cost time in building test suites during the initial period of the engagement. Furthermore, access to key tools such as JFrog Artifactory was sometimes revoked or expired, reducing team capacity.

Our Impact

Through our strategic interventions as a testing partner to establish industry-leading testing practices within Avis Budget Group, the Global Backend Programme has taken a major step forward towards having a robust approach ensuring high-quality releases.

By the end of phase 1:

  • The team delivered integration tests covering 97% of dev-done endpoints across all required Digital Experience streams.
  • Likewise, performance tests reached a significant 97% coverage of dev-done endpoints.
  • Contract tests, at a much earlier stage of development, covered 10% of dev-done endpoints.
  • Component tests covered 62% of dev-done endpoints.
  • As of 11th July, a total of 300+ automated tests across all test types had been implemented for Digital Experience endpoints.
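The coverage figures above follow a simple definition: the share of dev-done endpoints exercised by at least one automated test of the given type. A minimal sketch, with endpoint names invented for illustration:

```python
def endpoint_coverage(dev_done: set, tested: set) -> float:
    """Percentage of dev-done endpoints covered by at least one automated test."""
    if not dev_done:
        return 0.0
    # Intersect so tests against not-yet-dev-done endpoints don't inflate the figure.
    return 100 * len(dev_done & tested) / len(dev_done)

# Illustrative example: four dev-done endpoints, three of them tested -> 75%.
dev_done = {"GET /availability", "POST /booking", "GET /rental", "GET /profile"}
tested = {"GET /availability", "POST /booking", "GET /rental"}
```

Tracking this per test type (integration, performance, contract, component) is what lets the figures above be compared like-for-like across work streams.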

By the end of phase 2:

  • The team delivered increased coverage across all testing types for dev-done endpoints.
  • Contract testing is now onboarded across the targeted digital service endpoints, with framework solutions further refined to resolve the complexity challenges of the services and technology.
  • Component testing is in place across all of the dev-done endpoints.
  • End-to-end tests have been further expanded across the UI, with another ClearRoute pod providing the required uplift.
  • Integration testing is in place across all of the dev-done endpoints.
  • A model answer has been established as a reference for what good automation testing looks like.
  • Clear documentation has been created for the recommended quality practices that have been put in place.


QCE Disciplines Applied

  • Primary Discipline, Quality Engineering: Added best practices and quality test automation frameworks along with a correct test distribution, enabling defects to be found earlier in the delivery cycle.
  • Secondary Discipline, Platform Engineering: Added CI/CD test automation with Bitbucket Pipelines, along with reporting boards such as ReportPortal, SonarQube integration, and Pact Broker. This provides a holistic view of a service's quality and can be an early indicator of defects or misalignments between teams.
  • Tertiary Discipline, Developer Experience: Developers can now easily see tests running and view the results within the same platform. They have also been given ways to test their applications without needing to provision a sandbox environment.
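The "no sandbox needed" point rests on component tests that stub downstream dependencies instead of calling real environments. A minimal sketch with Python's stdlib `unittest.mock`; the quoting service and its pricing client are invented for illustration, not ABG's actual code:

```python
from unittest.mock import Mock

# Illustrative service under test: computes a quote using a downstream
# pricing client that would normally live in a shared sandbox environment.
def rental_quote(pricing_client, vehicle_class: str, days: int) -> float:
    daily_rate = pricing_client.daily_rate(vehicle_class)
    return round(daily_rate * days, 2)

def test_rental_quote_without_sandbox():
    # Stub the downstream dependency instead of provisioning an environment.
    pricing_client = Mock()
    pricing_client.daily_rate.return_value = 42.5
    assert rental_quote(pricing_client, "compact", 3) == 127.5
    pricing_client.daily_rate.assert_called_once_with("compact")
```

Because everything runs in-process, the feedback loop is seconds rather than the hours or days it takes to provision and share an integrated environment.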

The Results: Measurable & Stakeholder-Centric Impact

Headline Success Metrics

Metric | Before Engagement | After Engagement | Improvement
Lead Time for Changes | unknown | 36 Days | Estimated from average time to resolve defects raised by ClearRoute
Deployment Frequency | unknown | Ad hoc, quarterly | Estimated from conversations with platform teams, stakeholders, and developers, many of whom did not know when their last release was
Digital Experience Services with Automation | 0 | 5 | Component testing and consumer-driven contract testing were implemented and added to Bitbucket Pipelines where none were previously in place; unit tests were enabled where they had previously been skipped
Total Test Coverage of services | 0 | 100% | Adding test automation across services allowed us to track coverage by the number of API endpoints covered. With correct test distribution we could confirm when and where endpoints are checked
Testing Capability for all targeted services | 0 | 95% | We greatly increased component, contract, integration, performance, and E2E testing capability, with some services and endpoints not developed or ready for test in time for the end of the engagement

Value Delivered by Stakeholder

  • For the CTO / CIO:
    • Provided a Test Strategy to standardise quality gates and set a direction that works for the product.
  • For the VP/Director of Engineering:
    • The Test Strategy provided a solid reference point and source of truth for quality engineering direction.
  • For the Developers and Stakeholders:
    • Standardized tooling and processes, making test automation easier to manage, scale, and use.


Lessons, Patterns & Future State

  • What Worked Well: Having escalation channels and frequent communication with stakeholders helped to keep everyone informed and manage expectations.

  • Challenges Overcome: The platform team was often a bottleneck, especially in the earlier phase, requiring significant stakeholder influence to resolve access requests, provisioning, knowledge sharing, etc. Other challenges lay with the service development teams themselves: frequent code merging with no quality gates and no communication between services often resulted in defects that could have been prevented.

  • Key Takeaway for Similar Engagements: Ensure the required permissions and authority are in place from the beginning to reduce pushback. Ensure full developer and product management buy-in so that quality gates can be put in place – first for monitoring, then for enforcement – within the first two sprint cycles.

  • Replicable Assets Created: A model answer of what good looks like, for new services to start from and for established services to reference when adding tests. Reusable test automation frameworks with minimal duplication to reduce maintenance.

  • Client's Future State / Next Steps: The client should continue to use, own, and adapt the quality frameworks and guidance put in place. ClearRoute should start the enablement cycle by getting developers and testers on board with the frameworks to get the most out of the work delivered. Product managers and teams should be informed of changes and impacts ahead of time so they can plan for initial disruption.