Chris Bramley

Q&A

This interview is based on an interactive conversation between me and ChatGPT (GPT-5), which I hoped would bring my experience to life more than a CV could. It was quite an interesting exercise, and I think it really helped me reflect on the people and connections I've made in my career. It was conducted using the following prompt, if you'd like to try it (I also attached a CV):

I want to create an interactive Q&A. You will ask the questions, and then I will answer. We can start with a generic intro ('What first drew you to software testing and quality engineering?') but then I'd like subsequent questions to be based on my answers where relevant – seeking clarification, digging deeper into responses, and so on.

You can also reference my CV and experience for creating questions. We will have 10 questions. You should act more like a reporter or media interviewer than a recruiter or hiring manager, so that the answers reveal more about me than the content of my CV.

1. What first drew you to software testing and quality engineering?

career · motivation · philosophy

I've always liked to understand how things work, not necessarily to break them - which I think was the classic tester mindset when I was starting out. I'd rather get out a Haynes manual and work on my car, or solder new pickups into my guitar, or set up a new PC from components - build not break. I think this mindset led me to testing, where I would get to see how things worked so I could better understand the ways in which defects could be prevented.

And the landscape of software development has changed massively over the last 15 years, so there is always something new to discover - agile, automation, cloud, GenAI have all really become established during my career so I've always had something new to tackle.

2. When you look back, was there a particular moment or project where that mindset really clicked for you? A time when you realised testing could be a creative, engineering-driven discipline rather than a reactive one?

creativity · collaboration · mindset

Absolutely. It was a job I wasn't in very long, at Stagecoach, but the Engineering Lead Mikey Smith really opened my eyes to this. I'd worked closely with developers before but I'd largely worked in waterfall organisations, so most interactions were up front (requirements stage) or at the end (testing phase, defect reports, customer support). One day, as he was finishing a ticket, he called me over to his desk. The conversation was something like 'OK this is the code I've written to handle the seat assignment, I've created these unit tests that cover the acceptance criteria but I wanted to get your input into cases I might not have covered or even considered'.

This was a lightbulb moment - 'yeah, this is my area of expertise, why have I never helped create unit tests? And now I know what's covered there, I can make sure my higher-level tests aren't just repeating the same thing'. This led me down the path of TDD, of pairing with developers during code to make sure we were preventing issues, thinking of ways to engineer software to make it easy to test, and sharing and evangelising a quality mindset with my developer peers. Which I think is another cornerstone of my philosophy - testing alone doesn't improve quality, so test engineers need to help build quality into the development process.

3. How has that philosophy evolved now that you’re influencing quality at organisational scale? In other words, when you’re working across multiple teams or even entire business units, how do you help others reach that same lightbulb moment you had back then?

leadership · craft · culture

The core of that philosophy has influenced a lot in my approach. At Booking I have been the craft lead for test engineering. This has covered everything from job titles and the interview process to career frameworks, and I think two things stand out.

One, the roles used to be titled 'Test Manager'. I made sure these were changed to align with the development craft: they had 'Engineering Manager', so we had 'Engineering Manager - Test'. The intent was to signal that these roles are equal and equally important in achieving the same mission.

The second was the introduction of the career framework. It outlines six core competencies for test engineers - and only one of those is actually executing testing. The rest are focused on things like advocacy - how test engineers can influence their peers, projects, departments, and so on to focus on building quality - and testability - making testing easier to do and lowering the bar to entry, as I've mentioned before.

4. How do you approach those conversations at VP or SVP level, where quality needs to be argued for in the language of architecture, risk, or business impact rather than testing?

stakeholders · metrics · alignment

Advocacy is totally central to my role, and it's just as vital at my level as it is at the core level. Maybe more so: if I'm not able to advocate effectively across the VPs of Engineering, then it's going to be harder for anyone else to advocate within their teams. I work in a central team now, so I have more of a remit to work with the VPs across the business, but even so I have no authority, so getting the right message framed in the right way is key.

I'm a big believer in metrics as indicators of health and as a way to demonstrate the impact of quality engineering on the things that matter most - customers. And I don't like to reinvent the wheel so DORA metrics are a big part of how I approach these conversations. Nobody wants a high change failure rate, or a long lead time for change, but what I need to do is demonstrate how investing in quality can improve these.

A lot of the time this is about breaking misconceptions - and lead time for change is a good example. There is a default assumption that testing takes time, and you can't have both, so introducing more testing practices is going to slow us down and increase our lead time for change, right? Not for me. If we make things testable, we will spend less time testing. If we shift testing left and invest more in unit and contract testing, we will spend less time passing builds back and forth when they break, and less time blocking or being blocked by dependencies. I can use data, both internal and from the industry, that supports this.

At the same time, risk is a big factor. I will never advocate for testing everything, all of the time. Failures are normal and being able to observe, detect, and fix quickly are another important part of testability. And I think this flexibility helps show we're on the same page - I don't want to slow you down, and I don't want to cause customer problems, but there is a way we can manage this effectively.

5. Your role, while titled Principal Test Engineer, actually spans a lot of what a Director of QA would do — defining vision, aligning leaders, shaping metrics. How do you balance that strategic remit with staying close to the craft itself? How do you ensure you’re still credible as an engineer, not just a strategist?

leadership · architecture · craft

That's a good point - in a lot of places a Principal Test Engineer is much more hands-on and a much more direct progression from a Senior Test Engineer. But here at Booking, I do a lot of things that don't feel like a typical Principal role - the craft leadership I've mentioned, and shaping the company-wide Quality Strategy being another. In a lot of ways it's more like an architect role, and it's probably no coincidence that I've been in a couple of architecture teams now.

One of the key ideas I've taken from that is Gregor Hohpe's concept of the Architect Elevator. In brief, it says that we need to avoid 'ivory tower' architecture, where the people making the decisions are far removed from the problems, and the way to do this is to visit different floors. Get off at the ground floor, spend time with developers and test engineers, understand what's going on day to day. I do this in a couple of ways - I am still engaged with the community, I run workshops and sessions, I am active in Slack where I can help solve problems, and I generally make myself available to anyone for a chat, mentoring, or fixing a problem.

I also lead the Dev++ community - this is a forum where any engineer of any level can come and work with the community on problems, or on a good idea they've had, and we can solve it collaboratively and use Dev++ to broadcast this effectively to the company.

On the other hand, I need to get off at higher levels too - I run major projects that need test architecture and design, such as the Digital Markets Act project that's on my CV. This was a massively complex project with short timeframes and huge potential fines if we didn't deliver the right thing on time. I led this project because it needed that blend of skills - I would go from conversations with Senior Directors and VPs, to digging through production logs to track issues, to running fire drills across the business to manage the risk of the phased launch I'd planned.

6. Your focus seems to be on engineering as a whole, not just testing. How intentional has that been? Do you see yourself as a tester who’s expanded into engineering, or as an engineer whose lens happens to be quality?

identity · engineering

It's intentional. I think at the end of the day we (we being engineering and test) all want the same thing, we just have different specialisms. Despite starting off on the engineering path (computer science degree, engineering graduate scheme), I still see myself as a test specialist who has expanded into engineering. I hope this gives me a few more strings to my bow than someone who is purely one or the other. I think my exposure to architecture teams and engineering practices teams has helped shape this too.

7. You’ve clearly moved through a lot of change in your career — from waterfall to agile, from manual to automated, from on-prem to cloud, and now into AI-assisted testing. When you look back, which transformation or evolution in the industry has most challenged or reshaped the way you think about quality?

agile · AI · evolution of testing

I think they all have in some way - and I think that's healthy - but the two biggest are agile and AI. I'll start with agile. I touched upon this when talking about that close collaboration at Stagecoach, but at the BBC it really came home. I was lucky to work with people like Anna Dick, who showed me agile wasn't just a set of buzzwords but an actionable set of practices centred on teamwork, on delivering and releasing value as soon as possible, and on building on feedback. These are all things that matter to me.

On the second, GenAI has had a much more immediate impact than anything else before. Agile is OLD. It's been around for longer than I've been in this career but it didn't really become the standard until much later. GenAI though is everywhere now - I mean this is literally a conversation with an LLM. I see it as largely a positive for the craft though. There are plenty of people who lack coding knowledge so may not have the experience in automation, but we now have this amazing tool that can turn a test engineer's vision based on their knowledge and experience into code. And that's just one example.

I think automation was seen as the boogeyman that was going to make test engineering redundant; that didn't happen, because the craft is more than just the tests. I see GenAI in the same light: an assistant that can help us unlock our potential and deliver the impact we want more quickly. We need, though, to keep that test engineering mindset and intelligence, and not blindly put faith in this tool - which might actually make our role even more critical.

8. Which achievements best demonstrate the scale of your impact?

achievements · metrics

One of the biggest things I'm proud of comes from my first couple of years at Booking.com. When I joined, releases were slow, painful, and not very agile. As a Test Lead (as it was before the title change), I could see that this was not only slowing us down but increasing risk - we had releases with over 30 changes in them, which took up to an hour to deploy and had non-zero downtime whilst deploying. If a release went wrong (which it did), it could take hours of unpicking those changes until the root cause was found and fixed.

I started working with the Engineering Managers and DevOps leads both to focus on a leaner way of delivering changes and to take advantage of AWS to enact blue-green deployments and canary releases to reduce user-facing impact. We went from 6 releases per month with up to 30 tickets per release when I joined, to 69 releases with up to 2 tickets each within a year. A year after that, each ticket was its own release, and we were delivering 183 releases to our customers each month with no release downtime.

In the past year or two, two more stand out. At Booking.com we had to comply with the Digital Markets Act (DMA). As the most senior tester, I was asked to take the lead. I had to organise and deliver a test strategy across several product verticals, coordinating cross-BU testing at scale, working with teams and technologies I'd never used, and with incredibly tight deadlines. I organised fire drills, collaborated with Site Reliability Engineers and Major Incident teams to plan the most effective rollouts, worked with Internal Communications to run large awareness campaigns due to the sheer size and impact of the project, as well as managing Senior Director and VP stakeholders. The project was delivered on time and without any service interruption, which was a huge achievement.

On a smaller scale - but an important building block - I worked with other Principal Test Engineers to introduce SonarQube-based "Releasability" profiles and mutation-testing benchmarks that now underpin quality reviews across a large number of repositories and have proven useful, raising the code releasability rating from E to B.

9. When you describe achievements like that, what stands out is how much of it depends on alignment — getting people with very different goals and pressures moving in the same direction. How do you approach situations where there isn’t natural agreement between different priorities like speed, safety, and visibility?

alignment · strategy · risk

I actually think that at heart we all want the same thing, and that thing is quality. What quality is depends on who you are, but it runs through all those priorities. I remember reading Zen and the Art of Motorcycle Maintenance, which at its heart is a discussion of what quality is, because it's not measurable and is often based on feeling and intuition. This resonated with me, and I think I've probably spent a lot of time since trying to weave it into a story about testing and quality.

But in the context of your question, it makes a bit more sense - engineers don't want speed if it means the site doesn't work and nobody can use it. Compliance doesn't want safety if it means your product is too difficult to use as a result of all the controls. Product doesn't want visibility if what they see is a 1-star Trustpilot rating or dwindling user numbers. And I don't want quality if it harms innovation, experimentation, and breaking the mould.

So I guess my approach is to make this visible - my goal is not to have more engineers than anyone else, or to accept no risk at all. After all, the safest way to protect users is simply not to release. I want to highlight how our goals can be aligned and realised together.

10. Let’s end on a forward-looking note. You’ve led change through multiple generations of technology and mindset — agile, automation, observability, AI. When you think about the next phase of your career, what kind of challenges or environments do you want to be tackling next? What would feel like the right next frontier for you?

future · AI · leadership

I almost think the next step is shedding the 'test' title. Not because I don't think it's important, but because I think it's SO important it needs to be a fundamental part of engineering and product culture - singling it out or attaching that label only makes it look like 'someone else's responsibility'. One part of this is also looking back at the career frameworks and designing a new way of thinking about testing and quality: acknowledging the impact of AI, for one, but also recognising that any engineer - from core to VP - needs not only a solid understanding of their role when it comes to quality, but also a set of tools that make it simple to apply.

I want to turn the talking about it - testability, advocacy, and so on - into something actionable. I think using AI and crafting really solid prompts will be a key way that the test craft is able to support the engineering craft seamlessly.

The other challenge I want to focus on, though, is enabling and encouraging innovation without sacrificing quality. Too often, I think, we talk about best practices, use analogies about production lines, and try to apply the Toyota Production System to what is a much more creative craft - and we lose some of that spark.

I've sort of come full circle - the reason I got involved in computing was because I liked to try stuff out and see what happened. I don't want the next generation to feel like the guardrails are too restrictive and not enjoy the creative process. And I also want people to see Test or Quality Engineering as an important and aspirational role, not something easily replaced. So maybe that's the answer - the next frontier is the next generation. How do we keep software engineering as exciting and engaging as it was for me?