In celebration of this year’s International Software Tester’s Day, we had the pleasure of interviewing our very own Claire McCrae, Software Testing Manager at Connex.
With six years of hands-on experience in testing for customer experience (CX) products, Claire’s journey in software testing has provided her with unique insights into the intricacies of ensuring software quality.
Join us as we discuss Claire’s career journey, her approach to tackling unique CX and AI testing challenges, and her thoughts on the future of testing.
Could you describe your role as Software Testing Manager at ConnexAI and the responsibilities your team typically handles in the software development process?
Claire: In my role at ConnexAI, I oversee the daily activities of our testing department, as well as shaping the direction of the team and the overall testing strategy for the company.
Our team primarily focuses on testing tickets for the projects we are assigned to, but we also run regular regression testing sessions, as we periodically release software updates, fixes, and new features to our clients.
When we have new features in the pipeline, we make an effort to plan ahead by building test cases based on the available information, which could be design documents, technical documentation, or discussions with Product Owners. This preparation enables us to test quickly and accurately as soon as a feature becomes available in our first testing environment.
In addition to actively participating in testing procedures, I also manage the team, which includes the usual managerial things like delegating tasks to team members, managing the workforce, and helping to establish priorities when disputes arise. I also play a role in strategising for our product development team, planning for future projects, communicating with stakeholders, and being involved in Connex’s recruitment efforts.
What inspired you to pursue a career in software testing?
The majority of my career has been customer service based, starting in hospitality and then moving into call centre work at a previous company. I eventually became a senior customer service agent, but I found that after reaching a certain point, there was limited room for further learning.
I was already in the mindset of seeking continuous learning opportunities when some friends on the engineering side of my team suggested that I apply for a test engineering position. They felt I'd be a good fit since I already had experience with user testing, a positive attitude, and a strong work ethic.
After researching the role and realising that it aligned with my enjoyment of helping customers and preventing issues from occurring, rather than dealing with them after the fact, I interviewed for the position and was hired within the hour.
Can you tell us about your career journey and how you became a Software Testing Manager?
After beginning my journey as a Junior Test Engineer, I worked my way up to a Level 3 Test Engineer. From there I quickly came into my own when I was given access to areas such as databases, logs, metrics, and AWS services like EC2 and ELB. With this insight, I was able to teach myself about observability and gain a much better understanding of the infrastructure behind the technical processes.
Along the way, I took on various roles and passion projects, including delving into automation testing and becoming a champion for accessibility testing within a guild at my previous company. I also had the privilege of mentoring a test apprentice who successfully passed with distinction.
When I decided to join ConnexAI, I assumed the role of Testing Manager, initially overseeing a team of two, which has since grown significantly.
The testing department is actually a majority female team, which is a nice change from the ‘norm’ of majority male teams in the tech space, but it’s happened very naturally. I’ve also been able to take pride in helping individuals return to work after a hiatus and guiding those with no prior experience to successful careers in testing.
Your team works on customer experience (CX) and artificial intelligence (AI) software.
How do these domains present unique challenges for software testing, and what strategies do you employ to address these challenges effectively?
Testing for CX is always a challenge because you have to ask yourself, as best you can, 'What would a human do? How would they use this?' A part of a system will have been built a certain way, so in theory it should be used in that way, but that is not what customers tend to do, and that's where we as testers have to think several steps ahead to account for their behaviour.
A lot of the time with customer processes, you have to "save them from themselves". If you do not want them to use something a certain way, do not let them. If they do something wrong, give them a clear error message in friendly terminology that tells them how to correct it. Any opportunity to build in self-service is a big win for both the provider and the consumer.
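The "save them from themselves" idea can be sketched as a simple input check. This is a hypothetical illustration (the function name and validation rules are illustrative, not from ConnexAI's product): instead of letting a malformed phone number through or surfacing a stack trace, the system rejects it with a message that tells the customer exactly how to fix it.

```python
def parse_phone_number(raw: str) -> str:
    """Validate a customer-entered phone number.

    Rejects bad input with a friendly, actionable message
    rather than a technical error the customer can't act on.
    """
    # Be lenient about formatting: keep only the digits the customer typed.
    digits = "".join(ch for ch in raw if ch.isdigit())
    if not digits:
        raise ValueError(
            "Please enter your phone number using digits only, "
            "e.g. 0161 496 0000."
        )
    # Illustrative length bounds; a real product would apply regional rules.
    if len(digits) < 10 or len(digits) > 15:
        raise ValueError(
            "That number looks too short or too long. "
            "Please check it and try again."
        )
    return digits
```

The design choice here mirrors the interview's point: tolerate harmless variation (spaces, dashes) rather than blocking it, and reserve errors for genuinely unusable input, phrased as a correction the customer can make themselves.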
AI testing on the other hand is a whole other kettle of fish. You often have to forego your traditional knowledge gained from CX testing and take a deep dive into AI research. Understanding how AI systems are built and how they’re supposed to function is the most important thing for being able to test them.
The skills we have been able to utilise from our CX projects at ConnexAI are mostly on the speech analytics side, where customers have different ways of speaking, such as accents, phrases or speeds, that the AI software needs to be able to interpret correctly. When it comes to testing these processes, I think one of the main challenges can be data quality or data availability, so we work closely with our Data Science and Annotation teams to help us avoid and solve such problems.
In your opinion, what are some key contributions that software testers make to the development process and the overall quality of software products?
Software testers play a crucial role in multiple phases of the development process.
During the initial design phase, we provide an unbiased user perspective, helping the team understand how users might interact with the software. In the refinement phase, we introduce “what if” and “what about” scenarios, conducting risk-based analysis to identify potential issues. These contributions are invaluable as they occur before any code is written, ensuring that the entire team shares a common understanding.
Testers often serve as a bridge between the user and the business, especially after the design phase. We also act as intermediaries between developers and Product Owners (POs).
In addition to testing software when it’s ready, we provide feedback in the form of pass/fail assessments and observations to give a better idea of what should be addressed.
Whenever possible, we support our feedback with facts and metrics so everyone can learn from the data, such as the number of bugs, their priority, acceptance rates, and the types of bugs encountered.
What tools, methodologies, or best practices does your team rely on to ensure the quality and reliability of the software you develop?
We use a mix of tools to make sure the testing of our product is top-notch. Jira is our go-to for managing project tickets, keeping everything organised. When it comes to testing APIs, we depend on Postman, which really helps simplify the process.
For database tasks we use DataGrip and when we need to access remote systems via SSH, we use PuTTY and Terminal. For documenting our testing specifics, Confluence comes in handy. And to ensure our software plays nicely across different browsers and devices, we turn to Browserstack for comprehensive compatibility testing.
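Postman-style API checks usually boil down to a handful of assertions on the response: status code, valid JSON, and required fields. A minimal sketch of that pattern in plain Python (a hypothetical helper, not ConnexAI's actual test suite) might look like this:

```python
import json


def check_api_response(status_code: int, body: str, required_fields: tuple) -> list:
    """Return a list of failure messages for one API response.

    An empty list means the response passed all checks,
    mirroring the kind of assertions written in Postman test scripts.
    """
    failures = []
    if status_code != 200:
        failures.append(f"expected HTTP 200, got {status_code}")
        return failures
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        failures.append("response body is not valid JSON")
        return failures
    for field in required_fields:
        if field not in payload:
            failures.append(f"missing required field: {field}")
    return failures
```

Collecting failures into a list, rather than stopping at the first one, gives testers the fuller picture in a single run, which suits the feedback-driven approach described above.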
Looking ahead, how do you see the role of software testing evolving in the coming years, particularly in light of emerging technologies like AI, machine learning, and automation?
In the future, I see the role of software testing continuing to shift to the left in the software development lifecycle, which is where I personally think it should already be, but many companies are still lagging behind.
For all software development teams, the goal is to obtain feedback as early as possible, and that tends to be before code is even written. Embracing failure as a learning opportunity will become increasingly important, and teams should aim to fail fast, so lessons can be learned and addressed quickly.
I also think automation will remain a significant focus, but I believe it's crucial to invest in internal training and development for testers who can lay the foundations for automation testing. This enables testers to utilise their UI testing experience and foster collaboration with developers to answer any queries, which is something that can be overlooked when companies bring in automation engineers who lack UI experience and knowledge.
As for AI and machine learning, specialised testing in these areas will definitely grow in importance. Testers will need to gain a deep understanding of how AI systems are built and intended to behave. AI testing will require testers to explore various speech analytics scenarios and understand different ways of interacting with AI systems.
Which areas of testing interest you most, and what developments would you like to see in the next few years?
Accessibility testing is an area that holds great importance for me. It should be implemented universally, and businesses should pay more attention to it. It’s not only the right thing to do for others and follows best design practices, but it also opens doors to additional markets, translating to more opportunities and revenue.
Automation, of course, remains a significant interest. Everybody talks about it and wants it for their business, but many businesses aren't willing to give the time and resources required to do it properly, so it can be built the right way from the start.
From my perspective, it’s essential to build automation frameworks that are easy to implement, maintain and require minimal coding skills. I also believe helping automation engineers gain a comprehensive understanding of UI testing is critical.
Observability is another area I'm keen on. Providing testers with access to tools that enhance observability allows us to identify issues more effectively. The better we can observe and analyse systems, the more likely we are to detect and report problems.
Do you feel any areas of testing are undervalued or overlooked in the industry?
Accessibility testing, as I’ve mentioned before, is often undervalued and overlooked. Another area that deserves more attention is exploratory testing, which mimics the end-user’s personal freedom and choices when navigating a system. It’s an essential testing technique, but it should always be time-boxed and specific to maximise its impact.
Finally, on a personal note, what do you find most rewarding about your role as a Software Testing Manager at Connex?
The most rewarding aspect of my role as a Software Testing Manager is witnessing the growth of my team. Seeing them acquire new skills and take the lead on projects they couldn’t handle before is incredibly fulfilling.
On this year’s International Software Tester’s Day, Connex would like to thank Claire and the entire testing team for their incredible contributions to the development of our CX platform and their positive impact throughout our organisation.
To learn more about working at ConnexAI or to apply to join our brilliant team of Connexers, check out the Careers Hub to see our latest vacancies and submit your CV for consideration.