Ady Stokes
Freelance IT and Accessibility Consultant
He / Him
I am Open to Write, Teach, Speak
Freelance IT consultant and accessibility advocate. I curate the Essentials Certificate (STEC) and co-run the Ministry of Testing Leeds meetup. MoT Ambassador. I teach, coach, and deliver training.
Contributions
Build quality into every step of the software lifecycle with the whole team involved
User Feedback is the information, opinions, and insights we get directly from the folks who use our software. It's their experiences, thoughts, and reactions that can guide us to valuable insights. Listening to users is incredibly important because they'll tell us what's working well, what's confusing, where the bugs are causing them to rant on the socials, and what features they wish they had. Feedback doesn't have to be formal. It could be complaints or discussions in user groups or forums. But we also need to consider customer service calls, or even mentions and conversations happening on social media, especially those rants! No matter the source, any feedback is a valuable piece of the puzzle, leading to improved quality. Feedback in all its forms helps us to identify problems we might have missed, understand how the software is really being used, and most importantly, it guides us on how to make the software better and more valuable for the folks who rely on it, no matter how they use it.
Ady Stokes reimagines Rudyard Kipling's epic poem 'If', as if it were being spoken to a junior tester by a senior
How can AI be your trusted advisory board? Rahul takes a long-standing leadership model, in which advisors offer ideas to a leader who makes the final decision, and brings it into an AI future
Originally a development principle from Extreme Programming (XP), YAGNI stands for "You Aren't Gonna Need It". It advises against implementing features, writing code, or, in the case of testing, preparing test cases unless there is a clear and immediate business requirement for them. The aim is to prevent overengineering, reduce waste, and encourage teams to focus on delivering only what is necessary at the present time.

Applied to software testing, the mnemonic YAGNI encourages testers to avoid designing test cases or creating test data for hypothetical scenarios or unapproved features. Instead, it promotes an evidence-led, lean approach to planning and execution, concentrating efforts on the risks and requirements that are confirmed and current.
YAGNI Examples:

A tester resists the urge to write automated tests for a new module that is still under discussion and not yet scheduled for development. Instead, they focus their time on creating and refining tests for features that are already in active development.

During a sprint, a team member suggests writing test scripts for a future integration that has not been prioritised. The tester reminds the team of the YAGNI principle, pointing out that the work might never be needed or may change significantly by the time it is relevant.

A test lead advises against preparing an extensive test plan for a third-party tool that the organisation has not yet decided to adopt, applying YAGNI to prevent wasted documentation effort.

YAGNI is closely related to Agile values and helps testers avoid speculative testing and premature optimisation, encouraging just-in-time preparation aligned with business value. It is similar in spirit to DRY ("don't repeat yourself") and KISS ("keep it simple, stupid"), which are also coding mnemonics.
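As a hypothetical sketch of what YAGNI can look like in test code (the apply_discount function and its rules are invented for illustration, using pytest): tests cover the confirmed, in-development behaviour, and speculative features are deliberately left untested until they become real.

```python
# YAGNI in test code: cover confirmed behaviour only (pytest).
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Invented feature that is confirmed and in active development."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_discount_is_applied():
    assert apply_discount(100.00, 20) == 80.00


def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.00, 150)

# Deliberately absent: tests for voucher stacking and loyalty points.
# Those features are still under discussion and may never ship, so
# preparing tests for them now would be speculative effort (YAGNI).
```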
Now, this isn't your dusty old manual that sits on a shelf gathering cobwebs, or languishes on a drive or unvisited project website getting more out of date by the minute. Living Documentation, also known as dynamic documentation, is an artifact that evolves alongside the software itself. Think of it as documentation that's always current and reflects the latest information available.

Most test artifacts can be living documentation as long as they are kept current. You can create living documents by updating them as required or build them using automation tools. Depending on a project's context, a team might find a balance between documentation and dashboards to provide up-to-date information to colleagues and stakeholders.

Why is living documentation a good thing in software development? It can help everyone in and around the team to have a clear understanding of how the system works and any other relevant information identified. It can make onboarding much easier by getting new team members up to date quickly. It can also help to reduce misunderstandings and improve communication. When your documentation is tied to your automated tests, it can also act as a form of executable specification, showing how the system is supposed to behave. It's all about keeping everyone current with documentation that's actually useful and reliable.
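As a small illustration of documentation acting as an executable specification, here is a minimal sketch using Python's built-in doctest module; the shipping_cost function and its pricing rules are invented for the example. The usage examples live in the docstring, and running the module checks that the documentation still matches the code's actual behaviour, so the docs fail loudly the moment they drift.

```python
# Executable documentation with doctest: the examples in the docstring
# are run as tests, so the documentation cannot silently go stale.
def shipping_cost(weight_kg: float) -> float:
    """Return the shipping cost in pounds for a parcel.

    Parcels under 1 kg ship at a flat rate:

    >>> shipping_cost(0.5)
    2.99

    Heavier parcels are charged per kilogram:

    >>> shipping_cost(5)
    7.5
    """
    return 2.99 if weight_kg < 1 else weight_kg * 1.5


if __name__ == "__main__":
    import doctest
    doctest.testmod(verbose=True)  # re-runs the documented examples
```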
Computers have been around much longer than you might think. Charles Babbage is credited with designing the first mechanical computer in 1822. The first electronic computer was built in 1942 by John Atanasoff and Clifford Berry. Even in the early days, people did software testing, but not in the way you would see today due to the primitive level of technology. Those programming the computer would review their code, an early form of debugging or reviewing for errors.
It was in the 1950s, as computing power grew, that IBM formed the first dedicated testing team. And it was around that time that the term "bug" entered computing folklore: in 1947, a computer scientist named Grace Hopper recorded a ‘bug being found’ in the Harvard Mark II computer. In actual fact, a moth had got stuck in a relay inside the machine, but a defect was recorded, and the term "bug" is used to this day. When you hear news stories about issues caused by software failures, glitches, anomalies, crashes, incidents or faults… these are all "bugs."
“A collection of small, often invisible actions and tasks that generate a lot of team value”, with this list shared:
It is all the small things testers do to make information visible:
Helping connect colleagues who might otherwise not speak, but who each have important information.
Bouncing ideas off colleagues or other testers.
Asking and answering questions in stand-ups and meetings.
Facilitating meetings or presenting demonstrations to stakeholders.
Suggesting improvements to the product and the processes the team uses.
Helping resolve team bottlenecks.
Communicating with users.
Noticing dropped tasks.
Discover why the term ‘manual testing’ has limitations and negative impacts on the testing craft and learn to embrace more modern terminology
Great first event for a brand new Ministry of Testing meetup, with three speakers on accessibility too!
Cross browser testing (CBT) is essentially exactly what it says: making sure your web application or website works properly, and looks as it should, across different web browsers. Not everyone drives the same car, and the same goes for web browsers. Chromium-based browsers are now the most popular, but there's also Firefox, Safari, Microsoft Edge, and even the odd Internet Explorer (IE) still kicking about! In December 2024, IE's usage was showing at 0.16% globally, so that's probably an edge case! That's before we even think about privacy- and security-focused browsers like DuckDuckGo and Opera, or mobile versions. Extensions like NordVPN and others can also influence CBT.

The reason this is so important is that different browsers can sometimes interpret web code in slightly different ways. By web code we mean all the different technologies like HTML (HyperText Markup Language), CSS (Cascading Style Sheets), JavaScript and many others. What looks good in one browser, using one or a combination of web code, might be not so good or even broken in another. As software testers, we need to make sure that however someone is browsing, and on whichever device, they get a consistent experience.

This means we need to test our software on a range of browsers, and ideally on different versions where possible, because things can change with updates. We're looking for things like layout issues with responsive design, where elements might be in the wrong place; functionality problems, where something works in one browser but not another; and even performance differences. It's about ensuring that everyone gets a similar quality experience, regardless of their browser preference. It's a bit like making sure your instructions are clear no matter who is reading them!
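As a hedged illustration, here is a minimal sketch of an automated cross browser check using Playwright's Python API; the target URL and title assertion are placeholder assumptions, not part of any real suite. The same check runs against the Chromium, Firefox, and WebKit engines.

```python
# Minimal cross browser smoke check with Playwright
# (pip install playwright, then `playwright install` for the engines).
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Run the identical check against three different rendering engines.
    for browser_type in (p.chromium, p.firefox, p.webkit):
        browser = browser_type.launch()
        page = browser.new_page()
        page.goto("https://example.com")  # placeholder URL for illustration
        # The same assertion must hold in every engine.
        assert page.title() == "Example Domain", browser_type.name
        print(f"{browser_type.name}: title OK")
        browser.close()
```

In a real project this loop would typically live inside a test runner, with different browser versions and devices covered in a CI matrix rather than a single script.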
Software that will be used in multiple markets and languages needs to consider all the possible variances across them. When we are designing and developing software for international markets, we have to think of those considerations right from the start, just like we would for accessibility or security. Think of it as building a solid foundation that allows your application to switch and behave appropriately for a global audience.

The whole point of internationalisation is to make it easier to then localise your software for specific markets. So, things like making sure your software can handle different character set inputs (like those used in Japanese or Arabic), and that your user interface (UI) is responsive enough to accommodate varying text lengths in different languages. A big obstacle to doing this is hardcoding assumptions like date formats or currency symbols. That can be a big problem down the line as new languages are added.

For testers, it means we need to be thinking about whether the software has been built with this global perspective in mind. What are the risks? Can it handle different languages, date formats and postal address formats? Can it handle very short or very long names? Are there any cultural considerations we need to be aware of and look into?

Internationalisation is about ensuring this isn't just software that works in its primary language, but software that works culturally and linguistically for every single person who might use it, no matter who they are or where they are in the world. What do we need to test to prove that's the case?
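To make the hardcoding risk concrete, here is a minimal sketch using the third-party Babel library; the dates, amounts, and chosen locales are invented for illustration. Instead of hardcoding a date pattern or a currency symbol, formatting is delegated to locale data, so adding a new market does not mean rewriting the formatting logic.

```python
# Locale-aware formatting with Babel (pip install babel).
# Hardcoding "%m/%d/%Y" or a "$" symbol would silently break for
# many of these markets; locale data handles the variation for us.
from datetime import date

from babel.dates import format_date
from babel.numbers import format_currency

release_date = date(2024, 12, 1)  # invented example data
for locale_id in ("en_US", "en_GB", "de_DE", "ja_JP", "ar_EG"):
    print(
        locale_id,
        format_date(release_date, locale=locale_id),
        format_currency(1234.56, "EUR", locale=locale_id),
    )
```

Running this shows the same date and amount rendered differently per locale, which is exactly the behaviour a tester would want to probe with short and long names, non-Latin input, and right-to-left text.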