The Testers and Coding Debate: Can We Move on Now?

Should Testers Learn How to Write Code?

The debate on whether testers should learn how to write code has ebbed and flowed. There have been many blog posts on the subject, both recent and not so recent. I have selected the ten most prominent examples and listed them below; I could have chosen twenty or more. I encourage you to read references [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].

At the BCS SIGIST in London on 5th December 2013, a panel discussion was staged on the topic “Should software testers be able to code?” The panellists were Stuart Reid, Alan Richardson, Dot Graham and myself. Dot recorded the session and has very kindly transcribed the text of the debate. I have edited my contributions to make more sense than I appear to have made ‘live’. (I don’t know whether the other contributors will refine their content, but a useful record may emerge.) Alan Richardson has captured some pre- and post-session thoughts here – “SIGIST 2013 Panel – Should testers be able to code?” [11]. I have used some sections of the comments I made at the session in this article.

It’s easy to find thoughtful comments on the subject of testers and coding skills. But why are smart people still writing about it? Hasn’t this issue been resolved yet? There’s a certain amount of hand-wringing and polarisation in the discussion. For example, one argument goes, if you learn how to code, then either:

a)    You are not, by definition, a tester anymore; you are a programmer, or

b)    By learning how to code, you may go native, lose your independence and become a less effective tester.

Another perfectly reasonable view is that you can be a very effective tester without knowing how to code if your perspective is black-box or functional testing only.

In this article I’d like to explore why the situation is not black-and-white: it’s what you do, not what you know, that frames your role, and adding one skill to your skill-set does not reduce the value of another. I’d like to move away from the ‘should I, shouldn’t I’ debate and explore how you might acquire capabilities that are more useful for you personally or for your team – if your team needs those capabilities.

The demand for coding skills is driven by the demand for capabilities in your project. In a separate article I’ll be proposing a ‘road-map’ for tester capabilities that require varying programming skills.

My Contribution to the ‘Debate’

Before we go any further, let me make a few position statements derived from the Q&A of the SIGIST debate. By the way, when the SIGIST audience were asked, it appeared that more than half confirmed that they had programming skills or experience.

Software testers should know about software, but don’t usually need to be experts

Business acceptance testers need to know something of the business that the system under test will support. A system tester needs to know something about systems, and systems thinking. Software testers ought to know something about software, shouldn’t they? Should a tester know how to write code? If they are looking at code and figuring out ways to test it, then probably. And if they need to write code of their own, or they are in day-to-day contact with developers helping them to test their code, then technical skills are required. But what a tester needs to know depends on the conversations they need to have with developers.

Code comprehension (reading, understanding code) might be all that is required to take part in a technical discussion. Some programming skills, but not necessarily at a ‘professional programmer level’, are required to create unit tests, services or GUI test automation, test data generation, output scanning, searching and filtering and so on. The level of skill required varies with the task in hand.
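
To make this concrete, here is a minimal sketch of the kind of ‘light’ scripting meant here – scanning a test output or log file for failure markers. The file name and the markers are invented for the example; adapt them to your own system.

```python
def scan_log(path, markers=("ERROR", "FATAL", "Traceback")):
    """Return (line_number, line) pairs containing any failure marker.

    A tester-level utility: no framework, no professional polish,
    just enough code to filter a big output file quickly.
    """
    hits = []
    with open(path, encoding="utf-8") as f:
        for n, line in enumerate(f, start=1):
            if any(marker in line for marker in markers):
                hits.append((n, line.rstrip()))
    return hits
```

Run against a server log (e.g. `scan_log("app.log")`), this returns just the suspicious lines with their line numbers – a five-minute script that can save hours of scrolling, and well below ‘professional programmer level’.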

New skills only add, they can’t subtract

There is some resistance among testers to learning a programming language. But having skills can’t do you any harm. Having them is better than not having them; new skills only add, they don’t subtract.

Should testers be compelled to learn coding skills?

Most of us live in free countries, so if your employer insists and you refuse, then you can find a job elsewhere. But is it reasonable to compel people to learn new skills? It seems to me that if your employer decides to adopt new working practices, you can resist the change on the basis of principle or conscience or whatever, but if your company wishes to embed code-savvy testers in the development teams it really is their call. You can either be part of that change or not. If you have the skills, you become more useful in your team and more flexible too of course.

How easy is it to learn to code? When is the best time to learn?

Having any useful skill earlier rather than later is better, of course, but there’s no reason why a dyed-in-the-wool non-techie can’t learn how to code. I suppose it’s harder to learn anything new the older you are, but if you have an open mind, like problem-solving and precise thinking, are a bit of a pedant and have patience – it’s just a matter of motivation.

However, there are people who simply do not like programming, or find it too hard or uncomfortable to think the way a programmer needs to think. Some just don’t have the patience to work this way; it doesn’t suit everyone. The goal is not usually to become a full-time programmer, but you may have to persist to get past the initial discomfort. Ultimately, it’s your call whether you take this path.

How competent at coding should testers be?

My thesis is that all testers could benefit from some programming knowledge, but you don’t need to be as ‘good’ a programmer as a professional developer in order to add value. It depends of course, but if you have to deal with developers and their code, it must be helpful to be able to read and understand that code. Code comprehension is a less ambitious goal than programming. The level of skill varies with the task in hand. There is a range of technical capabilities that testers are being asked for these days, but these do not usually require you to be a professional programmer.

Does knowing how to code make you a better tester?

I would like to turn that around and say, is it a bad thing to know how to write code if you’re a tester? I can’t see a downside. Now you could argue: if you learn to write code, then you’re infected with the same disease that the programmers have – they are blind to their own mistakes. But testers are blind to their own mistakes too. This is a human failing, not one that is unique to developers.

Let’s take a different perspective: If you are exploring some feature, then having some level of code knowledge could help you to think more deeply about the possible modes (the risks) of failure in software and there’s value in that. You might make the same assumptions, and be blind to some assumptions that the developer made, but you are also more likely to build better mental models and create more insightful tests.

Are we not losing the tester as a kind of proxy of the user?

If you push a tester to be more like a programmer, won’t they then think like a programmer, making the same assumptions, and stop thinking of or like the end user?

Dot Graham suggested at the SIGIST event, “The reason to separate them (testers) was to get an independent view, to find the things that other people missed. One of the presentations at EuroSTAR (2013) was a guy from an agile team who found that all of the testers had ‘gone native’ and were no longer finding bugs important to users. They had to find a way to get independence back.”

On the other hand, by separating the testers, the team loses much of the rapid feedback, which is probably more important than ‘independence’. Independence is important, but you don’t need to be in a separate team (with a bureaucratic process) to have an independent mindset – which is what really matters. Wherever the tester is based, their independence lies in their independent mind, whether they test at the end or work with the developer before the code is written.

There is a Homer Simpson quote [12]: “How is education supposed to make me feel smarter? Besides, every time I learn something new, it pushes some old stuff out of my brain. Remember when I took that home winemaking course, and I forgot how to drive?”

I don’t think that if you learn how to code, you lose your perspective as a subject matter expert or experience as a real user, although I suppose there is a risk of that if you are a cartoon character. There is a risk of going native if, for example, you are a tester embedded with developers. By the same token, there is a risk that by being separated from developers you don’t treat them as members of the same team, you think of them as incompetent, as the enemy. A professional attitude and awareness of biases are the best defences here.

Why did we ever separate testers from developers? Suppose that today, your testers were embedded and you had to make a case that the testers should be extracted into a separated team. I’m not sure the case for ‘independence’ is so easily made because siloed teams are being discredited and discouraged in most organisations nowadays.

What is this shift-left thing?

There seems to be a growing number of companies reducing their dependency on scripted testing, while their reliance on exploratory testers and on testers ‘shifting left’ is increasing.

Right now, a lot of companies are pushing forward with shift-left, Behaviour-Driven Development, Acceptance Test-Driven Development or Test-Driven Development. In all cases, someone needs to articulate the examples – the checks – that drive these processes. Who will write them, if not the tester? With ATDD and BDD approaches, communication is supported with stories, and these stories are used to generate automated checks using tools.
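
As an illustration, not tied to any particular tool, here is how one such story-driven check might look in plain Python. The story, the `Basket` class and the discount rule are all invented for the example; in a real BDD/ATDD setup a tool such as Cucumber or behave would bind the story text to step code like this.

```python
# Story (the 'example' a tester might articulate):
#   Given a basket totalling over 50,
#   When the customer checks out,
#   Then a 10% discount is applied.

class Basket:
    """Hypothetical system under test."""
    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        subtotal = sum(self.items)
        # Business rule the story describes: 10% off orders over 50
        return subtotal * 0.9 if subtotal > 50 else subtotal

def test_discount_applied_over_fifty():
    basket = Basket()            # Given...
    basket.add(60)               # ...a basket totalling over 50
    assert basket.total() == 54  # Then: 10% discount applied

def test_no_discount_at_or_under_fifty():
    basket = Basket()
    basket.add(50)
    assert basket.total() == 50  # boundary: exactly 50 gets no discount
```

The point is that whoever writes these examples needs tester thinking – note the boundary check at exactly 50 – and only enough code to express it.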

Companies are looking to embed testers into development teams to give the developers a jump start to do a better job (of development and testing). An emerging pattern is that companies are saying, “The way we’ll go Agile is to adopt TDD or BDD, and get our developers to do better testing. Obviously, the developers need some testing support, so we’ll need to embed some of our system testers in those teams. These testers need to get more technical.”

One goal is to reduce the number of functional system testers. There is a move to do this – not driven by testers – but by development managers and accountants. Testers who can’t do anything but script tests, follow scripts and log incidents – the plain old functional testers – are being offshored, outsourced, or squeezed out completely and the shift-left approach supports that goal.

How many testers are doing BDD, ATDD or TDD?

About a third of the SIGIST audience (of around 80) raised their hands when asked this. That seems to be the pattern at the moment. Some companies practising these approaches have never had dedicated independent testers, so the proportion of companies adopting these practices may be higher.

Shouldn’t developers test their own code?

Glenford Myers’ book [13] makes the statement, “A programmer should avoid attempting to test his or her own program”. We may have depended on that ‘principle’ too strongly, and built an industry on it, it seems. There are far too many testers who do bureaucratic paperwork shuffling – writing stuff down, creating scripts that are inaccurate and out of date, processing incidents that add little value and so on. The industry is somewhat bloated, and budget-holders see these testers as an easy target for savings. Shift-left is a reassessment and realignment of responsibility for testing.

Developers can and must test their own code. But that is not ALL the testing that is done, of course.

Do testers need to re-skill?

Having technical skills means that you can become a more sophisticated tester. We have an opportunity, on the technical side, working more closely – pairing even – with developers. (Although we should also look further upstream for opportunities to work more closely with business analysts).

Testers have much to offer to their teams. We know that siloed teams don’t work very well, and Agile has reminded us that collaboration and rapid feedback drive progress in software teams. But who provides this feedback? Mostly the testers. We have the right skills and they are in demand. So although the door might be closing on ‘plain old functional testers’, the window is open and opportunities are emerging to do really exciting things. We need to be willing to take a chance.

We’re talking about testers learning to code but what about developers learning to test better? Should organizations look at this?

Alan Richardson: We need to look at reality and listen to people on the ground. Developers can test better, business analysts can test better – the entire process can be improved. We’re discussing testers because this is a testing conference. I don’t know if other conferences are discussing these things, but developers are certainly getting better at testing, although they argue about different ways of doing it. I would encourage you to read some of the modern development books like “Growing Object-Oriented Software Guided by Tests” [14] or Kent Beck [15]. That’s how developers are starting to think about testing, and this has important lessons for us as well.

There is no question that testers need to understand how test-driven approaches (BDD, TDD in particular) are changing the way developers think about testing. The test strategy for a system and testers in general must take account (and advantage) of these approaches.

Summary

In this article, I have suggested that:

  • Tester programming skills are helpful in some situations and having those skills would make a tester more productive
  • It doesn’t make sense to mandate these skills unless your organization is moving to a new way of working, e.g. shift-left
  • Tester programming skills rarely need to be as comprehensive as a professional programmer’s
  • A tester-programming training syllabus should map to required capabilities and include code-design and automated checking methods.

We should move on from the ‘debate’ and start thinking more seriously about appropriate development approaches for testers who need and want more technical capabilities.

References

  1. Elisabeth Hendrickson, Do Testers Have to Write Code?, http://testobsessed.com/2010/10/testers-code/
  2. Cem Kaner, comment on the blog above, http://testobsessed.com/2010/10/testers-code/comment-page-1/#comment-716
  3. Alister Scott, Do software testers need technical skills?, http://watirmelon.com/2013/02/23/do-software-testers-need-technical-skills/
  4. Markus Gärtner, Learn how to program in 21 days or so, http://www.associationforsoftwaretesting.org/2014/01/23/learn-how-to-program-in-21-days-or-so/
  5. Shmuel Gershon, Should/Need Testers know how to Program, http://testing.gershon.info/201003/testers-know-how-to-program/
  6. Alan Page, Tear Down the Wall, http://angryweasel.com/blog/?p=624, and Exploring Testing and Programming, http://angryweasel.com/blog/?p=613
  7. Alessandra Moreira, Should Testers Learn to Code?, http://roadlesstested.com/2013/02/11/the-controversy-of-becoming-a-tester-developer/
  8. Rob Lambert, Testers need to learn to code, http://thesocialtester.co.uk/testers-need-to-learn-to-code/
  9. Rahul Verma, Should the Testers Learn Programming?, http://www.testingperspective.com/?p=46
  10. Michael Bolton, At least three good reasons for testers to learn how to program, http://www.developsense.com/blog/2011/09/at-least-three-good-reasons-for-testers-to-learn-to-program/
  11. Alan Richardson, SIGIST 2013 Panel – Should Testers Be Able to Code, http://blog.eviltester.com/2013/12/sigist-2013-panel-should-testers-be.html
  12. 50 Funniest Homer Simpson Quotes, http://www.2spare.com/item_61333.aspx
  13. Glenford J. Myers, The Art of Software Testing
  14. Steve Freeman and Nat Pryce, Growing Object-Oriented Software, Guided by Tests, http://www.growing-object-oriented-software.com/
  15. Kent Beck, Test-Driven Development: By Example
  16. Arnold Pears et al., A Survey of Literature on the Teaching of Introductory Programming, http://www.seas.upenn.edu/~eas285/Readings/Pears_SurveyTeachingIntroProgramming.pdf

About the Author

Paul Gerrard is a consultant, teacher, author, webmaster, developer, tester, conference speaker, rowing coach and publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance. He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia and South Africa, and has occasionally won awards for them.

Educated at the universities of Oxford and Imperial College London, Paul won the EuroSTAR European Testing Excellence Award in 2010. In 2012, he co-authored “The Business Story Pocketbook” with Susan Windsor.

He is Principal of Gerrard Consulting Limited and is the host of the UK Test Management Forum and the UK Business Analysis Forum.

Mail: paul@gerrardconsulting.com
Twitter: @paul_gerrard
Web: gerrardconsulting.com


24 Responses to “The Testers and Coding Debate: Can We Move on Now?”

  1. Mohinder Khosla, February 16, 2014 at 9:19 pm

    I fully support what you said. We should move away from the testers-and-coding debate and look to the future. Great testers can acquire technical skills and provide feedback to developers by pairing with them. This way they will build bridges and enter the confidence zone vital for collaboration within agile teams. The days of black-box testing are numbered and we require a more intuitive approach to testing.

  2. Tim Western, February 17, 2014 at 11:59 am

    So I’m curious now. What if someone learned to code first, and then became a tester? The arguments around this subject always seem to be the reverse, that Testers are somehow less technical, and therefore do not already possess some knowledge about coding. That may be true for some testers, but not everyone sees things that way either.

    Personally I think this question is highly subjective to context, and because of that it is up to each individual to decide whether they want or need coding skills for what they do. One of the risks of approaching testing from a programmatic perspective is not just the going-native problem, but that some teams begin to see all testing problems as programming problems.

    The other funny thing is, it seems people want testers to learn to code, but you don’t often hear the reverse, that programmers need to learn how to test. I mean the very fact unit testing ends up mentioned in this article as something a tester should know how to do, I think is telling.

  3. Matti Hjelm, February 17, 2014 at 12:33 pm

    I agree that “Tester programming skills are helpful in some situations”. But I don’t agree that it’s always true that “having those skills would make a tester more productive”.
    I am myself an ex-developer and know some programming, but I still often end up spending more time trying to automate with programming what I could have finished faster manually.
    The problem, I guess, is that ‘light’ programming skills don’t help with more than very simple problems. Very often there is some critical problem that completely blocks you and can only be solved by investing a LOT of time, because your knowledge is too shallow. What looks easy for a programmer is often TOO hard for a non-pro.

    • Paul Gerrard, February 17, 2014 at 4:03 pm

      Hi Matti,

      It’s important that the focus of attention in this debate is not just ‘automated testing’. Knowing how to write a little code can help if you want to search files, generate test data, have conversations with developers about stuff ‘under the bonnet’, as well as having perhaps a better insight on how to test from a black-box perspective.

      If automating checks is your goal, having programming skills would seem to me to be a definite advantage.

      If you are “spending more time trying to automate with programming what I could have finished faster manually”, then either it’s too ambitious or complex – a bad call to automate at all (and coding skills would help with that decision) – or you are automating something that will only be done once, and the economics don’t stack up.

      Like all skills, if you don’t practise, you won’t improve. Having some coding skills means that your perspective changes. For example, every opportunity to automate a check (one that has value in being run many times) becomes an opportunity to use or improve your skills.

  4. Paul Gerrard, February 17, 2014 at 12:35 pm

    @Mohinder Thanks. I wouldn’t say ‘days of black box are numbered’ – I’d take that to mean the script-based testing and testers who can only work that way. I don’t think we’ll ever see the end of them.

    But in some organisations they are history. In others they are being reduced in number and in some organisations, scripted testing will remain as the main approach for some years to come.

  5. Kate Paulk, February 17, 2014 at 12:40 pm

    I see it as a matter of where a person’s strengths lie. Each tester should be using his or her abilities to provide information about software to the stakeholders so they can make the business decisions they need to make. If those abilities include coding, you’ll get different information than if they don’t, but the information will still be valuable.

    In addition, pushing someone who doesn’t want to code and has no aptitude for it into the coding realm just turns a manual tester into an indifferent and unhappy coding tester – which is a net loss to the team.

    In my not at all humble opinion the answer to the question “Should testers code or not?” is “Yes”. Coding and non-coding testers add value (assuming they’re good testers to start with). One is not better than the other.

    • Paul Gerrard, February 17, 2014 at 4:10 pm

      Hi Kate,

      Indeed. The choice to learn how to code lies with the individual. Not every tester can or wants to code. It must be worth a little time to consider the pros and cons. It doesn’t make sense to me to dismiss it out of hand, ‘in principle’. If you don’t try, you would never know whether it has value to you or your team – or not.

      Having the skill might not add value in all circumstances and I’m not suggesting all testers who can write a little code are ‘better’ than those who do not. It is absolutely down to the individuals.

      But having some coding skills on your CV will definitely make you more attractive in the job marketplace right now.

      • Kate Paulk, February 18, 2014 at 12:33 pm

        Paul,

        The job marketplace is – sadly – an issue, right alongside automation tools that promise the world and deliver rather less than that (yes, I’m cynical). I wouldn’t advocate that anyone dismiss any option out of hand, not least because employment isn’t that easy to find (and good employment is harder). There are still too many places that think of manual testing as mindlessly following a script written by someone else (there’s a place for that, but it certainly shouldn’t be the whole of the testing effort).

        As always, it depends.

  6. Paul Gerrard, February 17, 2014 at 1:06 pm

    @Tim

    Thanks – you’re right – I’ve been programming on and off since the late 1970s. It’s really hard for me to understand the position of a non-programmer and their reluctance to learn to code, so I tried to focus on what value an individual could add rather than whether non-coding testers ‘must’ learn that skill. Indeed it is up to the individual.

    But suppose we were discussing ‘user interface design’ or ‘critical thinking’ or ‘domain testing’ – then which tester would say no to acquiring those skills? Would there be a debate about those?

    What makes coding so different? If we see code as the product, this distorts our thinking as testers. See, for example, Philip Armour’s “The Laws of Software Process” and his view that code is a medium, not a product, and that the goal of a project is to acquire knowledge.

    Absolutely we need to encourage developers to recognise that testing isn’t just about learning how to use a unit-test framework to knock out a few executable examples. Testing for developers seems mostly to revolve around the notion of ‘design through checks’, and that isn’t the same as testing to detect defects (as a goal). The developer’s goal is to create some code that ‘works’ and a set of tests that provide a safe platform for refactoring etc. I don’t see much attention being paid to ‘better testing for developers’ in the developer literature/blogs/conferences except in the discussion of test-driven development. The BDD and ATDD approaches promise much, but then users, analysts and testers are most likely to be the people who write the stories that will drive the test automation (and not developers).

    So yes, we may be exhibiting co-dependent behaviour (again) to compensate for the developers’ lack of attention to good testing. But I think testers, working much more closely with developers, could encourage better test and code design through their daily conversations. I’m sure this is a goal of the companies shifting left or moving towards a DevOps culture.

    • Tim Western, February 18, 2014 at 1:25 pm

      My biggest problem with this discussion is twofold. One, the unspoken realization by many that to push for coding-skilled testers is, in effect, to write off testing as not that difficult a process by itself. Two, the suggestion that programming is some panacea skill that will solve all testing problems. Look, I come from a background in engineering, so I look at everything as a trade-off. When it comes to continuing learning, which is better: to get a little more familiar with testing, to improve critical thinking as you suggest, or to learn about user interface design? Which do you choose?

      There is so much emphasis being given to coding for testers, as if coding is the Rosetta Stone that unlocks all of testing for you. If that were the case, then why were there ever tester specialists to begin with? Furthermore, I first learned to really program in high school and college. I saw programming languages as a support tool for my potential career as an engineer, and learned a number of them, ranging from Basic, C, Ada and various assembly languages to web languages that I won’t list here. At the end of each of those courses, I thought I knew all I needed to code in the language in question, but you know what? That turns out to be an example of Dunning-Kruger. No matter what language you may learn, experience using it on real projects is necessary to really be competent. It just isn’t as simple as picking up a manual and learning all the basic syntax, semantics and operators of a given language; you need to learn how they are leveraged in the real world.

      Now there are other types of languages – more scripting-based ones, like bash for example – that might be easy to learn a little about and leverage to help make your testing better. But does that really mean, even in this case, that the tester who has this knowledge is going to be a 100% competent coder? I say that requires proving.

      Now I am not suggesting a tester should never learn how to code. That’s not the point I am trying to make. What I am saying is that there is a vast difference in learning about coding, learning about how to write a few basic routines, and being able to competently string class upon class together to form some great project.

      I think as testers we would chafe at the idea that we are just a bunch of unskilled button pushers, and programmers would likely do the same if you suggested all they do is type into an IDE or text editor and click compile. Both groups of developers have a number of tasks that go into how they accomplish what they do, and it is more than just coding.

      Even in test automation, the one area where there is a lot of argument for testers with coding skills, I wonder how many employers are looking for people who also know a bit about design patterns or SOLID design principles? Maybe I’m the only one who understands this. I do not think testers should be afraid to learn about coding, but I challenge the assertion that doing so is necessarily an easy task, as if it unlocks some mystical door of knowledge.

      I also still maintain that there is an inherent problem in some teams that look for testers with programming experience, who begin to see every testing problem as a programming problem. I think to go immediately to that one tool to leverage testing is to set up a team to eventually see some bugs slip into the product, because the focus had been on the test code and not on the testing the code was supposedly taking care of.

      • Paul Gerrard, February 18, 2014 at 3:01 pm

        I’m not sure anyone is arguing in this Forum that asking for testers who can write code ‘writes off testing as not difficult’. (But I’m sure there are employers out there who do). And coding as a panacea – again it’s not a position anyone here is taking but I think perhaps some people, probably some developers, think all tests could be coded.

        But I would draw your attention to the DevOps movement, which is actively promoting very high levels of (test) automation as a viable process. DevOps environments tend also to do in-production manual testing and, of course, customer testing (through dark releases, A/B, Blue-Green and Canary testing etc.), which supplements the automated stuff. They are usually online businesses.

        A ‘problem’ we have is some of these companies (and not just startups) appear to be succeeding commercially with the continuous delivery and DevOps approaches. They are getting attention and the approach is gaining followers (at least in the DevOps community). I’ve met several really smart and honest people who say their companies are succeeding with these approaches. Who are we to tell them they are doing it wrong?

        With regards to choosing between skills – it has to depend on what problem you need to solve – today and in the future.

        “Why were there ever tester specialists in the first place?”

        Why indeed. I think some people have challenged the traditional view and come up with measures to compensate for not having dedicated testers. One is ’embed them and give them coding skills’. I’m not arguing the pros or cons of this. It’s happening, and that’s why I’m suggesting we move onto the how and not spend too much more time on the why. Ours is not to reason why, ours is but to get on with it.

        Coding skills, like most skills, need continuous practice and development. ‘Learning’ a single language does not imbue you with all the skills required (to do much of anything). Coding is obviously much more than typing syntactically correct statements. There should be no expectation that testers need to be, or can ever become, a “100% competent coder”. As I say, in the UK there is strong demand for SDETs and SDiTs but there are hardly any of them available. And it seems that the majority of tester roles require some technical background.

        Testing is hard. Programming is hard. It is difficult to learn how to do either well.

        I think commercially available programming courses aim to teach language constructs, are aimed at existing programmers and they don’t teach software design (or testing). So if the demand for technical testers is real, increasing and won’t go away, we aren’t very well prepared.

        “I also still maintain that there is an inherent problem in some teams that look for testers with programming experience, who begin to see every testing problem as a programming problem.”

        That may be true. Wearing my developer hat, I will usually imagine how I would write code to do a task before having to do it manually. Nothing wrong with that, I suggest. But test design is not automatable. Execution might be. Perhaps embedding testers in development teams mean the smart tests are created for developers to code up and automate? I think that is one goal of some employers.

  7. Maaret PyhäjärviFebruary 17, 2014 at 2:05 pm #

    Wanted to comment on “New skills only add, they can’t subtract” – I agree. However, out of the thousands of skills I could learn, why would coding be the skill? And, why would coding be one skill only?

    On a daily basis, I look at my developer colleagues struggling to keep up to date with the latest code – external and internal libraries in an ever-changing whirl. I see them struggling to change the code as we learn new things, and to translate what they learn into something that runs, amid all the conflicting information. And I see them finding too little time for everything, just like me.

    New skills add, but I’d still like to see us look at this from the team’s not the individual’s perspective.

    • Paul GerrardFebruary 17, 2014 at 3:49 pm #

      Hi @Maaret

      I certainly don’t want to give anyone the impression that coding is the only or the most important skill for testers to acquire. It’s just that there’s been a lot of debate (and it is continuing) and there are forces out there (Dev Ops, BDD, ATDD, Shift Left) that are pushing testers in that direction.

      Of course, we should look at things from a team perspective. It seems to me that developers are not going to improve their (test) practices until they are forced to. Both the DevOps and the BDD/ATDD approaches depend on test discipline ‘shifting to the left’. It’s not the whole industry doing this kind of thing, but testers/managers in half of the companies I speak to seem to be making this move.

      These companies are making this change on a team or organisation wide basis. They have (rightly or wrongly) made the call for ‘what’s best for the team’. Who is driving this and why?

      As I say in the article, I believe companies deciding to jump to Agile are saying, “The way we’ll go Agile is to adopt TDD or BDD, and get our developers to do better testing. Obviously, the developers need some testing support, so we’ll need to embed some of our system testers in those teams. These testers need to get more technical.”

  8. ChrisFebruary 17, 2014 at 3:11 pm #

    Hi! First thanks for a great post. I thought I’d balance it out with a reply from the perspective of a tester not learning to code, because I do think the debate is important and I’d like to offer the view that being able to program isn’t as valuable as I feel the tone of the post implies (but if I’m wrong in that assumption please let me know!)

    For example, one argument goes, if you learn how to code, then either (a) (b)

    For (a) I haven’t seen this argument ever presented. I don’t think a rational person would say that a tester who knows how to program is just a programmer, any more than they would say a tester who tends his garden is a gardener rather than a tester. I don’t know that I fully understand (b) but it has its merits as a position. It’s possible to lose or skew one’s perspective in this way (e.g. http://en.wikipedia.org/wiki/Curse_of_knowledge). I’d call that a warning, rather than an argument. I think that, in this, I’m agreeing with you.

    Another perfectly reasonable view is that you can be a very effective tester without knowing how to code if your perspective is black-box or functional testing only.

    I was with you up until “only”. I’d say it’s a perfectly reasonable view that one can be an effective tester without knowing how to code in a myriad of ways beyond functional testing. I think it’s possible to be a grey-box tester without knowing how to code. I think it’s possible to interpret some code without being able to write a single line. It’s possible to be proficient with APIs and databases and tools. I don’t think that knowing how to code in the company language adds as much as people assume that it does (although I believe it can be useful).

    Some programming skills, but not necessarily at a ‘professional programmer level’, are required to create unit tests, services or GUI test automation, test data generation, output scanning, searching and filtering and so on. The level of skill required varies with the task in hand.

    Unit tests => unit checks. I don’t think that it’s necessary to program in order to generate test data, even with tools. I have a tool that translates spreadsheet data into database records, for example. I’m not sure why programming is necessary for scanning output, searching or filtering. These things can be done with or without tools, all without any programming experience. And if all else fails, get a programmer to write a tool for you!
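As an aside on the level of skill involved: output scanning and filtering of the kind discussed here can be done with a few lines of script. The following is a minimal sketch in Python; the log lines and the ERROR/FAIL markers are invented for illustration.

```python
import re

def scan_output(lines, pattern=r"ERROR|FAIL"):
    """Return (line_number, line) pairs whose text matches the pattern."""
    matcher = re.compile(pattern)
    return [(n, line) for n, line in enumerate(lines, start=1)
            if matcher.search(line)]

# Hypothetical captured output from a test run
log = [
    "INFO  service started",
    "ERROR could not open config",
    "INFO  retrying",
    "FAIL  startup aborted",
]

for n, line in scan_output(log):
    print(f"line {n}: {line}")
```

Whether a tester writes this themselves or asks a programmer for it, understanding roughly what it does helps either way.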

    There is some resistance to learning a programming language from some testers. But having skills can’t do you any harm. Having them is better than not having them; new skills only add, they don’t subtract.

    The skill of coding can subtract, in terms of pure time and effort to learn each language that’s required (especially if one is not fond of it, or awful at it). Also how dull it can be; I got into testing because I love it. I had opportunities to get into development and I didn’t take them, because coding to spec every day is so devastatingly monotonous (although I have utmost respect for those that do it so well every single workday). I don’t want to be thought of as a check writer (or at least JUST a check writer, I admit I do some tool coding that involves automated checks).

    Should testers be compelled to learn coding skills?

    I don’t think this section dealt with the question. I agree that if your company chooses to replace testing with pure automated checking then one has no place to argue. It isn’t a democracy, and they pay the salaries. But that doesn’t mean a company SHOULD choose to compel testers to learn to code (which I differentiate slightly from coding-related skills), possibly at great expense and at risk of alienating otherwise great testers or turning great testers into mediocre code monkeys. Again, I’m not making the argument that testers should never be compelled to learn to code, just that it’s not clear to me that it’s as frequently a good idea as it seems to be in industry.

    I would like to turn that around and say, is it a bad thing to know how to write code if you’re a tester? I can’t see a downside. Now you could argue: if you learn to write code, then you’re infected with the same disease that the programmers have – they are blind to their own mistakes. But testers are blind to their own mistakes too. This is a human failing, not one that is unique to developers.

    So if testers find the kinds of mistakes that developers don’t find then why would we want to encourage testers to make the same type of mistakes? I think that having knowledge of coding helps a tester to think of the sort of errors that programmers make, but having them actually code checks instead of (also?) do testing IS conflating the nature of their mistakes.

    On the other hand, by separating the testers, the team lose much of the rapid feedback which is probably more important than ‘independence’.

    Do you mean separating them physically? Or just their roles? Because it’s possible to offer rapid feedback with tester/dev pairing without the testers knowing how to code.

    The independence, wherever the tester is based, is their independent mind whether it’s at the end or working with the developer before they write the code.

    Except that a tester’s surroundings, their culture, expectations of them, their mental priming all matter. If you change a tester’s mission (or the mission as the tester perceives it – their context as they understand it) you will change the nature of their testing. I think that it’s possible to get testers to be coders and good testers but that it’s more difficult than some people assume. Hence the case you quoted with testers losing perspective and no longer finding important bugs. Of course you could have testers who work closely with developers and maintain their mindset independence… but then why is that choice dependent on knowing how to code?

    Why did we ever separate testers from developers? Suppose that today, your testers were embedded and you had to make a case that the testers should be extracted into a separated team. I’m not sure the case for ‘independence’ is so easily made because siloed teams are being discredited and discouraged in most organisations nowadays.

    I’m not sure that industry standards are a reliable barometer of correctness. I could make the case for “independence”, even though obviously I think that testers and developers should work together, because testers have a completely different job to do that requires different skills and different approaches. We don’t have to make a choice between testers and developers working together and them being separate roles in separate departments.

    Right now, a lot of companies are pushing forward with shift-left, Behaviour-Driven Development, Acceptance Test-Driven Development or Test-Driven Development. In all cases, someone needs to articulate the examples – the checks – that drive these processes. Who will write them, if not the tester?

    A developer. BDD, ATDD and TDD have the same D at the end – Development. These are development approaches, not testing ones.

    With ATDD, BDD approaches, communication is supported with stories, and these stories are used to generate automated checks using tools…

    Throughout this section I’m not sure if you’re saying that this is a good thing or just an inevitable thing. Encouraging bad testers to become good check writers isn’t necessarily the way to do good testing.
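For concreteness, the kind of automated check that a story drives in BDD/ATDD can be sketched in plain Python. The ShoppingCart class and the story steps here are invented for illustration, not taken from any real tool or project.

```python
class ShoppingCart:
    """Hypothetical system under test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def check_story_two_items_are_summed():
    # Given a new cart
    cart = ShoppingCart()
    # When two items are added
    cart.add("tea", 2.50)
    cart.add("biscuits", 1.25)
    # Then the total is their sum
    assert cart.total() == 3.75

check_story_two_items_are_summed()
```

Whether writing such a check counts as ‘testing’ is exactly the disagreement in this thread; the sketch only shows what the artefact looks like.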

    Testers who can’t do anything but script tests, follow scripts and log incidents – the plain old functional testers – are being offshored, outsourced, or squeezed out completely and the shift-left approach supports that goal.

    You mean fake testers and factory testers? Yes, that happens. But I do functional testing as part of a less formal exploratory approach without resorting to expensive and overly simple check scripts. The reaction to “manual scripters” shouldn’t be to simply turn them into check-writing code zombies but build their testing skill and release them from the prison of imposed overly-formal structures like scripts.

    How many testers are doing BDD, ATDD or TDD?
    I don’t think I’m disagreeing with you on this point, but only because you don’t say if this practice of testers actually being check-writing developers is good or not. BDD isn’t testing. TDD isn’t testing. ATDD isn’t testing. It’s not possible to formalise testing in that way. It might be useful to formalise repeatable checks that find predicted bugs early but that isn’t the full sum of all testing. If someone is doing behaviour driven development that makes them a developer trying to write more useful/accurate/bug free code, and all power to them. But they’re not testing.

    Developers can and must test their own code. But that is not ALL the testing that is done, of course.

    Do you mean test, or do you mean write code with checks that help them stay on course in writing useful code with fewer bugs? Or are you saying that this checking is a small but worthwhile part of the test effort? And are you saying it’s always preferable?

    So although the door might be closing on ‘plain old functional testers’ the window is open and opportunities emerging to do really exciting things. We need to be willing to take a chance.

    Yes. Provided we remember that the choice between “plain old functional testers” and check-writing engineers is a false dichotomy. We can stay as skilled testers, work closely with developers, and pair with them without knowing a single line of code.

    There is no question that testers need to understand how test-driven approaches (BDD, TDD in particular) are changing the way developers think about testing. The test strategy for a system and testers in general must take account (and advantage) of these approaches.

    How is it changing the way developers think about testing? It might be making them assume that explicitly coded checking is a context-independent suitable replacement for testing.

    We should move on from the ‘debate’ and start thinking more seriously about appropriate development approaches for testers who need and want more technical capabilities.

    And those who need and want more non-technical capabilities, of course.

    Again, thanks for a great post! I took a slightly similar (but more generalistic) approach in my post on the subject which I wrote here: http://secondsignofmadness.blogspot.co.uk/2012/11/learn-to-code-or-learn-to-stack-shelves.html

  9. Paul GerrardFebruary 17, 2014 at 5:54 pm #

    Hi Chris,
    Phew! Thanks for the extensive comments. I won’t make an argument (or maybe I will – up to you :O)) I’ll pick the things on which I have something further to say. Some of your points come from your different perspective. Maybe I am experiencing different influences and that’s why I make my points. I do disagree in principle in one or two places.

    a) and b) arguments. I think the ‘testers going native argument’ is being used quite a lot. Dot’s example at Eurostar is one. It’s a red-herring. Going native is nothing to do with skills – it’s about self- and/or management-control.

    “Another perfectly reasonable view is that you can be a very effective tester without knowing how to code if your perspective is black-box or functional testing only.”

    This is fine. Interestingly, in London right now, there seem to be very few jobs for testers who *don’t* have technical skills. Richard Neeve has looked into these things more deeply than I have (he focuses on the ‘SDET Crisis’). The market is deciding whether it’s a good thing to have technical skills. Not testers. I think there’s a certain amount of sleepwalking going on. See his blog here: http://www.richardneeve.net/2/post/2014/02/sdit-under-supply-take-two.html

    “You don’t have to write code yourself, you could get a programmer to write it for you.” Well, that’s easy if they are helpful and have time on their hands. Most are helpful, but don’t have the time.

    You express a preference for exploratory testing (rather than checking, manual or automated or writing code). That’s fine. I think of myself more as a developer who tests a lot. Testing is definitely the ‘boring’ part compared with design – because good design makes testing easier – and therefore boring. I’m not saying I’m a great designer, just that a huge part of the design process is dealing with the question, ‘how will I know I can rely on this feature?’ Good software design is very close in thinking process to test design.

    Developers get very excited about TDD once they ‘get it’. That does not mean they become great developers though. They *might* have a good automated platform to make refactoring easier. But the checks they create might not reflect a good design. The GOOS approach is a very promising start, but I think the developer community are really struggling with this one (most are blissfully unaware of the problem of design).

    I’m not arguing in this article that companies should pursue the ‘shift left’ approaches. (Although I believe in some, perhaps many cases it is a sensible approach). I *am* arguing that more companies are doing it. And non-technical testers might get squeezed out of a job.

    I don’t think testers automatically make the same mistakes (same assumptions) as developers if they have coding skills. Part of being a tester is being able to think independently.

    By the way, I’m *not* assuming a tester who has coding skills always automates their checks or never explores. That’s the Homer Simpson argument.

    Re separation of testers – I mean testers who don’t have the confidence to talk to developers about code. Testers who can only demonstrate a failing test or communicate using IRs and hunches.

    “A developer. BDD, ATDD and TDD have the same D at the end – Development.” That is nomenclature. The BDD advocates (at least Dan North who invented BDD) want to make the D stand for design. In TDD most developers would argue they do a lot more than TDD. As I said above, Design involves (or IMHO should require) much of the thinking you would associate with testing.

    “Encouraging bad testers to become good check writers isn’t necessarily the way to do good testing.” I think, for the foreseeable future, the vast majority of testing will comprise checks. A tester who can’t write a good check is not much of a tester.

    “You mean fake testers and factory testers? Yes, that happens.” – Most of the testers in industry work in environments where they write scripts at some level. The fake and factory labels are offensive and silly. Don’t use them.

    “The reaction to “manual scripters” shouldn’t be to simply turn them into check-writing code zombies but build their testing skill and release them from the prison of imposed overly-formal structures like scripts.” I have no idea what a check writing code zombie is haha.

    In all software development environments someone, somewhere has to do a lot of checking. No one likes doing it particularly and to protect developers from screwing up when they change code, it usually makes economic and technical sense to automate *some* of it. TDD aims to create automated checks as part of the design process so that makes sense to a lot of people. And for some teams that works very well. It’s not helpful to denigrate checking as something that has less value or requires a lower intellect.
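A minimal example of the TDD shape described above – the check is written first and the code is written to make it pass. The is_leap function is hypothetical, used purely for illustration.

```python
def test_is_leap():
    # In TDD these checks are written first; they fail until is_leap exists
    # and is correct.
    assert is_leap(2000)        # divisible by 400
    assert not is_leap(1900)    # century, not divisible by 400
    assert is_leap(2024)
    assert not is_leap(2023)

def is_leap(year):
    """Implementation written to make the checks above pass."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_is_leap()
```

Rerunning test_is_leap after a change is exactly the automated protection against regression described here.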

    Don’t forget that the exploratory part of *all* testing is exploration of your sources of knowledge. Perhaps your sources are documents, people, business context, the ‘old system’ or the system under test. From this knowledge we formulate test models. Then we check the system under test against this model. Either the model is wrong and we explore/remodel a bit more or we believe there’s something wrong with the system under test. All testing involves or *is* checking. Take a look at the http://test-axioms.com site for more on sources of knowledge, models and other stuff.

    “We can stay as skilled testers, work closely with developers, and pair with them without knowing a single line of code.” Don’t forget that most software does not have a user interface. You are of very limited value to your team if 99% of the software *has* to be tested with drivers, simulators or other tools and you cannot understand how to do that yourself.

    “How is it changing the way developers think about testing?” I suggest you talk to the developers in your team about job prospects for developers. SDETs, SDITs are in high demand. Developers expect to be challenged on their testing credentials when interviewed. Employers seem to be seeking SDETs, assuming they can hire fewer testers. SDETs in the City of London can earn up to £120k. There are none available at the moment, it seems.

    • ChrisFebruary 18, 2014 at 11:07 am #

      And thanks for replying! I meant to write a short note, but.. you know how it is :).

      I think we agree on more than I assumed, and you’ve cleared up a lot of points for me, I’ll reply to where I think we’re probably differing on the subject (hopefully that will be valuable!).

      Going native is nothing to do with skills – it’s about self- and/or management-control.
      |
      Yes, it’s how that coding skill is used. It might be used to give added perspective, it might be used to put pressure on great testers to become check writers (or to write more checks where it’s not appropriate). So it can be a good thing, I just think it’s a dangerous move to assume that management (who seem to often not understand testing, or wish to simplify it or make it cheaper and pray that it still works) will give testers coding skills for the benefit of the tester (and therefore the product and test clients) rather than the benefit of simpler management or fake value from lowering costs.

      You express a preference for exploratory testing (rather than checking, manual or automated or writing code). That’s fine. I think of myself more as a developer who tests a lot. Testing is definitely the ‘boring’ part compared with design – because good design makes testing easier – and therefore boring. I’m not saying I’m a great designer, just that a huge part of the design process is dealing with the question, ‘how will I know I can rely on this feature?’ Good software design is very close in thinking process to test design.
      |
      Then I think my replies were worthwhile. I think it’s important to note that there are testers who love testing and don’t care for development. And I’m in favour of tool-assisted testing in whatever form works best, but I also think that it’s rarer than one would assume by looking at the industry that using lots of automation or written scripts is the best way to test a product (and of course it’s never always the best way).
      Also I agree that there’s the question “how can I rely on this feature”.. there’s also “how can users rely on this feature, given the features surrounding it and in any particular system state, and given who they are, in what roles, in what environment they use it in, and how they use it be it how I presume or otherwise to any particular extreme, and how will users who are ignorant or careless or malicious use it and at what time of day in what numbers and at what rate on what platform with what data using what interface…” and that’s the value a good tester can add if they have the freedom to ask that sort of question and investigate it how they need to.

      Re separation of testers – I mean testers who don’t have the confidence to talk to developer about code. Testers who can only demonstrate a failing test or communicate using IRs and hunches.
      |
      But the two aren’t the only possible scenarios. I can describe the product in many ways without referring to code, or falling back on “failing tests” or hunches. I can present evidence about how I tested a product, what I think is a problem, why I think it’s a problem. I can defend my testing (or at least I strive to) and not actually need to refer to the product code.

      “A developer. BDD, ATDD and TDD have the same D at the end – Development.” That is nomenclature. The BDD advocates (at least Dan North who invented BDD) want to make the D stand for design. In TDD most developers would argue they do a lot more than TDD. As I said above, Design involves (or IMHO should require) much of the thinking you would associate with testing.
      |
      Design of what? And if we’re changing nomenclature how about “check-driven design”?
      I don’t think anyone’s claiming that testers can’t help developers and designers, but I won’t concede that BDD et al are testing (or product design). I think that this is a whole separate debate though, and I’ve already taken up too much space!

      “Encouraging bad testers to become good check writers isn’t necessarily the way to do good testing.” I think, for the foreseeable future, the vast majority of testing will comprise checks. A tester who can’t write a good check is not much of a tester.
      |
      And I fight against the idea that it necessarily should. You haven’t said that encouraging bad testers to become good check writers is the way to do good testing, just that checking is inevitable. It’s only inevitable if we all sit around saying that it’s a good idea. A tester is much, much more than a check writer. And yes, anyone can write a good check, that’s why check writers and check executors are fungible commodities that can get offshored. It’s the execution of checks (and which checks to execute and when and why) that’s part of testing, not writing them down.

      “You mean fake testers and factory testers? Yes, that happens.” – Most of the testers in industry work in environments where they write scripts at some level. The fake and factory labels are offensive and silly. Don’t use them.
      |
      You described them as “Testers who can’t do anything but script tests, follow scripts and log incidents”. Is that not as offensive? If that’s all the tester can do then I reserve the right to call them fake testers because they’re basically robots. I’m not saying that people who *use* scripts are fake/factory testers, but testers who can only do those things are. They are bad testers, or perhaps reasonable testers being forced into a position where they have to be hypocrites and liars.

      “The reaction to “manual scripters” shouldn’t be to simply turn them into check-writing code zombies but build their testing skill and release them from the prison of imposed overly-formal structures like scripts.” I have no idea what a check writing code zombie is haha.
      |
      Then think about it like this: Scripts are tools to aid testing, not to replace it, and changing one testing-replacement for another doesn’t help us get good testing done.

      In all software development environments someone, somewhere has to do a lot of checking.
      |
      That is, I suppose, literally the case. But that doesn’t make a tester with a mission the same as a script writer.

      No one likes doing it particularly and to protect developers from screwing up when they change code, it usually makes economic and technical sense to automate *some* of it.
      |
      Agreed, automate some of it, when and where it’s appropriate to invest in that sort of cost. Although I do enjoy many checks that I perform when I’m testing out a conjecture about product behaviour and the like.

      TDD aims to create automated checks as part of the design process so that makes sense to a lot of people. And for some teams that works very well. It’s not helpful to denigrate checking as something that has less value or requires a lower intellect.
      |
      Less value than what? A lower intellect than what? And I denigrate checking only as a replacement for what should be testing.

      Don’t forget that the exploratory part of *all* testing is exploration of your sources of knowledge.
      |
      I’d say it’s exploration of the product whilst leveraging that sort of oracle information. Sources and knowledge influence your mental model by which you process your observations. That’s why less formal exploratory testing permits us to react to what we learn in our test design and in the nature of our observations.

      Perhaps your sources are documents, people, business context, the ‘old system’ or the system under test. From this knowledge we formulate test models. Then we check the system under test against this model.
      |
      Was with you up to here. Then we TEST the system under test against this model.

      Either the model is wrong and we explore/remodel a bit more or we believe there’s something wrong with the system under test. All testing involves or *is* checking. Take a look at the http://test-axioms.com site for more on sources of knowledge, models and other stuff.
      |
      I disagree that testing can only be checking in principle, but that, again, is a whole other huge discussion. I’m working on this sort of assumption with regards to checking: http://www.satisfice.com/blog/archives/856.
      You weren’t to know this but I’m fairly nerdy when it comes to psychology and neurology in testing, so I’m up on models :).

      “We can stay as skilled testers, work closely with developers, and pair with them without knowing a single line of code.” Don’t forget that most software does not have a user interface.
      |
      In what sense does most software not have any interface with a user? All software out in the world has some relationship to a human in some way or other. Strong suspicion that I’m misunderstanding you on this.

      You are of very limited value to your team if 99% of the software *has* to be tested with drivers, simulators or other tools and you cannot understand how to do that yourself.
      |
      Why do I need to know how to code to use tools? What does the term “99% of the software” mean? And why does it have to be tested in a way that requires writing code and in no other way? Genuine questions, I’m trying to understand your point better. And I can see a scenario where learning how to code is this important, don’t get me wrong, I just don’t see that in many places, it might just be selection bias.

      “How is it changing the way developers think about testing?” I suggest you talk to the developers in your team about job prospects for developers. SDETs, SDITs are in high demand. Developers expect to be challenged on their testing credentials when interviewed.
      |
      Will that help me understand how they think about testing? Because if how they think about testing is “I need to show that I can come up with and write coded checks so I can get work” I’m supportive of them but I hope that’s not what they think professional testing is. I gave a talk to our development team to dispel just that sort of thinking with regards to our test team and how we can best be of service to them. If you mean they’re confused about what testing is, I agree, but (anecdotally) I think that’s already pretty pervasive.

      Employers seem to be seeking SDETs, assuming they can hire fewer testers.
      |
      Sincerely, that is a shame. I hope they at least hire better and more skilled testers to compensate.

      Thanks for your replies, and for putting up with me ;).

      • Paul GerrardFebruary 18, 2014 at 4:42 pm #

        I’ve no problem putting up with anyone if we’re on an interesting thread :O)

        “I think it’s a dangerous move to assume that management … ”

        I think much of the pressure to go DevOps and continuous delivery is practitioner driven. You won’t see that many managers at DevOps conferences. Yet.

        “Love”

        I prefer “code and test or test and code” to just test. But I ‘love’ neither. That makes no sense to me and suggests perhaps you are blindsided by that affliction?

        “how can I rely on this feature”

        The developer is no less qualified to ask that question than a tester. Of course it all depends on how teams are structured and the personalities of the people involved. Some devs and some testers are introverts who find it hard to talk to real users/people. Some devs and some testers are organisationally or physically separated from sources of knowledge. Some devs know an awful lot more about the user’s business than testers. It happens. You can’t generalise.

        “But the two aren’t the only possible scenarios. I can describe the product in many ways without referring to code, or …” That’s my point. Testers who can only communicate these ways are of limited use.

        “Design of what?”

        You seem dismissive of checks as being useful for anything. I think you need to think of a software project where no checks were done. A small number of projects might get by. But the vast majority would be hopelessly wayward. Don’t mark things you don’t like as useless. It’s a bigoted stance, not an informed one.

        (A tester who can’t write a good check is not much of a tester.) “And I fight against the idea that it necessarily should.”

        If you don’t believe that checking has value, I’m not going to be able to change your mind. But consider this: if you take the (rather limited) view that the purpose of testing is to find bugs, and good developer practices (all checking?) find perhaps 99% of the fatal bugs while late testing finds only 1% – and the tester can’t even guarantee that 1% completes the job – then I don’t think testing can take that much of a bow.

        “Testers who can’t do anything but script tests, follow scripts and log incidents”. Is that not as offensive?

        I don’t know if that is offensive or not – these people exist. They are not fake testers just because they do things differently to you. I suspect the ‘factory’ testers still comprise the majority of testers in our industry and we are in the minority – who is the fake?

        “That is, I suppose, literally the case. But that doesn’t make a tester with a mission the same as a script writer.”

        Are you implying people who write things down are inferior? Have you only ever run tests that could be held in your memory? You need to work on some real (i.e. more complex) systems :O)

        “And I denigrate checking only as a replacement for what should be testing.”

        I recognise the difference between testing and checking, but testing cannot ever be a replacement for checking, except in the most trivial programs with a user interface (and I’m not sure even then). There are situations where ‘testing’ as you understand it is not possible, so the reverse must be true in these situations.

        “I’d say it’s exploration of the product whilst leveraging that sort of oracle information.”

        All testing is exploratory. Cem Kaner, who invented the concept, agrees with me. Ask him.

        “Was with you up to here. Then we TEST the system under test against this model.”

        Nope, you use the word incorrectly. That is a check. You have a model (in your mind). You think of some stimulus, input or whatever. You drive the system with your selected inputs. You notice something is wrong. Or correct. Explain your thinking to me. If you can explain it to me, I can automate it. If you can’t explain it – good luck convincing a developer there’s a problem. When you run that test again to check the fix has worked – it’s a check.

        All I’m trying to say is this: take whatever test you can think of. If it exposes a problem and you rerun that test, it becomes a check, by the received definition. You, the person, are following a procedure, written down or not, to reproduce the actions you followed before. Otherwise the test/check could be deemed invalid. That check could be automated – and probably should be if it is a useful check.
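As a rough sketch of this point – once the tester can articulate the expectation, it can be encoded and re-run mechanically. The `discount` function and its rule here are entirely invented for illustration, not taken from anything above:

```python
# A hypothetical function standing in for the system under test.
def discount(order_total):
    """Apply a 10% discount to orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# The tester's articulated expectation ("orders of 100 or more get 10% off")
# becomes an algorithmic check that can be rerun to confirm a fix.
def check_discount():
    assert discount(100) == 90.0   # boundary: discount applies
    assert discount(99) == 99      # just below the boundary: no discount
    assert discount(200) == 180.0  # well above the boundary

check_discount()
print("all checks passed")
```

The point of the sketch is only that the thinking behind the check (choosing the boundary values 99 and 100) happened before any code was written; the code merely repeats the comparison.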

        I’m very familiar with the Test v Check definition of Michael and James. They don’t work for me. By definition, testing is a ‘whole process of learning, design and evaluation etc.’, whereas a check seems only to refer to the execution of a ‘sequence of instructions to do some form of comparison’. Who designs the checks? What thought process is involved, beyond writing algorithmic decision rules? Do these checks design themselves? I’d refer you to, for example, Cem Kaner’s Domain Testing Workbook. It is a book on test design. It’s a very thoughtful and comprehensive method for selecting or designing tests. Or checks. I would say this part of the process (design) is the part that requires the most attention and clear thinking. I don’t see how:

        – Firstly, that a check designed this way is any different from a test derived using the same method (whether things were written down or not) and
        – Secondly, that if the argument is that one must never use a systematic approach to identifying inputs and combinations for a test, then tests must be wild-ass guesses much of the time
        – Thirdly, it seems to me that a person running a check that has not been run before is performing an experiment, presumably selected to reveal something (enabling learning) and so on.

        I’m not much worried by the definitions. Rather I worry you seem to think they have different value. In what way is ‘testing’ more valuable, effective, intellectual or efficient than checking?

        “In what sense does most software not have any interface with a user?”

        In the sense that the only way to execute a test of that software requires another piece of software to drive it (or the system under test) in some way. There is no direct human interface to it. One could say that all software requires other software to interface to it: operating systems, firmware, hardware and then the human touch. How can you possibly know, without an understanding of code, or some tracing facility, what software you have executed/covered? How can you possibly say you have tested anything if you cannot, at least in principle, identify what lines of code have been covered? Where code has been written to serve some function, the chances are that some white box, structural or code-based test design and execution (checking) will be required to assure people that it has been tested at all.

        “All software out in the world has some relationship to a human in some way or other.”

        Er, I suppose you could say that. Tell me: how would you test a device driver? Or a class that manages memory? Or a message hub? Or a web server? Or the kernel of an operating system? Or a system service? None of these have user interfaces. Tools and checking are required, big time.
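To make the message-hub case concrete: the only way to exercise such a component is through code. The toy hub below is entirely hypothetical, just a minimal sketch of the shape such a test driver takes:

```python
# A toy in-memory message hub with no user interface: the only way to
# exercise it is programmatically.
class MessageHub:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers.get(topic, []):
            callback(message)

# The "test" is a small driver program that subscribes, publishes and
# checks what was delivered -- there is no screen to look at.
received = []
hub = MessageHub()
hub.subscribe("orders", received.append)
hub.publish("orders", {"id": 1})
hub.publish("invoices", {"id": 2})  # no subscriber: silently dropped

assert received == [{"id": 1}]
print("hub delivered only the subscribed topic")
```

Everything observable about the hub – what was delivered, to whom, in what order – has to be captured and compared in code, which is exactly the tooling-and-checking work being described.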

        “Why do I need to know how to code to use tools?”

        See above. You don’t, necessarily. But if you have no tools you’d better be able to write one.

        “Will that help me understand how they think about testing?”

        I don’t know, but it might help you to understand how they think about design and how you might use that model to test more effectively. Or check.

        Cheers :O)

        • Chris, February 18, 2014 at 5:35 pm #

          And we can test the maximum thread depth at the same time!

          I prefer “code and test or test and code” to just test. But I ‘love’ neither. That makes no sense to me and suggests perhaps you are blindsided by that affliction?
          |
          By love I mean being passionate about and deriving great enjoyment from learning about and practising the craft of professional software testing

          The developer is no less qualified to ask that question than a tester.
          |
          Anyone can ask that question. But a good tester is in a better position to answer it. They should have the skillset, mindset, time and resources to inquire after the answers to such questions, know how to do so, when to do so, what to ask and when and of what/whom, and when the results come in who to inform and how to inform them. Maybe a developer can do that too, but I have never seen it happen, and it certainly can’t be covered by single fact checks.
          There’s also the position of knowing one’s role in the company, which is covered better than I could by Klain (http://qualityremarks.com/know-your-role-being-invested-and-the-art-of-objectivity/) and Bolton (http://www.developsense.com/blog/2010/05/testers-get-out-of-the-quality-assurance-business/) and others.

          Of course it all depends on how teams are structured and the personalities of the people involved. Some devs and some testers are introverts who find it hard to talk to real users/people. Some devs and some testers are organisationally or physically separated from sources of knowledge.
          |
          I think that’s a company structure/culture problem. If people can’t work together on a project then that’s a bigger problem than can be solved by any amount of coding knowledge.

          Some devs know an awful lot more about the user’s business than testers. It happens. You can’t generalise.
          |
          It’s a tester’s job to understand the product the best way they can, and to ask questions of what they don’t understand (based on the understanding of their test mission with their test clients). If the developers know more about the user’s business than the testers then the testers should learn from them. Understanding the context of testing work is vital to understanding the value of testing.

          That’s my point. Testers who can only communicate these ways are of limited use.
          |
          Developers who don’t have a good understanding of UX principles are of limited use. But they are still potentially extremely useful.

          You seem dismissive of checks as being useful for anything. I think you need to think of a software project where no checks were done. A small number of projects might get by. But the vast majority would be hopelessly wayward. Don’t mark things you don’t like as useless. It’s a bigoted stance, not an informed one.
          |
          I haven’t said this, I’m not sure how I gave that impression. I use checks and checklists and automated checks in my testing. Checks are important, they’re just not sufficient. I hope you don’t think I’m bigoted; if you would like to chat about checking and testing I’m happy to do that too. Also I still don’t know what TDD/BDD is designing.

          (A tester who can’t write a good check is not much of a tester.) “And I fight against the idea that it necessarily should.”

          If you don’t believe that checking has value, I’m not going to be able to change your mind.
          |
          I have never said that checking doesn’t have value. In fact I will say the contrary now: checking has value.

          But consider this. If you hold the (rather limited) view that the purpose of testing is to find bugs, and the proportion of fatal bugs found by good developer practices (all checking?) is perhaps 99% while late testing finds only 1%, and the tester can’t even guarantee that 1% completes the job, then I don’t think you can take that much of a bow for testing.
          |
          Those percentages don’t make any sense. How can you have a percentage of bugs found when you don’t know how many there are? And if a bug is a relationship between a person and the product then how can checking individual facts about a product be sufficient to find those problems? And I too believe testing is more than that: it’s an investigative process to try to discover the perceived quality of the product, and quality is also a relationship involving value in the mind of a person. It seems to me that an exploratory approach (and yes, all testing is exploratory) without imposing strict algorithmic rules is probably important in many contexts in finding important problems.

          I don’t know if that is offensive or not – these people exist. They are not fake testers just because they do things differently to you. I suspect the ‘factory’ testers still comprise the majority of testers in our industry and we are in the minority – who is the fake?
          |
          Why is the majority necessarily correct? And can they defend the value of what they do?

          Are you implying people who write things down are inferior? Have you only ever run tests that could be held in your memory? You need to work on some real (i.e. more complex) systems :O)
          |
          No, I didn’t actually say that at all. Choosing when to write down checks doesn’t mean the systems I work on aren’t complex (it’s not possible to write down a test). My testing is tool-assisted. But writing things down is expensive, so I don’t incur that cost unless I feel that there’s a good reason to. So I try to automate checks where appropriate, run unit tests, etc.

          I recognise the difference between testing and checking, but testing cannot ever be a replacement for checking
          |
          I’m not saying testing can replace checking. I’m saying that I denigrate checking when it takes the place of what should be testing. Checking is a subset of testing.

          There are situations where ‘testing’ as you understand it is not possible, so the reverse must be true in these situations.
          |
          Could you give an example?

          All testing is exploratory. Cem Kaner, who invented the concept, agrees with me. Ask him.
          |
          I already know that all testing is exploratory. However, ‘exploratory’ is a sliding scale between freeform exploration and algorithmic checking. While all testing is exploratory, the degree to which it is exploratory is important. When I refer to “exploratory testing” in that sense I mean anything on the non-checking end of the scale. So we can’t just replace testing with automated checks and pretend we have the same value from the exploratory nature of the testing we do.

          Nope, you use the word incorrectly. That is a check. You have a model (in your mind). You think of some stimulus, input or whatever. You drive the system with your selected inputs. You notice something is wrong. Or correct. Explain your thinking to me. If you can explain it to me, I can automate it. If you can’t explain it – good luck convincing a developer there’s a problem. When you run that test again to check the fix has worked – it’s a check.
          |
          Which check? How do we decide what check we run? That’s part of the nature of testing. I can give you a million checks to automate, but a computer will not actually do what a human does. Humans cannot actually perform only checks on a real system; they notice other things about the product with which they’re interfacing. Let’s forgo for a moment the feedback loop that low-formality exploratory testing permits and concentrate on a simple check. Can you describe an explicit check wherein I could not notice anything else about the system when I perform it? There is a difference between human checking and machine checking – so in that sense you can’t automate any check I can describe with the same value.

          I’m entirely out of time today, the guy with the keys is giving me the evil eye so I’ll reply to the remainder tomorrow. Thanks for your time and your input; I know we obviously have some fundamental disagreements on some matters but I’m learning a lot about our difference in perspective and it’s giving me great cause to think and to question my premises and assumptions which is invaluable to me. Until later!

  10. Michael Bolton, February 18, 2014 at 12:16 am #

    Having read the article earlier in the day, I’ve just finished looking at the comments, and it appears the debate isn’t over.

    But maybe the debate isn’t over because it was never really a debate in the first place. Instead, the question “Should testers learn how to write code?” is a unicorn question, one that’s not terribly helpful to ask or answer in a general way, isolated from a context. It seems to me that the ideas “Coding Can Be Helpful” and “Coding Isn’t Everything” are largely uncontroversial. So let’s change the question to one that testers can and should ask for themselves: “Should I learn to code or should I not?” That’s a decision, a choice, that is made by the individual tester with respect to his or her skills, preferences, temperaments, etc.; and in the context(s) in which he or she is working.

    —Michael B.

    • Paul Gerrard, February 18, 2014 at 6:24 am #

      @Michael +1 to that.

      The intention with the article was, from my point of view, to set out some thoughts that I hadn’t made public, except in conversation and the SIGIST meeting a while back. I wanted to draw a line under the ‘should I or shouldn’t I?’ discussion. I don’t think that much new is coming to the surface now.

      If you don’t have enough information to make your own call, I suggest you choose yes, temporarily; have a play with an online training course for a couple of hours and see how you get on. Then decide to leave coding well alone or take it further.

      What I really want to discuss is the question, “If you do decide to learn how to write code (at some competence level, in some language, for some purpose), how should you go about it?” I’ll write more on this topic shortly.

    • Chris, February 18, 2014 at 9:32 am #

      I agree, but there is also a question coming out of this to do with that context: “Should companies move towards having their testers also become coders?” That shift will heavily influence their decision, and perhaps not for reasons that are best for testing, or for software.

  11. Sharath, February 19, 2014 at 11:53 am #

    Below is my 2 cents after going thru the post and all the replies. Damn had to do that all manually 🙂
    – What does the word ‘technical’ mean? Who are ‘technical’ testers and who are ‘non-technical’ testers? It’s very hard for me to think of a software tester who is not technical enough. For example – do testers who test DBs, query them, use browser plugins, .bat files, macros, scripts to speed up their tests, use developer tools within browsers, sniffers such as Fiddler, SOAP UI, Wireshark, test data generators, traffic generators, etc. classify as ‘technical’ or not?
    – Coding languages would definitely look good on a CV. To me, automated checks are, at the end of the day, still code and should have the attributes of good code, and for that to happen it takes good design, time, focus and practice. So should we have bad coders write code? And if not-so-good coders start developing them, what is the value of such code?
    – Applications using L7 protocols including L2 and L3 protocols, conformance to RFCs, NMS systems, most systems in the telecom domain do not come with UI (if we are saying terminals, command windows, etc. are not UI). And yes they have been tested manually and most of them are still tested manually or should I say have been tested in tool assisted way. So how do these fit into this claim of testers should code/not code to test systems with no UI?
    – At one time job markets were flooded with QTP requirements. Does that mean it was on the right track?
    – From my limited experience I do not see much difference between a good automator (testers who only code) and a developer. And I think managing a team of automators is in most cases like managing a dev team. Their passion is around building code and this many times has them spending hours exploring how to script a check which might return absolutely no value to the business. So how does the idea of having only automators (testers who only code) in a team work?
    – I clearly buy the idea of testers who can code and the benefits it brings with it. But like how majority perceive manual testers as boring, develop scripts, run checks, majority of the automators are boring, develop code, which run the checks. Isn’t it? To me we need testers to look at a context and pick an approach that fits best which in most cases is a good mix of both manual and automation and this is where I think we need the right balance of good manual testers and good automators in a team who complement one another and not the other way around.

    • Paul Gerrard, February 26, 2014 at 2:20 pm #

      “What does the word ‘technical’ mean?”
      You are right to challenge this. The label is used rather loosely and I’m as guilty as the rest. What I mean by it (and I may, mistakenly, assume others mean too) is this. A technical tester could be someone who focuses on testing the non-functional or technical attributes of a system such as performance, security, reliability and so on. But it is also used to denote a tester who has technical skills such as programming or the use of tools to look at the workings of systems under the bonnet, e.g. network analysers, code coverage, dynamic analysis tools and so on. I would include SQL, sniffers, data generators and the other tools you mention in that category too. Few testers have *no* technical skills (by either definition). End users doing user testing – perhaps?

      In the context of this thread, technical skills mean programming skills.

      “Coding languages would definitely look good on a CV…So should we have bad coders write code?”
      If the author of an automated test needs to be able to code and test, sure, they need to be proficient at both. I would not say that testers make bad coders in general, although some would. On the other hand, if we rely on developers writing the tests, they might be bad testers. Unit test frameworks aren’t very difficult to use, at least in principle. GUI test tools, however, can be rather more of a challenge.

      “Applications using L7 protocols including L2 and L3 protocols… do not come with UI …have been tested in tool assisted way. So how do these fit into this claim of testers should code/not code to test systems with no UI?”
      I’m sure there are harnesses that allow you to do single-shot transactions or message exchanges where the content of messages can be created or examined without programming. I’ve not done L2/L3 testing, but I have tested messaging systems. Two things to say here. Firstly, someone had to write the tool you use to create/send/receive/check messages – so they needed to know how to code. Secondly, to test these systems, it would probably be necessary to generate and check hundreds or thousands of message exchanges; to simulate peak loads, lost messages, duplicate or delayed messages or messages arriving in the wrong order, or messages on noisy networks and so on, and these tests probably require some code to be written and run. Of course, for a common protocol, you can probably find a free or proprietary tool to assist, and the ‘better’ ones promise ‘testing without coding’. Good luck if you can find one. More often, an object that a developer writes to perform some business logic might never be directly accessible through the user interface. So a driver has to be written or a unit or class-testing framework is required. These tend to be developer-friendly, but not tester-friendly.

      “At one time job markets were flooded with QTP requirements. Does that mean it was on the right track?”
      Not sure what you mean. Was QTP on the right track? If a lot of employers believe testing with QTP is the right or best way, then expect to see lots of QTP requirements. Most people have been burnt for thinking that. Larger companies still get sold the idea that tools are de rigueur by larger testing service companies. They get what they deserve.

      “From my limited experience I do not see much difference between a good automator (testers who only code) and a developer”
      Can’t argue with that – it’s your experience, not mine. And certainly, for 20 years, I’ve been saying that if you are on a large test automation (or performance test) project, you are running a software team. But there’s no such thing as ‘test automation’. There is software that could be used to automate (or partially automate, or support) defined tasks. A development team that aims to automate useless tests is failing just like any development team that builds software that does not meet the needs of its users. A test automation team had better have a very good grasp of what they are trying to achieve that fulfils the needs of their stakeholders. Just like any software project really.

      “… majority perceive manual testers as boring, develop scripts, run checks, majority of the automators are boring, develop code, which run the checks … we need testers to look at a context and pick an approach that fits best which in most cases is a good mix of both manual and automation”
      This is one of the great challenges of software. By adopting TDD, BDD etc., developers may finally be grasping the nettle of doing better feature-level testing. But developers need better testing skills (not just test frameworks). Testers can help by ‘getting in there and helping’, and coding skills might buy them some credibility. But also, testers need to trust developers to be testing the right thing, so the tester can get on with the stuff that developers will never be able to test. Developers are stakeholders for testers just like users and our businesses are. We need to talk the language of business to users. We need to talk the language of developers to influence them. That might be helped by knowing something about coding.

Trackbacks/Pingbacks

  1. Survey about programming | gustavolabbate - February 19, 2014

    […] how much, but they have. For me, the discussion remains on one side of the table. See the links here and […]