Software Testing Additional Notes

A lot of people end up in software testing via an oblique path.

  • Possibly from a technical background, working as developers
  • Possibly from a business background, getting to know a company's products
  • You can take courses on it, but often as part of a larger program, rather than the main major
  • Will that change in the future?
  • This could be why there is such variety in terminology

Different project managers take different approaches.

  • Some sit in on code reviews, test plan reviews, dig in to details
  • Others get their team to report to them at a summary level
  • The project manager will intervene more directly when a high-priority situation arises, e.g. testing falling behind schedule. Often a good project manager is someone who can keep an eye on testing at a high level and quickly assess an important situation in detail when necessary

Developers are often expected to do at least some testing of their own

  • For some technical projects like software upgrades, developers might be recruited to work as testers
  • Developers often do unit testing, sometimes document test plans ahead of time
  • That unit test plan and results are given to testers when they do additional testing - integration testing, regression testing, load testing

For testers, one challenge is the need to be thorough and deliver on time

  • Sometimes, the first, fairly simple tests reveal a number of bugs; this creates concern about staying on schedule
  • Advantage is that at least those bugs were found early
  • It can be good to push hard, maybe put in overtime to get through the first pass of testing, get an idea of where bugs are, start fixing them early. This can avoid worse time crunch near the end of a project. Sometimes that can still happen anyway!

Writing test plans can take a significant amount of time

  • Does the project schedule allow for that?
  • How much time will it save later, if there is a test plan that is thorough and has been peer reviewed and edited? Often this makes it worth the effort
  • In some situations, time constraints don't allow it, and you have to jump into testing with minimal planning or documenting
  • Some projects end up as a mixture. A detailed test plan is written and carefully peer reviewed early in the project
  • During the project, unexpected problems and changes in requirements lead to many changes to the code and test plan
  • At that point, there is limited time left in the project schedule, test plans are rough, possibly not peer reviewed
  • It then helps if you have experience; when you're new to the position it's harder to "improvise" testing

Reporting bugs to developers requires tact

  • People have worked very hard on the code; you are finding problems with it
  • You are all on the same team with the goal of delivering a good quality product
  • Keep e-mails and bug reports in business style; just describe what happens, provide the steps to re-create the error
  • When reporting bugs, avoid the use of pronouns. Say, "When I do x, the system does y", never say "Your code has a problem"
  • But, do use pronouns when giving thanks for good work, acknowledge others in meetings, "Yes, this was a tough bug, thankfully Jon got it fixed yesterday."
  • Einstein is believed to have said, "Make everything as simple as possible but no simpler" (these may not have been his exact words)
  • When doing bug reports, get to the main point quickly, but include the technical details necessary. Make sure YOU can reliably recreate the error following the same steps

Calculator example

Basic format:

Steps to Re-create:

1. Press "pi"
2. Press "Clear"
3. Press "1"
4. Press "+"
5. Press "1"
6. Press "="

Expected Results:

Display shows "2"

Actual Results:

Display shows "2.00000001"

System should be declaring input and response as integers; possibly it is using float or another type and not clearing leftover data from the previous operation?
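
A minimal sketch of the suspected cause, using a hypothetical Calculator class rather than the real system: if the accumulator is kept as a float and internal state is not fully cleared, residue from the earlier "pi" operation can leak into the next calculation.

    # Minimal sketch of the suspected defect: a hypothetical calculator that
    # keeps its accumulator as a float and does not fully clear internal state.
    import math


    class Calculator:
        def __init__(self):
            self.accumulator = 0.0   # stored as a float, not an int
            self.residue = 0.0       # leftover internal state

        def press_pi(self):
            self.accumulator = math.pi
            self.residue = 1e-8      # pretend some rounding residue is kept

        def press_clear(self):
            self.accumulator = 0.0   # bug: residue is NOT cleared here

        def add(self, a, b):
            # bug: residue from the previous operation leaks into the result
            return a + b + self.residue


    calc = Calculator()
    calc.press_pi()
    calc.press_clear()
    print(calc.add(1, 1))   # prints 2.00000001 instead of 2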

  • Being very explicit about what was pressed, and in what order, sometimes matters - people have habits. On a keyboard, some use the number pad, others the main keyboard. If someone habitually turns Num Lock on or off during bootup and someone else was on the computer, you could get different results.
  • When describing a sequence of menu choices, the "->" characters are helpful, i.e. select Main Menu->Submenu1->SubSubmenu1
  • Making comments or suggestions does not hurt if it only takes a short time. If it's similar to another known bug, point that out, too.
  • Does the system have different states? What state was it in when the test was done?
  • Sounds simple but can be tricky; it can be easy to forget an important detail. In a web-based system, which browser were you using? Sometimes you switch between Edge, Firefox and Chrome throughout the day; remember which one you found a bug in, then check whether it happens in the other browsers.
  • https://obsproject.com/ - software that can be used to capture videos, including drop-down menus. Windows 10's Game Capture does not include drop-down menus.

Other tasks follow testing

  • Technical writers have to produce documentation like manuals. They may want to look at some of the existing documentation. That may not be the test plan; it could be developer notes or requirements. Sometimes the tester has worked with those documents a lot and is asked to assemble them into one place for the technical writer, or to point the writer to where things can be found.
  • The technical writer might also want to be a tester. In order to write documentation that explains things to the user, the writer has to understand the system well; getting their hands on a test system and trying some things helps them do this.
  • The technical writer will have questions for the tester, may want assistance in configuring a system
  • Sometimes testing was done to cover many situations, but the first sales are to customers with specific needs; the technical writer will be focused on that and on configuring a system that way
  • It helps to label things, the tester might know about the system configuration but the technical writer doesn't.
  • Sometimes the technical writer has industry experience, ends up trying some additional tests and finds bugs. Then the tester follows up to write up the bug and get it fixed.

Some professionals actively dislike textbooks

  • Some textbooks for software testing, as is the case for books in other technical fields, have a tendency towards "perfect worldism"
  • They will describe techniques and show examples of using those techniques that are very time consuming to do.
  • They might be good techniques that will capture lots of bugs if you apply them
  • On a real project, time is limited, you have to make judgment calls about prioritizing things to meet deadlines
  • You don't want to tell your manager you spent hours on this marvellous documentation and planning technique but got no actual tests done
  • So why pay attention to text books?
  • Even if you can't completely apply the techniques they describe exactly as outlined, you might get ideas you can partially apply that do help raise the quality of what you deliver
  • For example, different types of program maps: even a partial one might get you thinking about test cases you want to run. Maybe apply these techniques in depth for especially important or complex parts of a system, if not everything
  • You may adapt/adjust these techniques to suit your situation
  • They can also give you an idea of the number of tests there are in theory, including paths

On YouTube, James Bach has some interesting talks

  • https://www.youtube.com/watch?v=ILkT_HV9DVU
  • Asks people how they would test things, pushes hard for people to explain why
  • Sometimes testers have ideas/intuitions that are on the right track, can you explain why?
  • He talks about a system that will work 100 - 250 VAC
  • So why test, say, 90 V? It's not in the specs, we know it won't work
  • But will it "fail gracefully"? (A boundary-test sketch along these lines follows this list.)
  • (There is not always time for that in the workplace, but if you just follow your intuition you might find important bugs)
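
A minimal sketch of boundary tests along those lines, assuming a hypothetical power_supply_status() function as a stand-in for the real system, for a device rated 100-250 VAC:

    # Boundary-test sketch for a device rated for 100-250 VAC.
    # power_supply_status() is a hypothetical stand-in for the real system call.

    def power_supply_status(voltage):
        """Return 'ok' inside the rated range, 'rejected' outside it."""
        if 100 <= voltage <= 250:
            return "ok"
        return "rejected"   # a graceful refusal, not a crash or a hang


    def test_boundaries():
        # values at, just inside, and just outside the specified limits,
        # plus the deliberately out-of-spec 90 V case from the talk
        expectations = {
            90: "rejected", 99: "rejected", 100: "ok", 101: "ok",
            249: "ok", 250: "ok", 251: "rejected",
        }
        for voltage, expected in expectations.items():
            actual = power_supply_status(voltage)
            assert actual == expected, f"{voltage} V: got {actual}, expected {expected}"


    test_boundaries()
    print("boundary checks passed")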

Bach also goes through an example and asks how many test cases would you do?

  • Raises questions about what the project does
  • Decision testing, boundary testing, predicate testing
  • With more experience, you can often think of more tests right off, i.e. "improvise"
  • Emphasizes the fact that a flowchart, document, etc. is a representation, not the actual system

When do you feel pride/satisfaction in your work?

  • Some testers say they "high-five" each other when they find a really nasty, obscure, or complicated bug
  • That is when they feel they are adding a lot of value
  • Not all testers agree
  • For some, the time for "high-fives" is after the bug has been found, reported, fixed, and the system successfully retested, and the "high-five" is shared with the developers
  • Everyone on the team wants the project to be a success!
  • Also fits in with having tact and consideration for others

Load testing

  • After testing different parts of a system and making sure they work together well, you may want to do load testing
  • Maximize the number of transactions occurring
  • Planning can be very helpful here: keep track of the order you did things, test systematically, and record expected results; otherwise it's too easy to get lost in a big pile of data
  • This means making sure no transactions are dropped
  • Or, in financial systems, totals add up, reports balance
  • Testing tools can be used for this (a rough sketch of one follows this list)
  • Ironically, testing tools are not always very well tested
  • They might be "quick and dirty", but as long as the tester who made them can use them to serve their main purpose well enough, the real, main tests still get done
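
A rough sketch of the kind of "quick and dirty" load-test tool described above. submit_transaction() is a hypothetical stand-in for whatever the real system call would be; the key check is that every transaction sent is accounted for afterwards.

    # Rough load-test sketch: fire many transactions at a system concurrently,
    # then check that none were dropped.
    import threading

    processed = []                 # what the "system" actually recorded
    lock = threading.Lock()


    def submit_transaction(txn_id):
        # stand-in for a real call (HTTP request, message queue post, etc.)
        with lock:
            processed.append(txn_id)


    def worker(txn_ids):
        for txn_id in txn_ids:
            submit_transaction(txn_id)


    def run_load_test(total=10_000, threads=10):
        batch = total // threads
        pool = [threading.Thread(target=worker,
                                 args=(range(i * batch, (i + 1) * batch),))
                for i in range(threads)]
        for t in pool:
            t.start()
        for t in pool:
            t.join()
        # the key check: every transaction sent must be accounted for
        missing = set(range(total)) - set(processed)
        print(f"sent {total}, recorded {len(processed)}, missing {len(missing)}")


    run_load_test()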

"Monkey Testing"

  • Can be important
  • Bach talks about examples: being able to break right into a computer, or produce error boxes with no text
  • Using Ctrl-A Ctrl-C Ctrl-V, paste a massive amount of text into an input field and see what happens (a small sketch of this idea follows this list)
  • Try moving really fast pressing buttons
  • There is one big catch to this kind of testing
  • Can you reliably recreate the error? If you were frantically pressing buttons fast, and timing is part of the issue, that can be very hard
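
A small sketch of the "massive paste" idea. save_comment_field() is a hypothetical stand-in for the code under test; the point is that a clear, deliberate rejection is acceptable, while a crash or an empty error box (as in the screenshot below) is the real bug.

    # Sketch of the "massive paste" monkey test: feed progressively larger
    # inputs to a field and see where it breaks.

    def save_comment_field(text, max_length=10_000):
        """Pretend field handler: reject input that exceeds its limit."""
        if len(text) > max_length:
            raise ValueError("input too long")
        return len(text)


    def test_huge_paste():
        for size in (1_000, 100_000, 10_000_000):
            blob = "A" * size    # simulate Ctrl-A / Ctrl-C / Ctrl-V of a huge selection
            try:
                save_comment_field(blob)
                print(f"{size:>12,} characters: accepted")
            except ValueError as err:
                print(f"{size:>12,} characters: rejected ({err})")


    test_huge_paste()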

[Image: 3 - EmptyErrorResized.png]

[Image: 4 - SystemEntryExampleResized.png]

Testing similar things at once

  • Suppose you have a system that allows you to set countdown alarms, like scheduling employees' breaks
  • Each employee has a timer object
  • Each object is therefore distinct
  • But suppose management has a screen they use to see all employee schedules and minutes left until their breaks
  • Executives can shorten or lengthen that time; they have a screen that management has access to
  • That screen loads an employee day schedule object
  • That data is copied into a report of that manager's daily activity, e.g.

Schedule Adjustments:
Payroll Clerk - 10 minute delay
Production Manager - 15 minute delay

Are local variables cleared in between these two transactions? What if there's just one break room, and an alert is sounded when a break is over? If there is a flag such as "BreakTimeOver = true", it has to be reset between breaks (a small sketch of that risk follows).
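
A small sketch of that risk, using hypothetical names: a single break-room alert flag that is never reset between employees.

    # Sketch of the shared-flag risk: one break-room alert flag, not reset
    # between transactions, so the next employee gets a spurious alert.

    class BreakRoomAlert:
        def __init__(self):
            self.break_time_over = False   # shared flag for the one break room

        def end_break(self, employee):
            self.break_time_over = True
            print(f"Alert: {employee}'s break is over")

        def start_break(self, employee):
            # BUG: the flag from the previous break is never reset here,
            # so the system may treat the new break as already finished.
            if self.break_time_over:
                print(f"Spurious alert at the start of {employee}'s break!")
            # correct behaviour would include: self.break_time_over = False


    alert = BreakRoomAlert()
    alert.start_break("Payroll Clerk")
    alert.end_break("Payroll Clerk")
    alert.start_break("Production Manager")   # spurious alert: flag never cleared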

What if functions DON'T appear to have much in common?

  • Some functions, like above, are about tracking time
  • Others keep track of that day's production, how many units produced
  • Very separate things, but all part of the same system
  • Run them all with a heavy load for a while; see if the system responds well and there is enough memory (a small mixed-workload sketch follows this list)
  • Different developers may have worked on these parts, may not have tested all together
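
A small mixed-workload sketch: the worker bodies are hypothetical stand-ins for unrelated parts of the system (break timers and production counts). It runs both at once and reports elapsed time and peak Python memory use.

    # Mixed-workload sketch: run two unrelated workloads at the same time and
    # watch how long they take and how much memory they use.
    import threading
    import time
    import tracemalloc


    def break_timer_workload(iterations=50_000):
        schedule = {}
        for i in range(iterations):
            schedule[f"employee-{i % 100}"] = i % 30   # minutes until break


    def production_count_workload(iterations=50_000):
        totals = []
        for i in range(iterations):
            totals.append(i)
        return sum(totals)


    tracemalloc.start()
    start = time.time()
    threads = [
        threading.Thread(target=break_timer_workload),
        threading.Thread(target=production_count_workload),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    current, peak = tracemalloc.get_traced_memory()
    print(f"finished in {time.time() - start:.2f}s, peak memory {peak / 1024:.0f} KiB")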

Button availability

  • Many companies will track which buttons are available (enabled or disabled) in each situation and document it as part of a project plan
  • It is important to test
  • Even a small number of buttons can create many paths
  • Especially important for security: don't allow login attempts without credentials being entered (a small sketch of this check follows this list)
  • You don't have to have lots of buttons in a window for this to become time consuming and a lot of hard work
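
A small sketch of the login-button check mentioned above. login_button_enabled() is a hypothetical stand-in for the UI rule that enables or disables the Login button.

    # Button-availability sketch: the Login button should only be enabled
    # once both credentials have been entered.

    def login_button_enabled(username, password):
        """Stand-in for the rule that enables/disables the Login button."""
        return bool(username.strip()) and bool(password.strip())


    def test_login_button_states():
        cases = [
            ("",      "",       False),   # nothing entered
            ("alice", "",       False),   # missing password
            ("",      "secret", False),   # missing username
            ("   ",   "secret", False),   # whitespace is not a credential
            ("alice", "secret", True),    # both present
        ]
        for username, password, expected in cases:
            actual = login_button_enabled(username, password)
            assert actual == expected, (username, password, actual)


    test_login_button_states()
    print("login button availability checks passed")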

Consistent look and feel

  • Sometimes errors occur that are hard to spot - fonts might be very similar but slightly different: just one size larger or smaller, or with shadow effects

Resizing windows

  • This can cause data to become lost or pushed off screen; it is a good idea to stretch windows, maximize, minimize, and resize

Logs

  • What information is supposed to be in them? What is definitely NOT supposed to be there? Certain security-related items should NOT be there; check and make sure. Sometimes developers log things for debugging purposes, but this should be cleaned up - it can be very important! (A small log-scanning sketch follows.)
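
A small log-scanning sketch; the forbidden patterns and the log file name are assumptions for illustration.

    # Scan a log file for things that should never appear in production logs.
    import re
    from pathlib import Path

    FORBIDDEN_PATTERNS = [
        re.compile(r"password\s*=", re.IGNORECASE),
        re.compile(r"authorization:\s*bearer", re.IGNORECASE),
        re.compile(r"\b\d{16}\b"),          # possible card number
    ]

    LOG_PATH = Path("application.log")      # assumed file name for illustration


    def scan_log(path):
        findings = []
        with open(path, encoding="utf-8", errors="replace") as log:
            for line_number, line in enumerate(log, start=1):
                for pattern in FORBIDDEN_PATTERNS:
                    if pattern.search(line):
                        findings.append((line_number, pattern.pattern))
        return findings


    if LOG_PATH.exists():
        for line_number, pattern in scan_log(LOG_PATH):
            print(f"line {line_number}: matches forbidden pattern {pattern}")
    else:
        print(f"{LOG_PATH} not found - point LOG_PATH at a real log file")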

Time

  • Do you leave a system running overnight? Over a weekend? Over a holiday weekend? For weeks?
  • Some systems do have to be running around the clock.
  • Do you leave it running with a lot of activity for a long time? Is that realistic?
  • If something goes wrong in the night, how hard is it to track down the cause of a problem hours later, when you get to work in the morning? (An overnight-run sketch with timestamped logging follows this list.)
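
An overnight "soak" sketch with timestamped logging, so a failure at 3 a.m. can be traced the next morning. do_one_transaction() is a hypothetical stand-in for the real operation, and the occasional simulated failure exists only so the log has something to show.

    # Soak-test sketch: exercise the system in a loop for hours and write
    # timestamped results to a log file.
    import logging
    import random
    import time

    logging.basicConfig(
        filename="soak_test.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )


    def do_one_transaction():
        # stand-in: occasionally fail so the log has something to show
        if random.random() < 0.001:
            raise RuntimeError("simulated failure")


    def soak(duration_seconds=8 * 60 * 60, pause=1.0):
        end = time.time() + duration_seconds
        count = 0
        while time.time() < end:
            count += 1
            try:
                do_one_transaction()
                if count % 1000 == 0:
                    logging.info("completed %d transactions", count)
            except Exception:
                logging.exception("transaction %d failed", count)
            time.sleep(pause)


    if __name__ == "__main__":
        soak()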

Updating during a project

  • Testers might be given whatever's available to start testing. This might be prototypes, partly developed items
  • Updates are provided during a project
  • Installing updates can take significant amounts of time
  • It is important to work with developers to manage time. If several updates are coming soon, maybe wait until several are ready, install them all, and then resume testing, rather than having multiple interruptions to install updates one at a time
  • If an update has to be applied to many components, try a few first and do at least some basic testing before investing time in updating the whole system

Graceful failures

  • What if a component breaks, does the rest of the system respond gracefully?
  • If a load test does cause errors, how well are they handled? Load tests can make systems slow, resulting in a system appearing to "hang" rather than fail gracefully (a small timeout sketch follows)
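
One way to avoid that kind of hang, sketched below: wrap the call to a slow or broken component in a timeout so the caller fails gracefully. slow_component() and its delay are assumptions for illustration.

    # Sketch: report an error instead of appearing to hang when a component
    # has become very slow or unresponsive.
    import concurrent.futures
    import time


    def slow_component():
        time.sleep(10)          # simulate a component that has become very slow
        return "data"


    def call_with_timeout(func, timeout_seconds=2):
        pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        future = pool.submit(func)
        try:
            return future.result(timeout=timeout_seconds)
        except concurrent.futures.TimeoutError:
            # graceful failure: report the problem rather than waiting forever
            return f"ERROR: component did not respond within {timeout_seconds} seconds"
        finally:
            pool.shutdown(wait=False)   # don't block on the stuck worker


    print(call_with_timeout(slow_component))
    # note: the stand-in worker keeps sleeping briefly after the message prints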