Wednesday, July 23, 2008

Quality Assurance Checks

Today, I was pulling out older bug reports to share formats for the QA of a new course. This had me reminiscing. QA is a vital process before the final rollout. Why is QA important? It helps identify bugs and errors in the course. It also helps check consistency, content misses, errors introduced during development, animation sequences, audio flow, and so on. QA checks the course as a whole, whereas ID reviews, edit reviews, and graphic reviews each check one aspect individually. When the course is developed, we finally get to see it function. Now, it is not rare to have one or two bugs in the application. It is important to identify these before the learner takes the course.

How do we conduct QA? We use an Excel (.xls) format to document slide numbers, the issue, the type of issue, and so on. We put the course itself under the microscope and dissect it to see whether everything is in place. We try out all the options available to ensure that the course behaves the way it should. Areas to be addressed during QA are:
  • Functionality
    • Are all buttons on the interface working properly?
    • Are all the screens linked logically?
    • Are all the tabs and click to know text working as they should?
    • Are the exercises functioning normally?
    • Is the navigation free of bugs?
  • Audio
    • Is the VO in sync with the OST?
    • Is the audio audible, crisp, clear?
    • Has any part of the audio got chopped off during integration?
  • Alignment
    • Are all elements aligned to the grid?
    • Are feedback boxes popping up in the right area?
    • Are the text boxes popping up in the right area?
    • Are the graphics positioned appropriately?
    • Has the white space been used appropriately?
  • Graphics
    • Are the graphics clear?
    • Are they in sync with what was visualized in the storyboard? (Though this should be ideally taken care of during graphic reviews)
    • Do the graphics seem to belong to the same family?
    • Are they adequately sized and positioned on the screen?
    • Do they follow the color scheme?
  • Consistency and standardization
    • Is the instruction text standard across the screens?
    • Are the feedback boxes standard across the exercises?
    • Are the tabs and click to know text consistent across the course?
    • Is the font (color and size) consistent across the course?
  • Content accuracy
    • Are there any content misses?
    • Are the screens too text heavy?
    • Are there any new errors introduced in the course?
    • Are there any edit issues in the content?
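To make the documentation format concrete, here is a minimal Python sketch of the kind of bug log described above. The column names (slide, category, issue, status) are just an illustration of what we typically capture, not a fixed template:

```python
import csv

# Sketch of a QA bug log like the spreadsheet described in this post.
# The columns are assumptions based on the categories listed above.
FIELDS = ["slide", "category", "issue", "status"]

def log_bug(rows, slide, category, issue):
    """Record one bug against a slide, tagged with a QA category."""
    rows.append({"slide": slide, "category": category,
                 "issue": issue, "status": "open"})

def save_log(rows, path):
    """Write the bug log as CSV so it opens directly in Excel."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

rows = []
log_bug(rows, 12, "Audio", "VO out of sync with OST after second click")
log_bug(rows, 15, "Alignment", "Feedback box overlaps navigation bar")
save_log(rows, "qa_log.csv")
```

Keeping one row per issue with a category column makes it easy to filter the log by slide or by category when the fixes come back for re-testing.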
From prior experience, I know how crucial this stage is. I have seen how this round can catch bugs and ensure an error-free course before learner testing. If the work is not systematic, new errors will surface with every round of QA, which can be very frustrating. Always conduct at least two rounds of QA to ensure that your course meets quality standards.


sowdamini said...

The biggest challenge in QA is to stay very alert and look out for all the possibilities, and with every round of QA I am confronted with more bugs. So can you tell me how to ensure that QA produces bug-free reports?

Archana Narayan said...

Mini, thanks for sharing your concerns. QA requires high levels of concentration and an eye for detail. Identifying categories helps, as you can scan a single slide for all these categories before going ahead. Therefore, during QA, you end up spending (or rather, you should spend) more time on each screen to ensure that you have not missed anything.

When doing a QA, try and ensure the following:
a) Carry out QA in a quiet environment.
b) After you finish checking the audio, mute it. This helps you avoid listening to the audio over and over again while looking at the other parameters.
c) Make your search more structured. Look for each parameter in turn rather than scanning the screen generally for errors.
d) Try all options. Whether it is clicking Back, trying all the options of an exercise, the glossary, or Exit, remember to try them all out. You never know whether something is misbehaving.

The essence of QA is to capture all the bugs in round one. There is a chance that round two may bring out newer errors, but at least you would have removed the older ones. Fewer to tackle in a crunch situation.

QAing is also an acquired skill. Your eye needs to be trained to pick up the bugs. This is the reason why Geeta spotted bugs that everyone missed... :) Trained eye...

mysterious said...

Really good post Archie.
Would like to add some more points related to functionality.
- If the course has a login, do all the checks required for the login, especially if the content varies according to the login.
- Check whether the page number changes correctly with navigation. From my experience, there is a high chance of bugs here.
- Check that module/chapter names change according to the chapter/module.
- If it is an LMS-based application, do a thorough QA of the following:
1) Course tracking: before the learner starts the course on the LMS, the status should show that the course has not been attempted; once he starts the course and quits in between, the status should be 'in progress'; and after he takes the assessment, the status should be 'completed'.
2) Passing the exact score to the LMS after the learner attempts the test.
3) How the completed course behaves on the LMS.
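The tracking states above can be sketched as a small state machine. This is a toy illustration in Python, not a real LMS API; the status labels follow this post (the SCORM 1.2 data model uses 'incomplete'/'completed' for the middle and final states):

```python
# Toy sketch of LMS course-tracking status transitions, using the labels
# from this post. Not a real LMS/SCORM API, just a state machine that
# makes the expected behaviour during QA explicit.
class CourseTracking:
    def __init__(self):
        self.status = "not attempted"  # before the learner opens the course
        self.score = None

    def start(self):
        """Learner opens the course (may quit midway)."""
        if self.status == "not attempted":
            self.status = "in progress"

    def finish_assessment(self, score):
        """Learner takes the assessment; exact score goes to the LMS."""
        self.score = score
        self.status = "completed"

t = CourseTracking()
t.start()                 # learner starts and quits in between
assert t.status == "in progress"
t.finish_assessment(85)   # learner returns and takes the assessment
assert t.status == "completed" and t.score == 85
```

During QA, each of these transitions should be verified on the actual LMS, including the exact score that gets recorded.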

As we move ahead, the above points become all the more important, since we will be doing LMS-based courses in-house.

Again, for a thorough functionality QA, I would like to add some more points.

- Check all functionality against the 'Requirements'. For some courses, we may have color specifications, screen resolutions, SCORM versions, etc. (according to the client's standard specifications, from which we cannot deviate, or requirements we framed ourselves).

- Bookmarking feature: whether the application 'resumes' at the page from which the learner quit when he logs in again.
- Check whether the application works properly after 'resuming'.

- Check the 'System Requirements' needed to run the application, and cross-check that the course runs properly under all the listed 'System Requirements'.
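The bookmarking check above can be sketched in a few lines. In SCORM 1.2 the resume point is typically stored in the cmi.core.lesson_location element; here a plain dict stands in for the LMS data model (an assumption for illustration, not a real API):

```python
# Toy sketch of the bookmarking ('resume') behaviour to verify during QA.
# A dict stands in for the LMS data model; the key name follows the
# SCORM 1.2 element usually used for the resume point.
lms_data = {}

def save_bookmark(slide):
    """Called when the learner quits; records where to resume."""
    lms_data["cmi.core.lesson_location"] = str(slide)

def resume():
    """Called on next login; falls back to slide 1 with no bookmark."""
    return int(lms_data.get("cmi.core.lesson_location", "1"))

save_bookmark(14)       # learner quits on slide 14
assert resume() == 14   # next login should reopen the course here
```

The second check in the list, that the course still works properly after resuming, matters because a resumed session skips the initialisation that normally happens on slide 1.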

I would say that, typically, for functionality checking, we need to create test cases before we start testing. The test cases should cover all the possible errors or areas of checking for that particular course.
This will reduce the filing of 'unwanted bugs' (the tester may flag something as a bug that is not a bug according to the functional-level design) and will definitely increase the accuracy of functional-level testing. But we have to make sure that QA is not restricted to a fixed circle.
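To show what such pre-written test cases might look like, here is a small Python sketch. The IDs, areas, and wording are illustrative assumptions, not a prescribed format:

```python
# Sketch of functionality test cases written before testing starts, as
# suggested above. Comparing observed behaviour against a pre-agreed
# expectation filters out 'unwanted bugs'.
test_cases = [
    {"id": "TC-01", "area": "Navigation",
     "steps": "Click Back on slide 1",
     "expected": "Back is disabled or stays on slide 1"},
    {"id": "TC-02", "area": "Tracking",
     "steps": "Quit mid-course, reopen",
     "expected": "Status shows 'in progress'; course resumes at last slide"},
    {"id": "TC-03", "area": "Scoring",
     "steps": "Score 8/10 on the assessment",
     "expected": "LMS records exactly 80%"},
]

def run(case, observed):
    """Pass only when the observed behaviour matches the expectation;
    anything else is a genuine bug worth filing."""
    ok = observed == case["expected"]
    return {"id": case["id"], "result": "pass" if ok else "fail"}

report = run(test_cases[2], "LMS records exactly 80%")
assert report["result"] == "pass"
```

Writing the expected result down first is what makes the distinction between a real bug and an 'unwanted bug' objective rather than a matter of the tester's opinion.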

In audio:
- Check the 'Mute' button.

If there is video:
- For FLV or MPEG video:
1) Check the clarity.
2) Check the audio-graphic synchronization.
3) Check all the interface buttons for the video: mute, pause, forward, backward, play, stop.

- For videos generated by Captivate or Camtasia (typically used for application training):
1) The audio-graphic synchronization.
2) The text, color, alignment, and size of tooltips.
3) The working of the interface buttons.
4) The speed of the video.
5) The accuracy.

Will add up later, if I come across any.

But I should say, it is a really helpful post from you Archie.

Waiting for many.... :)