TRESSEL Test Plan: Testing Philosophy


Philosophy of Testing

  Our philosophy of testing began at the function level: for every function we designed, we tested every possible input and modification, fully exercising the individual function in the hope that, once we integrated it into the main framework, few if any additional errors would be found. This methodology served us well throughout the coding process.

  The risk it guards against is real. As the deadline approaches, every team member is integrating his or her own code into the overall file, and not everything gets tested to its full potential; if errors surface at that stage, it takes an extended amount of time to scour the entire 4,000 lines of source for one minute misspelling or mistake. Our methodology works well if and only if all errors are handled at the function level, and any error that is not handled MUST be documented, to save us valuable time and keep us on schedule.

  If every individual function has been tested for quality and we still find an error, we know by default that the error is located in the main function, whether it comes from mis-passed parameters or a skipped variable declaration.
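
  As a concrete illustration of this function-level approach, the sketch below drives a small parsing routine through boundary, negative, and erroneous inputs in isolation. The names (OperandParserTest, parseImmediate) and the '#' immediate syntax are hypothetical, invented for illustration rather than taken from TRESSEL's actual code.

    // Sketch of function-level testing in isolation. The parser stub
    // and its syntax are assumptions, not TRESSEL's real interfaces.
    public class OperandParserTest {

        // Stand-in for the unit under test: parse "#<number>".
        static int parseImmediate(String token) {
            if (!token.startsWith("#")) {
                throw new IllegalArgumentException("not an immediate: " + token);
            }
            return Integer.parseInt(token.substring(1));
        }

        static void expectValue(String token, int expected) {
            int actual = parseImmediate(token);
            System.out.println((actual == expected ? "PASS " : "FAIL ") + token);
        }

        static void expectError(String token) {
            try {
                parseImmediate(token);
                System.out.println("FAIL " + token + " (error expected)");
            } catch (RuntimeException e) {
                System.out.println("PASS " + token + " (rejected)");
            }
        }

        public static void main(String[] args) {
            expectValue("#0", 0);        // boundary value
            expectValue("#-1", -1);      // negative value
            expectError("42");           // missing '#' prefix
            expectError("#forty-two");   // non-numeric operand
        }
    }

  Because each routine is proven against its whole input space this way, a later failure points at the integration code rather than at the routine itself.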

  The biggest key to a good philosophy of testing is documentation of code. If one team member breaks coding standard and is not available for reference when we are trying to debug his code, nothing can get done. Communication and documentation standards are key to success.

  Our test plan was, for each test we wrote, either to cover small errors, which we documented in the TRESSEL comments, or to package Directives and Instructions into logical groups. For example, we would create a test case calling numerous WORD-NUMs, testing the extremes, negative values, erroneous values, and correct calls. Each test exercised some relevant, related set of instructions.
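
  A logical group like the WORD-NUM tests can be driven from a single table of values. The sketch below is a minimal illustration assuming a hypothetical fitsInWord range check and a 24-bit signed word; the limits from the actual machine specification would be substituted in.

    // Table-driven sketch of a WORD-NUM test group: extremes, negative
    // values, erroneous values, and correct calls. The range constants
    // and helper are assumptions, not TRESSEL's actual definitions.
    public class WordNumTests {

        // Hypothetical signed range for a machine word (assumed 24-bit).
        static final long WORD_MIN = -(1L << 23);
        static final long WORD_MAX = (1L << 23) - 1;

        static boolean fitsInWord(long value) {
            return value >= WORD_MIN && value <= WORD_MAX;
        }

        public static void main(String[] args) {
            long[] inputs = { WORD_MAX, WORD_MIN,          // extremes
                              -1, -4096,                   // negative values
                              WORD_MAX + 1, WORD_MIN - 1,  // erroneous values
                              0, 100 };                    // correct calls
            for (long v : inputs) {
                System.out.printf("WORD %d -> %s%n", v,
                                  fitsInWord(v) ? "accepted" : "flagged");
            }
        }
    }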

  We also threw in some random test cases, in which we made many different calls to many different operations and then reviewed the results accordingly. We did this to check for unforeseen complications, since several different instructions, when combined, may create errors that none of them creates alone.

- An example of this would be a misdeclaration of a Directive followed by a later reference to that label (see the sketch below).
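
  A minimal sketch of that randomized-combination idea, assuming an invented one-label-per-line syntax and a JMP mnemonic rather than TRESSEL's actual format: shuffle a pool of lines, collect the declared labels, then flag any reference to a label that was never declared.

    // Randomized-combination check: the misdeclared label should be
    // flagged no matter what order the shuffle produces.
    import java.util.*;

    public class RandomComboTest {
        public static void main(String[] args) {
            List<String> pool = new ArrayList<>(Arrays.asList(
                "LOOP: ADD R1,R2", "JMP LOOP", "SUB R2,R3",
                "JMP TYPO",                     // reference to a missing label
                "DONE: HALT"));
            Collections.shuffle(pool, new Random(560)); // fixed seed: repeatable

            Set<String> declared = new HashSet<>();
            for (String line : pool)                    // gather declared labels
                if (line.contains(":"))
                    declared.add(line.substring(0, line.indexOf(':')));

            for (String line : pool)                    // check every reference
                if (line.startsWith("JMP ") && !declared.contains(line.substring(4)))
                    System.out.println("undeclared label: " + line.substring(4));
        }
    }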

  Several of the Pass Two issues fall under this group; address resolution was a particularly sticky issue, and many different test cases had to be run to exercise it accordingly.
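
  Address resolution lends itself to small, repeatable checks of the two-pass scheme: Pass One assigns each label a location-counter value, and Pass Two substitutes those addresses into operands. The instruction size, mnemonics, and output format below are assumptions made for the sketch, not TRESSEL's actual values.

    // Two-pass address-resolution sketch: labels get location-counter
    // values in pass one; operands are resolved (or flagged) in pass two.
    import java.util.*;

    public class AddressResolutionTest {
        public static void main(String[] args) {
            String[] program = { "START:", "LOAD END", "STORE START", "END:" };
            final int INSTR_SIZE = 4;                // assumed instruction size

            Map<String, Integer> symtab = new HashMap<>();
            int lc = 0;                              // pass one: location counter
            for (String line : program) {
                if (line.endsWith(":"))
                    symtab.put(line.substring(0, line.length() - 1), lc);
                else
                    lc += INSTR_SIZE;
            }

            lc = 0;                                  // pass two: resolve operands
            for (String line : program) {
                if (line.endsWith(":")) continue;
                String operand = line.split(" ")[1];
                Integer addr = symtab.get(operand);
                System.out.printf("%04X  %-12s -> %s%n", lc, line,
                    addr != null ? String.format("%04X", addr) : "UNRESOLVED");
                lc += INSTR_SIZE;
            }
        }
    }

  Forward references such as LOAD END resolve correctly only because the symbol table is complete before Pass Two begins, which is exactly the behavior these tests were meant to pin down.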

Regarding test cases for SP3 and SP4, we decided that since we already had a vast array of test cases, we would reuse all of them to validate our Linker and Simulator.  They cover almost every case because we were so thorough when testing SP1 and SP2.  For the errors exclusive to the Linker and Simulator, however, special cases were required.  These had to handle modification of Record formats and hex code values, and check for awkward branches out of memory or unresolved external values.  In the Simulator, checking for overflow and divide-by-zero was also required, so the new test cases had to cover those potential issues as well.
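
  The Simulator-only arithmetic checks can be exercised against routines like the following sketch, which assumes a 32-bit word and hypothetical simAdd/simDiv helpers. The idea is that overflow is detected by computing in a wider type before narrowing, and division is guarded before it executes.

    // Sketch of simulator arithmetic checks: overflow on addition and
    // divide-by-zero. Word size and method names are illustrative only.
    public class ArithmeticChecks {

        static int simAdd(int a, int b) {
            long wide = (long) a + (long) b;     // compute in a wider type
            if (wide > Integer.MAX_VALUE || wide < Integer.MIN_VALUE)
                throw new ArithmeticException("overflow: " + a + " + " + b);
            return (int) wide;
        }

        static int simDiv(int a, int b) {
            if (b == 0)                          // guard before executing
                throw new ArithmeticException("divide by zero");
            return a / b;
        }

        public static void main(String[] args) {
            System.out.println(simAdd(1, 2));                       // normal case
            try { simAdd(Integer.MAX_VALUE, 1); }                   // overflow
            catch (ArithmeticException e) { System.out.println(e.getMessage()); }
            try { simDiv(5, 0); }                                   // divide by zero
            catch (ArithmeticException e) { System.out.println(e.getMessage()); }
        }
    }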

The more test cases we run, the better our odds of catching unforeseen errors before they slip through.  We believe that having 50+ test cases, tedious as it may be, is the best way to manage a small system or program such as this.


Written by Group 3 of CSE 560 Autumn 2009 5:30 section
Chris Brainerd, Jarrod Freeman, Abe Kim, Abdul Modrokbi, and David Straily
The Ohio State University