Tuesday 28 January 2014

Testing Process

SOME TESTING STEPS

Defect risk
The process of identifying the amount of risk a defect could cause. This helps determine whether the defect can be allowed to go undetected into implementation.
Bug impact levels:
Low impact
This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way.
 
Medium impact
This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs only at one or two customer sites.
e) Is very intermittent.
 
High impact
This should be used only for serious problems affecting many sites, with no workaround. Frequent or reproducible crashes/core dumps/GPFs would fall in this category, as would major functionality not working.
 
Urgent impact
This should be reserved for only the most catastrophic of problems: data corruption, complete inability to use the product at almost any site, etc. For released products, an urgent bug implies that shipping of the product should stop immediately until the problem is resolved.
What Is Error Rate?
The mean time between errors. This can be a statistical value across all errors, or it can be broken down into the rate of occurrence of similar errors. Error rate also has a perceptual component, which is important when identifying the "good-enough" balance: in other words, the product is good enough when the mean time between errors is greater than what the end user will accept.
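As a rough illustration of computing this metric, here is a minimal sketch in Python, assuming error timestamps have already been collected from a log (the function name and sample data are hypothetical):

    from datetime import datetime

    def mean_time_between_errors(error_times):
        """Return the mean gap, in seconds, between consecutive errors."""
        if len(error_times) < 2:
            raise ValueError("need at least two errors to compute a gap")
        times = sorted(error_times)
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        return sum(gaps) / len(gaps)

    # Example: three errors observed over one hour.
    observed = [
        datetime(2014, 1, 28, 10, 0),
        datetime(2014, 1, 28, 10, 20),
        datetime(2014, 1, 28, 11, 0),
    ]
    print(mean_time_between_errors(observed))  # 1800.0 seconds

Comparing this figure against what the end user will tolerate is one way to make the "good-enough" judgment concrete.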
What Is Priority?
Priority is Business.
Priority is a measure of the importance of getting the defect fixed, as governed by the impact to the application, the number of users affected, damage to the company's reputation, and/or loss of money.
Priority levels (several common scales are in use):
  • Now: drop everything and take care of it as soon as you see this (usually for blocking bugs)
  • P1: fix before next build to test
  • P2: fix before final release
  • P3: we probably won’t get to these, but we want to track them anyway
  1. Must fix as soon as possible. Bug is blocking further progress in this area.
  2. Should fix soon, before product release.
  3. Fix if time; somewhat trivial. May be postponed.
  • High: This has a major impact on the customer. This must be fixed immediately.
  • Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in development, or a patch must be issued if possible.
  • Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next release.
What Is Severity?
Severity is Technical.
Severity is a measure of the impact of the defect on the overall operation of the application being tested.

Severity levels:
The severity level indicates the degree of impact the issue or problem has on the project. Severity 1 usually means the highest level, requiring immediate attention; Severity 5 usually represents a documentation defect of minimal impact. As with priority, several common scales are in use:
  • Critical: the software will not run
  • High: unexpected fatal errors (includes crashes and data corruption)
  • Medium: a feature is malfunctioning
  • Low: a cosmetic issue
  1. Bug causes system crash or data loss.
  2. Bug causes major functionality or other severe problems; product crashes in obscure cases.
  3. Bug causes minor functionality problems; may affect "fit and finish".
  4. Bug contains typos, unclear wording or error messages in low visibility fields.
  • High: A major issue where a large piece of functionality or major system component is completely broken. There is no workaround and testing cannot continue.
  • Medium: A major issue where a large piece of functionality or major system component is not working properly. There is a workaround, however, and testing can continue.
  • Low: A minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible workaround. Testing can proceed without interruption.
What Is the Difference Between Severity and Priority?
Priority is Relative: the priority might change over time. Perhaps a bug initially deemed P1 becomes rated as P2 or even a P3 as the schedule draws closer to the release and as the test team finds even more heinous errors. Priority is a subjective evaluation of how important an issue is, given other tasks in the queue and the current schedule. It’s relative. It shifts over time. And it’s a business decision.
Severity is an absolute: it’s an assessment of the impact of the bug without regard to other work in the queue or the current schedule. The only reason severity should change is if we have new information that causes us to re-evaluate our assessment. If it was a high severity issue when I entered it, it’s still a high severity issue when it’s deferred to the next release. The severity hasn’t changed just because we’ve run out of time. The priority changed.
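To make the distinction concrete, here is a minimal sketch of how a bug tracker might model it (the class and field names are hypothetical, not any particular tool's schema): severity is set once from the technical assessment, while priority is free to change as the schedule and the rest of the queue change.

    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):      # technical impact: changes only on re-assessment
        CRITICAL = 1
        HIGH = 2
        MEDIUM = 3
        LOW = 4

    class Priority(Enum):      # business urgency: may shift over time
        NOW = 0
        P1 = 1
        P2 = 2
        P3 = 3

    @dataclass
    class BugReport:
        title: str
        severity: Severity     # fixed at triage
        priority: Priority     # revisited as the release nears

    # A crash is always high severity, but its priority can be lowered
    # if more heinous bugs arrive before the release.
    bug = BugReport("crash on save", Severity.HIGH, Priority.P1)
    bug.priority = Priority.P2   # business decision; severity stays HIGH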
Severity levels can be defined as follows:
S1 - Urgent/Showstopper. For example, a system crash or an error message forcing the user to close the window.
The tester's ability to operate the system is totally (system down) or almost totally affected. A major area of the user's system is affected by the incident, and it is significant to business processes.

S2 - Medium/Workaround. For example, functionality required by the specification has a problem, but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround which negates the impact to business processes. This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs only at one or two customer sites, or is intermittent.

S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way. These incidents are cosmetic in nature and have no or very low impact on business processes.
Browser Bug Analyzing Tips
  • Check if the client operating system (OS) version and patches meet system requirements.
  • Check if the correct version of the browser is installed on the client machine (a programmatic sketch of this check follows the list).
  • Check if the browser is properly installed on the machine.
  • Check the browser settings.
  • Check with different browsers (e.g., Netscape Navigator versus Internet Explorer).
  • Check with different supported versions of the same browser (e.g., 3.1, 3.2, 4.2, 4.3, etc.).
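As a rough sketch of automating the version check above, a harness might parse the client's User-Agent string; the supported-version table, helper name, and version format used here are all hypothetical simplifications:

    import re

    # Hypothetical table of supported browsers and minimum versions.
    SUPPORTED = {"MSIE": 4.0, "Netscape": 4.6}

    def browser_supported(user_agent):
        """Return True if the User-Agent reports a supported browser version."""
        for name, min_version in SUPPORTED.items():
            match = re.search(name + r"[/ ](\d+\.\d+)", user_agent)
            if match and float(match.group(1)) >= min_version:
                return True
        return False

    print(browser_supported("Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)"))  # True
    print(browser_supported("Mozilla/4.0 (compatible; MSIE 3.0; Windows 95)"))  # False

Note that treating version strings as floats is only adequate for simple two-part versions like these.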
Equivalence Class Partitioning and Boundary Condition Analysis
Equivalence class partitioning is a timesaving practice that identifies tests that are equivalent to one another; when two inputs are equivalent, you expect them to cause the same sequence of operations to take place, or to cause the same path to be executed through the code. When two or more test cases are seen as equivalent, the resource savings associated with not running the redundant tests normally outweighs the risk.
An example of an equivalence class involves the testing of a data-entry field in an HTML form. If the field accepts a five-digit ZIP code (e.g., 22222), then it can reasonably be assumed that the field will accept all other five-digit ZIP codes (e.g., 33333, 44444, etc.).
In equivalence partitioning, both valid and invalid values are treated in this manner. For example, if entering six letters into the ZIP code field just described results in an error message, then it can reasonably be assumed that all six-letter combinations will result in the same error message. Similarly, if entering a four-digit number into the ZIP code field results in an error message, then it should be assumed that all four-digit combinations will result in the same error message.
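A minimal sketch of this idea as an automated test, assuming a hypothetical validate_zip function that accepts exactly five digits: one representative value is tested per equivalence class rather than every member of the class.

    import re

    def validate_zip(value):
        """Hypothetical validator: accept exactly five digits."""
        return re.fullmatch(r"\d{5}", value) is not None

    # One representative per equivalence class is enough:
    assert validate_zip("22222")        # valid class: any five-digit code
    assert not validate_zip("abcdef")   # invalid class: letters
    assert not validate_zip("1234")     # invalid class: four digits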

EXAMPLES OF EQUIVALENCE CLASSES
  • Ranges of numbers (such as all numbers between 10 and 99, which are of the same two-digit equivalence class)
  • Membership in groups (dates, times, country names, etc.)
  • Invalid inputs (placing symbols into text-only fields, etc.)
  • Equivalent output events (variation of inputs that produce the same output)
  • Equivalent operating environments
  • Repetition of activities
  • Number of records in a database (or other equivalent objects)
  • Equivalent sums or other arithmetic results
  • Equivalent numbers of items entered (such as the number of characters entered into a field)
  • Equivalent space (on a page or on a screen)
  • Equivalent amount of memory, disk space, or other resources available to a program.
Boundary values mark the transition points between equivalence classes. They can be limit values that define the line between supported and unsupported inputs, or they can define the line between supported and unsupported system requirements. Applications are more susceptible to errors at the boundaries of equivalence classes, so boundary condition tests can be quite effective at uncovering errors.
Generally, each equivalence class is partitioned by its boundary values. Nevertheless, not all equivalence classes have boundaries. For example, given browser equivalence classes such as Netscape 4.6.1 and Microsoft Internet Explorer 4.0 and 5.0, there is no boundary defined among the classes.
Each equivalence class represents a potential risk. Under the equivalence class approach to developing test cases, at most nine test cases should be executed against each partition.
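A minimal sketch of boundary condition tests, using a hypothetical in_range check for the 10-to-99 two-digit class mentioned above: boundary tests target the values just below, on, and just above each edge of the partition.

    def in_range(n):
        """Hypothetical check for a field that accepts 10..99."""
        return 10 <= n <= 99

    # Boundary values sit at the partition edges, where errors cluster:
    assert not in_range(9)     # just below the lower boundary
    assert in_range(10)        # on the lower boundary
    assert in_range(99)        # on the upper boundary
    assert not in_range(100)   # just above the upper boundary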
Rules for bug level
Rules for bug level are determined by the project goals and the project stakeholders. For example, if a software product's graphical user interface is very important to market competitiveness, inconsistencies in the GUI may be more important than missing functionality.
Critical: an error that prevents the program from running.
High: important functions can be completed, but bad output is produced when good data is input.
Medium: important functions can be completed and good output is produced when good data is input, but bad output is produced when bad data is input.
Low: the function works, but there are minor UI problems, such as a wrong color or a wrong text font.
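As a rough sketch, rules like these can be encoded as a triage helper; the input flags here are hypothetical and would vary with the project's goals:

    def bug_level(runs, good_output_on_good_data, good_output_on_bad_data):
        """Map the rules above onto a level; inputs are hypothetical flags."""
        if not runs:
            return "Critical"
        if not good_output_on_good_data:
            return "High"
        if not good_output_on_bad_data:
            return "Medium"
        return "Low"   # works, but may still have minor UI problems

    print(bug_level(runs=True, good_output_on_good_data=False,
                    good_output_on_bad_data=False))   # High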
Using customer impact as the single way to rank a bug eliminates differing definitions among different folks. Customer impact is customer impact. There isn't an "impact to testing," a "marketing priority," or a "customer support priority." There is merely customer impact. Since all of us produce software for a customer, that is really the only field needed. It eliminates confusion in our profession as well as within the companies that each of us works for.
Believe Defect-Free Software is Possible
The average engineer acts as though defects are inevitable. Sure, they try to write good code, but when a defect is found, it's not a surprise. No big deal, just add it to the list of bugs to fix. Bugs in other people's code are no surprise either. Because typical engineers view bugs as normal, they aren't focused on preventing them.
The defect-free engineers, on the other hand, expect their code to have no defects. When a (rare) bug is found, they are very embarrassed and horrified. When they encounter bugs in other people's code, they are disgusted. Because the defect-free engineers view a bug as a public disgrace, they are very motivated to do whatever it takes to prevent all bugs.
In short, the defect-free engineers, who believe defect-free software is possible, have vastly lower defect rates than the typical engineer, who believes bugs are a natural part of programming. The defect-free engineers have a markedly higher productivity.
In software quality, you get what you believe in!
Think Defect-Free Software is Important
Why is defect-free software important?
  • Delivering defect-free software reduces support costs.
  • Delivering defect-free software reduces programming costs.
  • Delivering defect-free software reduces development time.
  • Delivering defect-free software can provide a competitive advantage.
Commit to Delivering Defect-Free Software
Making a firm commitment to defect-free code and holding to that commitment, in spite of schedule and other pressures, is absolutely necessary to producing defect-free code. As a nice side benefit, you will see improved schedules and reduced costs!

Wednesday 8 January 2014

MY WORDS FOR A DETAILED EXPLANATION OF BUG REPORTING TOOLS

TET (Test Environment Toolkit)

The Test Environment Toolkit (TET) is an open-source, multi-platform, uniform test scaffold: an unsupported command-line product into which non-distributed and distributed test suites can be incorporated. It is widely used in many test applications, including The Open Group's UNIX Certification program and the Free Standards Group's LSB Certification program. TET supports tests written in C, C++, Perl, Tcl, Shell (sh, bash, and POSIX shell), Python, Ruby, and Korn Shell.
The Test Environment Toolkit project started in September 1989, when the Open Software Foundation, UNIX International, and X/Open announced an agreement to produce a specification for a testing environment. The three organizations agreed to develop and make freely available an implementation written to that specification, and they committed to producing test plans, test cases, and test suites for execution within that environment.
The extension made to the Test Environment Toolkit by X/Open was the Distributed Test Environment Toolkit (DTET) project, which started in October 1991. The objective of this project was to extend the functionality of TET to support the execution of distributed test cases while remaining backwards compatible with the Test Environment Toolkit. The DTET defined a distributed test case as a test case executing partly on a master system and partly on one or more slave systems. In such a test case, synchronization between the test case controlling software on the multiple systems was required.