Wednesday 7 March 2012

How to do Testing Without a Formal Test Plan?

Testing ideally follows a formal plan, but there are circumstances where testers need to work without one to meet their assigned targets. The tips below will help testers test effectively without a formal plan.

1) Finding High Level Functions
High-level functions are central to the purpose of the site or application. Prioritize them according to their significance to the user's ability to accomplish tasks, and determine the test direction as early as possible so that rapid changes can be accommodated.

2) Test Before Display
Before the product is put on display, test it using different browsers and platforms, and have designers and developers review the tests. This is an instance where testers run out of time and need to test without a formal plan.

3) Concentrate on Ideal Path Actions
Testers should put themselves in the users' shoes, focus on the ideal path, and identify the factors most likely to appear in a majority of user interactions. Get to know the sections users visit most frequently. Most users don't think the same way, so make a set of educated guesses based on users' interests when they visit your website.

4) Focus on Intrinsic Factors
Intrinsic factors are internal: only the tester can fix them, and the user can do nothing about them. These factors therefore require immediate attention before the error surfaces on the user's end. Such errors are time sensitive and need to be fixed even without a formal plan.

5) Boundary Test from Reasonable to Extreme
Boundary testing is the systematic testing of a system's limits and error handling. Unknown values may choke the system, so start with what you know when you perform a boundary test. Looking first for reasonable and predictable mistakes is a better way to perform a progressive boundary test.
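The reasonable-to-extreme progression can be sketched as a small test. This is a minimal sketch with a hypothetical `isValidAge` validator (not from the original post), moving from typical values to the exact boundaries and then to the extremes:

```java
// Hypothetical validator used to illustrate progressive boundary testing.
public class BoundaryTestSketch {

    // Accepts ages in the inclusive range 0..120, rejects everything else.
    public static boolean isValidAge(int age) {
        return age >= 0 && age <= 120;
    }

    public static void main(String[] args) {
        // Start with reasonable, known values...
        check(isValidAge(30));                  // typical value
        check(isValidAge(0));                   // lower boundary
        check(isValidAge(120));                 // upper boundary
        // ...then move progressively toward the extremes.
        check(!isValidAge(-1));                 // just below the lower boundary
        check(!isValidAge(121));                // just above the upper boundary
        check(!isValidAge(Integer.MIN_VALUE));  // extreme input
        check(!isValidAge(Integer.MAX_VALUE));  // extreme input
        System.out.println("all boundary checks passed");
    }

    static void check(boolean condition) {
        if (!condition) throw new AssertionError("boundary check failed");
    }
}
```

The ordering matters: the early checks establish that normal behaviour works before the test starts probing for failures.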

6) Good Values
Always use valid and current information. Enter data formatted as the interface requires, and include all required fields. These good values will help testers make fewer errors.

7) Reasonable and Predictable Mistakes
It is easy for testers to predict errors rooted in the design, especially errors of interface interpretation. These errors are very common and occur frequently with complex designs; in fact, they are easy to fix without a formal plan.

8) Extreme Errors and Crazy Inputs
Testing for maximum input sizes, long strings of garbage, numbers in text fields, and text in numeric fields will flush out extreme errors. Proceed from the most likely to the least likely inputs to save time.

9) Compatibility Testing from good to bad
It is always good to start a test with the configurations that are most familiar. When you test a cross-platform application, begin the testing process with the most used browser and finish with the least used one.

10) Expected Bad Values
A tester should make sure that an error message is displayed each time invalid entries are entered in places like online forms. Such test cases don't require a formal plan: a tester can simply feed random values into the application.
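A check like this can be written directly as assertions. This is a minimal sketch with a hypothetical `validateQuantity` form validator (the method name and messages are invented for illustration): every expected bad value must produce an error message, and a good value must not:

```java
public class FormValidationSketch {
    // Returns an error message for an invalid entry, or null when the value is acceptable.
    public static String validateQuantity(String input) {
        if (input == null || input.trim().isEmpty()) return "Quantity is required.";
        try {
            int n = Integer.parseInt(input.trim());
            if (n <= 0) return "Quantity must be a positive number.";
            return null; // valid entry: no error message
        } catch (NumberFormatException e) {
            return "Quantity must be a number.";
        }
    }

    public static void main(String[] args) {
        // Expected bad values: each invalid entry must produce an error message.
        if (validateQuantity("abc") == null) throw new AssertionError();
        if (validateQuantity("-5") == null) throw new AssertionError();
        if (validateQuantity("") == null) throw new AssertionError();
        // A valid entry produces no error.
        if (validateQuantity("3") != null) throw new AssertionError();
        System.out.println("all expected-bad-value checks passed");
    }
}
```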

Tuesday 6 March 2012

Ten Commandments of Software Testing

"Software Testing is a systematic activity but it also involves economics and human psychology."

The economics of software testing is to determine and predict the faults of the system early, using foreseeable models and applying structured test strategies and methodologies to discover those faults in the early phases of the software development life cycle.

The psychology of testing is to test the application destructively by identifying as many exceptional or out-of-the-box scenarios as possible, sometimes called the third vision.

A good set of test scenarios evaluates every possible permutation and combination of a program under ideal conditions. In addition, a Software Test Engineer needs the proper vision to successfully test a piece of (or the whole) application for compliance with the standards and quality goals.

Whenever Software Test Engineers test a program, they should add value to it rather than performing only requirements conformance and validation. A systematic and well-planned test process adds quality and reliability to the software program.

The most important considerations in software testing are issues of psychology, leading to the following set of principles and guidelines for evaluating software quality:

1. Testing is the process of exercising a software component using a selected set of test cases, with the intent of revealing defects and evaluating quality. The Software Test Engineer executes the software using test cases to evaluate properties such as reliability, usability, maintainability, and level of performance. Test results are used to compare the actual properties of the software to those specified in the requirements document as quality goals.

Deviations or failures to achieve quality goals must be addressed. Software testing has a broader scope than merely executing the program or detecting errors, as described rigorously in test process improvement models such as the TMMi framework.

2. When the test objective is to detect defects, then a good test case is one that has a high probability of revealing a yet undetected defect(s).

Principle 2 supports strong, robust test case design. This means that each test case should have the goal of identifying or detecting a specific type of defect. The Software Test Engineer follows the scientific method, designing tests to prove or disprove a hypothesis by means of test procedures.

3. A test case must contain the expected output or result. Principle 3 supports the fact that a test case without an expected result is of zero value. The expected output or result determines whether a defect has been revealed or the conditions have passed during the execution cycle.

4. Test cases should be developed for both valid and invalid input conditions. Use of test cases that are based on invalid inputs is very useful for revealing defects since they may exercise the code in unexpected ways and identify unexpected software behaviour. Invalid inputs also help developers and Software Test Engineers to evaluate the robustness of the software, that is, its ability to recover when unexpected events occur (in this case an erroneous input).

For example, software users may have misunderstandings or lack information about the nature of the inputs. They often make typographical errors even when complete and correct information is available. Devices or software programs may also provide invalid inputs due to erroneous conditions and malfunctions.
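Principles 3 and 4 can be combined in one small test. This is a minimal sketch with a hypothetical `parsePercent` function (invented for illustration): valid inputs are compared against an expected result, and for invalid inputs the expected result is a specific exception:

```java
public class ValidAndInvalidInputs {
    // Hypothetical function under test: parses a percentage in the range 0..100.
    public static int parsePercent(String s) {
        int value = Integer.parseInt(s); // throws NumberFormatException for non-numeric input
        if (value < 0 || value > 100) {
            throw new IllegalArgumentException("out of range: " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        // Valid inputs: compare the actual output with the expected result.
        if (parsePercent("0") != 0 || parsePercent("100") != 100) throw new AssertionError();
        // Invalid inputs: here the expected result is a specific exception,
        // which also exercises the robustness of the code.
        try {
            parsePercent("101");
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) { /* correct behaviour */ }
        try {
            parsePercent("ten");
            throw new AssertionError("expected NumberFormatException");
        } catch (NumberFormatException expected) { /* correct behaviour */ }
        System.out.println("valid and invalid input checks passed");
    }
}
```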

5. The probability of the existence of additional defects in a software component is proportional to the number of defects already detected in that component. Principle 5 supports the fact that the higher the number of defects already detected in a component, the more likely it is to have additional defects when it undergoes further testing.

For example, if there are two components A and B, and Software Test Engineers have found 20 defects in A and 3 defects in B, then the probability of the existence of additional defects in A is higher than in B. This empirical observation may be due to several causes and varying degrees of influence of software factors. Defects often occur in clusters, and often in code that has a high degree of complexity and is poorly designed.

6. Test cases must be repeatable and reusable. Principle 6 is of the utmost importance: tests need to be repeated after defect repair, and repetition and reuse are also necessary during regression testing (the retesting of software that has been modified) when a new release of the software is made.

7. Testing should be carried out by a group that is independent of the development group. This principle holds true for psychological as well as practical reasons and does not say that it is impossible for a programming organization to find some of its errors, because organizations do accomplish this with some degree of success. Rather, it implies that it is more economical for testing to be performed by an objective, independent party which gives a direction of the third vision by the means of test cases. Finally, independence of the testing group does not call for an adversarial relationship between developers and Software Test Engineers.

8. Test Activities should be well planned. Test plans should be developed for each level of testing, and objectives for each level should be described in the associated plan. The objectives should be stated as quantitatively as possible. Plans, with their precisely specified objectives, are necessary to ensure that adequate time and resources are allocated for testing tasks, and that testing can be monitored and managed.

Test planning must be coordinated with project planning. A test plan is a roadmap for the testing which should be mapped to organizational goals and policies pertaining to the software program.

Test risks must be evaluated at each level of testing. Careful test planning avoids wasteful "throwaway" tests and unproductive, unplanned "test-patch-retest" cycles that often lead to poor-quality software and the inability to deliver software on time and within budget.

9. Avoid throwaway test cases unless the program is truly a throwaway program. If test cases are thrown away, they must be reinvented whenever the program has to be retested. More often than not, since this reinvention requires a considerable amount of work, the Software Test Engineer tends to avoid it. The retest of the program is then rarely as rigorous as the original test, meaning that if a modification causes a previously functional part of the program to fail, the error often goes undetected.

10. Test results should be inspected meticulously. This is probably the most obvious principle, but again it is something that is often overlooked.

We've seen numerous tests in which many subjects failed to detect certain errors, even when symptoms of those errors were clearly observable in the output listings. Put another way, errors that are found in later tests were often missed in the results from earlier tests.

For example:

I. A failure may be overlooked, and the test may be granted a "PASS" status when in reality the software has failed the test. Testing may continue based on erroneous test results. The defect may be revealed at some later stage of testing, but in that case it may be more costly and difficult to locate and repair.

II. A failure may be suspected when in reality none exists. In this case the test may be granted a "FAIL" status. Much time and effort may be spent on trying to find the defect that does not exist. A careful re-examination of the test results could finally indicate that no failure has occurred.

Summarizing all the above principles: testing is an extremely creative and intellectually challenging task. The creativity required to test a large program can exceed the creativity required to design or develop that program. We have already seen that it is impossible to test a program sufficiently to guarantee the absence of all errors, which is why a systematic test process and methodologies for designing robust test cases are required.

This principle supports that:

A Software Test Engineer needs to have comprehensive knowledge of the software engineering discipline.

A Software Test Engineer needs to have knowledge from both experience and education as to how software is specified, designed, and developed.

A Software Test Engineer needs to have knowledge of fault types and where faults of a certain type might occur in code constructs.

A Software Test Engineer needs to reason like a scientist and propose hypotheses that relate to presence of specific types of defects.

A Software Test Engineer needs to have a good grasp of the problem domain of the software that he/she is testing.

A Software Test Engineer needs to create and document test cases. To design the test cases the Software Test Engineer must select inputs often from a very wide domain.

A Software Test Engineer needs to design and record test procedures for running the tests.

A Software Test Engineer needs to execute the tests and is responsible for recording results.

A Software Test Engineer needs to analyze test results and decide on success or failure for a test. This involves understanding and keeping track of an enormous amount of detailed information.

A Software Test Engineer needs to know the method for collecting and analyzing test related measurements.

Thursday 9 February 2012

Common programming bugs that every Software Tester should know.

Basically this is more useful for programmers, but I think software testers can gain insight into how developers can unknowingly leave bugs in software programs.

Each bug listed in this resource can lead to serious software vulnerabilities if not fixed. The top 12 security bugs list will help programmers avoid some common but serious coding mistakes. For software testers, the list is useful as a security testing checklist for web applications as well as desktop applications.

Here are the top security vulnerabilities discussed in detail in this article:

  1. Improper input validation
  2. Improper escaping of output or encoding
  3. SQL injection
  4. Cross-site scripting
  5. Race conditions
  6. Information leak in error messages
  7. Error while transmitting sensitive information
  8. Memory leak
  9. External control of critical data and file paths
  10. Improper initialization
  11. Improper authorization
  12. Client side security checks

The most common security vulnerability mistake developers make is “Client side enforcement of server side security”.
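Item 2 on the list, improper escaping of output, is also the usual fix for item 4, cross-site scripting. As a minimal sketch (real projects should use a vetted library such as the OWASP Java Encoder rather than this hand-rolled version), escaping the five HTML special characters prevents user input from being interpreted as markup:

```java
public class EscapeOutputSketch {
    // Minimal HTML escaping: the fix for "improper escaping of output".
    public static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String attack = "<script>alert('x')</script>";
        String safe = escapeHtml(attack);
        // The escaped output no longer contains an executable tag.
        if (safe.contains("<script>")) throw new AssertionError();
        System.out.println(safe);
    }
}
```

A tester can use the same idea in reverse: feed markup like `<script>` into every input field and check that it comes back escaped, never rendered.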

Tuesday 7 February 2012

Usage-Based Statistical Testing

Usage-based testing is testing software the way clients actually use it. If any flaws are detected by customers, information about them can be reported to software vendors, and integrated fixes can be created and delivered to all clients to avoid such defects.

However, because of the enormous number of software installations, correcting software bugs after release can be very costly, and frequent fixes can also harm the software vendor's reputation and long-term business viability.

By capturing a new product's expected usage and using it in testing, product consistency and software quality can be assured most directly.

In usage-based statistical testing, the testing environment resembles the actual operational environment for the software in the field, and the overall testing sequence is represented by the systematic execution of specific test cases in a test suite. Test cases resemble usage scenarios: templates of actual software usage and the sequences performed by target customers. Because of the huge number of clients, the diverse usage patterns cannot all be captured in an executable set of test cases. This is why the term "statistical" is used for this tactic, which is also called "random testing."

Usage-based statistical testing is most appropriate for the final phase of software testing, usually referred to as acceptance testing, performed right before product release, so that the decision to stop testing is equivalent to the decision to release the product.
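The "statistical" part can be sketched as weighted random selection over an operational profile. This is a minimal sketch with invented scenario names and usage probabilities; a real profile would come from measured customer behaviour. Scenarios that customers use most often are selected, and therefore tested, proportionally more often:

```java
import java.util.Random;

public class StatisticalTestSketch {
    // Hypothetical operational profile: usage probability per scenario.
    static final String[] SCENARIOS = { "search", "browse", "checkout" };
    static final double[] USAGE     = { 0.6,      0.3,      0.1 };

    // Picks the next scenario to test, weighted by how often customers use it.
    public static String nextScenario(Random rng) {
        double r = rng.nextDouble(), cumulative = 0.0;
        for (int i = 0; i < SCENARIOS.length; i++) {
            cumulative += USAGE[i];
            if (r < cumulative) return SCENARIOS[i];
        }
        return SCENARIOS[SCENARIOS.length - 1];
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed so the run is reproducible
        int[] counts = new int[SCENARIOS.length];
        for (int run = 0; run < 10000; run++) {
            String s = nextScenario(rng);
            for (int i = 0; i < SCENARIOS.length; i++) {
                if (SCENARIOS[i].equals(s)) counts[i]++;
            }
        }
        // Frequently used scenarios end up tested proportionally more often.
        if (counts[0] <= counts[1] || counts[1] <= counts[2]) throw new AssertionError();
        System.out.printf("search=%d browse=%d checkout=%d%n", counts[0], counts[1], counts[2]);
    }
}
```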

7 Deadly Sins in Automated Testing


Automated testing is often considered by companies as an easy way to deliver high-quality software and products, reduce testing costs, and increase efficiency. But the truth is that not all bugs are found by automated testing, so it is very important to realize what can go undetected.

1. Over indulging on proprietary testing tools
Many commercial testing tools provide simple features for automating the capture and replay of manual test cases. While this approach seems sound, it encourages testing through the user-interface and results in inherently brittle and difficult to maintain tests. Additionally, the cost and restrictions that licensed tools place on who can access the test cases is an overhead that tends to prevent collaboration and team work. Furthermore, storing test cases outside the version control system creates unnecessary complexity. As an alternative, open source test tools can usually solve most automated testing problems and the test cases can be easily included in the version control system.

2. Too lazy to setup CI server to execute tests
The cost and rapid-feedback benefits of automated tests are best realised when the tests are executed regularly. If your automated tests are initiated manually rather than through the continuous integration (CI) system, there is a significant risk that they are not being run regularly and therefore may in fact be failing. Make the effort to ensure automated tests are executed through the CI system.

3. Loving the UI so much that all tests are executed through the UI
Although automated UI tests provide a high level of confidence, they are expensive to build, slow to execute and fragile to maintain. Testing at the lowest possible level is a practice that encourages collaboration between developers and testers, increases the execution speed for tests and reduces the test implementation costs. Automated unit tests should be doing a majority of the test effort followed by integration, functional, system and acceptance tests. UI based tests should only be used when the UI is actually being tested or there is no practical alternative.

4. Jealously creating tests and not collaborating
Test driven development is an approach to development that is as much a design activity as it is a testing practice. The process of defining test cases (or executable specifications) is an excellent way of ensuring that there is a shared understanding among all involved as to the actual requirement being developed and tested. The practice is often associated with unit testing but can equally be applied to other test types, including acceptance testing.

5. Frustration with fragile tests that break intermittently
Unreliable tests are a major cause for teams ignoring or losing confidence in automated tests. Once confidence is lost the value initially invested in automated tests is dramatically reduced. Fixing failing tests and resolving issues associated with brittle tests should be a priority to eliminate false positives.

6. Thinking automated tests will replace all manual testing
Automated tests are not a replacement for manual exploratory testing. A mixture of testing types and levels is needed to achieve the desired quality and mitigate the risk associated with defects. The automated testing triangle originally described by Mike Cohn explains that the investment profile in tests should focus at the unit level and then reduce up through the application layers.

7. Too much automated testing not matched to defined system quality
Testing effort needs to match the desired system quality otherwise there is a risk that too-much, too-little or not the right things will be tested. It is important to define the required quality and then match the testing effort to suit. This approach can be done in collaboration with business and technical stakeholders to ensure there is a shared understanding of risks and potential technical debt implications.

Seven Mistakes in Software Testing

Trial and error is a natural part of testing, but repeatedly failing to fix the errors found would ruin the effort invested in testing and is a mere waste of time. While testing, software test engineers unintentionally make small mistakes that seem insignificant but are of high importance.

The following are the mistakes we usually find in software testing:

The first mistake:

Unit testing is a method by which the smallest testable parts of an application's source code are tested to determine whether they are fit for use. A utility class is easy to test by just calling all its utility methods, passing some values, and examining whether the expected result is returned.

The first mistake arises here: the majority of developers don't think outside the box, or not adequately.
Developers may test that 1 + 1 = 2, that 2 + 1 = 3, and that 3 + 1 = 4. But what is the use of doing nearly the same test three times? Testing boundary cases is better. Are the arguments of the sum() method primitive types or Objects? If they are Objects, what happens if you pass null values? If an exception is thrown, is it the expected one? Does it clearly tell what the problem is?
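Those questions translate directly into a test. This is a minimal sketch assuming a hypothetical `sum(Integer, Integer)` utility method (the post does not show its implementation): one representative happy-path case, a boundary case, and the null case with a check on the exception and its message:

```java
public class SumBoundaryTest {
    // Hypothetical utility method under test, with Object (Integer) arguments.
    public static int sum(Integer a, Integer b) {
        if (a == null || b == null) {
            // A clear message tells the caller what the problem is.
            throw new IllegalArgumentException("arguments must not be null");
        }
        return a + b;
    }

    public static void main(String[] args) {
        // One representative case is enough; 1+1, 2+1 and 3+1 all exercise the same path.
        if (sum(1, 1) != 2) throw new AssertionError();
        // Boundary cases are more interesting: what happens at the edges?
        if (sum(Integer.MAX_VALUE, 0) != Integer.MAX_VALUE) throw new AssertionError();
        // Null values: is the thrown exception the expected one, and is it clear?
        try {
            sum(null, 1);
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException e) {
            if (!e.getMessage().contains("null")) throw new AssertionError();
        }
        System.out.println("sum boundary tests passed");
    }
}
```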

The second mistake:

Mocks - When, for example, the service layer is unit tested, all other components such as the DAO layer should be mocked. In many cases this is done manually. That's the second mistake.

It's much better to use mocking frameworks, because when you mock stuff manually, the mocks are tightly coupled to the implementation they were made for. With a framework, you simply declare how the mocks should behave. Some mocking frameworks are capable of doing more than others; some are easier to use than others. Two of the best mocking frameworks are Mockito and EasyMock. Mockito is preferred for its power and simplicity; EasyMock is by far the best known, but it is a bit more complex to use. So it's not only a matter of choosing a mocking framework; it's also a matter of choosing the right one.
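To keep this sketch dependency-free it uses a hand-rolled stub rather than Mockito itself; the invented `UserDao`/`GreetingService` pair shows exactly the coupling problem a framework removes, since every manual stub has to be rewritten whenever the interface changes:

```java
public class ManualMockSketch {
    // DAO interface that the service depends on.
    interface UserDao {
        String findNameById(int id);
    }

    // Service under test: the logical unit of work we actually want to verify.
    static class GreetingService {
        private final UserDao dao;
        GreetingService(UserDao dao) { this.dao = dao; }
        String greet(int id) { return "Hello, " + dao.findNameById(id) + "!"; }
    }

    public static void main(String[] args) {
        // Hand-rolled stub: written by hand, tightly coupled to the interface.
        // With Mockito the same setup is one declarative line, roughly:
        //   when(dao.findNameById(7)).thenReturn("Ada");
        UserDao stub = id -> id == 7 ? "Ada" : "unknown";
        GreetingService service = new GreetingService(stub);
        if (!service.greet(7).equals("Hello, Ada!")) throw new AssertionError();
        System.out.println("service tested against a stubbed DAO");
    }
}
```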

The third mistake:

Integration tests are tests over different integrated parts of an application, while unit tests test only a LUW (logical unit of work). The best-known type of integration test is the test of the DAO layer.

In these tests, multiple things are validated:

The input parameters,
The use of the ORM tool
The correctness of the generated query (functional)
If the DB can be accessed
The fact that the query is working and
The fact that the correct ResultSet is returned.

The third mistake, made often, is that developers provide the test data in the dbUnit XML dataset themselves. Most of the time this data is not representative of live data; the best way to handle this is to have the functional people provide and maintain the test data. Because one doesn't want to make them struggle with a specific XML format, good practice is to use the XlsDataSet format (Excel sheet).

The fourth mistake:

Functional tests are forgotten too often because the perception is that they are difficult and expensive to maintain. It's true that we'll have to tweak them to make them maintainable, but executing all the same tests manually over and over again will cost much more. Also, when functionality is tested and automated by means of continuous integration, there's no need to stress about screens or functionality breaking in the next release.

Selenium is a great framework that is able to record functionality in the browser. The fourth mistake is made here: most people will just record and replay the tests the way they are. Selenium can track HTML components by their HTML id. If developers don't specify ids in their HTML code, the generated locators can easily change when a page is updated. This will break the tests immediately, even when the functionality still works.

The fifth mistake:

This mistake is made in a JUnit test by just pasting the recorded test. Common functionality such as login and logout, and the time to wait for a response, should be moved to a higher level.
For example, a login can be done before each test and a logout afterwards. This way, if something changes in the login or logout functionality, the tests won't break.

The sixth mistake:

The sixth mistake is the absence of tests for logical flows. It's perfectly possible that an insert, a search, an update, and a delete work on their own, but that the complete flow from creation through search and update to deletion of the same entry won't work. TestSuites are a great way to group multiple tests and run them in sequence.
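A logical-flow test looks like this. This is a minimal sketch with an invented in-memory `Repository` standing in for a real DAO; the point is that the whole create-search-update-delete sequence runs against the same entry, in order:

```java
import java.util.HashMap;
import java.util.Map;

public class CrudFlowTest {
    // Minimal in-memory repository standing in for a real DAO.
    static class Repository {
        private final Map<Integer, String> rows = new HashMap<>();
        void insert(int id, String value) { rows.put(id, value); }
        String find(int id)               { return rows.get(id); }
        void update(int id, String value) { rows.put(id, value); }
        void delete(int id)               { rows.remove(id); }
    }

    public static void main(String[] args) {
        Repository repo = new Repository();
        // The complete flow on the SAME entry: create -> search -> update -> delete.
        repo.insert(1, "draft");
        if (!"draft".equals(repo.find(1))) throw new AssertionError("search after insert failed");
        repo.update(1, "final");
        if (!"final".equals(repo.find(1))) throw new AssertionError("search after update failed");
        repo.delete(1);
        if (repo.find(1) != null) throw new AssertionError("entry still present after delete");
        System.out.println("full CRUD flow passed");
    }
}
```

Each step could pass in isolation while the sequence fails (for example, an update that silently creates a second row), which is exactly what a flow test catches.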

The seventh mistake:

BDD (Behaviour Driven Development) - Instead of functional tests, most of the tests written are technical tests. It is much more important to test logical behaviour than to test whether things work technically. That can be done by means of Behaviour Driven Development: given some preconditions, when an event occurs, then what should the outcome be? Mockito has some great support for easily writing tests in a BDD style.
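The given/when/then structure can be shown without any framework. This is a minimal sketch with an invented `Account` class and an invented overdraft rule; the comments carry the specification, which is the essence of the BDD style:

```java
public class BddStyleSketch {
    // Hypothetical behaviour under test: an account that refuses overdrafts.
    static class Account {
        private int balance;
        Account(int opening) { this.balance = opening; }
        boolean withdraw(int amount) {
            if (amount > balance) return false; // behaviour: refuse overdraft
            balance -= amount;
            return true;
        }
        int balance() { return balance; }
    }

    public static void main(String[] args) {
        // Given an account with a balance of 100,
        Account account = new Account(100);
        // when the customer withdraws more than the balance,
        boolean accepted = account.withdraw(150);
        // then the withdrawal is refused and the balance is unchanged.
        if (accepted || account.balance() != 100) throw new AssertionError();
        System.out.println("behaviour verified: overdraft refused");
    }
}
```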

Sunday 29 January 2012

Cookie Testing – How to do Cookie Testing?

What is Cookie?

A cookie is a text file that gets saved on the hard disk of the user's system. Browsers use the cookies that have been saved in the designated location. Informative data is recorded in the cookie and can be retrieved by web pages.

How to do Cookie Testing?

As a tester, testing the cookies is essential, since many web applications include informative content and payment transactions. Below are the steps that should be considered while testing:
The major cookie test involves disabling cookies from the option available in every browser. Make sure that cookies are disabled, access the respective website, and check whether the functionality of the pages works properly. Browse the whole website and watch for crashes. Basically, a message should be displayed such as "Cookies are disabled. Please enable cookies to browse the website easily".
Another test should be performed after corrupting the cookies. To corrupt the cookies, find their location, manually edit the respective cookie with fake figures or other invalid data, and save it. In some cases, internal information of the domain can be accessed or hacked with the help of corrupted cookies. If you are testing a banking or finance domain, this point should come first in your checklist.
Remove all cookies for the website you are testing and check that all the website's pages still work properly.
Cookie testing can be performed on different browsers to check whether the website is writing the cookies properly. You can manually check for the existence of cookies on each and every page.
If you are testing an application that authenticates logins, log in with valid details. If you see a username parameter in the address bar, change it to see the behavior of the application; you should not get logged into a different user's account. Also check whether a proper message is displayed for the action you have performed.
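For the manual existence and corruption checks above, it helps to decode what the browser actually sends. This is a minimal sketch of a parser for a raw `Cookie:` request header (the cookie names in the example are invented), following the standard `name=value; name=value` format:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CookieCheckSketch {
    // Parses a raw "Cookie:" request header into name/value pairs so a tester
    // can inspect exactly which cookies the site stored and what they contain.
    public static Map<String, String> parseCookieHeader(String header) {
        Map<String, String> cookies = new LinkedHashMap<>();
        for (String pair : header.split(";")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                cookies.put(pair.substring(0, eq).trim(), pair.substring(eq + 1).trim());
            }
        }
        return cookies;
    }

    public static void main(String[] args) {
        Map<String, String> c = parseCookieHeader("sessionId=abc123; theme=dark");
        // Existence check: the cookie the page depends on is actually written.
        if (!c.containsKey("sessionId")) throw new AssertionError();
        if (!"dark".equals(c.get("theme"))) throw new AssertionError();
        System.out.println("cookies found: " + c.keySet());
    }
}
```

A corruption test would then replace one of the parsed values with invalid data and replay the request to see how the application reacts.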

The above are the most important cookie testing concepts, showing how to do cookie testing. Cookie testing is really necessary and should not be skipped by testers while testing any web application. Cookie testing really makes the application more stable when the issues found are fixed.

GUI Testing Checklist

Ideally, the motive of GUI testing is to verify that your application conforms to the specific standards for graphical user interfaces. These checklists are beneficial for the QA and development teams. The checklist must comprise all the GUI components so that they can be tested systematically. Below is a checklist that is beneficial for every tester performing GUI testing of a specific application:

Colors used in the web application:
Check the colors for the hyperlinks.
Background color for all the pages should be tested.
Color for the warning message should be checked.

Content used in the web application:
The font on the whole website should be consistent, according to the requirements.
Check that the content is properly aligned.
All the labels should be properly aligned.
Check all the content for the respective words to be in the correct lower and upper case.
Check all the error messages on the screen; they should not have any spelling mistakes.

Images used in the web application:
Check that all the graphics are properly aligned.
Check for the broken images all over.
Check the size for the graphics used or uploaded.
Check all the banners and their size.
Check the buttons for the respective command like size, font and font size.

Navigation in the application:
All the tabs should be loaded on time.
Tabs should be displayed in sequence.
All the tabs from the left to right should be correct according to the requirements.
Check scrollbar is getting displayed if required.
Check all the links given in the sitemap and see for broken links.

Usability in the application:
Check the font size; it should be readable.
Check the spelling of all the fields and that their prompts display properly.
Check whole website’s look and feel.
Test the website in all resolutions like 640×480 etc.

Hence, if the entire checklist is followed, there is no chance of the tester or developer missing any requirement. It is suggested to draft all the points in an Excel sheet, add pass and fail columns against each checklist item, and send it to the development team after performing the testing cycle. Apart from manual testing, other tools are also available, such as a web link validator to check for broken links and the online W3C validator tool for checking usability standards.

How to Perform GUI Testing? Points to take care

How to Perform GUI Testing?

Basically, GUI testing is performed to check the user interface and to test whether the functionality is working properly. GUI testing involves several tasks, such as testing all object events, mouse events, menus, fonts, images, content, control lists, etc. GUI testing improves the overall look and feel of the application according to the requirements. The user interface of a website should be designed to attract a global audience. Testers have two options: test manually, or automate with a tool like QTP. Planning is required for all stages of testing.

The points discussed below explain how to perform GUI testing:
First of all, the tester should be aware of and understand the design documents for the application under test. All the mentioned requirements need to be checked properly, with exact figures for font sizes, images, navigation, and the controls listed in the application.
Create the environment and test the application in different browsers such as Mozilla, IE, Opera, and Safari. Sometimes the design of the application does not work or behave properly in other browsers, so make sure you are testing in all of them.
Testers can check manually, but if you find it difficult to test an application with a large number of pages, choose the option to automate. You can easily record the application with all the respective controls to test, and at the end check whether the iterations passed or failed.
Prepare a checklist for GUI testing, since there are many things related to the user interface. Put them all in one sheet and execute them against the application.
Prepare all the test cases for the user interface to perform GUI testing effectively.
The tester should divide the application into its modules and perform testing module by module. This way the testing proceeds easily, and there are higher chances of finding defects.

The above points are very beneficial in performing GUI testing. The tester should stay focused on how to test the application. Planning how to test the system or application is necessary to achieve the respective goals of GUI testing. Testing should be performed keeping in mind the perspective of the users who will come and use the application. Remember: the first impression is the last impression.