Thursday 9 February 2012

Common programming bugs that every Software Tester should know.

This is primarily useful for programmers, but I think software testers can also gain insight into how developers unknowingly leave bugs in software programs.

Each bug listed here can lead to serious software vulnerabilities if left unfixed. This list of the top 12 security bugs will help programmers avoid some common but serious coding mistakes. For software testers, the list can serve as a security testing checklist for web as well as desktop applications.

Here are the top security vulnerabilities discussed in detail in this article:

  1. Improper input validation
  2. Improper output escaping or encoding
  3. SQL injection
  4. Cross-site scripting
  5. Race conditions
  6. Information leaks in error messages
  7. Errors while transmitting sensitive information
  8. Memory leaks
  9. External control of critical data and file paths
  10. Improper initialization
  11. Improper authorization
  12. Client-side security checks

The most common security vulnerability mistake developers make is “client-side enforcement of server-side security”.
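
As an illustration of item 3 above, here is a minimal Java sketch of avoiding SQL injection with a parameterized query. The table and column names are assumptions made up for the example.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {

    // Unsafe: concatenating raw input such as "' OR '1'='1" lets an
    // attacker rewrite the query:
    //   "SELECT id, name FROM users WHERE name = '" + name + "'"

    static ResultSet findUser(Connection conn, String name) throws SQLException {
        // Safe: the driver treats the bound value as data, never as SQL.
        PreparedStatement stmt =
                conn.prepareStatement("SELECT id, name FROM users WHERE name = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}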

Tuesday 7 February 2012

Usage-Based Statistical Testing

Usage-based testing exercises software the way clients actually use it. If flaws are detected by customers in the field, information about them can be reported to the software vendor, and fixes can be built and delivered to all clients to prevent further failures from those defects.

However, because of the enormous number of software installations, correcting bugs after release can be very costly, and frequent fixes can also harm the software vendor's reputation and long-term business viability.

If a new product's expected usage can be captured and used to drive testing, product reliability can be assured much more directly.

In usage-based statistical testing, the testing environment resembles the actual operational environment for the software in the field, and the overall testing sequence, as represented by the orderly execution of specific test cases in a test suite, resembles the usage scenarios, patterns, and sequences of actual software usage by the target customers. Because of the huge number of customers, the diverse usage patterns cannot all be captured in a fixed set of test cases. This is why the term "statistical" is used for this approach, which is also known as "random testing": test cases are drawn at random in proportion to how often real customers exercise each operation, as sketched below.
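
To make the statistical idea concrete, here is a minimal Java sketch that draws test operations from an operational profile. The operation names and usage probabilities are invented for illustration.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

public class OperationalProfileSampler {

    public static void main(String[] args) {
        // Estimated share of real-world usage per operation (sums to 1.0).
        Map<String, Double> profile = new LinkedHashMap<String, Double>();
        profile.put("search", 0.60);
        profile.put("browse", 0.25);
        profile.put("checkout", 0.10);
        profile.put("adminReport", 0.05);

        Random random = new Random();
        for (int i = 1; i <= 10; i++) {
            System.out.println("Test step " + i + ": " + pick(profile, random));
        }
    }

    // Weighted random choice: frequently used operations are tested most often.
    static String pick(Map<String, Double> profile, Random random) {
        double r = random.nextDouble();
        double cumulative = 0.0;
        String last = null;
        for (Map.Entry<String, Double> entry : profile.entrySet()) {
            cumulative += entry.getValue();
            last = entry.getKey();
            if (r < cumulative) {
                return entry.getKey();
            }
        }
        return last; // guard against floating-point rounding
    }
}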

Usage-based statistical testing is most appropriate for the final phase of software testing, commonly known as acceptance testing, which takes place right before product release, so that the decision to stop testing is equivalent to the decision to release the product.

7 Deadly Sins in Automated Testing

Companies often see automated testing as an easy way to deliver high-quality software while reducing testing costs and increasing efficiency. The truth is that not all bugs are found by automated testing, so it is very important to realize what can go undetected.

1. Over-indulging in proprietary testing tools
Many commercial testing tools provide simple features for automating the capture and replay of manual test cases. While this approach seems sound, it encourages testing through the user interface and results in inherently brittle, difficult-to-maintain tests. Additionally, the cost and the restrictions that licensed tools place on who can access the test cases are an overhead that tends to prevent collaboration and teamwork. Furthermore, storing test cases outside the version control system creates unnecessary complexity. As an alternative, open source test tools can usually solve most automated testing problems, and the test cases can easily be included in the version control system.

2. Too lazy to set up a CI server to execute tests
The cost and rapid-feedback benefits of automated tests are best realised when the tests are executed regularly. If your automated tests are initiated manually rather than through a continuous integration (CI) system, then there is a significant risk that they are not being run regularly and may therefore in fact be failing. Make the effort to ensure automated tests are executed through the CI system.

3. Loving the UI so much that all tests are executed through the UI
Although automated UI tests provide a high level of confidence, they are expensive to build, slow to execute and fragile to maintain. Testing at the lowest possible level is a practice that encourages collaboration between developers and testers, increases the execution speed of tests and reduces test implementation costs. Automated unit tests should carry the majority of the test effort, followed by integration, functional, system and acceptance tests. UI-based tests should only be used when the UI itself is being tested or there is no practical alternative.

4. Jealously creating tests and not collaborating
Test-driven development is an approach to development that is as much a design activity as it is a testing practice. The process of defining test cases (or executable specifications) is an excellent way of ensuring a shared understanding between everyone involved as to the actual requirement being developed and tested. The practice is often associated with unit testing but can be applied equally to other test types, including acceptance testing.

5. Frustration with fragile tests that break intermittently
Unreliable tests are a major cause of teams ignoring or losing confidence in automated tests. Once confidence is lost, the value initially invested in automated tests is dramatically reduced. Fixing failing tests and resolving the issues associated with brittle tests should be a priority, to eliminate false positives.

6. Thinking automated tests will replace all manual testing
Automated tests are not a replacement for manual exploratory testing. A mixture of testing types and levels is needed to achieve the desired quality and mitigate the risk associated with defects. The automated testing triangle, originally described by Mike Cohn, explains that the investment in tests should focus at the unit level and then reduce up through the application layers.

7. Too much automated testing not matched to defined system quality
Testing effort needs to match the desired system quality; otherwise there is a risk that too much, too little or not the right things will be tested. It is important to define the required quality and then match the testing effort to suit. This can be done in collaboration with business and technical stakeholders to ensure there is a shared understanding of the risks and the potential technical-debt implications.

Seven Mistakes in Software Testing

Trial and error is a natural way of testing, but repeatedly failing to fix the errors found ruins the effort put into testing and is a mere waste of time. While testing, software test engineers unintentionally make small mistakes that seem insignificant but matter a great deal.

The following are the mistakes we usually find in software testing:

The first mistake:

Unit testing is a method by which the smallest testable parts of an application's source code are tested to determine whether they are fit for use. It's easy to test a utility class by simply calling all its utility methods, passing in some values and examining whether the expected result is returned.

The first mistake arises here: the majority of developers don't think outside the box, or not enough.
Developers can test that 1 + 1 = 2, that 2 + 1 = 3 and that 3 + 1 = 4. But what is the use of running nearly the same test three times? Testing boundary cases is better. Are the arguments of the sum() method primitive types or objects? If they are objects, what happens if you pass null values? If an exception is thrown, is it the expected one? Does it clearly tell you what the problem is?
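
Here is a minimal JUnit 4 sketch of such boundary-case tests; the MathUtils class and its null-handling behaviour are assumptions invented for the example.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MathUtilsTest {

    // Hypothetical class under test, inlined so the sketch is self-contained.
    static class MathUtils {
        static int sum(Integer a, Integer b) {
            if (a == null || b == null) {
                throw new IllegalArgumentException("arguments must not be null");
            }
            return a + b;
        }
    }

    @Test
    public void sumsTwoValues() {
        assertEquals(2, MathUtils.sum(1, 1));
    }

    // Boundary case: null arguments should fail fast with the expected exception.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNullArgument() {
        MathUtils.sum(null, 1);
    }
}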

The second mistake:

Mocks: when, for example, the service layer is unit tested, all other components, such as the DAO layer, should be mocked. In many cases this is still done by hand. That's the second mistake.

It's much better to use a mocking framework, because hand-written mocks end up tightly coupled to the implementation they were made for, whereas a framework generates the mocks exactly as you declare them. Some mocking frameworks can do more than others; some are easier to use than others. Two of the best mocking frameworks are Mockito and EasyMock. Mockito is often preferred for its power and simplicity; EasyMock is by far the best known but is a bit more complex to use. So it's not only a matter of choosing a mocking framework; it's also a matter of choosing the right one.
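
Here is a minimal sketch of mocking a DAO with Mockito while unit testing a service. UserService, UserDao and findName() are hypothetical names invented for the example.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class UserServiceTest {

    interface UserDao {
        String findName(long id);
    }

    static class UserService {
        private final UserDao dao;
        UserService(UserDao dao) { this.dao = dao; }
        String greet(long id) { return "Hello, " + dao.findName(id); }
    }

    @Test
    public void greetsUserByName() {
        // The framework builds the mock; no hand-written stub class is needed.
        UserDao dao = mock(UserDao.class);
        when(dao.findName(42L)).thenReturn("Alice");

        assertEquals("Hello, Alice", new UserService(dao).greet(42L));
    }
}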

The third mistake:

Integration tests are tests that cover different integrated parts of an application, whereas unit tests cover only a single LUW (logical unit of work). The best-known type of integration tests are tests of the DAO layer.

In these tests, multiple things are validated:

The input parameters
The use of the ORM tool
The correctness of the generated query (functional)
Whether the DB can be accessed
Whether the query works
Whether the correct ResultSet is returned

The third mistake, made often, is that developers provide the test data in the dbUnit XML dataset themselves. Most of the time this data is not representative of live data; the best way to handle this is to have the functional people provide and maintain this test data. Because one doesn't want to make them struggle with a specific XML format, a good practice is to use the XlsDataSet format (an Excel sheet).
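
Here is a minimal sketch of loading such an Excel dataset with dbUnit before a DAO test runs. The file name, in-memory database URL and credentials are assumptions for the example.

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.excel.XlsDataSet;
import org.dbunit.operation.DatabaseOperation;

public class CustomerDaoIntegrationTest {

    protected void setUpDatabase() throws Exception {
        Connection jdbc = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "");
        IDatabaseConnection connection = new DatabaseConnection(jdbc);

        // customers.xls is maintained by the functional people;
        // each sheet in the workbook maps to a database table.
        IDataSet dataSet = new XlsDataSet(new FileInputStream("customers.xls"));

        // Clean the tables and insert the test data before each test.
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
    }
}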

The fourth mistake:

Functional tests are forgotten too often because of the perception that they are difficult and expensive to maintain. It's true that some tweaking is needed to make them maintainable, but executing all the same tests manually over and over again costs much more. Also, when functionality is tested automatically by means of continuous integration, there's no need to stress about screens or functionality breaking in the next release.

Selenium is a great framework that can record functionality in the browser, and the fourth mistake is made here: most people just record the tests and replay them as they are. Selenium tracks HTML components by their HTML id. If developers don't specify ids in their HTML code, the generated locators can easily change when a page is updated. This breaks the tests immediately, even when the functionality still works.
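
Here is a minimal Selenium WebDriver sketch contrasting a stable id-based locator with a brittle recorded one. The page URL and the element ids ("username", "loginButton") are assumptions for the example.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginPageTest {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://localhost:8080/login");

        // Brittle: a recorded locator such as the XPath
        // "/html/body/div[2]/form/input[1]" breaks as soon as the layout changes.
        // Stable: an explicit HTML id survives layout changes.
        driver.findElement(By.id("username")).sendKeys("tester");
        driver.findElement(By.id("loginButton")).click();

        driver.quit();
    }
}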

The fifth mistake:

This mistake is made by simply pasting the recorded test into a JUnit test. Common functionality such as login and logout, and the waiting time for a response, should be moved up to a higher level.
For example, a login can be done before each test and a logout afterwards. This way, if something changes in the login or logout functionality, the tests won't break.
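
Here is a minimal JUnit 4 sketch of that idea; the login() and logout() helpers are hypothetical stand-ins for whatever drives the screens (for instance, Selenium calls).

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class AccountFlowTest {

    @Before
    public void logIn() {
        // One shared place to change if the login screen changes.
        login("tester", "secret");
    }

    @After
    public void logOut() {
        logout();
    }

    @Test
    public void showsAccountOverviewAfterLogin() {
        // The test body deals only with the feature under test.
    }

    private void login(String user, String password) {
        // Hypothetical helper: would drive the login screen.
    }

    private void logout() {
        // Hypothetical helper: would drive the logout link.
    }
}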

The sixth mistake:

The sixth mistake is the absence of tests for logical flows. It's perfectly possible that an insert, a search, an update and a delete each work on their own, but that the complete flow from creation, through search and update, to deletion of the same entry does not. A great way to group multiple tests and run them together in sequence is a TestSuite, as sketched below.
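
Here is a minimal JUnit 4 TestSuite sketch that runs the steps of one flow in order. The nested CRUD test classes are hypothetical placeholders; in a real project each would be its own class.

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({
        EntryCrudFlowSuite.CreateEntryTest.class,
        EntryCrudFlowSuite.SearchEntryTest.class,
        EntryCrudFlowSuite.UpdateEntryTest.class,
        EntryCrudFlowSuite.DeleteEntryTest.class
})
public class EntryCrudFlowSuite {
    // Nested here only so the sketch is self-contained.
    public static class CreateEntryTest { @Test public void createsEntry() { } }
    public static class SearchEntryTest { @Test public void findsEntry() { } }
    public static class UpdateEntryTest { @Test public void updatesEntry() { } }
    public static class DeleteEntryTest { @Test public void deletesEntry() { } }
}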

The seventh mistake:

BDD (Behavior Driven Development): most tests are technical tests instead of functional ones. Testing logical behavior is much more important than testing whether things work technically, and that can be done by means of Behavior Driven Development: given some preconditions, when an event occurs, then what should the outcome be? Mockito has some great support for writing tests in a BDD style.
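
Here is a minimal sketch of Mockito's BDD style. AccountService, AccountDao and findBalance() are hypothetical names invented for the example.

import static org.junit.Assert.assertEquals;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.mock;

import org.junit.Test;

public class AccountServiceBddTest {

    interface AccountDao {
        int findBalance(String accountId);
    }

    static class AccountService {
        private final AccountDao dao;
        AccountService(AccountDao dao) { this.dao = dao; }
        int balanceOf(String accountId) { return dao.findBalance(accountId); }
    }

    @Test
    public void returnsBalanceFromDao() {
        AccountDao dao = mock(AccountDao.class);

        // Given a known balance in the DAO...
        given(dao.findBalance("A-1")).willReturn(100);

        // ...when the service is asked for the balance...
        int balance = new AccountService(dao).balanceOf("A-1");

        // ...then the expected outcome is returned.
        assertEquals(100, balance);
    }
}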