Friday, August 7, 2009

Software Characteristics

In this post, I would like to summarize the typical software characteristics that a Quality Control Leader needs to understand in order to recognize typical risks, develop appropriate testing strategies and specify effective test cases. I consider this the backbone of Quality Control Engineering because, at the end of the day, no matter which test design techniques testers use at which test levels, and no matter which tools they use for which test types, the ultimate purpose is to estimate, control and monitor the following software quality characteristics.

Please note that when talking about software characteristics, we divide them into two categories: Functional Attributes and Technical Attributes.

Functional Attributes

Functional Accuracy:

  • Objective: test based on specified or implied functional requirements to evaluate whether the system gives the right answers and produces the right effects. Accuracy also refers to the right degree of precision in the results (Ex: computational accuracy)
  • Techniques: All black-box test design techniques can be used
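
To make this concrete, here is a minimal black-box accuracy check in Python. The monthly_payment function and the reference value are my own illustrative assumptions, not from any real system; the point is that the test compares the output against an independently computed expected result, with the required degree of precision made explicit:

    import math

    def monthly_payment(principal, annual_rate, months):
        # Hypothetical function under test: standard loan amortization formula.
        r = annual_rate / 12.0
        return principal * r / (1.0 - (1.0 + r) ** -months)

    def test_payment_accuracy():
        # Black-box oracle: an independently computed reference value,
        # checked with an explicit precision tolerance.
        expected = 1060.66  # assumed reference value for this example
        actual = monthly_payment(100000, 0.05, 120)
        assert math.isclose(actual, expected, abs_tol=0.01)

    test_payment_accuracy()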

Functional Suitability:

  • Objective: to evaluate whether the system solves the given problem and is appropriate to the intended tasks.
  • Techniques: Use case, exploratory testing.

Functional Interoperability:

  • Objective: to evaluate whether the system functions correctly in all intended environments. The environment includes not only the elements that the system must interoperate with directly, but also those it interoperates with indirectly or even simply cohabits with. Cohabitation means sharing computer resources (CPU, memory…) without working together.
  • Techniques:
    - Equivalence Partitioning: to determine the environment set when you know the possible interactions between one or more environments and one or more functions.
    - Pairwise and classification tree: to determine the environment set when you are not sure about the interactions and want to generate more arbitrary configurations.
    - Use-case testing in each selected configuration (see the sketch after this list)
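
As a small illustration of determining the environment set, the Python sketch below enumerates the full cartesian product of some assumed environment factors (the factor names and values are made up for this example). A pairwise tool would then reduce this set to the smallest one in which every pair of values still appears at least once:

    from itertools import product

    # Hypothetical environment factors, one representative value per
    # equivalence class.
    factors = {
        "os": ["Windows", "Linux", "Mac OS X"],
        "browser": ["IE", "Firefox"],
        "database": ["Oracle", "MySQL"],
    }

    # Full cartesian product: 3 x 2 x 2 = 12 configurations. Pairwise
    # selection would typically cut this down while still covering
    # every pair of factor values.
    configurations = [dict(zip(factors, combo))
                      for combo in product(*factors.values())]

    for config in configurations:
        print(config)  # each configuration drives one use-case test run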

Functional Security:

  • Objective: to evaluate the ability of the software to prevent unauthorized access
  • Techniques: Attack and Defect Taxonomies
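
For illustration, here is a minimal sketch of a functional security check against a hypothetical HTTP endpoint (the URL and token below are made up), verifying that unauthenticated and badly authenticated requests are rejected rather than served:

    import requests

    BASE = "https://example.com/api"  # hypothetical application endpoint

    def test_report_requires_authentication():
        # A request without credentials to a protected resource
        # should be refused.
        response = requests.get(BASE + "/reports/42")
        assert response.status_code in (401, 403)

    def test_invalid_token_is_rejected():
        headers = {"Authorization": "Bearer not-a-real-token"}  # assumed invalid
        response = requests.get(BASE + "/reports/42", headers=headers)
        assert response.status_code in (401, 403)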

Accessibility:

  • Objective: to evaluate whether the system can be used under particular requirements, restrictions or disabilities. These requirements often arise from national standards or from industry compliance imposed by law or by contract.
  • Techniques: Specification- and requirements-based testing used within a risk-based testing approach. Since compliance is often strictly obligated by law, it is usually not sufficient to test just a few representative fields or functions; every field and function might be required.
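
As one concrete example of covering every field rather than a sample, the sketch below scans an HTML page for img tags that lack an alt attribute, a common accessibility requirement. It uses Python's standard html.parser module; the sample page is made up:

    from html.parser import HTMLParser

    class AltTextChecker(HTMLParser):
        # Minimal accessibility scan: record every <img> tag that has
        # no alt attribute.
        def __init__(self):
            super().__init__()
            self.violations = []

        def handle_starttag(self, tag, attrs):
            if tag == "img" and "alt" not in dict(attrs):
                self.violations.append(self.getpos())  # (line, column)

    page = '<html><body><img src="logo.png"><img src="x.png" alt="chart"></body></html>'
    checker = AltTextChecker()
    checker.feed(page)
    print(checker.violations)  # every offending tag is reported, not just a sample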

Usability:

  • Objective: to evaluate whether users can work effectively and efficiently with the software and are satisfied with it
  • Techniques:
    - Inspection, evaluation and review
    - Use-case testing along with syntax and semantic tests
    - Survey or questionnaire

Technical Attributes

Technical Security:

  • Objective:
    - Technical Security differs from Functional Security in that it leverages technical knowledge and experience to take advantage of unintended side effects and bad assumptions in order to subvert or attack the software.
    - Here, we try to evaluate security vulnerabilities related to data access, functional privileges, the ability to insert malicious programs into the system, the ability to sniff or capture secret information, the ability to break encrypted traffic and the ability to deliver viruses or worms.
  • Techniques and tools:
    - Information retrieval
    - Vulnerability Scanning tools
    - Security attack techniques (dependency attacks, user interface attacks)
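
As a small illustration of attack techniques driven by a taxonomy, the sketch below feeds payloads from well-known vulnerability classes (SQL injection, cross-site scripting, path traversal, length abuse) to a made-up input sanitizer. In a real assessment the payloads would target the application's actual input-handling code:

    # Hypothetical sanitizer under test, standing in for the real
    # input-handling code of the application.
    def sanitize(user_input):
        return user_input.replace("'", "''")

    ATTACK_STRINGS = [
        "' OR '1'='1",                   # classic SQL injection probe
        "<script>alert(1)</script>",     # cross-site scripting probe
        "../../etc/passwd",              # path traversal probe
        "A" * 10000,                     # buffer/length abuse
    ]

    for payload in ATTACK_STRINGS:
        result = sanitize(payload)
        # Each payload exercises one vulnerability class from the
        # taxonomy; the tester inspects how the system reacts.
        print(repr(payload[:40]), "->", repr(result[:40]))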

Reliability:

  • Objective: to monitor software maturity and compare it against desired, statistically valid goals. Reliability is especially important for high-usage and safety-critical systems. Special types of reliability tests are robustness and recoverability testing.
  • Techniques:
    - Select an appropriate mathematical model from the Reliability Growth Models or Software Reliability Growth Models to monitor the software's increase or decrease in reliability (see the worked example after this list).
    - TAAF (Test, Analyze and Fix). Because of the “around-the-clock” nature of the testing process, reliability testing is almost always automated. It uses empirical test data gathered in a simulated real-life operational environment.
    - Recoverability testing (Ex: failover test, disaster recovery, backup/restore): to evaluate the system’s ability to recover from hardware or software failures in its environment.
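
Referring back to the growth models mentioned above, here is a worked sketch of the Goel-Okumoto model, one of the classic Software Reliability Growth Models. The parameter values below are assumed purely for illustration; in practice they are fitted to the observed failure data:

    import math

    # Goel-Okumoto model: the expected cumulative number of failures
    # found by test time t is m(t) = a * (1 - exp(-b*t)), where
    # a = total expected failures and b = the failure detection rate.
    a, b = 120.0, 0.05  # assumed parameters for this example

    def expected_failures(t):
        return a * (1.0 - math.exp(-b * t))

    def failure_intensity(t):
        # Derivative of m(t): failures per unit of test time.
        return a * b * math.exp(-b * t)

    for t in (10, 50, 100):
        print(t, round(expected_failures(t), 1), round(failure_intensity(t), 2))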

Efficiency:

  • Objective: to evaluate whether the system has acceptable response times and resource usage.
  • Techniques:
    - Reviews and static analysis before and during the design and implementation phases
    - Performance testing: to evaluate response times within a specified period of time and under various legal (valid) conditions.
    - Load testing: to see how the system behaves under different levels of load, usually focusing on realistic or anticipated loads.
    - Stress testing: to push the load to the extreme and beyond in order to determine the system’s limits and observe its degradation behavior at or above the maximum load.
    - Scalability testing: to take stress testing further by finding the bottlenecks and then estimating the system’s ability to be enhanced to resolve them (see the load-test sketch after this list).
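
Here is a minimal load-test sketch in Python. The transaction function is a stand-in for one real user action (for example an HTTP request against the system under test); the harness ramps the number of concurrent virtual users and reports the 95th-percentile response time so that degradation becomes visible:

    import time
    import random
    from concurrent.futures import ThreadPoolExecutor

    def transaction():
        # Stand-in for one user action; replace with a real call
        # against the system under test.
        time.sleep(random.uniform(0.01, 0.05))

    def timed_transaction(_):
        start = time.perf_counter()
        transaction()
        return time.perf_counter() - start

    # Ramp the concurrency level; stress testing would push this well
    # past the expected maximum load.
    for users in (5, 20, 50):
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = sorted(pool.map(timed_transaction, range(users * 10)))
        p95 = times[int(len(times) * 0.95)]
        print(f"{users} users: 95th percentile = {p95 * 1000:.0f} ms")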

Maintainability:

  • Objective: to evaluate the ability to update, modify, reuse and test the system.
  • Techniques:
    - Static analysis and reviews (maintainability defects are usually found with code analysis tools and with design and code walk-throughs)
    - Test updates, patches, upgrades and migration
    - Collect project and production metrics (Ex: number of regression test failures, bug closure periods, test cycle duration) to determine the analyzability, stability and testability of the system (a small example follows this list).
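
As a small example of the metrics idea, the sketch below computes closure-time statistics from hypothetical defect records as they might be pulled from a bug tracker; long closure periods hint at poor analyzability:

    from datetime import date

    # Hypothetical defect records: (opened, closed) dates.
    defects = [
        (date(2009, 6, 1), date(2009, 6, 3)),
        (date(2009, 6, 5), date(2009, 7, 20)),
        (date(2009, 6, 10), date(2009, 6, 12)),
    ]

    closure_days = [(closed - opened).days for opened, closed in defects]
    print("mean closure time:", sum(closure_days) / len(closure_days), "days")
    print("worst closure time:", max(closure_days), "days")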

Portability:

  • Objective: to evaluate the ability of the system to be installed in, used in and moved to various environments.
  • Techniques:
    - Equivalence Partitioning, Pairwise, Classification Tree, Decision Table, State Transition
    - Installability testing: install the software using its standard installation, update and patch facilities on its target environments. The purpose is to check the installation instructions and user’s manual and to observe failures during installation/uninstallation (a state-transition sketch for the installation life cycle follows this list).
    - Coexistence testing: to check whether one or more systems that work in the same environment do so without conflict.
    - Replaceability testing: to check whether we can exchange our software components for third-party ones.
    - Adaptability testing: execute test cases to evaluate Functional Interoperability. The techniques are the same as those in the Functional Interoperability section above.
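
To illustrate the state-transition technique applied to the installation life cycle, here is a minimal sketch: the valid transitions form the model, each test walks one path through it, and an illegal step fails immediately. The states and actions are assumed for illustration:

    # Valid transitions of the installation life cycle:
    # (current state, action) -> next state.
    TRANSITIONS = {
        ("absent", "install"): "installed",
        ("installed", "update"): "installed",
        ("installed", "uninstall"): "absent",
    }

    def walk(path):
        state = "absent"
        for action in path:
            key = (state, action)
            assert key in TRANSITIONS, f"illegal action {action!r} in state {state!r}"
            state = TRANSITIONS[key]
        return state

    # A legal path: install, then update, then uninstall.
    print(walk(["install", "update", "uninstall"]))  # -> absent
    # An illegal path (updating software that was never installed):
    # walk(["update"])  # raises AssertionError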
