
Download and read the attached document and answer all questions in it. Please see the attached H6 & APA Criteria document.

Testing is a crucial phase in the SDLC. The testing phase also comprises a diverse set of tools and techniques. Modules 6, 7, and 8 are dedicated to software testing and analysis. In this homework, you will compare software security analysis tools and techniques.

In the last module, you read Chapter 14 of Conklin & Shoemaker. In this module, you have been reading Chapters 15 and 16. These chapters cover a variety of software security analysis tools and methods, including, but not limited to:

· Static Code Analysis

· Dynamic Code Analysis

· Peer Review

· Quality Assurance Testing

· Penetration Testing

· Fuzzing

Question 1

Briefly describe each method.

Question 2

Compare static and dynamic code analysis methods.

Question 3

What is the main difference between static & dynamic code analysis and penetration testing & fuzzing? Describe.

Question 4

How does the peer review process differ from other processes in the list? Describe.

Question 5

How does Quality Assurance Testing differ from the other processes in the list? Describe.


CHAPTER 16

Security Testing

In this chapter you will

•   Explore the different types of security tests

•   Learn about using scanning and penetration testing to find vulnerabilities

•   Examine fuzz testing for vulnerabilities

•   Examine security models used to implement security in systems

•   Explore the types of adversaries associated with software security

When testing for vulnerabilities, a variety of techniques can be used to examine the software under development. From generalized forms of testing, such as scanning and fuzzing, to more specific methods, such as penetration testing and cryptographic testing, different tools and methods can provide insights as to the locations and levels of security vulnerabilities in the software.

Scanning

Scanning is the automated enumeration of specific characteristics of an application or network. These characteristics can be of many different forms, from operating characteristics to weaknesses or vulnerabilities. Network scans can be performed for the sole purpose of learning what network devices are available and responsive. Systems can be scanned to determine the specific operating system (OS) in place, a process known as OS fingerprinting. Vulnerability scanners can scan applications to determine if specific vulnerabilities are present.

Scanning can be used in software development to characterize an application on a target platform. It can provide the development team with a wealth of information as to how a system will behave when deployed into production. There are numerous security standards, including the Payment Card Industry Data Security Standard (PCI DSS), that have provisions requiring the use of scanners to identify weaknesses and vulnerabilities in enterprise platforms. The development team should take note that enterprises will be scanning the application as installed in the enterprise. Gaining an understanding of the footprint and security implications of an application before shipping will help the team to identify potential issues before they are discovered by customers.

Scanners have been developed to search for a variety of specific conditions. There are scanners that can search code bases for patterns that are indicative of elements of the OWASP Top 10 and the SANS Top 25 lists. There are scanners tuned to produce reports for PCI and Sarbanes-Oxley (SOX) compliance. A common mitigation for several regulatory compliance programs is a specific set of scans against a specified set of vulnerabilities.
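To make the idea concrete, the following is a minimal sketch, in Python using only the standard library, of what the simplest kind of network scan does: enumerate which TCP ports on a host respond. The host and port list are illustrative only, and real scanners such as those used for PCI DSS compliance do far more, but the enumeration principle is the same.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a TCP connection to each port and report which ones respond."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port is open)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Only scan hosts you own or are authorized to test; localhost is used here
    # purely as a harmless example.
    print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```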

Attack Surface Analyzer

Microsoft has developed and released a tool called the attack surface analyzer, which is designed to measure the security impact of an application on a Windows environment. Acting as a sophisticated scanner, the tool can detect the changes that occur to the underlying Windows OS when an application is installed. Designed to specifically look for and alert on issues that have been shown to cause security weaknesses, the attack surface analyzer enables a development team or an end user to

•   View changes in the Windows attack surface resulting from the installation of the application

•   Assess the aggregate attack surface change associated with the application in the enterprise environment

•   Evaluate the risk to the platform where the application is proposed to exist

•   Provide incident response teams detailed information associated with a Windows platform

One of the advantages of the attack surface analyzer is that it operates independently of the application that is under test. The attack surface analyzer scans the Windows OS environment and provides actionable information on the security implications of an application when installed on a Windows platform. For this reason, it is an ideal scanner for final security testing as part of the secure development lifecycle (SDL) for applications targeted to Windows environments.

Penetration Testing

Penetration testing, sometimes called pen testing, is an active form of examining the system for weaknesses and vulnerabilities. While scanning activities are passive in nature, penetration testing is more active. Vulnerability scanners operate in a sweep, looking for vulnerabilities using limited intelligence; penetration testing harnesses the power of human intellect to make a more targeted examination. Penetration testers attack a system using information gathered from it and expert knowledge in how weaknesses can exist in systems. Penetration testing is designed to mimic the attacker’s ethos and methodology, with the objective of finding issues before an adversary does. It is a highly structured and systematic method of exploring a system and finding and attacking weaknesses.

Penetration testing is a very valuable part of the SDL process. It can dissect a program and determine if the planned mitigations are effective or not. Pen testing can discover vulnerabilities that were not thought of or mitigated by the development team. It can be done with a white-, black-, or grey-box testing mode.

Penetration Testing

Penetration testing is a structured test methodology. The following are the basic steps employed in the process:

1.   Reconnaissance (discovery and enumeration)

2.   Attack and exploitation

3.   Removal of evidence

4.   Reporting

The penetration testing process begins with specific objectives being set out for the tester to explore. For software under development, these could be input validation vulnerabilities, configuration vulnerabilities, and vulnerabilities introduced to the host platform during deployment. Based on the objectives, a test plan is created and executed to verify that the software is free of known vulnerabilities. As the testers probe the software, they take notes of the errors and responses, using this information to shape subsequent tests.

Penetration testing is a slow and methodical process, with each step and results being validated. The records of the tests should demonstrate a reproducible situation where the potential vulnerabilities are disclosed. This information can give the development team a clear picture of what was found so that the true root causes can be identified and fixed.

Fuzzing

Fuzz testing is a brute-force method of addressing input validation issues and vulnerabilities. The basis for fuzzing a program is the application of large numbers of inputs to determine which ones cause faults and which ones might be vulnerable to exploitation. Fuzz testing can be applied anywhere data is exchanged to verify that input validation is being performed properly. Network protocols can be fuzzed, file protocols can be fuzzed, web protocols can be fuzzed. The vast majority of browser errors are found via fuzzing.

Fuzz testing works well in white-, black-, or grey-box testing, as it can be independent of the specifics of the application under test. Fuzz testing works by sending a multitude of input signals and seeing how the program handles them. Specifically, malformed inputs can be used to vary parser operation and to check for memory leaks, buffer overflows, and a wide range of input validation issues. Since input validation errors are one of the top issues in software vulnerabilities, fuzzing is the best method of testing against these issues, such as cross-site scripting and injection vulnerabilities.

There are several ways to classify fuzz testing. One set of categories is smart and dumb, indicating the type of logic used in creating the input values. Smart testing uses knowledge of what could go wrong and creates malformed inputs with this knowledge. Dumb testing just uses random inputs. Another set of terms used to describe fuzzers is generation-based and mutation-based.


EXAM TIP     Fuzz testing is a staple of SDL-based testing, finding a wide range of errors with a single test method.

Generation-based fuzz testing uses the specifications of input streams to determine the data streams that are to be used in testing. Mutation-based fuzzers take known good traffic and mutate it in specific ways to create new input streams for testing. Each of these has its advantages, and the typical fuzzing environment involves both used together.
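A minimal mutation-based fuzzer can be sketched in a few lines of Python. The example below assumes a hypothetical parse function under test; it mutates a known-good seed input and records any input that produces an unexpected exception, which is the kind of symptom that points to an input validation gap.

```python
import random

def mutate(seed: bytes, n_flips: int = 3) -> bytes:
    """Return a copy of the seed input with a few randomly chosen bytes altered."""
    data = bytearray(seed)
    for _ in range(n_flips):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

def fuzz(parse, seed: bytes, iterations: int = 10_000):
    """Feed mutated inputs to the parser and record any that raise unexpected errors."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse(candidate)          # the hypothetical function under test
        except ValueError:
            pass                      # expected rejection of malformed input
        except Exception as exc:      # anything else hints at a validation gap
            crashes.append((candidate, exc))
    return crashes
```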

Simulation Testing

Simulation testing involves testing the application in an environment that mirrors the associated production environment. Examining how configuration issues affect the program's outcome is important. Data issues that can result in programmatic instability can also be investigated in the simulated environment.

Setting up and starting an application can be time consuming and expensive. When developing a new application, considering the challenges associated with instantiating the system can be important with respect to customer acceptance. Simple applications may have simple setups, but complex applications can have significant setup issues. Simulation testing can go a long way toward discovering issues associated with the instantiation of an application and its operation in the production environment.

Simulation testing can provide that last testing line of defense to ensure the system is properly functioning prior to deployment. This is an opportunity to verify that the interface with the OS is correct and that roles are properly configured to support access and authorization. It also checks that firewall rules (or other enforcement points) between tiers/environments are properly documented, configured, and tested to ensure that attack surface/exposure is managed. Other benefits of simulation testing include validating that the system itself can stand up to the rigors of production performance—for example, using load testing to “beat up” the application to ensure availability is sustainable and that the controls don’t “break” when the load reaches a particular threshold.
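As a rough illustration of the load-testing idea mentioned above, the Python sketch below fires a batch of concurrent requests at an endpoint and reports the fraction that succeed. The endpoint name is hypothetical, and a production load test would use a dedicated tool, but the structure shows how a capacity threshold can be probed.

```python
import concurrent.futures
import urllib.request

def hit(url: str) -> int:
    """Issue one request and return the HTTP status code (or 0 on failure)."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except Exception:
        return 0

def load_test(url: str, requests: int = 200, workers: int = 20) -> float:
    """Return the fraction of requests that completed successfully under load."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(hit, [url] * requests))
    return sum(1 for code in results if code == 200) / requests

# Example (hypothetical staging endpoint):
# print(load_test("http://staging.example.internal/healthz"))
```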

Testing for Failure

Not all errors in code result in failure. Not all vulnerabilities are exploitable. During the testing cycle, it is important to identify errors and defects, even those that do not cause a failure. Although a specific error, say one in dead code that is never executed, may not cause a failure in the current version, this same error may become active in a later version and result in a failure. Leaving an error such as this alone or leaving it for future regression testing is a practice that can cause errors to get into production code.

Although most testing is for failure, it is equally important to test for conditions that result in incorrect values, even if they do not result in failure. Incorrect values have resulted in the loss of more than one spacecraft in flight; even though the defect did not cause the program itself to fail, it did result in system failure. A common way to test for failure conditions is load testing, where the software is tested for capacity issues. Understanding how the software functions under heavy load conditions can reveal memory issues and other scale-related issues. These elements can cause failure in the field, and thus extensive testing for these types of known software issues is best conducted early in the development process, where issues can be addressed prior to release.

Cryptographic Validation

Having secure cryptography is easy: Use approved algorithms and implement them correctly and securely. The former is relatively easy—pick the algorithm from a list. The latter is significantly more difficult. Protecting the keys and the seed values, and ensuring proper operational conditions are met, have proven to be challenging in many cases. Other cryptographic issues include proper random number generation and key transmission.

Cryptographic errors come from several common causes. One typical mistake is choosing to develop your own cryptographic algorithm. Developing a secure cryptographic algorithm is far from an easy task, and even when done by experts, weaknesses can emerge that render the algorithm unusable. Cryptographic algorithms become trusted after years of scrutiny and attacks, and any new algorithm would take years to join the trusted set. If you instead decide to rely on secrecy, be warned that secret or proprietary algorithms have never provided the desired level of protection. One of the axioms of cryptography is that security through obscurity has never worked in the long run.

Deciding to use a trusted algorithm is a proper start, but several major errors can still occur. The first is an error in instantiating the algorithm. An easy way to avoid this type of error is to use a library function that has already been properly tested. Sources of these library functions abound and provide an economical way to obtain this functionality. Given an algorithm and a proper instantiation, the next item needed is a random number with which to generate a random key.
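For example, rather than implementing a cipher by hand, a vetted library can do the work. The short sketch below assumes the third-party Python cryptography package (pip install cryptography); the library selects the algorithm and handles keys, IVs, padding, and authentication internally, so none of it is hand-rolled.

```python
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# The library generates the key and performs authenticated encryption; the
# application never touches algorithm internals.
key = Fernet.generate_key()
token = Fernet(key).encrypt(b"sensitive payload")
assert Fernet(key).decrypt(token) == b"sensitive payload"
```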

The generation of a truly random number is not a trivial task. Computers are machines renowned for reproducing the same output when given the same input, so generating a string of pure, nonreproducible random numbers is a challenge. There are functions for producing random numbers built into the libraries of most programming languages, but these are pseudo-random number generators: although the distribution of output numbers appears random, they generate a reproducible sequence. Given the same input, a second run of the function will produce the same sequence of “random” numbers. Determining the seed and random sequence and using this knowledge to “break” a cryptographic function has been used more than once to bypass security. This method was used to subvert an early version of Netscape’s Secure Sockets Layer (SSL) implementation. An error in the Debian instantiation of OpenSSL resulted in poor seed generation, which then resulted in a small set of random values.


EXAM TIP    Cryptographically random numbers are essential in cryptosystems and are best produced through cryptographic libraries.

Using a number that is cryptographically random and suitable for an encryption function resolves the random seed problem, and again, the use of trusted library functions designed and tested for generating such numbers is the proper methodology. Trusted cryptographic libraries typically include a cryptographic random number generator.
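The difference matters in practice. In Python, for instance, the standard random module is a pseudo-random generator whose output can be reproduced from its seed, while the secrets module draws from the operating system's cryptographic random source. The short sketch below contrasts the two.

```python
import random
import secrets

# random is a pseudo-random generator: seeding it reproduces the same sequence,
# which is exactly the property an attacker can exploit.
random.seed(42)
predictable = [random.randrange(256) for _ in range(4)]
random.seed(42)
assert predictable == [random.randrange(256) for _ in range(4)]

# secrets draws from the OS cryptographic random source and is suitable for keys.
key_material = secrets.token_bytes(32)
```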

Poor key management has failed many a cryptographic implementation. In one famous exploit, hackers obtained cryptographic keys from an executable and used them to break DVD encryption and develop the DeCSS program. Tools have been developed that can search code for “random” keys and extract them from the code or running process. The bottom line is simple: Do not hard-code secret keys in your code. They can, and will, be discovered. Keys should be generated and then passed by reference, minimizing the travel of copies across a network or application. Storing them in memory in a noncontiguous fashion is also important to prevent external detection.
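A minimal sketch of the "do not hard-code keys" guidance, assuming a Python application and an illustrative environment variable name, looks like this:

```python
import os

# Anti-pattern: a key embedded in the source can be recovered from the binary
# or the repository history.
# SECRET_KEY = b"hard-coded-key-do-not-do-this"

# Preferred: obtain the key at runtime from the environment, a key management
# service, or a protected configuration store (the variable name is illustrative).
key = os.environ.get("APP_ENCRYPTION_KEY")
if key is None:
    raise RuntimeError("encryption key not provided by the environment")
```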

FIPS 140-2

FIPS 140-2 is a prescribed standard, part of the Federal Information Processing Standards series, that relates to the implementation of cryptographic functions. FIPS 140-2 deals with issues such as the selection of approved algorithms (for example, AES, RSA, and DSA). It also addresses the environment where the cryptographic functions are used, as well as the means of implementation.


EXAM TIP    FIPS 140-2 specifies requirements, specifications, and testing of cryptographic systems for the U.S. federal government.

Regression Testing

Software is a product that continually changes and improves over time. Multiple versions of software can have different and recurring vulnerabilities. Anytime that software is changed, whether by configuration, patching, or new modules, the software needs to be tested to ensure that the changes have not had an adverse impact on other aspects of the software. Regression testing is a minor element early in a product’s lifecycle, but as a product gets older and has advanced through multiple versions, including multiple customizations, etc., the variance between versions can make regression testing a slow, painful process.

Regression testing is one of the most time-consuming issues associated with patches for software. Patches may not take long to create—in fact, in some cases, the party discovering the issue may provide guidance on how to patch. But before this solution can be trusted across multiple versions of the software, regression testing needs to occur. When software is “fixed,” several things can happen. First, the fix may cause a fault in some other part of the software. Second, the fix may undo some other mitigation at the point of the fix. Third, the fix may repair a special case (entering a letter instead of a number, for example) but miss the general case of entering any non-numeric value. The list of potential issues can go on, but the point is that when a change is made, the stability of the software must be checked.

Regression testing is not as simple as completely retesting everything—this would be too costly and inefficient. Depending upon the scope and nature of the change, an appropriate regression test plan needs to be crafted. Simple changes to a unit may only require a level of testing be applied to the unit, making regression testing fairly simple. In other cases, regression testing can have a far-reaching impact across multiple modules and use cases. A key aspect of the patching process is determining the correct level, breadth, and scope of regression testing that is required to cover the patch.
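As an illustration of how a fixed defect can be pinned down so regression testing catches its return, the sketch below uses the pytest test runner and a hypothetical parse_quantity function from a hypothetical orders module. It covers both the originally reported special case (a letter instead of a number) and the general case of any non-numeric value.

```python
import pytest  # assumes the pytest test runner is installed

from orders import parse_quantity  # hypothetical module containing the fixed code

def test_rejects_letter_input():
    """The specific case originally reported as a bug."""
    with pytest.raises(ValueError):
        parse_quantity("a")

@pytest.mark.parametrize("bad", ["", "-1", "1.5kg", "NaN", "'; DROP TABLE orders;--"])
def test_rejects_general_non_numeric_input(bad):
    """The general case: any non-numeric value must be rejected, not just letters."""
    with pytest.raises(ValueError):
        parse_quantity(bad)
```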

Specialized reports, such as delta analysis and historical trending reports, can assist in regression testing efforts. These canned report types are available in a variety of application security test tools. When regular scan and reporting cycles are in place, remediation meetings use these reports so the security tester can analyze findings and work with teams to fix the vulnerabilities associated with each release—release 1 vs. release 2—or even across the application’s release lifetime (comparing releases 1, 2, 3, and so on).

Impact Assessment and Corrective Action

Bugs found during software development are scored based on impact. During the course of development, numerous bugs are recorded in the bug tracking system. As part of the bug clearing or corrective action process, a prioritization step determines which bugs get fixed and when. Not all bugs are exploitable, and among those that are exploitable, some have a greater impact on the system. In an ideal world, all bugs would be resolved at every stage of the development process. In the real world, however, some errors are too hard (or expensive) to fix, and the risk associated with them does not support the level of effort required to fix them in the current development cycle. If a bug requires a major redesign, then the cost can be high. If the bug is critical to the success or failure of the system, then resolving it becomes necessary. If it is inconsequential, then resolution may be postponed until the next major update and redesign opportunity.
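One simple, purely illustrative way to express this triage is to order recorded bugs by exploitability, impact, and fix cost. Real programs typically use CVSS scores or an internal bug bar; the Python sketch below only conveys the prioritization logic, and the records are hypothetical.

```python
# Hypothetical triage records; fields and weights are illustrative only.
bugs = [
    {"id": "BUG-101", "exploitable": True,  "impact": 9, "fix_cost": 2},
    {"id": "BUG-102", "exploitable": False, "impact": 3, "fix_cost": 1},
    {"id": "BUG-103", "exploitable": True,  "impact": 7, "fix_cost": 8},
]

def priority(bug):
    """Crude ordering: exploitable, high-impact, cheap-to-fix bugs come first."""
    return (bug["exploitable"], bug["impact"], -bug["fix_cost"])

for bug in sorted(bugs, key=priority, reverse=True):
    print(bug["id"])
```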

Chapter Review

In this chapter, different types of security tests were presented. Scanning was presented as a means of characterizing and identifying vulnerabilities. While scanning tends to be broad in scope, the next technique, penetration testing, tends to be very specific in its methods of finding vulnerabilities. The next method, fuzzing, is specific in its target, but very general in its method of testing, finding a wide range of problems. Simulation testing is where the application is tested in a simulated production environment to find operational errors.

Testing for failures is important, but so is testing for errors that cause incorrect values but not failure. Cryptographic systems can be complex and difficult to implement properly. Testing the areas of failure associated with cryptographic systems was covered. Testing various versions of software is referred to as regression testing. The chapter closed by examining the impact of a bug and how this is used in prioritizing corrective actions.

Quick Tips

•   Scanning is the automated enumeration of specific characteristics of an application or network.

•   Penetration testing is an active form of examining the system for weaknesses and vulnerabilities.

•   Fuzz testing is a brute-force method of addressing input validation issues and vulnerabilities.

•   Simulation testing involves testing the application in an environment that mirrors the associated production environment.

•   Although most testing is for failure, it is equally important to test for conditions that result in incorrect values, even if they do not result in failure.

•   Only approved cryptographic algorithms should be used; creating your own cryptography is a bad practice.

•   Testing various versions of software is referred to as regression testing.

•   Bugs are measured in terms of their impact on the system, and this impact can be used to prioritize corrective action efforts.


CHAPTER 15

Security Quality Assurance Testing

In this chapter you will

•   Explore the aspects of testing software for security

•   Learn about standards for software quality assurance

•   Discover the basic approaches to functional testing

•   Examine types of security testing

•   Explore the use of the bug bar and defect tracking in an effort to improve the SDL process

Testing is a critical part of any development process and testing in a secure development lifecycle (SDL) environment is an essential part of the security process. Designing in security is one step, coding is another, and testing provides the assurance that what was desired and planned becomes reality. Validation and verification have been essential parts of quality efforts for decades, and software is no exception. This chapter looks at how and what to test to obtain an understanding of the security posture of software.

Standards for Software Quality Assurance

Quality is defined as fitness for use according to certain requirements. This can be different from security, yet there is tremendous overlap in the practical implementation and methodologies employed. In this regard, lessons can be learned from international quality assurance standards: although their goals are more expansive than security alone, their lessons apply to security as well.

ISO 9126

The International Standard ISO/IEC 9126 provides guidance for establishing quality in software products. With respect to testing, this standard focuses on a quality model built around functionality, reliability, and usability. Additional issues of efficiency, maintainability, and portability are included in the quality model of the standard. With respect to security and testing, it is important to remember the differences between quality and security. Quality is defined as fitness for use, or conformance to requirements. Security is less cleanly defined, but can be specified by requirements. One issue addressed by the standard is the human side of quality, where requirements can shift over time or be less clear than the development team needs in order to address them properly. These are common issues in all projects, and the standard works to ensure a common understanding of the goals and objectives of the project as described by the requirements. This information is equally applicable to security concerns and requirements.

SSE-CMM

The Systems Security Engineering Capability Maturity Model (SSE-CMM) is also known as ISO/IEC 21827, and is an international standard for the secure engineering of systems. The SSE-CMM addresses security engineering activities that span the entire trusted product or secure system lifecycle, including concept definition, requirements analysis, design, development, integration, installation, operations, maintenance, and decommissioning. The SSE-CMM is designed to be employed as a tool to evaluate security engineering practices and assist in the definition of improvements to them. The SSE-CMM is organized into processes and corresponding maturity levels. There are 11 processes that define what needs to be accomplished by security engineering. The maturity level is a standard CMM metric representing how well each process achieves a set of goals. As a model, the SSE-CMM has become a de facto standard for evaluating security engineering capability in an organization.

OSSTMM

The Open Source Security Testing Methodology Manual (OSSTMM) is a peer-reviewed system describing security testing. OSSTMM provides a scientific methodology for assessing operational security built upon analytical metrics. It is broken into five sections: data networks, telecommunications, wireless, human, and physical security, as shown in Table 15-1. The purpose of the OSSTMM is to create a system that can accurately characterize the security of an operational system in a consistent and reliable fashion.


Table 15-1 OSSTMM Sections and Test/Audit Areas

OSSTMM provides a scientific methodology that can be used in the testing of security. The Institute for Security and Open Methodologies (ISECOM), the developer of OSSTMM, has developed a range of training classes built around the methodology. OSSTMM can also be used to assist in auditing, as it highlights what is important to verify with respect to functional operational security.

Testing Methodology

Testing software during its development is an integral part of the development process. Developing a test plan, a document detailing a systematic approach to testing a system such as a machine or software, is the first step. The plan begins with the test strategy, an outline that describes the overall testing approach. From this plan, test cases are created. A test case is designed to answer the question, “What am I going to test, and what will correct look like?” A document enumerating these cases includes information such as a unique test identifier, links to requirement references from design specifications, notes on any preconditions, references to the test harnesses and scripts required, and any additional notes.

A test harness is a means of documenting the software, tools, samples of data input and output, and configurations used to complete a set of tests. The individual steps of the tests can be encoded in a series of test scripts. Test scripts are important for several reasons. They replicate user actions, and by automating the series of steps (also known as actions) to follow, including inputs, they remove the errors that could occur with manual testing. They can also be automated to collect outputs and compare the returned values to expected results, improving the speed and accuracy of test interpretation.

The tests are sometimes grouped in collections referred to as test suites. It is common to have test suites for specific functionality cases, such as security, user inputs, boundary condition checking, databases, and so on. Grouping tests into suites makes management easier and promotes reuse rather than continual redevelopment of the same types of materials.
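To show how a documented test case maps onto an automated test script, the following minimal Python sketch uses illustrative identifiers, requirement references, and a stand-in validation routine; a real harness would also record configuration and capture outputs for the test report.

```python
def run_test_case(func, inputs, expected):
    """A tiny harness step: execute the function under test and compare results."""
    actual = func(*inputs)
    return actual == expected, actual

# Test case TC-SEC-042 (identifier, requirement link, and data are illustrative):
# requirement: REQ-7.3 "reject passwords shorter than 12 characters"
# precondition: password policy loaded with default settings
passed, actual = run_test_case(
    func=lambda pw: len(pw) >= 12,   # stand-in for the real validation routine
    inputs=("short",),
    expected=False,
)
assert passed, f"TC-SEC-042 failed: got {actual!r}"
```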

Functional Testing

Functional software testing is performed to assess the level of functionality associated with the software as expected by the end user. Functional testing is used to determine compliance with requirements in the areas of reliability, logic, performance, and scalability. Reliability measures whether the software functions as expected by the customer at all times. It is not just a measure of availability, but of functionally complete availability. Resiliency is a measure of how strongly the software can perform when it is under attack by an adversary.

Steps for Functional Testing

Functional testing involves the following steps, in order (a minimal worked example follows the list):

1.   Identifying the functions (requirements) that the software is expected to perform

2.   Creating input test data based on the function’s specifications

3.   Determining expected output test results based on the function’s specifications

4.   Executing the test cases corresponding to functional requirements

5.   Comparing actual and expected outputs to determine functional compliance
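A minimal worked example of these five steps, using a hypothetical discount requirement, might look like the following Python sketch; the requirement, input data, and expected values are illustrative only.

```python
# Step 1: function under test (hypothetical requirement: 10% discount for orders >= 100).
def discount(total: float) -> float:
    return round(total * 0.9, 2) if total >= 100 else total

# Steps 2-3: input test data and expected outputs derived from the specification.
cases = [(50.0, 50.0), (100.0, 90.0), (250.0, 225.0)]

# Steps 4-5: execute each case and compare actual vs. expected output.
for given, expected in cases:
    actual = discount(given)
    assert actual == expected, f"discount({given}) returned {actual}, expected {expected}"
```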

Unit Testing

Unit testing is conducted by developers as they develop the code. This is the first level of testing and is essential to ensure that logic elements are correct and that the software under development meets the published requirements. Unit testing is essential to the overall stability of the project, as each unit must stand on its own before being connected together. At a minimum, unit testing will ensure functional logic, understandable code, and a reasonable level of vulnerability control and mitigation.


EXAM TIP    One of the principal advantages of unit testing is that it is done by the development team and catches errors early, before they leave the development phase.

Integration or Systems Testing

Even if each unit tests properly per the requirements and specifications, a system is built up of many units that work together to achieve a business objective. There are emergent properties that occur in systems, and integration (or systems-level) testing should be designed to verify that the correct form and level of the emergent properties exist in the system.
