Summary
Topic Summary
Purpose, Scope, and FI Context for Penetration Testing
Penetration Testing Cadence and Triggers Based on Criticality and Exposure
Lifecycle Phases: Planning to Discovery, Attack, Reporting, and Retest
Types of Penetration Tests by Target Technology and Attack Surface
Penetration Testing Styles: Blackbox, Greybox, and Whitebox
Penetration Testing vs Vulnerability Assessment: What Each Activity Produces
Tester Selection Criteria and Professional Certification Expectations
External Frameworks and Standardized References for Findings Interpretation
Key Insights
Internet Exposure Drives Test Depth
The guidelines imply that annual testing for internet-accessible systems is not just about frequency; it is about ensuring the organization repeatedly validates the specific attack paths that are realistically reachable by outsiders. Because blackbox realism is bounded by what can be breached from the outside, internet exposure effectively sets a ceiling on what you can learn unless you deliberately add other styles or follow-up testing.
Why it matters: This reframes cadence as a risk-to-visibility mechanism: the more exposed the system, the more the test must repeatedly prove that external controls still block real attack chains.
Style Choice Changes What You Can Learn
A key implication is that blackbox, greybox, and whitebox are not interchangeable “levels of effort”; they produce different knowledge outcomes. Blackbox may miss internal or post-authenticated weaknesses if the perimeter cannot be breached, while greybox can efficiently validate the impact of privilege gained, and whitebox can uncover logical or design flaws that are hard to infer from external behavior alone.
Why it matters: Students often treat test styles as coverage substitutes; this insight forces them to treat styles as complementary lenses that target different failure modes and discovery limits.
Follow-Up Testing Is a Pipeline
The content implies a workflow pattern: discovery in one test type should trigger targeted testing in another. For example, if network testing reveals an exposed administrative web console, the guidelines suggest performing a web application test on that service rather than stopping at the network finding.
Why it matters: This changes understanding from “run one test per scope” to “build an attack-surface pipeline,” where each phase/type expands the next phase’s hypotheses and evidence.
Retest Validates Remediation, Not Reports
Because the lifecycle explicitly includes retest and ties it to reporting outcomes and remediation, the implied goal is to verify that fixes actually close the exploitable path, not merely that a vulnerability was “patched.” This means the reporting artifacts should be structured to support measurable retest criteria across phases.
Why it matters: It shifts success criteria from documentation quality to control effectiveness over time, making students think like auditors of remediation rather than producers of findings.
Standard Taxonomies Guide Focus, Not Answers
The guidelines imply that CVE/CVSS/CWE/CAPEC are decision-support tools that help prioritize and contextualize testing, but they do not replace exploitation-focused validation. Since vulnerability assessment can rank issues broadly via automation, standardized references mainly help align what you test and how you interpret it, while penetration testing determines whether and how security can be circumvented under real attack conditions.
Why it matters: This prevents a common misconception that standardized identifiers automatically prove exploitability; instead, it teaches students to use taxonomies to steer testing hypotheses and interpret results.
Conclusions
Bringing It All Together
Key Takeaways
- Penetration testing is a controlled attack simulation with explicit rules of engagement, and it is only meaningful when tied to purpose and scope for the financial institution.
- The penetration testing lifecycle (planning through retest) turns findings into risk reduction by validating that remediation actually closes the exploited paths.
- Penetration testing styles (blackbox, greybox, whitebox) are selected based on information access, which changes what the tester can reach and therefore what weaknesses can be observed.
- Types of penetration tests (network, web, mobile, API, and other environments) must match the target technology, and results can drive follow-up testing across layers.
- Standardized references (CVE/CVSS/CWE/CAPEC) provide consistent classification and context so that findings can be prioritized and mapped to likely attacker techniques.
Real-World Applications
- Schedule penetration testing at least once every year for systems directly accessible from the internet to validate that security controls still withstand realistic attack paths.
- When a network test discovers an exposed administrative web console, immediately expand the engagement with a web application test focused on that exposed service.
- Use greybox testing by providing limited information such as login credentials to evaluate privilege gained and the impact of authenticated attack scenarios.
- For high-value or complex environments, adopt whitebox testing by providing architecture documentation and source code to increase coverage of logical weaknesses and defensive gaps.
Next, the student should learn how to operationalize these guidelines into an end-to-end testing program: defining rules of engagement and scope, selecting the right style and test types per asset and exposure, and integrating standardized references into reporting and remediation workflows. They should also learn how penetration testing fits alongside other security activities (for example, vulnerability assessment and related external frameworks) so that each activity has a clear role without being confused with the others.
Interactive Lesson
Interactive Lesson: Penetration Testing Guidelines 2.0 for Financial Institutions in Singapore (ABSG, Apr 2024)
⏱️ 30 min
Learning Objectives
- Explain penetration testing as a controlled real-world attack simulation and distinguish it from vulnerability assessment.
- Describe the phased lifecycle of penetration testing from planning through discovery, attack, reporting, and retest.
- Choose appropriate penetration testing styles (blackbox, greybox, whitebox) based on information access and expected coverage.
- Select penetration testing types (network, web, mobile, API, and others) based on target technologies and attack surface.
- Apply penetration testing cadence rules using system criticality and exposure, including internet-accessible systems and major changes.
1. Penetration testing as controlled real-world attack simulation
Penetration testing mimics real-world attacks to identify how security features of an application, system, or network can be circumvented under agreed rules of engagement. It is executed by a tester (or team) and produces results that represent a point in time, based on the tester’s actions and access.
Examples:
- Good practice example: conduct penetration testing at least once every year for systems directly accessible from the internet.
- Follow-up example: if an administrative web console is discovered during a network scan, perform a web application test on the exposed service as a follow-up action.
✓ Check Your Understanding:
Which option best captures the meaning of “controlled real-world attack simulation”?
Answer: B. Mimicking real attacks while operating under agreed rules of engagement
What does a penetration test result typically represent?
Answer: B. A point-in-time outcome based on the tester’s actions and access
Why is “agreed rules of engagement” important?
Answer: A. It determines what the tester is allowed to attempt and where
2. Penetration testing lifecycle phases (planning to retest)
A structured lifecycle runs from planning through discovery, attack, reporting, and retest. Planning defines scope and rules of engagement before discovery and attack. Retest depends on reporting outcomes and remediation, validating that identified issues were fixed effectively.
Examples:
- Follow-up example: after reporting an exposed administrative console, a later web test can validate whether the web-layer controls are effective.
- Good practice example: retest after major changes to confirm security controls still work.
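The phase ordering above, and the rule that retest waits on reporting and remediation, can be sketched as a tiny state machine. This is an illustrative study aid only; `next_phase` and its flag are our own names, not terminology from the guidelines.

```python
# Lifecycle sketch: planning -> discovery -> attack -> reporting -> retest.
PHASES = ["planning", "discovery", "attack", "reporting", "retest"]

def next_phase(current, remediation_done=False):
    """Return the phase after `current`, or None when the cycle is complete.

    Retest is gated: until remediation is done, the engagement stays at
    reporting rather than advancing, because retest validates fixes.
    """
    i = PHASES.index(current)
    if current == "reporting" and not remediation_done:
        return "reporting"  # hold here until fixes are applied
    return PHASES[i + 1] if i + 1 < len(PHASES) else None
```

The gating condition is the point: retest is not a phase you reach by finishing the report, but by finishing remediation.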
✓ Check Your Understanding:
Which phase most directly follows remediation to confirm fixes worked?
Answer: A. Retest
What should planning primarily establish before discovery and attack?
Answer: B. Rules of engagement and scope
How does retest connect to reporting?
Answer: B. Retest depends on reporting outcomes and remediation
3. Penetration testing styles (blackbox, greybox, whitebox)
Penetration testing styles depend on information access. Blackbox provides no internal knowledge, maximizing realism but potentially limiting visibility if the perimeter cannot be breached. Greybox provides limited information (for example, login credentials), improving efficiency and enabling deeper assessment of privilege and impact. Whitebox provides architecture documentation and source code, increasing potential vulnerability capture by enabling testers to identify logical weaknesses and defensive gaps more directly.
Examples:
- Greybox example: limited information provided to the tester, such as login credentials.
- Whitebox example: providing architecture documentation and source code to the penetration tester.
- Blackbox confusion check: blackbox realism is high, but internal or post-authenticated pages may remain undiscovered if the perimeter cannot be breached.
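The style-to-information relationship can be captured as a small lookup plus a toy selector. The selector logic is a simplification we made up for illustration; real style selection also weighs objectives, budget, and risk.

```python
# Illustrative mapping from testing style to the information the tester
# receives and the resulting discovery trade-off (wording paraphrased).
STYLES = {
    "blackbox": {
        "information": "none",
        "tradeoff": "high external realism; may miss internal/post-auth issues",
    },
    "greybox": {
        "information": "limited (e.g., login credentials)",
        "tradeoff": "efficient; supports assessing privilege gained and impact",
    },
    "whitebox": {
        "information": "architecture documentation and source code",
        "tradeoff": "broadest capture of logical weaknesses and defensive gaps",
    },
}

def pick_style(need_authenticated_depth, have_source_access):
    """Toy selector: choose the least-informed style that meets the objective."""
    if have_source_access:
        return "whitebox"
    if need_authenticated_depth:
        return "greybox"
    return "blackbox"
```

For example, validating privilege escalation after login without source code access points to greybox, matching the scenario in the practice activities.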
✓ Check Your Understanding:
Which statement is most accurate about blackbox testing?
Answer: B. It can be realistic, but may miss internal/post-authenticated issues if the perimeter cannot be breached
What is the main effect of greybox testing providing limited information such as credentials?
Answer: B. It improves efficiency and helps assess privilege gained and potential damage
What is the main effect of whitebox testing providing architecture documentation and source code?
Answer: B. It increases potential vulnerability capture by enabling direct identification of weaknesses and defensive gaps
4. Types of penetration tests across target technologies (network, web, mobile, API, and more)
Penetration test types map to target technologies and environments, such as networks, web apps, mobile apps, APIs, thick clients, wireless, mainframes, hardware, and IoT. Network testing can discover services that lead to follow-up web or API testing. Application-layer testing (web, mobile, API) focuses on weaknesses exploitable by threat actors.
Examples:
- Follow-up example: if an administrative web console is discovered during a network scan, perform a web application test on the exposed service as a follow-up action.
- Hardware testing example: includes review of the OS image, USB-device attacks, removal of the HDD, and checking for the presence of BitLocker (disk encryption).
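The follow-up pattern (network discovery feeds the next test's scope) can be expressed as a simple lookup. The service names and pairings below are hypothetical examples, not a list from the guidelines.

```python
# Hypothetical mapping: which follow-up test type to schedule when network
# discovery exposes a given class of service.
FOLLOW_UP = {
    "admin web console": "web application test",
    "rest api endpoint": "api test",
    "wireless access point": "wireless test",
    "mobile backend service": "mobile + api test",
}

def follow_up_tests(discovered_services):
    """Return the follow-up test type for each discovered service we can map."""
    return {s: FOLLOW_UP[s] for s in discovered_services if s in FOLLOW_UP}
```

The design point is that scope expands from discovery: each exposed service becomes a hypothesis for the matching technology-specific test, rather than the network finding being the end of the engagement.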
✓ Check Your Understanding:
Why might a network test lead to a web or API test?
Answer: B. Because network testing can discover services that become follow-up web/API targets
Which option best describes what web/mobile/API testing focuses on?
Answer: B. Application-layer weaknesses exploitable by threat actors
Which statement best reflects the purpose of mapping test types to target technologies?
Answer: B. It ensures testing is aligned to the attack surface and technologies in scope
5. Testing cadence based on criticality and exposure (including internet-accessible systems and major changes)
FIs determine penetration testing frequency using system criticality and cyber risk. Good practice suggests testing systems directly accessible from the internet at least once every year. Testing should also occur after major changes or updates because updates can introduce new vulnerabilities or misconfigurations, potentially changing the effectiveness of security controls.
Examples:
- Good practice example: conduct penetration testing at least once every year for systems directly accessible from the internet.
- Follow-up example: if an administrative web console is discovered during a network scan, perform a web application test on the exposed service as a follow-up action.
- Cause-effect anchor: major changes or updates should trigger testing to verify controls remain effective.
✓ Check Your Understanding:
What is the best rule of thumb for internet-accessible systems?
Answer: B. Test at least once every year
Why should testing be performed after major changes or updates?
Answer: B. Major updates can introduce new vulnerabilities or misconfigurations, so testing verifies control effectiveness
Which factor most directly influences cadence decisions?
Answer: A. System criticality and cyber risk exposure
6. Penetration testing vs vulnerability assessment (and why the distinction matters)
Vulnerability assessment identifies and ranks vulnerabilities, often via broad port scanning and automation. Penetration testing differs by focusing on exploitation and on how security features can be circumvented under agreed rules of engagement. Confusing these can lead to incorrect expectations about what results mean and what actions are needed next.
Examples:
- Cause-effect anchor: vulnerability assessment is typically automated and broader in detection than exploitation.
- Penetration testing anchor: penetration testing mimics real-world attacks to identify how security features can be circumvented.
✓ Check Your Understanding:
Which option best describes vulnerability assessment?
Answer: B. Automated identification and ranking of vulnerabilities, often via broad scanning
Which option best describes the key difference in focus?
Answer: B. Penetration testing is exploitation-focused; vulnerability assessment is often broader automated detection
What risk comes from confusing these two activities?
Answer: A. You might expect exploitation evidence from a scan-only process
7. Standardized vulnerability and weakness references (CVE, CVSS, CWE, CAPEC)
External references help standardize how findings are described and understood. CVE is an identifier dictionary for publicly known vulnerabilities/exposures. CVSS provides a standardized scoring system for rating IT vulnerabilities. CWE is a taxonomy of common software weaknesses that can lead to exploitable vulnerabilities. CAPEC maps common attack patterns that exploit CWEs. These references complement penetration testing guidelines by improving consistency in communication and prioritization.
Examples:
- Common confusion check: CVE is an identifier, CVSS is a scoring system, CWE is a weakness taxonomy, and CAPEC maps attack patterns that exploit CWEs.
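Since each reference answers a different question, a question-to-reference lookup makes the distinction concrete. The question phrasings are our own study shorthand, not official definitions.

```python
# Which standardized reference answers which question (study aid, paraphrased).
REFERENCE_FOR = {
    "name a known vulnerability": "CVE",        # identifier dictionary
    "communicate severity": "CVSS",             # standardized scoring system
    "classify the underlying weakness": "CWE",  # weakness taxonomy
    "map the attacker technique": "CAPEC",      # attack patterns exploiting CWEs
}

def which_reference(question):
    """Return the reference that answers `question`, or a prompt to restate."""
    return REFERENCE_FOR.get(question, "unclear: restate what you need")
```

Using the wrong lookup direction is the common mistake described later: citing CVSS as if it named a weakness type, or CWE as if it scored severity.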
✓ Check Your Understanding:
Which option correctly matches CVE to its purpose?
Answer: B. A dictionary of publicly known vulnerabilities/exposures with identifiers
Which option correctly matches CWE to its purpose?
Answer: A. A weakness taxonomy
Which option correctly matches CAPEC to its purpose?
Answer: A. Maps common attack patterns that exploit CWEs
Practice Activities
Cadence decision chain for internet exposure and major changes
Difficulty: medium. Scenario: An FI has an internet-accessible customer portal. It will receive a major update in 6 weeks. Create a cause-effect chain that states (1) what trigger applies, (2) what testing frequency or timing is recommended, and (3) what lifecycle phase should follow remediation.
Style selection chain for coverage vs efficiency
Difficulty: medium. Scenario: The FI wants to validate whether privilege escalation is possible after login, but full source code is not available. Build a cause-effect chain that links (1) the chosen style (blackbox/greybox/whitebox), (2) the information provided, and (3) the expected impact on discovery and exploitation depth.
Technology mapping chain from network discovery to web/API testing
Difficulty: medium. Scenario: During network testing, the team discovers an exposed administrative web console endpoint. Build a cause-effect chain that explains why a follow-up web test is appropriate and which lifecycle phases should be involved to confirm fixes later.
Avoiding confusion chain: assessment vs exploitation
Difficulty: hard. Scenario: A stakeholder asks for “penetration testing results” but the team only ran automated vulnerability scans. Build a cause-effect chain that (1) identifies the mismatch, (2) explains what the scan results likely represent, and (3) states what additional penetration testing actions are needed to meet the intended goal.
Next Steps
Related Topics:
- Penetration tester selection criteria and professional certifications
- Related security activities and external frameworks (OWASP, SANS, NIST, PCI SSC, ISECOM)
- Penetration testing phases in more operational detail (planning artifacts, discovery outputs, attack evidence, retest criteria)
Practice Suggestions:
- After each practice chain, write a one-sentence rule that could guide future decisions (e.g., cadence trigger rule, style selection rule, follow-up testing rule).
- Create a simple mapping table: target technology -> likely test type -> likely style -> expected discovery depth.
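One way to start the suggested mapping table is as structured rows plus a renderer. The pairings below are illustrative judgment calls to seed the exercise, not prescriptions from the guidelines.

```python
# Starter rows for the suggested mapping table:
# target technology -> likely test type -> likely style -> expected depth.
MAPPING = [
    ("internet-facing web app", "web application test", "greybox",
     "auth flows plus perimeter"),
    ("internal API", "API test", "whitebox",
     "logic and design flaws"),
    ("corporate Wi-Fi", "wireless test", "blackbox",
     "external reachability only"),
]

def as_table(rows):
    """Render the mapping rows as aligned plain-text lines for quick review."""
    header = ("technology", "test type", "style", "expected depth")
    return [f"{t:<24} {ty:<21} {s:<9} {d}" for t, ty, s, d in (header, *rows)]
```

Extending the rows per asset inventory turns the study exercise into a reusable scoping aid.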
Cheat Sheet
Cheat Sheet: Penetration Testing Guidelines 2.0 (Financial Industry, Singapore)
Key Terms
- FIs (Financial Institutions)
- Financial institutions, including members of the Association of Banks in Singapore.
- Penetration testing
- Controlled real-world attack simulation to identify how security features can be circumvented under agreed rules of engagement.
- Vulnerability assessment
- Identifies and ranks vulnerabilities (often via automated broad scanning) and differs from exploitation-focused penetration testing.
- Production environment
- The environment that provides service to end users and customers (internal or external).
- Credentialed testing
- Authenticated testing where the tester is given login credentials.
- OWASP
- Open Worldwide Application Security Project; provides application security best practices and the OWASP Top 10 Web Application Risks.
- SANS
- SysAdmin, Audit, Network, and Security Institute; provides best practices and the SANS Top 25 Most Dangerous Software Errors.
- CVE
- Common Vulnerabilities and Exposures; a dictionary of publicly known vulnerabilities/exposures using common identifiers.
- CVSS
- Common Vulnerability Scoring System; standardized method for rating IT vulnerabilities.
- CWE
- Common Weakness Enumeration; taxonomy of common software weaknesses that can lead to exploitable vulnerabilities.
Formulas
Penetration testing cadence (rule of thumb)
At least annually for internet-accessible systems; retest after major changes/updates.
Use when: deciding how often to test and when to trigger a new test cycle.
Blackbox vs Greybox vs Whitebox access
Blackbox: no internal knowledge; Greybox: limited info (e.g., credentials); Whitebox: full relevant info (architecture + source code).
Use when: selecting a testing style to match desired realism and depth.
Follow-up testing mapping
If a network-accessible service is discovered → test the exposed service with the matching type (e.g., web/API).
Use when: expanding scope after initial discovery finds new attack surfaces.
Main Concepts
Penetration testing as controlled attack simulation
Mimics real-world attacks to identify how security controls can be bypassed, under agreed rules of engagement, and reflects results at a specific time.
Penetration testing vs vulnerability assessment
Vulnerability assessment focuses on identifying and ranking vulnerabilities (often automated scanning); penetration testing focuses on exploitation paths and real attack feasibility.
Cadence driven by criticality and exposure
Frequency depends on system criticality and cyber risk; internet-accessible systems should be tested at least annually and after major changes.
Testing styles depend on information access
Blackbox maximizes external realism but may miss internal/post-auth issues; Greybox improves efficiency and depth; Whitebox increases vulnerability capture via architecture/source code knowledge.
Types of tests map to target technologies
Choose test types by environment/attack surface: network, web, mobile, API, wireless, thick client, mainframe, hardware, IoT, etc.
Phased lifecycle (planning → discovery → attack → reporting → retest)
A structured process validates fixes via retest, and planning defines rules of engagement and scope before execution.
External frameworks complement the guidelines
FIs may use OWASP, SANS, and standardized references (CVE/CVSS/CWE/CAPEC) to guide focus and interpret findings consistently.
Memory Tricks
Blackbox vs Greybox vs Whitebox
B-G-W by access: Blackbox = “B” for “Blind,” Greybox = “G” for “Given (credentials),” Whitebox = “W” for “Whole (code/architecture).”
Vulnerability assessment vs penetration testing
Assess = “Scan and Rank”; PenTest = “Exploit and Prove.”
CVE vs CVSS vs CWE vs CAPEC
Think of a chain: CVE = “ID,” CVSS = “Score,” CWE = “Weakness,” CAPEC = “Attack Pattern.”
Cadence trigger
“Yearly for Internet, Retest after Updates.”
Why blackbox can miss issues
If you cannot “break the perimeter,” you cannot “see inside,” so internal/post-auth pages may remain undiscovered.
Quick Facts
- Guidance is not a compliance document; FIs must still meet prevailing regulatory requirements separately.
- Penetration testing is one security activity among many; other activities are summarized elsewhere (Appendix B).
- Blackbox: no internal knowledge; Greybox: limited information such as credentials; Whitebox: architecture documentation and source code.
- Internet exposure increases the need for periodic testing because real attackers can follow similar paths.
- Major updates can introduce new vulnerabilities or misconfigurations, so testing should be repeated after changes.
Common Mistakes
Common Mistakes: Penetration Testing Guidelines 2.0 (Financial Industry, Singapore)
Confusing vulnerability assessment with penetration testing, then judging the activity by scan coverage instead of exploitation and security-control bypass.
conceptual · high severity
Why it happens:
Students think: "Both activities find security problems, so if we run automated scans and list vulnerabilities, we have done penetration testing." They treat the output as equivalent: vulnerability lists and rankings are assumed to prove exploitability and real attack paths.
✓ Correct understanding:
Students should think: "Vulnerability assessment identifies and ranks vulnerabilities, often via broad automated scanning. Penetration testing mimics real-world attacks under agreed rules of engagement, focusing on how security features can be circumvented." Therefore, penetration testing requires an attack simulation mindset (planning, discovery, attack, reporting, retest) rather than only vulnerability enumeration.
How to avoid:
Use a two-part checklist: (1) Does the work include an attack simulation with exploitation attempts to show how controls can be bypassed? (2) Is the goal to rank vulnerabilities via detection automation only, or to demonstrate real-world attack paths under rules of engagement? If exploitation-focused attack simulation is missing, label it as vulnerability assessment, not penetration testing.
Assuming penetration testing is automatically a compliance requirement, so they plan it as a checkbox activity rather than as guidance-driven risk management.
process · high severity
Why it happens:
Students think: "The document is about penetration testing guidelines, so it must be required by regulation. If we schedule it, we satisfy the requirement." This leads to minimal tailoring to system criticality, exposure, and scope.
✓ Correct understanding:
Students should think: "The guidelines are guidance, not a compliance document. FIs must still meet prevailing regulatory requirements separately." Penetration testing cadence and scope should be driven by system criticality and cyber risk, with good practice such as at least annual testing for internet-accessible systems and testing after major changes.
How to avoid:
Separate concerns in planning: (1) Confirm applicable regulatory requirements through the FI’s compliance function. (2) Use the guidelines to design risk-based penetration testing cadence and scope (criticality, exposure, major changes). Treat the guidelines as a best-practice input, not the compliance proof.
Believing blackbox testing always finds the most issues, so they choose blackbox by default and assume it guarantees discovery of internal and post-authenticated weaknesses.
conceptual · medium severity
Why it happens:
Students think: "No internal knowledge means the tester is more realistic, so blackbox must uncover everything." They ignore the cause-effect relationship between information access and what is reachable from the outside.
✓ Correct understanding:
Students should think: "Blackbox provides no internal knowledge, so realism can be high, but discovery is limited to what can be reached from the outside." If the perimeter cannot be breached, internal services and post-authenticated pages may remain undiscovered. Therefore, style selection should match the testing objectives and expected attacker paths, not just the desire for realism.
How to avoid:
Align style to objective: if the goal is to assess authenticated workflows and privilege escalation, consider greybox (e.g., limited information such as login credentials) or whitebox (architecture and source code). If the goal is to validate external exposure and perimeter resilience, blackbox is appropriate, but do not assume it will reveal internal issues that are unreachable without a successful breach.
Mixing up CVE, CVSS, CWE, and CAPEC, then using the wrong reference to justify testing priorities or interpret results.
conceptual · medium severity
Why it happens:
Students think: "All these are vulnerability lists or scoring tools, so any one of them can be used interchangeably." They may cite CVSS as if it identifies the weakness type, or cite CWE as if it is a scoring system.
✓ Correct understanding:
Students should think: "CVE is an identifier dictionary for publicly known vulnerabilities/exposures. CVSS is a standardized scoring system. CWE is a taxonomy of common weaknesses that can lead to exploitable vulnerabilities. CAPEC maps common attack patterns that exploit CWEs." Therefore, each reference supports different reasoning: identification, severity communication, weakness classification, or attack-pattern mapping.
How to avoid:
Use a mapping rule during analysis: identify what you need first. If you need a stable name for a known issue, use CVE. If you need severity scoring, use CVSS. If you need to describe the underlying weakness category, use CWE. If you need to reason about attacker techniques and how weaknesses are exploited, use CAPEC. Do not substitute one for another.
Using an incorrect cadence rule, such as testing only when convenient or only when vulnerabilities are found, instead of testing based on criticality/exposure and triggers like major changes.
planning · high severity
Why it happens:
Students think: "If we do a penetration test occasionally, that is enough. Or if scans show no issues, we do not need to test." This ignores the guideline’s cause-effect logic: internet exposure increases likelihood of real-world attack paths, and major updates can introduce new vulnerabilities or misconfigurations.
✓ Correct understanding:
Students should think: "Cadence is determined by system criticality and cyber risk." Good practice includes testing internet-accessible systems at least once every year and performing testing again after major changes or updates. The trigger is not “when problems are detected,” but “when risk changes.”
How to avoid:
Create a cadence matrix: rows are systems, columns are criticality and exposure (internet-accessible vs internal), and rows include triggers (major changes/updates). Then schedule at least annual testing for internet-accessible systems and add “after major changes” retesting events regardless of whether scans currently show issues.
Skipping or misunderstanding retesting, treating the engagement as complete after reporting and assuming remediation is automatically validated.
process · high severity
Why it happens:
Students think: "Once the report is delivered, the job is done." They may assume that fixing issues guarantees security improvements without verifying that the specific bypass paths were closed and that no regressions were introduced.
✓ Correct understanding:
Students should think: "Penetration testing runs through phases: planning, discovery, attack, reporting, and retest." Retest depends on reporting outcomes and remediation. The purpose of retest is to validate that the security controls now prevent the previously demonstrated circumvention methods.
How to avoid:
In engagement planning, define retest criteria: which findings require retest, what constitutes successful remediation (e.g., inability to reproduce the same exploitation/bypass), and the timeline after fixes. Treat retest as a required validation step, not an optional follow-up.
Choosing a single test type regardless of target technology, then missing major classes of vulnerabilities because the scope does not match the attack surface (e.g., doing only network testing when the main risk is web/API).
scope · medium severity
Why it happens:
Students think: "Network penetration testing covers everything." They assume that if they map open ports and services, they have tested the application layer and API behaviors too, even when the guidelines explicitly distinguish network, web, mobile, API, wireless, thick client, mainframe, hardware, and IoT testing.
✓ Correct understanding:
Students should think: "Types of penetration tests map to target technologies and attack surfaces." Network testing can discover services that lead to follow-up web/API testing. Web, mobile, and API testing focus on application-layer weaknesses exploitable by threat actors. Therefore, scope must be technology-aware and follow the discovered exposure paths.
How to avoid:
Use a scope expansion rule: (1) Start with network discovery to identify exposed services. (2) For each discovered service, determine the relevant test type (web vs API vs mobile vs wireless, etc.). (3) Add follow-up testing that matches the technology and the likely attacker exploitation path.
General Tips
- When analyzing a scenario, explicitly separate: (a) discovery vs (b) exploitation vs (c) validation (retest).
- Use “information access” to predict what will be discovered: blackbox limits reachability; greybox/whitebox increases depth and coverage.
- Drive cadence from risk change: internet exposure implies at least annual testing; major changes imply additional testing.
- Do not treat standardized references as interchangeable: CVE identifies, CVSS scores, CWE categorizes weaknesses, CAPEC maps attack patterns.
- Choose test types based on target technology and attack surface, and expand scope when discovery reveals new exposed services.