Guide: Coordinated Vulnerability Disclosure
1. Introduction to Coordinated Vulnerability Disclosure
1.1. What is Coordinated Vulnerability Disclosure (CVD)?
Coordinated Vulnerability Disclosure (CVD) is the process through which security researchers and the public can report potential vulnerabilities to a product manufacturer, allowing the manufacturer time to remediate the issue before it is publicly disclosed. It is a collaborative approach that is a cornerstone of modern cybersecurity and, under the EU Cyber Resilience Act, a legal requirement.
1.2. The Regulatory Requirement
The Cyber Resilience Act (CRA) elevates CVD from a best practice to a legal obligation:
- Vulnerability Disclosure Policy (Annex I, Part II, § 5): The CRA requires manufacturers to "put in place and enforce a policy on coordinated vulnerability disclosure."
- The BSI TR-03183-1 provides the details, requiring a public document that defines a reporting process and contact channels (REQ_VH 5).
Beyond this, the CRA also introduces strict new reporting obligations to government bodies for actively exploited vulnerabilities, making a well-practiced internal triage process essential.
1.3. Do I Really Need to Do This?
Yes. The Cyber Resilience Act (CRA) explicitly requires all manufacturers to "put in place and enforce a policy on coordinated vulnerability disclosure." This is a non-negotiable requirement for market access, as it forms the basis of your entire vulnerability management process.
The law assumes that any product with digital elements can have vulnerabilities, and it requires you to provide a clear channel for security researchers to report them responsibly. There is no exception to this rule.
The real question is not if you need a policy, but "What is a proportionate and compliant CVD policy for a manufacturer of my size?"
- For a small company or simple product:
  - What it might look like: A compliant policy can be very simple. It might consist of a `security.txt` file on your website that points to a single web page. This page would contain a brief policy statement, a "safe harbor" clause, and a single, monitored email address such as security@yourcompany.com.
  - The key is availability: The channel must exist, be easy to find, and be monitored. You don't need a complex system, but you must have a front door.
- For a large enterprise or high-risk product:
  - What it might look like: The policy would be much more detailed, with specific SLAs for response times, a dedicated web form for submissions, and possibly a bug bounty program to incentivize researchers.
  - The process is key: Behind this public policy would be a dedicated security team, an integrated ticketing and triage system, and a well-drilled incident response plan to meet the tight 24-hour ENISA reporting deadline for actively exploited vulnerabilities.
The Bottom Line
Every manufacturer must have a public vulnerability disclosure policy. It is a sign of maturity and a direct legal requirement. The complexity of your internal process will scale with the size of your organization and the risk profile of your product, but the fundamental requirement to provide a clear, public reporting channel is universal.
2. Key Components of a CVD Policy
Your public CVD policy is a promise to the security community. It should be easy to find and understand. A best practice is to place a `security.txt` file in the `.well-known` directory of your main company website. This file should point to a dedicated web page that details your policy.
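As an illustrative sketch, a minimal `security.txt` might look like the following. The format is standardized in RFC 9116, which requires the `Contact` and `Expires` fields; all addresses, URLs, and dates below are placeholders:

```text
# Served at https://example.com/.well-known/security.txt (placeholder domain)
Contact: mailto:security@example.com
Policy: https://example.com/security-policy
Expires: 2026-12-31T23:00:00.000Z
Preferred-Languages: en
```

The `Policy` line is where you link the dedicated web page described above.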
Your policy must include:
- The Promise: A statement that you value the work of security researchers and will not take legal action against them for good-faith research that complies with your policy. This is often called a "Safe Harbor" statement.
- The Scope: A clear definition of which products, services, and software versions are covered by the policy. It should also define what types of testing are not permitted (e.g., denial-of-service attacks).
- Reporting Channels: One or more secure ways for researchers to contact you. This is typically a dedicated email address (e.g., security@example.com) and/or a web form.
- Response Timelines: Service Level Agreements (SLAs) for how quickly you will respond. For example:
- Acknowledge receipt of a report within 2 business days.
- Provide an initial assessment of the report within 10 business days.
- Provide regular updates on the status of remediation.
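Tracking SLAs like the example targets above is simple date arithmetic, with the wrinkle that they are expressed in business days. A sketch, assuming a Monday-to-Friday working week (the helper name is hypothetical):

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Return the date `days` business days (Mon-Fri) after `start`."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

# A report received on Friday 2025-01-03:
received = date(2025, 1, 3)
ack_due = add_business_days(received, 2)          # acknowledge within 2 business days
assessment_due = add_business_days(received, 10)  # initial assessment within 10
```

Note that this sketch ignores public holidays; a production system would subtract those as well.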
3. The Internal Triage Workflow
Once you receive a vulnerability report, a structured internal process is essential.
| Step | Action | Key Activities |
|---|---|---|
| 1. Intake | Acknowledge receipt of the report to the researcher. Create an internal ticket to track the issue. | Log the report; confirm you have all necessary information. |
| 2. Triage | Validate that the vulnerability is real and affects a product in scope. Assign a severity score (e.g., using CVSS). | Reproduce the issue; determine the impact; prioritize based on severity. |
| 3. Remediation | The engineering team develops, tests, and deploys a patch. | Develop the fix; perform QA; schedule the release according to your patch cadence policy. |
| 4. Disclosure | Once the patch is available, coordinate the public disclosure with the researcher. This may involve publishing a security advisory and requesting a CVE identifier. | Announce the fix; credit the researcher (with their permission). |
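The four steps above form a strictly linear pipeline, which can be sketched as a small state machine that forbids skipping stages. This is an illustrative model, not a prescribed implementation; all class and field names are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Stage(Enum):
    INTAKE = 1
    TRIAGE = 2
    REMEDIATION = 3
    DISCLOSURE = 4

@dataclass
class VulnReport:
    reporter: str
    summary: str
    stage: Stage = Stage.INTAKE
    cvss_score: Optional[float] = None           # assigned during triage
    history: list = field(default_factory=list)  # audit trail of completed stages

    def advance(self, to: Stage) -> None:
        """Move to the next stage; skipping stages or going backwards is an error."""
        if to.value != self.stage.value + 1:
            raise ValueError(f"cannot move from {self.stage.name} to {to.name}")
        self.history.append(self.stage)
        self.stage = to

# Intake -> Triage: validate the report, then score it
report = VulnReport(reporter="researcher@example.org", summary="XSS in login form")
report.advance(Stage.TRIAGE)
report.cvss_score = 6.1
```

Forcing every report through the same sequence gives you the audit trail you will need for your technical file.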
4. Mandatory ENISA Reporting (CRA Art. 14)
The CRA introduces a significant new reporting requirement. From 11 September 2026, manufacturers have a legal duty to notify the EU's cybersecurity agency, ENISA, together with the CSIRT designated as coordinator, of any actively exploited vulnerability in their products.
- Initial Notification: An "early warning" must be sent to ENISA within 24 hours of becoming aware of the active exploitation.
- Vulnerability Notification: Within 72 hours of becoming aware, the manufacturer must provide a notification detailing the vulnerability, its severity and impact, and any corrective or mitigating measures taken or available.
- Final Report: A final report must be submitted no later than 14 days after a corrective or mitigating measure is available.
This 24-hour deadline requires a well-drilled incident response process.
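The staggered deadlines can be computed mechanically from the moment of awareness. A sketch (the function name is hypothetical; note that under Art. 14 the 14-day final report actually runs from when a corrective measure becomes available, which is simplified here to the awareness time):

```python
from datetime import datetime, timedelta

def cra_art14_deadlines(aware_at: datetime) -> dict:
    """Deadlines for an actively exploited vulnerability, counted from
    when the manufacturer becomes aware of the exploitation."""
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "vulnerability_notification": aware_at + timedelta(hours=72),
        # Simplifying assumption: anchored to awareness, not to the
        # availability of a corrective measure as in the Act itself.
        "final_report": aware_at + timedelta(days=14),
    }

deadlines = cra_art14_deadlines(datetime(2026, 9, 11, 9, 0))
```

Wiring such a computation into your ticketing system ensures the 24-hour clock starts the moment a report is flagged as actively exploited.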
5. Accelerating Compliance with Tooling
While a simple, monitored security@yourcompany.com email address is the minimum requirement for a reporting channel, managing vulnerability reports via email can be chaotic and difficult to track. Using a dedicated platform can help streamline the process and ensure compliance with response timelines.
- Vulnerability Disclosure Platforms: Instead of building your own ticketing and triage system, services like HackerOne provide a complete platform for managing your CVD policy, receiving reports, communicating with researchers, and even offering monetary rewards (bounties). Using such a platform can significantly streamline the triage and communication process, making it easier to meet the CRA's requirements.
For more details, see the Vulnerability & Threat Intelligence tools page.
6. Compliance Checklist
- Public CVD Policy: Do you have a clear, public CVD policy?
- `security.txt`: Have you created a `security.txt` file on your website to direct researchers to your policy?
- Secure Reporting Channel: Do you have a monitored, secure channel for receiving vulnerability reports?
- Internal Triage Process: Is your internal workflow for handling reports documented and understood by your team?
- ENISA Reporting Plan: Do you have a specific incident response plan to meet the 24-hour ENISA reporting deadline for actively exploited vulnerabilities?
- Documentation: Is your CVD policy and internal process documented in your technical file?