SOC as a Service vs. In-House SOC: What Organizations Miss
Most mid-size organizations frame the SOC decision as a security question. The organizations that get it right frame it as an economic one first.
Whether an organization builds an in-house Security Operations Center or buys managed SOC services, the total cost of ownership is consistently underestimated. The in-house model looks controllable until the full scope of what it requires becomes visible. The managed model looks simple until the first contract renewal exposes what it actually delivered.
Three Takeaways
1. The real TCO of an in-house SOC extends well beyond analyst salaries. SIEM licensing, log ingestion and retention, detection engineering, continuous tuning, audit support, and the opportunity cost of internal engineers not working on architecture or cloud security all belong in the calculation. Organizations that cost only headcount are comparing the wrong number.
2. Risk ownership cannot be outsourced. Incident severity decisions, containment approval, regulator communication, and executive escalation must stay in-house regardless of which monitoring model the organization chooses. The managed SOC executes. The organization remains accountable.
3. NIS2 and DORA raise the floor for everyone. Early warning within 24 hours, incident notification within 72 hours, and a final report within one month are not optional for organizations in scope. Those obligations sit with the organization, not the provider, and they belong in the make-or-buy calculation from the start.
What an In-House SOC Actually Costs
The headcount line is the most visible cost and the least complete. A functional in-house SOC requires analysts across three shifts for round-the-clock coverage, detection engineers who build and tune the rules the analysts act on, and incident responders who handle escalations when something real happens. That is the personnel baseline.
Below it sits a layer of technology cost that grows with organizational complexity. SIEM licensing scales with log volume, and log volume grows with every new system, cloud workload, and endpoint added to the environment. Ingestion costs, retention costs, and integration work compound over time. Detection rules require continuous maintenance: the threat environment changes, the organization’s environment changes, and rules written twelve months ago produce false positives or miss entirely if nobody tunes them.
Training, tabletop exercises, and audit support consume time that is rarely budgeted explicitly. The CISO knows these activities matter. The CFO sees the headcount on the payroll and assumes the rest is included.
The cost that appears on no budget line is opportunity cost. Internal security engineers running a SOC are not improving identity architecture, cloud security posture, or resilience. For mid-size organizations where security headcount is already constrained, that trade-off is real and has a financial consequence that only becomes visible when an architecture problem turns into an incident.
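To make the gap concrete, the sketch below models the cost categories above in back-of-envelope form. Every figure is a hypothetical placeholder, not a benchmark; the point is which line items belong in the comparison, not what they cost in any particular organization.

```python
# Back-of-envelope in-house SOC TCO model. Every figure is a
# hypothetical placeholder -- substitute your own numbers.

ANALYSTS_24X7 = 6             # floor for three-shift coverage (see FAQ)
DETECTION_ENGINEERS = 2
INCIDENT_RESPONDERS = 2
AVG_LOADED_SALARY = 120_000   # fully loaded annual cost per head

GB_PER_DAY = 500              # daily log ingestion volume
SIEM_COST_PER_GB_DAY = 150    # annual license cost per GB/day ingested
RETENTION_AND_STORAGE = 60_000
TRAINING_AND_EXERCISES = 40_000
AUDIT_SUPPORT = 30_000

# Opportunity cost: engineers running the SOC are not doing
# architecture or cloud security work.
OPPORTUNITY_COST = 2 * AVG_LOADED_SALARY

headcount = (ANALYSTS_24X7 + DETECTION_ENGINEERS
             + INCIDENT_RESPONDERS) * AVG_LOADED_SALARY
technology = GB_PER_DAY * SIEM_COST_PER_GB_DAY + RETENTION_AND_STORAGE
hidden = TRAINING_AND_EXERCISES + AUDIT_SUPPORT + OPPORTUNITY_COST
total = headcount + technology + hidden

print(f"Headcount:  {headcount:>12,}")
print(f"Technology: {technology:>12,}")
print(f"Hidden:     {hidden:>12,}")
print(f"Total TCO:  {total:>12,}")
print(f"Headcount as share of TCO: {headcount / total:.0%}")
```

Even with placeholder numbers, the structure makes the point: the non-salary lines are material, and a headcount-only comparison understates the real total by a quarter or more. That is exactly the gap between the CFO's view and the CISO's.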
When Does In-House Make Economic Sense?
There is no credible threshold by employee count. The question is whether the organization generates enough operational complexity to justify dedicated engineering and response capacity.
An in-house SOC starts to make sense when four conditions converge: the organization requires genuine round-the-clock coverage, the environment is complex enough to need custom detections rather than generic rules, regulatory pressure is high enough to require deep internal visibility into incident handling, and telemetry volume is large enough to keep a dedicated team continuously occupied.
For most mid-size organizations, one or two of those conditions apply, not all four. A financial institution under DORA may meet the regulatory condition strongly but not generate the telemetry volume that justifies a full in-house engineering function. A manufacturing organization with significant OT exposure may have complex detection requirements but lack the security headcount to staff continuous coverage.
The hybrid model addresses this gap. Core monitoring and alert triage go to a managed provider. Detection engineering, incident ownership, and regulatory accountability stay in-house. The organization pays for coverage it cannot staff internally while retaining the functions that cannot be delegated.
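A minimal sketch of that four-condition test, with the hybrid model as the default when only some conditions hold. The condition names and the decision rule are illustrative, not a formal framework:

```python
def recommend_soc_model(needs_24x7: bool,
                        needs_custom_detections: bool,
                        high_regulatory_pressure: bool,
                        high_telemetry_volume: bool) -> str:
    """Crude make-or-buy heuristic over the four conditions above."""
    conditions = [needs_24x7, needs_custom_detections,
                  high_regulatory_pressure, high_telemetry_volume]
    if all(conditions):
        return "in-house SOC"
    if any(conditions):
        # The typical mid-size case: one or two conditions hold.
        return ("hybrid: managed monitoring and triage, "
                "in-house detection engineering and incident ownership")
    return "fully managed SOC"

# A DORA-regulated financial institution with modest telemetry volume:
print(recommend_soc_model(needs_24x7=True, needs_custom_detections=False,
                          high_regulatory_pressure=True,
                          high_telemetry_volume=False))
```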
What Cannot Be Outsourced
Monitoring tasks can be outsourced. Accountability cannot.
The functions that must remain in-house regardless of the SOC model are the ones that carry legal, regulatory, and business consequence.
1. Business impact assessment: only the organization understands which systems are critical to which processes.
2. Incident severity decisions: the determination of whether an event requires executive escalation, regulator notification, or customer communication belongs to the organization, not the provider.
3. Containment approval: the decision to isolate a system, take a service offline, or invoke a business continuity plan is an organizational decision with financial and operational consequences that no managed provider can absorb.
Regulator and customer communication under NIS2 and DORA carries the organization’s name. Third-party oversight of the managed SOC itself is an in-house function. The provider cannot audit the provider.
This distinction matters for contract design. Organizations that outsource monitoring and response without retaining the governance functions listed above have not transferred risk. They have created a dependency while keeping the liability.
Which SOC Metrics Actually Matter
The metrics most commonly offered in managed SOC contracts are not the ones that indicate whether the service is working.
Raw alert volume tells an organization how busy the SOC is. It does not indicate whether the alerts reflect real threats or a poorly tuned ruleset generating noise. Ticket count and acknowledge time are process metrics, not security outcomes. The number of threat intelligence feeds a provider subscribes to says nothing about whether those feeds produce actionable detections for the client’s specific environment.
The metrics that matter are tied to outcomes.
1. Coverage of critical assets and log sources: if the SOC cannot see a system, it cannot protect it, and gaps in coverage are one of the most common failure modes in managed relationships.
2. False positive rate: a high false positive rate indicates poor tuning, which means analysts spend time on noise instead of real events.
3. Time to qualified escalation: not acknowledge time, but the time from event to a human analyst making a quality judgment about what happened.
4. Speed of detection rule changes: how quickly the provider can deploy a new detection when a relevant threat emerges.
5. Evidence that alerts are tested: rules that have never fired on real data may not fire when they need to.
These metrics are harder to report and harder to commit to contractually. That is precisely why they should be negotiated at the start rather than at renewal.
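A minimal sketch of how these outcome metrics could be computed from a provider's ticket export. The field names, sample records, and asset list are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical ticket export: raw event time, the time an analyst
# delivered a qualified judgment, and the eventual verdict.
tickets = [
    {"event": datetime(2025, 3, 1, 2, 14),
     "qualified": datetime(2025, 3, 1, 2, 41), "true_positive": False},
    {"event": datetime(2025, 3, 1, 9, 3),
     "qualified": datetime(2025, 3, 1, 9, 55), "true_positive": True},
    {"event": datetime(2025, 3, 2, 17, 20),
     "qualified": datetime(2025, 3, 2, 18, 2), "true_positive": False},
]

false_positive_rate = sum(not t["true_positive"] for t in tickets) / len(tickets)

# Time to qualified escalation: event -> human judgment,
# not event -> automated acknowledgment.
mean_tte = sum((t["qualified"] - t["event"] for t in tickets),
               timedelta()) / len(tickets)

# Coverage: share of critical assets actually sending logs to the SOC.
critical_assets = {"erp", "payment-gw", "ad-domain", "core-switch"}
onboarded = {"erp", "ad-domain"}
coverage = len(critical_assets & onboarded) / len(critical_assets)

print(f"False positive rate:          {false_positive_rate:.0%}")
print(f"Mean time to qualified esc.:  {mean_tte}")
print(f"Critical asset coverage:      {coverage:.0%}")
```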
What NIS2 and DORA Change
Both regulations raise the floor for incident handling, risk management, and supply-chain oversight. For organizations in scope, those obligations belong in the SOC model design from the start.
NIS2 requires early warning within 24 hours of becoming aware of a significant incident, incident notification within 72 hours, and a final report within one month. The organization submits those reports. The managed SOC supports the process but does not own the obligation.
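A small sketch of that reporting clock, assuming the final report is due one month after the incident notification and approximating one month as 30 days; the exact counting rule should be confirmed with counsel:

```python
from datetime import datetime, timedelta

def nis2_deadlines(aware_at: datetime) -> dict:
    """NIS2 Art. 23 reporting clock, started when the organization
    becomes aware of a significant incident. The 30-day final-report
    window is an approximation of 'one month' -- verify with counsel."""
    notification = aware_at + timedelta(hours=72)
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "incident_notification": notification,
        "final_report": notification + timedelta(days=30),
    }

for name, due in nis2_deadlines(datetime(2025, 6, 10, 14, 30)).items():
    print(f"{name:>22}: {due:%Y-%m-%d %H:%M}")
```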
DORA, applicable to financial entities since January 2025, adds specific requirements for ICT third-party risk management, resilience testing, and incident reporting. A managed SOC relationship is an ICT third-party relationship under DORA. That means the provider is subject to contractual requirements around audit rights, exit clauses, and concentration risk that go beyond standard SLA terms.
KSC, the Polish cybersecurity framework, adds a domestic layer that applies to operators of essential services and digital service providers. The interplay between KSC, NIS2, and DORA creates a compliance environment where the managed SOC contract needs legal review alongside security review.
The practical implication is direct: organizations that sign managed SOC contracts without mapping provider obligations to regulatory requirements discover the gap at audit time, not contract time.
Why Managed SOC Relationships Fail After Two to Three Years
The failure modes are predictable and consistent across organizations that have been through a provider switch.
Weak onboarding is the most common root cause. A managed SOC that does not complete a thorough onboarding of the client’s environment, log sources, critical assets, and business context will produce generic detections that work at a basic level but miss the threats specific to that organization. The gap is not visible in the first year, when the relationship is new and both sides are motivated. It becomes visible in year two, when the client notices that escalations are generic, tuning requests take weeks, and the provider’s analysts cannot answer questions about the client’s specific environment.
Two other failure patterns account for most of the remaining switches. The first: a significant incident that the SOC either failed to detect or detected too late. The second: a cost-reduction decision at renewal, where the client has found a cheaper provider and underestimates what the transition will cost in lost context and re-onboarding time.
The common thread is that organizations evaluate managed SOC providers on price and certification at procurement, and on actual security outcomes two years later. Those are different evaluations, and closing the gap requires asking the right questions before signing.
If your organization is working through the SOC build-vs-buy decision and wants a structured approach to TCO comparison and provider evaluation, download the SOC Decision Framework: a set of criteria covering cost components, capability requirements, regulatory obligations, and contract terms that should be resolved before the procurement process begins. [link]
FAQ
What is the minimum team size for a credible in-house SOC?
Round-the-clock coverage across three shifts requires a minimum of five to seven analysts to maintain continuous staffing without burnout and allow for training, leave, and incident response. Detection engineering and SOC management add further headcount requirements. Organizations below that threshold should evaluate hybrid or managed models rather than understaffing an in-house function that then fails to deliver genuine coverage.
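The arithmetic behind that floor, with an illustrative availability assumption:

```python
HOURS_TO_COVER = 24 * 7      # 168 seat-hours per week for one 24/7 seat
CONTRACT_HOURS = 40          # nominal analyst work week
AVAILABILITY = 0.75          # after leave, training, sickness (assumption)

analysts_per_seat = HOURS_TO_COVER / (CONTRACT_HOURS * AVAILABILITY)
print(f"Analysts for one continuous seat: {analysts_per_seat:.1f}")  # ~5.6
```

One continuously staffed seat already lands in the five-to-six range before detection engineering and SOC management are added, which is where the five-to-seven floor comes from.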
What contractual protections should organizations require from a managed SOC provider?
The contract should define coverage scope explicitly, including which log sources and assets are included. It should specify escalation timelines with quality thresholds, not just acknowledge time. It should include audit rights, data portability, and exit assistance to prevent provider lock-in. For organizations under DORA, concentration risk provisions and sub-contractor disclosure requirements belong in the contract. Termination clauses should address knowledge transfer and log retention to ensure continuity if the relationship ends.
How does the SOC model affect NIS2 incident reporting obligations?
NIS2 reporting obligations belong to the organization regardless of SOC model. The managed SOC can support the process by providing technical analysis and incident timelines, but the early warning, notification, and final report are submitted by the organization to the relevant national authority. Organizations should map their reporting workflow before an incident, not during one, and verify that their managed SOC contract includes the support obligations needed to meet reporting deadlines.
What is the most common mistake organizations make when evaluating managed SOC providers?
Evaluating on price and certification rather than on the provider’s ability to understand and adapt to the client’s specific environment. Generic detection capability is a baseline requirement. The differentiator is whether the provider can build and maintain custom detections for the client’s technology stack, respond to tuning requests within a defined timeframe, and keep pace with the client’s environment as it changes. Those capabilities are best evaluated through reference checks with existing clients in similar environments, not through RFP responses.
When should an organization consider switching managed SOC providers?
The clearest signals are a pattern of generic escalations that do not reflect the client’s actual environment, repeated delays in onboarding new log sources or deploying detection rule changes, and provider responses to real incidents that reveal a lack of context about the client’s business. Cost-driven switches without addressing the root causes of poor performance typically reproduce the same problems with a new provider after a re-onboarding period that itself carries significant risk.