When security is reduced to an IT issue
The technical “universal solution” paradox
In many technical organizations, there is an inherent psychological tendency to seek binary solutions to complex problems.5 This “panic buying” of security tools—often following a high-profile incident or an uncomfortable board meeting—can be compared to buying a treadmill after a bad medical report without any intention of actually changing one’s lifestyle habits.5 The problem is that each new tool introduces additional complexity, new configurations, and therefore new potential attack surfaces.5 If the decision to acquire the tool is not accompanied by a change in how the organization prioritizes security in everyday operations, the tool becomes more of a burden than an asset.2
Statistics on security tools and alert management
● Average number of alerts per day in a SOC environment: over 4,400.9
● Share of alerts classified as non-actionable noise: up to 67%.9
● Share of security professionals who ignore alerts: 30%.5
● Share of incidents caused by human actions/decisions: –
● Cost of a data breach (2024–2025): 4.44–4.88 million USD.3
When security is seen as an isolated IT issue, leadership tends to focus on “what” can be bought rather than “how” work is done.1 This leads to technical solutions being implemented on top of processes that do not work. A SIEM system can collect millions of log lines, but if the organization has not made the difficult decisions about who owns the risk when an alert indicates that a business-critical server must be shut down, the system remains only a passive observer of the organization’s downfall.2
MFA and the psychological vulnerability
Multi-factor authentication (MFA) is often considered the ultimate defense against identity theft.12 Technically, MFA is highly robust, but its effectiveness is entirely dependent on the decision the user makes at the critical point of approval.14 The vulnerability known as MFA fatigue (or push bombing) perfectly illustrates how a technical solution can be bypassed by exploiting human fatigue and organizational shortcomings.16
The Uber case: When frustration overcomes technology
The breach at Uber in September 2022 is one of the most illustrative examples of how a technically mature organization can fail despite advanced protections.19 The attacker, linked to the Lapsus$ group, obtained login credentials for a third-party contractor.21 Even though the account was protected by MFA, the attacker was able to trigger a continuous stream of push notifications to the contractor’s phone.16 This bombardment of requests, often carried out at night to maximize confusion, created a state of cognitive fatigue for the user.12
However, the decisive step in the attack was not only technical exhaustion, but a form of social engineering: the attacker contacted the user via WhatsApp, impersonated Uber IT support, and explained that the push notifications were caused by a system error that could only be stopped if the user approved one of them.14 By eventually “giving in” and approving the request, the user opened the door to the entire network.20 It becomes clear that the vulnerability exploited was not the MFA protocol itself, but the decision made by the user under pressure.1
The organizational shortcomings in this case were multifaceted. First, there was no technical control limiting excessive MFA requests (rate limiting), a design decision made by IT architects.15 Second, there were evident gaps in security culture and training; the user did not know how to report an ongoing attack and instead felt compelled to resolve the situation independently.1 Third, once the attacker gained access, internal systems (such as PowerShell scripts on a shared drive) contained hardcoded administrative credentials—an outcome of convenience-driven decisions made by engineers earlier in time.20
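The missing rate-limiting control is straightforward to express in code. Below is a minimal sketch (hypothetical class and method names, not any vendor's API) of a per-user limiter that caps push notifications in a sliding window and then locks the account, turning push bombing into an incident signal instead of a fatigue weapon:

```python
import time
from collections import defaultdict, deque

class PushRateLimiter:
    """Sketch of a per-user rate limiter for MFA push requests.

    Real identity providers expose this as configuration rather than
    application code; the thresholds below are illustrative defaults.
    """

    def __init__(self, max_requests=3, window_seconds=300, lockout_seconds=900):
        self.max_requests = max_requests
        self.window = window_seconds
        self.lockout = lockout_seconds
        self.history = defaultdict(deque)   # user -> timestamps of recent pushes
        self.locked_until = {}              # user -> lockout expiry time

    def allow_push(self, user, now=None):
        now = time.time() if now is None else now
        if self.locked_until.get(user, 0) > now:
            return False  # locked out: stop the bombardment, alert the SOC
        q = self.history[user]
        while q and now - q[0] > self.window:
            q.popleft()   # drop timestamps that fell out of the sliding window
        if len(q) >= self.max_requests:
            self.locked_until[user] = now + self.lockout
            return False  # threshold exceeded: lock and raise an incident
        q.append(now)
        return True
```

The important design point is not the thresholds themselves but that someone decided they must exist; in the Uber case, no such ceiling did.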
From convenience to security in authentication
Many organizations implement push-based MFA because it is convenient for users, but they overlook that convenience often stands in direct conflict with resilience.12 The decision to prioritize user experience (UX) over security creates the attack surface that MFA fatigue exploits.12 One solution that addresses the underlying decision is the implementation of “number matching,” where the user must enter a number shown on the login screen into their app.15 This breaks the automatic reflex of simply pressing “approve” and forces an active, conscious decision.16
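A minimal sketch of the number-matching idea (illustrative names, not a real IdP API): the server shows a short code on the login screen, and approval in the authenticator only succeeds if the user types that same code, which an attacker who merely triggered the push cannot see:

```python
import secrets

# Sketch of "number matching" (hypothetical function names). The server
# shows a short code on the login screen; approval in the authenticator
# only succeeds if the user enters that same code.
def start_login():
    """Server side: generate the challenge displayed on the login screen."""
    return f"{secrets.randbelow(100):02d}"  # two-digit code such as "07"

def approve_push(displayed_code, user_entry):
    """Authenticator side: blind tapping no longer works; the numbers must match."""
    return secrets.compare_digest(displayed_code, user_entry)
```

Because an attacker-initiated push shows the code only on the attacker's own screen, the victim has nothing valid to type, and the reflexive "approve" path disappears.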
Comparison of MFA methods, their technical risk profiles, and organizational impact:
● Simple push notification: high risk of MFA fatigue and accidental approvals.15 Low friction, but requires high user awareness.14
● Number matching: medium risk; reduces blind acceptance.15 Increases security without significant productivity loss.12
● FIDO2: low risk; resistant to phishing and fatigue.15 Higher initial cost and requires hardware management.17
● SMS codes: high risk; vulnerable to phishing, interception, and SIM swapping. Easy to deploy, but technically outdated protection.14
However, technical adjustments are not enough. We must build a culture where employees feel safe to deny a request and immediately report it as an incident, without fear of “bothering” the IT department.1 This requires leadership that clearly signals that security takes precedence over speed in authentication.1
Cloud security and the inverted shared responsibility model
The shift to cloud environments has radically changed how organizations manage infrastructure, but decision-making models have often failed to keep pace.2 A common misconception is that the cloud service provider (CSP) is responsible for security in its entirety.26 In reality, cloud environments operate under a shared responsibility model where the provider is responsible for security of the cloud (hardware, datacenters), while the customer is responsible for security in the cloud (data, identities, configurations).26
Configuration errors as an organizational symptom
When cloud incidents occur—such as the well-known exposures from misconfigured S3 buckets in AWS—it is rarely due to a technical flaw in AWS itself.26 Instead, it is the result of organizational decisions that prioritize rapid deployment over rigorous configuration governance.2 In an environment where “infrastructure as code” (IaC) allows engineers to build entire networks in seconds, the consequence of a single incorrect decision—such as setting a permission to “Public” to simplify access for a development team—becomes immediate and global.2
Statistics from the Cloud Security Alliance and IBM show that up to 15% of all security breaches are caused by cloud misconfigurations.26 This is not a technical problem that can be solved with more tools, but a governance problem.2 The decision to allow development teams to manage their own cloud environments without centralized guardrails or automated controls is a strategic choice that directly expands the organization’s attack surface.2
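A centralized guardrail of this kind can be as simple as an automated check in the deployment pipeline. The sketch below rejects any plan that sets a bucket ACL to public; the JSON shape is a simplified stand-in for a real Terraform or CloudFormation plan, not an actual format:

```python
import json

# Illustrative pipeline guardrail: block an infrastructure-as-code plan
# that makes a storage bucket public. The plan format here is invented
# for the example.
PUBLIC_ACLS = {"public-read", "public-read-write"}

def find_public_buckets(plan_json):
    plan = json.loads(plan_json)
    return [
        res["name"]
        for res in plan.get("resources", [])
        if res.get("type") == "storage_bucket" and res.get("acl") in PUBLIC_ACLS
    ]

plan = """{
  "resources": [
    {"type": "storage_bucket", "name": "billing-exports", "acl": "public-read"},
    {"type": "storage_bucket", "name": "audit-logs", "acl": "private"}
  ]
}"""
violations = find_public_buckets(plan)  # a CI wrapper would fail the build here
```

The point is organizational: the check runs on every deployment regardless of which team wrote the plan, so the "Public to simplify access" shortcut can no longer be taken silently.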
Capital One and the vulnerability in architectural decisions
The 2019 Capital One breach, where data from 100 million customers was exposed, is a striking example of how cloud complexity intersects with flawed architectural decisions.27 The attacker exploited a vulnerability known as Server-Side Request Forgery (SSRF) in a misconfigured web application firewall (WAF).27 However, the real vulnerability ran deeper: the compromised instance had been assigned an IAM (Identity and Access Management) role with overly broad permissions, enabling the attacker to read data from thousands of storage buckets.27
Here we see how a decision to grant a role “a bit extra permissions to avoid troubleshooting” created catastrophic consequences once the technical vulnerability was discovered.28 Cloud security therefore requires continuous verification of decisions (Zero Trust) rather than static firewall rules.27 Organizations that succeed in the cloud are those that shift focus from merely purchasing security services to embedding security decisions directly into the codebase (Security as Code).29
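The "a bit extra permissions" decision is mechanically detectable. A hedged sketch of such a check: the policy document follows the AWS JSON policy grammar, but the checker itself is illustrative, flagging Allow statements whose actions or resources are wildcards:

```python
import json

# Illustrative least-privilege linter: flag IAM-style policy statements
# that grant wildcard actions or resources. The policy document format
# follows the AWS grammar; the checker is a sketch, not a complete tool.
def overly_broad_statements(policy_json):
    doc = json.loads(policy_json)
    stmts = doc.get("Statement", [])
    if isinstance(stmts, dict):
        stmts = [stmts]  # a single statement may appear without a list
    findings = []
    for s in stmts:
        if s.get("Effect") != "Allow":
            continue
        actions = s.get("Action", [])
        resources = s.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(s)
    return findings

policy = """{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
  ]
}"""
```

Run in CI against every role definition, a check like this turns the uncomfortable conversation about permissions into an automated gate rather than a favor someone must remember to refuse.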
CI/CD and the hidden risks of the software factory
For modern technical organizations, the CI/CD pipeline (Continuous Integration/Continuous Deployment) is the heart of the operation.4 This is where code is transformed from idea into production-ready services. But the pipeline is also one of the most critical and least protected attack surfaces.4 When security is reduced to an IT issue, the pipeline is often seen as an internal tool rather than part of critical production infrastructure.4
Secrets in broad daylight
One of the most common technical mistakes in CI/CD environments is “secret sprawl” – hardcoding API keys, passwords, and certificates directly into configuration files or scripts.4 This is rarely a result of ignorance, but rather a decision driven by time pressure.4 Developers working under strict deadlines to deliver new features often choose the simplest path to get the build process working.4
When these secrets end up in a Git repository, even a private one, a permanent security debt is created.4 Attackers who gain access to a single developer’s account can quickly scan repositories for patterns like “AKIA…” (AWS keys) and within minutes gain full access to the organization’s cloud infrastructure.28 The decision not to implement centralized and automated secret management (Secrets Management) is an organizational choice that prioritizes short-term speed over long-term survival.4
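The pattern scan the attacker runs is trivial to reproduce, which is exactly why defenders should run it first, in CI, before every merge. A minimal sketch: the AWS key pattern follows the documented "AKIA" prefix, while the generic key/value pattern is only a rough heuristic (real scanners add entropy analysis and far more rules):

```python
import re

# Minimal secret scanner of the kind attackers aim at leaked repos.
# AWS access key IDs use the documented "AKIA" + 16 uppercase/digit
# pattern; the generic key/value regex below is a rough heuristic.
AWS_KEY = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
GENERIC = re.compile(r"(?i)(password|secret|api[_-]?key)\s*[:=]\s*\S{8,}")

def scan(text):
    """Return the line numbers that look like they contain a secret."""
    return [
        lineno
        for lineno, line in enumerate(text.splitlines(), 1)
        if AWS_KEY.search(line) or GENERIC.search(line)
    ]

# The key below is AWS's published example value, not a real credential.
config = '''db_host = "10.0.0.5"
aws_key = "AKIAIOSFODNN7EXAMPLE"
password = "hunter2hunter2"'''
```

Wiring such a scan into the pipeline as a blocking check converts "we should stop hardcoding secrets" from a cultural aspiration into an enforced decision.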
Common problems in CI/CD pipelines, their technical consequences, and the decisions behind them:
● Over-privileged tokens: a leaked token gives access to code and production.4 Underlying decision: default trust settings to avoid disruptions in the flow.4
● Lack of segmentation in the build environment: attackers can move freely between different projects.31 Underlying decision: shared build infrastructure to save costs.31
● Insecure third-party dependencies: malicious code is injected via public libraries.9 Underlying decision: blind trust in open source without automated review.29
● Manual approvals based on “vibes”: security controls become a guessing game under time pressure.29 Underlying cause: lack of defined and automated security policies.30
Security debt interest effect
These small, everyday decisions accumulate into a massive “security debt”.1 Just like financial debt, security debt generates “interest” in the form of increased risk and more expensive future remediation.34 A technical organization that chooses to ignore a vulnerability in a third-party library in order to meet a launch deadline has taken out a high-interest loan.34 If this debt is not handled systematically through a culture of accountability, the organization will sooner or later face a forced “repayment” in the form of a breach.3
SIEM, tool sprawl, and analyst fatigue
In cybersecurity, there has long been a saying that “you cannot protect what you cannot see.”26 This has led to an explosive growth in monitoring tools. But in the pursuit of total visibility, many organizations have created a new problem: “tool sprawl” or tool chaos.5 When an organization acquires dozens of different security products that do not communicate with each other, a fragmented picture is created that actually reduces real security.5
Alert fatigue: When the warning bells go silent
A typical SOC (Security Operations Center) today handles over 4,400 alerts per day.9 Of these, up to 67% turn out to be pure noise or false positives.9 This massive volume of information leads to alert fatigue, a condition where analysts become desensitized and start ignoring alerts.5
Analysis shows that 30% of IT professionals admit to ignoring alerts due to the high rate of false positives.5 This is a dangerous decision made in everyday operations, but it is a direct consequence of management’s choice to prioritize purchasing tools over the team’s ability to actually process the information.2 When security is reduced to an IT issue, success is often measured by the number of installed agents or enabled log sources, rather than by analysts’ ability to actually detect and stop a breach.5
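Part of the remedy is organizational, but even simple aggregation helps: collapsing identical alerts into one row with a count immediately shrinks the queue an analyst must triage. A sketch with made-up field names (mature SIEM platforms provide this kind of deduplication natively):

```python
from collections import Counter

# Illustrative noise-reduction pass: group alerts by (rule, host) so the
# analyst sees one aggregated row with a count instead of hundreds of
# identical entries. The field names are invented for this example.
def aggregate(alerts):
    counts = Counter((a["rule"], a["host"]) for a in alerts)
    return [
        {"rule": rule, "host": host, "count": n}
        for (rule, host), n in counts.most_common()  # noisiest first
    ]

raw = [
    {"rule": "failed_login", "host": "vpn-gw"},
    {"rule": "failed_login", "host": "vpn-gw"},
    {"rule": "failed_login", "host": "vpn-gw"},
    {"rule": "new_admin_user", "host": "dc01"},
]
```

The deeper fix remains deciding which alerts deserve a human at all, but reducing four rows to two is the difference between a readable queue and a wall of noise at SOC scale.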
The human cost of technical complexity
Tool sprawl also carries a significant human cost. SOC analysts often spend more time switching between 15 different consoles and trying to reconcile conflicting data than actually hunting threats.5 This leads to stress, dissatisfaction, and extremely high staff turnover.9
According to research from IBM and KPMG, 70% of SOC analysts report suffering from burnout.9 When the most experienced people leave an organization due to an unsustainable work environment, the organization loses not only technical expertise but also the institutional memory needed to make the right decisions in a crisis.10 Here, the connection to information security culture becomes clear: an organization that does not care about the well-being and cognitive load of its employees can never be secure, no matter how much money it invests in technology.1
Shadow IT: A silent protest against decision paralysis
Shadow IT – the use of apps and services that are not approved – is often described as a security problem caused by “careless” users.40 But if you scratch the surface, it turns out that shadow IT is almost always a symptom of organizational decisions that have created barriers to productivity.42
Why employees choose the shadow
Statistics show that up to 80% of all employees use shadow IT in some form.42 The most common examples include using personal Dropbox accounts to share files, communicating via WhatsApp, or using AI tools like ChatGPT without approval.42
Drivers behind shadow IT, their organizational causes, and security consequences:
● Need for speed: the IT department’s approval process is too slow.42 Consequence: data is moved outside control and backup.42
● Poor usability: the official tools are cumbersome or outdated.43 Consequence: weak encryption and poor identity management.43
● Need for collaboration: internal tools do not allow seamless sharing with external partners.43 Consequence: risk of data leaks and compliance violations (GDPR).42
● Pressure from management: a focus on results makes security policies seem “optional”.45 Consequence: an expanded attack surface that IT cannot monitor.42
When management decides to implement strict, inflexible security policies without at the same time providing the tools employees need to do their work, they create incentives for shadow IT.43 Shadow IT is therefore not a technical problem, but a consequence of a weak security culture where the IT department is seen as “naysayers” rather than enablers.1
Addressing the shadow through inclusion
Instead of trying to prohibit shadow IT—which has historically never worked—organizations should try to understand the underlying needs.43 By involving employees in tool selection and creating a culture where it is easy to request new solutions, organizations can bring the “shadow” into the light.43 This requires viewing cybersecurity as a collaboration between humans and technology, rather than a series of prohibitions dictated from an IT office.1
OT environments: When IT security meets the physical world
In industry and infrastructure, we are seeing an increasing convergence between IT and OT (Operational Technology).48 This creates unique challenges because decision models that work well in an office environment can be directly dangerous in a factory or a power plant.49
Colonial Pipeline and the fear of the unknown
The 2021 attack on Colonial Pipeline is one of the most well-known examples of how an IT-based intrusion (ransomware) led to the shutdown of critical infrastructure, even though the operational systems (OT) themselves were not compromised.49 Management made the decision to shut down the pipeline because the billing systems on the IT side were down, meaning the company did not know how much oil was flowing or who would pay for it.49
This highlights a fundamental gap in organizational preparedness: decisions had not been made on how the operation could continue in “manual mode” or without IT system support.51 When security becomes an IT issue, organizations often forget to build the operational resilience needed to handle technology outages. A strong security culture would have involved practicing these scenarios and establishing clear decision paths to maintain operations despite an IT failure.51
The conflict between patching and production
In an IT environment, it is a straightforward decision to install security updates as quickly as possible.36 In an OT environment, the same decision can mean that a production line generating millions per hour must be stopped, or in the worst case, that safety-critical systems (such as valves or pressure sensors) stop functioning due to incompatibility.49
Here it becomes clear that security requires business context. The decision not to patch a vulnerable system in OT can be the right decision from a holistic perspective, provided that other compensating controls (such as network segmentation) are implemented.25 But when security is handled as an isolated IT issue, a destructive conflict often arises between IT security officers and production managers, where both parties make decisions without understanding the other’s reality.49
The Swedish context: the Swedish Civil Contingencies Agency, the Security Service, and the need for cultural change
Sweden is in a serious security policy situation where threats from foreign powers (Russia, China, Iran) and organized crime are increasing in complexity and scale.54 Despite this, the Swedish Civil Contingencies Agency (MSB) shows in its measurements (the Cyber Security Check) that the resilience of Swedish authorities and companies is alarmingly low.57
Lack of structure and management commitment
According to MSB’s measurements (2024–2025), only just over 5% of the surveyed organizations reach a qualified level in their systematic security work.57 The areas where performance is strongest are purely technical (“IT security work”), while areas such as management governance, information classification, and procurement lag behind.57 This confirms the thesis that many Swedish organizations have reduced security to an IT issue and thereby missed the necessary anchoring in top management.1
The Swedish Security Service emphasizes in its annual reports (2024/2025) that the greatest vulnerability is often a lack of understanding of the threat.56 It is not about technical ignorance, but about decision-makers’ inability to understand how their everyday choices—such as entering into collaboration with a foreign researcher or outsourcing an IT service to a low-cost provider—affect Sweden’s national security.54
Human error as the primary cause of incidents
MSB reports that approximately 50% of all reported IT incidents in 2024 were caused by system failures or human error, while actual attacks accounted for only 20%.60 This highlights the importance of focusing on internal processes and culture. If half of the problems are caused by our own incorrect decisions or mistakes in handling, no new technology in the world can solve more than a fraction of the problem.6
Key figures from MSB/Säpo reports
● Share of authorities that reach MSB’s requirement level 3: 8 of 120 (6.7%).57
● Incidents caused by mistakes/system failures: approximately 50%.60
● Main threat actors against Sweden: Russia, China, Iran.54
● Median time to detect intrusion (MTTD): 11 days.9
● Average cost per incident in Sweden: approximately 46 million SEK (converted from USD).3
A holistic solution to a complex problem
To meet the modern threat landscape, it is not enough to “fix IT”.1 There needs to be a holistic model where security culture is the foundation.1 This means we must stop seeing security as an obstacle to operations and instead see it as an enabler of stability and growth.1
Psychological safety: The cornerstone of a strong defense
One of the most radical and effective measures to strengthen security is to build psychological safety.6 This means that leaders must eliminate fear within the organization.6 If an employee feels they will be punished for clicking a phishing link, they will try to hide the mistake.1 By hiding the error, the attacker gains the time needed to establish themselves in the network (the so-called dwell time).1
In a psychologically safe organization, mistakes are reported immediately, allowing IT to act while the damage is still limited.1 This is an organizational decision: to value openness and learning over control and punishment.1
Decisions as an active attack surface
By viewing every decision as a potential attack surface, we can start working proactively. It is about asking the right questions at the right time:
● When we save money by not segmenting the network – what attack paths are we opening?22
● When we approve a fast release without code review – what security debt are we taking on?34
● When we give an administrator unrestricted privileges – how large is the damage when an attacker gets in through a vulnerable MFA solution?4
The answers to these questions are not technical, they are business and strategic.2 A robust security culture makes it easier to make the right decisions even when they are uncomfortable or costly in the short term.1
Conclusions and the way forward
When security is reduced to an IT issue, the result is a fragile organization that is vulnerable to everything from simple human mistakes to sophisticated state-sponsored cyberattacks.2 Blind trust in technical solutions such as MFA, cloud protection, and SIEM only works if it is built on a stable foundation of organizational decisions and a healthy security culture.1
For technical organizations, IT managers, and engineers, the challenge is now to lift their gaze from consoles and screens and start addressing the hidden attack surface: decision-making.1 This means integrating security into leadership, promoting psychological safety, and no longer seeing humans as the “weakest link,” but instead as the organization’s most important sensor and defender.1
The future of cybersecurity will not be won by those with the most tools, but by those who make the most conscious decisions in everyday operations.1 By working holistically with governance, behavior, and decision-making, we can build organizations that not only survive the next attack, but grow stronger from each challenge.1 Cybersecurity is, after all, not an IT problem – it is a leadership issue, a culture issue, and above all, a human issue.1
