Cybersecurity
Year in Review: What Building for Real Security Teams Has Taught Us and What AI Means in 2026

16.12.25
4 min read

As cybersecurity teams close out 2025, many find themselves facing a familiar paradox: more data, more tools, and more alerts - but no corresponding sense of clarity or control.
At the same time, artificial intelligence has become the dominant narrative in security, often positioned as a solution to problems that are fundamentally operational in nature.
We sat down with Gunnsteinn Hall, Chief Product Officer at Nanitor, to reflect on what 2025 revealed about the realities of managing cyber risk today, where AI is already delivering value, and why a more grounded, continuous approach to exposure management will matter even more in 2026.
Q: As 2025 comes to a close, when you look back from the CPO’s seat, what stands out most to you?
What stands out is how consistent the challenges have been across organizations of very different sizes and industries. Everyone feels the pressure - more vulnerabilities, more tools, more alerts - but very limited time and resources to deal with them.
What surprised me most is not that the threat landscape became more complex, but that many teams are still expected to operate with approaches that were designed for a very different reality. The gap between what security teams are asked to do and what their tooling realistically enables became very visible this year.
Q: Was there anything that changed your thinking in 2025?
Yes - I think I underestimated how much cognitive load security teams are carrying every day. We talk a lot about vulnerabilities, exposures, and risk, but less about the human cost of constantly triaging noise.
This year reinforced for me that better security is not about adding more information. It is about reducing friction, removing uncertainty, and helping people make fewer, better decisions with confidence.
Q: From your perspective, where do organizations struggle most today when it comes to managing cyber risk?
The biggest struggle is prioritization. Most organizations are not short on data. They are short on clarity.
Security teams know there are issues, but they do not always know which ones truly matter right now, which ones can wait, and which ones are already mitigated in practice. That uncertainty slows everything down and creates stress, especially when reporting upwards to management or the board.
Q: You spoke at Nanitor Day about AI and its role in cybersecurity. How do you assess the state of AI in security today?
AI is already useful in cybersecurity, but not in the way it is sometimes portrayed. It is not a magic layer you add on top of broken processes to suddenly make everything work.
Where AI helps today is in handling complexity at scale - summarizing, correlating, validating, and contextualizing information faster than humans can. Where it does not help is replacing judgment or responsibility. That distinction is important.
Q: There is a lot of AI hype suggesting it will “solve” cybersecurity. What do you think is misunderstood?
The biggest misunderstanding is that AI can compensate for poor fundamentals. If asset visibility is incomplete, if exposure data is outdated, or if remediation workflows are unclear, AI will not fix that. It will only operate on flawed inputs.
Good AI amplifies good practices. It does not replace them. That is why we have been very deliberate about how we apply AI at Nanitor.
Q: So how should security leaders think about AI more realistically?
They should think of AI as an assistant, not an authority.
AI should help answer questions faster, reduce repetitive work, and highlight patterns that humans might miss. But accountability still sits with people. Security is ultimately about decisions, trade-offs, and trust. Those are human responsibilities.
Q: Looking ahead to 2026, what role do you see AI playing day-to-day for security teams?
In 2026, AI should quietly take work off people’s plates. Things like validating remediation advice, adapting guidance to the tools an organization actually uses, or reducing the time it takes to understand whether an exposure is real or theoretical.
If AI is doing its job well, it will feel less visible, not more. The goal is not to impress users with AI features. The goal is to help them move faster with fewer mistakes.
Q: How does this thinking influence Nanitor’s product direction going into 2026?
Our focus is on using AI where it strengthens continuous threat exposure management - not where it distracts from it.
That means AI that understands context, environment, and constraints. AI that helps teams validate risk, not just detect it. And AI that supports decision-making rather than overwhelming users with yet another layer of output.
Q: What should boards and business leaders take away from all this?
Cybersecurity is becoming more continuous and more operational. It is less about one-off assessments and more about ongoing visibility and resilience.
AI can support that shift, but leadership should ask the right questions: Does this reduce risk in practice? Does it help teams act faster? Does it make our security posture clearer, not more complicated?
Those are better questions than "Does this tool use AI?"
Q: On a more personal note, what excites you most about building at Nanitor right now?
What excites me is that we are solving real problems for real teams. The feedback we get is not about flashy features. It is about clarity, confidence, and time saved.
That tells me we are on the right path.
Q: Finally, if you could give one piece of advice to security leaders going into 2026, what would it be?
Focus on reducing uncertainty.
Better visibility, better prioritization, and better validation will always beat chasing the latest trend. AI can help with that - but only if it is applied thoughtfully and kept grounded.
In a landscape defined by constant change, that focus on clarity and resilience may prove to be the most valuable innovation of all.
If these reflections resonate, we’re continuing the conversation in 2026.