Written by Melissa Newberg, Head of Intelligence at Seerist
Many of us were there. It is February 24, 2022, and it looks like Russia really is invading Ukraine. My leadership is demanding immediate answers about employee safety, decisions need to be made, and the information landscape is chaotic. There is mis- and disinformation to contend with, every major news outlet is reporting disparate information, and the much beloved (but sometimes problematic) amateur OSINT accounts on X are overrun. Nothing feels certain, but nonetheless there I am, with 50 tabs open, having to distill a flood of information quickly and make sense of it.
Meanwhile, I am cross-referencing known employee and office locations against impact zones that keep shifting as reports come in. What about the team in Warsaw? Are they close enough to be affected if this spreads? All the while, the Chief Security Officer wants talking points and needs to convey to his peers what actions the security team has taken.
Analyst teams in this position know all too well that whatever you put in writing will be forwarded to dozens, if not hundreds, of people and could influence multi-million-dollar decisions. Get it wrong, and you've just misinformed leadership during a crisis. Get it right, and you've helped protect lives. Either way, it comes back to the question of credibility.
Looking back on that day, I found myself wondering what kind of tool could have taken even a small part of the burden off my plate. Which is why a recent roundtable comment stuck with me:
"This was clearly built with an analyst in mind." That one sentence nailed what's wrong with most AI tools: they aren't designed for us, and that has real consequences for credibility.
Credibility and AI
Analysts at this roundtable were super candid about the current state of AI in security tooling. As one participant put it, “AI has become a fad. Everyone wants to throw AI content at you because it’s the cool thing to do, but you don’t really understand how it’s being used or where it’s coming from.”
This is a huge problem for analysts, and it ultimately comes down to professional self-preservation, not resistance to innovation. For an intelligence analyst, credibility is everything, full stop. You can literally lose your job if your analytical credibility is compromised, so any tool that asks you to stake your reputation and livelihood on an opaque AI output is asking far too much.
But there's something deeper at play here, and that's analytical tradecraft. For analysts, it's the foundation of their professional identity. It encompasses source evaluation, reasoning transparency, bias recognition, and a systematic approach to turning information into intelligence. AI tools that operate as black boxes undermine that credibility and violate fundamental tradecraft principles that analysts have spent years developing and refining. And honestly? That's not okay.
AI tools do work, of course. They just often don't work properly for analysts, who have to brief executives during a crisis or explain threat assessments to skeptical stakeholders. The result is tools that create more work instead of less and add to the very problems they claim to solve.
What “Analyst-Forward” Actually Means
Creating AI tools that analysts trust requires understanding three fundamental needs that allow them to maintain their tradecraft and credibility: transparency, control, and workflow integration.
Show Your Work and Demand Transparency
Analysts cannot just accept AI outputs. They need to understand how those outputs were generated and where they came from. This means having access to source materials, understanding the reasoning chains, and knowing which models are being used and why. Sourcing remains critical because analysts need to research and investigate findings to make them truly bespoke to their organizational needs.
True transparency means being able to peek behind the curtain and understand not just what the AI found, but how it got there. It’s the difference between being handed a mysterious report with no author and being walked through the analytical process by a trusted colleague.
Keep the Analyst in the Driver’s Seat
There's a crucial distinction between how analysts use tools like ChatGPT personally and how they encounter AI tools at work. In their personal lives, they control the inputs, craft the questions, and guide the conversation. But most workplace AI tools operate on a "push" model: here's an AI output, accept it and move on.
Analysts want to maintain a “pull” relationship with AI tools as much as they can—control over inputs, the ability to influence outputs, and confidence in how the tool serves their specific needs. Again, this is ultimately about maintaining the professional standards that analyst credibility depends on.
Reduce Burden, Don’t Add to It
The best AI tools solve problems analysts actually have, in ways that fit naturally into existing workflows. For small security teams wearing multiple hats, any new tool needs to reduce cognitive load, not add to it, to be of any value.
This becomes especially critical during crises, when teams are stretched thin. The last thing time-poor analysts need is another tool that requires dedicated time to manage. The content might be solid, but if you don't have the bandwidth for it and it isn't reliably trustworthy, it's not much use.
Making Analysts Better, Not Obsolete
The most successful AI tools in threat intelligence will be those that make analysts faster and more effective, not those that try to replace them entirely. While there are many things AI can do, what excites me most is its ability to eliminate the tedious scanning, aggregation, and sorting work that prevents analysts from doing what they do best: critical thinking and strategic assessment. After all, every analyst wishes they had more time for that work anyway.
Instead of asking “How can AI automate threat analysis?” we should be asking “How can AI give analysts more time to think?” The organizations that recognize this distinction and embrace AI as analyst enablement will build more trustworthy tools and more effective security programs.
Building these tools requires respecting those three fundamentals: transparency that honors tradecraft principles, control that keeps analysts in the driver’s seat, and workflow integration that reduces cognitive load instead of adding to it. That February morning in 2022 required all three: transparency to verify conflicting reports, control to guide the analysis, and workflow integration to manage the chaos without adding to it. When AI tools violate these principles, they fail the trust test that defines analyst credibility.
At a time when everyone wants to build and incorporate AI, the real differentiation comes from being obviously analyst-centered. At the end of the day, the most sophisticated AI in the world is worthless if the people who need to use it don’t trust it. And trust, as our roundtable participants reminded us, isn’t just nice to have in the intelligence world—it’s everything.