The Office of the Director of National Intelligence’s 2025 Annual Threat Assessment (ATA) offers a sobering but critical overview of the evolving global threat environment. It highlights a complex web of adversaries and emerging risks—ranging from state-sponsored cyber campaigns and transnational criminal networks to decentralized terrorist threats and influence operations—all of which present mounting challenges to U.S. national security. While the threats are familiar in type, they are growing in scale, complexity, and velocity.

The ATA makes one thing abundantly clear: the traditional intelligence model—built on classified collection and compartmentalized analysis—is no longer sufficient on its own. The Intelligence Community (IC) must adopt a more agile, scalable, and integrated approach to managing information and generating insight. Central to this transformation is the expanded use of AI-human teaming, particularly in the exploitation of open-source intelligence (OSINT).

By combining artificial intelligence with expert human analysis, AI-human teaming enables the IC to detect, assess, and respond to threats in near real time. It bridges the speed of machines with the judgment of analysts—allowing agencies not only to observe the world more effectively but also to understand and anticipate change before it escalates into crisis.

 

A New Era of Threat Complexity

The ATA identifies four state actors—China, Russia, Iran, and North Korea—as the primary strategic threats. Each continues to develop advanced cyber capabilities, strategic weapons systems, and influence operations that cut across conventional, economic, and information domains.

China, in particular, is positioned as the United States’ most consequential competitor. It seeks global leadership in critical technology sectors—such as AI, quantum computing, semiconductors, and biotechnology—while simultaneously expanding its military, asserting its power in the Indo-Pacific, and building influence networks that penetrate global institutions and media ecosystems.

Russia, though weakened by the ongoing conflict in Ukraine, remains a potent cyber and influence actor. It continues to leverage disinformation, intelligence operations, and critical infrastructure probing as part of a broader strategy to undermine NATO cohesion and democratic stability.

Iran and North Korea pose asymmetric challenges—combining regional aggression, proxy warfare, missile development, and, increasingly, cybercrime. Both rely on cyber theft, ransomware, and cryptocurrency fraud to fund illicit activity and evade sanctions.

Layered atop these state-driven threats are non-state actors: transnational criminal organizations fueling the fentanyl crisis; terrorist groups operating in digital safe havens; and ideologically motivated individuals radicalized through online platforms.

The thread connecting these actors is their use of publicly available information—social media, news outlets, niche forums, and economic signals—to plan, coordinate, recruit, and mislead. For the IC, this underscores a stark truth: the open-source environment is now a primary battleground for national security.

 

OSINT and the Scale Problem

Open-source intelligence is no longer a peripheral discipline—it is a critical pillar of national defense. But it comes with unique challenges. The volume of open data is immense: millions of articles published daily, billions of social media interactions, continuous video, imagery, and evolving narratives across dozens of languages and platforms.

Traditional analytical workflows were not built for this scale. Analysts cannot manually process terabytes of unstructured data every day. Even with automated scraping tools, distinguishing between noise and signal remains a monumental task. In fast-moving crises—cyber intrusions, conflict escalation, terrorist attacks—the cost of delay is measured in lives, resources, and strategic setbacks.

That is why AI-human teaming has emerged as a transformative solution. It enables machines to do what they do best—ingest, sort, classify, and detect patterns—while empowering humans to apply context, judgment, and decision-making to the resulting intelligence.
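
To make that division of labor concrete, the sketch below shows one minimal way such a split can be wired together: a machine-assigned relevance score routes each item either to an analyst review queue or to an archive. The class, scores, and threshold are illustrative assumptions, not a description of any specific platform.

```python
# Minimal sketch of the machine-triage / human-review split described above.
# The class, scores, and threshold are illustrative, not a real product API.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class OsintItem:
    source: str       # e.g. a news outlet or social platform
    text: str         # raw content
    category: str     # machine-assigned event type
    score: float      # machine-assigned relevance/confidence, 0.0 to 1.0

def triage(items: List[OsintItem], review_threshold: float = 0.7) -> Dict[str, List[OsintItem]]:
    """Machines ingest, sort, and score; items above the threshold go to a human queue."""
    queues: Dict[str, List[OsintItem]] = {"analyst_review": [], "archive": []}
    for item in items:
        if item.score >= review_threshold:
            queues["analyst_review"].append(item)   # analysts verify and add context
        else:
            queues["archive"].append(item)          # kept for later pattern analysis
    return queues

batch = [
    OsintItem("regional_news", "Port workers announce strike", "civil_unrest", 0.82),
    OsintItem("social_media", "Harbor weather photos", "ambient", 0.12),
]
print(len(triage(batch)["analyst_review"]), "item(s) routed to analysts")  # prints: 1
```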

 

What AI-Human Teaming Actually Looks Like

In practice, AI-human teaming involves more than automating alerts or running keyword searches. It means applying a sophisticated suite of machine learning models trained to detect anomalies, categorize event types, monitor sentiment shifts, and flag coordinated influence campaigns. These models operate across structured and unstructured data sources, continuously scanning for signals of instability, disruption, or strategic intent.
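
As one simplified illustration of the anomaly-detection piece, the snippet below flags days when mentions of a topic spike well above a trailing baseline. Operational systems rely on far richer models; the window, threshold, and counts here are invented purely to show the shape of the task.

```python
# Illustrative anomaly flagging on a daily mention-count series using a rolling z-score.
# A deployed system would use richer models; this only sketches the idea.
from statistics import mean, stdev

def flag_spikes(daily_counts, window=14, threshold=3.0):
    """Return indices of days whose count sits `threshold` standard deviations
    above the trailing `window`-day baseline."""
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_counts[i] - mu) / sigma >= threshold:
            flagged.append(i)
    return flagged

# Example: mentions of a topic hold steady, then jump on the final day.
counts = [40, 38, 41, 39, 42, 40, 37, 43, 41, 39, 40, 42, 38, 41, 120]
print(flag_spikes(counts))  # -> [14]
```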

Where the machine reaches its limits, human analysts step in. They verify emerging events, assess the credibility of sources, interpret context, and provide judgment. The result is a refined output: not just a data point or headline, but an actionable insight—delivered at speed and tailored to specific mission requirements.

Importantly, AI-human teaming scales. It allows for simultaneous monitoring of hundreds of locations, topics, or actors without overwhelming human capacity. It creates a force multiplier effect—augmenting, not replacing, the analyst’s role.

 

Real-World Applications Across the IC

AI-human teaming is already proving effective across a variety of intelligence use cases. In border security, these systems can monitor migrant flows, criminal networks, and local unrest indicators to help forecast surges and inform operational postures. In counter-narcotics, they can identify drug trafficking corridors by correlating online chatter with ground-level incidents and geospatial risk.
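
As a toy illustration of the correlation idea in the counter-narcotics example, the snippet below compares weekly chatter volume with reported incidents per corridor. The corridor names and figures are invented, and a production workflow would control for many more variables before an analyst drew any conclusion.

```python
# Toy correlation between weekly online-chatter volume and reported incidents per corridor.
# Data and corridor names are invented for illustration; requires Python 3.10+.
from statistics import correlation

chatter =   {"corridor_a": [12, 18, 25, 31, 40], "corridor_b": [22, 20, 23, 21, 22]}
incidents = {"corridor_a": [1, 2, 3, 5, 6],      "corridor_b": [2, 2, 1, 2, 2]}

for corridor in chatter:
    r = correlation(chatter[corridor], incidents[corridor])
    print(corridor, round(r, 2))  # a high r suggests chatter tracks ground-level activity
```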

In influence operations, AI can detect coordinated inauthentic behavior and synthetic media—while human teams validate authenticity and interpret the broader intent behind the content. This has profound implications for election security, public trust, and counter-disinformation strategies.
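
One simple signal of coordination is many distinct accounts posting near-identical text within the same short window. The sketch below captures only that single signal; the field names, the five-minute bucket, and the account threshold are assumptions for illustration, not a full detection method.

```python
# Crude sketch of one coordination signal: several distinct accounts posting
# near-identical text inside the same short time window.
from collections import defaultdict

def coordination_clusters(posts, min_accounts=3, bucket_seconds=300):
    """posts: iterable of dicts with 'account', 'text', and 'timestamp' (epoch seconds)."""
    groups = defaultdict(set)
    for p in posts:
        normalized = " ".join(p["text"].lower().split())   # collapse case and whitespace
        bucket = p["timestamp"] // bucket_seconds          # five-minute time bucket
        groups[(normalized, bucket)].add(p["account"])
    # Keep groups where several distinct accounts pushed the same text at the same time
    return [accounts for accounts in groups.values() if len(accounts) >= min_accounts]

posts = [
    {"account": "a1", "text": "Breaking: the dam has failed",  "timestamp": 1_000_000},
    {"account": "a2", "text": "breaking:  the dam has failed", "timestamp": 1_000_100},
    {"account": "a3", "text": "Breaking: the dam has failed",  "timestamp": 1_000_150},
    {"account": "a4", "text": "Lovely weather today",          "timestamp": 1_000_050},
]
print(coordination_clusters(posts))  # one cluster containing a1, a2, a3
```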

For cyber defense, AI systems working across open sources can detect digital reconnaissance, ransomware group activity, and geopolitical reactions to cyber incidents—often before traditional cyber indicators trigger alerts.

At the strategic level, these capabilities help anticipate economic coercion, infrastructure sabotage, or military posturing by monitoring shifts in sentiment, supply chain disruptions, or unusual movements in trade and transportation.

 

Operationalizing AI-Human Teaming at Scale

For the Intelligence Community to fully realize the value of AI-human teaming, several operational enablers must be in place:

  1. Unified Data Integration – Open-source systems must be interoperable with classified workflows and analytical environments. APIs, data normalization, and federated search tools can bridge the open-closed divide.
  2. Mission-Centric Customization – Alerts and outputs must be aligned with each agency’s core objectives, regions of interest, and tactical frameworks. Analysts should be able to configure thresholds, filters, and priority indicators, as in the sketch after this list.
  3. Secure, Compliant Infrastructure – Tools must adhere to federal cybersecurity frameworks, privacy protections, and data handling regulations. Trust is essential.
  4. Analyst-Centered Design – Interfaces and user experiences should enhance, not hinder, the analyst’s workflow. This includes intuitive dashboards, narrative brief generation, and integrated training support.
  5. Public-Private Collaboration – Many of the best AI models and platforms originate in the private sector. The IC should pursue structured, mission-aligned partnerships to accelerate adoption and tailor commercial innovation to government needs.
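
To give a rough sense of what the mission-centric customization in item 2 could look like in practice, the sketch below defines a hypothetical mission profile with configurable regions, topics, and alert thresholds. None of the field names refer to an actual product; they simply show the kind of knobs analysts would expect to control.

```python
# Hypothetical mission profile illustrating the knobs described in item 2:
# thresholds, filters, and priority indicators tuned per agency mission.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MissionProfile:
    regions_of_interest: List[str]
    priority_topics: List[str]
    alert_threshold: float = 0.75     # minimum machine confidence before alerting
    languages: List[str] = field(default_factory=lambda: ["en"])
    route_to_analyst: bool = True     # keep a human in the loop for every alert

border_security = MissionProfile(
    regions_of_interest=["southwest_border"],
    priority_topics=["migrant_flows", "smuggling_networks", "local_unrest"],
    alert_threshold=0.6,              # tolerate more noise in exchange for earlier warning
)
print(border_security)
```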

 

From Collection to Anticipation

The Intelligence Community has always excelled at classified collection. But in 2025 and beyond, strategic advantage will belong to those who can connect the dots in open sources faster than adversaries can act on them.

AI-human teaming doesn’t just enhance intelligence—it transforms it. It offers a way to move from reactive threat response to proactive foresight. From siloed data streams to fused insights. From fragmented tools to interoperable, mission-ready solutions.

As the 2025 ATA shows, the threat environment is only getting more challenging. To meet it head-on, the IC must adapt, and it must do so with speed, scale, and precision. AI-human teaming is the key to that adaptation. It is no longer a future capability. It is a current necessity.
