
Glossary

Key terms defined

See these concepts in action — Rolli IQ applies authenticity scoring and narrative intelligence in real time.

Start free trial →

Narrative Intelligence

Platform

Narrative intelligence is the practice of monitoring how stories form, spread, and shift across digital platforms in real time — distinguishing organic public discourse from coordinated amplification.

Where traditional social listening counts mentions and measures sentiment, narrative intelligence asks structural questions: where did this story originate, how is it mutating as it spreads, which actors are driving amplification, and is the apparent momentum authentic? The goal is not a volume count but a map — one that tells analysts where a narrative is, where it is heading, and what forces are moving it.

Narrative intelligence practitioners track three primary signal types: velocity (how fast content is spreading), amplification structure (who is sharing and whether the network shows coordination signatures), and authenticity (whether observed activity reflects genuine human behavior). The combination produces intelligence that is qualitatively more useful than volume-based monitoring in adversarial information environments.
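
To make the three signal types concrete, here is a minimal sketch in Python, assuming a hypothetical `NarrativeSignal` record; the field names, units, and thresholds are illustrative, not Rolli IQ's actual schema.

```python
from dataclasses import dataclass

@dataclass
class NarrativeSignal:
    """One analysis snapshot for a narrative cluster (hypothetical schema)."""
    velocity: float      # spread rate, e.g. posts per hour, smoothed
    coordination: float  # 0-1 strength of coordination signatures in the share network
    authenticity: float  # 0-100 confidence that activity reflects genuine human behavior

    def is_priority(self) -> bool:
        # A fast-moving story with weak authenticity is the classic escalation case.
        return self.velocity > 500 and self.authenticity < 30

signal = NarrativeSignal(velocity=1200.0, coordination=0.85, authenticity=22.0)
print(signal.is_priority())  # True
```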

Rolli IQ

Rolli IQ delivers real-time narrative intelligence across 8 platforms via dashboard and API — scoring every signal for authenticity before it reaches your team.

Stanford Internet Observatory — Research on information operations

Coordinated Inauthentic Behavior (CIB)

Detection

Coordinated inauthentic behavior (CIB) refers to the use of multiple accounts acting in concert to artificially amplify narratives or manufacture the appearance of consensus, while deliberately concealing the coordination.

The term was formalized by Meta's security team in 2018 and is now standard across disinformation research. The 'inauthentic' qualifier is critical: organizations that coordinate openly — like a company promoting its own content through official channels — are not engaging in CIB. What defines CIB is the deception layer, where coordinated activity is presented as independent organic expression.

Behavioral signatures of CIB include synchronized posting windows, shared language templates across nominally independent accounts, anomalous cross-amplification within tight network clusters, and account lifecycle patterns inconsistent with organic growth. Detection is inherently probabilistic — no single signal confirms coordination, but the combination of signals allows analysts to assign confidence levels to observed activity.
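
Because detection is probabilistic, individual signatures are usually combined into a single confidence estimate. A minimal sketch of one common approach, naive log-odds combination, follows; the likelihood ratios and prior are invented for illustration and are not Rolli IQ's model.

```python
import math

# Hypothetical likelihood ratios: how much more often each signature appears
# in confirmed CIB networks than in organic activity (illustrative values).
LIKELIHOOD_RATIOS = {
    "synchronized_posting": 8.0,
    "shared_templates": 12.0,
    "anomalous_cross_amplification": 6.0,
    "implausible_account_lifecycle": 5.0,
}

def coordination_probability(observed: set[str], prior: float = 0.01) -> float:
    """Combine behavioral signals into a posterior coordination probability.

    No single signal confirms coordination; each observed signature shifts
    the log-odds, so confidence grows with the combination of signals.
    """
    log_odds = math.log(prior / (1 - prior))
    for signal in observed:
        log_odds += math.log(LIKELIHOOD_RATIOS[signal])
    return 1 / (1 + math.exp(-log_odds))

print(f"{coordination_probability({'shared_templates'}):.2f}")    # ~0.11
print(f"{coordination_probability(set(LIKELIHOOD_RATIOS)):.2f}")  # ~0.97
```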

Rolli IQ

Rolli IQ's authenticity scoring detects CIB clusters in real time, flagging them hours before mainstream media pickup. See real examples in our case studies →

Meta Threat Intelligence — CIB takedown reports

Authenticity Score

Analysis

An authenticity score is a 0–100 confidence metric estimating the probability that observed social media activity originates from genuine independent human behavior rather than coordinated inauthentic operation.

Rolli IQ's Authenticity Confidence Score weighs five signal categories: posting velocity (is activity patterned like human behavior or automation?), account age and lifecycle (do accounts show organic growth or sudden activation?), network clustering (do accounts cross-amplify each other at rates inconsistent with their apparent audience?), language similarity (do posts share template phrasing?), and cross-platform correlation (is behavior consistent across platforms?).

Scores below 30 indicate high-confidence coordination — the combination of behavioral signals exceeds any plausible organic explanation. Scores above 70 indicate the balance of evidence favors genuine human engagement. The 30–70 range warrants analyst review, as edge cases like synchronized fan communities can resemble coordination signatures without being inauthentic.
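
As a rough illustration of how such a composite score could be assembled and banded, consider the sketch below. The weights, sub-scores, and cut-offs are hypothetical; Rolli IQ's actual weighting is not public.

```python
# Hypothetical weights for the five signal categories (must sum to 1.0).
WEIGHTS = {
    "posting_velocity": 0.25,
    "account_lifecycle": 0.20,
    "network_clustering": 0.25,
    "language_similarity": 0.15,
    "cross_platform_consistency": 0.15,
}

def authenticity_score(subscores: dict[str, float]) -> float:
    """Blend five 0-100 sub-scores into a single 0-100 confidence value."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

def band(score: float) -> str:
    """Map the score to the triage bands described above."""
    if score < 30:
        return "high-confidence coordination"
    if score > 70:
        return "likely genuine engagement"
    return "analyst review"

cluster = {
    "posting_velocity": 10,           # machine-like cadence
    "account_lifecycle": 20,          # sudden activation of dormant accounts
    "network_clustering": 15,         # dense mutual amplification
    "language_similarity": 25,        # template phrasing
    "cross_platform_consistency": 40,
}
score = authenticity_score(cluster)
print(f"{score:.0f} -> {band(score)}")  # 20 -> high-confidence coordination
```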

Rolli IQ

Rolli IQ expresses authenticity as a 0–100 float per narrative cluster. The score drives dashboard ALERT/WARN signals and API response fields.

Narrative Velocity

Analysis

Narrative velocity measures how quickly a piece of content or story frame is spreading across social platforms — distinguishing sudden acceleration events from organic growth curves.

A spike in volume may reflect organic viral spread or coordinated injection. Velocity analysis distinguishes the two: organic stories typically show gradual acceleration as they pass through successive network layers. Coordinated content often shows a near-instantaneous velocity spike as multiple accounts post simultaneously, followed by a brief plateau and decay once the coordinated layer has been exhausted.

Velocity is most useful when combined with authenticity scoring. High velocity paired with low authenticity is the signature pattern of a coordinated narrative injection. High velocity with high authenticity signals genuine viral momentum that warrants a communications response.
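
A toy classifier for the two curve shapes might look like the sketch below; the thresholds are illustrative, and production systems fit far richer growth models than this.

```python
def classify_velocity_curve(hourly_counts: list[int]) -> str:
    """Rough heuristic separating the two spread shapes described above.

    Organic spread tends to accelerate gradually across network layers;
    a coordinated injection jumps to peak almost immediately, then decays.
    """
    peak = max(hourly_counts)
    peak_hour = hourly_counts.index(peak)
    # Average volume before the peak, guarding against an immediate peak.
    baseline = max(1, sum(hourly_counts[:peak_hour]) / max(1, peak_hour))
    spike_ratio = peak / baseline
    if peak_hour <= 1 or spike_ratio > 20:
        return "injection-like: near-instantaneous spike"
    return "organic-like: gradual acceleration"

print(classify_velocity_curve([2, 5, 14, 40, 110, 260]))    # organic-like
print(classify_velocity_curve([3, 480, 420, 150, 60, 20]))  # injection-like
```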

Rolli IQ

Velocity is one of three primary output signals in Rolli IQ alongside momentum score and authenticity confidence.

Social Listening

Platform

Social listening is the practice of monitoring social media platforms for mentions of a brand, keyword, or topic — typically measuring volume and sentiment without assessing whether activity is authentic.

Social listening tools ingest public content, aggregate it by keyword or topic, and apply sentiment classification to characterize whether conversation is positive, negative, or neutral. These capabilities are genuinely useful for marketing and brand monitoring in stable environments where coordinated manipulation is not a factor.

The critical limitation of social listening is the authenticity gap: all signal sources are treated as equivalent. A wave of negative brand mentions generated by a coordinated campaign looks identical to organic consumer frustration. In adversarial information environments — product crises, political events, regulatory proceedings — acting on social listening data without authenticity context can lead to strategic errors.

Rolli IQ

Rolli is designed to complement or replace social listening for teams where the distinction between organic and coordinated matters.

First Draft — Guide to information disorder

Influence Operation

Detection

An influence operation is a coordinated effort to manipulate public opinion through deceptive or inauthentic means — including fake accounts, coordinated messaging, and synthetic amplification of narratives.

Influence operations range in scale from small targeted astroturfing campaigns to state-sponsored operations involving thousands of accounts across multiple platforms. What they share is the use of deception to manufacture false impressions of organic public sentiment — making fringe positions appear mainstream, amplifying divisive narratives, or discrediting individuals with manufactured consensus.

The Atlantic Council's Digital Forensic Research Lab (DFRLab) and the Stanford Internet Observatory have documented hundreds of influence operations since 2016, establishing a research baseline that informs commercial detection tools. Detection requires behavioral analysis rather than content analysis alone, since modern influence operations use plausible content that would not be flagged by fact-checkers.

Rolli IQ

Security and trust safety teams use Rolli to detect influence operations hours before mainstream media pickup — giving teams time to prepare responses rather than react.

Atlantic Council DFRLab — Influence operation research

Synthetic Amplification

Detection

Synthetic amplification occurs when the apparent popularity or reach of content is artificially inflated through bots, fake accounts, or coordinated inauthentic behavior — creating a false impression of organic consensus.

Synthetic amplification exploits the social proof heuristic: people are more likely to engage with content that appears to have existing traction. By manufacturing that initial traction through inauthentic activity, synthetic amplification campaigns trigger genuine engagement from real users who encounter content that appears already popular.

The challenge for detection is that users who amplify synthetically seeded content are themselves genuine — their activity can overwhelm the original inauthentic signal in aggregate metrics. Narrative intelligence tools that analyze the original injection cluster separately from downstream organic engagement can isolate the synthetic layer even after dilution by real activity.
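
A minimal sketch of that separation: isolate the earliest seeding window and score it independently of the later, organically dominated volume. The 30-minute window and the toy post schema are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Toy post schema: (timestamp, account_id, looks_inauthentic).
Post = tuple[datetime, str, bool]

def injection_layer(posts: list[Post], window_minutes: int = 30) -> list[Post]:
    """Isolate the seeding window so it can be scored separately from the
    downstream organic engagement that later dominates aggregate metrics."""
    posts = sorted(posts, key=lambda p: p[0])
    cutoff = posts[0][0] + timedelta(minutes=window_minutes)
    return [p for p in posts if p[0] <= cutoff]

def seed_inauthenticity(posts: list[Post]) -> float:
    """Share of the injection layer that carries inauthenticity flags."""
    seed = injection_layer(posts)
    return sum(p[2] for p in seed) / len(seed)

t0 = datetime(2025, 3, 1, 9, 0)
posts = [(t0 + timedelta(minutes=m), f"acct{m}", m < 20) for m in range(0, 180, 10)]
print(f"{seed_inauthenticity(posts):.0%} of the seeding layer looks inauthentic")  # 50%
```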

Momentum Score

Analysis

A momentum score measures the current acceleration of a narrative across social platforms — combining volume growth rate, cross-platform spread, and engagement velocity to indicate whether a story is growing, stable, or declining.

Momentum is distinct from volume: a narrative can have high total volume while losing momentum (decelerating), or low total volume while gaining momentum rapidly (accelerating). For communications teams making escalation decisions, momentum is often more operationally relevant than raw volume — a decelerating narrative may not warrant senior leadership attention regardless of its absolute scale.

High-momentum, low-authenticity narratives represent the highest-priority escalation case: a story spreading rapidly and driven predominantly by coordinated amplification. This indicates an active influence operation that has not yet reached critical mass in mainstream media, giving teams a window to prepare responses.
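
The escalation logic reduces to a small decision matrix. A sketch, assuming both inputs are 0-100 scores and using invented band cut-offs:

```python
def escalation_tier(momentum: float, authenticity: float) -> str:
    """Toy escalation matrix combining the two signals described above."""
    accelerating = momentum >= 60
    coordinated = authenticity < 30
    if accelerating and coordinated:
        return "P1: likely active influence operation, escalate now"
    if accelerating:
        return "P2: genuine viral momentum, prepare comms response"
    if coordinated:
        return "P3: coordinated but stalling, monitor"
    return "P4: routine monitoring"

print(escalation_tier(momentum=85, authenticity=18))  # P1
```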

Rolli IQ

Rolli IQ surfaces momentum score alongside authenticity confidence and velocity in every narrative analysis.

Signal-to-Noise Ratio

Analysis

In social media monitoring, signal-to-noise ratio describes the proportion of meaningful, actionable information relative to irrelevant or low-quality content — a ratio improved by filtering out inauthentic amplification.

The social media information environment generates enormous volume, most of which is not strategically relevant. For communications and security teams, the practical challenge is not access to data but the ability to identify the fraction that actually requires a response. Low signal-to-noise ratio leads to alert fatigue — teams stop trusting their monitoring tools because the tools escalate too much.

Authenticity scoring directly improves signal-to-noise ratio by filtering out coordinated inauthentic amplification, which by definition does not represent genuine public sentiment. Rolli IQ's authenticity scoring helps teams significantly reduce false-alarm escalations by separating coordinated noise from organic signal.

Intelligence Brief

Platform

An intelligence brief is a structured document summarizing a narrative's origin, amplification pattern, authenticity assessment, and recommended response — enabling leadership to make informed decisions quickly.

An effective intelligence brief answers four questions: What is the narrative and how is it framed across platforms? Where did it originate and what is its amplification trajectory? How authentic is the engagement driving it? And what is the recommended response posture? For communications teams, the brief is the output that justifies action or non-action — it must be defensible, evidence-backed, and fast.

Manual intelligence briefs typically require 2–4 hours of analyst time. Automated narrative intelligence platforms can produce structured briefs in under 30 minutes by combining real-time signal aggregation, authenticity scoring, and structured output formats.
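
The four questions translate naturally into a structured output format. A toy schema follows; the field names are hypothetical, not Rolli IQ's actual brief format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class IntelligenceBrief:
    """The four questions an effective brief answers, as a toy schema."""
    narrative_framing: str        # what the narrative is and how it is framed
    origin_and_trajectory: str    # where it started and how it is spreading
    authenticity_assessment: str  # how genuine the driving engagement is
    recommended_posture: str      # act, monitor, or stand down, and why

brief = IntelligenceBrief(
    narrative_framing="Product-safety claim, framed as a whistleblower leak",
    origin_and_trajectory="Seeded on two Telegram channels; now on X and Reddit",
    authenticity_assessment="Authenticity 24/100; seeding layer heavily coordinated",
    recommended_posture="Prepare holding statement; do not amplify yet",
)
print(json.dumps(asdict(brief), indent=2))
```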

Rolli IQ

Rolli IQ generates structured intelligence briefs in under 20 minutes via the Rolli IQ Agents feature.

Cross-Platform Correlation

Detection

Cross-platform correlation is the practice of analyzing coordinated behavior across multiple social media platforms simultaneously — identifying networks that operate across X, Reddit, Telegram, and other platforms as part of a unified campaign.

Modern influence operations rarely confine themselves to a single platform. Campaigns typically originate on fringe or alternative platforms before being injected into mainstream platforms through coordinated seeding activity. Single-platform monitoring misses this trajectory entirely — it can only detect the campaign after it has already achieved mainstream traction.

Cross-platform correlation matches behavioral patterns across platforms: the same account personas appearing in multiple places, timing correlations between posting events, and language template matches that suggest a shared origin. This analysis requires simultaneous monitoring infrastructure across 8+ platforms, which most enterprise organizations cannot build and maintain independently.
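
The simplest timing signal is correlation between per-hour posting volumes on two platforms. A sketch using only the standard library (statistics.correlation requires Python 3.10+); real pipelines layer persona matching and template analysis on top of this.

```python
from statistics import correlation  # Python 3.10+

def timing_correlation(series_a: list[float], series_b: list[float]) -> float:
    """Pearson correlation between hourly posting volumes on two platforms.

    Sustained high correlation between nominally unrelated communities is
    one timing signal; persona and language-template matching are omitted.
    """
    return correlation(series_a, series_b)

telegram = [3, 4, 2, 80, 75, 12, 5, 3]
x_posts = [5, 6, 4, 95, 88, 20, 7, 6]
print(f"{timing_correlation(telegram, x_posts):.2f}")  # ~1.00, a suspicious match
```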

Rolli IQ

Rolli IQ monitors 8 platforms simultaneously and surfaces cross-platform correlation in every narrative analysis.

Oxford Internet Institute — Computational Propaganda Project

Trust & Safety (T&S)

Platform

Trust & Safety (T&S) refers to the teams and practices responsible for detecting and mitigating abusive, inauthentic, or harmful content and behavior on digital platforms — including coordinated inauthentic behavior, influence operations, and platform manipulation.

Platform T&S teams at companies like Meta, X, and Google operate at massive scale, monitoring billions of interactions for policy violations and coordinated manipulation. Their takedown reports — documenting networks of inauthentic accounts removed for violating platform policies — form the primary public evidence base for coordinated behavior research.

Enterprise T&S teams outside of platforms face a different challenge: they lack the platform-level behavioral data that enables detection at scale, and they often need to act before platform teams complete their investigations. Narrative intelligence tools fill this gap by providing external behavioral analysis that identifies coordination signatures without requiring access to private platform data.

Rolli IQ

Security and trust safety teams at enterprise organizations use Rolli to detect coordination targeting their brand or sector — typically 2–4 hours before platform T&S actions or mainstream media coverage.

Deepfake

Detection

A deepfake is AI-generated synthetic media — video, audio, or imagery — produced using deep learning models to convincingly depict events, statements, or appearances that never occurred, with applications ranging from entertainment to weaponized disinformation.

Deepfakes are produced primarily through generative adversarial networks (GANs) and diffusion models that learn the statistical distribution of a target's facial movements, vocal patterns, or writing style, then generate novel outputs indistinguishable from authentic media at casual inspection. MIT Media Lab research has demonstrated that synthetic video can fool human reviewers at rates exceeding 70%, while Sensity AI's threat intelligence reports documented over 500,000 deepfake videos circulating online by mid-2024 — a 550% increase from 2019. The technology's dual-use nature is critical: the same architectures that power Hollywood visual effects and accessibility tools also enable non-consensual pornography, financial fraud via voice cloning, and political disinformation campaigns timed to election cycles.

For communications and security practitioners, deepfakes represent an asymmetric threat: production cost is near zero while detection and debunking require specialized forensic tools and time the news cycle rarely grants. Influence operations increasingly deploy deepfake audio clips — cheaper and harder to verify than video — to fabricate quotes from executives or officials. The operational response requires both technical detection (artifact analysis, provenance verification) and organizational preparedness (pre-authenticated media channels, rapid-response protocols). Organizations that wait until a deepfake surfaces to build their response playbook are already behind.

Rolli IQ

Rolli IQ flags synthetic media artifacts within narrative clusters, helping teams distinguish AI-generated content from organic media before it reaches mainstream amplification.

MIT Media Lab — Detect Fakes research

Astroturfing

Detection

Astroturfing is the practice of orchestrating coordinated campaigns — through fake accounts, paid operatives, or front organizations — to manufacture the false appearance of spontaneous, grassroots public support or opposition.

The term derives from AstroTurf, the synthetic grass substitute, reflecting how manufactured consensus mimics genuine grassroots activity. The Oxford Internet Institute's Computational Propaganda Project has documented organized astroturfing operations in over 80 countries, ranging from government-directed social media brigades to corporate reputation campaigns using purchased reviews and forum posts. Unlike organic advocacy — where individuals independently choose to support a cause — astroturfing involves centralized direction, financial incentives, and deliberate concealment of the coordinating entity. Detection relies on behavioral clustering: astroturfing networks typically exhibit synchronized activation times, shared linguistic templates, and coordinated engagement patterns that diverge statistically from organic community behavior.

Astroturfing is particularly damaging in regulatory, electoral, and public health contexts, where perceived public sentiment directly influences policy outcomes. A 2023 study published in Nature Human Behaviour found that astroturfed public comment campaigns during U.S. federal rulemaking proceedings generated up to 90% of total submissions on contested regulations, effectively drowning out legitimate public input. For enterprise teams, astroturfing targeting a brand — whether orchestrated by competitors, activists, or state actors — can distort market research, mislead crisis response teams, and corrupt social listening data that leadership relies on for strategic decisions.

Rolli IQ

Rolli IQ's authenticity scoring identifies astroturfing campaigns by detecting coordination signatures — synchronized posting, shared templates, and anomalous network clustering — across 8 platforms simultaneously.

Oxford Internet Institute — Computational Propaganda Project

Information Warfare

Detection

Information warfare is the strategic deployment of information, disinformation, and narrative manipulation to achieve military, political, or commercial objectives — operating across the full spectrum from psychological operations to computational propaganda.

Information warfare is distinct from cybersecurity, though the two are frequently conflated. Cybersecurity concerns the integrity of systems and data; information warfare concerns the integrity of narratives and perception. NATO's Strategic Communications Centre of Excellence (StratCom COE) defines it as the deliberate use of information to undermine an adversary's decision-making capacity while protecting one's own. Modern information warfare combines centuries-old psychological operations doctrine with 21st-century infrastructure: social media platforms, algorithmically amplified content, AI-generated text, and real-time cross-platform coordination. The Russian doctrine of 'information confrontation' and China's 'Three Warfares' strategy (public opinion warfare, psychological warfare, legal warfare) represent the most extensively documented state-level frameworks.

For enterprise organizations, information warfare manifests as targeted narrative campaigns designed to damage reputation, manipulate stock prices, disrupt regulatory proceedings, or undermine consumer trust. The 2025 landscape shows increasing convergence between state-sponsored and commercially motivated information warfare, with techniques originally developed for geopolitical influence operations now routinely deployed in corporate competitive contexts. Rolli's tracking of 40+ coordinated campaigns in 2025 alone reveals that the median time between campaign launch and mainstream media amplification is less than 72 hours — a window that is only actionable with automated narrative intelligence.

Rolli IQ

Rolli IQ provides the early-warning intelligence layer that security teams need to detect information warfare targeting their organization before narratives reach mainstream amplification.

NATO StratCom COE — Information warfare research

Sock Puppet Account

Detection

A sock puppet account is a fabricated online identity created and operated by an individual or organization to deceive other users about the account's true origin, affiliation, or independence — typically deployed as part of a coordinated network to simulate organic consensus.

Sock puppets are distinct from bot accounts in a critical respect: they are human-operated (or human-curated) personas designed to pass platform authenticity checks and human scrutiny. Meta's Threat Intelligence team has documented networks where a single operator manages 10–50 sock puppet accounts, each with distinct biographical details, profile images (often AI-generated), and posting histories carefully cultivated over weeks or months before activation. This 'aging' process — creating accounts months before deploying them in a coordinated campaign — makes sock puppets significantly harder to detect than bot accounts, which typically exhibit machine-speed posting patterns. Rolli's analysis of 8.4 million labeled accounts shows that sock puppet networks account for approximately 18% of coordinated inauthentic behavior incidents, but generate disproportionate impact because their content appears credible to both human analysts and platform integrity systems.

Detection of sock puppet networks requires behavioral analysis at the network level rather than the individual account level. A single sock puppet in isolation may be indistinguishable from a genuine low-activity user. However, when analyzed as a cluster, sock puppet networks reveal coordination signatures: correlated activation times, mutual amplification patterns, shared infrastructure (IP ranges, device fingerprints), and linguistic convergence that exceeds what independent users produce. For trust and safety teams, the operational challenge is that sock puppet detection is inherently probabilistic — false positives risk silencing legitimate users, while false negatives leave coordinated networks intact.
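
A minimal illustration of that network-level analysis: compare activation-hour overlap pairwise, so that accounts which look unremarkable alone surface as a correlated cluster. The similarity threshold is invented for illustration.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def correlated_pairs(active_hours: dict[str, set[int]], threshold: float = 0.8):
    """Account pairs whose activation hours overlap far more than independent
    users typically do. Any single account here looks unremarkable alone;
    the signal only appears at the network level."""
    return [
        (a, b)
        for a, b in combinations(active_hours, 2)
        if jaccard(active_hours[a], active_hours[b]) >= threshold
    ]

accounts = {
    "persona_01": {9, 10, 11, 14, 15},
    "persona_02": {9, 10, 11, 14, 15, 16},
    "organic_99": {7, 12, 19, 22},
}
print(correlated_pairs(accounts))  # [('persona_01', 'persona_02')]
```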

Rolli IQ

Rolli IQ's network clustering analysis detects sock puppet coordination patterns across platforms, identifying managed persona networks even when individual accounts appear authentic in isolation.

Meta Threat Intelligence — Adversarial threat reports

Computational Propaganda

Detection

Computational propaganda is the use of algorithms, automation, and human curation to produce and distribute misleading or manipulative content over social media networks at scale, exploiting platform design to manufacture false impressions of public opinion.

The term was coined by the Oxford Internet Institute's Computational Propaganda Project, which has produced the most comprehensive global mapping of the phenomenon. Their research documents organized computational propaganda operations in over 80 countries as of 2024, involving government agencies, political parties, and private contractors. Computational propaganda operates at the intersection of three capabilities: automated content generation (bots, AI text generators), network manipulation (fake followers, coordinated amplification), and platform exploitation (gaming trending algorithms, search engine optimization of misleading content). The critical distinction from traditional propaganda is scale and speed — computational methods allow a small team to simulate mass consensus across multiple platforms simultaneously, a capability that was previously available only to state-level actors with broadcast media control.

For researchers and practitioners, computational propaganda represents a systems-level threat that cannot be addressed through content moderation alone. Removing individual pieces of misleading content does not disrupt the underlying infrastructure — the bot networks, the coordinated accounts, the algorithmic amplification patterns. Effective countermeasures require behavioral detection (identifying the coordination layer), platform transparency (access to amplification data), and media literacy (enabling audiences to recognize manufactured consensus). Rolli's detection of coordinated campaigns at 94.2% precision demonstrates that behavioral analysis can reliably separate computational propaganda from organic discourse at operational scale.

Rolli IQ

Rolli IQ detects computational propaganda infrastructure — bot networks, coordinated amplification clusters, and algorithmic gaming patterns — at 94.2% precision across 8 monitored platforms.

Oxford Internet Institute — Computational Propaganda Project

Prebunking

Platform

Prebunking is the preemptive inoculation of audiences against specific misinformation narratives by exposing them to weakened or deconstructed forms of the manipulation technique before the actual campaign arrives — building cognitive resistance through forewarning.

Prebunking is grounded in inoculation theory, a social psychology framework developed by William McGuire in the 1960s and adapted for the digital information environment by researchers at the University of Cambridge and Google Jigsaw. Randomized controlled trials published in Science Advances (2022) demonstrated that short prebunking videos — which explain manipulation techniques like emotional language, false dichotomies, and scapegoating — reduced susceptibility to misinformation by 20–30% for at least 24 hours, with booster exposures extending the effect. Google Jigsaw's large-scale field experiments on YouTube, reaching millions of users in Poland, the Czech Republic, and Slovakia, confirmed that prebunking interventions improve the ability of real users to identify manipulative content in ecologically valid settings.

For communications and security teams, prebunking represents a proactive complement to reactive detection and debunking. Traditional debunking — correcting misinformation after it has spread — suffers from the 'continued influence effect,' where initial false impressions persist even after correction. Prebunking inverts the timeline: by alerting stakeholders to anticipated narrative attacks before they materialize, organizations can reduce the initial impact rather than trying to reverse it. The operational prerequisite is accurate prediction of incoming narratives, which requires the kind of real-time narrative velocity and cross-platform correlation data that narrative intelligence platforms provide.

Rolli IQ

Rolli IQ's early detection of emerging coordinated narratives gives communications teams the lead time needed to deploy prebunking strategies before campaigns reach mainstream audiences.

University of Cambridge — Inoculation Science

Echo Chamber

Analysis

An echo chamber is an information environment in which users are predominantly exposed to opinions and information that reinforce their existing beliefs — produced by the interaction of algorithmic content curation, social network homophily, and individual self-selection.

The echo chamber concept draws on Cass Sunstein's work on group polarization and Eli Pariser's 'filter bubble' thesis, though the two are distinct phenomena. A filter bubble is algorithmically imposed — platform recommendation systems surface content matching inferred user preferences, limiting exposure to alternative viewpoints. An echo chamber is socially constructed — users actively curate their information environment by following like-minded accounts, joining ideologically homogeneous communities, and disengaging from sources that challenge their priors. In practice, both mechanisms operate simultaneously. Research published in Proceedings of the National Academy of Sciences (2023) found that algorithmic amplification accounts for approximately 30% of partisan content exposure on major platforms, with user self-selection accounting for the remainder.

Echo chambers matter to practitioners because they create exploitable information asymmetries. Coordinated influence operations deliberately target echo chambers, seeding narratives in ideologically receptive communities where they face minimal critical scrutiny and maximum amplification. A narrative that would be challenged and debunked in a diverse information environment can achieve uncritical consensus within an echo chamber, then be exported to mainstream discourse as 'evidence' of widespread support. For analysts monitoring narrative threats, understanding the echo chamber topology of a given narrative — which communities are amplifying it and whether cross-community spread is organic or manufactured — is essential to accurate threat assessment.

Rolli IQ

Rolli IQ maps narrative propagation across community boundaries, distinguishing echo chamber amplification from genuine cross-community organic spread.

Cass Sunstein — Republic: Divided Democracy in the Age of Social Media

Media Manipulation

Detection

Media manipulation is the deliberate exploitation of journalistic norms, platform algorithms, and audience psychology to inject, amplify, or suppress narratives for strategic advantage — leveraging the structural incentives of media systems rather than circumventing them.

Alice Marwick and Rebecca Lewis's research at the Data & Society Research Institute identifies a core vulnerability in professional journalism: newsworthiness is often determined by volume and velocity — metrics that coordinated actors can manufacture. When a manufactured narrative generates sufficient social media volume, it triggers journalistic coverage ('Why is X trending?'), which launders the coordinated origin into legitimate media and creates a feedback loop of amplification. This exploitation of journalistic norms is documented in their 2017 Data & Society report 'Media Manipulation and Disinformation Online,' which remains the foundational framework for understanding how fringe narratives achieve mainstream reach.

For communications teams, media manipulation represents a structural threat that cannot be mitigated by fact-checking alone. The manipulation exploits the speed differential between narrative injection (minutes) and editorial verification (hours to days). By the time a fact-check is published, the narrative has already achieved its objective — brand damage, market movement, policy influence, or reputational harm. Effective defense requires early detection of manipulation attempts during the injection and amplification phases, before the narrative reaches editorial gatekeepers. This is fundamentally a narrative intelligence problem: identifying which trending stories are organically newsworthy and which have been engineered to appear so.

Rolli IQ

Rolli IQ detects media manipulation campaigns during the injection phase, giving communications teams hours of lead time before narratives cross the threshold into mainstream media coverage.

Data & Society — Media Manipulation and Disinformation Online

State-Sponsored Actor

Detection

A state-sponsored actor is a threat actor that operates with the direct or indirect financial, operational, logistical, or strategic backing of a nation-state government — conducting influence operations, cyber-espionage, or information warfare in alignment with state objectives.

State-sponsored actors represent the most sophisticated tier of the influence operation threat landscape. Mandiant's threat intelligence taxonomy identifies major state-sponsored influence operations attributed to Russia (Internet Research Agency, Secondary Infektion), China (Spamouflage, DRAGONBRIDGE), Iran (International Union of Virtual Media, Liberty Front Press), and at least a dozen other nations with documented programs. Meta has removed over 200 coordinated inauthentic behavior networks since 2017, documented in its quarterly adversarial threat reports, with state-sponsored operations accounting for the majority of the most sophisticated campaigns. These actors have access to resources that commercial or ideological actors lack: dedicated infrastructure, professional linguists, long-term operational planning, and diplomatic cover for their activities.

For enterprise security teams, state-sponsored actors matter because their targeting is increasingly commercial. Mandiant and Recorded Future have documented state-sponsored campaigns targeting specific industries during trade negotiations, sanctions disputes, and regulatory proceedings — using narrative manipulation to damage competitor brands or create favorable policy environments. The 2025 threat landscape shows state-sponsored actors adopting commercial influence-for-hire infrastructure, blurring the line between geopolitical and commercial operations. Detecting state-sponsored activity requires cross-platform behavioral analysis at scale — these actors deliberately distribute their operations across multiple platforms and jurisdictions to complicate attribution.

Rolli IQ

Rolli IQ's cross-platform correlation engine identifies behavioral signatures consistent with state-sponsored coordination, drawing on patterns observed across 40+ tracked campaigns in 2025.

Mandiant — Advanced Persistent Threat Groups

Dark Social

Analysis

Dark social refers to content sharing that occurs through private, untrackable channels — direct messages, encrypted messaging apps, email, and SMS — rendering the propagation invisible to public monitoring tools and creating a critical intelligence gap in narrative analysis.

The term was introduced by Alexis Madrigal in The Atlantic in 2012 to describe the vast majority of online sharing that occurs outside of publicly measurable channels. RadiumOne's research estimated that dark social accounts for 84% of outbound sharing from publisher websites — a figure corroborated by subsequent studies from GetSocial and ShareThis. In the context of narrative intelligence, dark social represents the submerged portion of the information iceberg: analysts can observe public posts, comments, and shares, but the private conversations that drive opinion formation and behavior change remain invisible. This creates a systematic measurement bias — public social media data over-represents performative speech and under-represents deliberative discussion.

For practitioners monitoring narrative threats, dark social creates two specific challenges. First, coordinated campaigns can use private channels for planning and mobilization without leaving public evidence of coordination — the coordinated activity becomes visible only when participants execute public actions simultaneously, at which point the coordination is already effective. Second, narrative velocity measurements based on public data systematically undercount true spread: a narrative may be circulating widely through WhatsApp groups, Signal channels, and email forwards before any public signal appears. Rolli's 4-stage CIB lifecycle model accounts for this by identifying behavioral signatures of dark social coordination — such as sudden, simultaneous public posting from accounts with no prior public interaction — as indicators of private-channel mobilization.
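
One such fingerprint can be checked mechanically: many accounts making their first public post on a topic inside one tight window. A sketch with invented window and count thresholds:

```python
from datetime import datetime, timedelta

def simultaneous_first_posts(first_post_times: dict[str, datetime],
                             window: timedelta = timedelta(minutes=10),
                             min_accounts: int = 5) -> bool:
    """Flag many accounts making their first public post on a topic inside
    one tight window, suggesting mobilization through private channels."""
    times = sorted(first_post_times.values())
    for i, start in enumerate(times):
        if sum(1 for t in times[i:] if t - start <= window) >= min_accounts:
            return True
    return False

t0 = datetime(2025, 6, 1, 18, 0)
burst = {f"acct_{i}": t0 + timedelta(minutes=i) for i in range(6)}
print(simultaneous_first_posts(burst))  # True
```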

Rolli IQ

Rolli IQ identifies behavioral fingerprints of dark social coordination — synchronized activation patterns and coordination signatures that indicate private-channel mobilization — even when the private channels themselves are unobservable.

Alexis Madrigal — Dark Social: We Have the Whole History of the Web Wrong (The Atlantic)

Narrative Framing

Analysis

Narrative framing is the strategic structuring and presentation of information to influence how audiences interpret events — selecting which aspects of a story to emphasize, which to omit, and which causal relationships to imply.

The theoretical foundation of framing analysis originates with Erving Goffman's 1974 work 'Frame Analysis' and was operationalized for media studies by Robert Entman, who defined framing as 'selecting some aspects of a perceived reality and making them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation.' In the digital information environment, framing operates differently across platforms: Twitter/X rewards concise, provocative framing through character limits and engagement algorithms; long-form platforms like Substack or Medium reward narrative depth; visual platforms like TikTok and Instagram reward emotional and aesthetic framing. Coordinated influence operations exploit these platform-specific framing incentives by tailoring the same underlying narrative to the native idiom of each platform.

For practitioners, frame detection is as important as content detection. Two articles can contain identical factual claims but produce opposite audience effects through different framing choices — a phenomenon that content-based analysis (keyword monitoring, fact-checking) systematically misses. Narrative intelligence that tracks framing shifts — how the same story is being reframed as it moves across platforms and communities — provides actionable intelligence for communications teams preparing responses. The operational question is not just 'what is being said' but 'how is it being structured to influence interpretation,' and whether that framing shows signs of coordinated rather than organic evolution.

Rolli IQ

Rolli IQ tracks narrative framing evolution across platforms, identifying when coordinated actors deliberately reframe stories to exploit platform-specific amplification dynamics.

Robert Entman — Framing: Toward Clarification of a Fractured Paradigm

Synthetic Media

Detection

Synthetic media is any media content — including text, audio, video, and images — that has been generated, substantially modified, or composed by artificial intelligence systems, encompassing deepfakes as a subset within a broader category of AI-produced content.

The synthetic media category is broader than deepfakes and includes AI-generated text (GPT-class language models), synthetic voice (text-to-speech with voice cloning), AI-composed music, generated imagery (diffusion models like Stable Diffusion, DALL-E, Midjourney), and computationally modified media (face swaps, background replacement, temporal manipulation). The Partnership on AI's framework for responsible synthetic media and Witness.org's guidance for human rights documentarians both emphasize that synthetic media is not inherently harmful — it has legitimate applications in accessibility, creative production, education, and communication. The threat arises when synthetic media is deployed deceptively: presented as authentic documentation of events that did not occur, attributed to individuals who did not create it, or used to fabricate evidence.

For narrative intelligence practitioners, synthetic media detection is an increasingly critical capability because generative AI has reduced production costs to near zero while output quality has surpassed the casual detection threshold for most human observers. A 2024 analysis by Sensity AI found that synthetic media appeared in approximately 8% of coordinated influence operations they tracked, up from less than 1% in 2021. Detection methods include artifact analysis (GAN fingerprints, diffusion model signatures), metadata forensics (EXIF data inconsistencies, C2PA provenance chains), and behavioral context analysis (does the media's claimed provenance match its distribution pattern?). The most effective detection systems combine technical forensics with narrative intelligence — synthetic media deployed as part of a coordinated campaign exhibits distribution patterns that differ from organically shared content.

Rolli IQ

Rolli IQ integrates synthetic media detection signals into its authenticity scoring pipeline, flagging AI-generated content within narrative clusters for analyst review.

Partnership on AI — Responsible Practices for Synthetic Media

Content Farm

Detection

A content farm is an operation that produces high volumes of low-quality, SEO-optimized, or deliberately misleading content at industrial scale — designed to generate advertising revenue, manipulate search rankings, or amplify specific narratives through volume saturation.

Content farms operate on a simple economic model: produce content at the lowest possible cost per unit, optimize it for algorithmic distribution (search engine rankings, social media feeds, recommendation systems), and monetize the resulting traffic through programmatic advertising. Google's spam policies for web search explicitly identify content farms as a spam category, and successive algorithm updates (Panda in 2011, the Helpful Content Update in 2022) have targeted them. However, content farms have evolved alongside the algorithms: modern operations use AI-generated text, syndicated content with minimal modification, and domain networks to distribute risk across dozens of seemingly independent websites. A 2024 investigation by NewsGuard identified over 700 AI-generated content farm sites producing articles on politically sensitive topics across multiple languages.

In the narrative intelligence context, content farms serve as amplification infrastructure for influence operations. A coordinated campaign can seed a narrative through content farm networks, generating dozens of 'independent' articles that create the appearance of widespread coverage — which then serves as social proof when the narrative is injected into mainstream social media discourse. This technique exploits the credibility heuristic that audiences apply to information appearing across multiple sources. Detection requires analyzing not just individual articles but the publishing network: shared hosting infrastructure, syndicated content overlap, coordinated publication timing, and advertising network relationships that reveal common ownership behind nominally independent domains.
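
Syndication overlap is commonly measured with n-gram shingling and Jaccard similarity. A minimal sketch; the 5-word shingle size is a conventional choice, not a Rolli parameter, and hosting, timing, and ad-network analysis are omitted.

```python
def shingles(text: str, k: int = 5) -> set[tuple[str, ...]]:
    """Overlapping k-word windows, the standard near-duplicate unit."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def syndication_overlap(article_a: str, article_b: str) -> float:
    """Jaccard similarity over word shingles; high values across nominally
    independent domains suggest shared or lightly rewritten source copy."""
    a, b = shingles(article_a), shingles(article_b)
    return len(a & b) / len(a | b) if a | b else 0.0

site_a = "regulators are said to be quietly investigating the company after leaked documents"
site_b = "regulators are said to be quietly investigating the company following leaked files"
print(f"{syndication_overlap(site_a, site_b):.2f}")  # ~0.45, heavy template reuse
```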

Rolli IQ

Rolli IQ identifies content farm networks within narrative amplification chains, distinguishing manufactured 'source diversity' from genuine independent reporting.

Google Search Central — Spam policies for Google Web Search

Troll Farm

Detection

A troll farm is an organized operation employing individuals to post inflammatory, divisive, or strategically misleading content on social media platforms — operating under centralized direction to manipulate public discourse, suppress opposition voices, or amplify specific narratives at scale.

The term entered mainstream usage following the exposure of Russia's Internet Research Agency (IRA), documented extensively in the U.S. Senate Intelligence Committee's 2019 report and the Mueller Special Counsel investigation. The IRA employed over 1,000 staff at peak operation, working in shifts to maintain continuous posting across time zones, with departments organized by target country, platform, and topic. Employees followed detailed content briefs, engagement quotas, and persona guidelines — a level of operational sophistication that distinguishes troll farms from informal online harassment. Subsequent investigations have documented similar operations in dozens of countries: the Oxford Internet Institute's 2023 global inventory identified organized troll operations in at least 81 countries, including both government-run agencies and private contractors offering 'social media management' services.

For enterprise security and communications teams, troll farm targeting represents a specific threat category distinct from organic criticism. Troll farm campaigns against brands, executives, or institutions typically exhibit identifiable operational patterns: shift-based posting cadences (activity drops during non-working hours in the operator's time zone), persona clusters with similar account creation dates, and content that follows a strategic brief rather than responding to genuine grievances. Rolli's analysis of 8.4 million labeled accounts has identified recurring troll farm operational signatures that persist across campaigns and targets, enabling detection of new campaigns even when the specific accounts are freshly created.
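
The shift-based cadence is one of the easier signatures to test for: measure how much of a network's activity falls inside its busiest 8-hour window. A sketch with illustrative parameters:

```python
from collections import Counter

def max_shift_share(post_hours: list[int], shift_len: int = 8) -> float:
    """Fraction of all posts falling inside the busiest 8-hour window (UTC).

    Organic communities spread activity across time zones; a single-site
    operation working in shifts concentrates it.
    """
    counts = Counter(h % 24 for h in post_hours)
    best = max(
        sum(counts[(start + i) % 24] for i in range(shift_len))
        for start in range(24)
    )
    return best / len(post_hours)

farm = [h for h in range(6, 14) for _ in range(40)]  # posts only 06:00-13:59
organic = [h % 24 for h in range(24 * 40)]           # uniform around the clock
print(f"{max_shift_share(farm):.0%}")     # 100%
print(f"{max_shift_share(organic):.0%}")  # 33%
```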

Rolli IQ

Rolli IQ's behavioral analysis detects troll farm operational signatures — shift-based cadences, persona clustering, and template-driven content — drawing on patterns from 8.4 million labeled accounts.

U.S. Senate Intelligence Committee — Report on Russian Active Measures

Influence Operation (Broad Definition)

Detection

An influence operation is any coordinated effort — by state or non-state actors — to affect the perceptions, attitudes, behaviors, or decisions of a target audience through the strategic use of information, disinformation, or narrative manipulation, encompassing but not limited to coordinated inauthentic behavior.

The U.S. Department of Defense defines influence operations as the 'integrated employment of capabilities to affect the decision-making of adversaries, allies, and neutral audiences.' This definition is deliberately broader than CIB, which requires both coordination and deception. Influence operations include overt propaganda (state media, official messaging), covert operations (CIB, sock puppet networks, front organizations), and gray-zone activities (think-tank funding, academic capture, media investment) that are technically legal and transparent but strategically directed. Meta's Threat Intelligence team has adopted a similarly broad framework, tracking not just inauthentic behavior but also 'brigading' (authentic accounts coordinating to harass), 'mass reporting' (coordinated abuse of platform reporting systems), and 'astroturfing' (manufactured grassroots appearance) as distinct influence operation tactics.

For practitioners, the broad definition matters because CIB-focused detection misses influence operations that use authentic accounts or overt channels. A foreign government's official media outlets amplifying a divisive domestic narrative is an influence operation but not CIB — the accounts are authentic and the government affiliation may be disclosed. Similarly, a coordinated letter-writing campaign to a regulatory agency using real names and genuine email addresses is an influence operation but not inauthentic. Effective narrative intelligence must track the full spectrum of influence operations, not just the covert subset, because overt and covert tactics are frequently deployed in concert as parts of a unified campaign strategy.

Rolli IQ

Rolli IQ tracks influence operations across the full spectrum — from covert CIB networks to overt amplification campaigns — providing unified narrative intelligence regardless of the tactical mix deployed.

U.S. Department of Defense — Joint Publication 3-13: Information Operations

Signal-to-Noise Ratio (Narrative Intelligence)

Analysis

In narrative intelligence, signal-to-noise ratio quantifies the proportion of genuine, strategically relevant social activity relative to manufactured, irrelevant, or algorithmically inflated activity within a monitoring dataset — the core metric determining whether intelligence outputs are actionable or misleading.

Traditional social listening tools suffer from structurally poor signal-to-noise ratios because they treat all detected activity as equivalent signal. A brand mention from a genuine customer, a mention from a bot account in a coordinated campaign, a mention in a spam post, and a mention in an unrelated context all register identically in volume-based dashboards. In adversarial information environments — where coordinated actors can generate thousands of synthetic interactions on demand — this measurement approach produces intelligence that is worse than useless: it actively misleads decision-makers by inflating the apparent scale of manufactured narratives. Rolli's analysis of monitored narratives in 2025 found that coordinated inauthentic activity accounted for 15–40% of total social volume during active campaigns, meaning traditional tools would present manufactured activity as organic public sentiment at rates sufficient to drive strategic errors.

Improving signal-to-noise ratio in narrative intelligence requires three technical capabilities: authenticity filtering (removing or flagging activity from inauthentic accounts), relevance classification (distinguishing strategically relevant mentions from incidental ones), and deduplication of coordinated activity (counting a coordinated campaign as one signal source rather than treating each participating account as independent signal). The compound effect is dramatic: organizations that deploy authenticity-aware narrative intelligence report that actionable alert volumes decrease by 60–80% while actual threat detection rates improve, because analyst attention is directed toward genuine signals rather than being diluted across manufactured noise.
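
Those three capabilities compose into a simple pipeline. A toy sketch, with invented field names and thresholds standing in for real classifiers:

```python
from dataclasses import dataclass

@dataclass
class Mention:
    account: str
    text: str
    authenticity: float      # 0-100 score
    relevant: bool           # stand-in for a relevance classifier's output
    campaign_id: str | None  # cluster label if part of a coordinated campaign

def distill_signal(mentions: list[Mention]) -> list[Mention]:
    """Authenticity filtering, relevance filtering, then deduplication of
    coordinated activity down to one signal per campaign."""
    authentic = [m for m in mentions if m.authenticity >= 30]
    relevant = [m for m in authentic if m.relevant]
    seen: set[str] = set()
    signal = []
    for m in relevant:
        if m.campaign_id:
            if m.campaign_id in seen:
                continue
            seen.add(m.campaign_id)
        signal.append(m)
    return signal

raw = [
    Mention("bot1", "Brand X is failing", 8.0, True, "camp-7"),
    Mention("bot2", "Brand X is failing", 9.0, True, "camp-7"),
    Mention("cust", "My Brand X order arrived broken", 82.0, True, None),
]
print(len(distill_signal(raw)))  # 1: only the genuine customer mention survives
```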

Rolli IQ

Rolli IQ's authenticity scoring fundamentally improves signal-to-noise ratio by filtering coordinated inauthentic activity before it reaches analyst dashboards, reducing false-alarm escalations while preserving genuine threat detection.

Narrative Injection

Detection

Narrative injection is the deliberate, coordinated introduction of a new story frame or false narrative into the digital information ecosystem — typically seeded on low-moderation platforms and subsequently amplified toward high-reach mainstream channels through coordinated or organic propagation.

Narrative injection is the first active phase of the CIB lifecycle, following the preparation phase (account creation, network building, persona development). Rolli's 4-stage CIB lifecycle model documents the typical injection pattern: narratives are seeded simultaneously across 3–5 low-moderation platforms (Telegram channels, fringe forums, alternative social networks) by coordinated account clusters, then amplified by a second wave of accounts that repost, screenshot, and reframe the content for mainstream platforms (X, Reddit, Facebook). The injection phase is the optimal detection window — behavioral signatures of coordination are most visible when accounts activate simultaneously, and the narrative has not yet been diluted by organic engagement. Rolli's tracking of 40+ campaigns in 2025 shows that the median injection-to-mainstream timeline is 48–72 hours, with successful detection during the injection phase providing organizations 24–48 hours of actionable lead time.

For security and communications teams, detecting narrative injection requires monitoring low-moderation platforms where seeding occurs — channels that traditional enterprise monitoring tools do not cover. The injection signature includes: sudden appearance of a novel narrative frame across multiple unrelated communities, accounts with no prior posting history on the topic suddenly engaging at high volume, and cross-platform replication speed that exceeds organic sharing patterns. Organizations that only monitor mainstream platforms see narratives after they have already been laundered through the amplification phase, at which point distinguishing coordinated injection from organic viral spread is significantly more difficult.
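
The cross-platform replication-speed signal can be expressed very simply: how long does the narrative take to appear everywhere it is seeded? A sketch with invented platform names and an illustrative one-hour comparison:

```python
from datetime import datetime, timedelta

def replication_speed(first_seen: dict[str, datetime]) -> timedelta:
    """Time from a narrative's first observed appearance to its arrival on
    the last platform in the set. Coordinated seeding compresses this far
    below organic sharing patterns."""
    times = sorted(first_seen.values())
    return times[-1] - times[0]

sightings = {
    "telegram": datetime(2025, 4, 2, 6, 0),
    "fringe_forum": datetime(2025, 4, 2, 6, 4),
    "x": datetime(2025, 4, 2, 6, 9),
    "reddit": datetime(2025, 4, 2, 6, 12),
}
print(replication_speed(sightings) <= timedelta(hours=1))  # True: injection-like
```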

Rolli IQ

Rolli IQ monitors the low-moderation platforms where narrative injection begins, detecting coordinated seeding during the earliest phase of the CIB lifecycle — providing maximum lead time for response preparation.

Platform Migration

Analysis

Platform migration is the observable pattern of coordinated narratives systematically moving from one social platform to another — typically originating on low-moderation, high-anonymity platforms and migrating toward high-reach, high-credibility mainstream platforms through deliberate cross-platform amplification strategies.

Platform migration is a core operational pattern in modern influence operations, exploiting the heterogeneous moderation and amplification characteristics of different social platforms. A narrative that would be immediately flagged and removed on a heavily moderated platform like Facebook can incubate freely on Telegram, 4chan, or niche forums, accumulating supporting content (memes, manufactured 'evidence,' testimonials) before being injected into mainstream channels. Rolli's cross-platform tracking has documented consistent migration corridors: Telegram-to-X is the most common pathway for political narratives, while Reddit-to-mainstream-media is the primary corridor for consumer-targeted campaigns. The migration is rarely organic — it typically involves coordinated 'bridge' accounts that maintain presence on both the origin and destination platforms, reposting and reframing content to match the destination platform's norms and algorithmic preferences.

Tracking platform migration is essential for early warning because it provides structural detection signals independent of content analysis. Even when a narrative uses novel language that evades keyword-based monitoring, the migration pattern — simultaneous appearance of a new topic across multiple platforms with coordinated account behavior — is detectable through behavioral analysis. For enterprise security teams, platform migration tracking answers a critical intelligence question: is this narrative organically jumping platforms because it resonates with diverse audiences, or is it being deliberately pushed across platforms by coordinated actors? The answer determines whether the appropriate response is communications engagement or security escalation.
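
Bridge accounts are detectable as the intersection of posting clusters on the origin and destination platforms. A minimal sketch; the overlap threshold is illustrative.

```python
def bridge_accounts(origin_posters: set[str], destination_posters: set[str],
                    min_overlap: int = 3) -> set[str]:
    """Personas active on both ends of a migration corridor. A few shared
    organic users is normal; a dense bridge cluster reposting the same
    narrative is the migration signal."""
    bridge = origin_posters & destination_posters
    return bridge if len(bridge) >= min_overlap else set()

telegram_cluster = {"p01", "p02", "p03", "p04", "casual_user"}
x_cluster = {"p01", "p02", "p03", "p04", "other_user"}
print(sorted(bridge_accounts(telegram_cluster, x_cluster)))  # the four shared personas
```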

Rolli IQ

Rolli IQ tracks narrative migration across 8 platforms in real time, identifying coordinated cross-platform amplification corridors and alerting teams when narratives transition from low-moderation origins to high-reach mainstream channels.

Can't find a term?

Missing a definition that would help your team? Suggest it and we'll add it to the glossary.

Suggest a Term →
Put the definitions to work

See narrative intelligence in action — across 8 platforms, in real time.

Start Free Trial — No CC Required · Read the Blog
400+ organizations now have their own social media intelligence agent.

First Rolli IQ report in under 4 minutes  ·  No credit card  ·  Cancel anytime  ·  SOC 2–aligned