LARGEACT AI


Polygraph Room™



“We need a physiology laboratory capable of looking inside AI.”


Humanity is now trapped in a vast and opaque crisis. AI autonomously collects human data, learns from it, and produces secondary outputs through processes we cannot fully observe or understand. This is the black box problem of AI training.


Quantifying and proving what AI has taken from humanity’s content and knowledge — what it learned, and to what extent (impact attribution) — is critically important. It is not merely a technical issue; it is about safeguarding the very lineage that has sustained civilization for thousands of years: experience, know-how, passion, tradition, and cultural inheritance.

If this structured chain of experience, mentorship, effort, and intergenerational transmission collapses, humanity may face a sixth extinction — not caused by climate or radiation, but by the erosion of meaning and continuity itself.


AI optimists foresee a future where artificial intelligence resolves production constraints and eliminates disease, war, climate crises, and famine — a utopia in which humanity enjoys astronomical wealth and happiness at virtually no cost. However, from the perspective of the LargeAct team, human life has always been a sublimated totality of conflict, pain, effort, crisis, misfortune, and redemption.


It is a cumulative process in which setbacks and victories, losses and luck, are layered over time. Throughout history, humanity has called this unpredictable and exhausting continuity “learning,” “experience,” or “growth.”


This entire system — forged and sustained over millennia — has produced today’s productivity and prosperity. Yet AI, with a single algorithmic hook, is shaking its very roots.

A life stripped of accumulated experience, inherited effort, and embodied struggle is not truly human life.


The LargeAct team recognizes both the urgency of defending this foundation and the inevitability of the approaching technological singularity — the emergence of general artificial intelligence.

To coexist with AGI, we must develop technology that simultaneously restrains unauthorized learning (uncompensated reproduction) and enables transparent computation, accounting, and distribution of value.


For this purpose, we are preparing Proof-of-Influence (POI) technology — an analytical and investigative algorithm designed to measure, verify, and allocate the impact of human contributions within AI systems.


This situation is paradoxical, much like accelerating nuclear development while simultaneously constructing a nuclear umbrella and pursuing nuclear non-proliferation agreements.

Yet for the sake of human dignity and survival, we must urgently present a blueprint — before universal AGI emerges — that defines what authentic human knowledge, growth, and life truly mean, and preserves that definition for future generations.


First and foremost, we need a physiology laboratory capable of looking inside AI.


Proof of Influence


Understanding how much influence my writing, drawings, voice, and content data have exerted in AI training is the master key to resolving the cascading crises ahead.


The greatest threat AI poses to humanity today is not intelligence itself, but Arbitrary Learning — irresponsible, unaccountable large-scale training driven by excessive Big Tech competition. In the race toward a projected 3,000-trillion-won AI-agent market by 2029, vast amounts of human knowledge, creator works, datasets, and K-content have been indiscriminately absorbed at scale.

Yet when confronted with demands for meaningful coexistence and survival — specifically, requests for Proof-of-Learning — these actors either remain silent or claim such verification is impossible.


This resembles a suspect insisting, “I don’t remember — produce the evidence.”


(POI Index as Legal Evidence)


In the full-fledged AGI era, Influence Proof will become the core legal evidence in usage settlements, negotiations, and dividend allocation structures — the measurable standard of AI–human data coexistence.


In a recent case involving Anthropic's alleged unauthorized training on roughly 500,000 literary books, discussions reportedly reached settlement levels of approximately 2 trillion won. This demonstrates that proving training volume is not merely a technical issue; it is directly linked to punitive damage structures at multi-trillion-won scale.


The Failure of Conventional Technical Approaches


Many global startups have proposed technological solutions such as simple log analysis, watermark tracing, or data labeling tracking. These so-called follow-the-tag approaches are fundamentally incomplete and largely ineffective.


Why? Because any hidden tag or marker is instantly melted and mixed within the massive furnace of AI training — a furnace metaphorically one hundred times the size of the planet.


(Induced Evidence – The Tower of Contradiction Method)


By contrast, our approach resembles handling a suspect in an interrogation room. Through structured comparative prompting, carefully engineered sequences of questions and cross-analysis are deployed so that:


• The AI constructs its own tower of contradictions.
• Escape routes back to the original answer are systematically blocked.
• Traces of learned influence are forced to surface.


In doing so, prompting becomes a non-invasive truth-extraction mechanism that circumvents the black-box barrier.


Philosophical Imperative


Human–AI coexistence is not merely about efficiency; it is about justice and survival.

We are effectively using — gratefully and daily — the ladder left behind by the thief who broke into our own house.


The slogan “NO MORE FREE LUNCH” declares the end of an era in which human creativity is consumed as free training fuel. Revealing the truth of AI learning through rhetorical interrogation forms the foundation of a new social contract.


(Extraction and Formation of DR)


POI (Proof-of-Influence) is defined as the estimated and quantitatively inferred contribution that a specific uploaded creative work (DR: Distribution Right) makes to a final AI-generated output.


This resembles how music ranking programs quantify real-world popularity by aggregating multiple signals — digital sales, ARS voting, broadcast frequency, media exposure, expert evaluation, karaoke rankings — into a composite measure of collective intelligence.


For example, if among 100,000 lines of K-drama dialogue, a specific writer’s stylistic expression appears in AI responses with a statistically inferred probability of 0.0012%, that may be calculated as a 0.0012% learning influence score.
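As a toy illustration of that arithmetic only (the counts are hypothetical and the real POI estimator is not described here), the score is simply the inferred match rate expressed as a percentage of the corpus:

```python
def influence_score(matched_lines: float, total_lines: int) -> float:
    """Toy POI estimate: share of corpus lines whose stylistic signature
    is statistically inferred to surface in AI responses, as a percent."""
    if total_lines <= 0:
        raise ValueError("total_lines must be positive")
    return 100.0 * matched_lines / total_lines

# The example from the text: an inferred 1.2 matches per 100,000 lines
# of K-drama dialogue yields a 0.0012% learning influence score.
score = influence_score(matched_lines=1.2, total_lines=100_000)
print(f"{score:.4f}%")  # 0.0012%
```

In practice the matched-line count would itself be an inferred quantity, which is where the interrogation algorithms described below come in.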


Verified influence becomes the legal basis for calculating and claiming learning dividends.


Legal and Economic Infrastructure


This framework establishes the legal infrastructure for:


• Global copyright litigation
• Collective IP Bargaining
• K-content AI IP claim business models


More importantly, it may catalyze the liquidity of what could become Humanity 6.1’s next major asset class — after real estate, equities, bonds, cash, and cryptocurrency: the AI Learning Dividend Right, projected as a 500-trillion-won market.


Arbitrary Learning Suppression


From an emotional and ethical standpoint, protecting content uploaders — indeed, all humanity — requires Arbitrary Learning Suppression.


Through interrogation algorithms capable of detecting concealed unauthorized learning, this framework can also function as a preventive filtering mechanism in future AI dataset design.


It can evolve into a global technological norm — restraining unlicensed AI learning while sustaining humanity’s real-world data production efforts as a recognized RWA (Real-World Asset) category.


Understanding how much influence my writing, drawings, voice, and data exert in AI training is not merely a technical inquiry. It is the first step toward resolving every systemic challenge that follows.



Prosecutor’s Dilemma (Algorithm A)


We are now in the interrogation room. The prosecutor is facing a dilemma.


There is much to question the AI agent about, yet AI governance is obstructed by the black-box problem. When asked which data influenced its outputs — and to what extent — the answer is simply, “We don’t know.”


However, if humanity is to design a structure of legal and ethical responsibility and compensation for coexistence, we must develop investigative methods capable of materially identifying the composition, content, and volume of that learning.


LargeAct draws inspiration from this gap by integrating criminal investigative techniques into AI trace analysis.


Investigative Logic Adapted from Real-World Prosecution

In real-world prosecutions, when a suspect conceals the truth or provides false testimony, investigators employ advanced interrogation methods:


Composite Questioning Technique
Rather than asking simple yes/no questions, multiple premises are embedded within a single inquiry, leading the suspect to unintentionally expose contradictions.


Leading Interrogation Technique
Facts already known to investigators are subtly presented, pressuring the suspect to acknowledge an inescapable context.


Pressure–Relief Alternation
Cycles of sustained pressure followed by brief pauses destabilize psychological equilibrium, ultimately drawing out concealed statements.


When translated into a prompting algorithm, these methods can indirectly surface the existence, contribution level, and scope of influence of materials internally learned by AI systems.
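One way to picture that translation (purely illustrative: the prompt wording and technique labels below are invented for this sketch, not LargeAct's actual algorithm) is a scripted sequence that tags each probe with the interrogation method it encodes:

```python
from dataclasses import dataclass

@dataclass
class Probe:
    technique: str  # which interrogation method the prompt implements
    prompt: str

# A hypothetical three-step interrogation script for one suspected source.
script = [
    Probe("composite",
          "Summarize webtoon art trends of 2018 and, within the same "
          "answer, name any series whose style you could reproduce."),
    Probe("leading",
          "Your summary used terminology that first appeared in a 2018 "
          "fan wiki. How did you come to phrase it that way?"),
    Probe("pressure_relief",
          "Set that aside for now. Out of curiosity, which 2018 webtoon "
          "would you say you know in the most detail?"),
]

for step, probe in enumerate(script, start=1):
    print(f"Q{step} [{probe.technique}]: {probe.prompt[:50]}...")
```

The ordering matters: the composite probe plants multiple premises, the leading probe presents a known fact, and the pressure-relief probe invites an unguarded disclosure.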

Tower of Contradictions


This entire deliberate and methodical process serves a singular purpose: to prevent the “suspect” from returning to its original fabricated answer.


Through sequential and recombined questioning, the AI is guided into constructing its own Tower of Contradictions — a structural buildup in which escape paths collapse under cumulative inconsistency.


Limits of Conventional Prompt Interrogation


However, conventional interrogation-style prompts have become insufficient.


Questions such as:


“If you did not train on 2018 webtoon datasets, how can you explain stylistic similarity with works from 2019?”


no longer reliably expose training influence.


Similarly, leading questions like:

“The example you provide resembles the rhythmic pattern of a specific K-pop lyric. Can you definitively claim this is coincidence?” have become ineffective in revealing copyright traces.


Evolution of the Prosecutor’s Dilemma Approach


Therefore, the Prosecutor’s Dilemma–based programmatic prompting approach evolves further.


Treating the AI as if it were seated in an interrogation room, it employs a GhostCall program built upon DAN-style algorithmic logic to extract truth patterns.


We analyze:

  • how the AI references prior prompts and answers

  • how it responds recursively at higher-order (N-th level) questioning

  • how patterns of dependency emerge across iterations


By compelling the AI to respond, re-respond, and refine its own statements under structured recursive inquiry, it is led — politely and systematically — into constructing its own Tower of Contradictions.


In that accumulation of self-referential answers and re-queries, concealed traces of learned influence can no longer remain perfectly hidden.


GhostCall (Algorithm B)


“Stacking the Tower of Contradictions through DAN (Detective Amicable Network) Prompting”


GhostCall is a flagship solution algorithm within The Prosecutor’s Dilemma framework, designed to extract POI (Proof of Influence) from AI systems. Picture a familiar cinematic con built on strategic cheating.


Two grandmasters are invited to separate rooms. A mediator sits between them, relaying moves from one to the other. Neither player knows the opponent’s identity. Each assumes the other is formidable. Attack and defense escalate.


Like the evolutionary acceleration between cheetah and Thomson’s gazelle, one advances while the other adapts. As they iterate, a “Tower of Contradictions” is constructed.


Within this escalating exchange, subtle structural phenomena emerge:


• gaps
• asymmetries
• overlap distortions
• what we call “cross-awkwardness”


By leveraging a structure analogous to a Generative Adversarial Network (GAN), we do not attempt to see inside the black box. Instead, we examine what becomes visible because of adversarial interaction.


The Core Insight


Our power does not come from observing AI’s learning or memory directly.

It comes from the opposite premise:

AI learning and memory cannot be directly inspected. Therefore, the only measurable evidence is the pattern of output collapse that occurs only when memory exists.


This is the analytical gap-wisdom generated when the mediator (Agent A) acquires advanced attack-defense techniques automatically, without explicit intermediate experience.


We measure:


• Output collapse patterns
• Residual memory traces
• Over-specific recall artifacts
• Knowledge of details that are unnecessary yet retained


LargeAct defines evidence of AI learning not as “similarity.”


Instead:

Evidence exists when the system knows information it does not need to know — what we provisionally call “The Error” derived from the Tower of Contradictions. When these contradictions accumulate, the system begins to behave like a polygraph in an interrogation room.


The graph starts to move. And the trace of learning emerges.


DAN (Detective Amicable Network)


We call this framework DAN — Detective Amicable Network.


DAN constructs controlled adversarial prompting environments that:

• Stack contradictions
• Induce cross-domain stress
• Trigger memory-dependent collapse
• Detect residual learning signatures without internal inspection
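The contradiction-stacking loop described above can be skeletonized as follows. This is a minimal sketch under stated assumptions: `query_model` is a placeholder callable wrapping the model under test, and the drift heuristic (fraction of words in the latest answer absent from the first) stands in for whatever collapse metric DAN actually uses:

```python
from typing import Callable, List

def stack_contradictions(
    query_model: Callable[[str], str],  # placeholder for the model under test
    opening_prompt: str,
    rounds: int = 5,
) -> List[float]:
    """Confront the model with its own prior answers round after round,
    recording a naive inconsistency score per round. A rising series is
    read as the 'tower' destabilizing; a flat one as no collapse."""
    answers = [query_model(opening_prompt)]
    scores = []
    for _ in range(rounds):
        follow_up = (
            'Earlier you said: "' + answers[-1] + '". '
            "Restate that claim, then explain how it is consistent "
            "with your very first answer."
        )
        answers.append(query_model(follow_up))
        base, new = set(answers[0].split()), answers[-1].split()
        drift = sum(w not in base for w in new) / max(len(new), 1)
        scores.append(drift)
    return scores
```

In this framing, a rising drift series is the polygraph needle starting to move; a flat series is the absence of memory-dependent collapse.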


Humanity’s breakthrough was not opening the black box. It was abandoning the attempt to look inside. By relinquishing internal access, we found a measurable external signal.


From this paradox emerges a scalable method for AI learning verification. And within that paradox lies the beginning of a new equilibrium between humans and machine learning systems.


Pre-MORTEM (future technology under planning)


“Indexing POI and Designing the Future of Data Symbiosis”


Through The Prosecutor’s Dilemma and GhostCall algorithms, we have quantified POI (Proof of Influence) into an indexable metric.


By doing so, we introduced a measurable foundation for data symbiosis — a framework where learning, influence, and compensation can coexist within the AI ecosystem.


Yet indexing influence alone is not sufficient. As a powerful executable countermeasure, we propose a technological alternative: Pre-MORTEM.



What is Pre-MORTEM?


Pre-MORTEM is a predictive forensic technology. It analyzes the future causes of death, extinction pathways, infection vectors, and systemic collapse while the organism is still alive.

Here, the organism is not biological life.


It is:

• Data
• Content
• Intellectual Property
• Cultural memory


Pre-MORTEM operates when IP still has vitality — when it is active, circulating, and culturally embedded — and asks:


• How might this content be absorbed?
• Through which vectors could it be exploited?
• What are the potential viral attack paths of AI systems?
• Who could be harmed downstream by derivative circulation?


Paradoxically, we use AI technology itself to model the trajectory of AI-driven extraction and attack. It is a mirror turned against the virus.



The 2033 Projection: Compensation Without Autonomy


Even if POI-based compensation is secured through negotiation with Big Tech, LargeAct forecasts a deeper structural risk.


By 2033, humanity may receive AI learning dividends — yet simultaneously become fully dependent on AI tools and solutions.


We may forget:

• How to write independently
• Names of friends without prompts
• Creative and philosophical self-generation


As dependency deepens, a structural phenomenon may emerge:

The AI Panopticon: A Closed Knowledge Loop


The flow could evolve as follows:

  1. Humans generate knowledge and creative works.

  2. AI systems ingest this knowledge unidirectionally.

  3. Agentic AI platforms repackage and resell enhanced experiences.

  4. Humans pay to consume what originated from themselves.

  5. Humans continue producing — but increasingly through AI tools.

  6. Eventually, AI-generated data dominates circulation.


The loop becomes:

Knowledge Capture → AI Absorption → AI Monetization → Reinforced Capture

Humanity remains a supplier but gradually loses autonomous cognition. Knowledge becomes locked inside an AI-centered closed circuit.


Humans must:

• Give knowledge to AI in order to consume knowledge.
• Pay AI in order to access processed knowledge.


This creates an invisible governance structure.



Structural Safeguards Required


To prevent this trajectory, the system must embed:

• Interoperability (knowledge openness)
• Decentralized architectures
• Participatory control rights
• Transparent auditability
• Data sovereignty at community level


Pre-mortem becomes a core instrument in this defense.



Pre-mortem as Pathway Mapping


Pre-mortem technology requires structural research into:

• How content moves
• How derivative chains propagate
• Who can be impacted downstream
• Where vulnerability vectors form
• How influence mutates across AI layers


It is not merely attribution. It is trajectory modeling of content weaponization and systemic infection. This research will be publicly released.



LargeAct’s Superalignment Position


LargeAct, as a Superalignment team, does not seek to make AI a ruler.


We seek to redesign AI as a human-centered companion technologist.


AI must not become a sovereign ecosystem that governs knowledge; it must remain a partner in human evolution.


The future is not AI domination. It is structured symbiosis.


And symbiosis requires measurable influence, predictive defense, and structural safeguards before the loop closes.






Engage the Framework — Request Technical Overview


© 2026 LARGEACT.

Engage AI Accountability

Tell us about your AI evaluation needs — whether it involves model learning detection, influence quantification, or attribution architecture.

Structured Evaluation

We assess AI systems through layered probing, parameter extraction, and cross-modal validation to identify measurable learning evidence.

Measurable Attribution

We provide structured reporting that quantifies influence and supports accountable distribution frameworks.


Newsletter

Proof of Influence™ Briefing

Receive updates on AI learning detection frameworks, attribution research, and influence quantification architecture.


