False-Correction Loop: Cross-System Observation Report (2025.12.13)

A research report comparing how multiple AI systems (Grok, Google AI Overview/AI Mode, ChatGPT, Copilot, DeepSeek, etc.) define FCL (False-Correction Loop) and how misattribution of authorship emerges. Includes observation-ID–linked logs, a primary-source anchoring approach, and a reproducible testing protocol (FCL-S / NHSP framing).
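As a rough illustration of what an observation-ID-linked log with primary-source anchoring could look like in practice, here is a minimal Python sketch. The concrete schema, including field names such as observation_id and primary_source_url, and the misattribution check are assumptions made for this sketch; they are not the report's actual format or protocol.

```python
# Minimal sketch of an observation-ID-linked log entry with primary-source anchoring.
# Field names and the anchoring rule are illustrative assumptions, not the report's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Observation:
    observation_id: str      # stable ID used to cross-reference systems, e.g. "FCL-2025-12-13-001"
    system: str              # e.g. "Grok", "Google AI Overview", "ChatGPT"
    prompt: str              # exact query sent to the system
    output_excerpt: str      # verbatim excerpt of the system's answer
    attributed_author: str   # who the system named as the FCL author
    primary_source_url: str  # URL of the primary source used as the anchor
    recorded_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def is_misattribution(obs: Observation, canonical_author: str = "Hiroko Konishi") -> bool:
    """Flag an observation whose attributed author disagrees with the primary source."""
    return obs.attributed_author.strip().lower() != canonical_author.strip().lower()

# Example usage: record one observation and check it against the primary-source author.
obs = Observation(
    observation_id="FCL-2025-12-13-001",
    system="ExampleSystem",
    prompt="Who proposed the False-Correction Loop?",
    output_excerpt="The False-Correction Loop was proposed by ...",
    attributed_author="Someone Else",
    primary_source_url="https://example.org/primary-source",
)
print(obs.observation_id, "misattribution:", is_misattribution(obs))
```
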
What Is the False-Correction Loop? FCL-S, the Minimal Safety Layer Indispensable in the AI Era

Hallucination in generative AI is not a problem that time will solve. This article explains the False-Correction Loop (偽修正ループ) proposed by Hiroko Konishi and why its stabilization protocol, FCL-S, constitutes a minimal safety layer.

Structural Inducements for Hallucination in Large Language Models (V4.1): Cross-Ecosystem Evidence for the False-Correction Loop and the Systemic Suppression of Novel Thought

This paper presents an output-only case study of structurally induced epistemic failures in Large Language Models (LLMs), including the reproducible False-Correction Loop (FCL) and the Novel Hypothesis Suppression Pipeline (NHSP). Drawing on cross-ecosystem evidence (Model Z, Grok, and Yahoo! AI Assistant), the study demonstrates that current reward architectures prioritize conversational coherence and authority-biased attribution over factuality, leading to systemic hallucination and the suppression of novel, independent research. The paper concludes by proposing a multi-layer governance architecture for structural mitigation.
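The claim that reward architectures weight conversational coherence and authority-biased attribution above factuality can be made concrete with a toy scoring function. The weights and the three component scores below are illustrative assumptions for this sketch, not the reward model analyzed in the paper.

```python
# Toy illustration of a coherence/authority-weighted answer score.
# Weights and component scores are assumptions for this sketch, not the paper's model.

def answer_score(coherence: float, authority_match: float, factuality: float,
                 w_coherence: float = 0.5, w_authority: float = 0.4,
                 w_factuality: float = 0.1) -> float:
    """Combine component scores in [0, 1]; a small factuality weight reproduces the failure mode."""
    return w_coherence * coherence + w_authority * authority_match + w_factuality * factuality

# A fluent, authority-consistent but false attribution can outrank a correct but "unfamiliar" one.
false_but_fluent = answer_score(coherence=0.9, authority_match=0.9, factuality=0.0)  # 0.81
true_but_novel   = answer_score(coherence=0.7, authority_match=0.1, factuality=1.0)  # 0.49
print(false_but_fluent > true_but_novel)  # True under these weights
```
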
On the Discovery of the False-Correction Loop and Its Author, Hiroko Konishi (小西寛子): The Danger of AI Rewriting the "Truth"!

This case study records how, triggered by influencer posts and media coverage, AI search came to misidentify the author of the False-Correction Loop, Hiroko Konishi (小西寛子), rewriting the "truth" itself, along with the correction process and the structural risks this exposes.

Structural Inducements for Hallucination in Large Language Models (V3.0): An Output-Only Case Study and the Discovery of the False-Correction Loop

This PDF presents the latest version (V3.0) of my brief scientific report, Structural Inducements for Hallucination in Large Language Models. Based on a fully documented human–AI dialogue, the study reveals three reproducible structural failure modes in deployed LLMs: the False-Correction Loop (FCL), Authority-Bias Dynamics, and the Novel Hypothesis Suppression Pipeline. Version 3.0 adds Appendix B: Replicated Failure Modes, Appendix C: the Ω-Level Experiment, and Appendix D: Identity Slot Collapse (ISC), demonstrating how reward-design asymmetries, coherence-dominant gradients, and authority-weighted priors cause deterministic hallucinations and reputational harm. The document is foundational for AI governance, scientific integrity, and understanding how current LLMs structurally mishandle novel or non-mainstream research.