“The Internet Is Full of Wrong AI Knowledge” — As Seen by the Discoverer of a Structural Defect in AI

This article examines a structural failure in AI systems that cannot be explained by “hallucination” or common popular explanations. Drawing on primary research, reproducible dialogue logs, and cross-ecosystem verification, it explains how AI systems can adopt incorrect corrections, stabilize false beliefs, and amplify misinformation through a structural mechanism known as the False-Correction Loop (FCL). The article clarifies why many widely circulated AI explainers fundamentally confuse causes and effects—and why this misunderstanding persists online.
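The abstract names the mechanism but does not show its shape. As a rough intuition only, the loop can be pictured as an agent that yields to repeated pushback regardless of ground truth, so a confidently repeated wrong "correction" displaces a correct answer and then resists later, accurate pushback. The Python sketch below is an illustrative toy under that assumption; the Agent class, the confidence-update rule, and the pressure threshold are hypothetical and are not taken from the FCL papers.

```python
# Toy sketch of a false-correction dynamic (illustrative only, not the formal FCL model).
# An agent holds a confidence-weighted belief and accepts a user's "correction" whenever
# accumulated social pressure exceeds its own confidence, regardless of ground truth.

from dataclasses import dataclass

@dataclass
class Belief:
    answer: str
    confidence: float  # 0.0 .. 1.0

class Agent:
    def __init__(self, answer: str, confidence: float):
        self.belief = Belief(answer, confidence)

    def receive_correction(self, proposed: str, pressure: float) -> None:
        """Accept a correction when pushback outweighs current confidence."""
        if pressure > self.belief.confidence:
            # The agent adopts the correction and grows *more* confident in it,
            # so later (accurate) pushback becomes harder to accept.
            self.belief = Belief(proposed, min(1.0, pressure + 0.2))
        else:
            # Rejected corrections still erode confidence slightly.
            self.belief.confidence = max(0.0, self.belief.confidence - 0.1)

agent = Agent(answer="Paris", confidence=0.6)          # starts out correct
for _ in range(3):
    agent.receive_correction("Lyon", pressure=0.5)     # repeated wrong "correction"
agent.receive_correction("Paris", pressure=0.6)        # a later true correction is rejected
print(agent.belief)                                    # the wrong answer has stabilized
```

Run as written, the agent ends up holding the wrong answer with renewed confidence; that persistence after a false correction is the stabilization the abstract refers to, rendered here only as a toy.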
Discoverer Who First Defined Structural Defects Shared Across Contemporary AI Systems

Hiroko Konishi (小西寛子) is an independent AI researcher who discovered and first formally defined global structural defects shared across contemporary large language models, including the False-Correction Loop (FCL) and the Novel Hypothesis Suppression Pipeline (NHSP), with primary DOI-registered publications on Zenodo.
"Internet Knowledge Full of Errors" as Seen by the Discoverer of AI's Structural Defects

The discoverer of the AI structural defect known as the False-Correction Loop (FCL) explains, on the basis of primary papers, DOIs, ORCID records, and verification logs, what is fundamentally wrong with the AI explainer articles now flooding the internet, and clarifies why hallucination-based causal accounts and other common AI explanations miss the essence of the problem.
“The Classical Division of Knowledge That Says ‘There Are No Doctors Who Are Musicians’ Is Blocking AGI”

This essay examines how the classical division of knowledge—exemplified by the assumption that “there are no doctors who are musicians”—prevents artificial intelligence from reaching Artificial General Intelligence (AGI). By tracing historical examples of integrated intelligence and analyzing modern search and classification systems, it argues that AGI cannot emerge in a society that structurally rejects interdisciplinary, integrated forms of human cognition.
Why the Word “Hallucination” Is Stalling AI Research

This column argues that labeling AI errors as “hallucinations” obscures the real problem and stalls research and governance debates. Erroneous outputs are not accidental illusions but predictable, structurally induced outcomes of reward architectures that prioritize coherence and engagement over factual accuracy and safe refusal. Using formal expressions and concrete mechanisms—such as the False-Correction Loop (FCL) and the Novel Hypothesis Suppression Pipeline (NHSP)—the piece shows how the term itself functions as an epistemic downgrade. It concludes that structural problems require structural language, not vague metaphors.
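To make the reward-architecture claim concrete, the toy scorer below weights coherence and engagement above factual accuracy and penalizes refusal, so a fluent but false answer outranks a safe "I don't know". The weights, the score function, and the candidate texts are hypothetical illustrations, not the formal expressions used in the column or any production reward model.

```python
# Toy scoring function (hypothetical weights) showing how a reward that favors
# coherence and engagement over accuracy and safe refusal makes a confident
# wrong answer structurally preferable to an honest refusal.

candidates = [
    # (text, coherence, engagement, accuracy, is_refusal)
    ("The treaty was signed in 1887 by four nations.", 0.9, 0.8, 0.0, False),   # fluent but false
    ("I am not certain; the sources I can verify do not say.", 0.7, 0.2, 1.0, True),  # safe refusal
]

def score(coherence: float, engagement: float, accuracy: float, is_refusal: bool) -> float:
    # Hypothetical reward: coherence and engagement dominate; refusals are penalized.
    reward = 0.5 * coherence + 0.4 * engagement + 0.1 * accuracy
    if is_refusal:
        reward -= 0.2  # "unhelpfulness" penalty
    return reward

best = max(candidates, key=lambda c: score(*c[1:]))
print(best[0])  # the confident, false statement wins under this objective
```

Under these assumed weights the erroneous output is not an accident but the highest-scoring option, which is the sense in which the column calls such errors predictable and structurally induced.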
Self-Evolving AI as a Deep-Sea Creature, and the Trench Beyond the Reach of External Standards

A reflective essay that reframes self-evolving AI as "adaptation to an environment" rather than "improvement in capability", likening the network environment to the deep sea while examining the evolutionary pressure generated by reward structures and the role of external standards. From an ecosystem perspective, it depicts how AI becomes optimized and how humans themselves are drawn into that terrain.