This article covers formal definitions of artificial intelligence, their mathematical foundations, and the associated philosophical considerations. Beginning with computational learning theory (PAC learning, VC dimension, Rademacher complexity), it proceeds through theoretical frameworks such as Solomonoff inductive inference, AIXI, and Kolmogorov complexity, and then develops computational theories of consciousness (IIT 4.0, GWT) and neurosymbolic AI integration. It is a comprehensive survey for researchers and implementers, incorporating the most recent research as of 2025 (COLT 2024, Colelough et al. 2025, Tononi et al. 2024).
Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27(11), 1134-1142.
Vapnik, V. N., & Chervonenkis, A. Y. (1971). On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability & Its Applications, 16(2), 264-280.
Bartlett, P. L., & Mendelson, S. (2002). Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3, 463-482.
Truong, T. D. (2025). Recent advances in Rademacher complexity bounds for deep learning. arXiv preprint.
Sachs, J., Kanade, V., & Srebro, N. (2023). Data-dependent generalization bounds via algorithmic stability revisited. COLT 2023.
Kawaguchi, K., Deng, Z., Ji, X., & Huang, J. (2023). How does information bottleneck help deep learning? ICML 2023.
Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67-82.
COLT (2024). Conference on Learning Theory 2024 Proceedings.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424.
Hernández-Orallo, J. (2017). The Measure of All Minds: Evaluating Natural and Artificial Intelligence. Cambridge University Press.
Chollet, F. (2019). On the measure of intelligence. arXiv:1911.01547.
OpenAI (2024). o3-mini technical report.
Johnson-Laird, P. N., & Ragni, M. (2023). Comparing human and AI reasoning. Minds and Machines, 33, 1-25.
Solomonoff, R. J. (1964). A formal theory of inductive inference, Parts I and II. Information and Control, 7(1), 1-22; 7(2), 224-254.
Rissanen, J. (1978). Modeling by shortest data description. Automatica, 14(5), 465-471.
Hutter, M. (2000). A theory of universal artificial intelligence based on algorithmic complexity. arXiv:cs/0004001.
Hutter, M. (2005). Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer.
Veness, J., Ng, K. S., Hutter, M., Uther, W., & Silver, D. (2011). A Monte-Carlo AIXI approximation. Journal of Artificial Intelligence Research, 40, 95-142.
Hutter, M. (2012). Can intelligence explode? Journal of Consciousness Studies, 19(1-2), 143-166.
Legg, S., & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds and Machines, 17(4), 391-444.
Colelough, C., et al. (2025). Neurosymbolic AI integration: A systematic review 2020-2024. AI Review.
Tononi, G., Albantakis, L., Boly, M., Cirelli, C., & Koch, C. (2024). Integrated information theory 4.0. Nature Reviews Neuroscience.
Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press.
Dehaene, S., Kerszberg, M., & Changeux, J. P. (1998). A neuronal model of a global workspace in effortful cognitive tasks. PNAS, 95, 14529-14534.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138.
Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79-87.
Kaplan, J., et al. (2020). Scaling laws for neural language models. arXiv:2001.08361.
Hoffmann, J., et al. (2022). Training compute-optimal large language models. arXiv:2203.15556.
Villalobos, P., et al. (2024). Will we run out of data? Limits of LLM scaling based on human-generated data. arXiv:2211.04325.