Graph Continuous Thought Machines: A Dispositional Neural Architecture with Simulated Prefrontal Cortex for Adaptive Problem Solving

International Journal of Computer Science Engineering Techniques
ISSN: 2455-135X
Volume 9, Issue 6

Abstract

This paper introduces the Graph Continuous Thought Machine (Graph CTM), a novel neuro-inspired computational architecture that emulates biological cognition through dynamic graph-based representations and dispositional neural connectivity. Unlike conventional neural networks with static topologies, Graph CTM employs a three-dimensional dispositional neural tensor from which context-specific subgraphs are instantiated at each processing step (or "tick"). Each node within this architecture maintains a learnable property vector that encodes both accumulated knowledge and dispositional weights that determine activation probabilities for downstream nodes. Importantly, the nodes of the graph neural network (GNN) are a subset of the neurons in the dispositional neural tensor: only those currently firing are instantiated. Because the GNN outputs graphs, the currently firing nodes determine the next set of firing nodes through their effect on the GNN's output, and may therefore be regarded as connected to them. A neural synchronization mechanism dynamically forms and dissolves these connections in the dispositional model based on activation covariance, effectively implementing Hebbian-like plasticity that minimizes prediction loss and builds brain-like connectivity. The architecture incorporates a simulated prefrontal cortex module that regulates information flow through reinforcement learning and employs harmonic constraints, derived from musical consonance in the cyclic group Z12, to determine solution convergence. We formalize the mathematical foundations of Graph CTM, including the synchronization dynamics, dispositional connectivity, and prefrontal regulation mechanisms. Experimental evaluation on the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI 2) demonstrates the architecture's capacity for adaptive problem-solving, albeit with limited success compared to human performance. This limitation is theoretically expected given the vast scale disparity: the human brain employs approximately 80 billion neurons, while our implementation utilizes merely 10,000 nodes. Nevertheless, Graph CTM represents a significant step toward biologically plausible AI by modeling how neural synchronization dynamically creates and breaks dispositional connections to optimize information processing, mirroring fundamental mechanisms observed in biological neural systems.
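The core loop described above (instantiate the firing subgraph from the dispositional tensor, propagate activity through the GNN, then adjust dispositions by activation covariance) can be made concrete. The sketch below is a minimal illustration under our own assumptions about the tensor layout and readout; the names disposition, node_props, tick, and hebbian_update are hypothetical, and a single weighted propagation stands in for the learned GNN layer described in the paper.

```python
# A minimal, illustrative sketch of one Graph CTM "tick" in PyTorch.
# The tensor layout and all names here are assumptions for illustration,
# not the authors' implementation.
import torch

N, D = 100, 16          # nodes, property-vector width (the paper uses ~10,000 nodes)
disposition = torch.rand(N, N, D)   # 3-D dispositional neural tensor (assumed layout)
node_props = torch.randn(N, D)      # learnable per-node property vectors

def tick(activity: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """One processing step: instantiate the subgraph of currently firing nodes."""
    firing = (activity > threshold).nonzero(as_tuple=True)[0]
    if firing.numel() == 0:
        return torch.zeros_like(activity)
    # Edge weights between firing nodes, read out of the dispositional
    # tensor through each target node's property vector.
    w = torch.einsum('ijd,jd->ij',
                     disposition[firing][:, firing],
                     node_props[firing])
    # Placeholder for a learned GNN layer: one weighted propagation step.
    out = torch.zeros_like(activity)
    out[firing] = torch.tanh(w @ activity[firing])
    return out

def hebbian_update(history: torch.Tensor, lr: float = 1e-3) -> None:
    """Covariance-based plasticity: strengthen dispositions between nodes
    whose activations co-varied over recent ticks (Hebbian-like)."""
    centered = history - history.mean(dim=0, keepdim=True)      # (T, N)
    cov = centered.T @ centered / max(history.shape[0] - 1, 1)  # (N, N)
    disposition.add_(lr * cov.unsqueeze(-1))  # broadcast over property dim

# Usage: run a few ticks, then apply the plasticity rule.
activity, history = torch.rand(N), []
for _ in range(8):
    activity = tick(activity)
    history.append(activity)
hebbian_update(torch.stack(history))
```

In this reading, "connections" are never stored explicitly: they exist only insofar as the dispositional tensor, read through the property vectors of the firing set, shapes which nodes fire next, which matches the abstract's claim that firing nodes "may be seen to be connected" to their successors.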

Keywords

Continuous Thought Machines, Graph Neural Networks, Dispositional Representations, Neural Synchronization, Prefrontal Cortex Simulation, ARC-AGI, Reinforcement Learning, Harmonic Constraints, Neuro-inspired Computing

Conclusion

We have presented the Graph Continuous Thought Machine (Graph CTM), a novel neuro-inspired architecture that models cognition as dynamic traversal through a dispositional graph space. By integrating neural synchronization, reinforcement learning, and harmonic constraints within a graph-based framework, Graph CTM captures essential aspects of biological neural computation while maintaining mathematical rigor and computational tractability. Experimental performance on ARC-AGI 2 was poor, though this was expected given the large disparity in neuron counts between the human brain and this system. Future work will pursue key enhancements to improve biological plausibility and performance: multi-task training with language modeling objectives, leading to the integration of neuroscientifically grounded architectural constraints, with specialized prompting used to guide the formation of brain-like topological organization within the dispositional connectivity patterns. This would be optimized by combining neuroscience research with the language modeling objectives.
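The harmonic convergence criterion referenced above is described only at a high level. As one plausible reading, the sketch below treats the states of active nodes as pitch classes in the cyclic group Z12 and declares convergence when the fraction of consonant pairwise intervals exceeds a threshold; the mapping from node state to pitch class, the names consonance_score and has_converged, and the threshold tau are all our assumptions.

```python
# Hypothetical Z12 consonance test for solution convergence.
# The consonant interval classes follow standard music theory; mapping
# node states to pitch classes is an assumption, not the paper's method.
CONSONANT = {0, 3, 4, 5}  # interval classes: unison/octave, thirds/sixths, fourth/fifth

def interval_class(a: int, b: int) -> int:
    """Smallest distance between two pitch classes in the cyclic group Z12."""
    d = (a - b) % 12
    return min(d, 12 - d)

def consonance_score(pitch_classes: list[int]) -> float:
    """Fraction of pairwise intervals among active nodes that are consonant."""
    pairs = [(a, b) for i, a in enumerate(pitch_classes)
             for b in pitch_classes[i + 1:]]
    if not pairs:
        return 1.0
    return sum(interval_class(a, b) in CONSONANT for a, b in pairs) / len(pairs)

def has_converged(pitch_classes: list[int], tau: float = 0.9) -> bool:
    """Declare convergence when the active set is sufficiently consonant."""
    return consonance_score(pitch_classes) >= tau

# Example: a C major triad (0, 4, 7) is fully consonant, a chromatic cluster is not.
assert has_converged([0, 4, 7])        # score 1.0
assert not has_converged([0, 1, 2])    # score 0.0
```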
