Comprehensive Computational Psychology Cheatsheet

Introduction: What is Computational Psychology?

Computational psychology is an interdisciplinary field that applies computational methods, mathematical modeling, and simulation techniques to understand and predict human cognition, behavior, and mental processes. It bridges psychology, computer science, neuroscience, and mathematics to create formal, testable models of psychological phenomena.

Why It Matters:

  • Provides precise, testable theories of psychological processes
  • Enables quantitative predictions of behavior and cognition
  • Facilitates integration between psychology and other scientific disciplines
  • Supports development of AI systems that better understand human behavior
  • Drives innovation in clinical applications, education, and human-computer interaction

Core Concepts and Principles

Fundamental Assumptions

  • Mind as Information Processor: The mind can be understood as a system that processes information, transforming inputs into outputs
  • Algorithmic Thinking: Mental processes can be described using precise algorithms and computational procedures
  • Multiple Levels of Analysis: Psychological phenomena can be studied at computational, algorithmic, and implementation levels
  • Rationality Principles: Human cognition approximates optimal solutions to computational problems posed by the environment
  • Constraint Satisfaction: Mental processes involve balancing multiple constraints simultaneously

Marr’s Levels of Analysis

  1. Computational Level: What problem is being solved? What is the goal?
  2. Algorithmic Level: How is the problem solved? What representations and processes are used?
  3. Implementation Level: How is the algorithm physically realized (neural mechanisms)?

Key Theoretical Frameworks

  • Bayesian Cognitive Science: Mind as probabilistic inference engine
  • Connectionism: Cognition emerges from networks of simple processing units
  • Reinforcement Learning: Behavior shaped by reward and punishment signals
  • Dynamical Systems: Cognition as evolving trajectory through state space
  • Production Systems: Cognition as rule-based symbol manipulation
  • Predictive Processing: Brain as prediction machine minimizing prediction error

Research Methodologies in Computational Psychology

General Research Process

  1. Identify Phenomenon: Select specific cognitive or behavioral phenomenon
  2. Formalize Problem: Define computational problem the mind is solving
  3. Develop Model: Create mathematical/computational model of underlying processes
  4. Generate Predictions: Derive behavioral predictions from model
  5. Collect Empirical Data: Run experiments to test model predictions
  6. Parameter Estimation: Fit model parameters to empirical data
  7. Model Comparison: Compare competing models using statistical criteria
  8. Refine Model: Update based on empirical findings
  9. Cross-Validation: Test model on new datasets
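Steps 6 and 7 above (parameter estimation and model comparison) can be sketched in a few lines of Python. The Bernoulli choice data, the grid search, and the single-parameter model here are purely illustrative:

```python
import math

# Hypothetical data from one participant: 1 = correct choice, 0 = error
data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]

def log_likelihood(p, data):
    """Log-likelihood of a Bernoulli model with success probability p."""
    return sum(math.log(p) if x == 1 else math.log(1 - p) for x in data)

# Step 6 (parameter estimation): grid search for the maximum-likelihood value
grid = [i / 100 for i in range(1, 100)]
best_p = max(grid, key=lambda p: log_likelihood(p, data))

# Step 7 (model comparison): AIC = 2k - 2*logL, with k free parameters
aic = 2 * 1 - 2 * log_likelihood(best_p, data)
print(best_p, round(aic, 2))
```

In practice the grid search would be replaced by a proper optimizer (or by sampling in a Bayesian fit), but the likelihood-then-penalized-comparison logic is the same.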

Model Evaluation Metrics

  • Likelihood: Probability of observing the data given the model. Use when comparing nested models.
  • AIC/BIC: Information criteria that penalize model complexity. Use when comparing non-nested models.
  • Cross-validation: Prediction accuracy on held-out data. Use to assess generalizability.
  • Posterior predictive checks: Agreement between model predictions and observed patterns. Use to assess qualitative fit.
  • Parameter recovery: Ability to recover known parameters from simulated data. Use to assess model identifiability.
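Parameter recovery, the last metric above, amounts to a simulate-then-refit loop: generate data from a known parameter, fit the model to the simulated data, and check that the estimate lands near the generating value. The Bernoulli model and all values below are illustrative:

```python
import random

random.seed(0)

# Known generating parameter and simulated dataset
true_p = 0.7
n = 500
sim = [1 if random.random() < true_p else 0 for _ in range(n)]

# Refit: for a Bernoulli rate the maximum-likelihood estimate is the mean
recovered_p = sum(sim) / n

# A small recovery error suggests the parameter is identifiable
recovery_error = abs(recovered_p - true_p)
print(recovered_p, recovery_error)
```

For multi-parameter models the same check is run across many generating values, often visualized as a scatter of true vs. recovered parameters.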

Computational Modeling Approaches

Bayesian Models

Key Concepts:

  • Prior probabilities represent existing beliefs
  • Likelihood functions represent how data relates to hypotheses
  • Posterior probabilities update beliefs based on evidence
  • Optimal decisions maximize expected utility

Applications:

  • Perception as unconscious inference
  • Concept learning and categorization
  • Decision-making under uncertainty
  • Social cognition and theory of mind

Example: Bayesian Category Learning

P(category|features) ∝ P(features|category) × P(category)
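The proportionality above can be turned into a tiny numeric sketch. The two categories, the single feature, and all probabilities here are assumed for illustration:

```python
# Bayesian category learning: P(category|feature) ∝ P(feature|category) × P(category)
priors = {"bird": 0.5, "mammal": 0.5}          # P(category), assumed
likelihoods = {"bird": 0.9, "mammal": 0.05}    # P(has_wings | category), assumed

# Unnormalized posteriors, then normalize so they sum to 1
unnorm = {c: likelihoods[c] * priors[c] for c in priors}
z = sum(unnorm.values())
posterior = {c: unnorm[c] / z for c in unnorm}
print(posterior)
```

Observing the feature "has wings" shifts nearly all posterior probability onto "bird", even though the priors were equal.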

Neural Network Models

Key Components:

  • Nodes (artificial neurons)
  • Connection weights
  • Activation functions
  • Learning algorithms

Types:

  • Feedforward networks (perception, categorization)
  • Recurrent networks (memory, language)
  • Deep networks (complex pattern recognition)
  • Self-organizing maps (representational development)

Learning Algorithms:

  • Supervised learning (backpropagation)
  • Unsupervised learning (Hebbian learning)
  • Reinforcement learning (temporal difference)
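As a minimal sketch of supervised learning in a single unit, the delta rule (a one-layer special case of backpropagation) can learn the OR function. The data, learning rate, and threshold activation below are illustrative choices:

```python
# Single artificial neuron trained with the delta rule on OR
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR truth table
w = [0.0, 0.0]   # connection weights
b = 0.0          # bias
lr = 0.5         # learning rate

def predict(x):
    # Threshold activation function
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - predict(x)   # weight change is proportional to error
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

predictions = [predict(x) for x, _ in data]
print(predictions)
```

The same error-driven logic, propagated through hidden layers via the chain rule, is what full backpropagation does.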

Reinforcement Learning Models

Key Components:

  • States (S): Current situation
  • Actions (A): Possible behaviors
  • Rewards (R): Feedback signals
  • Policy (π): Strategy for action selection
  • Value function (V): Expected future reward

Key Algorithms:

  • Temporal Difference Learning
  • Q-Learning
  • Actor-Critic Models
  • Model-based RL
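Temporal-difference learning and Q-learning can be sketched on a two-armed bandit. The reward probabilities, learning rate, and exploration rate below are illustrative assumptions:

```python
import random

random.seed(42)

# Two-armed bandit: arm 1 pays off more often (assumed values)
reward_prob = [0.2, 0.8]
q = [0.0, 0.0]             # value estimates for each action
alpha = 0.1                # learning rate
epsilon = 0.1              # exploration rate

for trial in range(2000):
    # Epsilon-greedy policy (π): mostly exploit, occasionally explore
    if random.random() < epsilon:
        a = random.randrange(2)
    else:
        a = 0 if q[0] > q[1] else 1
    r = 1.0 if random.random() < reward_prob[a] else 0.0
    # Temporal-difference update: Q <- Q + alpha * (r - Q)
    q[a] += alpha * (r - q[a])

print(q)
```

After training, the value estimates approximate the underlying reward probabilities, and the greedy policy prefers the richer arm.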

Applications:

  • Habit formation
  • Decision-making
  • Motor learning
  • Addiction and compulsion

Symbolic/Rule-Based Models

Key Components:

  • Symbols representing concepts
  • Rules for manipulating symbols
  • Production systems (if-then rules)
  • Working memory buffers
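A toy production system makes the match-fire cycle over working memory concrete. The facts and if-then rules here are invented for illustration:

```python
# Working memory starts with one fact; rules fire until nothing new is added
working_memory = {"bird(tweety)"}

rules = [
    # (condition fact, fact to add when the rule fires)
    ("bird(tweety)", "has_wings(tweety)"),
    ("has_wings(tweety)", "can_fly(tweety)"),
]

changed = True
while changed:                       # match-fire cycle
    changed = False
    for condition, conclusion in rules:
        if condition in working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)   # rule fires
            changed = True

print(sorted(working_memory))
```

Architectures like ACT-R add much more (activation-based retrieval, conflict resolution, timing), but this is the core production-system loop.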

Examples:

  • ACT-R (Adaptive Control of Thought-Rational)
  • SOAR (State, Operator And Result)
  • EPIC (Executive Process/Interactive Control)

Applications:

  • Problem-solving
  • Reasoning
  • Skill acquisition
  • Memory retrieval

Dynamical Systems Models

Key Concepts:

  • State space
  • Attractors and repellers
  • Bifurcations
  • Self-organization
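A one-dimensional double-well system gives a minimal sketch of attractors and repellers: dx/dt = x − x³ has stable attractors at x = ±1 and a repeller at x = 0. The equation and integration parameters are illustrative choices:

```python
# Euler integration of the double-well system dx/dt = x - x^3
def simulate(x0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * (x - x ** 3)
    return x

# Trajectories starting on either side of the repeller at x = 0 settle
# into different attractors (cf. the two percepts in bistable perception)
left = simulate(-0.1)
right = simulate(0.1)
print(left, right)
```

Changing a control parameter so that an attractor appears or disappears is a bifurcation, the mechanism dynamical models use to explain abrupt developmental or perceptual transitions.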

Applications:

  • Motor control
  • Developmental transitions
  • Perceptual bistability
  • Emotion dynamics

Comparison of Modeling Approaches

Bayesian

  • Strengths: principled treatment of uncertainty; incorporates prior knowledge
  • Limitations: computational complexity; determining appropriate priors
  • Best for: reasoning under uncertainty; optimal-behavior benchmarks

Neural Networks

  • Strengths: learn from experience; handle complex patterns; neurally plausible
  • Limitations: black-box nature; require large datasets; parameter sensitivity
  • Best for: pattern recognition; implicit learning; brain-like processing

Reinforcement Learning

  • Strengths: models learning from feedback; connects behavior to neuroscience
  • Limitations: often requires simplistic task environments
  • Best for: learning; decision-making; habit formation

Symbolic/Rule-Based

  • Strengths: transparent processing; explicit knowledge representation
  • Limitations: difficulty handling uncertainty; scaling issues
  • Best for: expert knowledge; logical reasoning; complex problem-solving

Dynamical Systems

  • Strengths: capture continuous-time processes; emergent behavior
  • Limitations: mathematical complexity; parameter identification
  • Best for: motor control; developmental change; continuous interaction

Key Application Domains

Perception

Visual Perception Models:

  • Bayesian inference in visual illusions
  • Predictive coding in visual processing
  • Deep neural networks for object recognition

Auditory Perception Models:

  • Bayesian causal inference in audio-visual integration
  • Neural networks for speech recognition
  • Dynamical models of rhythm perception

Memory

Working Memory Models:

  • Resource models vs. slot models
  • Attractor network models of maintenance
  • Reinforcement learning models of memory control

Long-Term Memory Models:

  • ACT-R declarative memory module
  • Temporal context models of episodic memory
  • Neural network models of semantic memory

Decision Making

Value-Based Decision Models:

  • Prospect theory
  • Drift diffusion models
  • Reinforcement learning models of reward-based choice
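A drift diffusion model can be simulated in a few lines: noisy evidence accumulates with a constant drift until it crosses an upper or lower boundary, jointly producing a choice and a response time. All parameter values here are illustrative:

```python
import random

random.seed(7)

def ddm_trial(drift=0.3, boundary=1.0, dt=0.01, noise=1.0):
    """One drift diffusion trial: returns (choice, response time)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        # Evidence step: drift plus scaled Gaussian noise
        x += drift * dt + noise * random.gauss(0, 1) * dt ** 0.5
        t += dt
    return (1 if x > 0 else 0), t

trials = [ddm_trial() for _ in range(500)]
accuracy = sum(choice for choice, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(accuracy, mean_rt)
```

Raising the boundary trades speed for accuracy; raising the drift rate improves both, which is why the model separates response caution from processing efficiency.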

Multi-Attribute Decision Models:

  • Bayesian models of preference
  • Evidence accumulation models
  • Quantum probability models of context effects

Language Processing

Language Comprehension Models:

  • Probabilistic context-free grammars
  • Neural network models of sentence processing
  • Bayesian pragmatics

Language Production Models:

  • Spreading activation models
  • Reinforcement learning for dialogue
  • Neural models of lexical retrieval

Social Cognition

Theory of Mind Models:

  • Bayesian models of mental state inference
  • Simulation theory models
  • Game-theoretic models of strategic interaction

Social Learning Models:

  • Bayesian models of cultural transmission
  • Multi-agent reinforcement learning
  • Neural models of imitation learning

Common Challenges and Solutions

Theoretical Challenges

  • Abstraction Level: determining the appropriate level of detail. Solutions: start with the simplest model that captures the phenomenon; systematically add complexity.
  • Parameter Identifiability: multiple parameter sets may fit the data equally well. Solutions: use parameter recovery analysis; collect more diverse behavioral measures.
  • Model Complexity: more complex models fit better but risk overfitting. Solutions: use principled model selection; penalize complexity (AIC/BIC); cross-validate.
  • Bridging Levels: connecting computational and neural levels. Solutions: develop multi-level models; use neural data as constraints.
  • Individual Differences: models fit group averages but miss individual variation. Solutions: hierarchical Bayesian modeling; latent variable approaches.

Practical Challenges

  • Computational Resources: complex models require heavy computation. Solutions: use approximation methods; parallel computing; GPU acceleration.
  • Experiment Design: standard designs may not distinguish between models. Solutions: adaptive experimentation; model-based experimental design.
  • Code Complexity: implementing complex models is error-prone. Solutions: use existing toolboxes; follow software engineering practices; document extensively.
  • Interdisciplinary Knowledge: requires expertise across multiple domains. Solutions: collaborate across disciplines; develop shared vocabularies.
  • Interpretation: connecting model parameters to psychological constructs. Solutions: validate with multiple tasks; correlate with other measures.

Best Practices and Practical Tips

Model Development

  • Start with simple models and incrementally add complexity
  • Implement multiple competing models to compare explanations
  • Simulate synthetic data to verify model implementation
  • Document all modeling assumptions and choices
  • Perform sensitivity analyses for critical parameters
  • Pre-register model predictions when possible

Data Analysis

  • Fit models to individual participants rather than group averages when possible
  • Use hierarchical Bayesian methods to pool information across participants
  • Conduct posterior predictive checks to assess model fit
  • Compare model predictions across multiple tasks/conditions
  • Always include appropriate null/baseline models
  • Report model complexity alongside fit measures

Programming and Implementation

  • Use established computational frameworks (PyTorch, TensorFlow, Stan)
  • Version control your code (Git)
  • Create reproducible analysis pipelines
  • Document code thoroughly
  • Share code and model implementations publicly
  • Use unit tests to verify model components

Collaboration and Communication

  • Develop interdisciplinary vocabulary
  • Create visualizations of model mechanics for non-technical audiences
  • Report both formal model details and intuitive explanations
  • Highlight practical implications of modeling results
  • Connect model parameters to established psychological constructs

Essential Tools and Resources

Programming Languages and Libraries

  • Python: PsyNeuLink, PyMC3, TensorFlow, PyTorch, scikit-learn
  • R: brms, rstan, lme4, rjags
  • MATLAB: Psychtoolbox, SPM, EEGLAB
  • Julia: Turing.jl, Flux.jl, DifferentialEquations.jl

Model-Specific Frameworks

  • Bayesian Modeling: Stan, JAGS, WebPPL
  • Cognitive Architectures: ACT-R, SOAR, Leabra
  • Neural Models: Emergent, The Virtual Brain, NEURON
  • Reinforcement Learning: OpenAI Gym, PsychRNN

Data Collection Platforms

  • Online Behavioral Experiments: jsPsych, lab.js, PsychoPy
  • Neuroimaging: SPM, FSL, AFNI
  • Eye-Tracking: PyGaze, EyeLink, GazeRecorder
  • Physiological Data: Biopac, OpenSignals

Resources for Further Learning

Textbooks and Key References

  • Sun, R. (2008). The Cambridge Handbook of Computational Psychology
  • Busemeyer, J.R., et al. (2015). Quantum Models of Cognition and Decision
  • Griffiths, T.L., et al. (2010). Probabilistic Models of Cognition
  • O’Reilly, R.C. & Munakata, Y. (2000). Computational Explorations in Cognitive Neuroscience

Online Courses and Tutorials

  • Computational Cognitive Neuroscience (CCN) course materials (https://CompCogNeuro.org)
  • Probabilistic Models of Cognition (https://probmods.org)
  • Neuromatch Academy (https://neuromatch.io/academy)
  • Kaggle competitions for applied modeling

Research Groups and Labs

  • Computational Cognitive Science Lab (MIT)
  • Princeton Computational Cognitive Science Lab
  • Max Planck Institute for Human Development
  • DeepMind Neuroscience Research
  • Stanford Computational Cognitive Science Group

Conferences and Journals

  • Conferences: Cognitive Science Society, NeurIPS, Computational Psychiatry
  • Journals: Computational Brain & Behavior, Neural Computation, Psychological Review, Cognitive Science