DCM: Advanced topics
Klaas Enno Stephan
Zurich SPM Course 2014, 14 February 2014

Overview
• Generative models & analysis options
• Extended DCM for fMRI: nonlinear, two-state, stochastic
• Embedding computational models into DCMs
• Integrating tractography and DCM
• Applications of DCM to clinical questions

Generative models
p(y | θ, m), p(θ | m) → p(θ | y, m)
Advantages:
1. force us to think mechanistically: how were the data caused?
2. allow one to generate synthetic data.
3. Bayesian perspective → inversion & model evidence

Dynamic Causal Modeling (DCM)
• Neural state equation: dx/dt = F(x, u, θ), driven by experimental inputs u
• Hemodynamic forward model: neural activity → BOLD
• Electromagnetic forward model: neural activity → EEG / MEG / LFP
• fMRI: simple neuronal model, complicated forward model
• EEG/MEG: complicated neuronal model, simple forward model

Bayesian system identification
• Design experimental inputs u(t)
• Define likelihood model:
  – neural dynamics: dx/dt = f(x, u, θ)
  – observer function: y = g(x, θ)
  – likelihood: p(y | θ, m) = N(g(θ), Σ(θ))
• Specify priors: p(θ, m) = N(μ_θ, C_θ)
• Invert model: p(θ | y, m) = p(y | θ, m) p(θ, m) / p(y | m), with model evidence p(y | m) = ∫ p(y | θ, m) p(θ) dθ
• Make inferences: on model structure and on model parameters

VB in a nutshell (mean-field approximation)
• Negative free energy as an approximation to the log model evidence:
  ln p(y | m) = F + KL[q(θ, λ), p(θ, λ | y)]
  F = ⟨ln p(y, θ, λ)⟩_q − KL[q(θ, λ), p(θ, λ | m)]
• Mean-field approximation: q(θ, λ) = q(θ) q(λ)
• Maximising F w.r.t. q = minimising the divergence from the true posterior, by maximising the variational energies:
  q(θ) ∝ exp(I(θ)) = exp(⟨ln p(y, θ, λ)⟩_q(λ))
  q(λ) ∝ exp(I(λ)) = exp(⟨ln p(y, θ, λ)⟩_q(θ))
• Iterative updating of the sufficient statistics of the approximate posteriors by gradient ascent.

Generative models
p(θ | y, m) ∝ p(y | θ, m) p(θ | m)
• any DCM = a particular generative model of how the data (may) have been caused
• modelling = comparing competing hypotheses about the mechanisms underlying observed data
  – model space: a priori definition of the hypothesis set is crucial
  – model selection: determine the most plausible hypothesis (model), given the data
  – inference on parameters: e.g., evaluate consistency of how model mechanisms are implemented across subjects
• model selection ≠ model validation!
  model validation requires external criteria (external to the measured data)

Model comparison and selection
Given competing hypotheses on the structure & functional mechanisms of a system, which model is the best?
Which model represents the best balance between model fit and model complexity?
For which model m does p(y | m) become maximal?
Pitt & Myung (2002) TICS

Bayesian model selection (BMS)
Model evidence: p(y | m) = ∫ p(y | θ, m) p(θ | m) dθ
• accounts for both accuracy and complexity of the model
• a measure of generalizability: p(y | m) is a normalised distribution over all possible datasets y
• log p(y | m) = ⟨log p(y | θ, m)⟩_q − KL[q(θ), p(θ | m)] + KL[q(θ), p(θ | y, m)]
• various approximations, e.g. negative free energy, AIC, BIC
Ghahramani 2004; MacKay 1992, Neural Comput.; Penny et al. 2004a, NeuroImage

Approximations to the model evidence in DCM
• The logarithm is a monotonic function: maximizing the log model evidence = maximizing the model evidence.
• Log model evidence = balance between fit and complexity:
  log p(y | m) = accuracy(m) − complexity(m) = log p(y | θ, m) − complexity(m)
• In SPM2 & SPM5, the interface offers two approximations:
  – Akaike Information Criterion: AIC = log p(y | θ, m) − p, where p = number of parameters
  – Bayesian Information Criterion: BIC = log p(y | θ, m) − (p/2) log N, where N = number of data points
Penny et al. 2004a, NeuroImage
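A minimal numerical sketch of the AIC and BIC approximations defined above and the resulting log Bayes factor between two competing models. This is not SPM code; the log-likelihoods, parameter counts and scan number are made-up values for illustration only.

```python
# Sketch: comparing two models via the AIC/BIC approximations to the log evidence.
import numpy as np

def aic(log_lik, n_params):
    # AIC = log p(y|theta,m) - p
    return log_lik - n_params

def bic(log_lik, n_params, n_data):
    # BIC = log p(y|theta,m) - (p/2) * log N
    return log_lik - 0.5 * n_params * np.log(n_data)

# hypothetical values for two competing DCMs fitted to the same data
log_lik = {"m1": -250.0, "m2": -245.0}
n_params = {"m1": 12, "m2": 18}
n_data = 360  # number of scans (made up)

for name in ("m1", "m2"):
    print(name, "AIC:", aic(log_lik[name], n_params[name]),
          "BIC:", bic(log_lik[name], n_params[name], n_data))

# log Bayes factor of m2 over m1 under each approximation;
# a log BF > 3 is conventionally read as strong evidence
log_bf_aic = aic(log_lik["m2"], n_params["m2"]) - aic(log_lik["m1"], n_params["m1"])
log_bf_bic = bic(log_lik["m2"], n_params["m2"]) - bic(log_lik["m1"], n_params["m1"])
print("log BF (AIC):", log_bf_aic, "log BF (BIC):", log_bf_bic)
```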
The (negative) free energy approximation
• Under Gaussian assumptions about the posterior (Laplace approximation):
  log p(y | m) = ⟨log p(y | θ, m)⟩_q − KL[q(θ), p(θ | m)] + KL[q(θ), p(θ | y, m)]
  F = log p(y | m) − KL[q(θ), p(θ | y, m)]
    = ⟨log p(y | θ, m)⟩_q − KL[q(θ), p(θ | m)]
    = accuracy − complexity

The complexity term in F
• In contrast to AIC & BIC, the complexity term of the negative free energy F accounts for parameter interdependencies:
  KL[q(θ), p(θ | m)] = ½ ln|C_θ| − ½ ln|C_θ|y| + ½ (μ_θ|y − μ_θ)ᵀ C_θ⁻¹ (μ_θ|y − μ_θ)
• The complexity term of F is higher
  – the more independent the prior parameters (↑ effective DFs)
  – the more dependent the posterior parameters
  – the more the posterior mean deviates from the prior mean
• NB: Since SPM8, only F is used for model selection!

Roadmap of analysis options (Stephan et al. 2010, NeuroImage)
• Definition of model space, then: inference on model structure or inference on model parameters?
• Inference on model structure:
  – on individual models: optimal model structure assumed to be identical across subjects? yes → FFX BMS; no → RFX BMS
  – on a model space partition: comparison of model families using FFX or RFX BMS
• Inference on model parameters:
  – parameters of an optimal model: optimal model structure assumed to be identical across subjects? yes → FFX BMS, then FFX analysis of parameter estimates (e.g. BPA); no → RFX BMS, then RFX analysis of parameter estimates (e.g. t-test, ANOVA)
  – parameters of all models → BMA

Random effects BMS for heterogeneous groups (Stephan et al. 2009a, NeuroImage; Penny et al. 2010, PLoS Comput. Biol.)
Hierarchical model of the measured data y:
• r ~ Dir(r; α): Dirichlet distribution of model probabilities r; the Dirichlet parameters α = "occurrences" of models in the population
• m_k ~ Mult(m; 1, r): multinomial distribution of model labels m (one label per subject)
• y_k ~ p(y_k | m_k): subject-specific data, generated under that subject's model
• Model inversion by Variational Bayes or MCMC

Inference about DCM parameters: Bayesian single-subject analysis
• Gaussian assumptions about the posterior distributions of the parameters
• Use of the cumulative normal distribution to test the probability that a certain parameter (or contrast of parameters cᵀ η_θ|y) is above a chosen threshold γ:
  p = Φ_N( (cᵀ η_θ|y − γ) / sqrt(cᵀ C_θ|y c) )
• By default, γ is chosen as zero ("does the effect exist?").

Inference about DCM parameters: Random effects group analysis (classical)
• In analogy to "random effects" analyses in SPM, 2nd-level analyses can be applied to DCM parameters:
  – separate fitting of identical models for each subject
  – selection of model parameters of interest
  – one-sample t-test: parameter > 0?
  – paired t-test: parameter 1 > parameter 2?
  – rmANOVA: e.g. in case of multiple sessions per subject

Bayesian Model Averaging (BMA) (Penny et al. 2010, PLoS Comput. Biol.)
• uses the entire model space considered (or an optimal family of models)
• averages parameter estimates, weighted by posterior model probabilities:
  p(θ_n | y_1..N) = Σ_m p(θ_n | y_n, m) p(m | y_1..N)
• NB: p(m | y_1..N) can be obtained by either FFX or RFX BMS
• particularly useful alternative when
  – none of the models (or model subspaces) considered clearly outperforms all others
  – comparing groups for which the optimal model differs

Roadmap of analysis options (revisited): the same decision tree as above (Stephan et al. 2010, NeuroImage).
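As a numerical illustration of the BMA option in the roadmap above, the following minimal sketch averages the posterior means of one connectivity parameter across models, weighted by posterior model probabilities. It is not SPM's BMA implementation (which samples from the full mixture of posteriors), and all log-evidences and posterior means are made up.

```python
# Sketch: Bayesian model averaging of a single connectivity parameter.
import numpy as np

log_evidence = np.array([-300.0, -297.0, -305.0])  # three models, made-up values
post_mean    = np.array([0.45, 0.30, 0.60])        # posterior means of the same
                                                    # parameter under each model

# posterior model probabilities (softmax of log-evidences, uniform model priors)
p_m = np.exp(log_evidence - log_evidence.max())
p_m /= p_m.sum()

# BMA estimate: model-specific posterior means weighted by p(m|y)
theta_bma = np.sum(p_m * post_mean)
print("p(m|y):", np.round(p_m, 3), " BMA mean:", round(theta_bma, 3))
```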
Overview
• Generative models & analysis options
• Extended DCM for fMRI: nonlinear, two-state, stochastic
• Embedding computational models into DCMs
• Integrating tractography and DCM
• Applications of DCM to clinical questions

The evolution of DCM in SPM
• DCM is not one specific model, but a framework for Bayesian inversion of dynamic system models
• The default implementation in SPM is evolving over time:
  – improvements of numerical routines (e.g., for inversion)
  – changes in parameterization (e.g., self-connections, hemodynamic states in log space)
  – changes in priors to accommodate new variants (e.g., stochastic DCMs, endogenous DCMs, etc.)
• To enable replication of your results, you should ideally state which SPM version (release number) you are using when publishing papers. The release number is stored in the DCM.mat file.

The classical DCM: a deterministic, one-state, bilinear model
• Neural state equation: dx/dt = (A + Σ_j u_j B^(j)) x + C u
  – endogenous connectivity: A = ∂ẋ/∂x
  – modulation of connectivity: B^(j) = ∂²ẋ/(∂x ∂u_j)
  – direct inputs: C = ∂ẋ/∂u
• A driving input u1(t) and a modulatory input u2(t) act on the neuronal states x1(t), x2(t), x3(t); the hemodynamic model λ maps neuronal activity x (after integration of the state equation) onto the measured BOLD signal y.

Factorial structure of model specification in DCM10
• Three dimensions of model specification:
  – bilinear vs. nonlinear
  – single-state vs. two-state (per region)
  – deterministic vs. stochastic
• Specification via GUI.

Bilinear vs. nonlinear DCM
• Two-dimensional Taylor series (around x0 = 0, u0 = 0):
  dx/dt = f(x, u) ≈ f(x0, 0) + (∂f/∂x) x + (∂f/∂u) u + (∂²f/∂x∂u) ux + (∂²f/∂x²) x²/2 + …
• Bilinear state equation:
  dx/dt = (A + Σ_{i=1..m} u_i B^(i)) x + C u
• Nonlinear state equation:
  dx/dt = (A + Σ_{i=1..m} u_i B^(i) + Σ_{j=1..n} x_j D^(j)) x + C u

Nonlinear dynamic causal model (DCM) (Stephan et al. 2008, NeuroImage)
[Figure: simulated neural population activity (inputs u1, u2 and states x1–x3) and the corresponding fMRI signal changes (%) over 100 s]
• Example: attention to motion, with regions V1, V5 and PPC; visual stimulation (stim) drives V1, motion modulates connectivity, and attention-dependent PPC activity nonlinearly gates the V1 → V5 connection (D parameter).
[Figure: estimated coupling parameters (e.g., MAP = 1.25) and the posterior density of the gating parameter; p(D_{V5,V1}^{PPC} > 0 | y) = 99.1%]
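A minimal sketch of the bilinear neural state equation above, integrated with a simple Euler scheme for a hypothetical two-region system. It covers only the neuronal level (no hemodynamic forward model), the coupling matrices and inputs are invented for illustration, and it is not SPM's integration routine.

```python
# Sketch: Euler integration of dx/dt = (A + u2*B) x + C u1 for two regions.
import numpy as np

dt, T = 0.01, 50.0                        # step size and duration (s)
t = np.arange(0.0, T, dt)

A = np.array([[-1.0, 0.0],                # endogenous connectivity; negative
              [ 0.4, -1.0]])              # self-connections keep the system stable
B = np.array([[0.0, 0.0],                 # modulatory input u2 gates region1 -> region2
              [0.6, 0.0]])
C = np.array([[1.0],                      # driving input u1 enters region 1 only
              [0.0]])

u1 = (np.sin(2 * np.pi * t / 10.0) > 0).astype(float)   # boxcar driving input
u2 = (t > 25.0).astype(float)                            # modulation in the 2nd half

x = np.zeros((2, len(t)))
for i in range(1, len(t)):
    dx = (A + u2[i] * B) @ x[:, i - 1] + C @ np.array([u1[i]])
    x[:, i] = x[:, i - 1] + dt * dx       # Euler step

print("final neuronal states:", np.round(x[:, -1], 3))
```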
Two-state DCM (Marreiros et al. 2008, NeuroImage)
• Each region contains an excitatory (x^E) and an inhibitory (x^I) neuronal population, instead of a single state; driving inputs u enter the excitatory populations.
• Single-state DCM: dx/dt = Λx + Cu, with coupling λ_ij = A_ij + u B_ij
• Two-state DCM: dx/dt = Λx + Cu, with coupling λ_ij = exp(A_ij + u B_ij), i.e. a log-parameterisation that enforces positive connection strengths.
• The coupling matrix Λ combines extrinsic (between-region) connections among the excitatory populations (λ^EE_ij) with intrinsic (within-region) connections between the excitatory and inhibitory populations of each region (λ^EI_ii, λ^IE_ii, λ^II_ii).

Stochastic DCM (Li et al. 2011, NeuroImage)
dx/dt = (A + Σ_j u_j B^(j)) x + Cv + ω^(x)
v = u + ω^(v)
• all states are represented in generalised coordinates of motion
• random state fluctuations ω^(x) account for endogenous fluctuations; they have unknown precision and smoothness → two hyperparameters
• fluctuations ω^(v) induce uncertainty about how inputs influence neuronal activity
• can be fitted to resting-state data
[Figure: estimates of hidden causes and states obtained by generalised filtering — inputs or causes (V2), neuronal hidden states, hemodynamic hidden states (flow, volume, dHb), and observed vs. predicted BOLD signal over ~1200 s]

Overview
• Generative models & analysis options
• Extended DCM for fMRI: nonlinear, two-state, stochastic
• Embedding computational models in DCMs
• Integrating tractography and DCM
• Applications of DCM to clinical questions

Prediction errors drive synaptic plasticity (McLaren 1989)
• Associative weight as a running sum (or integral) of prediction errors, schematically:
  w_t = w_0 + Σ_{k=1..t} PE_k   or   w_t = a_0 ∫_0^t PE(τ) dτ
• synaptic plasticity during learning = f(prediction error)

Learning of dynamic audio-visual associations (den Ouden et al. 2010, J. Neurosci.)
• An auditory conditioning stimulus (CS1 or CS2) predicts a visual target stimulus (face or house) with a probability p(face) that changes over trials; subjects respond to the target.
[Figure: trial structure (CS at 0 ms, target stimulus and response within ~800 ms, 2000 ± 650 ms between trials) and the time course of p(face) across ~1000 trials]

Hierarchical Bayesian learning model (Behrens et al. 2007, Nat. Neurosci.)
• prior on volatility: k
• volatility: p(v_{t+1} | v_t, k) = N(v_t, exp(k))
• probabilistic association: p(r_{t+1} | r_t, v_t) = Dir(r_t, exp(v_t))
• observed events: u_t, u_{t+1}, …

Explaining RTs by different learning models (den Ouden et al. 2010, J. Neurosci.)
• Five alternative learning models (see the Rescorla-Wagner sketch below):
  – categorical probabilities
  – hierarchical Bayesian learner
  – Rescorla-Wagner
  – Hidden Markov models (2 variants: fixed and learned)
• Reaction times vary with outcome probability; all models are fitted to trial-wise RTs.
• Bayesian model selection (exceedance probabilities): the hierarchical Bayesian model performs best.
[Figure: RT as a function of outcome probability, and exceedance probabilities for the five models]

Stimulus-independent prediction error (den Ouden et al. 2010, J. Neurosci.)
• Activations in putamen and premotor cortex (p < 0.05, cluster-level whole-brain corrected, and p < 0.05, SVC)
[Figure: BOLD responses (a.u.) in putamen and premotor cortex as a function of p(F) and p(H)]

Prediction error (PE) activity in the putamen
• PE during active sensory learning; PE during incidental sensory learning (den Ouden et al. 2009, Cerebral Cortex)
• PE during reinforcement learning (O'Doherty et al. 2004, Science)
• PE = "teaching signal" for synaptic plasticity during learning
• Could the putamen be regulating trial-by-trial changes of task-relevant connections?

Prediction errors control plasticity during adaptive cognition (den Ouden et al. 2010, J. Neurosci.)
• DCM with visual areas (FFA, PPA), dorsal premotor cortex (PMd) and putamen (PUT); trial-wise prediction errors from the hierarchical Bayesian learning model modulate the visual → premotor connections.
• Influence of visual areas on premotor cortex:
  – stronger for surprising stimuli
  – weaker for expected stimuli
  (p = 0.017, p = 0.010)
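A minimal sketch of a Rescorla-Wagner (delta-rule) learner, one of the five models compared above, run on a simulated outcome sequence with a changing face probability. The learning rate, block structure and outcomes are invented; the point is simply that each trial yields a prediction error that could, in principle, serve as a trial-by-trial teaching signal for connection strengths.

```python
# Sketch: Rescorla-Wagner learning of a changing stimulus-outcome association.
import numpy as np

rng = np.random.default_rng(0)
p_face = np.r_[np.full(100, 0.9), np.full(100, 0.3), np.full(100, 0.7)]  # changing association
outcomes = (rng.random(p_face.size) < p_face).astype(float)             # 1 = face, 0 = house

alpha, v = 0.1, 0.5          # learning rate and initial association strength (made up)
prediction_errors = []
for o in outcomes:
    pe = o - v               # prediction error on this trial
    v += alpha * pe          # delta-rule update
    prediction_errors.append(pe)

print("mean |PE| per block:",
      [round(float(np.mean(np.abs(prediction_errors[i:i + 100]))), 2) for i in (0, 100, 200)])
```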
Overview
• Generative models & analysis options
• Extended DCM for fMRI: nonlinear, two-state, stochastic
• Embedding computational models in DCMs
• Integrating tractography and DCM
• Applications of DCM to clinical questions

Diffusion-weighted imaging
Parker & Alexander 2005, Phil. Trans. B

Probabilistic tractography (Kaden et al. 2007, NeuroImage)
• computes the local fibre orientation density by spherical deconvolution of the diffusion-weighted signal
• estimates the spatial probability distribution of connectivity from given seed regions
• anatomical connectivity = proportion of fibre pathways originating in a specific source region that intersect a target region
• if the area or volume of the source region approaches a point, this measure reduces to the method by Behrens et al. (2003)

Integration of tractography and DCM (Stephan, Tittgemeyer et al. 2009, NeuroImage)
• low probability of anatomical connection → small prior variance of the effective connectivity parameter
• high probability of anatomical connection → large prior variance of the effective connectivity parameter
[Figure: narrow vs. broad Gaussian shrinkage priors on the coupling between two regions R1 and R2]

Proof of concept study (Stephan, Tittgemeyer et al. 2009, NeuroImage)
• Probabilistic tractography between left/right lingual gyrus (LG) and left/right fusiform gyrus (FG) yields anatomical connection probabilities of 6.5%, 15.7%, 34.2% and 43.6% for the connections among these regions.
• These probabilities define connection-specific priors for the DCM coupling parameters, with prior variances v = 0.0384, 0.1070, 0.5268 and 0.7746, respectively.

Connection-specific prior variance as a function of anatomical connection probability
• 64 different mappings from connection probability to prior variance, generated by a systematic search across two hyperparameters (a, b); see the sketch below
• yields anatomically informed (intuitive and counterintuitive) and anatomically uninformed priors
[Figure: the 64 candidate mappings m1–m64, each plotting prior variance (0–1) against anatomical connection probability (0–1) for a particular (a, b) pair]
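An illustrative sketch of one possible two-hyperparameter sigmoidal mapping from anatomical connection probability to prior variance. The parametric form and the values of a, b and v_max are placeholders, not the mapping actually used by Stephan, Tittgemeyer et al. (2009); consequently the outputs do not reproduce the variances quoted on the slide above, they only illustrate "higher connection probability → larger prior variance".

```python
# Sketch: mapping anatomical connection probability phi (0..1) onto a prior variance.
import numpy as np

def prior_variance(phi, a=0.0, b=8.0, v_max=1.0):
    # logistic function of phi, scaled to a maximum variance v_max (placeholder form)
    return v_max / (1.0 + np.exp(-(a + b * (phi - 0.5))))

phi = np.array([0.065, 0.157, 0.342, 0.436])   # probabilities from the proof-of-concept slide
print(np.round(prior_variance(phi), 3))        # monotonically increasing prior variances
```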
[Figure: log group Bayes factors and posterior model probabilities across the 64 models; models with anatomically informed priors (of an intuitive form) are highlighted]
Models with anatomically informed priors (of an intuitive form) were clearly superior to anatomically uninformed ones: Bayes factor > 10^9.

Overview
• Generative models & analysis options
• Extended DCM for fMRI: nonlinear, two-state, stochastic
• Embedding computational models in DCMs
• Integrating tractography and DCM
• Applications of DCM to clinical questions

Model-based predictions for single patients
• inference on model structure → BMS
• set of parameter estimates → model-based decoding

BMS: Parkinson's disease and treatment (Rowe et al. 2010, NeuroImage)
• Age-matched controls, PD patients on medication, PD patients off medication
• Selection of action modulates connections between PFC and SMA
• DA-dependent functional disconnection of the SMA

Model-based decoding by generative embedding (Brodersen et al. 2011, PLoS Comput. Biol.)
• step 1 — model inversion: measurements from an individual subject → subject-specific inverted generative model (e.g., a DCM of regions A, B, C)
• step 2 — kernel construction: subject representation in the generative score space
• step 3 — support vector classification: separating hyperplane fitted to discriminate between groups
• step 4 — interpretation: jointly discriminative model parameters (e.g., A→B, A→C, B→B, B→C)
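A generic sketch of steps 1–4 above, not the exact pipeline of Brodersen et al. (2011): each subject is represented by a vector of DCM posterior parameter means (the "generative score space") and a linear SVM separates patients from controls. The subject features, group difference and sample sizes are simulated.

```python
# Sketch: generative embedding as classification of subjects in DCM parameter space.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_group, n_params = 20, 6                      # 6 coupling parameters per subject (made up)

controls = rng.normal(0.3, 0.1, (n_per_group, n_params))
patients = rng.normal(0.3, 0.1, (n_per_group, n_params))
patients[:, 2] -= 0.15                             # one connection is weaker in patients

X = np.vstack([controls, patients])                # step 2: subjects in the score space
y = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]

clf = SVC(kernel="linear")                         # step 3: separating hyperplane
acc = cross_val_score(clf, X, y, cv=5).mean()      # cross-validated accuracy
clf.fit(X, y)
print("CV accuracy:", round(acc, 2))
print("most discriminative parameters:",           # step 4: interpretation
      np.argsort(-np.abs(clf.coef_[0]))[:2])
```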
Discovering remote or "hidden" brain lesions (Brodersen et al. 2011, PLoS Comput. Biol.)
• Model-based decoding of disease status: mildly aphasic patients (N = 11) vs. controls (N = 26)
• Connectional fingerprints from a 6-region DCM of auditory areas during speech perception: planum temporale (PT), Heschl's gyrus (HG / A1) and medial geniculate body (MGB) in both hemispheres, driven by auditory stimuli (S).

Classification accuracy (Brodersen et al. 2011, PLoS Comput. Biol.)
• Generative embedding using DCM (generative score space) vs. a multivariate searchlight classification analysis (voxel-based feature space)
[Figure: patients and controls plotted in the generative score space (e.g., couplings among L.MGB, L.HG and R.HG) and in a voxel-based feature space (voxels at (-42,-26,10), (-56,-20,10) and (64,-24,4) mm)]

Definition of ROIs (Brodersen et al. 2011, PLoS Comput. Biol.)
Are regions of interest defined anatomically or functionally?
• anatomically:
  – A: 1 ROI definition and n model inversions → unbiased estimate
• functionally — are the functional contrasts defined across all subjects or between groups?
  – across subjects:
    – B: 1 ROI definition and n model inversions → slightly optimistic estimate: voxel selection for training set and test set based on test data
    – C: repeat n times: 1 ROI definition and n model inversions → unbiased estimate
  – between groups:
    – D: 1 ROI definition and n model inversions → highly optimistic estimate: voxel selection for training set and test set based on test data and test labels
    – E: repeat n times: 1 ROI definition and 1 model inversion → slightly optimistic estimate: voxel selection for training set based on test data and test labels
    – F: repeat n times: 1 ROI definition and n model inversions → unbiased estimate

Generative embedding for detecting patient subgroups (Brodersen et al. 2014, NeuroImage: Clinical)
• Generative embedding using variational Gaussian mixture models
• 42 controls vs. 41 schizophrenic patients; fMRI data from a working memory task (Deserno et al. 2012, J. Neurosci.)
• Supervised: SVM classification (71%)
• Unsupervised: GMM clustering, with model selection over the number of clusters (see the clustering sketch below)

Detecting subgroups of patients in schizophrenia (Brodersen et al. 2014, NeuroImage: Clinical)
• Optimal cluster solution: three distinct subgroups (total N = 41)
• Subgroups differ (p < 0.05) with respect to negative symptoms on the Positive and Negative Syndrome Scale (PANSS)

TAPAS — www.translationalneuromodeling.org/tapas
• PhysIO: physiological noise correction (Kasper et al., in prep.)
• Hierarchical Gaussian Filter (HGF) (Mathys et al. 2011, Front. Hum. Neurosci.)
• Variational Bayesian linear regression (Brodersen et al. 2011, PLoS CB)
• Mixed effects inference for classification studies (Brodersen et al. 2012, J Mach Learning Res)
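A simplified sketch of the unsupervised branch above: clustering subjects in the generative score space and selecting the number of clusters. Brodersen et al. (2014) used variational Gaussian mixture models; here a standard GaussianMixture with BIC-based selection stands in, and the subject-wise DCM parameter vectors are simulated.

```python
# Sketch: detecting patient subgroups by Gaussian mixture clustering of DCM parameters.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# simulated DCM parameter vectors for 41 patients drawn from three hypothetical subgroups
centres = np.array([[0.2, 0.5, -0.1], [0.6, 0.1, 0.3], [-0.2, 0.4, 0.5]])
X = np.vstack([rng.normal(c, 0.08, (14, 3)) for c in centres])[:41]

# fit mixtures with 1..5 components and select the number of clusters by BIC
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 6)}
best_k = min(bic, key=bic.get)
print("BIC per k:", {k: round(v, 1) for k, v in bic.items()}, " best k:", best_k)

labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(X)
print("subgroup sizes:", np.bincount(labels))
```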
Methods papers: DCM for fMRI and BMS – part 1
• Brodersen KH, Schofield TM, Leff AP, Ong CS, Lomakina EI, Buhmann JM, Stephan KE (2011) Generative embedding for model-based classification of fMRI data. PLoS Computational Biology 7: e1002079.
• Brodersen KH, Deserno L, Schlagenhauf F, Lin Z, Penny WD, Buhmann JM, Stephan KE (2014) Dissecting psychiatric spectrum disorders by generative embedding. NeuroImage: Clinical 4: 98-111.
• Daunizeau J, David O, Stephan KE (2011) Dynamic Causal Modelling: a critical review of the biophysical and statistical foundations. NeuroImage 58: 312-322.
• Daunizeau J, Stephan KE, Friston KJ (2012) Stochastic Dynamic Causal Modelling of fMRI data: should we care about neural noise? NeuroImage 62: 464-481.
• Friston KJ, Harrison L, Penny W (2003) Dynamic causal modelling. NeuroImage 19: 1273-1302.
• Friston K, Stephan KE, Li B, Daunizeau J (2010) Generalised filtering. Mathematical Problems in Engineering 2010: 621670.
• Friston KJ, Li B, Daunizeau J, Stephan KE (2011) Network discovery with DCM. NeuroImage 56: 1202-1221.
• Friston K, Penny W (2011) Post hoc Bayesian model selection. NeuroImage 56: 2089-2099.
• Kiebel SJ, Kloppel S, Weiskopf N, Friston KJ (2007) Dynamic causal modeling: a generative model of slice timing in fMRI. NeuroImage 34: 1487-1496.
• Li B, Daunizeau J, Stephan KE, Penny WD, Friston KJ (2011) Stochastic DCM and generalised filtering. NeuroImage 58: 442-457.
• Marreiros AC, Kiebel SJ, Friston KJ (2008) Dynamic causal modelling for fMRI: a two-state model. NeuroImage 39: 269-278.
• Penny WD, Stephan KE, Mechelli A, Friston KJ (2004a) Comparing dynamic causal models. NeuroImage 22: 1157-1172.
• Penny WD, Stephan KE, Mechelli A, Friston KJ (2004b) Modelling functional integration: a comparison of structural equation and dynamic causal models. NeuroImage 23 Suppl 1: S264-274.

Methods papers: DCM for fMRI and BMS – part 2
• Penny WD, Stephan KE, Daunizeau J, Joao M, Friston K, Schofield T, Leff AP (2010) Comparing families of dynamic causal models. PLoS Computational Biology 6: e1000709.
• Penny WD (2012) Comparing dynamic causal models using AIC, BIC and free energy. NeuroImage 59: 319-330.
• Stephan KE, Harrison LM, Penny WD, Friston KJ (2004) Biophysical models of fMRI responses. Curr Opin Neurobiol 14: 629-635.
• Stephan KE, Weiskopf N, Drysdale PM, Robinson PA, Friston KJ (2007) Comparing hemodynamic models with DCM. NeuroImage 38: 387-401.
• Stephan KE, Harrison LM, Kiebel SJ, David O, Penny WD, Friston KJ (2007) Dynamic causal models of neural system dynamics: current state and future extensions. J Biosci 32: 129-144.
• Stephan KE, Kasper L, Harrison LM, Daunizeau J, den Ouden HE, Breakspear M, Friston KJ (2008) Nonlinear dynamic causal models for fMRI. NeuroImage 42: 649-662.
• Stephan KE, Penny WD, Daunizeau J, Moran RJ, Friston KJ (2009a) Bayesian model selection for group studies. NeuroImage 46: 1004-1017.
• Stephan KE, Tittgemeyer M, Knösche TR, Moran RJ, Friston KJ (2009b) Tractography-based priors for dynamic causal models. NeuroImage 47: 1628-1638.
• Stephan KE, Penny WD, Moran RJ, den Ouden HEM, Daunizeau J, Friston KJ (2010) Ten simple rules for Dynamic Causal Modelling. NeuroImage 49: 3099-3109.

Thank you