|
|
|
|
LEADER |
00000cam a2200000 i 4500 |
001 |
b10134075 |
003 |
CoU |
005 |
20231010161741.7 |
007 |
ta |
008 |
170727t20182018enka b 001 0 eng |
010 |
|
|
|a 2017025806
|
020 |
|
|
|a 9781107109995
|
020 |
|
|
|a 110710999X
|
020 |
|
|
|a 9781107525610
|
020 |
|
|
|a 1107525616
|
035 |
|
|
|a (OCoLC)995630312
|
040 |
|
|
|a DLC
|b eng
|e rda
|c DLC
|d OCLCO
|d OCLCF
|d OCLCQ
|d YDX
|d CHVBK
|d OCLCO
|d W2U
|d U3G
|
042 |
|
|
|a pcc
|
049 |
|
|
|a CODA
|
050 |
0 |
0 |
|a BF311
|b .F358 2018
|
066 |
|
|
|c (S
|
100 |
1 |
|
|a Farrell, Simon,
|d 1976-
|e author.
|0 http://id.loc.gov/authorities/names/n2010047645
|1 http://isni.org/isni/0000000382431295
|
245 |
1 |
0 |
|a Computational modeling of cognition and behavior /
|c Simon Farrell, University of Western Australia, Perth, Stephan Lewandowsky, University of Bristol.
|
264 |
|
1 |
|a Cambridge, United Kingdom ;
|a New York, NY :
|b Cambridge University Press,
|c 2018.
|
264 |
|
4 |
|c ©2018
|
300 |
|
|
|a xxii, 461 pages :
|b illustrations ;
|c 26 cm
|
336 |
|
|
|a text
|b txt
|2 rdacontent
|
337 |
|
|
|a unmediated
|b n
|2 rdamedia
|
338 |
|
|
|a volume
|b nc
|2 rdacarrier
|
504 |
|
|
|a Includes bibliographical references and index.
|
650 |
|
0 |
|a Cognition
|x Mathematical models.
|
650 |
|
0 |
|a Psychology
|x Mathematical models.
|0 http://id.loc.gov/authorities/subjects/sh85108463
|
650 |
|
7 |
|a Cognition
|x Mathematical models.
|2 fast
|0 (OCoLC)fst00866468
|
650 |
|
7 |
|a Psychology
|x Mathematical models.
|2 fast
|0 (OCoLC)fst01081481
|
700 |
1 |
|
|a Lewandowsky, Stephan,
|e author.
|0 http://id.loc.gov/authorities/names/n90679823
|1 http://isni.org/isni/0000000384110241
|
880 |
0 |
0 |
|6 505-00
|a Machine generated contents note:
|g pt. I
|t Introduction to Modeling --
|g 1.
|t Introduction --
|g 1.1.
|t Models and Theories in Science --
|g 1.2.
|t Quantitative Modeling in Cognition --
|g 1.2.1.
|t Models and Data --
|g 1.2.2.
|t Data Description --
|g 1.2.3.
|t Cognitive Process Models --
|g 1.3.
|t Potential Problems: Scope and Falsifiability --
|g 1.4.
|t Modeling as a "Cognitive Aid" for the Scientist --
|g 1.5.
|t In Vivo --
|g 2.
|t From Words to Models --
|g 2.1.
|t Response Times in Speeded-Choice Tasks --
|g 2.2.
|t Building a Simulation --
|g 2.2.1.
|t Getting Started: R and RStudio --
|g 2.2.2.
|t The Random-Walk Model --
|g 2.2.3.
|t Intuition vs. Computation: Exploring the Predictions of a Random Walk --
|g 2.2.4.
|t Trial-to-Trial Variability in the Random-Walk Model --
|g 2.2.5.
|t A Family of Possible Sequential-Sampling Models --
|g 2.3.
|t The Basic Toolkit --
|g 2.3.1.
|t Parameters --
|g 2.3.2.
|t Connecting Model and Data --
|g 2.4.
|t In Vivo --
|g pt. II
|t Parameter Estimation --
|g 3.
|t Basic Parameter Estimation Techniques --
|g 3.1.
|t Discrepancy Function --
|g 3.1.1.
|t Root Mean Squared Deviation (RMSD) --
|g 3.1.2.
|t Chi-Squared (χ2) --
|g 3.2.
|t Fitting Models to Data: Parameter Estimation Techniques --
|g 3.3.
|t Least-Squares Estimation in a Familiar Context --
|g 3.3.1.
|t Visualizing Modeling --
|g 3.3.2.
|t Estimating Regression Parameters --
|g 3.4.
|t Inside the Box: Parameter Estimation Techniques --
|g 3.4.1.
|t Simplex --
|g 3.4.2.
|t Simulated Annealing --
|g 3.4.3.
|t Relative Merits of Parameter Estimation Techniques --
|g 3.5.
|t Variability in Parameter Estimates --
|g 3.5.1.
|t Bootstrapping --
|g 3.6.
|t In Vivo --
|g 4.
|t Maximum Likelihood Parameter Estimation --
|g 4.1.
|t Basics of Probabilities --
|g 4.1.1.
|t Defining Probability --
|g 4.1.2.
|t Properties of Probabilities --
|g 4.1.3.
|t Probability Functions --
|g 4.2.
|t What Is a Likelihood? --
|g 4.3.
|t Defining a Probability Distribution --
|g 4.3.1.
|t Probability Functions Specified by the Psychological Model --
|g 4.3.2.
|t Probability Functions via Data Models --
|g 4.3.3.
|t Two Types of Probability Functions --
|g 4.3.4.
|t Extending the Data Model --
|g 4.3.5.
|t Extension to Multiple Data Points and Multiple Parameters --
|g 4.4.
|t Finding the Maximum Likelihood --
|g 4.5.
|t Properties of Maximum Likelihood Estimators --
|g 4.6.
|t In Vivo --
|g 5.
|t Combining Information from Multiple Participants --
|g 5.1.
|t It Matters How You Combine Data from Multiple Units --
|g 5.2.
|t Implications of Averaging --
|g 5.3.
|t Fitting Aggregate Data --
|g 5.4.
|t Fitting Individual Participants --
|g 5.5.
|t Fitting Subgroups of Data and Individual Differences --
|g 5.5.1.
|t Mixture Modeling --
|g 5.5.2.
|t K-Means Clustering --
|g 5.5.3.
|t Modeling Individual Differences --
|g 5.6.
|t In Vivo --
|g 6.
|t Bayesian Parameter Estimation --
|g 6.1.
|t What Is Bayesian Inference? --
|g 6.1.1.
|t From Conditional Probabilities to Bayes Theorem --
|g 6.1.2.
|t Marginalizing Probabilities --
|g 6.2.
|t Analytic Methods for Obtaining Posteriors --
|g 6.2.1.
|t The Likelihood Function --
|g 6.2.2.
|t The Prior Distribution --
|g 6.2.3.
|t The Evidence or Marginal Likelihood --
|g 6.2.4.
|t The Posterior Distribution --
|g 6.2.5.
|t Estimating the Bias of a Coin --
|g 6.2.6.
|t Summary --
|g 6.3.
|t Determining the Prior Distributions of Parameters --
|g 6.3.1.
|t Non-Informative Priors --
|g 6.3.2.
|t Reference Priors --
|g 6.4.
|t In Vivo --
|g 7.
|t Bayesian Parameter Estimation --
|g 7.1.
|t Markov Chain Monte Carlo Methods --
|g 7.1.1.
|t The Metropolis-Hastings Algorithm for MCMC --
|g 7.1.2.
|t Estimating Multiple Parameters --
|g 7.2.
|t Problems Associated with MCMC Sampling --
|g 7.2.1.
|t Convergence of MCMC Chains --
|g 7.2.2.
|t Autocorrelation in MCMC Chains --
|g 7.2.3.
|t Outlook --
|g 7.3.
|t Approximate Bayesian Computation: A Likelihood-Free Method --
|g 7.3.1.
|t Likelihoods That Cannot Be Computed --
|g 7.3.2.
|t From Simulations to Estimates of the Posterior --
|g 7.3.3.
|t An Example: ABC in Action --
|g 7.4.
|t In Vivo --
|g 8.
|t Bayesian Parameter Estimation --
|g 8.1.
|t Gibbs Sampling --
|g 8.1.1.
|t A Bivariate Example of Gibbs Sampling --
|g 8.1.2.
|t Gibbs vs. Metropolis-Hastings Sampling --
|g 8.1.3.
|t Gibbs Sampling of Multivariate Spaces --
|g 8.2.
|t JAGS: An Introduction --
|g 8.2.1.
|t Installing JAGS --
|g 8.2.2.
|t Scripting for JAGS --
|g 8.3.
|t JAGS: Revisiting Some Known Models and Pushing Their Boundaries --
|g 8.3.1.
|t Bayesian Modeling of Signal-Detection Theory --
|g 8.3.2.
|t A Bayesian Approach to Multinomial Tree Models: The High-Threshold Model --
|g 8.3.3.
|t A Bayesian Approach to Multinomial Tree Models --
|g 8.3.4.
|t Summary --
|g 8.4.
|t In Vivo --
|g 9.
|t Multilevel or Hierarchical Modeling --
|g 9.1.
|t Conceptualizing Hierarchical Modeling --
|g 9.2.
|t Bayesian Hierarchical Modeling --
|g 9.2.1.
|t Graphical Models --
|g 9.2.2.
|t Hierarchical Modeling of Signal-Detection Performance --
|g 9.2.3.
|t Hierarchical Modeling of Forgetting --
|g 9.2.4.
|t Hierarchical Modeling of Inter-Temporal Preferences --
|g 9.2.5.
|t Summary --
|g 9.3.
|t Hierarchical Maximum Likelihood Modeling --
|g 9.3.1.
|t Hierarchical Maximum Likelihood Model of Signal Detection --
|g 9.4.
|t Recommendations --
|g 9.5.
|t In Vivo --
|g pt. III
|t Model Comparison --
|g 10.
|t Model Comparison --
|g 10.1.
|t Psychological Data and the Very Bad Good Fit --
|g 10.1.1.
|t Model Complexity and Over-Fitting --
|g 10.2.
|t Model Comparison --
|g 10.3.
|t The Likelihood Ratio Test --
|g 10.4.
|t Akaike's Information Criterion --
|g 10.5.
|t Other Methods for Calculating Complexity and Comparing Models --
|g 10.5.1.
|t Cross-Validation --
|g 10.5.2.
|t Minimum Description Length --
|g 10.5.3.
|t Normalized Maximum Likelihood --
|g 10.6.
|t Parameter Identifiability and Model Testability --
|g 10.6.1.
|t Identifiability --
|g 10.6.2.
|t Testability --
|g 10.7.
|t Conclusions --
|g 10.8.
|t In Vivo --
|g 11.
|t Bayesian Model Comparison Using Bayes Factors --
|g 11.1.
|t Marginal Likelihoods and Bayes Factors --
|g 11.2.
|t Methods for Obtaining the Marginal Likelihood --
|g 11.2.1.
|t Numerical Integration --
|g 11.2.2.
|t Simple Monte Carlo Integration and Importance Sampling --
|g 11.2.3.
|t The Savage-Dickey Ratio --
|g 11.2.4.
|t Transdimensional Markov Chain Monte Carlo --
|g 11.2.5.
|t Laplace Approximation --
|g 11.2.6.
|t Bayesian Information Criterion --
|g 11.3.
|t Bayes Factors for Hierarchical Models --
|g 11.4.
|t The Importance of Priors --
|g 11.5.
|t Conclusions --
|g 11.6.
|t In Vivo --
|g pt. IV
|t Models in Psychology --
|g 12.
|t Using Models in Psychology --
|g 12.1.
|t Broad Overview of the Steps in Modeling --
|g 12.2.
|t Drawing Conclusions from Models --
|g 12.2.1.
|t Model Exploration --
|g 12.2.2.
|t Analyzing the Model --
|g 12.2.3.
|t Learning from Parameter Estimates --
|g 12.2.4.
|t Sufficiency of a Model --
|g 12.2.5.
|t Model Necessity --
|g 12.2.6.
|t Verisimilitude vs. Truth --
|g 12.3.
|t Models as Tools for Communication and Shared Understanding --
|g 12.4.
|t Good Practices to Enhance Understanding and Reproducibility --
|g 12.4.1.
|t Use Plain Text Wherever Possible --
|g 12.4.2.
|t Use Sensible Variable and Function Names --
|g 12.4.3.
|t Use the Debugger --
|g 12.4.4.
|t Commenting --
|g 12.4.5.
|t Version Control --
|g 12.4.6.
|t Sharing Code and Reproducibility --
|g 12.4.7.
|t Notebooks and Other Tools --
|g 12.4.8.
|t Enhancing Reproducibility and Runnability --
|g 12.5.
|t Summary --
|g 12.6.
|t In Vivo --
|g 13.
|t Neural Network Models --
|g 13.1.
|t Hebbian Models --
|g 13.1.1.
|t The Hebbian Associator --
|g 13.1.2.
|t Hebbian Models as Matrix Algebra --
|g 13.1.3.
|t Describing Networks Using Matrix Algebra --
|g 13.1.4.
|t The Auto-Associator --
|g 13.1.5.
|t Limitations of Hebbian Models --
|g 13.2.
|t Backpropagation --
|g 13.2.1.
|t Learning and the Backpropagation of Error --
|g 13.2.2.
|t Applications and Criticisms of Backpropagation in Psychology --
|g 13.3.
|t Final Comments on Neural Networks --
|g 13.4.
|t In Vivo --
|g 14.
|t Models of Choice Response Time --
|g 14.1.
|t Ratcliff's Diffusion Model --
|g 14.1.1.
|t Fitting the Diffusion Model --
|g 14.1.2.
|t Interpreting the Diffusion Model --
|g 14.1.3.
|t Falsifiability of the Diffusion Model --
|g 14.2.
|t Ballistic Accumulator Models --
|g 14.2.1.
|t Linear Ballistic Accumulator --
|g 14.2.2.
|t Fitting the LBA --
|g 14.3.
|t Summary --
|g 14.4.
|t Current Issues and Outlook --
|g 14.5.
|t In Vivo --
|g 15.
|t Models in Neuroscience --
|g 15.1.
|t Methods for Relating Neural and Behavioral Data --
|g 15.2.
|t Reinforcement Learning Models --
|g 15.2.1.
|t Theories of Reinforcement Learning --
|g 15.2.2.
|t Neuroscience of Reinforcement Learning --
|g 15.3.
|t Neural Correlates of Decision-Making --
|g 15.3.1.
|t Rise-to-Threshold Models of Saccadic Decision-Making --
|g 15.3.2.
|t Relating Model Parameters to the BOLD Response --
|g 15.3.3.
|t Accounting for Response Time Variability --
|g 15.3.4.
|t Using Spike Trains as Model Input --
|g 15.3.5.
|t Jointly Fitting Behavioral and Neural Data --
|g 15.4.
|t Conclusions --
|g 15.5.
|t In Vivo.
|
907 |
|
|
|a .b101340758
|b 03-19-20
|c 05-24-18
|
998 |
|
|
|a sci
|b 06-04-18
|c x
|d m
|e -
|f eng
|g enk
|h 0
|i 1
|
944 |
|
|
|a MARS - RDA ENRICHED
|
948 |
|
|
|a lr
|
999 |
f |
f |
|i a1ded6aa-a32b-52fc-bda5-1c80975be633
|s 4b4472a4-81c1-5759-9935-c8e0738e5996
|
952 |
f |
f |
|p Can circulate
|a University of Colorado Boulder
|b Boulder Campus
|c Norlin
|d Norlin Library - Science Stacks
|e BF311 .F358 2018
|h Library of Congress classification
|i book
|m U183073641956
|n 1
|