"A multinomial model of event-based prospective memory"
Ute Johanna Bayen & Rebekah E. Smith, The University of North Carolina at Chapel Hill
Event-based prospective memory involves remembering to perform an action in response to a particular future event (e.g., giving a colleague a message when you see her). In the typical laboratory paradigm for investigating event-based prospective memory, participants perform a particular action (e.g., pressing the Z key) when a target word appears on a computer screen during an ongoing task (e.g., a lexical decision task). We distinguish between the prospective component of the task (remembering that you have to do something) and the retrospective component (remembering when to perform the action). A current debate in the prospective-memory literature concerns the question of whether the processes involved in the prospective component of the task are automatic or resource-demanding. To address this issue, we have developed a multinomial processing tree model (Smith & Bayen, 2004), which is the first formal model of event-based prospective memory. The model includes a parameter P that measures the degree to which the prospective component of the prospective-memory task is resource-demanding, a parameter M for the retrospective-memory component of the task, and two parameters related to the ongoing task.
We validated the model in eight experiments, which demonstrated a good fit of the model to the data and showed that experimental and individual-difference variables affected model parameters in predictable and separable ways. Manipulations of instructions to place importance either on the prospective-memory task or on the ongoing task affected parameter P only. Manipulations of the similarity of target and distracter events affected parameters P and M. A manipulation of the difficulty of target encoding affected parameter M only. Working memory span influenced parameters P and M, especially when the ongoing activity was particularly demanding. In experiments with normal young and older adults, we found age-related differences in parameter P, but not in M. A model variant postulating an automatic prospective component did not fit the data in any of the experiments. We illustrate advantages of the multinomial modeling approach over traditional design-based approaches in the prospective-memory paradigm.
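The measurement logic behind such a model can be illustrated with a small computational sketch. The tree below is a deliberately simplified, hypothetical structure: the parameter names P and M follow the abstract, but the branch layout and the guessing parameter g are illustrative assumptions, not the actual Smith and Bayen (2004) model equations.

```python
# Hypothetical, simplified processing tree for a target trial in an
# event-based prospective-memory task.  P and M follow the abstract above;
# the branch structure and guessing parameter g are illustrative only.

def target_trial_probs(P, M, g):
    """Category probabilities on a trial where the target word appears.

    P: probability the prospective component succeeds (resources engaged)
    M: probability the retrospective component succeeds (target recognized)
    g: probability of guessing "target" when recognition fails
    """
    # Branches ending in a prospective-memory response:
    p_pm = P * M + P * (1 - M) * g
    # Branches ending in only the ongoing-task response:
    p_ongoing = P * (1 - M) * (1 - g) + (1 - P)
    return {"pm_response": p_pm, "ongoing_only": p_ongoing}

probs = target_trial_probs(P=0.7, M=0.8, g=0.2)
```

Because each branch probability is a product of conditional probabilities and the branches exhaust the tree, the category probabilities sum to one; fitting such a model amounts to choosing parameter values so that these predicted proportions match the observed response frequencies.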
"Cognitive Psychometrics, Multinomial Processing Tree (MPT) Models, and Cultural Consensus Theory (CCT)"
William H. Batchelder, University of California, Irvine
I will discuss some of the history at UCI that led to the classes of models that go by the labels MPT and CCT. I will briefly discuss the extent of their influence in cognitive psychology and cultural anthropology. I will discuss their similarities and differences and make the point that both classes of models are similar in spirit. That spirit is informed simplicity, where the disadvantage of approximation rather than completeness is traded off against the advantage of great mathematical and statistical understanding. Also, MPT and CCT share the advantage of being statistical process models from which one can simulate data, and this is an advantage over exploratory data-analysis methods such as multidimensional scaling and clustering. I will discuss some recent work that brings both MPT and CCT into contact with clinical data and differential psychology. In this discussion I will attempt to define what I mean by cognitive psychometrics. Finally, I will discuss computational, formal, and statistical advances that are in need of development in order to make these classes of models more useful to the scientific community.
"Multinomial Models for Social Information Processing"
Ece Batchelder and William H. Batchelder, University of California, Irvine
A number of studies have been published arguing that graph-theoretic balance facilitates learning and memory of dyadic social relations. In our studies, participants read stories about the friendship relations between pairs of actors in a social network. The social structure presented in the stories was represented as an incomplete signed graph, and groups differed as to whether the stories did or did not satisfy balance. Later, participants were tested for source memory of which pairs were described in the story and whether the relationship for a pair was (or was inferred to be) positive or negative.
A family of multinomial processing tree models was developed for the paradigm that included item detection and source discrimination, as well as inference processes and guessing. The analyses revealed that the balanced group had better source memory and more inferences based on balance. Participants also had better memory for negative ties than for positive ones. In addition, the data provide tentative evidence that the base rate of positive ties in the presented story, beyond the bias toward graph-theoretic balance suggested in earlier studies, affects the memory and learning of social structures.
"Free Listing and Latent Semantic Analysis"
Roy D'Andrade, Professor of Anthropology, University of Connecticut
Despite the ease with which word associations and free-listing responses can be elicited, and the apparent informativeness of such responses, this technique has not been utilized by cognitive anthropologists and other cognitive scientists. Part of the problem is that factor analyses of the stimulus words based on similarity in associations or listing terms have proven disappointing, providing approximately the same data as similarity sorting. Analysis of the content of the associations or free-listing responses has been more effective, but has required coding and other labor-intensive processes.
Given the potential of word association and free listing, it seems worthwhile to investigate the possibility of using the techniques of latent semantic analysis developed by Landauer and others for automated text analysis. By focusing on the structure of the responses, and by using singular value decomposition as a method, it has proven possible to investigate the structure of implicit associations without depending on direct overlap in frequencies. Using various data samples, this talk will discuss the advantages and disadvantages of using latent semantic analysis for the analysis of these data, comparing latent semantic analysis to correspondence analysis and other methods of analysis.
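The key property just described, namely recovering implicit association without direct overlap in frequencies, can be demonstrated with a minimal sketch. The toy term-by-document counts below are invented for illustration: terms A and C never co-occur, yet after a truncated singular value decomposition (the LSA step) they become nearly identical in the reduced space because both co-occur with B.

```python
import numpy as np

# Toy term-by-document count matrix (invented data): terms A and C never
# co-occur, but both co-occur with B; terms D and E form a separate cluster.
#              d1 d2 d3 d4
X = np.array([[1, 0, 0, 0],    # A
              [1, 1, 0, 0],    # B
              [0, 1, 0, 0],    # C
              [0, 0, 1, 1],    # D
              [0, 0, 1, 1]], dtype=float)  # E

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

raw_sim = cosine(X[0], X[2])           # A vs. C in the raw space: no overlap

# Truncated SVD: keep only the k largest singular values.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
X_k = (U[:, :k] * s[:k]) @ Vt[:k, :]   # rank-k reconstruction of X

lsa_sim = cosine(X_k[0], X_k[2])       # A vs. C after dimension reduction
```

In the raw matrix the A-C cosine is zero; after the rank-2 reconstruction both rows load on the same latent dimension and their cosine approaches one, while terms from the unrelated D/E cluster stay orthogonal.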
"Cultural Consonance and Individual Adaptation in Urban Brazil"
William W. Dressler, Department of Anthropology, University of Alabama
In research in Brazil and the U.S., the hypothesis was developed and tested that cultural consonance is associated with health outcomes, including arterial blood pressure and depressive symptoms. Cultural consonance is the degree to which individuals are able to approximate in their own beliefs and behaviors the prototypes for belief and behavior encoded in shared cultural models. In previous research, it was found that individuals who had higher cultural consonance in the domains of lifestyle and social support had lower blood pressures and fewer depressive symptoms. Research recently conducted in urban Brazil replicates and extends these findings. In this research, a more extensive cultural domain analysis, employing cultural consensus analysis as the ultimate step, was carried out in order to improve the description of shared cultural models in several domains. This made possible both more sensitive measures of cultural consonance in lifestyle and social support, and measures of cultural consonance in additional domains (family life, national character, and foodways). The following findings structure this presentation: (a) cultural domain analysis using both structured ethnographic techniques and unstructured interviewing was consistent with the hypothesis of shared cultural models in the domains studied; (b) the associations of cultural consonance with arterial blood pressure and with depressive symptoms were replicated; (c) the patterns of association of cultural consonance and blood pressure and cultural consonance and depressive symptoms were different; and, (d) cultural consonance was prospectively associated with depressive symptoms, independent of other predictors, at a follow-up of 1-2 years. These results are consistent with the hypothesis that cultural consonance with shared cultural models is an important component of individual adaptation.
"Statistical Tests for Parameterized Multinomial Models: Power Approximation and Power Optimization"
Edgar Erdfelder, Department of Psychology, University of Mannheim
Parameterized multinomial models like log-linear models, Multinomial Processing Tree (MPT) models, or Cultural Consensus Theory (CCT) models play an important role in the social sciences. Empirical applications usually start with an overall goodness-of-fit test of a base model. If the base model can be retained, special tests of significance are used to test hypotheses on parameter values (i.e., parameter fixations) or differences between parameter values (i.e., equality constraints) within or across populations. Substantive conclusions are typically based on the results of these special parameter tests. Obviously, not only the Type 1 error probability α but also the Type 2 error probability β may severely bias both the statistical decisions and the substantive conclusions derived from these decisions. Nevertheless, most researchers routinely employ standard levels of significance like α = .05 or α = .01 without making any reference to β or its complement 1 − β, the power of the statistical test. In those few cases in which researchers have aimed at controlling 1 − β, they referred to the effect size conventions ("small", "medium", and "large") introduced by Jacob Cohen (1969). However, this approach is often misleading because the meaning of "small", "medium", and "large" effect sizes may differ between models, designs, and statistical hypotheses.
I propose an alternative approach that allows for directly controlling the power of a test as a function of the model parameters under the null and the alternative hypothesis. As will be shown, this approach leads to more interpretable results. In addition, it is easy to apply in practice, using standard software for parameterized multinomial models and statistical power analyses. Different methods of power approximation will be described and compared by means of a Monte Carlo study for different goodness-of-fit statistics (the likelihood-ratio χ², Pearson's χ², and the Cressie-Read statistic). The final part of my talk is devoted to several techniques for maximizing the test power given a fixed overall sample size: (1) the optimal choice of a test statistic, (2) the optimal decomposition of the overall sample size in case of joint multinomial models, (3) optimization of parameter values not addressed in the statistical hypothesis, and (4) optimization of the test strategy.
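To make concrete what such a power computation involves, here is a minimal Monte Carlo sketch (not the approximation methods of the talk) for the simplest possible case: a likelihood-ratio (G²) test of the point hypothesis θ = .50 on a single binomial parameter, with power estimated by simulating data under a specific alternative θ = .60 rather than by appeal to a conventional "medium" effect size.

```python
import numpy as np

rng = np.random.default_rng(1)
theta0, theta1, N = 0.50, 0.60, 200   # null value, true value, sample size
crit = 3.841                          # chi-square(df=1) cutoff for alpha = .05

def g2(k, n, p0):
    """Likelihood-ratio statistic G^2 for H0: theta = p0 with binomial data."""
    stat = 0.0
    for obs, exp in ((k, n * p0), (n - k, n * (1 - p0))):
        if obs > 0:
            stat += 2 * obs * np.log(obs / exp)
    return stat

# Estimated power: proportion of simulated data sets generated under the
# alternative in which H0 is rejected at the .05 level.
ks = rng.binomial(N, theta1, size=20000)
power = float(np.mean([g2(k, N, theta0) > crit for k in ks]))
```

The analytic route replaces the simulation by a noncentral chi-square distribution whose noncentrality parameter is computed from the two parameter values; that is what makes power directly interpretable in terms of the model parameters rather than generic effect-size labels.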
"GPT models: Basic theory, initial implementation, and existing issues and challenges"
Xiangen Hu, Department of Psychology, The University of Memphis
The General Processing Tree (GPT) model is a general mathematical formulation of multinomial processing tree (MPT) models. This presentation will be divided into three parts.
- Mathematical and statistical properties of GPT models: This part will give an overview of the theoretical results that have been obtained in prior research. I will emphasize some extensions. For example, I will talk about the extension of GPT models to the analysis of categorical data such as contingency tables.
- Computer software packages for GPT models: Several computer software packages have been created for analyzing GPT and MPT models. I will explore the features of these packages. The purpose of evaluating the existing packages is to see what else we can do to make the GPT/MPT tools available to researchers in other domains. For example, I will explore the possibility of providing components in popular software packages, such as R.
- Existing issues and challenges: To end the presentation, I will raise a few issues with GPT models, such as the issue of structural identifiability for GPT models. I will also share with the audience the lessons learned from using GPT models to analyze data from directed-forgetting experiments.
"Bayesian Cultural Consensus Theory"
George Karabatsos, Department of Educational Psychology, University of Illinois at Chicago
A Cultural Consensus Theory (CCT) model provides a device to infer the beliefs of a cultural group of respondents from a set of questionnaire data, using answer key parameters that describe the culturally correct response to each questionnaire item, and parameters that describe differences between respondents according to ability. Parameters for respondent response bias and item difficulty may also be included in the model. This presentation discusses a general Bayesian approach for performing statistical inference with CCT models, which includes methods of model estimation, model testing, and model selection (Karabatsos & Batchelder, 2003, Psychometrika). Specifically, the approach: 1) specifies a Markov chain Monte Carlo algorithm that provides the basis for estimating the posterior distribution of the parameters of a CCT model (the answer key parameters, respondent ability parameters, and possibly response-bias parameters and item difficulty parameters), 2) includes methods for providing global or detailed tests of fit of a CCT model through posterior-predictive p-values, and 3) employs the Deviance Information Criterion for the task of model selection, in order to determine which model, of a set of CCT models with different parameterizations, implies a predictive distribution that is closest (in Kullback-Leibler distance) to the (unknown) true sampling distribution of the item responses. This entire Bayesian framework is illustrated through analyses of real data sets, where one of the applications involves placing order-constraints on the ability parameters of the CCT model. A simulation study demonstrates that estimates of the answer key parameters can be quite accurate even for a respondent sample size as low as 3.
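The small-sample claim can be checked with a short simulation. The sketch below generates true/false responses from the General Condorcet Model and recovers the answer key by a competence-weighted vote, which is the Bayes-optimal point estimate when the competencies are known; the full MCMC machinery of the talk additionally estimates the competencies themselves, and the specific numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_items = 40
D = [0.8, 0.7, 0.6]                    # competencies of just 3 respondents
key = rng.integers(0, 2, n_items)      # true (latent) answer key

# General Condorcet Model: respondent i knows the answer with probability
# D_i and otherwise guesses true/false with probability .5 each.
resp = np.empty((len(D), n_items), dtype=int)
for i, d in enumerate(D):
    knows = rng.random(n_items) < d
    resp[i] = np.where(knows, key, rng.integers(0, 2, n_items))

# A response matches the key with probability d + (1 - d)/2, so the
# log-likelihood-ratio weight of respondent i's vote is:
w = np.array([np.log((d + (1 - d) / 2) / ((1 - d) / 2)) for d in D])

votes = (w[:, None] * (2 * resp - 1)).sum(axis=0)   # signed weighted vote
key_hat = (votes > 0).astype(int)                   # MAP key, uniform prior
accuracy = float((key_hat == key).mean())
```

Even with only three respondents of moderate competence, the weighted-majority estimate recovers the large majority of the 40 answer-key items, which is the intuition behind the simulation result reported above.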
Finally, in looking towards the future, I will discuss possible Bayesian hierarchical formulations of CCT models, in order to handle situations where there is more than one cultural group in the respondent sample, to accommodate any dependence that may exist between item (or person) parameters, and/or to define Bayesian non-parametric versions of CCT models through Dirichlet-Process (hyper-)priors. Of course, the Bayesian inference framework for model estimation, model testing, and model selection is easily extended to the hierarchical versions of CCT models.
"Hierarchical Multinomial Processing Tree Models: A Latent-Class Approach"
Christoph Klauer, Institute of Psychology, University of Freiburg
Multinomial processing tree models are widely used in many areas of psychology. Their application relies on the assumption of parameter homogeneity, that is, on the assumption that participants do not differ in their parameter values. Tests for parameter homogeneity are proposed that can be routinely used as part of multinomial model analyses to defend the assumption. If parameter homogeneity is found to be violated, a new family of models, termed latent-class multinomial processing tree models, can be applied that accommodates parameter heterogeneity and correlated parameters, yet preserves most of the advantages of the traditional multinomial method. Estimation, goodness-of-fit tests, and tests of other hypotheses of interest are considered for the new family of models.
"The CCM as a tool for analyzing cultural processes"
Douglas L. Medin, Department of Psychology, Northwestern University
This talk will focus on applications of the cultural consensus model (CCM) to within- and across-culture comparisons. The CCM is not a theory of culture, but it is a very effective tool for cultural analysis. The talk describes within- and across-group comparisons of folk ecological models from ongoing studies in Guatemala (with Q'eqchi' Maya, Itza' Maya, and Ladinos) and in Wisconsin (with Native American and European American fishing experts).
"Analyzing Feature Binding in Episodic Memory with a Multinomial Model of Multidimensional Source Monitoring: Methodological Problems and Solutions"
Thorsten Meiser, Department of Psychology, University of Jena
The talk gives an overview of recent extensions and applications of a multinomial model for multidimensional source monitoring. The multidimensional source monitoring model specifies episodic memory and reconstructive guessing processes in old-new recognition judgments and in the attribution of old events to their source attributes on different dimensions of context information (Meiser & Bröder, 2002). It is shown that the multidimensional source monitoring model contains the model of partial source memory (Dodson, Holland, & Shimamura, 1998) and the memory model for four sources (Batchelder, Hu, & Riefer, 1994) as special cases in a model hierarchy, which affords tests of psychological hypotheses about the mental representation of multifaceted source information (e.g., crossed, nested, all-or-none). Earlier research has shown that source memory for multiple context features of an event is stochastically dependent if the event is consciously recollected, but not if the event is judged old on the basis of familiarity (Meiser & Bröder, 2002). This finding was interpreted as support for the assumed binding of multiple context features into conjunctive memory representations that are specifically related to the experience of conscious recollection. The interpretation is subject, however, to severe methodological caveats. First, overall source memory performance is much lower in the case of familiarity-based old judgments than in the case of conscious recollection, so that stochastic independence in source memory for familiar items may reflect a near-floor effect, rather than the absence of binding processes. This problem was tackled in a new experiment that equated source memory performance for consciously recollected and familiar items. Second, the context dimensions that were used in the earlier study were limited to the visuo-spatial modality.
To demonstrate cross-modal binding processes in episodic memory, we replicated the stochastic dependence in source memory for recollected events, and the stochastic independence in source memory for familiar events, with the combination of a visuo-spatial and an acoustic context dimension. Third, stochastic dependence in source memory may reflect interindividual differences in memory performance across participants, rather than binding processes at item level. To rule out this spurious correlation hypothesis, we collected data from a large sample of participants, created stratified subsamples on the basis of memory performance for two context dimensions, and applied the multidimensional source monitoring model to each subsample. Although there was no correlation between source memory for the two context dimensions within subsamples, the model-based analysis revealed stochastic dependence in source memory for consciously recollected events, but not for familiar events. This result confirms our interpretation of stochastic dependence on the level of items, that is, in terms of binding processes and conjunctive memory representations. Taken together, the research illustrates how the statistical modeling of cognitive processes in combination with the flexible use of experimental techniques may shed light on underlying memory and decision processes in episodic memory.
"Exploring Generation Effects in Source Monitoring: Applying MPT Models for Source Discrimination Across Multiple Source Dimensions"
David Riefer, Department of Psychology, California State University at San Bernardino
I will talk about a line of research dealing with two common memory phenomena: source monitoring and the generation effect. In source monitoring, people try to remember information coming from multiple sources (e.g., Source A vs. Source B). The generation effect is the observation that item memory is better for self-generated items than for externally generated items. This talk explores whether source memory is also better for self-generated vs. externally generated information. Prior research on this question is mixed, with some experiments finding a positive generation effect for source (better source memory for self-generated items) and others finding a negative generation effect for source (poorer source memory for self-generated items). However, a review of the literature reveals that positive generation effects for source tend to occur when self vs. external items are treated as the source (i.e., self = Source A and external = Source B). In contrast, negative generation effects for source tend to occur when other dimensions constitute the main source (e.g., red vs. green, Speaker 1 vs. Speaker 2). An experiment was conducted combining both of these elements. Participants memorized items that they either read or generated (generation as source) and that were printed in either red or green type (color as source). Later, their memory was tested for both of these source dimensions. Empirical results, using standard identification-of-origin scores, showed a negative generation effect when generation was the source, contrary to my hypothesis. However, prior research has also shown that strong response biases can influence empirical statistics in this situation. For that reason, I applied a version of an MPT model developed by Meiser and Bröder (2002) that is applicable to sources differing along two separate dimensions.
The model revealed that (a) strong response biases were in fact operating in the experiment, (b) a positive generation effect occurred when generation was the source, the opposite of what the empirical results showed, and (c) a negative generation effect occurred when color was the source. All of these results were as hypothesized, and they help explain the contradictory findings of earlier research. The model-based analysis nicely demonstrates the potential pitfalls of drawing conclusions from empirical statistics, especially when those measures reflect multiple cognitive processes.
"Constructing a Binary Processing Tree Through Selective Influence"
Richard Schweickert (with Shengbao Chen), Department of Psychology, Purdue
Consider an experiment with two factors, with probability of correct recall recorded for each combination of levels of the two factors. Necessary and sufficient conditions are given for the existence of a processing tree with two branches at each node, such that each of the two factors changes a probability associated with exactly one node in the tree. Patterns in the probability of correct recall distinguish the case where the selectively influenced nodes are together on a path from the root to a terminal node from the case when there is no such path. If the selectively influenced nodes are together on such a path, the order of the nodes can sometimes be discovered from the data. Finally, if the two factors selectively influence two nodes in an arbitrary binary processing tree, there exists an equivalent relatively simple tree.
"Cross-Cultural Comparisons Using the Cultural Consensus Model"
Susan Weller, Professor, Preventive Medicine & Community Health, University of Texas Medical Branch
An integral part of theory building involves an integration of theory, methods, and data. This presentation describes a series of data sets used to describe variation in beliefs about AIDS, diabetes, asthma, and two folk illnesses, empacho and the evil eye. Separate questionnaires were developed for each illness, with approximately 120-150 questions on the causes, symptoms, and treatments for each illness. Interviews were conducted with approximately 40 Latino individuals per illness at each of two US and two international sites (800 subjects in total, 160 subjects per illness). Hypotheses correctly predicted an ordering of the amount of variation across regions based upon how long an illness has been recognized and whether or not it is recognized by biomedicine. Comparison of responses across individuals suggests that although variation may be two to three times greater for the folk illnesses, the overall magnitude of differences across regions may be small. The cultural consensus model facilitates the description of cultural beliefs and their comparison across regions, but the model's behavior under real-world conditions combining (1) response bias, (2) very low or very high rates of "yes" in the true answer key, and (3) moderate to low competency levels is unknown. We would like to encourage research in these areas by sharing some of these data sets.
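To make the consensus logic concrete, here is a minimal sketch for true/false data with simulated responses and invented competence values; real analyses use the full model, whose behavior under response bias and extreme answer-key base rates the abstract flags as an open question. Because an unbiased pair of respondents agrees with probability .5 + .5·d_i·d_j, competencies can be recovered from the pairwise agreement matrix alone, without knowing the answer key.

```python
import numpy as np

rng = np.random.default_rng(3)
D = np.array([0.9, 0.8, 0.7, 0.6, 0.5])   # invented true competencies
n_items = 500
key = rng.integers(0, 2, n_items)

# Simulate true/false answers: know the answer with prob d, else guess .5/.5.
resp = np.array([np.where(rng.random(n_items) < d, key,
                          rng.integers(0, 2, n_items)) for d in D])

# Observed proportion of matching answers for each pair of respondents.
M = (resp[:, None, :] == resp[None, :, :]).mean(axis=2)

# With a .5 guessing rate, E[M_ij] = .5 + .5 * d_i * d_j, so the corrected
# matrix 2M - 1 has off-diagonal structure d_i * d_j.  Solve for each d_i
# via the triad identity d_i = sqrt((d_i d_j)(d_i d_k) / (d_j d_k)),
# averaged over triads (a deliberately crude stand-in for full model fitting).
Ms = 2 * M - 1
n = len(D)
d_hat = np.zeros(n)
for i in range(n):
    ests = []
    for j in range(n):
        for k in range(j + 1, n):
            if i != j and i != k and Ms[j, k] > 0:
                ests.append(np.sqrt(max(Ms[i, j] * Ms[i, k] / Ms[j, k], 0.0)))
    d_hat[i] = np.mean(ests)
```

The recovered d_hat values track the generating competencies, which is what licenses the across-region comparisons described above; how robust this recovery remains under the three real-world conditions listed is exactly the open research question the authors pose.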