
Amy Perfors

Title Associate Professor
E-mail amy.perfors~at~adelaide.edu.au
Phone +61 8 8313 5744
Office 5.08 Hughes Building

I'm interested in many different questions in higher-order cognition, from language to concept learning to decision making. My approach combines experiments with people (usually, but not always, adults) with computational models (usually, but not always, Bayesian). My general research questions all revolve around how the structure of data in the world, and people's assumptions about it, shape and are shaped by cognition. Much of my work takes place within a theoretical framework which suggests that human inference and reasoning can all be explained as a byproduct of reasoning about where the data came from and how it was generated.

My research program is constantly evolving. But here is an overview of some of the work I've done in each of my three main areas.

Language

My beginnings

I got my start as a scientist by studying language. My honours work investigated infant speech perception longitudinally from 12 to 24 months of age (Fernald, Perfors, & Marchman, 2006), and my master's thesis modelled the evolutionary emergence of ambiguity and meaning (Wasow, Perfors, & Beaver, 2005). The same thing that fascinated me then still does now: children learn language so well while adults struggle much more, despite adults' obvious cognitive advantages. How do we explain this? What biases do children bring to the language-learning problem? How does the structure of the linguistic data they see shape what they learn? To what extent do differences in these factors drive child and adult differences in learning?

PhD work

During my PhD I addressed these questions by building computational models that captured different aspects of language learning, from hierarchical phrase structure (Perfors, Tenenbaum, & Regier, 2011) to verb constructions in the absence of negative evidence (Perfors, Tenenbaum, & Wonnacott, 2010) to recursion (Perfors, Tenenbaum, Gibson, & Regier, 2010). My focus was on determining the bounds of the possible: what could a learner without capacity limitations and with Bayesian reasoning abilities learn from typical naturalistic data? I found that the answer depended not only on the data itself but also on the assumptions people made about it, like whether each sentence was generated independently or whether types or tokens were relevant for generalisation. In subsequent work I found that, as predicted, people do often presuppose that type-level information is the most relevant for generalisation (Perfors, Ransom, & Navarro, 2014).

Regularisation

One area I have studied since then involves regularisation: the tendency for children, but not adults, to produce consistent output even when their linguistic input varies inconsistently. One theory for why children regularise is that they have capacity limitations relative to adults (Hudson Kam & Newport, 2005). However, when I investigated this computationally, I found that no plausible set of assumptions about capacity limitations alone produced regularisation (Perfors, 2012). Rather, regularisation emerged only if the learner, whether implicitly or explicitly, also had a prior bias towards it. Influenced by some of my other work looking at the social assumptions people rely on when data is generated by other people — which all linguistic data is — I wondered whether such social reasoning might explain such a bias. Indeed, I found that people who thought the linguistic data came from an incompetent partner regularised more than people who thought the source was purposeful; regularisation also emerged more in cooperative than in competitive contexts (Perfors, 2016).

Linguistic and cultural evolution

The past decade has seen the rise of a modelling paradigm known as iterated learning, which captures evolution as a chain of repeated learning and transmission by a series of Bayesian agents. A foundational result shows that iterated learning chains converge to the priors of the learners in the chain (Griffiths & Kalish, 2007). This means that the structure of the world, the assumptions people make about how the data were generated, and even the data people see should have no effect on the ultimate structure of evolved languages. My work has shown that this result depends critically on mathematical assumptions that probably do not hold in many contexts, and that if you change these assumptions, the chains end up reflecting these other factors as well (Perfors & Navarro, 2014). In addition, we have shown that the presence of multiple learners per generation further distorts the nature of the language that emerges (Smith, Perfors, et al., in press), that the evolution of word order can be explained according to principles of information theory (Maurits, Perfors, & Navarro, 2010), and that a few learners with very strong priors can have an outsized effect on the resulting language (in prep). The latter result is especially interesting given that most language learners are very young children, who may have very different biases than adults. It also has deep implications for purely social contexts in which a few strongly biased individuals interact with many less-biased ones.
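
For intuition, here is a minimal sketch of the convergence result (my own toy construction for this page, not a model from any of the papers above): a chain of Bayesian agents each infer a single probability from the previous agent's output and sample from their posterior, and the chain ends up distributed according to the prior no matter where it starts.

```python
import numpy as np

rng = np.random.default_rng(0)

ALPHA, BETA = 2.0, 5.0    # the learners' shared Beta prior over theta
N = 10                    # utterances passed to each new learner

def iterate(generations=5000, burn_in=500):
    """One iterated-learning chain of Bayesian agents who sample."""
    theta = 0.9           # start far from the prior mean on purpose
    kept = []
    for g in range(generations):
        k = rng.binomial(N, theta)                  # data for the next agent
        theta = rng.beta(ALPHA + k, BETA + N - k)   # sample from posterior
        if g >= burn_in:
            kept.append(theta)
    return np.array(kept)

chain = iterate()
print(f"mean over the chain: {chain.mean():.3f}")
print(f"prior mean:          {ALPHA / (ALPHA + BETA):.3f}")  # ~0.286
```

If the agents instead maximised rather than sampled, or if several learners contributed data to each generation, this neat convergence would no longer be guaranteed, and it is precisely this kind of assumption that our work varies.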

Statistical and distributional learning

It is hard to study language without on some level being interested in statistical and distributional learning. The focus of my interest has been what assumptions and abilities are required for people to learn from different kinds of distributional data. My work shows that in cross-situational word learning, people acquire meanings more quickly if the words follow a Zipfian distribution (Hendrickson & Perfors, under review); this is surprising because previous mathematical results predicted the opposite (Blythe et al., 2016), and interesting because language is Zipfian on almost every level. I have also studied the ability of adults to learn novel phonemic categories from distributional information, finding that musicians are better at this kind of learning in both tonal and non-tonal languages (Perfors & Ong, 2012), and that this knowledge transfers rapidly to improve word learning involving those phonemes (Perfors & Dunbar, 2010).
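
To make the paradigm concrete, here is a small illustrative sketch (with made-up parameters, not the actual experiment) of a cross-situational trial generator in which word frequencies follow a Zipfian distribution, i.e., frequency proportional to 1/rank:

```python
import numpy as np

rng = np.random.default_rng(1)

n_words = 20
ranks = np.arange(1, n_words + 1)
zipf = (1 / ranks) / (1 / ranks).sum()     # P(word) proportional to 1/rank

def make_trial(n_distractors=3):
    """One cross-situational trial: a heard word plus candidate referents."""
    target = int(rng.choice(n_words, p=zipf))
    distractors = rng.choice(
        [w for w in range(n_words) if w != target],
        size=n_distractors, replace=False)
    return target, sorted({target, *(int(w) for w in distractors)})

word, scene = make_trial()
print(f"heard word {word}; candidate referents: {scene}")
```

The learner's problem is to work out, across many such trials, which referent each word maps to; the empirical question is how the Zipfian skew of the targets affects that process.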

Concepts and categories

My interest in concept and category learning stems naturally out of my interest in language, since word learning involves making generalisations about the extensions of categories based on labelled and unlabelled data. There is also a rich vein of computational models applied in this area, encompassing Bayesian and non-Bayesian approaches, and a trove of phenomena to understand.

Abstractions about category structure and labels

One of the first questions I investigated was the learning of higher-order abstractions about categories, which we called overhypotheses. We showed that this kind of abstraction can be naturally accounted for within a hierarchical Bayesian framework (Kemp, Perfors, & Tenenbaum, 2007; Perfors & Tenenbaum, 2009). Later work found that people can learn such overhypotheses in experimental settings as long as the experiments are simple enough and the exemplars are labelled (in prep). This work links naturally with oft-debated questions in the literature about the utility of labels in category learning. My work demonstrates that labels are most useful when the category structure is ambiguous (Vong, Perfors, & Navarro, 2016). I have also found that labels behave much like highly salient features (rather than explicit cues to category membership) in how they direct attention, which is relevant to a contentious debate about whether labels are “special” (Perfors & Navarro, 2010).

Social and pragmatic reasoning in inferences about concepts

One reason for believing that labels might be special relates to the fact that they are pragmatically complex — generated by people for the purpose of communication or teaching. A great deal of my recent work revolves around modelling and understanding the pragmatic and social factors involved in learning concepts, which can be naturally captured using the pedagogical modelling approach of Shafto & Goodman (2014). My research has shown that certain phenomena in category-based induction (e.g., premise non-monotonicity) are attributable to the assumptions people make about how the category examples are being generated (Voorspoels et al., 2015; Ransom, Perfors, & Navarro, in press). Related work builds on a previously unnoticed fact: two of the main models of categorisation (the GCM and a Bayesian model) make qualitatively opposing predictions about behaviour when the number of categories and exemplars is varied (Hendrickson, Navarro, & Perfors, under review). We are currently pursuing an explanation based on the assumptions people make about how the data were sampled (in prep). This modelling approach is also being used in ongoing research in my lab capturing reasoning in potentially deceptive situations (in prep), as well as how people combine evidence from multiple advisors (in prep).

Innateness and learnability

Another deep issue in the study of concepts and categories concerns how people learn about category features and what kinds of features (if any) are “primitives.” Coming from a background in language, I am especially interested in issues of innateness, and have argued in the philosophical literature that Fodor’s problem of innateness is conceptually incoherent in a way that becomes clear when one tries to instantiate it with the precision of the Bayesian framework (Perfors, 2012). Another project that tries to get at issues of innateness investigates the curse of dimensionality: because any given concept has a large number of features that might be associated with it, the number of possible feature combinations a learner must evaluate increases exponentially with the number of features. My research shows that there is no learning problem as long as categories follow a family resemblance structure, but there is one if they are rule-based; moreover, human performance in both situations can be naturally accounted for by a single unified model with capacity limitations in memory and decision making (Vong et al., 2016; in prep).
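
A back-of-the-envelope version of the counting argument (my numbers and my construal, purely for exposition): with d binary features, one simple way to count the candidate conjunctive rules is to note that each feature can be required-present, required-absent, or irrelevant, so the rule space grows exponentially, whereas a family-resemblance learner can get by with per-feature statistics that grow only linearly.

```python
# d binary features: 3**d candidate conjunctive rules (each feature is
# required-present, required-absent, or irrelevant), versus 2*d per-feature
# tallies (feature counts inside and outside the category) for a
# family-resemblance learner.
for d in (5, 10, 20, 50):
    print(f"d={d:2d}: {3**d:,} candidate rules vs {2*d} feature tallies")
```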

Decision making

Although I didn’t start off with a focus on decision making, in recent years these issues have made up an increasing proportion of my research. My interest grew from two things: first, as a Bayesian modeller, I became interested in the process of hypothesis testing and generation, which led naturally into considerations of how people make decisions about what information to search for or use. Second, as I learned more about how the assumptions people make about how data were generated guide learning in other contexts, I became increasingly aware that such assumptions were likely to drive decision-making behaviour as well: in particular, such assumptions can result in choices that might appear irrational in normative terms but look rational when the assumptions are considered.

Hypothesis generation and testing

My work in hypothesis generation and testing grew out of a theoretical paper in which we proved mathematically that performing positive tests (similar to confirmation bias, i.e., asking for data that is consistent with your hypothesis) provides much more information than negative tests as long as categories are sparse: that is, as long as most items are not members of any given category (Navarro & Perfors, 2011). Moreover, real-world categories tend to have this sparse structure because sparse categories are more coherent. This work made a clear prediction: if category sparsity drives the use of positive tests, then people should be less likely to use them in the rare circumstances where categories are not sparse. Subsequent research bore this prediction out (Hendrickson, Navarro, & Perfors, 2016; Langsford, Hendrickson, Perfors, & Navarro, 2014); indeed, people are sensitive on a fine-grained level to the precise information gain of different tests (Hendrickson, Navarro, & Perfors, 2014).
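
A toy version of the sparsity argument (a simplified construction for illustration, not the derivation in the paper): because the answer to a membership query is fully determined by the hypothesis, the expected information gain of querying an item equals the binary entropy of its predicted membership probability; under sparsity, items inside your working hypothesis are the ones with probabilities nearest 0.5.

```python
import itertools, math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def test_values(M, k):
    """Expected info gain of a positive vs a negative test.

    Hypotheses are all size-k subsets of M items; the learner favours one
    working hypothesis h0 (prior mass 0.5), with the rest uniform. Since
    the answer to "is x in the category?" is determined by the hypothesis,
    the gain from querying x is the entropy of P(x is a member).
    """
    h0 = frozenset(range(k))
    hyps = [frozenset(h) for h in itertools.combinations(range(M), k)]
    prior = {h: 0.5 if h == h0 else 0.5 / (len(hyps) - 1) for h in hyps}
    p_member = lambda x: sum(p for h, p in prior.items() if x in h)
    return h2(p_member(0)), h2(p_member(M - 1))  # item inside vs outside h0

for k in (2, 4, 6):   # sparse, balanced, and dense categories over M = 8
    pos, neg = test_values(8, k)
    print(f"k={k}: positive test {pos:.2f} bits, negative test {neg:.2f} bits")
```

In this toy setup the positive-test advantage holds when categories are sparse (small k), vanishes when they cover half the items, and reverses when they are dense, mirroring the prediction tested in the follow-up experiments.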

Social reasoning in decision making

This research suggests that people’s reliance on positive tests can be a sensible thing to do, despite decades of research suggesting that it is a hallmark of human irrationality. Some of my other work suggests that other classic “reasoning biases” may be explainable in similar ways. For instance, in the famous Monty Hall Problem a game show host asks contestants to choose among three doors to find a prize. It is used as a standard example of people’s inability to use conditional probability information, but our research demonstrates that the underlying mathematics change if it is viewed as a social inference problem about the motivations of the host, and that people are sensitive to such changes (in prep). Another famous bias is ambiguity aversion, in which people avoid ambiguous options. We have found that people often avoid ambiguity because they are suspicious of the experimenter’s motivation for providing that option, and thus assume it is not likely to favour them: when the motivation is clarified, the aversion disappears (in prep).
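
To see how the mathematics change with the host’s motives, here is an exact calculation (a standard construction; the model in our paper may differ in its details) comparing a host who knowingly reveals a goat with one who opens a random unchosen door that merely happens to hide a goat:

```python
from fractions import Fraction

def p_switch_wins(host):
    """P(prize is behind the other unopened door | a goat was revealed)."""
    win = total = Fraction(0)
    pick = 0                                   # wlog the player picks door 0
    for prize in range(3):                     # prize placement is uniform
        for opened in (1, 2):                  # host never opens the pick
            if host == "knowing":
                # the host deliberately opens a non-picked goat door
                options = [d for d in (1, 2) if d != prize]
                if opened not in options:
                    continue
                p = Fraction(1, 3) * Fraction(1, len(options))
            else:                              # "random" host: either door
                if opened == prize:
                    continue                   # condition on seeing a goat
                p = Fraction(1, 3) * Fraction(1, 2)
            total += p
            if prize == 3 - pick - opened:     # prize behind the other door
                win += p
    return win / total

print("knowing host:", p_switch_wins("knowing"))   # 2/3 -> always switch
print("random host: ", p_switch_wins("random"))    # 1/2 -> no advantage
```

Under the knowing host, switching wins 2/3 of the time; under the random host, switching confers no advantage, so a reasoner’s beliefs about the host rationally matter.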

Medical decision making

A final strand of research in this area is new but very exciting. It grew out of a collaboration with researchers in medicine who are concerned about the phenomenon of vaccine hesitancy: the tendency for people to avoid vaccinating their children for fear that vaccines cause autism. In seeking to characterise people’s vaccination decisions in decision-theoretic terms, we have discovered that people are reasoning perfectly rationally (i.e., in accordance with expected utility theory) given the utilities and probabilities they actually hold. Unfortunately, those utilities and probabilities are arguably wrong. The main problem is not (as one might expect) an incorrect estimate of the probability that vaccines cause autism: that estimate is slightly off, but not by much (about 0.2% rather than 0%). The more profound problem is that people radically underestimate the negative utility of getting a disease like whooping cough and radically overestimate the negative utility of autism: autism is rated as over 10,000 times worse than whooping cough. This suggests that a far more effective intervention than trying in vain to persuade people that autism is not caused by vaccination would be to teach them how bad diseases like whooping cough or diphtheria are. This work is still in preparation and we are currently designing intervention studies to test this prediction.
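
The decision-theoretic logic can be laid out in a few lines. In the sketch below, the 0.2% probability and the roughly 10,000-to-1 utility ratio come from the findings described above; the disease risk and the utility scale are hypothetical placeholders chosen just to make the arithmetic concrete:

```python
# Hypothetical worked example of the expected-utility framing. Only the
# 0.2% estimate and the ~10,000x utility ratio come from our data; the
# disease risk and the utility scale are made-up placeholders.
p_autism_if_vaccinated = 0.002      # parents' (slightly inflated) estimate
p_disease_if_declined = 0.01        # hypothetical whooping cough risk
u_whooping_cough = -1.0             # reference utility for the disease
u_autism = -10_000.0                # rated ~10,000 times worse

eu_vaccinate = p_autism_if_vaccinated * u_autism       # = -20.0
eu_decline = p_disease_if_declined * u_whooping_cough  # = -0.01

print(f"EU(vaccinate) = {eu_vaccinate}")
print(f"EU(decline)   = {eu_decline}")
# Under these beliefs, declining maximises expected utility. The probability
# is only slightly off, but the utility ratio is off by orders of magnitude,
# which is why correcting beliefs about disease severity looks like the more
# promising lever.
```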

This section of my page contains a list of publications, with links to the full versions of the papers where possible. Because I host many of the files myself, I've taken a lot of time to check all the copyright transfer forms and author rights policies for the various journals, in order to ensure that the site complies with copyright law. There is an extensive copyright page that documents the process I have followed in each case. If you believe that one or more of the documents posted here is in violation of copyright, please contact me and I will investigate the matter as soon as possible.

Submitted

  • A Hendrickson and A Perfors (submitted). Cross-situational learning in a Zipfian environment. Manuscript submitted for publication
  • A Hendrickson, A Perfors and DJ Navarro (submitted). Categorization and generalization: A curious discrepancy with increasing sample size. Manuscript submitted for publication
  • L Kennedy, DJ Navarro, A Perfors and N Briggs (submitted). On the use of clinical scales in non-clinical populations: A discussion of the analysis of skewed responses. Manuscript submitted for publication
  • L Kennedy, DJ Navarro, A Perfors and N Briggs (submitted). Not every credible interval is credible: On the importance of robust methods in Bayesian data analysis. Manuscript submitted for publication
  • A Perfors, DJ Navarro, C Donkin and T Benders (submitted). Poor statistical inference or good social reasoning? On the pragmatics of the Monty Hall dilemma. Manuscript submitted for publication

2005

  • A Perfors, C Kemp and JB Tenenbaum (2005). Modeling the acquisition of domain structure and feature understanding. In B Bara, L Barsalou and M Bucciarelli (Eds.) Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 1720-1725)
  • T Wasow, A Perfors and D Beaver (2005). The Puzzle of Ambiguity. In O Orgun and P Sells (Eds.) Morphology and the Web of Grammar

2004

  • C Kemp, A Perfors and J Tenenbaum (2004). Learning domain structures. In K Forbus, D Gentner and T Regier (Eds.) Proceedings of the 26th Annual Conference of the Cognitive Science Society (pp. 672-677)

2002

  • A Perfors (2002). Simulated Evolution of Language: A Review of the Field. Journal of Artificial Societies and Social Simulation, 5(2) (published version)

1999

  • A Perfors (1999). Slow and steady doesn't win the race: The relation between infant information processing skills and language comprehension. Honours Thesis. Stanford University Symbolic Systems Department. Stanford, CA

These are some of the highlights of my publications since 2010 (not an exhaustive list).

Perfors, A., and Navarro, D.J. (2014) Language evolution can be shaped by the structure of the world. Cognitive Science. [paper][journal]

This work makes me happy. I've long been interested in issues in language evolution, dating back to my master's thesis work, but it's a hard area to do solid work in because it is by nature so speculative and so hard to tie to empirical phenomena. This paper builds on seminal work in the literature, centered on characterizing the nature of languages that evolve under certain conditions. Our conclusion, based on mathematical modeling and some experimental work, is that the structure of a language can be shaped by the structure of the events or things to be spoken about in the world. It has implications for understanding both how languages might differ (or not) across cultures, and also for understanding how the biases in our mind interact with the nature of the world we live in. I like it mainly because prior to this work most of the focus (on the modeling side of things, at least) was on how people's biases affect the structure of language; this research offers an explanation for how the structure of the world might have an important impact as well (it also ties in to existing empirical work that suggests that languages are shaped by social structure and other aspects of the environment.)

Perfors, A. (2012) When do memory limitations lead to regularization? An experimental and computational investigation. Journal of Memory and Language 67: 486-506. [paper] [journal]

I like this because it's a good example of empirical results leading the way, and of how experimental and computational work can mutually enlighten each other. I originally began this area of research thinking it would be a small side project that would lend a bit of support for the theory that memory limitations lead to regularization, at least in many circumstances. But try as I might, none of the experiments I ran in which people were placed under memory load resulted in any sort of regularization. These results were what inspired me to think more deeply about the whole issue, and to try to model what might be going on. I am now much more interested in the topic of regularization than I was when I started this line of research, because there are still a lot of open questions (many of which I talk about in the discussion, and which I'm currently trying to follow up on).

Perfors, A. (2012) Bayesian models of cognition: What's built in after all? Philosophy Compass 7(2): 127-138. [journal]

My fondness for this paper is probably out of proportion to its actual value, but I really like it for two reasons: (a) it's my only impact on the philosophy literature to date, and I've always had an interest in the philosophical issues surrounding the mind and the nature of thought; and (b) it captures and discusses a number of ideas I had been puzzling over for years. In a nutshell, it describes my argument for why many of the Fodorian-type "it's all built in" paradoxes aren't actually very problematic. It's framed within a Bayesian context, primarily because that is the framework that shaped my thinking, but I think the argument is a quite general one. As with many philosophical arguments I don't think it's conclusive, but it's certainly done a lot to make me feel better about the assumptions underlying my work as a cognitive scientist and a computational modeller; I don't think I'm just building everything important in, which was originally one of my worries.

Perfors, A., Tenenbaum, J., Griffiths, T.L., and Xu, F. (2011) A tutorial introduction to Bayesian models of cognitive development. Cognition 120: 302-321. [paper] [journal]

I like this mainly because I think it fills a real gap in the literature. In the past decade or so, Bayesian models have become increasingly useful, but one of the biggest practical problems with them is that it requires a lot of training to be able to implement them, and a fair amount of background simply to have a solid intuition for how they work and why. Although there are many solid resources for constructing and understanding Bayesian models, most presume a higher level of mathematical and computational fluency than many people have to begin with. I wrote this with the goal of providing a bridge for people without much of this background to at least be able to understand the qualitative intuitions behind Bayesian models, as well as why these intuitions emerge from the basic math. It also contains links to some of the more technical references and some of the main models that exist (as of when it was published, at least), so that people can use it as a stepping stone to greater technical understanding.

Perfors, A., Tenenbaum, J.B., and Regier, T. (2011) The learnability of abstract syntactic principles. Cognition 118(3): 306-338. [paper] [journal]

This was another small project that turned into a bigger deal than I originally expected, but I like it because it really engages with many of the important questions at the nexus of cognitive science and linguistics, and because it offers a distinctly different approach. These questions include: what are our linguistic representations? How do we know that? Can abstract knowledge about these representational principles be learned? To be honest, parts of this work frustrate me as well, because the question we ended up asking was bigger than I feel we could fully answer given the limits of current technology (and what is possible to do within one project). So at most, it's a stab in the right direction. But I think the direction is an interesting one, and this work makes an important contribution by changing the nature of an ongoing debate in the literature to one that is, I think, both more interesting and more scientifically tractable. It takes seriously the possibility that language might actually have a certain kind of structure (i.e., hierarchical phrase structure), and that this knowledge could itself be learned.

Fernald, A., Perfors, A., and Marchman, V. (2006) Picking up speed in understanding: Speech processing efficiency and vocabulary growth across the 2nd year. Developmental Psychology 42(1): 98-116. [paper]

I am pleased with this work in part for personal reasons: it's the first major scientific paper I contributed to, since it's based on my honors thesis. It also is one of the first examples in the infant literature that longitudinally explores the relationship between speech processing and other aspects of language acquisition and development. We found that this relationship was much stronger than we expected, which has interesting implications both methodologically (for measuring how "well" some aspect of language is known) as well as theoretically (for what it implies about what drives speech processing efficiency as well as what it implies about the representation of linguistic knowledge). This work helped to shape many of my interests in the relationship between language learning and general cognition, and also represents a kind of research that I sometimes wish I could go back to. I have occasional dreams of someday returning to do pure empirical research on infants again (without stopping the computational and experimental work I currently do), once I miraculously find the hours in the day and the money to develop and maintain an infant lab. In the meantime, I look back on this with fondness.

This section of my page is fairly abbreviated, but for now, here's a list of classes that I have taught over the last few years.

Cognitive Modelling

  • Computational Cognitive Science III, 2010-2014

Cognitive Science

  • Foundations of Perception & Cognition II, 2008-present (Language)
  • Perception & Cognition III, 2009-2014 (Language)
  • Introduction to Psychology I (Learning), 2008-2010
  • Learning & Behaviour III (Language evolution), 2011

Statistics

  • Doing Research in Psychology, 2016-present
  • Statistics and Critical Issues in Psychology (Honours), 2009-2011
  • Doing Research in Psychology: Advanced (Bayesian statistics), 2010-2011

Currently

I am an Associate Professor in the University of Adelaide Department of Psychology. My main responsibilities are research and teaching. I am also convenor of the School's Research Committee and School representative on the Faculty Research Committee.

This is the long, unedited version of my CV; it has basically everything in it.

Career

  • University of Adelaide School of Psychology, 2008 - present
    Hired as lecturer (Assistant Professor equivalent); currently Associate Professor
  • Ph.D., MIT Department of Brain and Cognitive Sciences (BCS), 2003-2008
    Worked with Josh Tenenbaum
    I also spent about 6 months working in the infant lab of Fei Xu, UBC (Vancouver, Canada)
    Thesis title: Learnability, representation, and language: A Bayesian approach. [pdf]
  • Santa Fe Institute, Complex Systems Summer School 2002
    Coursework in the mathematics of nonlinear dynamical systems and applications of complexity theory to economic, social, and biological systems. Included independent research work.
  • M.A., Stanford University Department of Linguistics
    Thesis title: Simulated evolution of language: The emergence of meaning [pdf]
    Advisor: David Beaver
  • B.S., Stanford University (1999)
    Major: Symbolic Systems (with distinction, with honors); Minor: Physics
    Thesis title: Slow and steady doesn't win the race: The relation between infant information processing skills and language comprehension
    Advisor: Anne Fernald
  • Montrose High School (Montrose, Colorado), 1991-1995
    Class valedictorian, National Merit Scholar

Grants

  • 2015-2017: ARC Discovery Project: Learning from others: Inductive reasoning based on human-generated data. $301,300.
  • 2016: University of Adelaide Small Grant Scheme: Decision-making in a high-risk, uncertain scenario: The case of vaccination. $20,000
  • 2012-2014: Discovery Early Career Researcher Award (DECRA): What shapes the structure of language? An experimental and computational investigation. $375,000.
  • 2011-2014: ARC Discovery Project: How are beliefs altered by data? Robust Bayesian models for human inductive learning. $454,995. With D. Navarro and J. Tenenbaum.
  • 2008: University of Adelaide Establishment Grant. $10,000.
  • 2004-2008: NDSEG and NSF graduate fellowships. Each worth full tuition plus $30,000/year.
  • Plus several other small grants.

Publications

Full list available here.

Presentations and invited talks not already listed in publications

  • 2016: Who said that, and why? How assumptions about socially-generated data drive human learning. Rational Inferences Workshop, CCD Developing Mind Series. Macquarie University.
  • 2014: Levels of representation. NeuroCog Collective. Coffs Harbour NSW.
  • 2014: On the informational value of negative evidence. Stanford Workshop on Gradience in Grammar. Stanford University.
  • 2012: Acquisition of linguistic structure and regularity: What can the models tell us? Mayfest conference on the role of computational models in linguistic theory. University of Maryland.
  • 2011: Language acquisition, representation, and use: What can we learn from computational and experimental evidence? Harvard-Australia Workshop on Language, Learning, and Logic. Sydney, Australia.
  • 2011: Comparing adult and child learners: The case of over-regularisation. Stanford University Computational Language Group. Stanford, CA, USA.
  • 2011: Language evolution is shaped by the structure of the world. University of Edinburgh Language Evolution and Computation Group Talk. Edinburgh, UK.
  • 2011: Comparing adult and child learners: The case of over-regularisation. University of Western Australia Psychology Colloquium. Perth, Australia.
  • 2010: What shapes the structure of language? Exploring the role of world structure. Macquarie Centre for Cognitive Science. Sydney, Australia.
  • 2010: For better or for worse? Exploring the source of differences between adult and child language acquisition. Macquarie Centre for Cognitive Science. Sydney, Australia.
  • 2010: New computational approaches to word learning. Society for Mathematical Psychology Conference. Portland, OR, USA.
  • 2010: Levels of explanation and the workings of science. Experimental Psychology Conference. Melbourne, Australia.
  • 2010: Bayesian rule learning: The role of targeted negative evidence. Australian Mathematical Psychology Conference. Margaret River, Australia.
  • 2009: Language learnability, computational modelling, and the innateness problem. Australian Mathematical Society Conference. Adelaide, Australia.
  • 2009: Confirmation bias is rational when hypotheses are sparse. 42nd Annual Meeting of the Society for Mathematical Psychology. Amsterdam, Holland.
  • 2009: Learning to learn, simplicity, and sources of bias in language learning. University of Rochester Department Seminar. Rochester, NY, USA.
  • 2009: What's innate, and how much input is enough? Probabilistic Models of Cognitive Development Workshop. Banff, Canada.
  • 2009: Learnability and learning in language. University of Adelaide Department Seminar. Adelaide, Australia.
  • 2009: The role of labels in categorisation. Australian Experimental Psychology Conference. Wollongong, Australia.
  • 2009: Learning to learn categories. Australian Mathematical Psychology Conference. Newcastle, Australia.
  • 2008: A little something about language acquisition. Flinders University Department Seminar. Adelaide, Australia.
  • 2008: Learnability in language acquisition (and how Bayesian and other modelling might help). Berkeley Workshop on Connectionist and Probabilistic Models of Cognition. Berkeley, CA, USA.
  • 2008: Learnability and negative evidence: A Bayesian exploration with CHILDES. 11th Conference of the International Association for the Study of Child Language. Edinburgh, UK.
  • 2008: Word learning: Bayes, labels, and inductive constraints. Workshop on New Directions in Word Learning. York, UK.
  • 2007: Indirect evidence and the poverty of the stimulus. Proceedings of the 29th Annual Conference of the Cognitive Science Society. Nashville, TN, USA.
  • 2007: Representation and learnability: A rational approach. 40th Annual Meeting of the Society for Mathematical Psychology. Irvine, CA, USA.
  • 2007: A Bayesian approach to the poverty of the stimulus. Machine Learning and Cognitive Science of Language Acquisition Workshop. University College London.
  • 2007: Hierarchical phrase structure and recursion: A Bayesian exploration of learnability. Recursion in Human Languages Conference. Normal, IL, USA.
  • 2007: Learning inductive constraints. Symposium at Biennial Meeting for Society for Research in Child Development (SRCD); Boston, MA, USA.
  • 2006: Poverty of the Stimulus? A rational approach. University of British Columbia Department Seminar. Vancouver, Canada.
  • 2006: Poverty of the Stimulus? A rational approach. Stanford University (CSLI) Department Seminar. Stanford, CA, USA.
  • 2006: Poverty of the Stimulus? A rational approach. Harvard University Department Seminar. Cambridge, MA, USA.
  • 2006: Poverty of the Stimulus? A Rational Approach. 32nd Annual Meeting of the Society for Philosophy and Psychology. St. Louis, Missouri, USA.
  • 2002: Why Does Ambiguity Exist? Second Annual Semantics Fest, Stanford, CA, USA.

Teaching/Education experience

  • Course coordinator
    2016-present: Doing Research in Psychology (2nd year introductory statistics course; ~300 students).
    2010-2014: Computational Cognitive Science (3rd year computer science course; ~25 students). With Dan Navarro.
    2010-2011: Foundations of Perception and Cognition (2nd year required psychology course; ~300 students).
    2009-2011: Perception and Cognition (3rd year psychology course; ~150 students).
  • Lecturer within a course
    2016-present: Doing Research in Psychology (2nd year introductory statistics course; ~300 students). Introduction to research methods, statistical theory, and basic frequentist statistical tests (e.g., t-test, ANOVA).
    2008-present: Foundations of Perception and Cognition (2nd year required psychology course; ~300 students). Lectures giving overview of language.
    2009-2014: Perception and Cognition (3rd year psychology course; ~150 students). Lectures on language acquisition.
    2010-2014: Computational Cognitive Science (3rd year computer science course; ~25 students). Lectures on computational modeling of cognition, with a focus on language and categorisation
    2011: Learning and Behaviour (3rd year psychology course; ~200 students). Lectures on language evolution.
    2010-2011: Doing Research in Psychology: Advanced (3rd year required psychology course; ~200 students). Lectures on Bayesian data analysis.
    2009-2011: Statistics and critical issues (Honours; ~50 students). Lectures on problems of induction and Bayesian statistics.
    2008-2010: Introduction to Psychology (1st year required psychology course; ~600 students). Lectures on learning and concepts.
  • IPAM Graduate Summer School: Probabilistic Models of Cognition, 2007
    Lecture: Grammar induction in language; and several tutorials
  • Graduate student teaching assistant at MIT
    2007: 9.012, Cognitive Science (the graduate core class in cognitive science); responsibilities included guest lecturing, coordinating among the professors, organizing, and grading.
    2005: 9.66, Computational Cognitive Science (a graduate-level class); responsibilities included guest lecturing, designing problem sets, grading, and running sections.
    2005: Assisted with the TA training for new TAs university-wide, as well as a special three-day program for BCS students in particular
    2005-2008: Guest lecturer for 9.98, Language and Mind; 9.59/24.905, Psycholinguistics; 9.63, Lab in Cognitive Science
    2004: 9.00, Introduction to Psychology.
  • Teaching assistant at Stanford University
    2002-2003: Head course assistant and curriculum designer, Human Biology Core course (year long).
    1999-2000: Course assistant, Human Biology Core course (year-long).
  • Peace Corps (Mozambique), 2000 to 2001: Secondary School Biology and English Teacher
  • ACE Computer Camp
    MIT, Summer 1997: Academic director
    Stanford, Summer 1996: Computer teacher

Supervisory experience

I list only students for whom I am or was a principal supervisor, or one of the principal supervisors -- those to whose supervision I make or made a routine, non-trivial contribution.
  • Postdoctoral associates
    Andrew Hendrickson (current)
    Simon De Deyne (now an independent Research Fellow in this lab)
    Sean Tauber (now postdoctoral associate at UNSW)
    Wouter Voorspoels (now postdoctoral associate at University of Leuven)
  • PhD students
    Lauren Kennedy (2014-present)
    Steven Langsford (2013-present)
    Keith Ransom (2014-present)
    Wai Keen Vong (2013-present)
    Dinis Gökaydin (2010-2015)
    Rachel Stephens (2008-2012)
    Luke Maurits (2008-2011)
  • Honours students
    Zhe Khor (2014), 1st class
    Hazel Craig (2013), 1st class
    Lauren Kennedy (2013), 1st class
    Angela Vause (2012), 2nd class level 1
    Wai Keen Vong (2012), 1st class
    Erica Behrens (2011), 2nd class level 1
    Tin Yim Chuk (2011), 1st class
    Natalie May (2011), 1st class
    Jia Ong (2011), 1st class
    Alexandra Christopher (2010), 1st class
    Pamela Lee (2010), 1st class
    Nicholas Colebatch (2009), 2nd class level 1
    Melissa de Vel (2009), 1st class
    David Dunbar (2009), 1st class
    Xin Wei Sim (2009), 2nd class level 1
  • External thesis examination
    Gabriel Tillman, University of Newcastle (2016)
    Pragati Vasuki, Macquarie University (2016)
    Vanessa Ferdinand, University of Edinburgh (2014)
    Ben Borschinger, Macquarie University (2014)
    Magdalena Dimitru, Macquarie University (2010)

Administrative experience

  • Action editor at Cognitive Science; associate editor at Journal of Language Evolution; editorial board member at Cognition and Open Mind. All 2015-present.
  • Served on the program committee for the Conference of the Cognitive Science Society, 2016-present
  • Representative on the Faculty Research Committee, 2016-present
  • Convenor of the University of Adelaide School of Psychology Research Committee, 2015-present
  • Member of University of Adelaide School of Psychology Research Committee, 2010-2013
  • Member of University of Adelaide School of Psychology Infrastructure Support Committee, 2010-2011
  • Organiser of University of Adelaide School of Psychology seminar series, 2009-2011
  • Member of University of Adelaide School of Psychology OH&S committee, 2009-2010
  • Ad hoc reviewer for many, many journals
  • Ad-hoc reviewer for grant agencies including the ARC, NSF and EPSRC.
  • MIT BCS Department, 2005 to 2006: Graduate student member on the faculty search committee
  • Member of the authors' committee on the Harvard University IQSS Social Science Statistics Blog. Named head of the authors' committee in January 2006 (one-year position).
  • MIT BCS Department, 2004 to 2005: Graduate student mentor with the Undergraduate Opportunities Program
  • MIT BCS Department, 2004 to 2005: Graduate student representative.
  • Stanford, 2002 to 2003: Freshman advisor for five Stanford undergraduates
  • Stanford Symbolic Systems Department, 1998 to 1999: Advising fellow for students in the Symbolic Systems major
  • Stanford Admissions Office, 1996: Assistant to the President's Scholars Program

Additional Research Experience

Random other jobs that were just kinda interesting

  • Bachelor/Syracuse Mine, Summer 1999: Line Cook and Gold Panner (Ouray, Colorado)
  • Tire House Builder, Summer 1999: Manual labor (Ridgway, Colorado)
  • Stanford Dining Services, 1995 to 1997: Food service worker
  • Bray and Company Real Estate, 1993 to 1995: Desktop publishing and advertisement design (Montrose, Colorado)
  • Howard Hughes High School Scholars Program, Summer 1994: Biomedical research assistant, Colorado State University
  • Math & Science Upward Bound Program, Summer 1993: Trinidad State Junior College

Other interests

  • Mother of two kids, one born in 2012 and one in 2015
  • Member (2008-2009) of Adelaide's Old Collegians' Women's Rugby Team
  • Captain (2005) and member (2003-2006) of MIT Women's Rugby Club
  • Licensed Emergency Medical Technician (2004)
  • Co-captain of San Francisco Rugby Club, Women's Side (2000, 2002)
  • Select side player for PCNRFU, Spring 2003
  • Member of Stanford Women's Rugby National Championship (1999) and National Championship Runner-Up (1998) Teams