I am so excited to announce that lab alum Amy Finn will be an Assistant Professor in the Department of Psychology at the University of Toronto starting next fall (2015).
I’m happy to announce a new paper just out online in JEP:LMC entitled “Why Segmentation Matters: Experience-Driven Segmentation Errors Impair ‘Morpheme’ Learning”, by Amy Finn and Carla Hudson Kam.* (Warning: it’s not open access.)
Here’s the abstract: “We ask whether an adult learner’s knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners’ ability to segment words into their component morphemes and learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner’s native language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner’s native-language knowledge can therefore have a cascading impact affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose a great difficulty for adult language learners.”
(Finn, A. S., & Hudson Kam, C. L. (2015, March 2). Why Segmentation Matters: Experience-Driven Segmentation Errors Impair “Morpheme” Learning. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication. http://dx.doi.org/10.1037/xlm0000114)
*2015 is getting off to a pretty great start, and everyone in the lab is happy about that, but it’s worth keeping in mind that both papers out so far this year are work that’s been in progress for a very long time.
I’m happy to announce a new paper “Children’s Use of Gesture in Ambiguous Pronoun Interpretation” just out in the Journal of Child Language by Whitney Goodrich Smith and Carla Hudson Kam. FYI: It’s published as an open access paper.
Here’s the abstract:
“This study explores whether children can use gesture to inform their interpretation of ambiguous pronouns. Specifically, we ask whether four- to eight-year-old English-speaking children are sensitive to information contained in co-referential localizing gestures in video narrations. The data show that the older (7–8 years of age) but not younger (4–5 years) children integrate co-referential gestures into their interpretation of pronouns. This is the same age at which they show sensitivity to order-of-mention, the only other cue available in the stimuli. Interestingly, when children show sensitivity to the gestures, they are quite similar to adults, in that gestures consistent with order-of-mention increase first-mentioned responses as compared to stimuli with no gestures, but only slightly, while gestures inconsistent with order-of-mention have a larger effect on interpretation, decreasing first-mentioned responses and increasing second-mentioned responses.”
I am so happy to be able to announce the publication of “When It Hurts (and Helps) to Try: The Role of Effort in Language Learning”, just out in PLOS ONE. This is another piece by former student Amy Finn, PhD, currently a postdoc at MIT.
Here’s the abstract:
Compared to children, adults are bad at learning language. This is counterintuitive; adults outperform children on most measures of cognition, especially those that involve effort (which continue to mature into early adulthood). The present study asks whether these mature effortful abilities interfere with language learning in adults and further, whether interference occurs equally for aspects of language that adults are good (word-segmentation) versus bad (grammar) at learning. Learners were exposed to an artificial language comprised of statistically defined words that belong to phonologically defined categories (grammar). Exposure occurred under passive or effortful conditions. Passive learners were told to listen while effortful learners were instructed to try to 1) learn the words, 2) learn the categories, or 3) learn the category-order. Effortful learners showed an advantage for learning words while passive learners showed an advantage for learning the categories. Effort can therefore hurt the learning of categories.
Thanks to PLOS ONE for a great experience. Thanks also to Michael Ramscar, who gave very thoughtful and helpful commentary along the way. (He served as a reviewer, more than once. And no, it’s not a case of conflict of interest. He signs his reviews.)
Open Access publication.
Last week, I had the pleasure of giving a short talk to people in the Faculty of Education here at UBC as part of their Research Week 2014. I was part of a panel of Canada Research Chairs invited to speak to professors and students about my research. In particular, we were asked to focus on what our research had to say that was of relevance for teacher training and practice. It was a really interesting exercise for me, thinking about my work from a more practical perspective. I only had 15 minutes, so it had to be somewhat direct, and without much of the detail I would usually include in a talk. I decided to give them a whirlwind tour of the work I’ve been involved in, and linked studies together in rather non-traditional ways. Instead of linking work by specific research questions, I linked it by messages that were relevant to education/educators.
The title of the talk was “The Role and Function of Input in Language Acquisition”. It was not really a description of what I was going to tell them, but rather of how I see all of my work being linked. It is the tie that binds all of my work together. I know that the various projects I have worked on and continue to work on might seem scattered to an outside observer. To my mind, that is largely due to the way we write papers – self-contained units that outline a small question or issue. My work is linked by the theme that was the title of my talk. I then broke it down into three sections, each of which contained a message for the educators. But really, these are things that I have learned, ways my thinking has changed, that I thought were worth sharing.
Each section had a question, and a take home message attached to it. The first set of findings I talked about were bound together by the question How do children (and adults) learn language? I get at this question by examining the relationship between input and output, with the idea that understanding the function from one to the other tells us something about the mechanisms, or the ‘how’. Here I talked about the work on the learning of variation, as well as a lot of work conducted by Amy Finn. The take home message here was that children are not just little adults, or adults missing some cognitive ‘piece’ or ability. While it may make sense to discuss them that way, if we want to understand learning, we have to understand that even in a system that is ‘missing’ some ability, the system will still function, just differently, and that that difference will not necessarily be a simple mapping to the abilities that the system does have available. Think of it more like a recipe with a replacement ingredient than a puzzle with a missing piece.
The second body of work (or bodies) was introduced by the question How do children learn about language? It is work directed at understanding the nature of the input itself, and at expanding how we might think about input. In the field of language acquisition we often think of input just in terms of the speech the child hears. But learning takes place in a rich communicative environment, and we need to consider that more carefully. Here, I made the point, rather counter-intuitively, that just because there is other input available (e.g., gesture) doesn’t mean it will be used. I also pointed out that context can actually impede learning at times. Here, I was referring mostly to the work by Tim Beyer on AAE-speaking children’s interpretation of SAE. We can’t just assume that providing extra information will help children – they may not be ready or able to use it. And even when they do, it may not be scaffolding acquisition in the ways we assume.
The third section of the talk was focused on the question How do children learn from language? There, I talked about some new work I am just starting to do (in collaboration with Carrie Ichikawa Jenkins over in Philosophy) on explanations and the development of explanatory preferences. The message was just that children make generalizations over the patterns they hear, including some very abstract ones, and that we should be mindful of this when talking to children. Or as I put it, “we need to think more critically about the ways we talk about things, and how they can impact children’s generalizations about knowledge”.
I’m visiting Northern Illinois University this week, as part of serving as a mentor in their PI Academy. I’ll be working with Karen Lichtman in the Department of Foreign Languages and Literature. I’m looking forward to the visit, and to being part of this really innovative program.
As part of my involvement, I’ll be giving a talk on some of my work. I’ve chosen to talk about work most related to Karen’s own (really interesting!) research program. Which means talking about work I’ve never talked about before (although thankfully, Amy Finn, the former student who did much of the work, has). It’s especially exciting to me because I’m bringing together work by 2 PhD students at 2 different universities, in addition to work done by 2 undergraduates, again, from both UC Berkeley and UBC. Given the way we write papers, things can appear as a bunch of unrelated projects, when really there is a very programmatic thread underlying them all, and it’s been fun to string that thread through the studies while working on the talk.
The talk is entitled “Input, ‘intake’, and the adult language learner”.
Here’s the abstract: Differences in outcomes between child and adult language learners have long been noted – in particular, the fact that people who start learning a language as children usually reach a higher level of proficiency in the language than those who start learning later in life. A variety of explanations for this discrepancy in outcomes have been proposed, ranging from differences in neural plasticity to differences in the levels of personal identification with the new and old cultures, most of which find some support in the data. One factor that has received relatively less attention is input differences, the idea that children and adults tend to get very different linguistic input. People have pointed out, for instance, that the context or environment in which adults versus children learn obviously will affect their input, which then has the potential to affect learning outcomes. But there is another aspect to input, namely, how learners engage with and process the input they receive, that can be described as affecting their intake, not just their input. These are things internal to the learner, like the nature and strength of prior knowledge and maturationally controlled cognitive/brain changes. In this talk, I will present data from several studies using miniature artificial language methods demonstrating how learning outcomes for adult learners are affected by their intake, and discuss how these intake effects are related to maturation and so age of acquisition.
I’m excited to finally be able to announce a publication on the blog! It’s a paper entitled “Learning language with the wrong neural scaffolding: the cost of neural commitment to sounds” that just came out in Frontiers in Systems Neuroscience. It’s part of a special issue on sensitive periods in development. So happy to see this work finally come out. Congratulations to first author Amy Finn!
It’s my first foray into open access publishing, and it was a great experience.
Abstract is here: “Does tuning to one’s native language explain the “sensitive period” for language learning? We explore the idea that tuning to (or becoming more selective for) the properties of one’s native-language could result in being less open (or plastic) for tuning to the properties of a new language. To explore how this might lead to the sensitive period for grammar learning, we ask if tuning to an earlier-learned aspect of language (sound structure) has an impact on the neural representation of a later-learned aspect (grammar). English-speaking adults learned one of two miniature artificial languages (MALs) over 4 days in the lab. Compared to English, both languages had novel grammar, but only one was comprised of novel sounds. After learning a language, participants were scanned while judging the grammaticality of sentences. Judgments were performed for the newly learned language and English. Learners of the similar-sounds language recruited regions that overlapped more with English. Learners of the distinct-sounds language, however, recruited the Superior Temporal Gyrus (STG) to a greater extent, which was coactive with the Inferior Frontal Gyrus (IFG). Across learners, recruitment of IFG (but not STG) predicted both learning success in tests conducted prior to the scan and grammatical judgment ability during the scan. Data suggest that adults’ difficulty learning language, especially grammar, could be due, at least in part, to the neural commitments they have made to the lower level linguistic components of their native language.”
It looks like there will be lots of really interesting work presented at CogSci 2013 in Berlin this weekend.
Our lab is being represented by Sarah Wilson (a former Berkeley PhD student), who is presenting on Friday, August 2, at 4:30 pm, in the Language Acquisition 1 session.
“Acquisition of phrase structure in an artificial visual grammar”. As promised, we doubled the data, so the talk will be a little different from the proceedings paper.
“I try to think about the mechanism as much as possible – it’s the very careful and most systematic “how” explanation that we should always have in mind. Still, if I see this word nonchalantly thrown around in another review of my work, I will scream. It’s important. Fundamental. So fundamental, that it should motivate the design of experiments and theory guiding them, not the speculative post-hoc interpretations we make (at least not out loud right?). Mechanism is becoming the word used by (perhaps lazy?) reviewers who don’t have anything specific to say. Ironic. That, or perhaps I’ve just been reading way too many reviews these last 2 weeks and I need to thicken up…”
– Amy Finn, PhD^
I know I said that this blog was going to be used mostly to post updates on the goings on in the lab, and this post won’t fit with that theme. But it touches on something that I’ve been thinking about a lot lately, while introducing you to the way I think, and the way I try to train grad students to think.
I think about theory, a lot. Everything I do is done with theory in mind. By theory, I don’t necessarily mean big grand theory; I mean more specific, concrete ideas about how things work (the mechanism Amy refers to), how different aspects of cognition are related to each other, and how my findings fit in with what we already know. I don’t do ‘cute’ studies that are interesting ‘just because’. I always want my work to tell us something, something bigger. This means that I think. A lot.
This makes me slower than many other researchers. I don’t jump into things quickly. And I don’t tend to write quickly either. It also means that I frustrate students. When a student comes to me with an idea for a study, especially early in their relationship with me, I usually stop them from explaining it to me part way through and ask why, why they would do whatever it is they are proposing to do. What will it tell us about anything other than the particular experiment they are describing? Usually they don’t have much of an answer. That’s why I encourage students to start with questions, not ideas for studies. Armed with a question, we can design a study to answer that question. And then we sit and think about what positive and negative results would mean, what are the possible confounds, other interpretations, and think about what the follow up studies should be given different sets of results. I almost never design one experiment at a time. And I never think about tweaking variables in a study just because they are there to be tweaked. I always want a reason, a bigger picture reason, for manipulating a variable. That’s not to say that ‘little’ variables (like ISI) aren’t important. In fact, we’re finding out in an ongoing study by Alexis Black that it (ISI) is. We found that out by accident though, and we’re now investigating it purposely, with informed ideas about why it has the effect it seems to have. Ideas that might be wrong. (Stay tuned to the blog for more on this in the near future.)
So in general, my approach to graduate training is to help students learn to think in a certain way, not to think certain things. Specifically, I want them to leave the lab approaching research in a certain way. To think big thoughts, even about ‘small’ things. The conclusions they come to, and the theories they espouse, might be different from mine. That’s as it should be. And I try to remain open enough myself that I can learn from them as well. I’m not sure how successful I am at all of this, but it seems to be working OK. On the latter point, my own interests have been demonstrably affected by my students (see my continuing interest in gesture, for instance, which is all due to the influence of Whitney Goodrich Smith). On the former, I am heartened by the recent Facebook post by Amy Finn that was the quote that led off this post.
But I am also frustrated. Frustrated for her, as I seem to have made her life much more difficult by encouraging her to think this way. Frustrated that this is so far from the standard way of working that reviewers don’t believe you when you say that contrasts are planned. And ask you to examine your data every which way, with no theoretical basis, while simultaneously chastising you for doing too many comparisons. (How doubling the number of statistical tests is a solution for too many to start with is beyond me.) And encourage you to remove non-significant findings from a paper. You know what, I include carefully controlled variables in a study because there was reason to believe that they would affect outcomes. Sometimes I am wrong. And I think it is worth knowing that I am wrong. Especially when results from other related studies would suggest otherwise. (I have been able to include some null results before, see e.g., *Hudson Kam, 2009, so public records of my incorrect ideas do exist. But not enough.) But as we all know, null results are notoriously hard to publish. This is a topic that has been much discussed lately. And people are pushing better stats as a way to fix the problem. But that is only part of the solution. It seems to me that situating work within theoretical issues and questions that are specific enough to be meaningful (not just of the ‘hey, are these two things related’ variety) is another crucial part of the solution. But of course, this only works if we also know about failures. And understand them.
What is the point of this post? Well, it is two-fold. One, to announce that we will do blog postings of failed experiments and conditions on the blog so that there is a more public record of them, from my lab at least. Two, to make a point about the importance of thinking from a theoretical perspective. And as part of this, to inform people of how we do things in our lab. To clearly state that if you encounter a student who has worked with me, you can ask them to justify their work, to put it in a broader framework. I can assure you, they have thought about it. (And to warn people who might be interested in working with me about this. My way of working works for some people, but not for others.) We don’t have all the answers here. (If we did, I’d be out of a job.) But we’re really good at questions.
^quote used by permission
*Hudson Kam, C.L. (2009). More than words: Adults learn probabilities over categories and relationships between them. Language Learning and Development, 5, 115-145.
As promised, more information about a tweet:
Lab alum Sarah Wilson will be presenting “Acquisition of Phrase Structure in an Artificial Visual Grammar” at CogSci 2013 in Berlin this summer.
Here’s the abstract:
Recent studies showing learners can induce phrase structure from distributional patterns (Thompson & Newport, 2007; Saffran, 2001) suggest that phrase structure need not be innate. Here, we ask if this learning ability is restricted to language. Specifically, we ask if phrase structure can be induced from non-linguistic visual arrays and further, whether learning is assisted by abstract category information. In an artificial visual grammar paradigm where co-occurrence relationships exist between categories of objects rather than individual items, participants preferred phrase-relevant pairs over frequency-matched non-phrase pairs. Additionally, participants generalized phrasal relationships to novel pairs, but only in the cued condition. Taken together these results show that learners can acquire phrase structure in a non-linguistic system, and that cues improve learning.
The plan is to collect some more data for this project over the summer too, so stay tuned for more updates.