
Syntactic theories are typically constructed on the basis of acceptability judgments. These judgments are increasingly often collected experimentally, from larger sets of linguistically naive participants. An important assumption is that participants have a clear understanding of what they are asked to do, which can be assessed by establishing the internal consistency of their judgments. The question we address in this paper is whether such ‘human measuring instruments’ are consistent in their judgments. To this end, we re-examined the judgment data from Schoenmakers (2023), in which three types of prescriptive norm violations and object scrambling sentences were evaluated. We used Generalizability Theory to investigate the degree of covariation in the judgments and found that internal consistency was poor in the norm violation item sets but excellent in the scrambling item set. One difference between the data patterns is that the former item sets produced ‘sledgehammer’ effects between the stigmatized and non-stigmatized variants, which left little room for participant variation. Our analyses show that judgments from naive native speakers can adequately serve linguistic theorizing, in the case of both stigmatized and non-stigmatized variation. Furthermore, we performed cluster analyses to identify subgroups of participants and gain a better grasp of the variation in the data set. We conclude that targeted statistical analyses can help researchers understand their data and advance linguistic theory building.
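For readers unfamiliar with how internal consistency is quantified in a Generalizability Theory framework, the following is a minimal sketch of the relative G-coefficient for the simplest one-facet crossed design (participants × items). It is not the analysis reported in the paper: the design layout, the truncation of negative variance components at zero, and the toy ratings are all illustrative assumptions.

```python
import numpy as np

def g_coefficient(scores: np.ndarray) -> float:
    """Relative G-coefficient for a crossed participants x items design.

    scores: 2-D array of ratings, rows = participants, columns = items.
    """
    n_p, n_i = scores.shape
    grand = scores.mean()

    # Sums of squares for a two-way design without replication
    ss_p = n_i * np.sum((scores.mean(axis=1) - grand) ** 2)
    ss_i = n_p * np.sum((scores.mean(axis=0) - grand) ** 2)
    ss_res = np.sum((scores - grand) ** 2) - ss_p - ss_i

    # Mean squares
    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))

    # Variance components (negative estimates truncated at zero)
    var_p = max((ms_p - ms_res) / n_i, 0.0)
    var_res = ms_res

    # Person variance relative to person variance plus the
    # interaction-plus-error variance averaged over the items
    return var_p / (var_p + var_res / n_i)

# Toy usage with fabricated 7-point ratings (5 participants x 4 items)
ratings = np.array([
    [6, 7, 6, 7],
    [5, 6, 6, 5],
    [2, 3, 2, 3],
    [6, 6, 7, 6],
    [3, 2, 3, 2],
])
print(round(g_coefficient(ratings), 3))
```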