Glossa (Jul 2022)

A MaxEnt learner for super-additive counting cumulativity

  • Seoyoung Kim

DOI
https://doi.org/10.16995/glossa.5856
Journal volume & issue
Vol. 7, no. 1

Abstract


Whereas most previous studies on (super-)gang effects examined cases where two weaker constraints jointly beat another stronger constraint (Albright 2012; Shih 2017; Breiss & Albright 2022), this paper addresses gang effects that arise from multiple violations of a single constraint, which Jäger & Rosenbach (2006) referred to as counting cumulativity. The super-additive version of counting cumulativity is the focus of this paper: cases where multiple violations of a weaker constraint not only overpower a single violation of a stronger constraint, but also surpass the mere multiplication of the severity of a single violation. I report two natural language examples where a morphophonological alternation in a compound is suppressed by the existence of marked segments in a super-additive manner: laryngeally marked consonants in Korean compound tensification and nasals in Japanese Rendaku. Using these two test cases, this paper argues that these types of super-additivity cannot be entirely captured by the traditional MaxEnt grammar; instead, a modified MaxEnt model is proposed, in which the degree of penalty is scaled up by the number of violations through a power function. This paper also provides a computational implementation of the proposed MaxEnt model, which learns the necessary parameters from quantitative language data. A series of learning simulations on Korean and Japanese shows that the MaxEnt learner is able to detect super-additive constraints and find appropriate exponent values for them, correctly capturing the probability distributions in the input data.
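The contrast between standard (linear) MaxEnt and the power-function scaling described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the function name, weights, and exponent values are hypothetical, and the sketch only shows how raising a violation count to an exponent greater than 1 makes repeated violations of a weaker constraint super-additive.

```python
import math

def maxent_probs(candidates, weights, exponents):
    """Toy MaxEnt: each candidate is a list of violation counts, one per
    constraint. Each count n is scaled as n ** k before being weighted;
    with every k = 1 this reduces to the standard linear MaxEnt harmony,
    while k > 1 makes multiple violations of one constraint super-additive."""
    harmonies = [
        sum(w * (n ** k) for w, n, k in zip(weights, viols, exponents))
        for viols in candidates
    ]
    z = sum(math.exp(-h) for h in harmonies)
    return [math.exp(-h) / z for h in harmonies]

# Hypothetical setup: constraint 1 is stronger (weight 3.0), constraint 2
# weaker (weight 1.0). Candidate A violates the strong constraint once;
# candidate B violates the weak constraint twice.
cands = [[1, 0], [0, 2]]

# Linear MaxEnt: B's penalty is 1.0 * 2 = 2 < 3, so B is preferred.
linear = maxent_probs(cands, [3.0, 1.0], [1.0, 1.0])

# Super-additive scaling (exponent 2 on the weak constraint):
# B's penalty becomes 1.0 * 2**2 = 4 > 3, so A is now preferred.
power = maxent_probs(cands, [3.0, 1.0], [1.0, 2.0])
```

The point of the exponent is visible in the comparison: under the linear grammar two weak violations cost less than one strong violation, but under the power-scaled grammar the same two violations gang up to cost more, which is the super-additive pattern the abstract describes.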

Keywords