How much information is maintained at each of these different levels, and for how long, remains an open question (see, e.g., Bicknell et al., under review; Dahan, 2010), but it seems fair to assume that lower-level information is maintained within the internal representation of context for a shorter time than higher-level information. This literature also highlights the fact that, because language processing is highly interactive, with extensive communication across representational levels during processing (Elman, Hare, & McRae, 2004; McClelland & Rumelhart, 1981; Rumelhart & McClelland, 1982), a comprehender can use many of these different types of information, encoded within her internal representation of context, to facilitate the processing of incoming information at almost any other level of representation (see Altmann & Steedman, 1988; Crain & Steedman, 1985; Tanenhaus & Trueswell, 1995, for reviews and discussion). We next consider the computational implications of this type of interactivity for understanding the role of prediction in language comprehension.

Computational insights

In the probabilistic models of parsing we considered in section 1, the aim of the parser was to infer the structure of the sentence being communicated. This structure was conceptualized as generating words or word sequences. Several other generative probabilistic models of language have attempted to model inference at different levels and types of representation. For example, phonetic categories can be understood as generating phonetic cues (Clayards et al., 2008; Feldman et al., 2009; Kleinschmidt & Jaeger, 2015; Sonderegger & Yu, 2010), while semantic categories (Kemp & Tenenbaum, 2008) or topics (Griffiths, Steyvers, & Tenenbaum, 2007; Qian & Jaeger, 2011) can be understood as generating words.
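The logic of such generative models can be made concrete with a toy sketch: if a phonetic category is understood as generating an acoustic cue, then a comprehender can recover the category from the cue by inverting the model with Bayes' rule. The categories, cue (voice onset time), and all parameter values below are illustrative assumptions, not figures taken from the cited work.

```python
import math

# Toy generative model: each phonetic category generates an acoustic cue
# (voice onset time, in ms) from a Gaussian. All numbers are invented
# for illustration only.
CATEGORIES = {
    "b": {"prior": 0.5, "mean": 10.0, "sd": 8.0},   # short-lag VOT
    "p": {"prior": 0.5, "mean": 50.0, "sd": 12.0},  # long-lag VOT
}

def gaussian_pdf(x, mean, sd):
    """Density of a Gaussian with the given mean and standard deviation."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def infer_category(vot):
    """Invert the generative model with Bayes' rule:
    P(category | cue) is proportional to P(cue | category) * P(category)."""
    unnorm = {c: p["prior"] * gaussian_pdf(vot, p["mean"], p["sd"])
              for c, p in CATEGORIES.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

posterior = infer_category(30.0)  # a cue ambiguous between /b/ and /p/
```

An unambiguous cue (e.g., a VOT of 5 ms) yields a posterior concentrated on /b/, while an intermediate VOT yields a graded posterior over both categories, which is the sense in which comprehension is inference under this family of models.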
One simplifying feature of all these models is that they each generate just one type of input (although see Brandl, Wrede, Joublin, & Goerick, 2008; Feldman, Griffiths, Goldwater, & Morgan, 2013; Kwiatkowski, Goldwater, Zettlemoyer, & Steedman, 2012, for exceptions in the developmental literature). The ultimate goal of comprehension, however, is not to infer a syntactic structure, a phonemic category, a semantic category, or a topic. Rather, it is to infer the utterance's full meaning: the message (Bock, 1987; Bock & Levelt, 1994; Dell & Brown, 1991) or situation model (Johnson-Laird, 1983; Van Dijk & Kintsch, 1983; Zwaan & Radvansky, 1998) that the speaker or writer intends to communicate (Altmann & Mirkovic, 2009; Jaeger & Ferreira, 2013; Kuperberg, 2013; McClelland, St. John, & Taraban, 1989). For a comprehender to infer this message, she must draw upon multiple different types of stored information. Given this logic, any complete generative model of language comprehension (the process of language understanding) must consider message-level representations as probabilistically generating information at these multiple types and levels of representation. One way of modeling this type of architecture might be within a multi-representational hierarchical generative framework, the type of framework that has been proposed as explaining other aspects of complex cognition (Clark, 2013; Friston, 2005; Hinton, 2007; see also Farmer, Brown, & Tanenhaus).

Kuperberg and Jaeger. Lang Cogn Neurosci. Author manuscript; available in PMC 2017 January 01.
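The hierarchical idea can likewise be sketched in miniature: a message-level hypothesis generates a word, the word generates an acoustic cue, and inference inverts both levels at once by marginalizing over the intermediate (word) level. The scenarios, vocabulary, and probabilities below are toy assumptions chosen only to make the structure of the computation explicit.

```python
# A minimal two-level sketch of a hierarchical generative model:
# P(message | cue) is proportional to
#   sum over words of P(cue | word) * P(word | message) * P(message).
# All values are invented for illustration.
MESSAGE_PRIOR = {"pet-scenario": 0.5, "weather-scenario": 0.5}
WORD_GIVEN_MESSAGE = {
    "pet-scenario": {"cat": 0.7, "hat": 0.3},
    "weather-scenario": {"hail": 0.8, "hat": 0.2},
}
# Likelihood of the observed acoustic cue under each candidate word.
CUE_GIVEN_WORD = {"cat": 0.6, "hat": 0.3, "hail": 0.05}

def infer_message(cue_lik):
    """Posterior over message-level hypotheses, marginalizing over words."""
    unnorm = {}
    for msg, prior in MESSAGE_PRIOR.items():
        marginal = sum(cue_lik.get(word, 0.0) * p_word
                       for word, p_word in WORD_GIVEN_MESSAGE[msg].items())
        unnorm[msg] = prior * marginal
    z = sum(unnorm.values())
    return {m: v / z for m, v in unnorm.items()}

posterior = infer_message(CUE_GIVEN_WORD)
```

Note how a low-level cue that favors "cat" shifts belief toward the pet scenario at the message level, even though the cue itself was never directly generated by the message: evidence propagates through the intermediate word level, which is the sense in which such an architecture lets information at any level constrain any other.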
