Beaumont (1982) suggests that children of 7 would find at least some types of relative clauses difficult to read. She tested children’s ability to understand both subject-relative and object-relative clauses with and without the relative pronoun that. Interestingly, and contrary to previous findings, the children found the object relatives slightly easier than the subject relatives. With the subject relatives, inclusion or omission of the relative pronoun made no difference. Subject relatives, when misunderstood, tended to be interpreted as two overlapping NVN constructions. For example:
The woman following the man is carrying a dog.
would be interpreted as:
The woman is following the man … The man is carrying a dog.
With the object relatives, however, the presence of the relative pronoun was important: understanding was significantly better when the pronoun was included. The marking function of that guided the children’s interpretations. The children also tended to match up the nouns and verbs according to word-order and distance principles (Oakhill and Garnham, 1988: 47).
Furthermore, in a study of children’s spontaneous production of relative clauses, Romaine (1984) found an overwhelming preference for right-branching (RB) object-relative clauses at all the ages she tested (6-, 8-, and 10-year-olds). This preference, according to Oakhill and Garnham (1988), reflects the fact that RB structures are systematically less complex than centre-embedded (CE) structures.
The implication for syntactic simplification would be: substitute RB structures for CE ones as much as possible.
A Proposition-based Measure of Comprehensibility
There have been several attempts to develop text-analysis schemes more appropriate to the active processing model of reading. The best known of these are the approaches of Miller and Kintsch (1980) and Meyer (1975). In Miller and Kintsch’s model, texts are analyzed into propositions, and it is these propositions which are used to build a meaning structure. An initial text structure, or schema, is imposed based on the initial propositions. That structure is expanded, modified, or abandoned as the reader attempts to interpret successive propositions in terms of it. The frequency of occurrence of propositions and the limits of short-term memory in holding sequences of propositions play important roles in determining comprehensibility.
Meyer’s (1975) approach is also based on a propositional analysis of the text and the identification of the coherence relations between the propositions. From her perspective, the reader builds a hierarchical structure of the propositions. Propositions higher in the structure, or supporting the higher-order propositions, will be better recalled.
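Neither Miller and Kintsch’s nor Meyer’s propositional analysis is specified as an algorithm in these sources; propositions were identified by hand. Purely to illustrate the idea that proposition density per sentence could be estimated and checked against a short-term-memory limit, the following toy sketch uses a crude proxy (counting clause-linking words). The connective list and the threshold are my own assumptions, not part of either model.

```python
# Illustrative only: a crude proxy for proposition density, not the
# hand-coded propositional analysis of Kintsch or Meyer.
# The connective list and the memory "limit" below are hypothetical.

CONNECTIVES = {"because", "although", "when", "while",
               "that", "which", "who", "if"}

def estimate_propositions(sentence: str) -> int:
    """Very rough estimate: one proposition for the main predication,
    plus one per clause-linking word."""
    words = sentence.lower().strip(" .").split()
    return 1 + sum(1 for w in words if w in CONNECTIVES)

def flag_dense_sentences(text: str, limit: int = 3):
    """Return (sentence, estimate) pairs whose estimated proposition
    count exceeds a notional short-term-memory limit."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [(s, estimate_propositions(s)) for s in sentences
            if estimate_propositions(s) > limit]
```

On this toy measure, simplification that splits a dense sentence into several short ones lowers each sentence’s estimate, which is the direction the recommendation below points in.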
Meyer maintains that the structure imposed by the writer may not be adopted by the reader. Such a failure may be due to inadequate signaling of the structure by the writer, lack of skill on the part of the reader, or a difference in goals (information requirements) between the two.
The implications for syntactic simplification would be: state explicitly, with proper clues, the structure of propositions and, as far as possible, decrease the number of propositions in each sentence.
2.7) Syntactic Complexity and Reading
Many experts in the field agree that students of English as a second or foreign language cannot make reasonable predictions about the material they are reading unless they are familiar with English grammatical patterns (Buck, 1973). The effect of syntax on reading comprehension has attracted the attention of quite a number of researchers during the past decade (Smith, 1971). To understand a sentence, one must work out its ingredients (morphemes, words, phrases, and clauses): their boundaries, their meanings (grammatical, lexical, or both), and their relationships to each other as constituents of larger units, up to and including the sentence itself (Ives, 1964).
It is a widely accepted principle that the simple declarative sentence is, in a sense, the canonical form of a sentence, in terms of which other sentence types, both complex and reduced, may be explained by reference to such operations as conjunction, insertion, inversion, substitution, and transposition (Quirk, Greenbaum, Leech, and Svartvik, 1985).
Quirk, et al. (1985) propose the following sources for syntactic complexity:
1. Combining subordination devices within a sentence.
2. Position of subordinate clauses: subordinate clauses may be positioned initially, medially, or at the end of their superordinate clauses. Right-branching clauses are the easiest to comprehend; however, comprehension becomes more difficult as the complexity of left-branching increases.
3. Self-embedding: the medial subordination of one constituent within another constituent of the same kind.
4. Subordination versus coordination: coordination is the kind of link most used for optimum ease of comprehension.
5. Structural ambiguity: a change in the typical word order that is familiar to FL readers can also be a cause of complexity. FL readers are mostly familiar with the SVO (or NVN) pattern of the surface structure, so when their expectations are violated in the foreign language, their fluency may be disrupted and hence comprehension hindered. According to many researchers (Wood, 1974; Clark and Clark, 1977), parsing sentences into their natural structural constituents clearly facilitates the rate at which sentences can be processed, regardless of the level or skill of the reader (cited in Barzegar, 1997).
Anderson and Davison (1988) have demonstrated that difficulty of comprehension is not linked in a simple way to complex features of sentence syntax. However, they state that if the processing of a complex structure in some way exceeds the attentional resources of the reader, it will be difficult for the reader to continue processing the structure. Crain and Shankweiler (1988) propose two hypotheses regarding reading acquisition:
1. The structural deficit hypothesis, which proposes that some syntactic structures are inherently more complex than others; for instance, it is claimed that a sentence containing both a main clause and a subordinate clause is more complex than a coordinate structure.
2. The processing deficit hypothesis, which postulates that reading demands a number of secondary processing mechanisms to interface spoken language with an orthographic system of representation. These subsidiary mechanisms include verbal working memory, routines for the identification of printed words, and the syntactic, semantic, and pragmatic processors.
Furthermore, word recognition, parsing, and the semantic composition of word meanings, which are all highly automatic in speech processing, must be reshaped in reading to interface with a new input source.
In essence, these two views hinge upon the distinction between structure and process. On the first view, there is a structural deficit, i.e., a deficit in stored knowledge. On the second view, the problem is one of process, i.e., of access to and use of this stored knowledge.
What is common to these hypotheses is that each attempts to locate the causes of difficulty in reading. The processing view, moreover, indicates that it is not the structures themselves that make comprehension difficult, but the demands these structures make on the subsidiary processing mechanisms, especially verbal working memory (Crain and Shankweiler, 1988).
On the other hand, Frazier (1988) suggests that syntactic ambiguity poses a difficult problem for readers. There are indefinitely many sentences in a natural language, and thus it is in principle impossible for each sentence to be prestored in memory. Hence, it is assumed that the processor uses the syntactic well-formedness constraints of the language to assign a syntactic structure to a sentence. The point of assigning a syntactic structure to an input, then, is to determine what a sentence actually means, i.e., to distinguish the permissible meanings of a sentence from the larger set of meanings that results from randomly combining the meanings of the lexical items in the sentence. All this should impose memory and computational demands on the processor that are larger than the demands imposed by a corresponding unambiguous sentence (Crain and Shankweiler, 1988).
Furthermore, the complexity of processing ambiguous inputs will persist regardless of whether the input is fully ambiguous, i.e., ultimately open to more than one analysis, or only temporarily ambiguous, i.e., the initial portion of the sentence may be open to more than one analysis but subsequent items are consistent with only one of these analyses (Crain and Shankweiler, 1988; Frazier, 1988).
Thus, two types of temporary ambiguity are distinguished: horizontal ambiguity, which persists even when all information has been extracted from the preceding sentence and discourse context, as in:
John told the girl that Bill liked the story
and vertical ambiguity, due to delayed use of information which in principle may be extracted from the material preceding the ambiguous string, but which is in fact not exploited by the processor until some time after the initial syntactic analysis of the string (Frazier, 1988). One further source of processing complexity derives from the operations needed to revise an initial incorrect analysis of a sentence, e.g.:
Lydia knew the answer was correct.
The processor deals with temporary ambiguity by initially pursuing the analysis which requires the fewest syntactic nodes. In the above example, “the answer” is initially taken to be the direct object of “knew” rather than the subject of a sentential complement, since the object analysis does not require the insertion of an extra S-node dominating the temporarily ambiguous noun phrase; when “was correct” is encountered, this first analysis must be revised (Frazier, 1988).
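The fewest-nodes preference can be made concrete with a small sketch. The tree encoding below (nested (label, children...) tuples) and the two candidate analyses are illustrative constructions of my own, not Frazier’s notation; the sketch only shows that counting nodes favours the direct-object analysis of the ambiguous prefix “Lydia knew the answer ...”.

```python
# Illustrative sketch of the minimal-attachment preference: given
# competing partial analyses of an ambiguous prefix, prefer the one
# with the fewest syntactic nodes. Tree encoding is a hypothetical
# convenience: a bare string is a leaf word, a tuple is (label, *children).

def count_nodes(tree):
    """Count nodes in a (label, child, ...) tuple tree; strings are leaves."""
    if isinstance(tree, str):
        return 1
    return 1 + sum(count_nodes(child) for child in tree[1:])

def minimal_attachment(analyses):
    """Pick the analysis with the fewest nodes (ties: first wins)."""
    return min(analyses, key=count_nodes)

# Two analyses of the prefix "Lydia knew the answer":
# (a) "the answer" as direct object of "knew": fewer nodes.
direct_object = ("S", ("NP", "Lydia"),
                 ("VP", "knew", ("NP", "the answer")))
# (b) "the answer" as subject of a sentential complement: an extra
# S node dominates the ambiguous NP.
complement = ("S", ("NP", "Lydia"),
              ("VP", "knew", ("S", ("NP", "the answer"), ("VP", "..."))))

preferred = minimal_attachment([direct_object, complement])
```

Here the direct-object analysis wins, which is exactly why continuations like “was correct” force a revision.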
In addition to the processing difficulty due to syntactic ambiguities, increases in complexity are associated with differences in the memory burden imposed by different sentence structures (Wanner and Maratsos, 1978). Moreover, the complexity of syntactic processing appears to be greater when many syntactic decisions are clustered together (associated with a local region of the sentence) rather than distributed evenly over the input string (Frazier, 1988). As a matter of fact, it is hard to imagine that linguistic structure has no effect at all on performance, and such a view is not generally held (Smith, 1988).
All the above-mentioned sources of syntactic complexity and their effects on reading comprehension suggest that an efficient reader, especially at the university level, should rely partially on syntactic elements to get at the meaning of a given text, and that the matter of vocabulary is not as difficult as that of syntax for university students (Cojocaru, 1977).
2.8) Simplification of Reading Materials
Researchers have asked whether it is possible to assess the difficulty of a text by looking at some of its features, especially those which can be measured objectively. Do features of the text reflect its difficulty, and do linguistic features, such as word difficulty and sentence complexity, in themselves present barriers to comprehension? Why do some readers, and not others, find a text difficult to comprehend: is it because of a deficit of knowledge, of language, or of attentional capacity for efficient processing of language?
These are questions which should be investigated by educational and cognitive psychologists as well as by educators, writers, and publishers concerned with the very practical problem of matching texts and readers (Davis, 1995).
All pedagogy involves simplification in that it aims at expressing concepts, beliefs, attitudes, and so on in a way which is judged to be in accord with the knowledge and experience of learners. In language teaching, simplification usually refers to a kind of intralingual translation whereby a piece of discourse is reduced to a version written