Focus on Research: The Generality of Schedule Effects

Many behavior analysts assume that basic research with animals will yield general principles that extend to many different species, including humans. This assumption applies to the research on schedules of reinforcement. In this context, experimenters who describe patterns of behavior on a given schedule believe that similar regularities will develop for any species that has evolved the capacity for operant conditioning. The assumption of generality implies that the effects of contingencies of reinforcement extend over species, reinforcement, and behavior. For example, a fixed-interval schedule is expected to produce the scalloping pattern not only for a pigeon pecking a key for food but also for a child solving mathematics problems for teacher approval.

This assumption is clearly stated in a variety of passages from books in behavior analysis. In their popular text, Whaley and Malott (1971) comment that "past research has shown that nearly all of the results of animal experimentation are just as true of humans as they are of animals" (p. 8). A similar view was expressed by Morse (1966) in the early handbook of operant behavior. He wrote that "any member of most species will give a similar performance on the same schedules" (p. 59). Finally, B. F. Skinner (1969) supported the assumption of generality when he suggested that "the fact is that methods first developed for the study of lower organisms, as well as the concepts and principles arising from that study have been successfully applied to human behavior, both in basic analysis and in many technological applications" (p. 101).

Dr. Fergus Lowe (Fig. 5.10), a professor of psychology at the University College of North Wales, has questioned the generality of schedule effects. He states that "the question which provides the main focus of my research is one which should be central to all behavior analysis; namely, how do the principles of behavior derived from animal experiments apply to human behavior?" (personal communication, March 20, 1989).

FIG. 5.10. Fergus Lowe. Reprinted with permission.

Lowe has devoted much research to an analysis of performance on fixed-interval schedules of reinforcement. He has investigated the operant behavior of rats, pigeons, chimpanzees, human adults, and children of differing ages and language ability.

Lowe (1979) has conducted numerous studies of fixed-interval performance with humans, who press a button to obtain points that are later exchanged for money. Figure 5.11 shows typical performances on fixed-interval schedules by a rat and two human subjects. Building on research by Harold Weiner (1969), Lowe argues that animals show the characteristic scalloping pattern, and humans generally do not. Humans often produce one of two patterns—either an inefficient high, steady rate of response or an efficient low-rate, break-and-run performance. Experiments by Lowe and his colleagues have focused on the conditions that produce the high- or low-rate patterns in humans. Of course, one obvious controlling factor is the effort required to make the operant response: the greater the response cost, the lower the rate of responding, and vice versa.

Other than effort, the basic idea is that schedule performance in humans reflects the influence of language. (See chap. 12 on verbal behavior.) In conditioning experiments, people generate some verbal rule and proceed to behave according to the rule rather than to the experimentally arranged contingencies. Lowe, Beasty, and Bentall (1983) commented that:

Verbal behavior can, and does, serve a discriminative function that alters the effects of other variables such as scheduled reinforcement. Unlike animals, most humans are capable of describing to themselves, whether accurately or inaccurately, environmental events and the ways in which those events impinge upon them; such descriptions may greatly affect the rest of their behavior. (p. 162)

FIG. 5.11. Typical animal performance on FI and the high- and low-rate performance usually seen with adult humans. Note: The data are adapted from Reinforcement and the Organization of Behavior (p. 162), by F. C. Lowe, 1979, New York: Wiley.

In most cases, people who follow self-generated rules satisfy the requirements of the schedule, obtain reinforcement, and continue to follow the rule. For example, one person may say, "I should press the button fast," whereas another says that "I should count to 50 and then press the button." Only when the contingencies are arranged so that self-generated rules conflict with programmed reinforcement do people reluctantly abandon the rule and behave in accord with the contingencies (Baron & Galizio, 1983).

Although conditions may be arranged to override the effects of rules, most adult human behavior is rule governed (see Skinner, 1969). The implication is that humans who have not developed language skills will show characteristic effects of schedules. Lowe et al. (1983) designed an experiment to show typical FI performance by children less than a year old. The infants sat in a high chair and were able to touch a round metal cylinder. When the cylinder was touched, one infant (John) received a small bit of food (pieces of fruit, bread, or candy) on fixed-interval schedules of reinforcement. A second infant, Ann, was given 4 s of music played from a variety of music boxes on the same schedules. Both infants produced a response pattern similar to the rat's performance in Fig. 5.11. Thus, infants who are not verbally skilled behave in accord with the FI contingencies and are substantially different from adult humans.

Based on this finding and other research, Lowe argues that "these studies have shown 1) that the operant behavior of verbally able humans differs very markedly from that of non-verbal organisms (i.e., animals and human infants) and 2) that verbal behavior plays a major role in bringing about these differences" (personal communication, 1989). These conclusions have encouraged Dr. Lowe to increasingly concentrate his investigations on the interactions between verbal and nonverbal behavior, particularly in early childhood when verbal control of behavior is first established.

Although the effects of verbal behavior and self-instruction may account for human performance on FI schedules, there are alternative possibilities. Dr. Michael Perone and his colleagues, Drs. Mark Galizio and Alan Baron, in an article concerning the relevance of animal-based principles for human behavior, noted:

when comparisons are made between the performances of humans and animals, discrepancies . . . are not difficult to find and, in themselves, provided little basis for satisfaction. The challenge for the student of human operant conditioning is to identify the similarities in the variables underlying the discrepant performances and ultimately to bring them under experimental control. (Perone, Galizio, & Baron, 1988, p. 80)

There is no doubt that humans become more verbal as they grow up. However, there are many other changes that occur in the movement from infancy to adulthood. An important consideration is the greater experience that adults have with ratio-type contingencies of reinforcement. Infants rely on the caregiving of other people. This means that most of the infant's reinforcement is delivered on the basis of time and behavior. A baby is fed when the mother has time to do so, although fussing may decrease the interval. As children get older, they begin to crawl and walk, and reinforcement is delivered more and more on the basis of their behavior. When this happens, many of the contingencies of reinforcement change from interval to ratio schedules. This experience with ratio schedules of reinforcement may contribute to the differences between adult human and animal performance on fixed-interval schedules.

Research by Wanchisen, Tatham, and Mooney (1989) has shown that rats perform like adult humans on FI schedules after a history of ratio reinforcement. The animals were exposed to variable-ratio reinforcement and then were given 120 sessions on a fixed-interval 30-s schedule (FI 30 s). Two patterns of response developed on the FI schedule—a high-rate pattern with little pausing and a low-rate pattern with some break-and-run performance. These patterns of performance are remarkably similar to the schedule performance of adult humans (see Fig. 5.11). One implication is that human performance on schedules may be explained by a special history of reinforcement rather than by self-generated verbal rules. At this time, it is reasonable to conclude that both reinforcement history and verbal ability contribute to fixed-interval performance of humans.

Variable Interval

On a variable-interval, or VI, schedule, responses are reinforced after a variable amount of time has passed (see Fig. 5.12). For example, on a VI 30-s schedule, the time to each reinforcement changes, but the average time is 30 s. The symbol V indicates that the time requirement varies from one reinforcer to the next. The average amount of time required for reinforcement is used to index the schedule.

Interval contingencies are common in the ordinary world of people and other animals. People line up, sit in traffic jams, wait for elevators, time a boiling egg, and are put on hold. In everyday life, variable time periods occur more frequently than fixed ones. Waiting in line to get to a bank teller may take 5 min one day and half an hour the next time you go to the bank. A wolf pack may run down prey following a long or short hunt. A baby may cry for 5 s, 2 min, or a quarter of an hour before a parent picks up the child. A cat waits varying amounts of time in ambush before a bird becomes a meal. Waiting for a bus is rarely reinforced on a fixed schedule, despite the efforts of transportation officials. The bus arrives around an average specified time and waits only a given time. A carpool is another example of a VI schedule with limited hold: the car arrives more or less at a specified time but waits for a rider only a limited, and usually brief, time. Adding a limited hold to a VI schedule (the reinforcer remains available only for a set time after each variable interval) increases the rate of responding by reinforcing short interresponse times.
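The VI contingency and the limited-hold addition can be made concrete with a short simulation sketch. This is purely illustrative and not drawn from the experimental literature: the function name `simulate_vi`, the exponentially distributed interval lengths, and the example response patterns are all assumptions made here for the sketch.

```python
import random

def simulate_vi(mean_interval, response_times, limited_hold=float("inf"), seed=0):
    """Count reinforcers earned on a VI schedule.

    A reinforcer is "set up" after a variable interval (average length =
    mean_interval; here the intervals are drawn from an exponential
    distribution).  The first response after a setup collects it.  With a
    finite limited hold, a setup that is not collected within
    `limited_hold` seconds expires and the next interval begins.
    """
    rng = random.Random(seed)
    setup = rng.expovariate(1.0 / mean_interval)   # time of first setup
    earned = 0
    for t in sorted(response_times):
        # Skip past any setups whose hold window expired before this response.
        while setup + limited_hold < t:
            setup = setup + limited_hold + rng.expovariate(1.0 / mean_interval)
        if t >= setup:          # a reinforcer is set up and still available
            earned += 1
            setup = t + rng.expovariate(1.0 / mean_interval)
    return earned

# Illustration: a steady responder (one press per second) versus a slow
# responder (one press per minute) over the same 600 s on VI 30 s.
steady = [i * 1.0 for i in range(1, 601)]
slow = [i * 60.0 for i in range(1, 11)]
print(simulate_vi(30.0, steady), simulate_vi(30.0, slow))
# With a 2-s limited hold, only the steady responder reliably collects
# setups before they expire.
print(simulate_vi(30.0, steady, limited_hold=2.0),
      simulate_vi(30.0, slow, limited_hold=2.0))
```

Note how the limited hold does the work described above: a slow responder lets setups expire, so only short interresponse times keep pace with the schedule.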

Figure 5.13 portrays the pattern of response generated on a VI schedule. On this schedule, the rate of response is moderate and steady. The pause after reinforcement that occurs on FI usually does not appear in the variable-interval record. Because the rate of response is steady and moderate, VI performance is often used as a baseline for evaluating other independent variables. The rate of response on VI schedules may increase or decrease as a result of experimental manipulations. For example, tranquilizing drugs such as chlorpromazine decrease the rate of response on variable-interval schedules (Waller, 1961), whereas stimulants increase VI performance (Segal, 1962). Murray Sidman (1960) has commented on the usefulness of VI performance as a baseline.

An ideal baseline would be one in which there is as little interference as possible from other variables. There should be a minimal number of factors tending to oppose any shift in behavior that might result from experimental manipulation. A variable-interval schedule, if skillfully programmed, comes close to meeting this requirement. (p. 320)

FIG. 5.12. A variable-interval schedule. The symbol V stands for variable and indicates that the schedule is indexed by the average time requirement for reinforcement.

FIG. 5.13. Idealized cumulative pattern of response produced by a variable-interval schedule of reinforcement.

In summary, VI contingencies are common in everyday life. These schedules generate a moderate steady rate of response. Because of this pattern, variable-interval performance is frequently used as a baseline.
