LEARNING AND MOTIVATION 3, 209-222 (1972)

Choice Between Stimuli Previously Presented Separately¹

WALTER VOM SAAL

Princeton University

Three experiments examined the choice behavior of pigeons in a chamber with two adjacent keys that could each be lit with either a red or a green dot. In each experiment subjects first received several sessions of separate-stimulus training with only one color present on each trial, then a choice test with both colors present on each trial. Trials lasted a fixed period of time regardless of the number of pecks that occurred, with earned reinforcements presented at the end of the trial; no reinforcement was available during choice tests. When separate-stimulus training was arranged so that more reinforcements occurred per unit time with S1 present than with S2 present, either because responding was reinforced on a higher proportion of S1 trials or because S1 trials were shorter, S1 was pecked more often on the subsequent choice test. When reinforcements per unit time with the stimulus present were held constant, a number of other variables had little or no effect on subsequent choice, including total number of trials presented, proportion of trials followed by reinforcement, trial length, and total number of reinforcements per session with each stimulus.

When an animal is presented with two stimuli and must respond to one or the other, it is possible to view its choice as a function of theoretical “response strengths” to each of the individual stimuli (Herrnstein, 1970; Hull, 1943, p. 341; Spence, 1956, pp. 29ff, 200ff). This view suggests an experimental approach to the study of choice. One could first present S1 and S2 separately on different trials during separate-stimulus training to establish different histories of reinforcement for responding to each stimulus. After this separate-stimulus training, a choice test could be given by presenting the two stimuli together on each trial and observing the degree of responding to each stimulus. By manipulating the histories of reinforcement associated with the two stimuli when they are presented separately, it should be possible to vary the response strengths to those stimuli and thereby influence choice between the stimuli when they are subsequently presented together.

¹ More detailed data from these experiments are reported in a Ph.D. thesis submitted to McMaster University (vom Saal, 1969). The author expresses deep appreciation to H. M. Jenkins for helpful suggestions and stimulating discussions throughout the course of this work. Carol Cairns, David Challen, and Donna Warwick offered valuable assistance in running subjects and analyzing data. This research was partially supported by a grant from the National Research Council of Canada to H. M. Jenkins. Reprints may be obtained from the author at the Department of Psychology, Green Hall, Princeton University, Princeton, N.J. 08540.


Choice has been examined following separate-stimulus training in several previous experiments (Catlin & Gleitman, 1968; D'Amato, Lachman, & Kivy, 1958; Divak & Elliott, 1967; Honig, 1962; Logan, 1969; Mason, 1957; Pavlik & Born, 1962). The research reported here differs from previous research in that (a) pigeons were used in a discrete-trial procedure, and (b) an attempt was made to examine a large number of variables whose manipulation during separate-stimulus training might affect responding on the subsequent choice test.

EXPERIMENT 1
REINFORCEMENTS PER STIMULUS TIME AND REINFORCEMENTS PER SESSION

The variables examined in Expt. 1 were the number of reinforcements (rft) received with each stimulus per unit time that the stimulus was present (rft/stimulus time) and the number of reinforcements received with each stimulus per session (rft/session). Each of these variables could conceivably affect choice by modifying response strengths to S1 and S2 during separate-stimulus training. For example, suppose response strength to each stimulus increased each time a response to the stimulus was reinforced, but decreased as time passed with the stimulus present and responses nonreinforced. Then if two stimuli differed in rft/stimulus time during separate-stimulus training, greater response strength would develop to the stimulus receiving more reinforcements per unit time with the stimulus present, and that stimulus would be pecked more frequently on the subsequent choice test. Similarly, suppose response strength to each stimulus increased each time a response to the stimulus was reinforced, but decreased gradually either over time during each session or over time between sessions. Then if two stimuli differed in rft/session during separate-stimulus training, greater average response strength would be maintained by the stimulus reinforced more often in each session, and that stimulus would be pecked more frequently on the subsequent choice test.

Rft/stimulus time and rft/session have often been confounded in previous experiments. For example, in some experiments an equal number of S1 and S2 trials are presented with a higher percentage of S1 trials reinforced; when this is done, S1 receives both more rft/stimulus time and more rft/session than S2 (e.g., Catlin & Gleitman, 1968). Moreover, rft/stimulus time and rft/session are necessarily confounded in any experiment where S1 and S2 are always presented together for equal lengths of time, as is done, for example, in simultaneous discrimination experiments and in concurrent schedule experiments. In Expt. 1, S1 and S2 were presented separately for different lengths of time before the choice test, so that rft/stimulus time and rft/session could be varied independently in different groups.

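The following is a minimal computational sketch, not taken from the paper, of the first response-strength account described above: strength grows by a fixed increment whenever a response to a stimulus is reinforced and decays with time that the stimulus is present without reinforcement. The function name, the gain and decay parameters, the trial-by-trial update, and the ratio choice rule are all illustrative assumptions.

```python
# Toy response-strength model; parameters and update rule are assumptions made
# only for illustration, not fitted to the data reported in this paper.
import random

def train_strength(n_pos, n_neg, trial_sec=6.2, sessions=16, gain=0.2,
                   decay=0.01, seed=0):
    """Toy strength after separate-stimulus training with n_pos reinforced and
    n_neg nonreinforced trials per session, presented in a mixed order."""
    rng = random.Random(seed)
    strength = 0.0
    per_trial_decay = (1.0 - decay) ** trial_sec   # decay over one trial's stimulus time
    for _ in range(sessions):
        trials = [True] * n_pos + [False] * n_neg
        rng.shuffle(trials)                        # mixed order within each session
        for reinforced in trials:
            strength *= per_trial_decay            # nonreinforced stimulus exposure
            if reinforced:
                strength += gain                   # single reinforcement at trial end
    return strength

def predicted_choice(s1, s2):
    """Toy choice rule: predicted relative rate of response to S1."""
    return s1 / (s1 + s2)

# Expt. 1, Group T: equal rft/session, but S1 has the higher rft/stimulus time.
print(round(predicted_choice(train_strength(12, 0), train_strength(12, 24)), 2))
# Expt. 1, Group N: equal rft/stimulus time, but S2 has three times the rft/session.
print(round(predicted_choice(train_strength(12, 0), train_strength(36, 0)), 2))
```

Under these assumptions the sketch predicts a choice preference for S1 in Group T but indifference in Group N; whether a model of this kind fits the data is taken up in the General Discussion.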
General Method

Subjects and apparatus. Each of the experiments used experimentally naive male White King pigeons, 5-7 years old, maintained at 75-85% of free-feeding weight. Six Lehigh Valley Electronics pigeon chambers were modified so that the front panels contained two 28-mm square holes, horizontally adjacent to each other with their edges 4 mm apart. A 7-mm diameter spot in the center of each key could be illuminated from behind with red or green light. Trials were presented independently to each experimental chamber using standard relay circuitry. Pecks were recorded on standard electromagnetic counters. In addition, in Expt. 2 each peck was recorded along with its time to the nearest 1/10 sec on punched paper tape to allow analysis of latencies and interresponse times.

Procedure. Brief trials were used that lasted a fixed period of time regardless of the number of pecks that occurred. In Expts. 1 and 2, trials were 6.2 sec long and time between trial onsets varied from 24 to 132 sec. On negative trials no reinforcement was available, but on positive trials a single reinforcement (4-sec access to grain) was presented at the end of the trial if one or more pecks occurred to a lit key. Fixed-length trials were used so that the total time each stimulus was present could be precisely controlled. The minimal response requirement of only one peck on positive trials was used so that programmed reinforcements would almost always be delivered. For example, the response requirement was met and reinforcement was delivered on an average of 98% of all positive trials during the last ten sessions of separate-stimulus training in Expt. 1.

On separate-stimulus trials only one key was lit, either red or green; on choice trials both keys were lit, one red and one green. For both types of trials, the colors were presented equally often on each key in every experimental session, so that key location was always varied and irrelevant. Color assignments were counterbalanced within each experimental group, with S1 red for half the birds in each group and green for the remaining birds. A background masking noise (75-80 dB re SPL) was replaced by a 1000-Hz tone (75-80 dB) during every trial.

During the first three sessions of each experiment, the birds were trained to peck by presenting the same types of positive and negative trials they were to receive throughout separate-stimulus training, but giving reinforcement following positive trials whether or not a peck occurred. This procedure is similar to the “autoshaping” procedure of Brown and Jenkins (1968), and successfully produced pecking in 63 of 65 birds in these experiments.

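As a small illustration of the fixed-length trial contingency described above under Procedure (a sketch only; the actual experiments used relay circuitry, and the peck times below are invented):

```python
# Sketch of the discrete-trial contingency: a trial lasts a fixed time regardless
# of pecking, and a positive trial ends with one 4-sec grain reinforcement only
# if at least one peck occurred to a lit key during the trial.
def trial_ends_with_reinforcement(peck_times_sec, positive_trial, trial_sec=6.2):
    pecked = any(0.0 <= t < trial_sec for t in peck_times_sec)
    return positive_trial and pecked

print(trial_ends_with_reinforcement([0.9, 1.4, 2.2, 5.8], positive_trial=True))   # True
print(trial_ends_with_reinforcement([], positive_trial=True))                     # False
print(trial_ends_with_reinforcement([1.1, 3.0], positive_trial=False))            # False
```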

Design of Experiment 1

Three groups of six birds were run in Expt. 1. During separate-stimulus training each group received 12 S1 trials and 36 S2 trials daily, but the proportion of trials reinforced in each group was manipulated so that rft/stimulus time and rft/session would vary (see Table 1). In Group T the stimuli differed in rft/stimulus time but rft/session was held constant; in Group N the stimuli differed in rft/session but rft/stimulus time was held constant; and in Group TN both variables were manipulated so that one stimulus was associated with more rft/stimulus time and the other with more rft/session.

TABLE 1
Design and Results of Expt. 1

         Number of positive (+) and      Rft/stimulus      Rft/          Relative rate
         negative (-) trials             time(a)           session       of response(b)
Group    S1         S2                   S1      S2        S1     S2     Separate   Choice
T        12+        12+, 24-             9.7     3.2       12     12     .51        .82
N        12+        36+                  9.7     9.7       12     36     .47        .33
TN       12+        24+, 12-             9.7     6.5       12     24     .49        .68

(a) Rft/stimulus time is expressed in reinforcements per minute.
(b) Relative rate of response is (rate of response to S1)/(rate to S1 plus rate to S2). The first value shown for each group is the group mean over the last five sessions of separate-stimulus training; the second value is the group mean on the choice test.

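As a quick arithmetic check (not taken from the paper), the Rft/stimulus time column of Table 1 follows directly from the trial counts and the 6.2-sec trial length:

```python
# Check of the Rft/stimulus time values in Table 1 (reinforcements per minute),
# assuming the 6.2-sec trials described under General Method.
def rft_per_stimulus_minute(n_reinforced_trials, n_total_trials, trial_sec=6.2):
    return 60.0 * n_reinforced_trials / (n_total_trials * trial_sec)

print(round(rft_per_stimulus_minute(12, 12), 1))  # S1, all groups: 9.7
print(round(rft_per_stimulus_minute(12, 36), 1))  # S2, Group T:    3.2
print(round(rft_per_stimulus_minute(24, 36), 1))  # S2, Group TN:   6.5
```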
Each group was given 16 sessions of separate-stimulus training with the trial types shown in Table 1 presented in a mixed order in each session. In session 17, each group was given half of a normal separate-stimulus session, then 20 nonreinforced choice trials, then the second half of a normal session, then another 20 nonreinforced choice trials. The 40 choice trials taken together will be referred to as the choice test.

Results and Discussion

The main finding of Expt. 1 was that differences in rft/stimulus time during separate-stimulus training strongly influenced responding on the choice test, while the effect of rft/session was considerably weaker and not consistent for all birds. Neither variable significantly affected response rates during separate-stimulus training.



FIG. 1. Relative rate of response to S1 for each bird in Expt. 1 averaged over the last five sessions of separate-stimulus training (0) and on the choice test (1). Each line connects values for an individual bird. Filled triangles indicate statistically significant preferences for S1 or S2.

Relative rates of response to S1 averaged over the last five sessions of separate-stimulus training and on the choice test are shown for individual birds in Fig. 1, and group means are given in Table 1. Relative rate of response to S1 is (rate of response to S1)/(rate to S1 plus rate to S2); values above .5 indicate a higher rate of response to S1, while values below .5 indicate a higher rate of response to S2. At the end of separate-stimulus training, response rates appeared asymptotic at an average of 3.3 responses per second and rates of response to S1 and S2 were close to each other in all birds (relative rates of response close to .5 in Table 1 and Fig. 1). Relative rates of response on the choice test differed markedly from .5 in most birds, indicating strong preferences for one stimulus or the other.

Preferences on the choice test were not closely related to differences in response rate to the two stimuli at the end of separate-stimulus training. Rank order correlation coefficients between mean relative rate of response to S1 over sessions 12-16 and relative rate of response to S1 on the choice test were -.54 in Group T, +.03 in Group N, and +.49 in Group TN.

Choice responding, unlike separate-stimulus responding, was strongly influenced by one of the variables manipulated during separate-stimulus training. All six birds in Group T pecked at a higher rate on the choice test to the stimulus previously associated with more rft/stimulus time. Moreover, this preference for S1 was significant (two-tailed p < .05) in each of these birds according to a sign test based on the proportion of choice trials in which more responses were made to S1 than S2.

The effect of rft/session on choice was less clearcut, since only five of the six birds in Group N pecked at a higher rate to the stimulus associated with a higher number of rft/session, and only three of the birds showed a significant preference for that stimulus. While this suggests a possible effect, it must be interpreted cautiously since the preference for S2 was not consistent for all birds. In Group TN, where rft/stimulus time and rft/session were effectively competing, five of the six birds pecked at a higher rate to the stimulus associated with a higher value of rft/stimulus time. This is consistent with the finding in Groups T and N that rft/stimulus time had a strong effect on choice while the effect of rft/session was less clearcut.

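For concreteness, the following is a brief sketch of the two measures used in reporting these results: the relative rate of response defined in Table 1, and a per-bird sign test on choice trials. The counts are invented examples, and implementing the sign test as an exact two-tailed binomial test (with tied trials already excluded) is an assumption about the form of the test.

```python
# Relative rate of response and a per-bird sign test on choice trials.
# Invented example numbers; the binomial form of the sign test is an assumption.
from math import comb

def relative_rate_to_s1(pecks_s1, time_s1_sec, pecks_s2, time_s2_sec):
    """(rate of response to S1) / (rate to S1 plus rate to S2)."""
    rate_s1 = pecks_s1 / time_s1_sec
    rate_s2 = pecks_s2 / time_s2_sec
    return rate_s1 / (rate_s1 + rate_s2)

def sign_test_p(trials_favoring_s1, n_trials):
    """Two-tailed sign-test p value against a .5 null, ties already excluded."""
    k_small = min(trials_favoring_s1, n_trials - trials_favoring_s1)
    tail = sum(comb(n_trials, i) for i in range(k_small + 1)) / 2 ** n_trials
    return min(1.0, 2.0 * tail)

# A bird that pecks S1 at 4 per sec and S2 at 1 per sec over a 40-trial test,
# and makes more responses to S1 than to S2 on 34 of the 40 choice trials.
print(round(relative_rate_to_s1(992, 248.0, 248, 248.0), 2))   # 0.8
print(sign_test_p(trials_favoring_s1=34, n_trials=40) < .05)   # True
```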
214

WALTER

VOM

SAAL

EXPERIMENT 2
FURTHER EXAMINATION OF REINFORCEMENTS PER SESSION

Because the effect of rft/session was not clearcut in Expt. 1, that variable was examined further in Expt. 2 using a larger number of subjects. To increase the chances of an effect, the difference between S1 and S2 in rft/session was made larger than that used in Expt. 1. In addition, two groups of birds were run so the effects of rft/session could be examined against different background values of proportion of trials followed by reinforcement. After this phase of Expt. 2 was complete, all birds were given training in which rft/stimulus time was manipulated as in Expt. 1, in order to see whether that variable would again have a strong effect even after a considerable amount of other training.

Design and Procedure

Apparatus and details of procedure were the same as those used in Expt. 1. Two groups of naive pigeons were given 26 sessions of separate-stimulus training, with the trial types shown in Table 2 presented in a mixed order in each session.

TABLE 2
Design and Results of Expt. 2

         Number of positive (+) and
         negative (-) trials             Relative rate of response(a)
Group    S1         S2                   Separate   Test 1   Test 2   Test 3
N(H)     8+         40+                  .50        .46      .45      .81
N(L)     1+, 7-     5+, 35-              .51        .50      .54      .87

(a) Relative rate of response is (rate of response to S1)/(rate to S1 plus rate to S2). The first value shown for each group is the group mean over the last five sessions of separate-stimulus training prior to the first choice test; the remaining values are group means on the three choice tests.


In each group S2 received five times as many rft/session as S1, but the proportion of trials followed by reinforcement was high for both stimuli in Group N(H) and low for both stimuli in Group N(L). Birds in Group N(L) began training with all trials positive, but the proportion of trials followed by reinforcement was gradually reduced to that shown in Table 2. There were nine birds in Group N(H) and six birds in Group N(L). Each group was given a choice test in the 27th session, when the only trials presented were 40 nonreinforced choice trials. For the next 10 sessions, separate-stimulus training was resumed with the numbers of positive and negative trials used with S1 and S2 reversed for each bird, and in the 38th session a second choice test was given. Finally, all birds were given 16 sessions of separate-stimulus training the same as that given to Group T in Expt. 1, followed by a third choice test identical to the first two choice tests.

Results and Discussion

Even a five-to-one difference in rft/session during separate-stimulus training had no effect on choice in Expt. 2. The relative rates of response given in Fig. 2 and Table 2 show that, as in Expt. 1, rates of response to S1 and S2 were very similar at the end of separate-stimulus training. On the first choice test, only 9 of the 15 birds pecked S2 more than S1, so there was no consistent preference for the stimulus previously associated with a higher number of rft/session.

After the first choice test it remained possible that rft/session was having an effect, but that this effect was competing against strong color preferences in individual birds. To test this possibility, birds were given further separate-stimulus training with S1 now receiving more rft/session than S2.


FIG. 2. Relative rate of response to S1 for each bird in Expt. 2 averaged over the last five sessions of separate-stimulus training prior to the first choice test (0), on the first choice test (1), on the second choice test (2), and on the third choice test (3). Each line connects values for an individual bird. Filled triangles indicate statistically significant preferences for S1 or S2.


If rft/session had any effect, birds that preferred S1 in the first choice test because of a color preference should now prefer S1 even more strongly, while birds that preferred S2 in the first choice test might still prefer S2, but should prefer it less strongly, so that relative rate of response to S1 would increase for all birds. This did not happen: relative rate of response to S1 increased in seven birds and decreased in eight birds from the first to the second choice test (Fig. 2). Rft/session had no effect in this experiment, even using a within-subject test that should have been quite sensitive to such an effect.

Following the second choice test, all birds were given separate-stimulus training in which S1 had more rft/stimulus time than S2, in order to see whether the strong effect of rft/stimulus time found in Expt. 1 would be replicated even following a considerable amount of other training. Rft/stimulus time again had a strong effect: as shown in Fig. 2, 14 of 15 birds responded at a higher rate to S1 than S2 on the third choice test, and the preference for S1 was significant for all 14 of those birds (two-tailed p < .05 by the test described earlier).

In this experiment latencies and interresponse times were recorded during separate-stimulus training as well as overall response rates. These measures agreed in showing very little difference in responding to S1 and S2 during separate-stimulus training. For instance, during the five separate-stimulus sessions preceding the third choice test, average latency of first response across all birds was 1.10 sec for S1 and 1.29 sec for S2, and average interresponse time was .252 sec for S1 and .251 sec for S2.

These data imply that during separate-stimulus training the delay between the last peck on a positive trial and subsequent reinforcement was normally quite short. Trials ended at a fixed interval after trial onset in these experiments, so there was normally some delay between the last peck on a trial and the reinforcement that came when the trial ended. The typical response pattern was one of rapid pecking throughout the entire trial, however, so average delay of reinforcement can be estimated by taking half the average interresponse time. The interresponse time data presented above indicate that average delay of reinforcement near the end of separate-stimulus training was less than a fifth of a second and was almost identical for S1 and S2.

As in Expt. 1, the small differences in response rate to S1 and S2 that did occur at the end of separate-stimulus training were not closely related to relative rates of response on the subsequent choice test. Combining the two groups, rank order correlation coefficients between relative rate of response on a choice test and mean relative rate of response during the five preceding separate-stimulus sessions were +.29 for the first choice test, +.28 for the second choice test, and +.43 for the third choice test.

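A sketch of how rank-order correlations of the kind just reported can be computed, assuming Spearman's coefficient is what is meant and that SciPy is available; the per-bird values below are invented examples, not data from the experiments.

```python
# Rank-order (Spearman) correlation between a training measure and a choice-test
# measure across birds. The relative rates below are invented example values.
from scipy.stats import spearmanr

training_relative_rate = [0.48, 0.52, 0.50, 0.47, 0.55, 0.51, 0.49, 0.53]
choice_relative_rate   = [0.62, 0.45, 0.71, 0.38, 0.80, 0.55, 0.66, 0.41]

rho, p = spearmanr(training_relative_rate, choice_relative_rate)
print(round(rho, 2))
```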
SEPARATE-STIMULUS

TRAIKINC

AND

CHOICE

217

Analyses based on relative latencies and relative interresponse times over the five sessions preceding each choice test showed that those measures were on the average no more useful than relative rates in predicting subsequent choice test behavior. Moreover, using measures taken from the last day of separate-stimulus training instead of averaged over the last five days, or taking measures over the first half of the choice test session instead of the entire session, reduced rather than increased the consistency in the data.

EXPERIMENT 3
REINFORCEMENTS PER STIMULUS TIME AND PROPORTION OF TRIALS FOLLOWED BY REINFORCEMENT

In both Expts. 1 and 2, manipulation of rft/stimulus time strongly affected subsequent choice behavior. However, in both of those experiments rft/stimulus time was manipulated by varying the proportion of trials followed by reinforcement, so that rft/stimulus time and proportion of trials followed by reinforcement were confounded. In Expt. 3, different trial lengths were used with S1 and S2 in order to isolate these variables.

Design and Procedure

Thirty naive pigeons were used in the same apparatus and with the same details of procedure that were used in Expts. 1 and 2. In this experiment, however, trials could be 3.2, 6.2, or 12.2 sec long, and time between trial onsets varied from 36 to 198 sec. Three groups of 10 birds were given the trial lengths and numbers of positive and negative trials shown in Table 3. In Group T-Only, stimuli differed in rft/stimulus time but not in proportion of trials followed by reinforcement. In Group P-Only, stimuli differed in proportion of trials followed by reinforcement but were similar in rft/stimulus time. In Group TP, stimuli differed in both rft/stimulus time and proportion of trials followed by reinforcement, just as they had when rft/stimulus time was manipulated in Expts. 1 and 2.

After 16 sessions of separate-stimulus training, all birds were given 12 choice test sessions, each consisting of 24 nonreinforced choice trials. The first choice session allowed examination of the effects of the independent variables on choice; the remaining choice sessions allowed examination of the extent to which choice would be maintained throughout extinction. In each choice session, half the trials were 3.2 sec long and half were 12.2 sec long, in a mixed order.


Both short and long trials were used to evaluate shifts in preference as a trial progressed; such shifts would be revealed by differences in average rates of response on short and long trials.

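A small illustration, with assumed values rather than the actual Table 3 entries, of the design logic described above: holding the proportion of reinforced trials fixed, shorter trials yield a higher rft/stimulus time.

```python
# Illustration (assumed values) of how trial length manipulates rft/stimulus time
# independently of the proportion of trials reinforced.
def rft_rate_per_minute(proportion_reinforced, trial_sec):
    return 60.0 * proportion_reinforced / trial_sec

for trial_sec in (3.2, 6.2, 12.2):
    # same proportion reinforced in each case; only trial length changes
    print(trial_sec, round(rft_rate_per_minute(0.5, trial_sec), 1))
```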
Results and Discussion

In the first choice session, significant preferences for S1 occurred both in Group T-Only and in Group TP, but no significant preference for either stimulus occurred in Group P-Only. This means that rft/stimulus time had a strong influence on choice even when proportion of trials followed by reinforcement was held constant, but proportion of trials followed by reinforcement did not influence choice when rft/stimulus time was held constant.

Relative rates of response to S1 at the end of separate-stimulus training and in the first choice session are plotted for individual birds in Fig. 3, with group means given in Table 3. At the end of separate-stimulus training, rates of response to S1 and S2 differed more in individual birds than they had in the previous experiments. However, relative rates of response averaged over the last 5 days of separate-stimulus training showed that preferences for a particular stimulus were not consistent for more than eight of the ten birds in any group.

In the first choice test session, all ten birds in Group T-Only responded at a higher rate to S1 than S2, and this preference was significant in nine of the ten birds (two-tailed p < .05 by the test described earlier). In Group TP, nine of the ten birds responded at a higher rate to S1 than S2, and this preference was significant in four of the birds. In Group P-Only, however, only three birds responded at a higher rate to S1, while six birds responded at a higher rate to S2, so there was no consistent preference for either stimulus in this group.


FIG. 3. Relative rate of response to S1 for each bird in Expt. 3 averaged over the last five sessions of separate-stimulus training (0) and on the first choice session (1). Each line connects values for an individual bird. Filled triangles indicate statistically significant preferences for S1 or S2.


There were no significant differences between relative rates of response on short and long choice trials in any group, so the data presented above are based on short and long trials considered together, with response rates calculated simply as total responses to a stimulus divided by total time the stimulus was present.

As in Expts. 1 and 2, choice test behavior in Expt. 3 was not closely related to behavior at the end of separate-stimulus training. Rank order correlation coefficients between mean relative rate of response to S1 over sessions 12-16 and relative rate of response to S1 in the first choice session were +.51 in Group T-Only, +.18 in Group P-Only, and +.13 in Group TP.

Data for the remaining 11 choice sessions showed movement toward chance performance within the first few sessions in each group. The proportion of birds showing relative rates of response to S1 greater than .5 across the first four choice test sessions was 10/10, 7/10, 7/10, and 4/9 in Group T-Only; 3/9, 1/9, 4/8, and 4/10 in Group P-Only; and 9/10, 9/10, 6/10, and 4/10 in Group TP. For about half of the birds in Groups T-Only and TP, where there were strong preferences in the first choice session, the weakening preferences over the second and third choice sessions were accompanied by an increase in the absolute rate of response to the less preferred stimulus. When total responses in the 12 extinction sessions were considered, there were no significant differences between total responses to S1 and S2 in any group, and there were no significant differences between any two groups in total responses to both stimuli.

GENERAL DISCUSSION

Of the variables manipulated during separate-stimulus training in these experiments, the only variable having a strong and consistent effect on choice was rft/stimulus time. This variable strongly influenced choice in each of the three experiments, and did so whether it was manipulated by varying the proportion of trials reinforced (Expts. 1 and 2) or the lengths of reinforced trials (Expt. 3). However, response rates to S1 and S2 were always very similar throughout separate-stimulus training in these experiments, which means that whenever rft/stimulus time was varied, rft/response also changed. In each experiment, the stimulus associated with a higher number of reinforcements per unit time with the stimulus present was strongly preferred on the choice test, but in each experiment that stimulus was also the stimulus in whose presence the highest proportion of responses was reinforced. These experiments did not assess whether rft/stimulus time, rft/response, or both of these variables would influence choice in a situation where differential response rates to S1 and S2 during separate-stimulus training allowed the variables to be separated.


A number of other variables were found to have little or no effect on choice in these experiments when rft/stimulus time (and therefore also rft/response) was held constant. These included the total number of trials presented with each stimulus (Expt. 2), proportion of trials followed by reinforcement (Expt. 3, Group P-Only), trial lengths used (Expt. 3, Group P-Only), and rft/session (Expt. 2).

If one assumes that choice is a function of response strengths, these results have implications concerning the kinds of events that increase or decrease response strength. For example, the lack of an effect due to rft/session in Expt. 2 suggests that response strengths to S1 and S2 rise to an asymptote that is not heavily influenced by the total number of reinforcements received with each stimulus or by the relative number of reinforcements per session the animal continues to receive with each stimulus. Similarly, the lack of an effect due to proportion of trials reinforced, when separated from rft/stimulus time in Expt. 3, suggests that the termination of a trial without reinforcement is not by itself an aversive event that markedly reduces response strength. On the other hand, a model that is consistent with the present data is that response strength to each stimulus increases each time a response to that stimulus is reinforced, but decreases as time passes with the stimulus present and responses nonreinforced.

A response strength model of this sort might lead one to expect differences in rates of response to S1 and S2 at the end of separate-stimulus training as well as on the choice tests. Such differences were observed by Catlin and Gleitman (1968) and Honig (1962), who found that the magnitude of the differences on separate-stimulus trials was related to the degree of preference observed on choice trials. In the present experiments, however, rates of response to S1 and S2 were always very similar (relative rate of response to S1 near .5) at the end of separate-stimulus training, and whatever differences occurred were not consistent across birds in a group. Moreover, the low correlations reported above show that the small differences in response rate to S1 and S2 that did occur at the end of separate-stimulus training were not closely related to relative rates of response to S1 on the choice test. These results do not necessarily rule out a response strength model of the sort discussed above. However, they do suggest that the “response strengths” posited as developing to S1 and S2 during separate-stimulus training should not be directly tied to observed rates of response to S1 and S2 when presented separately. One might argue, for example, that the rather high values of rft/stimulus time used throughout these experiments produced high response strengths to both stimuli, so that observed rates of response to the stimuli presented separately were asymptotic, and differential response strengths could only be revealed on the choice tests.


REFERENCES

BROWN, P. L., & JENKINS, H. M. Auto-shaping of the pigeon's key-peck. Journal of the Experimental Analysis of Behavior, 1968, 11, 1-8.
CATLIN, J., & GLEITMAN, H. Relations between choice and latency measures in a selective learning paradigm. Journal of Mathematical Psychology, 1968, 5, 422-441.
D'AMATO, M. R., LACHMAN, R., & KIVY, P. Secondary reinforcement as affected by reward schedule and the testing situation. Journal of Comparative and Physiological Psychology, 1958, 51, 737-741.
DIVAK, M., & ELLIOTT, R. Effects in extinction, with and without free choice, of degree of experience with stimuli associated with both partial and continuous reinforcement. Psychonomic Science, 1967, 7, 255-256.
HERRNSTEIN, R. J. On the law of effect. Journal of the Experimental Analysis of Behavior, 1970, 13, 243-266.
HONIG, W. K. Prediction of preference, transposition, and transposition-reversal from the generalization gradient. Journal of Experimental Psychology, 1962, 64, 239-248.
HULL, C. L. Principles of Behavior. New York: Appleton-Century-Crofts, 1943.
LOGAN, F. A. The negative incentive value of punishment. In B. A. Campbell and R. M. Church (Eds.), Punishment and Aversive Behavior. New York: Appleton-Century-Crofts, 1969. Pp. 43-54.
MASON, D. J. The relation of secondary reinforcement to partial reinforcement. Journal of Comparative and Physiological Psychology, 1957, 50, 264-268.
PAVLIK, W. B., & BORN, D. G. Partial reinforcement effects in selective learning. Psychological Reports, 1962, 11, 575-580.
SPENCE, K. W. Behavior Theory and Conditioning. New Haven: Yale University Press, 1956.
VOM SAAL, W. Choice between stimuli associated with different histories of reinforcement when presented separately. Unpublished Ph.D. thesis, McMaster University, 1969.

(Received April 22, 1971)