
Practice Exam: 378 Questions with Approved Answer Sheet (Updated)


Name: ________ Date: ________
1. Angie's angry boss wears a particular type of cologne. One day, Angie was at the mall when she smelled the same cologne. The smell produced a momentary feeling of uneasiness. In this example, what is serving as the unconditioned stimulus? A) the cologne B) the angry boss C) Angie D) feelings of uneasiness
2. In eyeblink conditioning, a tone is sounded immediately before air is puffed into a participant's eye. After a number of tone–air puff pairings, what is the conditioned response? A) the perception of the tone B) the air puff C) blinking when the tone sounds D) blinking when air is puffed into the eye
3. How could extinction be used to eliminate Little Albert's fear of a white rat? A) Repeatedly expose Little Albert to the loud noise in the absence of the rat. B) Give Albert candy when he sees the rat. C) Punish Albert when he cries and crawls away from the rat. D) Repeatedly present the rat, and do not follow it with the loud noise.
4. Pavlov's dogs were conditioned to salivate to a 1000-Hz tone due to its pairing with food. After this training, it was found that the dogs would salivate to a 900-Hz tone but not to a 500-Hz tone. Salivating to the 900-Hz tone is an example of ________, and not salivating to a 500-Hz tone is an example of ________. A) a conditioned response; generalization B) discrimination; extinction C) generalization; discrimination D) a conditioned response; negative punishment
5. ________ activity is a critical underlying mechanism in conditioning responses similar to the one conditioned in Little Albert. A) Amygdala B) Hippocampal C) Broca's area D) Motor cortex
6. The idea of ________ suggests that conditioning has an evolutionary perspective, in that certain associations will be more easily learned if they help the organism's survival. A) biological preparedness B) natural selection C) implicit learning D) the enculturation hypothesis
7. Classical conditioning is to operant conditioning as: A) Thorndike is to Skinner. B) Skinner is to Pavlov. C) explicit is to implicit. D) reactive is to active.
8. Positive reinforcement involves the ________ of a stimulus when a behavior occurs, and positive punishment involves the ________ of a stimulus when a behavior occurs. A) presentation; removal B) presentation; presentation C) removal; presentation D) removal; removal
9. Jake can make his sister, Joy, give him her toys or candy by whistling. Because she hates the sound so much, Joy will give Jake whatever he wants to make him stop. Jake's whistling is maintained by ________, and Joy's giving is being maintained by ________. A) positive reinforcement; negative punishment B) negative reinforcement; positive reinforcement C) positive reinforcement; negative reinforcement D) negative reinforcement; extinction
10. ________ the delay between a response and a consequence ________: A) Decreasing; strengthens the effects of reinforcement but weakens the effects of punishment. B) Decreasing; strengthens the effects of punishment but weakens the effects of reinforcement. C) Increasing; weakens the effects of reinforcement but strengthens the effects of punishment. D) Increasing; weakens the effects of both reinforcement and punishment.
11. A teacher originally wanted to increase class participation, so she positively reinforced question asking with nickels. After a few days, students had stacks of nickels on their desks, and the class was asking far too many questions. The teacher decided things were better off when the students didn't ask so many questions. To reduce question asking, the teacher has several options available to her. She could approach the students' desks and physically remove a nickel from their stacks whenever they ask a question, a process known as ________. Or, she could simply no longer give students nickels when they ask questions, a process termed ________. A) extinction; negative punishment B) negative punishment; extinction C) positive punishment; negative punishment D) extinction; positive punishment
12. Although it is not known if you have to buy 1, 10, or 100 scratch-off lottery tickets to get a winner, it is highly probable that if you just keep buying, eventually you will get a winner. In fact, maybe the very next ticket you buy will be a winner. Buying scratch-off lottery tickets is reinforced according to which type of schedule? A) fixed ratio B) variable ratio C) fixed interval D) variable interval
13. ________ suggested that rats in a maze don't simply produce behaviors in response to stimuli or consequences, but rather develop a ________ of the maze. A) Tolman; cognitive map B) Tolman; latent response C) Thorndike; cognitive map D) Thorndike; latent response
14. When Jasminda first moved into her apartment, the buzzing noise coming from her refrigerator drove her nuts. Now, after three months, Jasminda doesn't notice the sound because of: A) stimulus generalization. B) habituation. C) extinction. D) latent learning.
15. Research has shown that students led to believe that a lecture will end in a quiz will: A) distribute their study in the week prior to the lecture. B) take practice tests the night before the lecture. C) engage in less mind wandering during the lecture. D) practice visual imagery mnemonics during the lecture.
16. The school of psychology most associated with learning is: A) structuralism. B) functionalism. C) behaviorism. D) cognitive psychology.
17. Rachel is training her dog, Duke, to bark when there is a knock on the door. She already knows that the family cat will make Duke bark, so she decides to knock on the door and then open the door to reveal the cat in an effort to condition Duke to bark. What is serving as the unconditioned stimulus? A) the cat B) the knock C) Duke's bark D) Rachel
18. In classical conditioning, a(n) ________ elicits a(n) ________ due to its association with a(n) ________. A) conditioned stimulus; conditioned response; unconditioned response B) unconditioned response; conditioned response; unconditioned response C) unconditioned stimulus; unconditioned response; conditioned stimulus D) conditioned stimulus; conditioned response; unconditioned stimulus
19. The adaptive ability for organisms to transfer learning of one particular instance to another very similar case is evidence of: A) generalization. B) spontaneous recovery. C) reinforcement. D) discrimination.
20. Little Albert developed a fear of a ________ because it was paired with a(n) ________. A) Santa Claus mask; electric shock B) loud noise; rabbit C) dog; white rat D) white rat; loud noise
21. The Rescorla–Wagner model showed that classical conditioning actually involved a cognitive aspect based on the organism's: A) expectations. B) generalizations. C) discriminations. D) preferences.
22. Learned food aversions are generally acquired from ________ due to the evolutionary adaptive conditioning of rejecting foods that may be toxic. A) familiar foods B) novel foods C) acidic foods D) foods with strong odors
23. Positive punishers ________ behavior, and negative punishers ________ behavior. A) strengthen; strengthen B) strengthen; weaken C) weaken; weaken D) weaken; strengthen
24. Jane drinks alcohol because it reduces her anxiety. Jane's drinking is being maintained by: A) positive reinforcement. B) negative reinforcement. C) positive punishment. D) negative punishment.
25. On a daily basis, we experience more ________ reinforcers than ________ ones. A) primary; secondary B) secondary; primary C) positive; secondary D) primary; negative
26. Which schedule arranges reinforcement on the basis of time and response? A) fixed ratio B) variable interval C) variable ratio D) continuous
27. Armed with bits of jerky, Alfred taught his pet dog to lie down and roll over. He MOST likely used ________ to teach his dog this trick. A) secondary reinforcement B) negative reinforcement C) a diffusion chain D) shaping through successive approximations
28. A pleasure center of the brain is located in the: A) medulla. B) nucleus accumbens. C) basal ganglia. D) pineal gland.
29. After watching an adult model interact aggressively with a Bobo doll, Bandura noted that children behaved aggressively with the doll: A) but did not tend to imitate the specific actions modeled by the adult model. B) but only if they were the same gender as the model. C) even if they saw the adult model being punished for these aggressive behaviors. D) especially when the model's aggressive behavior was rewarded.
30. Which statement about study techniques is FALSE? A) Distributed practice is more effective than massed practice. B) Practice testing is more effective than rereading material. C) Highlighting or underlining material has low utility. D) The use of mnemonics is more effective than distributed practice.
Use the following to answer questions 31-34:
Scenario I
Tyler is physically dependent on heroin and uses it intravenously multiple times per day. Most often he uses with his dealer in a drug house on his street. One day when his dealer was out of town, Tyler met a group of fellow users going to a drug party about two hours away. Tyler tagged along, and noted that he felt odd injecting the drug in this setting, because the new people and environment were a marked departure from his usual routine. Minutes later, although the drug was of the same quality and he took no more or less than he usually did, Tyler overdosed and had to be rushed to the emergency room.
31. (Scenario I) Several decades ago, the psychologist Shepard Siegel first noted the role that classical conditioning plays in drug overdose. In this model, the unconditioned stimulus in the scenario is ________ and the conditioned stimulus is ________. A) heroin; the new environment B) heroin; the usual drug-taking environment C) the new environment; heroin D) the usual drug-taking environment; heroin
32. (Scenario I) According to a classical conditioning account of heroin overdose, one unconditioned response is a(n) ________ in the rate of respiration and one conditioned response is a(n) ________. A) increase; subsequent cardiovascular collapse B) increase; similar increase in heart rate C) decrease; subsequent cardiovascular collapse D) decrease; increase in the rate of respiration
33. (Scenario I) According to a classical conditioning account of drug overdose, after numerous environment–drug pairings, the environment can: A) trigger effects similar to the drug even in the absence of the drug. B) markedly increase the likelihood that overdose will occur. C) help prepare the body for the drug. D) sensitize a user to the drug such that less now is required to obtain the same effect.
34. (Scenario I) According to a classical conditioning account of drug overdose, because Tyler always used in the same location with the same person, the presence of those cues resulted in a(n): A) increased tolerance to heroin. B) increase in the reinforcing properties of heroin. C) increased sensitivity to the depressant effects of heroin. D) decreased expectancy of heroin.
Use the following to answer questions 35-39:
Scenario II
A boy with autism sometimes engages in self-stimulatory behavior such as waving his hands in front of his eyes while doing his homework. Curiously, he engages in this behavior when his father is around but usually not when his mother is around. An applied behavior analyst working with the boy closely observes the interactions between the boy and his parents in an effort to determine why the self-stimulatory behavior is occurring. She notes that on the rare occasions when the boy engages in the problem behavior in the presence of his mother, the mother simply ignores this behavior. However, after the problem behavior occurs a number of times in the presence of the father, the father often intervenes. While he does not provide attention to his son, he does remove the homework materials for a number of minutes, essentially giving the boy a break. This usually calms the boy and results in the cessation of the problem behavior. The therapist notes that the amount of self-stimulatory behavior before the father intervenes is quite unpredictable; sometimes the father intervenes after only one or two instances and sometimes he waits until many instances occur in an effort to keep the boy on-task with his homework as long as possible.
35. (Scenario II) The boy's self-stimulatory behavior is an example of: A) observational learning. B) behavior maintained by positive reinforcement. C) behavior maintained by negative reinforcement. D) a classically conditioned response triggered by the father.
36. (Scenario II) The applied behavior analyst would conclude that the self-stimulatory behavior is probably being maintained by: A) conditioned stimulus–unconditioned stimulus pairings between the father and the homework. B) access to enjoyable activities. C) escape. D) automatic reinforcers associated with sensory stimulation.
37. (Scenario II) The applied behavior analyst would conclude that the father's behavior of allowing his son to take breaks from his homework is probably being maintained by: A) extinction. B) negative reinforcement. C) positive reinforcement. D) observational learning.
38. (Scenario II) The mother is utilizing ________ to keep the boy's level of self-stimulatory behavior low. A) extinction B) positive punishment C) negative punishment D) delayed reinforcement
39. (Scenario II) The boy's self-stimulatory behavior produces consequences according to a ________ schedule. A) fixed-ratio B) variable-ratio C) fixed-interval D) variable-interval
40. A small girl visited the Science Center to view a tarantula, and as she did so, her brother screamed in her ear and pinched her. A few days later, her brother did the same thing when she viewed a tarantula at a zoo exhibit. Now the young girl is terrified of tarantulas. Briefly explain the learning that took place. Review the US, UR, CS, and CR. Include the following terms in your discussion, and predict the path that the girl's learning may eventually take: acquisition, generalization, discrimination, extinction, and spontaneous recovery.
41. Explain how classical conditioning is involved in drug tolerance and overdose.
42.
Contrast classical and operant conditioning in terms of (a) the nature of the behavior, (b) the role of stimuli that precede the response, and (c) the role of consequences. 43. Compare and contrast extinction in classical and operant conditioning. Give examples. 44. Discuss the cognitive, neural, and evolutionary elements of classical conditioning. 45. Bob has contracted a stomach virus and will be extremely sick in 6 hours. At the moment, however, Bob is completely unaware of his condition. In fact, he is starving for his favorite food, pizza. His roommate wants anchovies on the pizza; although Bob never has eaten anchovies, he agrees. Bob eats six slices of pizza and likes the taste of the anchovies. A few hours later, Bob becomes extremely sick to his stomach. Describe the likely taste aversion that Bob will experience. Be sure to identify the US, CS, UR, and CR. Finally, discuss how the conditioning process might differ if Bob was a pigeon in Central Park instead of a college student. 46. Provide a unique example of positive reinforcement, negative reinforcement, positive punishment, and negative punishment. In doing so, identify the target behavior, the consequence of that behavior, and how the consequence affects the future probability of that behavior. 47. A child does not reliably make his bed in the morning. His parents want to use monetary positive reinforcement to increase bed-making. Describe how the following schedules of reinforcement would operate: continuous reinforcement, fixed-ratio (FR) 7, variable-ratio (VR) 7, fixed-interval (FI) 7 days, variable-interval (VI) 7 days. If you were the parent, which schedule of reinforcement would you employ? Why? 48. How would you train a puppy to turn in a complete clockwise circle to receive a doggie treat using shaping through successive approximations? 49. Describe the procedure, results, and importance of Tolman's experiment on latent learning. 50. Many major-league baseball players are now wearing chord-type necklaces because they believe that the necklaces will enhance their athletic performance. Discuss two possible explanations for this phenomenon: Skinner's conceptualization of superstitions and observational learning. 51. Compare and contrast the observational learning of tool use in humans and chimpanzees. Briefly describe the difference in tool use between chimps raised in the wild versus those raised by humans. 52. Describe how artificial grammar has been used to study implicit learning. 53. Review some effective and ineffective techniques for learning academic material. When are difficulties in studying desirable? 54. The effects of learning are usually short-lived. A) True B) False 55. Learning occurs independently of an individual's experience. A) True B) False 56. Cognitive psychologists conducted most of the experiments on learning from the 1930s to the 1950s. A) True B) False 57. Habituation and sensitization are forms of learning that can be demonstrated in the simplest of organisms. A) True B) False 58. In Pavlov's work with dogs, the unconditioned stimulus was salivation. A) True B) False 59. In Pavlov's work with dogs, the conditioned stimulus was the sound of a bell. A) True B) False 60. In classical conditioning, the conditioned stimulus (CS) is initially neutral and does not evoke a conditioned response. A) True B) False 61. In Pavlov's work with dogs, animals were not presented food unless they salivated when they heard the tone. A) True B) False 62. 
Acquisition is the phase of classical conditioning in which the initially neutral CS and US are presented together. A) True B) False 63. Acquisition is the phase of classical conditioning in which the US elicits the UR. A) True B) False 64. Acquisition of a conditioned response starts low, increases slowly, and then rises rapidly for as long as the CS is paired with the US. A) True B) False 65. In second-order conditioning, a neutral stimulus becomes a CS when it is repeatedly paired with a previously established CS. A) True B) False 66. In second-order conditioning, a neutral stimulus becomes a US when it is repeatedly paired with a previously established CS. A) True B) False 67. Extinction of a conditioned response involves the repeated presentation of the US in the absence of the CS. A) True B) False 68. Pavlov extinguished the conditioned response by repeatedly presenting the bell and not following it with food. A) True B) False 69. Once a conditioned response has been extinguished, it will not occur again unless the CS–US pairings are reintroduced. A) True B) False 70. Extinction is the equivalent of erasing the effects of learning. A) True B) False 71. Stimuli paired with drugs often elicit effects opposite that of the drugs. A) True B) False 72. An experienced heroin user will have a higher risk of overdose if he injects the drug in an unfamiliar environment. A) True B) False 73. The presence of a strong compensatory response as a heroin user injects makes overdose from that injection more likely. A) True B) False 74. The development of a conditioned compensatory response contributes to drug tolerance. A) True B) False 75. Pavlov's dogs, which were conditioned to salivate at the sound of a tone, also may salivate at the sound of a doorbell, illustrating the process of generalization. A) True B) False 76. Pavlov's dogs, which were conditioned to salivate at the sound of a metronome, also may salivate at the sound of a ticking clock, illustrating spontaneous recovery. A) True B) False 77. Discrimination occurs when an organism displays a conditioned response to the CS but does not display a conditioned response to a similar stimulus. A) True B) False 78. Increased discrimination results in increased generalization. A) True B) False 79. William James conducted the Little Albert study. A) True B) False 80. Little Albert was conditioned to fear a loud noise. A) True B) False 81. The conditioned stimulus in the Little Albert study initially was a white rat. A) True B) False 82. The unconditioned stimulus in the Little Albert study initially was a white rat. A) True B) False 83. Through a process called stimulus generalization, Little Albert learned to fear anything that was white and furry. A) True B) False 84. After being conditioned to fear white rats, Albert also was afraid of white rabbits, illustrating the effects of stimulus discrimination. A) True B) False 85. The Little Albert study showed that fear conditioning in humans is quite different from fear conditioning in animals. A) True B) False 86. Therapies that systematically expose clients to stimuli associated with their trauma are effective because of the process of extinction. A) True B) False 87. According to the Rescorla–Wagner model, the US sets up an expectation for the CS. A) True B) False 88. Rescorla and Wagner argued that when the CS predicts the US, the occurrence of the CS will lead the organism to expect that the US is forthcoming. A) True B) False 89. 
The Rescorla–Wagner model predicts that conditioning will be easier when the CS is an unfamiliar event than when it is familiar. A) True B) False 90. Rescorla and Wagner postulated that the cognitive processes underlying classical conditioning usually are conscious. A) True B) False 91. The functions of the cerebellum are critical for eyeblink conditioning. A) True B) False 92. The functions of the hippocampus are critical for eyeblink conditioning. A) True B) False 93. The part of the brain that is essential to emotional conditioning is the thalamus. A) True B) False 94. The amygdala is critical for emotional conditioning. A) True B) False 95. If connections between the amygdala and midbrain are severed, the behavioral changes associated with fear conditioning will not occur. A) True B) False 96. If connections between the amygdala and hypothalamus are severed, the behavioral changes associated with fear conditioning will not occur. A) True B) False 97. Taste aversions can develop with just one taste–sickness pairing. A) True B) False 98. A taste aversion will not form unless the sickness follows food intake within a few minutes of ingestion. A) True B) False 99. In rats, birds, and humans, it is easier to condition an association between a taste and stomach sickness than a visual stimulus and stomach sickness. A) True B) False 100. The more often a person eats a particular food, the more likely it is that she ultimately will form a taste aversion to it. A) True B) False 101. Operant behavior is voluntary in nature. A) True B) False 102. Operant behavior is behavior under the control of its consequences. A) True B) False 103. Operant behavior is to a passive response as classically conditioned behavior is to an active response. A) True B) False 104. Operant behaviors are controlled more so by stimuli preceding the behavior than classically conditioned behaviors. A) True B) False 105. The law of effect was developed by studying the behavior of dogs in puzzle boxes. A) True B) False 106. Thorndike used the change in the time it took to escape from the puzzle box across trials as an index of learning. A) True B) False 107. B. F. Skinner developed the law of effect. A) True B) False 108. Edward Thorndike developed the law of effect. A) True B) False 109. The law of effect states that consequences that produce dopamine release will strengthen the behavior that produced it. A) True B) False 110. B. F. Skinner invented the operant conditioning chamber. A) True B) False 111. Edward Thorndike first used the term operant behavior. A) True B) False 112. B. F. Skinner was diametrically opposed to the consequence-based conceptualization of learning studied by Thorndike. A) True B) False 113. John Watson's approach to the study of learning focused on reinforcement and punishment. A) True B) False 114. Both reinforcers and punishers increase the future likelihood of behavior. A) True B) False 115. Skinner defined reinforcers and punishers based on whether or not most people would find them enjoyable. A) True B) False 116. Both positive and negative reinforcement increase the likelihood of future behavior. A) True B) False 117. In negative reinforcement, an unpleasant stimulus is removed upon the occurrence of the target behavior. A) True B) False 118. In negative punishment, an unpleasant stimulus is removed upon the occurrence of the target behavior. A) True B) False 119. Both positive reinforcement and positive punishment involve the presentation of a stimulus when a behavior occurs. 
A) True B) False 120. Positive reinforcement is usually considered desirable and negative reinforcement usually is considered undesirable. A) True B) False 121. Reinforcement is generally more effective at promoting learning than punishment. A) True B) False 122. Examples of primary reinforcement include food, comfort, and warmth. A) True B) False 123. Stimuli that help us satisfy biological needs are termed secondary reinforcers. A) True B) False 124. For most people, money is a powerful primary reinforcer. A) True B) False 125. The majority of human behavior is maintained by secondary reinforcement. A) True B) False 126. Secondary reinforcers acquire their value through classical conditioning. A) True B) False 127. Flashing lights paired with a speeding ticket may become primary punishers. A) True B) False 128. Delayed reinforcers are usually less effective than immediate reinforcers. A) True B) False 129. Delay between the behavior and the consequence affects the processes of reinforcement and punishment differently. A) True B) False 130. Pigeons have been taught to discriminate paintings by Monet from paintings by Picasso. A) True B) False 131. Extinction of an operant response involves no longer providing reinforcement for that response. A) True B) False 132. Extinction of an operant response involves removing a desired stimulus when the behavior occurs. A) True B) False 133. Spontaneous recovery after extinction occurs in classical conditioning but not operant conditioning. A) True B) False 134. In operant conditioning, occasionally not delivering a reinforcer when the behavior occurs will extinguish that behavior. A) True B) False 135. In operant conditioning, occasionally not delivering a reinforcer when the behavior occurs may actually strengthen that behavior. A) True B) False 136. Responding is not required for reinforcement under a fixed interval schedule. A) True B) False 137. Reinforcement under variable interval schedules depends partly on the passage of time. A) True B) False 138. Slot machines arrange reinforcers according to a variable interval schedule. A) True B) False 139. A textile worker who is paid by the piece for the number of shirts sewn is on a fixed ratio schedule. A) True B) False 140. Variable ratio schedules of reinforcement produce slightly lower response rates than fixed ratio schedules. A) True B) False 141. Reinforcement under a fixed-ratio schedule is based on a particular average number of responses. A) True B) False 142. Vending machines operate according to a continuous reinforcement schedule. A) True B) False 143. Behavior maintained under an intermittent schedule of reinforcement is more difficult to extinguish than behavior maintained under a continuous schedule of reinforcement. A) True B) False 144. Thorndike used shaping through approximations to teach his cats how to escape the puzzle box. A) True B) False 145. Shaping through approximations is another term for trial-and-error learning. A) True B) False 146. Shaping through approximations involves providing reinforcers for behaviors that are closer and closer to the overall desired behavior. A) True B) False 147. According to Skinner, superstitions arise when the response only occasionally causes the occurrence of the reinforcer. A) True B) False 148. According to Skinner, superstitions arise when reinforcers are not contingent on any particular behavior. A) True B) False 149. Edward Chace Tolman argued that latent learning is a by-product of reinforcement. A) True B) False 150. 
Edward Chace Tolman argued that stimuli set up a cognitive expectation that a response will produce a reinforcer. A) True B) False 151. Tolman's views on operant conditioning are similar to Rescorla and Wagner's views on classical conditioning. A) True B) False 152. Tolman's views on operant conditioning are similar to Watson's views on classical conditioning. A) True B) False 153. In a study on latent learning, Tolman demonstrated that rats who never received a food reward at the end of a maze nevertheless ran it faster and faster each day. A) True B) False 154. Edward Chace Tolman introduced the term cognitive map. A) True B) False 155. Dopamine is secreted in the pleasure centers of the brain. A) True B) False 156. The nucleus accumbens is a brain region rich in serotonin. A) True B) False 157. The nucleus accumbens, medial forebrain bundle, and hypothalamus are all major pleasure centers in the brain. A) True B) False 158. Rats will choose to electrically stimulate the pleasure centers of their brain over food or sex. A) True B) False 159. Rats will choose to electrically stimulate their cerebellum over food or sex. A) True B) False 160. Rats that discover food in one arm of a maze with many arms will tend to revisit that arm over and over again, searching for more food. A) True B) False 161. Biological predispositions may interfere with the ability to learn an operant response. A) True B) False 162. All species are biological predisposed to learn some things more easily than others. A) True B) False 163. The Brelands' found that raccoons being trained to put a coin inside a box instead buried the coin in the ground. A) True B) False 164. Simple, but not complex, tasks can be learned through observation. A) True B) False 165. Observational learning challenges operant conditioning because such learning occurs in the absence of reinforcement or punishment. A) True B) False 166. Albert Bandura conducted the Bobo doll experiment investigating observational learning. A) True B) False 167. Bandura's research demonstrated that simply observing aggressive models was not sufficient to produce aggressive behavior in children; reinforcement of the aggressive behavior was needed. A) True B) False 168. Bandura's research demonstrated that children's behavior was insensitive to the consequences of the model's behavior. A) True B) False 169. In some cases, observational learning may be just as effective as repeatedly practicing a skill. A) True B) False 170. Monkeys, but not pigeons, can learn by observation. A) True B) False 171. Wild-raised chimpanzees imitate a human modeling tool use more accurately than chimpanzees raised by humans. A) True B) False 172. Wild-raised chimpanzees imitate a human modeling tool use less precisely than two-year-old humans. A) True B) False 173. Being raised by humans increases the likelihood that animals will imitate humans but does not affect how well animals imitate humans. A) True B) False 174. Bipolar cells fire while we observe others; their activity mirrors what we are observing. A) True B) False 175. Regions in the frontal and temporal lobes are thought to be part of the mirror neuron system in humans. A) True B) False 176. Brain-imaging studies have revealed that watching a model perform a complex behavior, such as ballroom dancing, is a more effective learning strategy than actually practicing the complex behavior. A) True B) False 177. 
Transcranial magnetic stimulation of the motor cortex has been shown to dramatically increase the speed of observational learning by increasing the activity of mirror neurons. A) True B) False 178. Implicit learning occurs outside of conscious awareness. A) True B) False 179. Habituation is a type of implicit learning. A) True B) False 180. Activity in the hippocampus underlies habituation. A) True B) False 181. Some types of learning begin as explicit but become implicit over time. A) True B) False 182. An explicit knowledge of grammatical rules is necessary to identify correct words in an artificial grammar task. A) True B) False 183. People vary widely on implicit-learning abilities. A) True B) False 184. People with amnesia exhibit decrements in implicit learning. A) True B) False 185. The hippocampus is not necessary for implicit learning to occur. A) True B) False 186. The strategies of highlighting material, rereading it, and summarizing it have been shown to be of moderate utility as a study technique. A) True B) False 187. Making mental images of study material has been shown to be more effective than distributed practice. A) True B) False 188. Practice testing has been shown to be an effective study strategy. A) True B) False 189. Cramming is a form of massed practice. A) True B) False 190. Distributed practice is more effective than massed practice for learning language-related material, but not for learning content devoid of meaning (e.g., nonsense syllables). A) True B) False 191. Students intuitively prefer study techniques that are effective. A) True B) False 192. The learning process involves the of new knowledge, skills, or responses as a function of . A) development; time B) growth; maturity C) acquisition; experience D) gradual accumulation; education 193. Which of these is NOT an essential feature of learning? A) It is based on experience. B) It requires language. C) It produces changes in the learner. D) It produces changes that are relatively permanent. 194. A general process in which the repeated presentation of a stimulus leads to a gradual decrease in responding is termed: A) habituation. B) classical conditioning. C) sensitization. D) association. 195. A general process in which the presentation of a stimulus leads to an increased response to a later stimulus is termed: A) habituation. B) classical conditioning. C) sensitization. D) association. 196. Which school of psychology is MOST associated with pioneering research on learning? A) functionalism B) evolutionary psychology C) cognitive psychology D) behaviorism 197. When a neutral stimulus evokes a response after being paired with a stimulus that naturally evokes a response, the result is an example of which phenomena? A) classical conditioning B) semantic conditioning C) episodic conditioning D) operant conditioning 198. A speck of dirt that gets into your eye will naturally cause a blinking reflex. The speck of dirt is a(n): A) conditioned stimulus. B) unconditioned stimulus. C) neutral stimulus. D) second-order stimulus. 199. A reflexive reaction that is reliably elicited by an unconditioned stimulus is called a(n) response. A) reinforced B) sensitized C) unconditioned D) conditioned 200. The smell of a particular perfume scent causes Juan's heart to flutter because it is the same scent as his girlfriend's. The perfume is a(n): A) US. B) UR. C) CS. D) CR. 201. Last week, Harry drank too much tequila and this made him vomit. Now, just the smell of tequila makes his stomach a bit queasy. 
Vomiting is an example of a(n): A) US. B) UR. C) CS. D) CR. 202. In Pavlov's research, what was the US? A) food B) the humming of a tuning fork C) salivating at the sound of a bell D) a tube inserted into the salivary gland 203. In Pavlov's research, what was the UR? A) food B) salivating upon food presentation C) a tuning fork D) salivating at the sound of a bell 204. In Pavlov's research, what was the CS? A) food B) a dog C) a ringing bell D) the salivary reflex 205. In Pavlov's research, what was the CR? A) pricking ears at the sound of the bell B) salivating upon food presentation C) a bell D) salivating at the sound of a bell 206. Your friend's mother was always baking ginger-flavored cookies whenever you were at their house. You loved those cookies, and would eat several each time you visited. One day, you noticed that you started to salivate as you walked up the front steps to the house, before you smelled the cookies. The reason for this is that the house has become a(n): A) unconditioned response. B) unconditioned stimulus. C) conditioned response. D) conditioned stimulus. 207. Billy Bob's Big Burger Barn is your favorite restaurant, and lately you've noticed that every time you walk by there on your way to class, your mouth starts to salivate. In this incidence of classical conditioning, the sight of the restaurant is the ; your salivation is the . A) US; UR B) CS; CR C) CS; UR D) US; CR 208. Imagine that your girlfriend loves zydeco music. Eventually, your girlfriend dumps you, walking out of your apartment in the middle of a song. Your stomach tightens, and you are sad and angry. A few weeks later, you happen to hear zydeco music in a friend's car. Assuming classical conditioning occurred, what should happen when you hear the zydeco? A) Your stomach clenches. B) You call your ex-girlfriend on your cell phone and beg her to take you back. C) You begin discussing your relationship problems with your friend. D) You immediately think of all the good times that you had with your ex-girlfriend. 209. An initially neutral stimulus that comes to elicit a response because it has been paired with an unconditioned stimulus is called a(n) stimulus. A) discriminative B) habituated C) conditioned D) reinforcing 210. A reaction that usually resembles the unconditioned response but is produced by a conditioned stimulus is termed a(n) response. A) sensitized B) habituated C) conditioned D) reinforcing 211. Which statement about the conditioned stimulus in classical conditioning is true? A) Before any conditioning trials, the conditioned stimulus produces only one response. B) The conditioned stimulus is initially neutral and does not produce a response. C) Learning is not required for a conditioned stimulus to produce a response. D) The conditioned stimulus must resemble the unconditioned stimulus. 212. The phase of classical conditioning in which the CS and the US are presented together is called: A) habituation. B) discrimination. C) acquisition. D) generalization. 213. Which of these describes how the intensity of a CR changes during the acquisition phase of classical conditioning? A) starts low, rises rapidly, then decreases sharply B) starts low, rises rapidly, then tapers off C) starts low, rises slowly, then decreases sharply D) starts high, decreases sharply, then tapers off 214. In second-order conditioning, the CS is paired with a: A) stimulus that naturally elicits a response. B) biologically relevant stimulus such as food. C) neutral stimulus such as a black square. 
D) previously established CS. 215. In Pavlov's experiments on second-order conditioning, dogs were conditioned to salivate at the sight of a black square: A) by pairing it with food. B) by rewarding the dog with food when it salivated in the presence of the black square. C) by presenting food first and then following it with the black square. D) even though the black square was never directly associated with food. 216. In Pavlov's experiments on second-order conditioning, dogs were conditioned to salivate at the sight of a black square by: A) pairing it with a tone that previously had been associated with food. B) rewarding the dog with food when it salivated in the presence of the black square. C) presenting food first and then following it with the black square. D) pairing it with an unconditioned stimulus. 217. Lots of cash in hand always makes Charlie's heart race with excitement, due to the phenomenon known as: A) positive reinforcement. B) conditioned compensatory response. C) second-order conditioning. D) spontaneous recovery. 218. Extinction of a CR involves: A) repeated presentations of the CS without the US. B) repeated presentations of the US without the CS. C) repeated presentations of the UR alone. D) punishing the organism with an electric shock every time the CR occurs. 219. Pavlov conditioned dogs to salivate at the sound of a bell by following the bell with food. How could Pavlov use extinction to eliminate salivating to bells? A) Present food repeatedly in the absence of the sound of the bell. B) Repeatedly ring the bell, but never follow it with food. C) Ring the bell, and then present poison-laced food to induce sickness. D) Present food only every other time the bell is rung. 220. Johnny hated going to the doctor because he always received vaccinations. When he entered the doctor's office, he would become anxious and his stomach would hurt. When Johnny reached the age of 8, he was finished with his immunizations. After many trips to the doctor's office with no injections, what should happen with Johnny's anxiety and stomachache? A) They should get worse with every visit. B) They should stay the same. C) They should gradually become less severe and eventually disappear. D) They should start occurring before he even enters the doctor's office. 221. An animal trainer is conditioning an elephant to startle at the sound of a trombone. Every time she plays the trombone, she then shows the elephant a mouse, which startles the elephant. Eventually the elephant startles at just the sound of the trombone. However, after playing the trombone 10 more times, without a mouse appearing, the elephant exhibits no response. The result is an example of: A) generalization. B) spontaneous recovery. C) acquisition. D) extinction. 222. When the veterinarian advised an owner to feed her dog in the morning instead of in the evening, it took several days for the dog to stop salivating late in the afternoon around its previous dinner time. This is an example of: A) spontaneous recovery. B) extinction. C) generalization. D) second-order conditioning. 223. The tendency of a previously extinguished behavior to reoccur following a rest period is called: A) sensitization. B) generalization. C) spontaneous recovery. D) acquisition. 224. An animal trainer is conditioning an elephant to startle at the sound of a trombone. Every time she plays the trombone, she then shows the elephant a mouse, which startles the elephant. Eventually the elephant startles at just the sound of the trombone. 
However, after playing the trombone 10 more times, without a mouse appearing, the elephant exhibits no response. The trainer then gives the elephant an adequate rest period from the trombone. What is MOST likely to occur if the elephant hears the trombone again the following week? A) The elephant will not react. B) The elephant will have a greatly delayed startle reaction. C) The elephant will startle upon hearing the sound. D) The elephant will startle only if it sees a mouse. 225. Spontaneous recovery demonstrates that: A) CS–US pairings are not necessary for classical conditioning to occur. B) learning is greater the more CS–US trials are spaced in time. C) a stimulus paired with passage of time can come to elicit a response. D) extinction does not completely erase previous learning. 226. Most animals can adjust to slight variations in the conditioned stimulus, an adaptation known as: A) discrimination. B) generalization. C) extinction. D) second-order conditioning. 227. You watch a horror movie, and are scared. Running through the background music is a sustained, high-pitched note on a violin. The next day, you are watching a drama that also has background music featuring violin. Suddenly, you feel uneasy. Which process explains this effect? A) extinction B) reinforcement C) generalization D) acquisition 228. Accustomed to the sound of the old can opener, a cat still rushes to her food dish even when she hears the sound of the new can opener for the first time. This demonstrates: A) second-order conditioning. B) discrimination. C) generalization. D) spontaneous recovery. 229. Holly's blender makes a noise similar to the can opener, but her cat doesn't get up from the sofa when it hears it. This demonstrates: A) extinction. B) generalization. C) acquisition. D) discrimination. 230. Pavlov's dogs learned to salivate to a particular tone but not to other similar tones and buzzers. This process is called: A) sensitization. B) discrimination. C) extinction. D) habituation. 231. When a drug of abuse such as heroin is injected, the entire setting (the drug paraphernalia, the room, the lighting, etc.) can become a(n) and elicit responses to the drug. A) US; similar B) US; in opposition C) CS; similar D) CS; in opposition 232. A conditioned compensatory response is a: A) special type of UR. B) CR that is of greater intensity than the UR. C) CR that opposes the UR. D) drug-paired stimulus. 233. The presence of a drug-related conditioned compensatory response produces: A) tolerance. B) a natural high. C) intoxication. D) an overdose. 234. An experienced user who takes heroin in a new setting has an increased risk of overdose because: A) she may take more drug than intended. B) the conditioned compensatory response becomes stronger. C) changed settings result in added stress. D) the CS that triggers the compensatory CR is degraded or absent altogether. 235. Who conducted the “Little Albert” study on conditioned fear? A) Ivan Pavlov B) B. F. Skinner C) John Watson D) Edward Thorndike 236. Little Albert was a: A) human infant conditioned to fear a white rat. B) human infant raised in a modified operant chamber. C) Bobo doll in Bandura's experiments on observational learning. D) affectionate nickname given to the Bobo doll in Bandura's experiments. 237. In his experiment with Little Albert, one of John Watson's goals was to show that: A) humans, unlike other animals, are not susceptible to classic conditioning. B) fear can be learned by means of classical conditioning. 
C) conditioning can produce only behavioral responses, not emotional ones. D) one's environment is not responsible for behavior. 238. Nine-month-old Albert cried when a large steel bar was struck with a hammer while he viewed a white rat. In this acquisition phase, the white rat was the: A) unconditioned response. B) conditioned response. C) unconditioned stimulus. D) conditioned stimulus. 239. What was the US in the Little Albert study? A) a white rat B) a loud noise C) fear D) anything white and furry 240. What was the UR in the Little Albert study? A) crying when exposed to a loud noise B) fear of a white rat C) fear of anything white and furry D) Little Albert's natural temperament of being “stolid and unemotional” 241. Not only did Little Albert learn to fear white rats, he also cried when presented with a Santa Claus mask or a seal-fur coat. This behavior was the result of: A) stimulus discrimination. B) stimulus generalization. C) second-order conditioning. D) punishment. 242. Rescorla and Wagner originated which theory? A) Behavioral responses can be conditioned in animals. B) Classical conditioning occurs only when the animal has learned to set up an expectation. C) Humans, as well as animals, are capable of undergoing classical conditioning. D) Conditioning works more quickly when the conditioned stimulus is familiar. 243. Rescorla and Wagner theorized that a stimulus will only become a CS when it is the US. A) similar to B) sometimes paired with C) a reliable indicator of D) perceived as 244. Rescorla and Wagner introduced a(n) component to classical conditioning. A) cognitive B) behavioral C) neural D) emotional 245. The Rescorla–Wagner model predicts that conditioning will be easier when the: A) CS is an unfamiliar event. B) CS is a familiar event. C) UR is predictable. D) UR is unpredictable. 246. Which brain region's functions are responsible for eyeblink conditioning? A) amygdala B) reticular formation C) cerebellum D) hypothalamus 247. The nonconscious cognitive elements that give rise to expectancies in classical conditioning are largely the result of the functions of the: A) cerebellum. B) prefrontal cortex. C) hippocampus. D) hypothalamus. 248. In people as well as in rats and other animals, the is critically involved in emotional conditioning. A) reticular formation B) thalamus C) amygdala D) hippocampus 249. Which brain region was critical for Little Albert to form the association between the rat and the loud noise? A) prefrontal cortex B) hippocampus C) Wernicke's area D) amygdala 250. If the connections between the amygdala and midbrain regions are severed, a CS paired with shock will no longer elicit in a rat: A) freezing. B) increases in heart rate and blood pressure. C) release of stress hormones. D) nonconscious cognitive expectancies. 251. Tammy ate raw oysters for the first time and, four hours later, became extremely sick to her stomach. Now, the smell of oysters makes her stomach queasy. What is the CS? A) the smell of the oysters B) a bacterium or other toxin that was definitely present in the oysters C) a bacterium or other toxin that may or may not have been in the oysters D) stomach queasiness 252. Tammy ate raw oysters for the first time and, four hours later, became extremely sick to her stomach. Now, the smell of oysters makes her stomach queasy. What is the US? 
A) the smell of the oysters B) a bacterium or other toxin that was definitely present in the oysters C) a bacterium or other toxin that may or may not have been in the oysters D) stomach sickness 253. Tammy ate raw oysters for the first time and, four hours later, became extremely sick to her stomach. Now, the smell of oysters makes her stomach queasy. What is the UR? A) the smell of the oysters B) eating oysters C) a bacterium or other toxin D) stomach sickness 254. Tammy ate raw oysters for the first time and, four hours later, became extremely sick to her stomach. Now, the smell of oysters makes her stomach queasy. What is the CR? A) stomach queasiness at the smell of oysters B) the taste of an oyster C) a bacterium or other toxin D) vomiting to a toxin 255. From an evolutionary perspective, effective learning to avoid any food that has made you sick in the past should have all of these EXCEPT: A) rapid learning. B) conditioning capable of taking place over very long intervals. C) development of the aversion to the smell or taste of the food instead of its ingestion. D) development of an aversion more often to familiar than to unfamiliar foods. 256. Cancer patients experiencing nausea from chemotherapy often develop taste aversions to the foods they had eaten earlier. Based on the research of Garcia and colleagues, researchers developed a technique for minimizing this negative effect involving: A) flashing different pictures of a patient's favorite food on an overhead monitor while the patient underwent treatment. B) giving a patient unusual foods, such as coconut or root-beer flavored candy, at the end of their last meal before undergoing treatment. C) administering food to a patient in the middle of the treatment cycle. D) telling a patient to eat samples of favorite foods before entering therapy, thereby ensuring that the patient remembered what the favorite foods were even after treatment. 257. What principle describes why the taste and smell stimuli that produce food aversions in rats does NOT work with most species of birds? A) law of effect B) operant conditioning C) biological preparedness D) extinction 258. In rats, taste aversions are elicited by ; in birds, taste aversions are elicited by . A) visual cues; texture B) texture; smells C) visual cues; smells D) smells; visual cues 259. is most known for his work on conditioned taste aversion. A) Pavlov B) Garcia C) Watson D) Thompson 260. Classical conditioning is the study of behaviors that are , whereas operant conditioning studies behaviors that are . A) active; reactive B) reactive; active C) inactive; reactive D) voluntary; involuntary 261. A type of learning in which the consequences of an organism's behavior determine whether or not it will be repeated is called conditioning. A) observational B) evolutionary C) operant D) classical 262. Every time Kasey, the dog, whines, her owners give her a doggie treat. As a result of her clueless owners, Kasey is a very whiney dog, illustrating the effects of conditioning. A) observational B) emotional C) classical D) operant 263. conducted research with cats in puzzle boxes. A) Tolman B) Skinner C) Lashley D) Thorndike 264. In Thorndike's research involving cats in puzzle boxes, when did the puzzle box open, allowing the cat freedom and food? A) after a fixed period of time that the cat had been inside the box B) after escape attempts had been extinguished C) when the cat engaged in a behavior that moved a concealed lever D) 500 milliseconds after a buzzer sounded 265. 
Thorndike found that, with continued experience in the puzzle box, effective responses ________ and ineffective responses ________. A) increased; increased B) increased; decreased C) decreased; decreased D) decreased; increased
266. The idea that behaviors followed by a "satisfying state of affairs" tend to be repeated and those that produce an "unpleasant state of affairs" are less likely to be repeated is known as: A) Pavlov's law of classical conditioning. B) Thorndike's law of effect. C) Garcia's theory of evolutionary conditioning. D) Tolman's theory of latent learning.
267. In what way did Thorndike's experiments significantly differ from Pavlov's? A) The US occurred on every training trial no matter what the animal did. B) The CS occurred on every training trial no matter what the animal did. C) The behavior of the animal determined what happened next. D) The US occurred on alternating trials.
268. Behaviorists were influenced by the work of Thorndike for all of these EXCEPT: A) it was free from explanations involving the mind. B) Thorndike's measurements were observable. C) the law of effect stressed the importance of expectancy. D) Thorndike's procedure could quantify the rate of learning.
269. Behavior that an organism produces, which has some impact on the environment, is known as: A) operant behavior. B) the law of effect. C) classical conditioning. D) a reinforcer.
270. The technical term for a Skinner box is a(n): A) operant conditioning chamber. B) radial arm maze. C) classical conditioning chamber. D) puzzle box.
271. Skinner's approach to the study of learning focused on ________ and ________. A) reinforcement; reward B) reinforcement; punishment C) punishment; escape D) the mind; behavior
272. Reinforcers ________ and punishers ________ the future probability of the behavior that led to these respective consequences. A) increase; increase B) decrease; decrease C) increase; decrease D) decrease; increase
273. Positive reinforcers ________ and negative reinforcers ________ the future probability of the behavior that led to these respective consequences. A) increase; increase B) increase; decrease C) decrease; decrease D) decrease; increase
274. The "positive" in positive reinforcement and positive punishment means that: A) the target behavior increases in frequency. B) an unpleasant stimulus is removed upon the occurrence of the target behavior. C) a stimulus is presented upon the occurrence of the target behavior. D) the target behavior increases or decreases to adaptive levels.
275. When something desirable is presented upon the occurrence of a behavior, and as a result that behavior is strengthened, ________ has happened. A) positive punishment B) negative reinforcement C) positive reinforcement D) negative punishment
276. When something undesirable has been removed upon the occurrence of a behavior, and as a result that behavior is strengthened, ________ has happened. A) positive punishment B) negative reinforcement C) positive reinforcement D) negative punishment
277. When your 3-year-old sister threw a tantrum over the candy she wanted, your mother gave it to her so that she would calm down. Unfortunately, your sister's temper tantrums probably have been: A) negatively reinforced. B) positively reinforced. C) negatively punished. D) positively punished.
278. When your 3-year-old sister threw a tantrum over the candy she wanted, your mother gave it to her so that she would calm down. Although counterproductive in the long-run, in that moment, your mother's behavior was being ________ by ________.
A) negatively reinforced; the end of the tantrum B) positively reinforced; the end of the tantrum C) positively reinforced; candy D) positively punished; continued tantrums 279. A parent spanked a child for misbehaving, and the misbehavior stopped. By definition, the spanking served as a for the misbehavior. A) positive reinforcer B) negative reinforcer C) positive punisher D) negative punisher 280. Booker has a really bad headache, so he takes a new painkiller. The drug works wonderfully to relieve his headache. The next time Booker has a headache, he definitely will take that drug. Booker's behavior of taking the new drug has been: A) positively reinforced. B) negatively reinforced. C) positively punished. D) negatively punished. 281. A teenager swears at the dinner table, and as a result, her parents take away her iPod for one week. The teenager is now much less likely to swear at the dinner table, illustrating: A) positive reinforcement. B) negative reinforcement. C) positive punishment. D) negative punishment. 282. One day, in the midst of his babbling, baby Chad uttered the sound “mama” in the presence of his mother. Chad's mother screamed in delight. This noise startled baby Chad; he stopped uttering that sound for quite some time, illustrating the effects of: A) positive reinforcement. B) negative reinforcement. C) positive punishment. D) negative punishment. 283. A baby will not stop crying, and he is driving his parents crazy. As a last resort, the father puts the baby in a laundry basket and sets the basket on top of the running dryer. The vibrations of the dryer soothe the baby and he falls asleep. The father congratulates himself on discovering this solution, which he most definitely will employ the next time the baby cannot be consoled. The dryer trick has thus been: A) positively reinforced. B) negatively reinforced. C) positively punished. D) negatively punished. 284. Frank flings a paper airplane through the air in history class. All of the students laugh. Their teacher, Mr. Curtis, becomes irate; his face turns beet red, and large veins bulge from his neck. “Frank!” Mr. Curtis screams, “You are in big trouble now!” Frank really doesn't care about the trouble. The combination of the attention from the students and the emotional reaction from Mr. Curtis was plenty to keep him misbehaving far into the future. Throwing the paper airplane, then, was: A) positively reinforced. B) negatively reinforced. C) positively punished. D) negatively punished. 285. A shy and timid child works up the courage to raise his hand to answer a question. The teacher calls on him and the child delivers an excellent answer. The teacher exclaims, “That is the best answer I have heard all week! Great job! Come up to the front of the room to get a sticker!” Unfortunately, the child found all of this attention unpleasant, and no longer answers questions in class. His behavior of answering questions was: A) positively reinforced. B) negatively reinforced. C) positively punished. D) negatively punished. 286. The grid floor in an operant chamber is electrified. Each time a rat presses a lever, however, the shock is turned off for one minute. The rat quickly learns to press the lever the second that the grid becomes electrified. Lever pressing has been: A) positively reinforced. B) negatively reinforced. C) positively punished. D) negatively punished. 287. Every time a rat presses a lever, it receives a small injection of cocaine. 
That rat begins pressing the lever more and more, indicating that cocaine is a: A) positive reinforcer. B) negative reinforcer. C) positive punisher. D) negative punisher. 288. A college instructor implements the following attendance policy. All students are given 20 bonus points at the start of the semester. Each absence will result in the loss of 5 bonus points. The instructor hopes this policy will absences. A) positively reinforce B) negatively reinforce C) positively punish D) negatively punish 289. What is one major flaw of punishment? A) Punishment does not decrease the undesired behavior. B) Punishment increases the undesired behavior. C) Punishment does not promote learning about the desired behavior. D) Punishment signals that unacceptable behavior has occurred. 290. Which statement about primary reinforcers is true? A) Primary reinforcers include handshakes, smiles, and trophies. B) Primary reinforcers have little to do with the majority of daily reinforcers and punishers. C) Primary reinforcers acquire value through classical conditioning. D) Primary reinforcers do not help satisfy biological needs. 291. Which item is a secondary reinforcer? A) food B) shelter C) money D) comfort 292. Secondary reinforcers derive their effectiveness from their associations with primary reinforcers through: A) operant conditioning. B) operant behavior. C) positive reinforcement. D) classical conditioning. 293. Which item is a secondary reinforcer? A) a cold drink on a hot day B) a juicy hamburger C) sexual activity D) a first-place ribbon 294. A rat's lever presses occasionally produce food. Across conditions, the food is delivered either immediately after a lever press or after some delay. Which is true? A) The rate of lever pressing is relatively unaffected by delays ranging from a few seconds to about thirty seconds. B) The rate of lever pressing increases as a function of its delay. C) The rate of lever pressing first increases and then decreases as a function of its delay. D) The rate of lever pressing decreases as a function of its delay. 295. Relative insensitivity to delayed rewards helps explain why it is: A) relatively infrequent that people behave in opposition to their own long-term interests. B) easier to use punishment than reinforcement to control behavior. C) difficult to engage in behaviors that have only long-term benefits. D) relatively easy for most people to stick to a diet. 296. Reinforcers and punishers effectiveness as the delay increases between the behavior and the consequence. A) lose; gain B) gain; lose C) gain; gain D) lose; lose 297. A child throws a temper tantrum at the mall because he wants to play with the toy that his mother just bought him. The most effective way for his mother to punish this behavior is to: A) ignore the outburst and wait for the opportunity to reinforce a desired behavior. B) not let the child play with the toy for several hours after they get home. C) decide upon a course of punishment after the emotional aspects of the incident have subsided. D) immediately return the toy she just bought him. 298. When a child cries when her babysitter is around, the babysitter picks her up and talks to her. Because of the attention, the child becomes more likely to cry in the future. If the child cries when any adult is in the room, what process has occurred? A) generalization B) discrimination C) negative reinforcement D) negative punishment 299. 
A pigeon is reinforced for pecking a key whenever a particular tone is sounded but never reinforced if the tone is absent. Pigeons discriminating these conditions will: A) not learn anything under these conditions. B) quickly learn to engage in vigorous key pecking only when the tone turns off. C) quickly learn to engage in vigorous key pecking whenever the tone sounds and will continue to peck the key even when it turns off. D) quickly learn to engage in vigorous key pecking whenever the tone sounds but cease when it turns off. 300. in an experiment were reinforced if they selected paintings by the Cubist artist, Picasso, over paintings by the French Impressionist, Monet. Subsequently, they demonstrated , and selected paintings from another Cubist artist, Matisse, over other paintings by Monet. A) Pigeons; discrimination B) Pigeons; generalization C) Children; discrimination D) Children; generalization 301. Extinction in operant conditioning involves: A) presenting the reinforcer when the behavior does not occur. B) no longer presenting the reinforcer when the response occurs. C) repeatedly presenting the CS without the US. D) not presenting the discriminative stimulus, which signals the opportunity to respond. 302. A person tries to call her friend and there is no answer. She waits a little while and tries again, but there is still no answer. She continues to try for the rest of the day, but her friend never answers. Exasperated, she gives up. What is happening to the friend-calling behavior? A) It is being negatively reinforced. B) It is being negatively punished. C) It is being extinguished. D) It is being positively punished. 303. Not providing a reinforcer every time a response occurs: A) will negatively punish the behavior. B) may negatively reinforce the behavior. C) will extinguish the behavior. D) may strengthen the behavior. 304. A(n) schedule is based on the time between reinforcements. A) interval B) ratio C) joint D) conditional 305. A(n) schedule is based on the number of responses needed to achieve reinforcement. A) interval B) ratio C) joint D) conditional 306. In operant conditioning, presenting reinforcement upon the occurrence of a response only at fixed-time periods is called a schedule. A) variable-ratio B) fixed-ratio C) variable-interval D) fixed-interval 307. A fixed-interval, 60-second schedule delivers a reinforcer: A) every 60 seconds. B) after 60 responses. C) for the first response that occurs after 60 seconds elapse. D) 60 seconds after a response is emitted. 308. Molly's mail carrier delivers the mail promptly at noon each day. Molly never checks her mailbox in the morning, but always checks it at 12:05 p.m. Molly's mail checking is maintained on which schedule? A) fixed interval B) variable interval C) fixed ratio D) variable ratio 309. Students who do relatively little work until just before an upcoming exam and then engage in a burst of studying are displaying a response pattern similar to the one engendered by which schedule of reinforcement? A) variable interval B) fixed interval C) fixed ratio D) variable ratio 310. A doctor gives a patient in chronic pain a morphine pump to operate. The doctor tells the patient to press the button whenever she is in pain. The button press will result in a morphine injection, but only if it has been at least 1 hour since the last injection. The morphine pump is operating according to which schedule of reinforcement? A) fixed ratio B) variable ratio C) variable interval D) fixed interval 311. 
The radio DJ says “Sometime this hour, I'll be giving away a pair of tickets to the Jonas Brothers concert to one lucky caller.” This is an example of which type of reinforcement schedule? A) fixed interval B) variable interval C) fixed ratio D) variable ratio 312. Checking your e-mail is reinforced with receiving important e-mails at unpredictable times according to which schedule of reinforcement? A) fixed interval B) variable interval C) continuous D) variable ratio 313. A toddler has learned that the jack-in-the-box won't operate unless she turns the hand crank around exactly 10 times. The toddler's behavior is being reinforced under which schedule of reinforcement? A) fixed ratio B) fixed interval C) variable interval D) variable ratio 314. Sam gets a free pretzel using his Pretzel Smorgasbord card with every 10th pretzel he buys. This arrangement is an example of which type of reinforcement schedule? A) variable interval B) fixed interval C) fixed ratio D) variable ratio 315. Slot machines pay off on schedules that are determined by the random number generator that controls the play of the machine. For example, on average, the machine might pay off every 100 pulls, but sometimes two pulls are required and sometimes several hundred pulls are required. Slot machines are a real-world example of which schedule of reinforcement? A) variable interval B) fixed interval C) fixed ratio D) variable ratio 316. When you put money in the candy machine, you expect to be reinforced on a schedule. A) variable-ratio B) fixed-interval C) continuous reinforcement D) intermittent reinforcement 317. Sam was an accomplished furniture maker and was paid well each time he produced a set number of pieces of furniture. Which reinforcement schedule was Sam on? A) variable ratio B) fixed ratio C) fixed interval D) variable interval 318. Which reinforcement schedule has the highest rate of response? A) variable interval B) fixed interval C) fixed ratio D) variable ratio 319. Jeremy goes to crowded parties and approaches women with a cheesy pick-up line designed to get their phone numbers. Jeremy never knows which women will fall for his line, but he does know that if he asks enough women, it's quite probable that he will get a phone number. Jeremy's behavior is being reinforced on a schedule. A) variable-ratio B) fixed-ratio C) fixed-interval D) variable-interval 320. All of these are examples of schedules of intermittent reinforcement EXCEPT: A) variable interval 1 minute. B) fixed interval 2 minutes. C) fixed ratio 1. D) variable ratio 10. 321. The special case of presenting reinforcement after each response is called reinforcement. A) intermittent B) immediate C) variable D) continuous 322. Extinction will proceed most rapidly if behavior previously had been maintained under a(n) schedule of reinforcement. A) continuous B) intermittent C) variable ratio D) variable interval 323. Dolphins are trained to do amazing tricks MOST likely through a process called: A) shaping. B) habituation. C) classical conditioning. D) trial and error. 324. Playing the hot-or-cold game, where you direct someone to move around the room toward a goal known only to you by telling the person whether he or she is getting warmer or cooler, is an example of which behavioral process? A) extinction B) fixed ratio schedules of reinforcement C) generalization D) shaping 325. According to B. F. Skinner, the best way to train a rat to press the bar in the Skinner box is to begin by: A) placing the rat immediately next to the bar. 
B) delivering a food reward if the rat turns in the direction of the bar. C) placing the rat's paw on the bar and rewarding that movement immediately. D) placing the rat in the box with another rat that has already been trained to press the bar. 326. Shaping reinforces behavior sequences until the overall sequence of behavior is performed reliably. A) complex B) unusual C) smaller D) larger 327. In one of his experiments, Skinner put several pigeons in Skinner boxes, set the food dispenser to deliver food every 15 seconds, and left the birds to their own devices. Later, he returned and found the birds engaging in odd, idiosyncratic behaviors, such as pecking aimlessly in a corner or turning in circles. Skinner referred to these behaviors as: A) inadvertent. B) nonsensical. C) unexplainable. D) superstitious. 328. One day when Max was bowling, his shoelaces came untied and he bowled a strike. From then onward, Max always bowled with his shoes untied. This action is an example of: A) negative reinforcement. B) classical conditioning. C) intermittent reinforcement. D) superstitious behavior. 329. Superstitious behavior: A) results from accidental reinforcement of inconsequential behavior. B) occurs more frequently on ratio than interval schedules. C) is established using shaping through successive approximations. D) reflects the contributions of biology to operant behavior. 330. The case of superstitious behavior illustrates the fact that: A) learning can occur when the reinforcer is delayed. B) the correlation between response and reinforcer does not necessitate a causal relation. C) learning occurs when the correlation between the ongoing behavior and the response is zero. D) a causal relation between response and reinforcer is required for learning to occur. 331. One of the strongest early advocates of a cognitive approach to operant learning was: A) Edward Chace Tolman. B) John B. Watson. C) B. F. Skinner. D) Edward L. Thorndike. 332. Tolman's views on learning differed from those of Skinner in that Tolman believed: A) reinforcement is dependent upon the formation of stimulus–stimulus associations. B) a reinforcer directly strengthens the response that produces it. C) the environmental context can influence whether or not a behavior will occur. D) organisms develop means–ends expectancies as a result of operant conditioning. 333. A condition in which something is learned but is NOT manifested as a behavioral change until sometime in the future is called: A) latent learning. B) delayed response. C) time-delayed learning. D) second-order conditioning. 334. Latent learning can easily be established in rats: A) by only using secondary reinforcers. B) by the use of shaping through successive approximations. C) by occasionally punishing an otherwise reinforced response. D) without any obvious reinforcement. 335. A mental representation of the physical features of the environment is called a: A) cognitive maze. B) latent map. C) cognitive map. D) mental diagram. 336. Rats that traverse a maze for 10 consecutive days with no reward show no evidence of learning the maze. When reinforcement is provided beginning on the 11th day, these rats will: A) be unable to effectively master the maze over the next week due to learned helplessness. B) slowly start to learn the maze over the next week as long as reinforcement is provided each day. C) immediately demonstrate mastery of the maze. D) slowly start to learn the maze over the next week, even if reinforcement is discontinued. 337. 
Tolman's experiments with rats and mazes strongly suggested that: A) stimulus–response theories of learning were essentially correct. B) there is no functional difference between reinforcement and punishment. C) spatial learning is the primary form of learning in rats. D) there is a cognitive component to operant learning. 338. Tolman trained rats to run down a straightaway, and subsequently make several turns until finally reaching a goal box baited with food. After the rats learned this task, the maze was altered. The main straightaway was blocked. However, there were many alternate paths radiating in all directions from the start box. The goal box remained in the same location relative to the start box, and one of the alternate paths led directly to it. Tolman found that rats: A) selected the path that led directly to the goal box, even though they had never traversed this route before. B) spent an inordinate amount of time clawing at the blocked main straightaway. C) demonstrated generalization and selected the path closest to the main straightaway. D) explored each path systematically, either from right to left or from left to right. 339. In a series of studies by Delgado and colleagues, participants chose between receiving one dollar or giving their partner three dollars. If given money, the partners could choose to split it with the participants or keep it. Prior to the study, participants read descriptions of their partners that described them as either trustworthy or untrustworthy. Delgado and colleagues found that participants based their decision to give money to their partners largely on: A) the direction of the discrepancy between the descriptions and how the partners actually behaved. B) what the partners actually did in the game. C) the written descriptions. D) the consistency with which the partners acted according to the written descriptions. 340. James Olds (1956) discovered that rats would ignore food, water, and other life-sustaining necessities for hours if they were able to control stimulation of certain parts of the brain. He called these areas of the brain: A) receptors. B) pleasure centers. C) reinforcers. D) addiction centers. 341. Which structure is NOT involved in the reward center of the brain? A) cerebellum B) hypothalamus C) medial forebrain bundle D) nucleus accumbens 342. The neurons in the pleasure centers of the brain, especially those in the nucleus accumbens, secrete , a neurotransmitter usually associated with positive emotions. A) serotonin B) epinephrine C) norepinephrine D) dopamine 343. A hungry monkey's lever pressing behavior is reinforced with a banana-flavored treat. Which would produce the MOST dopamine activity? A) treat delivery after the behavior is learned and the monkey expects the treat B) treat delivery upon the first press when the monkey does not expect a treat C) an unexpected non-delivery of the treat when the lever press occurs D) expected non-deliveries of treats as the response is extinguished 344. A study put rats in a typical T-maze that ended with two different directions. The study found that if a rat found food in one arm on the first trial of the day, it typically looked in the other arm on the very next trial. This is BEST explained by: A) the rat's evolutionary preparedness of foraging. B) latent learning in the absence of reinforcement. C) the behaviorist view of operant conditioning. D) the formation of stimulus–response associations. 345. 
A rat in a complicated maze with many arms, some of which are baited with food, will: A) once finding food in an arm, return only to that arm. B) explore the entire maze but often revisit arms that contained food. C) explore the maze in a random, haphazard fashion due to its complexity. D) rarely return to an arm it has previously visited. 346. When the Brelands tried to teach pigs and raccoons to drop a coin in a box, the animals: A) learned the trick only if explicit reinforcement was provided for doing so. B) learned the trick if food was used as a reinforcer, but did not learn the trick if secondary reinforcement was used. C) had difficulty learning the trick because it competed with their biological tendencies. D) failed to learn the trick, indicating that some animals evolved to be resistant to operant conditioning. 347. When the Brelands tried to teach raccoons to drop a coin in a box by using food reinforcement for doing so, the raccoons: A) quickly learned this task via shaping through approximations. B) spent an inordinate amount of time rubbing the coin between their front paws instead of dropping it into the box. C) tended to bury the coin in the ground and then dig it up instead of dropping it into the box. D) failed to learn an association between the coin and food. 348. When your sister burned herself playing with matches, she cried for a long time. A by-product of this experience was that you also learned not to play with matches. This type of learning is called: A) observational learning. B) classical conditioning. C) operant conditioning. D) intermittent reinforcement. 349. Observational learning is: A) an important part of learning our culture. B) not as important as other forms of learning. C) an inefficient means of learning. D) unique to humans. 350. conducted the Bobo doll study of observational learning. A) Tolman B) Bandura C) Flynn D) Skinner 351. In the classic Bobo doll experiment, after watching adults get punished for hitting the Bobo doll, children: A) behaved even more aggressively toward the Bobo doll. B) showed no difference in their aggressive behavior toward the Bobo doll. C) behaved less aggressively toward the Bobo doll. D) became scared of the Bobo doll and didn't go near it. 352. When children observed the adult models being rewarded for being aggressive toward the Bobo doll, the children: A) were even more aggressive to the Bobo doll. B) were less aggressive to the Bobo doll. C) initially were more aggressive but then became less aggressive toward the Bobo doll. D) did not change their aggressiveness toward the Bobo doll. 353. If children observe their older brothers fighting in the neighborhood, Bandura would suggest that the younger siblings would: A) likely behave aggressively, too. B) learn that aggressive behavior is not always effective. C) imitate the victims of their brothers' aggressive behavior. D) decrease their aggressive behavior when their brothers were nearby. 354. A parrot identified a new fruit that was good to eat. After watching the parrot eat the fruit, the other parrots in the flock also began eating the fruit. The behavior of the flock illustrates: A) operant conditioning. B) latent learning. C) observational learning. D) positive reinforcement. 355. Research has demonstrated observational learning in: A) humans only. B) primates only, including humans. C) humans and birds only. D) humans and a variety of animals. 356. 
In one study, pigeons watched other pigeons receive reinforcement for either stepping on a bar or pecking at the feeder. When the observer pigeons were later put in the operant chamber they tended to: A) step on the bar. B) peck at the feeder. C) use whichever technique that they had observed. D) alternate between stepping on the bar and pecking at the feeder. 357. Chimpanzees raised in the wild watched a human model using a tool either efficiently or inefficiently and then themselves interacted with the tool. Which statement is true? A) Only the chimpanzees observing efficient behavior used the tool, but use was not efficient. B) Only the chimpanzees observing efficient behavior used the tool, and use was efficient. C) Both groups used the tool in the manner that they observed. D) Both groups used the tool but did not differ in their efficiency. 358. A few studies have suggested that observational learning of tool use in chimpanzees can be enhanced by: A) using child human models to illustrate tool use instead of adult human models. B) increasing chimpanzees' overall contact with humans. C) studying only chimpanzees raised in the wild. D) using only tools that are foreign to the chimpanzees' natural environment. 359. Recent research has revealed that certain cells in the brain fire both when an animal performs an action and when it watches that action performed. These cells are called: A) mirror neurons. B) dopaminergic neurons. C) nucleus accumbi. D) motor neurons. 360. Regions in the lobes are thought to be part of the mirror neuron system in humans. A) frontal and temporal B) frontal and parietal C) frontal and occipital D) temporal and occipital 361. Which statement about the observational learning that occurs when participants watched videos of dance moves relative to actually practicing the dance moves is true? A) Vastly different brain regions were activated between the two scenarios. B) The participants who only observed dance moves exhibited no learning on a surprise dance test. C) Both groups performed equally well on a surprise dance test. D) The participants who actually practiced dancing performed better on the surprise dance task. 362. Learning that takes place largely independent of awareness of both the process and the products of information acquisition is called: A) implicit learning. B) conditioning. C) biological predisposition. D) observational learning. 363. Habituation is a general process in which exposure to a stimulus results in a gradual in responding to that stimulus. A) reduced; reduction B) reduced; increase C) repeated; reduction D) repeated; increase 364. Learning to drive a car is an example of that becomes over time. A) implicit learning; habituated B) explicit learning; implicit C) implicit learning; explicit D) explicit learning; operant 365. Implicit learning is implicit memory. A) functionally the same as B) related to C) caused by D) unrelated to 366. Using an artificial grammar is one way of studying: A) implicit learning. B) explicit learning. C) biological predisposition. D) observational learning. 367. In an artificial grammar task, participants are shown strings of letters labeled grammatically correct and grammatically incorrect. Then they are shown new strings and are asked to classify them as correct or incorrect. Participants usually are: A) unable to do the task unless explicitly told the grammatical rules. B) unable to do the task even after being told the grammatical rules. 
C) quite good at the task and can articulate the grammatical rules. D) quite good at the task but cannot articulate the grammatical rules. 368. Most people have little difficulty spotting grammatical errors but cannot articulate which rules of English grammar were violated. Knowing that a sentence is grammatically wrong but being unable to say why illustrates: A) explicit learning. B) implicit learning. C) latent learning. D) observational learning. 369. Participants who are studied under a serial reaction time test are shown five boxes on a computer screen. When a box lights up, they are to click it as quickly as possible. The boxes seemingly light up at random, but a pattern exists. Over time, participants respond on this task of the pattern. A) more slowly; and are unaware B) more carefully; and become aware C) more quickly; but are unaware D) more quickly; and become aware 370. In recent research on neural pathways during implicit and explicit learning, participants who were given explicit instructions showed increased activity in the , whereas those given implicit instructions showed decreased activity in the . A) frontal cortex and parietal cortex; hippocampus B) occipital lobe; hippocampus C) frontal cortex and hippocampus; occipital lobe D) hippocampus and occipital lobe; frontal cortex and parietal cortex 371. Which is the LEAST effective technique to learn school-related material? A) distributed practice B) visual imagery mnemonics C) practice testing D) self-explanation 372. Which is the MOST effective technique to learn school-related material? A) practice testing B) visual imagery mnemonics C) marking important material while reading D) writing summaries of to-be-learned material 373. Cramming for exams is an example of: A) distributed practice. B) massed practice. C) implicit practice. D) practice testing. 374. Distributed practice is better than massed practice: A) for college students but not for children. B) for matching names to faces but not for remembering facts from textbooks. C) for remembering facts from textbooks, but not for matching names to faces. D) for numerous kinds of materials. 375. Which statement regarding practice testing is FALSE? A) It is effective across a wide range of materials. B) It is preferred by students relative to simply rereading the material. C) Its benefits increase with the difficulty of the test. D) It enhances the transfer of learning from one situation to another. 376. Recent research has indicated that interjecting frequent short tests (e.g., pop quizzes) during a lecture is beneficial to learning because it: A) fosters implicit learning. B) increases sympathetic nervous system activity. C) discourages mind wandering. D) encourages massed practice. 377. A student closes her textbook, convinced that she has mastered the material. Her conclusion is an example of a(n): A) false belief. B) judgment of learning. C) summarization. D) objective assessment. 378. The BEST way for students to guard against incorrect judgments of learning is to base their decisions on: A) sense of familiarity after massed practice. B) sense of familiarity after distributive practice. C) results from practice testing. D) amount of time spent studying the material. 1. The answer should provide the following information: (1) The brother's scream and pinch (US) was paired with the sight of the tarantula, eliciting a fear response (UR) in the girl. 
The next time the girl saw a spider of any kind (CS), she experienced fear (CR); (2) Acquisition: The first few pairings of the tarantula and the scream constitute acquisition; (3) Generalization: After acquisition, the CR may occur to similar stimuli. In this case, the girl may generalize her fear of tarantulas to fear of similar spiders; (4) Discrimination: The girl may eventually learn to fear only tarantulas and not react so strongly to other spiders, especially if the brother only screams in her ear and pinches her when she views tarantulas; (5) Extinction: If her brother were to stop screaming in her ear and pinching her when she viewed tarantulas, her fear of them would decrease and eventually disappear; (6) Spontaneous recovery: After her fear response extinguishes, if she goes an extended period without seeing a tarantula, she may react with fear at some future point when she encounters one. 2. The answer should provide the following information: (1) The US is the drug (e.g., heroin), and the UR is a direct effect of the drug on the body (e.g., respiratory depression); (2) Environmental contexts normally associated with drug taking (e.g., a room, a house, a friend, etc.) become CSs due to their pairing with the drug. They reliably predict that the drug is forthcoming. Over time, they will come to elicit a CR that is compensatory in nature. For example, if the UR is respiratory depression, the CR would be an increase in respiration. A compensatory CR thus prepares the body for the drug and helps the body maintain homeostasis; (3) Because these drug cues elicit bodily reactions in opposition to the drug, more of the drug is needed to obtain a similar high. This results in drug tolerance. Users must take large amounts of the drug to overcome the compensatory CR; (4) If a user with a well-developed drug tolerance takes heroin in a different setting, overdose may occur. The changed setting degrades or eliminates the CS. If there are no (or reduced) drug cues, then the compensatory CR will not occur. The user takes a large dose normally needed to overcome the CR, but there is no CR present. As a result, the body is not prepared for a dose of heroin that is suddenly too large. Thus, the user may overdose on a dose of heroin that he or she normally can handle. 3. The answer should provide the following information: (1) Conditioned responses often are reflexive in nature, whereas operant responses are seemingly purposive. Conditioned responses are reactions to stimuli that evoke them. Operant responses are active attempts to procure reinforcers; (2) In classical conditioning, initially neutral stimuli acquire meaning through their association with unconditioned stimuli that naturally elicit the response. Thereafter, these conditioned stimuli are able to elicit the response. In operant conditioning, stimuli preceding the response do not elicit the response in a reflexive sense. Instead, they provide context that signals when or how a response will produce a reinforcer; (3) Operant behavior is, by definition, behavior controlled by its consequences. Consequences of behavior are not involved in classical conditioning. For example, in Pavlov's preparation, dogs do not have to salivate at the sound of a tone in order to receive the food. The food is coming one way or the other. The behavior of Pavlov's dogs is under the control of the preceding CS. The lever-pressing behavior of Skinner's rats, in contrast, is controlled by consequences. 
If the lever press produces a food pellet, they will press again. Note that the food pellet only will be presented if the animal presses the lever. 4. The answer should provide the following information: (1) In both cases, extinction ultimately will result in a decrease (or elimination) of the target response; (2) In classical conditioning, extinction involves repeatedly presenting the CS in the absence of the US. Using Pavlov's procedure, on every trial, the tone is presented but is not followed by the food. With repeated tone–nothing pairings, the dog will stop salivating to the sound of the tone; (3) In operant conditioning, extinction involves no longer delivering the reinforcement that had previously been maintaining the response. For example, in rats pressing levers for food pellets, extinction would be arranged by disconnecting the food pellet dispenser. Lever presses would no longer produce food. Under these circumstances, rats will eventually stop pressing the lever. 5. The answer should provide the following information: (1) Cognitive elements: Classical conditioning will not occur unless the CS reliably predicts the occurrence of the US. Because of this, Rescorla and Wagner argued that CS–US pairings result in an expectancy of the US. This expectancy then gives rise to the CR. This cognitive component helps explain why Pavlov's dogs did not salivate merely at the sight of Pavlov. Although Pavlov was paired with food many times, he also was paired with the absence of food. Pavlov was not a reliable predictor of food, but his bell was! In addition, expectancy theory helps explain why it is easier to condition a response to a novel stimulus than a familiar stimulus. Organisms have previously learned expectancies associated with familiar stimuli, making conditioning a new response difficult; (2) Neural elements: The cerebellum is highly involved in classical conditioning. In addition, the central nucleus of the amygdala is involved in emotional conditioning; (3) Evolutionary elements: CS–US associations that have been important for the survival of the species are extremely easy to condition, sometimes requiring only one CS–US pairing. We are biologically prepared to form some associations, such as the association between the taste of a novel food and sickness. The adaptive value is clear. Our ancestors, who may have eaten a poisonous plant, became ill several hours later, and subsequently avoided that plant in the future, tended to survive and pass on their genes. Ancestors who did not form this association perished at a higher rate. 6. The answer should provide the following information: (1) The US is the virus, and the UR is the stomach sickness; (2) The CS is the smell or taste of the anchovies. Because Bob has eaten pizza many times and has never gotten sick, it would be difficult to form a taste aversion to the pizza. The anchovies, however, are new to Bob. It is fairly easy to condition a taste aversion to a new food for evolutionary reasons. If our ancestors ate a particular food frequently and one time got sick, it probably wasn't due to that food. If, on the other hand, they ate a new food and got sick, there's a good chance that the food was the culprit; (3) The CR probably would take the form of stomach queasiness from the smell of anchovies. Bob would avoid anchovies in the future; (4) Note that this occurs even though the anchovies were not the reason that Bob got sick! (5) If Bob was a pigeon, the sight of anchovies, instead of their smell, would elicit the CR. 
Birds are biologically prepared to form sight–sickness associations, whereas mammals are biologically prepared to form taste/smell–sickness associations. 7. The answer should provide examples that take the following forms: (1) Positive reinforcement: behavior occurs; the consequence involves the presentation of a stimulus; the behavior is more likely to occur; (2) Negative reinforcement: behavior occurs; the consequence involves the removal of a stimulus; the behavior is more likely to occur; (3) Positive punishment: behavior occurs; the consequence involves the presentation of a stimulus; the behavior is less likely to occur; (4) Negative punishment: behavior occurs; the consequence involves the removal of a stimulus; the behavior is less likely to occur. 8. The answer should provide the following information: (1) Continuous reinforcement: Every instance of bed making would result in money delivery. This probably would result in the child making the bed each day, but the parents would quickly go broke! Moreover, if the parents were to stop reinforcing bed-making, the child would quickly stop making his bed. For these reasons, this schedule is both impractical and not ideal for the long-term maintenance of bed-making; (2) FR 7: Every seven times that the child makes his bed, this behavior will be reinforced with money; (3) VR 7: On the average of every seven times that the child makes his bed, this behavior will be reinforced with money; (4) The FR 7 and VR 7 options both produce reinforcement intermittently. Assuming the child makes his bed only once per day, both of these schedules result in reinforcer delivery once per week, which is a realistic payment schedule for most parents. However, the downside with these schedules is that nothing is stopping the child from making his bed 100 times per day. The parents would quickly go broke! (5) Under a FI 7-day schedule, the first instance of bed-making that occurs after 7 days since the last reinforcement will produce money. Perhaps every Friday the parents will pay the child if he has made the bed that day. If not, they will withhold the money until the next time that he makes the bed. The advantage of this schedule is that it guarantees that the parents will not have to pay out more than once per week. The disadvantage is that the child may quickly learn to behave efficiently under the schedule, making his bed only on Fridays! (6) Under a VI 7-day schedule, the first instance of bed-making that occurs after an average of 7 days since the last reinforcement will produce money. On average, bed-making will pay off once per week. But sometimes reinforcement will come after only one day, and other times it will not come for several weeks. One advantage of this schedule is that it guarantees that the parents will pay money to the child, on average, no more than once per week. Another advantage is that this schedule generates a steady rate of responding. Because reinforcement could come any day, the child probably would make his bed every day. A third advantage is that, because the schedule is time-based, making the bed 100 times per day won't increase the rate of reinforcement as is the case under the FR and VR schedules. There is nothing that the child can do to get paid, on average, more than once per week. Thus, this schedule both maintains a consistent rate of response and is pocketbook friendly. 
Finally, this schedule is resistant to extinction; if the parents were to stop providing reinforcement for bed-making due to a temporary economic hardship, bed-making would continue unabated. The VI 7-day schedule, then, is the best choice to employ. 9. The answer should provide the following information: (1) Shaping through successive approximation involves providing reinforcers for closer and closer approximations to the desired behavior; (2) First, provide reinforcement for any clockwise movement. Then provide reinforcement only if the dog engages in a quarter-clockwise turn. Then provide reinforcement only if the dog engages in a half-clockwise turn. Then provide reinforcement only if the dog engages in a three-quarters-clockwise turn. Finally, provide reinforcement only if the dog turns in a complete circle. 10. The answer should provide the following information: (1) Tolman gave three groups of rats access to a complex maze every day for more than two weeks. The first group received no reinforcement the entire time. The second group was given reinforcement every day as soon as they completed the maze. The third group was given no reinforcement for the first 10 days and regular reinforcement similar to the second group the remainder of the time; (2) The first group that never received reinforcement made a large number of errors each day and never showed improvement. The second group that received daily reinforcement made progressively fewer errors each day. These results are not surprising. The results from the third group are of theoretical interest. Over the first 10 days, these rats made a large number of errors and showed no improvement exactly like the first group. After receiving just one reinforcement on Day 11, however, their number of errors decreased dramatically on Day 12, resembling the Day 12 performance of the second group. The second group had been receiving reinforcement and gradually made fewer errors each day. The fact that the third group immediately mastered the maze and did not start learning gradually (similar to Group 2) suggests that this group (and Group 1 for that matter) had been learning gradually from Day 1. Tolman termed this latent learning, defined as a condition in which something is learned but is not manifested as a behavioral change until some time in the future. The reason why the rats not receiving reinforcement did not show a decrease in errors was not because they weren't learning the maze; it was because there was no incentive to make fewer errors! As soon as an incentive was given, the rats displayed the effects of latent learning. 11. The answer should provide the following information: (1) Skinner demonstrated that the presentation of a reinforcing stimulus strengthens the behavior that immediately precedes it. Usually, this behavior is what caused the reinforcer to occur, but sometimes this is not the case. In a classic experiment, Skinner delivered grain to hungry pigeons at regular intervals, regardless of what they were doing. Skinner discovered that whatever the pigeons happened to be doing when the food arrived (grooming, turning in a circle, pecking) tended to be repeated. Over time, the pigeons began to engage in this behavior excessively, “believing” that this behavior produced the food. The pigeons had formed a superstition. In much the same way, superstitions develop in people. With respect to baseball players, one player may wear the necklace because his daughter gave it to him as a present. 
The first day he wears the necklace, he hits a home run. The necklace had nothing to do with the home run, of course, but an accidental association between the necklace and the home run reinforced necklace wearing; (2) Skinner's account is plausible but probably inadequate to fully explain the hundreds of major-league baseball players now wearing these necklaces. Observational learning provides another mechanism by which players begin to engage in this superstition. Player B may watch Player A wear the necklace and perform well. Because of this, Player B may start wearing a necklace. An opponent sees Player B wearing the necklace and decides that he, too, must obtain one so as not to be at a competitive disadvantage. In this way, a diffusion chain of observational learning occurs. Before long, many players are wearing necklaces, and this behavior is occasionally accidentally reinforced (and thus strengthened) as Skinner described. 12. The answer should provide the following information: (1) Humans readily learn through observation. Even toddlers will imitate tool use. Not only will they use the tool, they will imitate how the model used the tool; (2) Chimpanzees raised by their mothers in the wild also will learn tool use by observing a human model. While they will use the tool, they tend not to imitate the specific actions of the model; (3) Chimpanzees raised around humans better imitate the specific actions of human models. This finding led to the development of the enculturation hypothesis which states that being raised in a human culture better enables chimpanzees to recognize the intentions of human models. 13. The answer should provide the following information: (1) In a typical study investigating implicit learning using artificial grammar, participants are given strings of letters (e.g., VXJJ) labeled as either grammatically correct or incorrect. Critically, participants are not told the grammatical rules of the language. They are simply shown exemplars of correct and incorrect uses of grammar; (2) After being exposed to a number of exemplars, participants are shown new strings of letters and asked to classify them as grammatically correct or incorrect. Participants usually do reasonably well on this task; (3) Although participants can correctly classify novel exemplars as grammatically correct or incorrect, they cannot articulate the grammatical rules of the language. This finding has been interpreted as evidence of implicit learning. 14. The answer should provide the following information: (1) Many study techniques students frequently use, including rereading, massed practice (cramming), summarizing, highlighting, and using imagery or mnemonic devices have little utility; (2) Two particularly effective techniques are distributed practice and practice testing. Distributed practice spreads out study over time, as opposed to cramming the material (massed practice). Practice testing involves taking practice tests of the to-be-learned material; (3) Distributed practice and practice testing usually are not students' first choice of study techniques, in part because they are difficult. Distributed practice entails a longer time commitment than massed practice and requires that students retrieve information learned at the last study session. Practice testing also requires a considerable retrieval component, especially so with more difficult material. 
Thus, although these techniques are more difficult than more common techniques such as rereading, highlighting, and summarizing, these difficulties lead to the desirable outcome of increased learning.