PSYCH 111: Introduction to Psychology: Week 9: Chapter 6 Quizzes: Learning

PSYCH 111: Introduction to Psychology: Week 9: Chapter 6 Quizzes: Learning. Top Test Bank Platform | Self-Directed Excellence | Updated Weekly | Start Free Today

Access exact questions for PSYCH 111: Introduction to Psychology: Week 9: Chapter 6 Quizzes: Learning. 100% passing rate guaranteed. Fewer study hours, guaranteed grades.

PSYCH 111: Introduction to Psychology: Week 9: Chapter 6 Quizzes: Learning practice questions with answers | nursingprepplug.com
Questions: 47+ | Duration: 2 hrs 21 min
$15/month

Detailed answer explanations. Well-structured questions covering all topics, accompanied by organized images.

Purchase for $15/month

About PSYCH 111: Introduction to Psychology: Week 9: Chapter 6 Quizzes: Learning


Free PSYCH 111: Introduction to Psychology: Week 9: Chapter 6 Quizzes: Learning Questions

1.

Which of the following lists the correct steps in Bandura’s modeling process?

  • Motivation, retention, attention, reproduction

  • Attention, retention, reproduction, motivation

  • Retention, motivation, attention, reproduction

  • Attention, reproduction, motivation, retention


Correct Answer:

B. Attention, retention, reproduction, motivation

Explanation:

Bandura identified four key steps necessary for observational learning to occur. First is attention, where the learner must focus on the model’s behavior. Next is retention, the ability to remember what was observed. The third step is reproduction, where the learner demonstrates the behavior. Finally, motivation determines whether the learner chooses to perform the behavior, often influenced by vicarious reinforcement or punishment. Together, these steps explain how modeling leads to lasting learning.

Why Other Options Are Wrong:

A. Motivation, retention, attention, reproduction

This sequence is incorrect because attention must come first for learning to occur. Without focus, no other steps can follow.

C. Retention, motivation, attention, reproduction

This order is inaccurate. Motivation occurs last, after the learner has paid attention, retained the information, and can reproduce the behavior.

D. Attention, reproduction, motivation, retention

This skips the crucial step of retention before reproduction. One cannot reproduce a behavior without first remembering it.


2.

Which of the following best illustrates a fixed-interval reinforcement schedule?

  • A worker receives $20 for every five shirts they sew

  • A student receives a paycheck every two weeks regardless of performance

  • A slot machine pays out after an unpredictable number of pulls

  • A dog gets a treat after every third time it rolls over


Correct Answer:

B. A student receives a paycheck every two weeks regardless of performance

Explanation:

A fixed-interval reinforcement schedule provides reinforcement after a consistent, predictable amount of time has passed. The reinforcement does not depend on the number of responses, only on time. For example, a paycheck issued every two weeks or pain medication made available once per hour both demonstrate fixed intervals. This type of schedule tends to produce a "scalloped" response pattern, where behavior increases as the reinforcement time approaches.

Why Other Options Are Wrong:

A. A worker receives $20 for every five shirts they sew

This describes a fixed-ratio schedule because reinforcement is delivered after a specific number of responses (five shirts), not after a fixed amount of time.

C. A slot machine pays out after an unpredictable number of pulls

This is a variable-ratio schedule. The reinforcement is based on the number of responses, but the number required is unpredictable and constantly changing.

D. A dog gets a treat after every third time it rolls over

This also represents a fixed-ratio schedule, where reinforcement occurs after a set number of responses (every third rollover), not according to a time interval.


3.

Which of the following best describes the three types of models in Bandura’s theory of observational learning?

  • Live, verbal, and symbolic models that show, explain, or represent behaviors

  • Primary, secondary, and tertiary models that depend on reinforcement schedules

  • Fixed, variable, and interval models that control reinforcement timing

  • Immediate, delayed, and continuous models that determine learning speed


Correct Answer:

A. Live, verbal, and symbolic models that show, explain, or represent behaviors

Explanation:

Albert Bandura identified three types of models in observational learning. A live model directly demonstrates behavior, such as standing on a surfboard. A verbal instructional model explains the behavior without performing it, such as a coach giving directions. A symbolic model represents behavior through media, including books, movies, or television. These forms of modeling show how people learn not just by doing but also by observing, listening, and interpreting symbolic representations.

Why Other Options Are Wrong:

B. Primary, secondary, and tertiary models that depend on reinforcement schedules

This option incorrectly uses reinforcement terminology. Bandura’s models are not categorized this way.

C. Fixed, variable, and interval models that control reinforcement timing

These terms apply to reinforcement schedules in operant conditioning, not to modeling in observational learning.

D. Immediate, delayed, and continuous models that determine learning speed

This option incorrectly frames models in terms of timing. Bandura’s categories are based on how behavior is demonstrated, not the speed of learning.


4.

Which of the following best explains why gambling is so strongly linked to variable-ratio reinforcement schedules?

  • Reinforcement is delivered after a fixed amount of time, creating predictable behavior patterns

  • Reinforcement is delivered after every single response, ensuring fast learning

  • Reinforcement is delivered unpredictably after varying numbers of responses, producing persistent behavior

  • Reinforcement is delivered after a set number of responses, leading to rapid but predictable responding


Correct Answer:

C. Reinforcement is delivered unpredictably after varying numbers of responses, producing persistent behavior

Explanation:

Gambling is maintained through a variable-ratio reinforcement schedule, where wins occur after an unpredictable number of attempts. This unpredictability generates high, steady response rates and makes the behavior extremely resistant to extinction. Gamblers continue playing because each bet could be the one that pays off, just as a child’s tantrum may persist if it is occasionally rewarded. Skinner highlighted this power, noting that even long periods without reinforcement cannot easily extinguish the behavior.

Why Other Options Are Wrong:

A. Reinforcement is delivered after a fixed amount of time, creating predictable behavior patterns

This describes a fixed-interval schedule. It does not produce the persistence or unpredictability associated with gambling.

B. Reinforcement is delivered after every single response, ensuring fast learning

This is continuous reinforcement. While effective for teaching new behaviors, it does not explain the addictive persistence of gambling.

D. Reinforcement is delivered after a set number of responses, leading to rapid but predictable responding

This represents a fixed-ratio schedule. Though it produces high response rates, it lacks the unpredictability that makes variable-ratio schedules so resistant to extinction.


5.

How do John B. Watson’s ideas build on Ivan Pavlov’s work?

  • Pavlov conditioned human emotions, while Watson conditioned animal reflexes

  • Pavlov’s work showed reflexes could be conditioned in dogs, and Watson extended this idea to conditioning human emotions

  • Pavlov rejected conditioning, while Watson used it exclusively to study behavior

  • Pavlov focused on operant conditioning, while Watson focused on classical conditioning


Correct Answer:

B. Pavlov’s work showed reflexes could be conditioned in dogs, and Watson extended this idea to conditioning human emotions

Explanation:

Ivan Pavlov demonstrated classical conditioning through his experiments with dogs, showing that reflexive responses (like salivation) could be conditioned by pairing a neutral stimulus with food. John B. Watson applied these principles to humans, arguing that emotions such as fear could also be conditioned. His famous “Little Albert” experiment illustrated that fear of animals could be learned by pairing them with loud, frightening sounds. Watson’s extension of Pavlov’s work supported his view that human behavior is largely the result of conditioned responses.

Why Other Options Are Wrong:

A. Pavlov conditioned human emotions, while Watson conditioned animal reflexes

This reverses the roles—Pavlov worked with dogs’ reflexes, not human emotions.

C. Pavlov rejected conditioning, while Watson used it exclusively to study behavior

Pavlov discovered classical conditioning, so he did not reject it.

D. Pavlov focused on operant conditioning, while Watson focused on classical conditioning

Pavlov studied classical conditioning, not operant conditioning, and Watson extended that to human behavior.


6.

In operant conditioning, why is shaping an important technique for teaching complex behaviors?

  • Because it punishes incorrect responses until the organism learns the exact desired behavior

  • Because it reinforces only the final target behavior, ignoring all earlier approximations

  • Because it breaks behaviors into small steps and reinforces closer approximations until the final behavior is achieved

  • Because it relies on observational learning rather than reinforcement to establish behavior


Correct Answer:

C. Because it breaks behaviors into small steps and reinforces closer approximations until the final behavior is achieved

Explanation:

Shaping is essential for teaching complex behaviors that are unlikely to occur naturally. The process involves reinforcing small steps (successive approximations) that gradually resemble the target behavior. At first, any behavior resembling the goal is reinforced. Then reinforcement shifts to responses that come closer and closer until only the desired behavior is rewarded. This step-by-step method makes it possible to train animals and humans to perform complex actions, such as learning to play an instrument or teaching a dog to fetch.

Why Other Options Are Wrong:

A. Because it punishes incorrect responses until the organism learns the exact desired behavior

Shaping uses reinforcement, not punishment, to guide learning.

B. Because it reinforces only the final target behavior, ignoring all earlier approximations

If only the final behavior were reinforced, the organism would not know the steps to get there.

D. Because it relies on observational learning rather than reinforcement to establish behavior

Shaping is a reinforcement-based technique, not an observational learning process.


7.

In operant conditioning, what do organisms learn to associate?

  • A neutral stimulus with an unconditioned stimulus to produce a reflexive response

  • A behavior with its consequence, such as reinforcement or punishment

  • Observed actions of others with imitated behaviors

  • A repeated stimulus with reduced responsiveness over time


Correct Answer:

B. A behavior with its consequence, such as reinforcement or punishment

Explanation:

Operant conditioning, developed by B.F. Skinner, is a learning process where behaviors are influenced by their outcomes. If a behavior is followed by reinforcement, it is more likely to be repeated, whereas behaviors followed by punishment are less likely to recur. This principle highlights how organisms adapt their actions to maximize rewards and minimize negative outcomes, shaping future behavior.

Why Other Options Are Wrong:

A. A neutral stimulus with an unconditioned stimulus to produce a reflexive response

This describes classical conditioning, not operant conditioning.

C. Observed actions of others with imitated behaviors

This refers to observational learning, not operant conditioning.

D. A repeated stimulus with reduced responsiveness over time

This describes habituation, a form of non-associative learning, not operant conditioning.


8.

Learning that results from consequences is based on which principle first proposed by Edward Thorndike?

  • The law of effect

  • The law of readiness

  • The principle of classical conditioning

  • The theory of observational learning


Correct Answer:

A. The law of effect

Explanation:

Edward Thorndike’s law of effect states that behaviors followed by satisfying consequences are more likely to be repeated, while those followed by unpleasant outcomes are less likely to occur again. This principle laid the groundwork for operant conditioning, later expanded by B.F. Skinner. It emphasizes the role of reinforcement and punishment in shaping behavior, showing that consequences directly influence the likelihood of a behavior being repeated.

Why Other Options Are Wrong:

B. The law of readiness

Thorndike did propose a law of readiness, but it focuses on preparedness to act, not on consequences shaping learning.

C. The principle of classical conditioning

Classical conditioning was developed by Ivan Pavlov and involves stimulus–response associations, not consequences.

D. The theory of observational learning

This describes Albert Bandura’s work, where learning occurs by watching and imitating others, not through direct consequences.


9.

In classical conditioning, what is the process called when the conditioned response decreases because the unconditioned stimulus is no longer paired with the conditioned stimulus?

  • Acquisition

  • Extinction

  • Generalization

  • Spontaneous recovery


Correct Answer:

B. Extinction

Explanation:

Extinction occurs when the conditioned stimulus (CS) is repeatedly presented without the unconditioned stimulus (UCS), leading to a gradual weakening and eventual disappearance of the conditioned response (CR). For example, if Pavlov’s bell (CS) is rung but food (UCS) is no longer presented, the dog will eventually stop salivating (CR) to the bell. Extinction does not erase learning; it simply suppresses the conditioned response.

Why Other Options Are Wrong:

A. Acquisition

Acquisition is the initial learning stage where the CS and UCS are paired to form an association.

C. Generalization

Generalization occurs when stimuli similar to the conditioned stimulus also trigger the conditioned response.

D. Spontaneous recovery

Spontaneous recovery is when a previously extinguished conditioned response reappears after a pause, not the process of its decline.


10.

In operant conditioning, how do negative reinforcement and punishment differ?

  • Negative reinforcement decreases a behavior, while punishment increases a behavior

  • Negative reinforcement removes an unpleasant stimulus to increase a behavior, while punishment decreases a behavior

  • Negative reinforcement adds a pleasant stimulus to increase a behavior, while punishment removes a stimulus to increase a behavior

  • Negative reinforcement and punishment are the same because both reduce unwanted behavior


Correct Answer:

B. Negative reinforcement removes an unpleasant stimulus to increase a behavior, while punishment decreases a behavior

Explanation:

Negative reinforcement and punishment are often confused, but they serve very different purposes. Negative reinforcement strengthens behavior by removing something unpleasant (e.g., fastening a seatbelt stops the car’s beeping, so seatbelt use increases). Punishment, on the other hand, always decreases a behavior. Punishment can be positive (adding an unpleasant stimulus, like scolding) or negative (removing a pleasant stimulus, like taking away a toy). Thus, reinforcement—positive or negative—always increases behavior, while punishment—positive or negative—always decreases it.

Why Other Options Are Wrong:

A. Negative reinforcement decreases a behavior, while punishment increases a behavior

This is the exact opposite of the correct definitions.

C. Negative reinforcement adds a pleasant stimulus to increase a behavior, while punishment removes a stimulus to increase a behavior

Adding a pleasant stimulus describes positive reinforcement, not negative reinforcement, and punishment never increases behavior.

D. Negative reinforcement and punishment are the same because both reduce unwanted behavior

This is incorrect—negative reinforcement increases a behavior, while punishment decreases it.

