Classical conditioning takes its name from the experimental procedure devised by the physiologist Ivan Pavlov (1849-1936), who shifted his focus from the digestive system to conditioning after noticing that a dog salivated when it saw the bucket in which its food was kept.
Pavlov devised an instrument to measure the dog's salivation when it was given meat powder. The meat powder was the unconditioned stimulus (UCS) and the salivation an unconditioned response (UCR); unconditioned means that the response is automatic, based on instinct. He then rang a bell, a neutral stimulus, and directly afterwards gave the dog some meat powder (UCS). The dog responded by salivating. Pavlov repeated this several times a day for a week and discovered that if he rang the bell without giving the dog meat powder, it still salivated. The bell had become a conditioned stimulus (CS), and the salivation a conditioned response (CR), as it had been learned.
If a neutral stimulus that does not produce a response is repeatedly paired with a UCS that does produce a response, then the neutral stimulus will become a CS and also produce a response.
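This pairing process can be sketched as a toy simulation. The model below is a minimal, illustrative one (the learning rate and maximum strength are assumptions loosely in the spirit of the Rescorla-Wagner rule, not Pavlov's own data): each CS-UCS pairing closes part of the remaining gap between the current associative strength and its asymptote.

```python
# Toy model of classical conditioning acquisition. The learning rate and
# asymptote are illustrative assumptions, not measurements from Pavlov's work.

def condition(pairings, learning_rate=0.3, max_strength=1.0):
    """Return the associative strength of the CS after repeated CS-UCS pairings."""
    strength = 0.0  # the neutral stimulus starts with no association
    for _ in range(pairings):
        # each pairing closes part of the gap to the maximum strength
        strength += learning_rate * (max_strength - strength)
    return strength

print(round(condition(1), 3))   # weak response after a single pairing
print(round(condition(10), 3))  # near-asymptotic response after many pairings
```

The diminishing returns in this sketch mirror the observation that early pairings produce the largest gains in the conditioned response.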
Principles of Classical Conditioning
+ Stimulus Generalisation – stimuli similar to the CS (a bell with a slightly higher ringing tone than the original, for example) will also tend to evoke the salivary response. The more the new stimulus differs from the CS, the weaker the CR will be.
+ Stimulus Discrimination – this refers to the ability to respond to a CS but not to other, similar stimuli. Pavlov first paired a black ellipse with the meat powder until it produced a CR. He then stopped reinforcing the ellipse and instead repeatedly paired a similar shape, a black circle, with the meat powder (UCS) until the circle produced a CR. The dog came to salivate to the circle but not to the ellipse: it was able to discriminate between the shapes.
+ Higher Order Conditioning – in a further experiment, Pavlov paired a metronome (CS) with the meat powder (UCS). Once the metronome evoked a CR, he paired it with a black square (a neutral stimulus) but no UCS. After a while the square evoked a CR even though it had never itself been paired with the UCS.
+ Extinction – this occurs when the CS loses its ability to produce a CR. Pavlov presented the CS but did not reinforce it with the UCS; after several such trials the dog no longer salivated. If the CS is presented again after a time-lapse, however, the CR may return in a weaker form. This is known as spontaneous recovery.
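Extinction can be sketched in the same toy style as acquisition. The decay rate and threshold below are illustrative assumptions, not Pavlov's measurements: each unreinforced presentation of the CS weakens the associative strength by a fixed proportion.

```python
# Sketch of extinction: presenting the CS without the UCS weakens the CR.
# The decay rate is an illustrative assumption, not taken from Pavlov's data.

def extinguish(strength, unreinforced_trials, decay=0.4):
    """Associative strength after the CS is repeatedly presented alone."""
    for _ in range(unreinforced_trials):
        strength -= decay * strength  # each unreinforced trial weakens the CR
    return strength

cr = 1.0                     # a fully conditioned response
after = extinguish(cr, 8)    # eight trials with the CS alone
print(after < 0.05)          # prints True: the CR is effectively extinguished
```

Spontaneous recovery would correspond to the strength partially rebounding after a rest period, rather than starting again from zero.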
Applications of Classical Conditioning
Classical conditioning is used to treat people with phobias, using various methods. The first is systematic desensitisation. In the case of arachnophobia, the person would be shown the mildest image of a spider, e.g. a cartoon drawing, and then asked how they felt. The image of the spider would then be gradually intensified, made more realistic, each step followed by an assessment of the person's reaction. This graded sequence is known as a hierarchy of stimulus intensity. Reciprocal inhibition speeds this process up by getting the person to relax more (by using hypnotherapy, for example). Finally there is flooding, where the person is forced to confront the phobia in its most extreme form for as long as they can bear it. This is the most effective technique, but less extreme techniques are generally favoured.
Operant conditioning began with experiments by E.L. Thorndike (1911), who built a puzzle box into which he put a hungry cat (see figure 2).
Figure 2 – Thorndike's puzzle box
The door of the box was held shut by a spring on a pulley with a loop on the end; if the cat pulled the loop, the door would open. The cat could see and smell the fish and reacted by meowing, prowling around the box and making various other responses until it eventually pulled the loop, opened the door and escaped to eat the fish. Thorndike called this outcome a 'pleasant consequence'. When he put the cat back in the box, it arrived at the pleasant consequence a little quicker than before. From this experiment he formulated the Law of Effect, which states that if the response to a stimulus is followed by pleasant consequences it becomes 'stamped in' to the organism and is more likely to occur to the stimulus in the future. An example of this would be giving a biscuit to a child for picking up all of its toys. If the response does not have pleasant consequences it becomes 'erased' and less likely to occur with the stimulus in the future. This kind of learning occurs purely through trial and error, with no thought involved. Because the response is instrumental in producing the consequence, this approach has been named instrumental conditioning.
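The Law of Effect amounts to a simple trial-and-error rule: responses followed by reward are 'stamped in', while the rest fade. A hedged sketch of the puzzle-box situation (the response names, weights and update rule are invented for illustration, not Thorndike's procedure):

```python
import random

# Toy sketch of the Law of Effect: the weight of a rewarded response grows,
# so it is selected more and more often; unrewarded responses are 'erased'.
# All names and numbers here are illustrative assumptions.

def run_trials(n_trials, responses=("meow", "prowl", "pull_loop"), seed=0):
    rng = random.Random(seed)
    weights = {r: 1.0 for r in responses}  # all responses equally likely at first
    for _ in range(n_trials):
        pick = rng.choices(list(weights), weights=list(weights.values()))[0]
        if pick == "pull_loop":      # pleasant consequence: escape and food
            weights[pick] += 1.0     # the response is stamped in
        else:
            weights[pick] = max(0.1, weights[pick] - 0.2)  # gradually erased
    return weights

weights = run_trials(200)
# with this seed, "pull_loop" ends with by far the highest weight
print(max(weights, key=weights.get))
```

The snowball effect in this sketch corresponds to the cat escaping a little quicker on each return to the box.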
B.F. Skinner (1904-1997) refined instrumental conditioning and proposed his own approach, operant conditioning. Instead of using Thorndike's term 'pleasant consequences', he used the term positive reinforcement. Skinner devised machines for his experiments, named Skinner boxes, and he mostly used rats and pigeons. Inside these boxes was a lever, which the animal had to press to open a food tray and receive food (positive reinforcement). The animals showed all kinds of behaviour in the box but eventually, through exploration, pressed the lever and received the food. Once this had been conditioned, the other behaviours died out, as they were not reinforced. The pressing of the lever is a CR.
A quicker way of getting the animal to press the lever is known as shaping. Here, the animal is reinforced for getting nearer to the lever, increasing its chances of pressing the lever accidentally. Once the animal has pressed the lever, only that response is reinforced.
Skinner also introduced the term negative reinforcement, in which a behaviour is strengthened because it removes or avoids an unpleasant stimulus. An example of this would be a child who is punished for not picking up all of its toys: it becomes more likely to pick them up in future, and the avoidance of the punishment is what negatively reinforces the tidying behaviour.
Skinner developed five different schedules of reinforcement, which affect both response and extinction rates. Without some level of reinforcement, extinction of the CR will ultimately occur.
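Two of these schedules can be sketched as simple rules for deciding when a response earns reinforcement. The function names and parameter values below are illustrative assumptions, not Skinner's specifications:

```python
# Minimal sketches of two reinforcement schedules (illustrative parameters).

def fixed_ratio(response_count, ratio=5):
    """Fixed-ratio schedule: reinforce every Nth response (e.g. FR-5)."""
    return response_count % ratio == 0

def fixed_interval(now, last_reward_time, interval=60.0):
    """Fixed-interval schedule: reinforce the first response once a set
    time has elapsed since the last reward (e.g. FI-60s)."""
    return now - last_reward_time >= interval

# Under FR-5, only every fifth lever press is reinforced:
print([n for n in range(1, 16) if fixed_ratio(n)])  # [5, 10, 15]
```

Intermittent schedules like these typically produce responses that are slower to extinguish than continuous reinforcement, which is one reason the choice of schedule matters for both response and extinction rates.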
Both classical and operant conditioning approach basic learning phenomena from a behaviourist perspective, attempting to explain how specific patterns of behaviour are acquired in the presence of well-defined stimuli linked to a response. As shown, stimulus generalisation, discrimination and extinction are all common characteristics.
The common limitation is that they are based purely upon observed behaviour and fail to reflect upon the unobservable contents of consciousness. Both rely upon the premise that, for learning to have taken place, a change in behaviour must be displayed. Is it not possible to change behaviour without having learnt anything, or, conversely, for behaviour to remain the same despite something having been learnt?
Classical conditioning deals only with involuntary behaviour, whereas operant conditioning also deals with voluntary behaviour.
In operant conditioning the learner must produce a correct response in order to be reinforced, which strengthens that response, whereas in classical conditioning the learner is reinforced automatically while learning to respond to the neutral stimulus. Another difference is that in operant conditioning the form of behaviour to be learned or extinguished, through positive reinforcement or punishment, is determined by the experimenter and can be achieved more effectively through shaping.
We have seen that classical and operant conditioning have both made valuable contributions to our understanding of the learning process. Of the two, operant conditioning is the more adaptable, employing shaping and schedules of reinforcement, and thus goes some way towards overcoming the shortcomings and fragility of the rigid classical model.
It is apparent that both models, with their heavy behaviourist leaning, fall short of explaining the complexities of human behaviour. Making deductions about human behaviour from mere observation of hungry animals is perhaps, at best, misguided. To what extent can we really extrapolate from animals to humans?
Fortunately for us, life is not just a successive journey of escaping Skinner's boxes to enjoy our next Pavlovian dish. Above the physiological level of existence, areas of higher motivation and consciousness need to be explored for us to gain a fuller insight into what it is to learn.
Pavlov, I.P. (1849-1936): class notes.
Pavlov, I.P. (1849-1936): Atkinson, R.L., Atkinson, R.C., Smith, E., Bem, D.J. and Hilgard, E.R., Introduction to Psychology, 10th Edition, p. 249.
Skinner, B.F. (1904-1997): class notes.
Thorndike, E.L. (1911): Hardy, M. and Heyes, S., Beginning Psychology: A Comprehensive Introduction to Psychology, pp. 41-44.