What is the difference between higher-order conditioning and sensory preconditioning?




















This demonstrates that a neutral sensory stimulus (the light) can be used to block predictions of another neutral sensory stimulus (the noise), in a manner that transcends the scalar value inherent in an outcome like food. Again, this supports the idea that training during SPC is supported by the development of sensory-specific representations between specific stimuli (Figure 2).

Procedural paradigms for blocking and blocking of sensory preconditioning. (A) Blocking involves first pairing a stimulus (e.g., a tone) with an outcome (e.g., food). Then the tone is presented in compound with another novel stimulus (e.g., a light), and the compound is paired with the same outcome. Blocking is said to occur when responding to the light is reduced as a consequence of the blocking procedure. (B) Blocking of sensory preconditioning is when subjects first learn that two neutral stimuli are related in time (e.g., a light followed by a tone).

Then the light is presented in compound with another neutral stimulus (e.g., a noise), and the compound is followed by the tone. Like blocking with food rewards, this procedure also reduces the sensory preconditioning effect. This demonstrates that the tone can serve as a sensory-specific prediction, which can be blocked much like a food reward that has inherent value. This supports the idea that SPC is mediated by a representation of a sensory-specific relationship between the tone and light. It has also been demonstrated that SPC explicitly does not involve the transfer of general value.
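Both forms of blocking follow naturally from a simple error-correction rule such as the Rescorla-Wagner model, in which all stimuli present on a trial share a single prediction error. The sketch below illustrates the food-reward case in panel A; the stimulus names, trial counts, and parameter values are illustrative assumptions, not values from any particular experiment:

```python
# Minimal Rescorla-Wagner sketch of blocking (illustrative parameters).
# Learning rule: each present stimulus gains alpha * (reward - summed prediction).

def rw_update(V, present, reward, alpha=0.3):
    """Update associative strengths for the stimuli present on one trial."""
    error = reward - sum(V[s] for s in present)  # prediction error shared by all stimuli
    for s in present:
        V[s] += alpha * error
    return V

V = {"tone": 0.0, "light": 0.0}

# Phase 1: the tone alone predicts food, so the tone absorbs the learning.
for _ in range(50):
    rw_update(V, ["tone"], reward=1.0)

# Phase 2: tone + light in compound predict the same food.
for _ in range(50):
    rw_update(V, ["tone", "light"], reward=1.0)

print(V)  # tone near 1.0; light stays near 0 -> learning about the light is "blocked"
```

Because the tone already predicts the food by phase 2, the shared error is near zero and the light gains almost no associative strength, which is the blocking effect.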

Using a standard SPC design, in which the light and tone are paired together and the tone is then paired with food, Sharpe et al. found that the light acquired a sensory-specific prediction of food but no general value. That is, the light would promote the appetitive response of going to the location where food is usually delivered; however, the rats would not press a lever that produced the light. Thus, SPC provides strong evidence that animals are capable of learning associations between various neutral stimuli, which they can use to build internal models and help navigate toward rewards.

Compatible with the idea that SPC promotes the development of complex internal models of stimulus relationships, SPC recruits neural circuits known to play a role in this type of inferential processing, including the hippocampus and orbitofrontal cortex. For example, hippocampal neurons in CA1 increase in excitability during the pairing of the light and tone in SPC, and this excitability correlates with the future response to the light after its pairing with the food-predictive tone.

Further, subsequent lesions of those same stimulus-responsive neurons in CA1 disrupt responding to the light, but not to the food-predictive tone (Port et al.). The role of the hippocampus is also supported by studies in humans: neural activity observed in the hippocampus to the light during SPC is re-evoked when the tone is paired with reward, suggesting that the cognitive framework supporting SPC develops in the hippocampus (Wimmer and Shohamy). Recently, Barron et al. provided causal evidence for this role.

Specifically, optogenetic inhibition of CA1 neurons at test reduces responding to the light. Finally, areas adjacent and heavily connected to the hippocampus (e.g., the perirhinal cortex) also contribute. Indeed, Wong et al. implicated the perirhinal cortex in SPC. One interpretation of these data is that while the tone was paired with the outcome, the perirhinal cortex recruited a representation of the light, which was then associated with the outcome (Doll and Daw; Sharpe et al.). Thus, while SPC is often thought to rely on a chain-like association between light, tone, and outcome, the perirhinal cortex might be critical in SPC procedures that promote mediated conditioning (i.e., where a representation of the absent light is evoked during tone-food pairing and directly associated with the outcome).

In any event, these studies establish the hippocampus and several adjacent regions as critical to the development of SPC, often supporting a cognitive account of SPC but in other cases supporting the mediated account.

The orbitofrontal cortex is similarly important. Specifically, neurons in the orbitofrontal cortex acquire responses to the light and tone during SPC in a manner that reflects the development of a sensory-specific association between the light and tone (Sadacca et al.). Further, optogenetic inhibition of these neurons prevents the development of the association between the light and tone, while pharmacological inactivation of the orbitofrontal cortex at test also reduces responding.

This strongly implicates the orbitofrontal cortex in the stimulus-stimulus associations at play in SPC, consistent with a core function of the orbitofrontal cortex in representing and navigating the structure of our environments (Schuck et al.).

Given the roles of both the hippocampus and orbitofrontal cortex in SPC, and their complementary roles in learning, it will be of interest for future research to examine how these two regions interact to produce the complex associations that drive behavior in SPC.

One of the modern success stories of neuroscience has been the discovery that dopamine neurons in the midbrain serve as a neural substrate for the reward prediction errors that drive appetitive Pavlovian conditioning (Waelti et al.; Schultz et al.). For example, these neurons exhibit a phasic response if an animal is given a reward in an unpredictable manner, but not if the animal has learned that a stimulus reliably predicts the delivery of that reward. This also works in reverse: if a reward was expected but not delivered, dopamine neurons show a phasic decrease in firing from baseline. Thus, these neurons follow the mathematical patterns described in error-reduction models of associative learning.
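This firing pattern maps onto the temporal-difference prediction error, delta_t = r_t + gamma * V(s_{t+1}) - V(s_t). The toy simulation below (all parameters illustrative; the cue itself is treated as unpredicted, so the error at cue onset equals the cue's acquired value) reproduces the three signatures: a burst to unexpected reward, no burst once the reward is predicted, and a dip when a predicted reward is omitted:

```python
# Toy temporal-difference (TD) sketch of dopamine-like prediction errors.
# Illustrative parameters; not fitted to any recorded data.
GAMMA, ALPHA = 1.0, 0.2

def trial(V, reward):
    """One cue->outcome trial. Returns (error at cue onset, error at outcome).

    The cue arrives unpredicted, so the error at cue onset is the value the
    cue has acquired; the error at outcome is reward minus that value.
    """
    cue_error = GAMMA * V["cue"]        # burst proportional to learned cue value
    outcome_error = reward - V["cue"]   # reward prediction error at outcome time
    V["cue"] += ALPHA * outcome_error   # value update toward the outcome
    return cue_error, outcome_error

V = {"cue": 0.0}
first_cue, first_rew = trial(V, 1.0)      # naive: no cue response, burst at reward
for _ in range(200):
    trial(V, 1.0)                         # training: the error migrates to the cue
trained_cue, trained_rew = trial(V, 1.0)  # trained: burst at cue, none at reward
_, omission = trial(V, 0.0)               # omission: dip below baseline at reward time

print(first_rew, trained_cue, trained_rew, omission)
```

Over training, the positive error moves from the time of reward to the time of the cue, while omission of the expected reward produces a negative error, matching the phasic increases and decreases described above.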

The content of the information carried by the phasic dopamine signal has been the topic of much debate. Initially, Schultz and colleagues described the increase in dopamine firing as reflecting the transfer of the scalar value inherent in the reward back to a stimulus that predicts its occurrence (Schultz). This conceptualization of phasic dopamine firing is consistent with the model-free temporal difference reinforcement learning (TDRL) algorithm described by Sutton and Barto. Critical to this proposal is that the reward-predictive stimulus is endowed with the value inherent in the reward, not that the stimulus becomes associated with a sensory-specific representation of that reward.

While this value is sufficient to alter behavior toward the reward-predictive stimulus (i.e., to produce a conditioned response), it carries no information about the sensory identity of the predicted reward. SPC and SOC are two procedures that have helped us understand how the dopamine prediction error contributes to learning and behavior. Of course, central to the narrative that dopamine represents a reward prediction error is the idea that the dopamine signal back-propagates to the earliest predictor of reward.

This raises the question of whether the dopamine error occurring at the onset of a reward-predictive stimulus can support conditioning in its own right. Maes et al. tested this directly. Rats were first trained that a tone predicted food. Then, the light was paired with the tone, and dopamine neurons in the VTA were inhibited across the transition between the light and tone, to prevent a prediction error from occurring. This inhibition prevented second-order conditioning to the light. The involvement of the prediction error in SOC is consistent with it acting either as a teaching signal that facilitates the development of associations between stimuli or as a value signal.
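In TD terms, the logic of this experiment is straightforward: once the tone has value, its onset generates a positive prediction error that can train the preceding light, and silencing that error should abolish the learning. A minimal sketch of that logic (parameter values and trial counts are illustrative assumptions, and extinction of the tone is deliberately ignored for simplicity):

```python
# Toy TD sketch of second-order conditioning: a valued tone trains a
# preceding light via the prediction error at the light->tone transition.
ALPHA = 0.2

def pair_light_tone(V, dopamine_intact=True):
    """One light->tone pairing (no food delivered)."""
    delta = V["tone"] - V["light"]   # prediction error at tone onset
    if not dopamine_intact:
        delta = 0.0                  # optogenetic inhibition: no teaching signal
    V["light"] += ALPHA * delta
    return V

# After first-order training, the tone predicts food (value ~1).
V_ctrl = {"tone": 1.0, "light": 0.0}
V_inhib = {"tone": 1.0, "light": 0.0}

for _ in range(30):
    pair_light_tone(V_ctrl, dopamine_intact=True)
    pair_light_tone(V_inhib, dopamine_intact=False)

print(V_ctrl["light"], V_inhib["light"])
```

In the control condition the light inherits the tone's value across pairings, whereas zeroing the transition error leaves the light untrained, mirroring the behavioral result.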

However, examining the role of the prediction error in SPC can dissociate between these possibilities. In fact, all error-correction models of learning that rely on value to drive learning (e.g., TDRL) predict that no learning should occur between two neutral stimuli, because neither has any value to transfer. Yet Sharpe et al. found that optogenetic stimulation of dopamine neurons during the pairing of the light and tone enhanced subsequent responding to the light, without making the light itself valuable. This demonstrates that stimulation of dopamine neurons facilitated the sensory-specific associations present in SPC, without adding value to these associations.
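The dissociation can be made concrete: under a purely value-driven update, the prediction error at the light-tone transition in SPC is identically zero, so such a model has nothing with which to teach. A minimal illustration, using hypothetical value terms:

```python
# In SPC the light-tone pairing occurs before any reward training, so a
# purely value-driven prediction error at the transition is exactly zero.
V = {"light": 0.0, "tone": 0.0}   # both stimuli are neutral (no acquired value)
delta = V["tone"] - V["light"]    # value-based TD error at the light->tone transition
print(delta)                      # 0.0: a pure value signal predicts no learning here
```

That SPC learning nevertheless occurs, and can be enhanced by dopamine stimulation, is what motivates treating the dopamine error as a general teaching signal rather than a value signal.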

These data are consistent with the dopamine prediction error acting as a teaching signal that drives associations between stimuli, not as a signal that makes antecedent stimuli valuable. This finding is important because it positions dopamine to facilitate Pavlovian conditioning in a more flexible manner than previously conceptualized. Further, the fact that these higher-order phenomena are associatively and neurologically distinct, and yet both fundamentally driven by dopamine, demonstrates that the role of dopamine prediction errors in learning need not be constrained by specific associative or neurological structures.

Put another way, while dopamine was once thought to act as a value signal, which restricts the role it can play in associative learning, its involvement in higher-order conditioning processes suggests a much broader role for dopamine as a critical driver of Hebbian plasticity in many regions of the brain.

What are the implications of dopamine being involved in learning in such a broad way? To understand this, we need to consider the more general role that higher-order stimulus relations play in complex behavior and cognitive processes. For instance, Blaisdell and colleagues have explored the role of SPC in forming cognitive maps for spatial search (Blaisdell and Cook; Sawa et al.). In these studies, pigeons first learned the spatial relationship between two landmarks (Landmark 1 and Landmark 2). Then, pigeons were separately taught a relationship between Landmark 1 and the hidden location of food.

At test, pigeons were presented with Landmark 2, and they were able to locate the food despite never having experienced the relationship between Landmark 2 and the food cup (Blaisdell and Cook). Similar results were obtained with pigeons using a modified version of this task on an operant touchscreen (Sawa et al.). At present, there has been little investigation of the neural basis of the integration of these separately learned spatial maps, but it is exciting to think that dopamine may be critical for such sophisticated cognitive processes.

Indeed, mice lacking D1 dopamine receptors showed deficits in several spatial learning tasks without showing deficits in visual or motor performance (El-Ghundi et al.). There is also evidence for the integration of temporal maps in higher-order conditioning procedures. The temporal coding hypothesis describes the role time plays in associative learning experiments (Miller and Barnet; Savastano and Miller; Arcediano et al.). Analogous to the role of higher-order conditioning in the integration of spatial maps, temporal maps acquired during Pavlovian conditioning can be integrated as a result of higher-order conditioning procedures.

In one example, Leising et al. paired a light with a tone such that the tone occurred either early or late during the light. The tone was then paired with food, and the appetitive response to the light was examined. The appetitive response was higher at the beginning of the light in group early, relative to group late. Similar results have been reported using fear conditioning procedures in rats (Savastano and Miller) and appetitive procedures in humans. This research demonstrates that rats had not only encoded the relationships between the light and tone but had encoded these relationships into a temporal map.

Again, it would be interesting to think about how dopamine might contribute to the inferred temporal relationships that can be formed during the SPC procedure. Higher-order associative processes even appear to be involved in learning causal models of events.

In a study using appetitive SPC, Blaisdell et al. demonstrated that rats form causal models of events. Rats that simply observed the light expected food to follow. However, if they are taught that the tone produces both the light and the food (i.e., the tone is a common cause of both events), rats that produce the light through their own lever pressing do not expect food.

This is because they reason that, in the latter case, the light was caused by their own action and not by the tone, as it was in the former case. Thus, they did not expect the light to produce a food reward. This sophisticated reasoning process exhibited by these rats is akin to that observed in human adults.

These results, and others like them, demonstrate that higher-order associative processes contribute to spatial, temporal, and causal cognition. However, there is a dearth of research on the role of dopamine, or other neural substrates, in these domains. What is next for those interested in understanding how dopamine and higher-order processes give rise to more complex cognition?

One direction is that these sophisticated learning procedures could be coupled with recently developed technologies to record from and manipulate dopamine and related circuits.

Because these techniques allow recording and manipulation of genetically defined circuits with fine temporal precision, they are well suited to asking when and where dopamine contributes to higher-order learning. Similarly, while investigations of the circuits that support learning about neutral stimuli in SPC are ongoing, there is recent evidence that additional regions are involved. This raises the possibility that more than one system is at play in the formation of these associations. More generally, future research utilizing these tools in combination with higher-order tasks would help to elucidate how we make sense of the world around us, and how this may go awry in psychological disorders.

All authors contributed to the synthesis of research and writing of the article. All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers.

Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Unconditioned Response (UR). An action that the unconditioned stimulus automatically elicits.

Conditioned Stimulus (CS). Initially a neutral stimulus; after repeated pairings with the unconditioned stimulus, the CS elicits the same response as the US.

Conditioned Response (CR). The response elicited by the conditioned stimulus as a result of the training.

Classical conditioning is a method used to study associative learning.

What is an association? Examples of Classical Conditioning:

Positive Contingency. Two stimuli tend to occur together, and neither tends to occur when the other is absent. This produces classical excitatory conditioning.

Negative Contingency. One stimulus regularly precedes the absence of another stimulus that is present at other times. This produces classical inhibitory conditioning.

No Contingency. Would you predict classical conditioning to occur in this situation?

Human Conditioning Paradigms.


We bring a myriad of conditioning experiences into any given situation, making the task of determining the role of conditioning in our lives fairly complex.

For example, as we talked about in class, I intentionally said I would "call on people at random" to answer a question. I did it to evoke a Pavlovian response, which it did. Let's look at some extensions of the basic paradigm that will help us understand some of the complexities of Pavlovian conditioning: Stimulus Generalization. We rarely encounter the exact same situation twice. There's always some change in the environment. Usually, this new environment has some physical resemblance to an environment with which we have some history.

The word "some" is the crucial element--the more similar the new environment is to something we already know, the more we will respond in a similar way. For example, the first time you walked into Principles of Learning and Behavior, you did not have a history with this class, yet you had certain responses to the environment.

This was because there was much in common with previous classes you've been in, and you responded similarly to components of the class that were like previous classes (feeling depressed when you saw how much work was required, etc.). Eventually, you will refine your responses to the stimuli associated with this course, but your initial responses are an example of stimulus generalization.

So, resemblance is another way that new reflexes can be developed. Like other parts of conditioning, stimulus generalization is adaptive--we don't need to learn everything all over again every time there's some change in the environment. Stop for a minute and reflect on the term stimulus generalization.



