Citations





History - Philosophy - Medicine - Neuroscience - Psychology - Education - Neuroeducation - Computer Science - Robotics - Applications - Art - Sport



Man is a machine so composite that it is impossible to form a clear idea of it, and consequently to define it (La Mettrie, 1748).

La Mettrie, J. O. de (1748). L’homme machine. Leyden.




History : Sciences - Philosophy - Biology - Medicine - Neuroscience - Psychology - Cognitive Science - Education - Computer Science - Robotics


Philosophy : Ethics and Moral Philosophy - Sciences - Life Sciences - Human Sciences - Biology - Medicine - Neuroscience - Psychology - Cognitive Science - Education - Computer Science - Robotics


Medicine : Ethics of Medicine - Journals - Congress


Neuroscience (1) : Prologue - Brain - Nervous system - Endocrine system - Neuron - Synapse - Neurotransmitter - Development - Learning - Homeostasis - Plasticity


Neuroscience (2) : Vision - Visual perception - Attention - Selective attention - Attentional control - Shared attention - Sustained attention - Memory - Language - Metacognition - Consciousness - Decision


Psychology (1) : Prologue - Representation - Modularity - Hierarchy - Processing - Cognitive architecture - Development - Learning


Psychology (2) : Vision - Visual perception - Attention - Selective attention - Attentional control - Shared attention - Sustained attention - Memory - Language - Metacognition - Consciousness - Decision


Psychology (3) : Perception and Action - Selective attention and Memory - Attentional control and Memory - Attention and Language - Attention and Intelligence - Attention and Consciousness - Attention and Development - Attention and Learning - Attention and Meditation


Neuroeducation : Theories - Models - Methods - Practices - eEducation - Programs - Assessment


Computer Science : NeuroEthics - NeuroPsychology - NeuroEducation - Educational Psychology - Specialized Education - Reeducation


Robotics : NeuroEthics - NeuroPsychology - NeuroEducation - Educational Psychology - Specialized Education - Reeducation


Applications : ResearchApplications - NeuroApplications - PsyApplications - EduApplications


Art : Art and Science - Art and Psychology - Art and Education - Art and Reeducation - Music and Language


Sport : Sport and Psychology - Sport and Education - Sport and Reeducation




History

Sciences - Philosophy - Biology - Medicine - Neuroscience - Psychology - Cognitive Science - Education - Computer Science - Robotics

History of Sciences

Citations > History > Sciences

The history of science, like the history of all human ideas, is a history of irresponsible dreams, of obstinacy, and of error. But science is one of the very few human activities — perhaps the only one — in which errors are systematically criticized and fairly often, in time, corrected. This is why we can say that, in science, we often learn from our mistakes, and why we can speak clearly and sensibly about making progress there (Popper, 1963).

Popper, K. (1963). Conjectures and Refutations: The Growth of Scientific Knowledge. New York, London: Routledge & Kegan Paul.

History of Philosophy

Citations > History > Philosophy

There is probably no more abused a term in the history of philosophy than “representation,” and my use of this term differs both from its use in traditional philosophy and from its use in contemporary cognitive psychology and artificial intelligence.... The sense of “representation” in question is meant to be entirely exhausted by the analogy with speech acts: the sense of “represent” in which a belief represents its conditions of satisfaction is the same sense in which a statement represents its conditions of satisfaction. To say that a belief is a representation is simply to say that it has a propositional content and a psychological mode (Searle, 1983, p.12).

Searle, J. (1983). Intentionality: An Essay in the Philosophy of Mind. Cambridge: Cambridge University Press.

History of Biology

Citations > History > Biology

The egg cell ... is a universe. And if we could but know it we would feel in its minute confines the majesty and beauty which match the vast wonder of the world outside us. In it march events that give us the story of all life from the first moment when somehow out of chaos came life and living. That first tremendous upheaval that gave this earth its present contour finds its counterpart in the breaking up of the surface of the egg which conditions all its life to follow (Just, 1939).

Just, E.E. (1939). Basic Methods for Experiments on Eggs of Marine Animals. Philadelphia: P. Blakiston's Son & Co.

History of Medicine

Citations > History > Medicine

The history of Medicine is largely the history of science and philosophy. It is not a narrative of events simply, but more a tracing of the evolution of the various branches of the sciences, the ensemble of which comprises Medicine (Gorton, 1910).

The history of Medicine is... a study of the progress of the science and art of caring for living beings in health and disease, and of ideas fundamental to them, and only incidentally of men who distinguished themselves in their advancement (Gorton, 1910).

Gorton, D.A. (1910). The History of Medicine: Philosophical and Critical, from its Origins to the Twentieth Century. New York, London: G.P. Putnam's Sons.

History of Neuroscience

Citations > History > Neuroscience

From the things that have been ascertained and investigated thus far, I believe it has been sufficiently well established that there is present in animals an electricity which we ... are wont to designate with the general term "animal." ... It is seen most clearly ... in the muscles and nerves (Galvani, 1791).

Galvani, L. (1791). De viribus electricitatis in motu musculari commentarius. Bononiae: Ex Typographia Instituti Scientiarum.

History of Psychology

Citations > History > Psychology

Psychology is the Science of Mental Life, both of its phenomena and of their conditions. The phenomena are such things as we call feelings, desires, cognitions, reasonings, decisions, and the like; and, superficially considered, their variety and complexity is such as to leave a chaotic impression on the observer (James, 1890/1950, p. 1).

James, W. (1890/1950). The Principles of Psychology. Vol. 1. New York: Dover.


This chapter and the next develop a schema of neural action to show how a rapprochement can be made between (1) perceptual generalization, (2) the permanence of learning, and (3) attention, determining tendency, or the like. It is proposed first that a repeated stimulation of specific receptors will lead slowly to the formation of an "assembly" of association-area cells which can act briefly as a closed system after stimulation has ceased; this prolongs the time during which the structural changes of learning can occur, and constitutes the simplest instance of a representative process (image or idea) (Hebb, 1949, p. 60).

Psychologically, these ideas mean (1) that there is a prolonged period of integration of the individual perception, apart from associating the perception with anything else, (2) that an association between two perceptions is likely to be possible only after each one has independently been organized, or integrated, (3) that, even between two integrated perceptions, there may be a considerable variation in the ease with which association can occur. Finally, (4) the apparent necessity of supposing that there would be a "growth," or fractionation and recruitment, in the cell-assembly underlying perception means that there might be significant differences in the properties of perception at different stages of integration (Hebb, 1949, p. 77).

The reader will remember that what we are aiming at here is the solution of a psychological problem. To get psychological theory out of a difficult impasse, one must find a way of reconciling three things without recourse to animism: perceptual generalization, the stability of memory, and the instabilities of attention. As neurophysiology, this and the preceding chapter go beyond the bounds of useful speculation. They make too many steps of inference without experimental check. As psychology, they are part of a preparation for experiment, a search for order in a body of phenomena about which our ideas are confused and contradictory, and the psychological evidence does provide some check on the inferences made here (Hebb, 1949, p. 79).

Hebb, D.O. (1949). The organization of behavior. New York: Wiley.

History of Cognitive Science

Citations > History > Cognitive Science

As is well known, cognitive science has undergone a number of stages, since its inception, which can be placed in the 1940s. It is important to have this history in mind, in schematic form, for to each stage corresponds a specific framework for the mathematics of cognitive science (Andler, 2012, p. 376).

(1) The prehistorical phase (1942–1956) was centered on the recently reborn logic and the just emerging cybernetics. Logic was developed as a branch of mathematics and as a language for representing certain essential mental operations. It was mechanicized in the hands of Turing and others, and biologized by McCulloch and Pitts and others. The broad ambition of cybernetics was to provide an overarching theory of mind, brain and machines, couched in the appropriate language of information and control. (Andler, 2012, p. 376).

(2) The first phase of the historical period (roughly 1956–1980) centered on artificial intelligence (AI), broadly understood as the science of “intelligent” information processing, leading up to the so-called classical, or symbolic paradigm in cognitive science. The formal systems of logic provided the language, and theories (at least notionally) took the form of (computer) programs; we would be more comfortable today calling them models, but at the time it was important not to let the theoretical ambition of AI be watered down: AI was to be the scientific theory of human intelligence (of cognition), not a mere methodology for producing intelligence-like effects. The needed mathematics was logic, automata theory, and the nascent computer science or informatics. (Andler, 2012, p. 376).

(3) Next came (ca. 1980–1995) connectionism or the neural nets approach, which took up the perceptual strand of cybernetics and extended it into a full-fledged framework for cognitive science (and AI), competing with the classical, symbolic approach. Connectionism, which comprises several rather distinct currents, can be applied at the functional or mental level, at the neuronal level, or again at an intermediate level, abstracted from the neuronal level and reflecting the “microstructure” of cognition, understood in informational terms. The mathematics is here much more visible than in the symbolic approach, and also much richer and more varied, comprising fragments of linear algebra, of probability and signal theory, of analysis, and of dynamical systems, although seldom reaching great heights of sophistication. (Andler, 2012, p. 377).

(4) The modern phase, to which the present still belongs, but is morphing into what I venture to call post-modern, is characterized, first and foremost, by the appearance of a new contender for the status of admiral discipline: cognitive neuroscience, supported by functional neuro-imaging technology but also by the strengthening of theoretical neuroscience, which consists in applying the methods of physical modeling to phenomena arising at various levels of organization of the nervous tissue. Mathematical tools have become considerably more sophisticated. Functional imagery calls upon highly complex statistical methods aiming at providing a pictorial representation of the distributed activity in neuronal population, taking a gigantic mass of indirect signals as the basis of an inference to their sources. Theoretical neuroscience helps itself to a vast repertory of mathematical theories. (Andler, 2012, p. 377).

(5) Post-modernism (a notion which I venture to propose here, but which to my knowledge has not been proposed under this or any other name by observers of contemporary cognitive science) is characterized by a breakdown of pragmatic unity and doctrinal consensus. Cognitive science is at a tipping point. Is it on the verge of disintegration, with a majority of programs recategorized inside neuroscience (and more broadly biology), and the rest reintegrating other main disciplines, or is it headed towards a fully integrated field, awaiting a new framework in which mathematics is likely to play a fundamental part? (Andler, 2012, p. 378).

Andler, D. (2012). Mathematics in cognitive science. In D. Dieks, W.J. Gonzalez, S. Hartmann, M. Stöltzner, & M. Weber (Eds.). Probabilities, Laws, and Structures (pp. 363-377). Dordrecht: Springer.

History of Education

Citations > History > Education

History of Computer Science

Citations > History > Computer Science

History of Robotics

Citations > History > Robotics



Philosophy

Ethics and Moral Philosophy - Sciences - Life Sciences - Human Sciences - Biology - Medicine - Neuroscience - Psychology - Cognitive Science - Education - Computer Science - Robotics

Prologue

Citations > Philosophy > Prologue

Let no one enter here who is not a geometer (Plato, -428/-348).

It is perfectly true, as the philosophers say, that life must be understood backwards. But they forget the other proposition, that it must be lived forwards (Kierkegaard, 1843).

Kierkegaard, S. (1843). Journals IV A 164.


Every mental phenomenon is characterized by what the Scholastics of the Middle Ages called the intentional (or mental) inexistence of an object, and what we might call, though not wholly unambiguously, reference to a content, direction toward an object (which is not to be understood here as meaning a thing), or immanent objectivity. Every mental phenomenon includes something as object within itself, although they do not all do so in the same way. In presentation something is presented, in judgement something is affirmed or denied, in love loved, in hate hated, in desire desired and so on. This intentional in-existence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We could, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves. (Brentano, 1874, pp. 201-202).

Brentano, F. (1874). Psychology from an Empirical Standpoint. L.L. McAlister (Ed.). London: Routledge, 1995, pp. 88-89.


Ethics and Moral Philosophy

Citations > Philosophy > Ethics and Moral Philosophy

(1) Act in such a way that the maxim of your action could be established by your will as a universal law (Kant, 1785).

(2) Act in such a way that you always treat humanity, in yourself and in others, as an end and never merely as a means (Kant, 1785).

(3) Act as if you were at once legislator and subject in the republic of free and rational wills (Kant, 1785).

Kant, I. (1785). Fondements de la métaphysique des mœurs. Translated by V. Delbos. Paris: Delagrave, 1960.


Philosophy of Science

I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today—and even professional scientists—seem to me like someone who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is—in my opinion—the mark of distinction between a mere artisan or specialist and a real seeker after truth (Einstein, 1944).

Einstein, A. (1944). Letter to Robert A. Thornton, December 7, 1944. EA-674, Einstein Archive, Hebrew University, Jerusalem.


Philosophy of science is an old and practiced discipline. Both Plato and Aristotle wrote on the subject, and, arguably, some of the pre-Socratics did also. The Middle Ages, both in its Arabic and high Latin periods, made many commentaries and disputations touching on topics in philosophy of science. Of course, the new science of the seventeenth century brought along widespread ruminations and manifold treatises on the nature of science, scientific knowledge and method. The Enlightenment pushed this project further trying to make science and its hallmark method definitive of the rational life. With the industrial revolution, “science” became a synonym for progress. In many places in the Western world, science was venerated as being the peculiarly modern way of thinking. The nineteenth century saw another resurgence of interest when ideas of evolution melded with those of industrial progress and physics achieved a maturity that led some to believe that science was complete. By the end of the century, mathematics had found alternatives to Euclidean geometry and logic had become a newly re-admired discipline (Machamer, 2001, p. 1).

But just before the turn to the twentieth century, and in those decades that followed, it was physics that led the intellectual way. Freud was there too, he and Breuer having published Studies in Hysteria in 1895, but it was physics that garnered the attention of the philosophers. Mechanics became more and more unified in form with the work of Maxwell, Hertz and discussions by Poincaré. Planck derived the black body law in 1899, in 1902 Lorentz proved Maxwell’s equations were invariant under transformation, and in 1905 Einstein published his paper on special relativity and the basis of the quantum. Concomitantly, Hilbert in 1899 published his foundations of geometry, and Bertrand Russell in 1903 gave forth his principles of mathematics. The development of unified classical mechanics and alternative geometries, now augmented and challenged by the new relativity and quantum theories, made for a period of unprecedented excitement in science (Machamer, 2001, p. 1).

Machamer, P. (2001). A brief historical introduction to the philosophy of science. In P.K. Machamer and M. Silberstein (Eds.). Blackwell Guide to the Philosophy of Science. Blackwell: Oxford (pp. 1-17).


Philosophy of Life Science


Philosophy of Human Sciences


Philosophy of Biology

Philosophical discussion of molecular and developmental biology began in the late 1960s with the use of genetics as a test case for models of theory reduction. With this exception, the theory of natural selection remained the main focus of philosophy of biology until the late 1970s. It was controversies in evolutionary theory over punctuated equilibrium and adaptationism that first led philosophers to examine the concept of developmental constraint. Developmental biology also gained in prominence in the 1980s, as part of a broader interest in the new sciences of self-organization and complexity. The current literature in the philosophy of molecular and developmental biology has grown out of these earlier discussions under the influence of twenty years of rapid and exciting growth of empirical knowledge. Philosophers have examined the concepts of genetic information and genetic program, competing definitions of the gene itself, and competing accounts of the role of the gene as a developmental cause. The debate over the relationship between development and evolution has been enriched by theories and results from the new field of “evolutionary developmental biology.” Future developments seem likely to include an exchange of ideas with the philosophy of psychology, where debates over the concept of innateness have created an interest in genetics and development (Griffiths, 2001).

Griffiths, P. (2001). Molecular and developmental biology. In P.K. Machamer and M. Silberstein (Eds.). Blackwell Guide to the Philosophy of Science. Blackwell: Oxford (pp. 252-271).


Philosophy of Medicine

(1) At the moment of being admitted to practise medicine, I promise and swear to be faithful to the laws of honour and of probity (Hippocrates, -460/-370).

(2) My first concern will be to restore, preserve or promote health in all its aspects, physical and mental, individual and social (Hippocrates, -460/-370).

(3) I will respect all persons, their autonomy and their will, without any discrimination based on their condition or their convictions. I will intervene to protect them if they are weakened, vulnerable or threatened in their integrity or their dignity. Even under constraint, I will not use my knowledge against the laws of humanity (Hippocrates, -460/-370).

(4) I will inform patients of the decisions contemplated, of their reasons and of their consequences. I will never betray their trust, nor exploit the power conferred by circumstances to force their consciences (Hippocrates, -460/-370).

(5) I will give my care to the indigent and to anyone who asks it of me. I will not let myself be influenced by the thirst for gain or the pursuit of glory (Hippocrates, -460/-370).

(6) Admitted into the intimacy of persons, I will keep silent the secrets entrusted to me. Received within their homes, I will respect the secrets of the household, and my conduct will not serve to corrupt morals. I will do everything to relieve suffering. I will not unduly prolong the agony of the dying. I will never deliberately cause death (Hippocrates, -460/-370).

(7) I will preserve the independence necessary to the accomplishment of my mission. I will undertake nothing that exceeds my competence. I will maintain and perfect my skills so as best to provide the services that will be asked of me (Hippocrates, -460/-370).

(8) I will bring my aid to my colleagues and to their families in adversity (Hippocrates, -460/-370).

(9) May men and my colleagues grant me their esteem if I am faithful to my promises; may I be dishonoured and despised if I fail them (Hippocrates, -460/-370).

Hippocrates, the Great (-460/-370). The Hippocratic Oath. In Œuvres complètes d'Hippocrate, translated by Émile Littré. Paris, 1839-1861, 10 vols.


Philosophy of Neuroscience

Over the past three decades, philosophy of science has grown increasingly “local.” Concerns have switched from general features of scientific practice to concepts, issues, and puzzles specific to particular disciplines. Philosophy of neuroscience is a natural result. This emerging area was also spurred by remarkable recent growth in the neurosciences. Cognitive and computational neuroscience continues to encroach upon issues traditionally addressed within the humanities, including the nature of consciousness, action, knowledge, and normativity. Empirical discoveries about brain structure and function suggest ways that “naturalistic” programs might develop in detail, beyond the abstract philosophical considerations in their favor (Bickle, Mandik, & Landreth, 2006).

Bickle, J., Mandik, P., & Landreth, A. (2006). The philosophy of neuroscience. Stanford Encyclopedia of Philosophy.


Neuroscience is an interdisciplinary research community united by the goal of understanding, predicting and controlling the functions and malfunctions of the central nervous system (CNS). The philosophy of neuroscience is the subfield of the philosophy of science concerned with the goals and standards of neuroscience, its central explanatory concepts, and its experimental and inferential methods. Neuroscience is especially interesting to philosophers of science for at least three reasons (Craver & Kaplan, 2011, p. 268).

First, neuroscience is immature in comparison to physics, chemistry and much of biology and medicine. It has no unifying theoretical framework or common vocabulary for its myriad subfields. Many of its basic concepts, techniques and exemplars of success are under revision simultaneously. Neuroscience thus exemplifies a form of scientific progress in the absence of an overarching paradigm (Kuhn 1970) (Craver & Kaplan, 2011, p. 268).

Second, neuroscience is a physiological science. Philosophers of biology have tended to neglect physiology (though see Schaffner 1993; Wimsatt 2007). Physiological sciences study the parts of organisms, how they are organized together into systems, how they work and how they break. Its generalities are not universal in scope. Its theories intermingle concepts from several levels. Neuroscience thus offers an opportunity to reflect on the structure of physiological science more generally (Craver & Kaplan, 2011, p. 268).

Finally, unlike other physiological sciences, neuroscientists face the challenges of relating mind to brain. The question arises whether the explanatory resources of physiological science can be extended into the domains involving consciousness, rationality and agency, or whether such phenomena call out for distinctive explanatory resources (Craver & Kaplan, 2011, p. 268).

Craver, C.F., & Kaplan, D.M. (2011). Towards a mechanistic philosophy of neuroscience. In S. French and J. Saatsi (Eds.). The Continuum Companion to the Philosophy of Science (pp. 268-292). London: Continuum.


Philosophy of Psychology

The point of this historical excursus is to introduce the notion of a natural philosophical background to a recognizably modern, mathematics-using, experiment-generating scientific discipline. Psychology also has a natural philosophical background. Its ultimate source is again Aristotle, through his "De anima" or "On the soul". The Latin word "anima" translates the Greek "psyche", which is the root for the modern term "psychology". For reasons that remain obscure, but may have to do with the awkwardness of the noun form "animistics" as opposed to "psychology", the discipline slowly changed its name from de anima studies to psychology across the seventeenth and early eighteenth centuries. But the study of the functions of the mind or soul was continuous. In the early period, Aristotelian psychology included the study of vital as well as sensory and cognitive functions ("soul" for Aristotle simply meant vivifying principle - though in fact Aristotle and his followers spent most of their time on the sensory and cognitive functions in the works entitled "On the soul"). Cartesian psychology, by contrast, included only the sensory, cognitive, and affective dimensions of mind: those that are available to human consciousness. This narrowing of the subject matter to the contents of consciousness took hold, and became a standard way of delimiting psychology in the eighteenth century (Hatfield, 2002, p. 211).

Hatfield, G. (2002). Psychology, philosophy, and cognitive science: Reflections on the history and philosophy of experimental psychology. Mind and Language, 17(3), 207-232.


Philosophy of Cognitive Science

Philosophy interfaces with cognitive science in three distinct, but related, areas. First, there is the usual set of issues that fall under the heading of philosophy of science (explanation, reduction, etc.), applied to the special case of cognitive science. Second, there is the endeavor of taking results from cognitive science as bearing upon traditional philosophical questions about the mind, such as the nature of mental representation, consciousness, free will, perception, emotions, memory, etc. Third, there is what might be called theoretical cognitive science, which is the attempt to construct the foundational theoretical framework and tools needed to get a science of the physical basis of the mind off the ground – a task which naturally has one foot in cognitive science and the other in philosophy. (Grush, 2001, p. 272).

Grush, R. (2001). Cognitive Science. In P.K. Machamer and M. Silberstein (Eds.). Blackwell Guide to the Philosophy of Science. Blackwell: Oxford (pp. 272-289).


Philosophy of Education

The most important attitude that can be formed is that of desire to go on learning (Dewey, 1916).

Dewey, J. (1916). Democracy and Education: An introduction to the philosophy of education. London: Macmillan.


What follows, then, is the slow, complex and indirect answer given by a philosopher to the apparently simple question: "What is philosophy of education?" And, as indicated, the discussion must start with the nature of philosophy itself - for it should be obvious that individuals holding different conceptions of what constitutes philosophy will give quite different accounts of philosophy of education, and sadly there do indeed exist a number of divergent views about this underlying matter (Phillips, 2010, p. 4).

In the light of the preceding accounts of the nature of philosophy, it seems natural to conclude that philosophy of education is a domain of activity roughly comparable to philosophy of science or political philosophy. But this does not seem adequate; the field of education is so broad and complex, and is intertwined with so many other aspects of society, and is of such fundamental social importance, that the direction philosophical work can take is almost limitless. My (speculative) suggestion is that as a field, philosophy of education is on a par in complexity not with any one branch of philosophy, but with the whole field of philosophy (Phillips, 2010, p. 4).

Phillips, D.C. (2010). What is philosophy of education? In Richard Bailey (Ed.). The Sage Handbook of Philosophy of Education. Sage Publications (pp. 3-19).


Philosophy of Computer Science


Philosophy of Robotics




Medicine


Genetics

When finally interpreted, the genetic messages encoded within our DNA molecules will provide the ultimate answers to the chemical underpinnings of human existence (Watson, 1990).

Watson, J.D. (1990). The Human Genome Project: past, present, and future. Science, 248, 44-48.





Neuroscience


Neuroscience (1) : Prologue - Brain - Nervous system - Endocrine system - Neuron - Synapse - Neurotransmitter - Development - Learning - Homeostasis - Plasticity
Neuroscience (2) : Vision - Visual perception - Attention - Selective attention - Attentional control - Shared attention - Sustained attention - Memory - Language - Metacognition - Consciousness - Decision

Prologue

There was another major phase of split-brain research where we studied the patients as a way of getting at the other questions very much alive in neuroscience, everything from questions about visual midline overlap to spatial attention and resource allocations. At this point the split-brain patients provided a way of examining cortical-subcortical relationships, and other matters (Gazzaniga, 2011).

Gazzaniga, M. (2011). Interview with Michael Gazzaniga. Annals of the New York Academy of Sciences, Vol. 2, Issue 1, pp. 1-8.

The last decade of the 20th century, the Decade of the Brain, has also been the Decade of Cognitive Neuroscience. It has been the decade in which the merger of cognitive psychology and neural science has begun to realize its promise. The joining of neural science and cognitive psychology is the most recent in a series of scientific unifications that have brought together the disparate subfields of biology into one coherent discipline. Almost all of the other unifications have been spearheaded by the synthetic power of molecular biology. Cognitive neuroscience is distinctive in that the important impetus has come from other sources; in particular, a large part of the impetus has come from psychology and from systems neuroscience (Albright, Kandel, & Posner, 2000, p. 612).

Vision : In his pioneering text, which first appeared 50 years ago, Donald Hebb [67] observed that “we know virtually nothing about what goes on between the arrival of an excitation at a sensory projection area and its later departure from the motor area of the cortex…” “Something like thinking intervenes,” and although it would be hard to disagree with that proposition, the goal of cognitive neuroscience has been to flesh out that ‘something’ in a form that is more satisfying to both psychologists and neurobiologists alike. In part because its operations span the chasm that Hebb lamented, the visual system has served as a proving ground for this goal. By tracing the flow of visual information from retina to motor control circuits we can, in principle, determine how its representation by the brain contributes to the various cognitive processes that constitute thinking, such as perception, recognition, imagery, decision making, and motor planning (Albright, Kandel, & Posner, 2000, p. 616-617).

Visual attention : The primate visual system has a limited information processing capacity. An exciting area of research in the 1990s has been that addressing the means and conditions under which this limited capacity — visual attention — is dynamically allocated. Work in this area has revealed two basic types of attentional phenomena, which may have distinct neuronal substrates. One effect, known as ‘attentional facilitation’, is the improved processing of a stimulus when it appears at an attended location. Early investigations of the effects of focal brain lesions in humans implicated the parietal lobe in attentional facilitation. In subsequent physiological studies of parietal cortex in non-human primates, Michael Goldberg and colleagues [79] found that for many neurons an attended visual stimulus elicited a much larger sensory response than did an identical unattended stimulus. Similar facilitatory effects have since been reported for other cortical visual areas [80,81] (Albright, Kandel, & Posner, 2000, p. 617).

The other basic attentional effect that has been studied extensively is known as ‘attentional selection’. This effect refers to the phenomenon in which a target stimulus (i.e. the thing you’re looking for) is selected from among other stimuli that are competing for attention. In the mid-1980s, Robert Desimone and colleagues [82] found that receptive field profiles of individual neurons in cortical areas V4 and IT contract around the attended stimulus, excluding unattended stimuli. These findings of selection at the neuronal level imply that information about an attended stimulus is carried to higher processing stages, at the expense of information about unattended stimuli. Selective effects have now been reported for many visual areas, including areas V1, V2, V4, MT, MST, and IT (see e.g. [83–86]), indicating that selective mechanisms operate simultaneously on multiple feature maps (Albright, Kandel, & Posner, 2000, p. 618).

Albright, T.D., Kandel, E.R., Posner, M.I. (2000). Cognitive neuroscience. Current Opinion in Neurobiology. 10, pp. 612–624.


Brain

Add text.


Nervous system

Add text.


Endocrine system

Add text.


Neuron

Add a citation.


Synapse

Add a citation.


Neurotransmitter

Add a citation.


Development

Add a citation.


Learning

Add a citation.


Homeostasis

Add a citation.


Plasticity

Add a citation.


Vision

Add a citation.


Visual perception

Add a citation.


Attention

Add a citation.


Selective attention

Add a citation.


Attentional control

One of the great mysteries of the brain is cognitive control. How can interactions between millions of neurons result in behavior that is coordinated and appears willful and voluntary? There is consensus that it depends on the PFC, but there has been little understanding of the neural mechanisms that endow it with the properties needed for executive control. Here, we have suggested that this stems from several critical features of the PFC: the ability of experience to modify its distinctive anatomy; its wide-ranging inputs and intrinsic connections that provide a substrate suitable for synthesizing and representing diverse forms of information needed to guide performance in complex tasks; its capacity for actively maintaining such representations; and its regulation by brainstem neuromodulatory systems that provide a means for appropriately updating these representations and learning when to do so. We have noted that depending on their target of influence, representations in the PFC can function variously as attentional templates, rules, or goals by providing top-down bias signals to other parts of the brain that guide the flow of activity along the pathways needed to perform a task. We have pointed to a rapidly accumulating and diverse body of evidence that supports this view, including findings from neurophysiological, neuroanatomical, human behavioral and neuroimaging, and computational modeling studies (Miller & Cohen, 2001, p. 193).

The theory we have described provides a framework within which to formulate hypotheses about the specific mechanisms underlying the role of the PFC in cognitive control. We have reviewed a number of these, some of which have begun to take explicit form in computational models. We have also provided a sampling of the many questions that remain about these mechanisms and the functioning of the PFC. Regardless of whether the particular hypotheses we have outlined accurately describe PFC function, they offer an example of how neurally plausible mechanisms can exhibit the properties of self-organization and self-regulation required to account for cognitive control without recourse to a “homunculus.” At the very least, we hope that they provide some useful examples of how the use of a computational and empirical framework, in an effort to be mechanistically explicit, can provide valuable leads in this conceptually demanding pursuit. We believe that future efforts to address the vexing, but important, questions surrounding PFC function and cognitive control will benefit by ever tighter coupling of neurobiological experiments and detailed computational analysis and modeling. (Miller & Cohen, 2001, p. 193-194).

Miller, E.K., & Cohen, J.D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, Vol. 24, pp. 167-202.


In everyday life, visual attention is controlled by both cognitive (TOP-DOWN) factors, such as knowledge, expectation and current goals, and BOTTOM-UP factors that reflect sensory stimulation. Other factors that affect attention, such as novelty and unexpectedness, reflect an interaction between cognitive and sensory influences. The dynamic interaction of these factors controls where, how and to what we pay attention in the visual environment. In this review, we propose that visual attention is controlled by two partially segregated neural systems (Corbetta & Shulman, 2002, p. 201).

One system, which is centred on the dorsal posterior parietal and frontal cortex, is involved in the cognitive selection of sensory information and responses. The second system, which is largely lateralized to the right hemisphere and is centred on the temporoparietal and ventral frontal cortex, is recruited during the detection of behaviourally relevant sensory events, particularly when they are salient and unattended (Corbetta & Shulman, 2002, pp. 201-202).

Corbetta, M., & Shulman, G.L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, Vol. 3, pp. 201-215.


Consciousness

Consciousness consists of a stream of unified mental constructs that arise spontaneously from a material structure, the Dynamic Core in the brain. Consciousness is a concomitant of dynamic patterns of reentrant signaling within complex, widely dispersed, interconnected neural networks constituting a Global Workspace. The contents of consciousness, or qualia, are correlates of discriminations made within this neural system. These discriminations are made possible by perceptions, motor activity, and memories – all of which shape, and are shaped by, the activity-dependent modulations of neural connectivity and synaptic efficacies that occur as an animal interacts with its world (Edelman, Gally, & Baars, 2011, p.5).

Edelman, G. M., Gally, J. A., & Baars, B. J. (2011). Biology of consciousness. Frontiers in Psychology. Vol. 2, pp. 1-7.


The focus on the identification of reliable neural correlates of consciousness in vision has led to a general consensus on the types of experiment that are likely to prove informative, mainly those that explicitly dissociate conscious and unconscious neural processes. Most of the single-cell evidence from monkeys and fMRI data in humans are compatible with the hypothesis that activity in V1, although necessary for many forms of vision, does not correspond to visual perception. Other experiments have been interpreted to suggest that some aspects of V1 activity do relate to conscious perception. Most observers agree that the neural correlates of consciousness are associated with functionally specialized areas in the ventral visual pathway interacting with specific areas of prefrontal and parietal cortex. (Rees, Kreiman, & Koch, 2002, p.5).

Rees, G., Kreiman, G., & Koch, C. (2002). Neural correlates of consciousness in humans. Nature Reviews Neuroscience. Vol. 3, pp. 261-270.


Memory

Add text.


Language

Add text.


Metacognition

Add text.


Decision

Add a citation.




Psychology

Psychology (1) : Representation - Modularity - Hierarchy - Processing - Cognitive architecture - Development - Learning
Psychology (2) : Vision - Visual perception - Attention - Selective attention - Attentional control - Shared attention - Sustained attention - Memory - Language - Metacognition - Consciousness - Decision
Psychology (3) : Perception and Action - Selective attention and Memory - Attentional control and Memory - Attention and Language - Attention and Intelligence - Attention and Consciousness - Attention and Development - Attention and Learning - Attention and Meditation

Prologue


Representation


Modularity

(1) Input systems are domain specific (Fodor, 1986, p. 67).

(2) The operation of input systems is mandatory (Fodor, 1986, p. 74).

(3) Central systems have only limited access to the representations that input systems compute (Fodor, 1986, p. 77).

(4) Input systems are fast (Fodor, 1986, p. 83).

(5) Input systems are informationally encapsulated (Fodor, 1986, p. 87).

(6) The output of input systems is "shallow" (Fodor, 1986, p. 113).

(7) Input systems are associated with a fixed neural architecture (Fodor, 1986, p. 128).

(8) Input systems exhibit characteristic breakdown patterns (Fodor, 1986, p. 130).

(9) The ontogeny of input systems exhibits a characteristic pace and sequence of stages (Fodor, 1986, p. 131).

Fodor, J.A. (1986). La modularité de l'esprit (A. Gerschenfeld, Trans.). Paris: Les Éditions de Minuit.


A central question in psychology concerns the parts or processes of which the mind is composed. Prior to the cognitive revolution of the 1960s, it was popular to view the mind as a kind of black box and to view conjectures about its contents as unscientific. The cognitive revolution reversed this climate, rendering the search for the contents of the black box—a description of its internal structure that could account for the systematic relationships between information inputs and behavioral outputs—a key scientific objective of psychologists (Barrett & Kurzban, 2006, p. 628).

An important part of this enterprise has been the development of information-processing theories of mental phenomena, couched in the terms of the theory of computation. Central to computational approaches, in turn, has been modularity: the notion that mental phenomena arise from the operation of multiple distinct processes rather than a single undifferentiated one. Most psychologists today would probably agree that the mind has some internal structure: For example, the information-processing systems underlying perception are different in important respects from those underlying reasoning or motor control. However, beyond this modest agreement that the brain has some parts, there is little consensus on this important issue (Barrett & Kurzban, 2006, p. 628).

The 1983 publication of Fodor’s The Modularity of Mind (Fodor, 1983) launched a debate that has continued to the present day. In this book, Fodor proposed a particular account of mental structure in which information-processing modules of a very specific kind—reflex-like, hardwired devices that process narrow types of information in highly stereotyped ways—played a central role. The long-term effects of this book on cognitive approaches to the mind were twofold. First, because the vision of modularity it laid out was so narrow and well specified, it gave psychologists a potentially useful concrete concept to work with. However, for the same reason—the narrowness of the modularity concept—this work ultimately led virtually everyone, including Fodor, to believe that modularity as he defined it would eventually account for little of how the mind works (Fodor, 2000) (Barrett & Kurzban, 2006, p. 628).

We also assert, as have other evolutionary psychologists (Barrett, 2005; Cosmides & Tooby, 1994; Pinker, 1997; Sperber, 1994; Tooby & Cosmides, 1992; Tooby, Cosmides, & Barrett, 2005), that a broader notion of modularity than the one Fodor advanced is possible: in particular, a modularity concept based on the notion of functional specialization, rather than Fodorian criteria such as automaticity and encapsulation (Barrett & Kurzban, 2006, pp. 628-629).

Barrett, H.C. & Kurzban, R. (2006). Modularity in Cognition: Framing the Debate. Psychological Review, 113(3), pp.628-647.


Hierarchy

Hierarchical reinforcement learning : The work of O’Reilly and Frank [14] is representative of an emerging focus of research of hierarchically structured behavior on the issue of learning. In an interesting parallel development, the potential role of hierarchy has taken on increasing interest within the field of machine learning and, in particular, in research on reinforcement learning. As explained in Box 2, hierarchical methods for reinforcement learning provide a powerful computational framework for understanding how abstract action representations might develop through experience, and also call attention to the role that such representations might play in supporting learning in novel task domains. As recently explored by Botvinick, Niv and Barto [27], hierarchical reinforcement learning might also shed light on the neural mechanisms underlying hierarchically structured behavior in humans (Box 2) (Botvinick, 2008, p. 204).

Hierarchical structure in PFC : The neural mechanisms underlying the production of hierarchically organized behavior have long been considered to reside, at least in part, within the dorsolateral PFC (DLPFC). Based on neurophysiological and neuropsychological findings, Fuster [28,29] proposed that the DLPFC has a key role in the temporal integration of behavior, serving to maintain context or goal information at multiple, hierarchically nested levels of task structure. In connection with this function, Fuster also noted the position of the DLPFC at the apex of an anatomical hierarchy of cortical areas (Figure 3a). Recent research has introduced an important extension to this idea by indicating that a topographical organization might exist within the frontal cortex and the DLPFC, according to which progressively higher levels of behavioral structure are represented as one moves rostrally [8,9,30–34] (Figure 3b) (Botvinick, 2008, pp. 204-205).

The discovery of this topographic organization places a new constraint on computational models of hierarchical behavior, and models addressing the relevant findings are now beginning to emerge. In one such effort, Botvinick [35] reimplemented the recurrent neural network model from Botvinick and Plaut [20], thereby introducing a structural hierarchy resembling the hierarchy of cortical areas described by Fuster [29] (Figure 3c). When the resulting network was trained on a hierarchically structured task, the processing units at the apex of the structural hierarchy spontaneously came to code selectively for temporal context information, while units lying lower in the hierarchy, nearer to the input and output layers of the network, coded more strongly for current stimuli and response information. These simulations showed how a functional–representational gradient like the one observed in the cerebral cortex could emerge spontaneously through learning, given only an initial architectural constraint (Botvinick, 2008, p. 205).

Botvinick, M.M. (2008). Hierarchical models of behavior and prefrontal function. Trends in Cognitive Sciences. 12 (5), pp. 201-208.
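The "options" formalism from hierarchical reinforcement learning that the passage above alludes to can be made concrete with a small sketch. The one-dimensional toy environment and the two hand-built options below are illustrative assumptions, not anything from the cited papers; they show only how a single abstract action unpacks into a sequence of primitive steps.

```python
# A minimal sketch of a temporally abstract action ("option"): a sub-policy
# plus a termination test. Everything here is a toy for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    name: str
    policy: Callable[[int], int]      # state -> primitive move (+1 or -1)
    terminate: Callable[[int], bool]  # state -> should the option stop?

def run_option(state: int, option: Option, max_steps: int = 100):
    """Execute the option's sub-policy until it terminates; return (state, steps)."""
    steps = 0
    while not option.terminate(state) and steps < max_steps:
        state += option.policy(state)  # one primitive move on a 1-D corridor
        steps += 1
    return state, steps

# Two hand-built options on a corridor of states 0..10:
go_left = Option("go-to-0", policy=lambda s: -1, terminate=lambda s: s == 0)
go_right = Option("go-to-10", policy=lambda s: +1, terminate=lambda s: s == 10)

state, n = run_option(5, go_right)
print(state, n)  # → 10 5 : one abstract choice unpacked into 5 primitive steps
```

A higher-level learner that chooses among such options, rather than among primitive moves, is the sense in which abstract action representations can "support learning in novel task domains".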


Cognitive control permits selection of actions that are consistent with our goals and context. The prefrontal cortex (PFC) is a central component in the network of brain regions supporting cognitive control [1–9]. Thus, a fruitful approach to understanding the architecture of control has been to investigate the functional organization of the PFC. In recent years, functionally selective PFC sub-regions have been associated with distinct forms of control [10–15]. However, it remains an important goal to understand these isolated control functions in context of broader functional and neuroanatomical organizing principles [2,13,16] (Badre, 2008, p. 193).

This review considers one such organizing hypothesis: that the rostro–caudal axis of the frontal lobes is organized hierarchically, whereby posterior frontal regions support control involving temporally proximate, concrete action representations, and the anterior PFC supports control involving temporally extended, abstract representations [5,16– 24] (Figure 1). Of course, there are diverse ways of defining ‘abstraction’ and, likewise, many processing schemes by which these levels might interact, including non-hierarchical ones. Here, the evidence and associated theories of a frontal rostro–caudal gradient of function are reviewed (Badre, 2008, p. 193).

The fact that behavior can be organized hierarchically does not require that the system itself be structured hierarchically. Nevertheless, growing evidence supports spatially distinct regions of the frontal lobe that process differentially abstract components of action selection. Considerable controversy persists regarding the factors that distinguish this functional gradient and whether these processors interact hierarchically. Resolving these points of controversy will be fundamental to our understanding of frontal-lobe function and the control of action (Box 3) (Badre, 2008, p. 199).

Badre, D. (2008). Cognitive control, hierarchy, and the rostro–caudal organization of the frontal lobes. Trends in Cognitive Sciences. 12 (5), pp. 193-200.


Processing


Cognitive Architecture

Figure 1 contains some of the modules in the system: a visual module for identifying objects in the visual field, a manual module for controlling the hands, a declarative module for retrieving information from memory, and a goal module for keeping track of current goals and intentions. Coordination in the behavior of these modules is achieved through a central production system. This central production system is not sensitive to most of the activity of these modules but rather can only respond to a limited amount of information that is deposited in the buffers of these modules. For instance, people are not aware of all the information in the visual field but only the object they are currently attending to. Similarly, people are not aware of all the information in long-term memory but only the fact currently retrieved. Thus, Figure 1 illustrates the buffers of each module passing information back and forth to the central production system. The core production system can recognize patterns in these buffers and make changes to these buffers, as, for instance, when it makes a request to perform an action in the manual buffer. In the terms of Fodor (1983), the information in these modules is largely encapsulated, and the modules communicate only through the information they make available in their buffers. It should be noted that the EPIC (executive-process/interactive control) architecture (Kieras, Meyer, Mueller, & Seymour, 1999) has adopted a similar modular organization for its production system architecture (Anderson et al., 2004, p. 1037).

The goal buffer keeps track of one’s internal state in solving a problem. In Figure 1, it is associated with the dorsolateral prefrontal cortex (DLPFC), but as we discuss later, its neural associations are undoubtedly more complex. The retrieval buffer, in keeping with the HERA (hemispheric encoding–retrieval asymmetry) theory (Nyberg, Cabeza, & Tulving, 1996) and other recent neuroscience theories of memory (e.g., Buckner, Kelley, & Petersen, 1999; Wagner, Paré-Blagoev, Clark, & Poldrack, 2001), is associated with the ventrolateral prefrontal cortex (VLPFC) and holds information retrieved from long-term declarative memory.1 This distinction between DLPFC and VLPFC is in keeping with a number of neuroscience results (Braver et al., 2001; Cabeza, Dolcos, Graham, & Nyberg, 2002; Fletcher & Henson, 2001; Petrides, 1994; Thompson-Schill, D’Esposito, Aguirre, & Farah, 1997) (Anderson et al., 2004, pp. 1037-1038).

Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of mind. Psychological Review. 111(4), pp. 1036–1060.
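The buffer-and-production organization Anderson et al. describe can be sketched in a few lines. The buffer names and rules below are invented for illustration and are not ACT-R's actual syntax or API; the point is only the control structure: modules expose a small buffer, and a central cycle fires the first production whose condition matches the buffer contents.

```python
# Toy sketch of a central production system reading and writing module buffers.
# The central cycle never sees inside a module, only its buffer.
buffers = {
    "goal":      {"task": "add", "a": 2, "b": 3},
    "retrieval": {},  # filled by the (stand-in) declarative module on request
}

def can_retrieve(b):
    return b["goal"].get("task") == "add" and "sum" not in b["retrieval"]

def do_retrieve(b):
    # stand-in for the declarative module depositing a fact in its buffer
    b["retrieval"]["sum"] = b["goal"]["a"] + b["goal"]["b"]

def can_harvest(b):
    return "sum" in b["retrieval"]

def do_harvest(b):
    b["goal"]["answer"] = b["retrieval"]["sum"]
    b["goal"]["task"] = "done"

# A production = (condition on buffers, action on buffers).
productions = [(can_retrieve, do_retrieve), (can_harvest, do_harvest)]

# Central cycle: fire the first production whose condition matches.
while buffers["goal"]["task"] != "done":
    for cond, act in productions:
        if cond(buffers):
            act(buffers)
            break

print(buffers["goal"]["answer"])  # → 5
```

Note that the central cycle communicates with the "module" only through the `retrieval` buffer, mirroring the encapsulation the quoted passage attributes to Fodor (1983).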


Although perceptual and motor modules can be very important to the performance of a task, this paper focuses on the four central modules and their associated areas, which we have shown to be independent of the modality of input or output [3]: (i) The module responsible for controlled retrieval from declarative memory is associated with a lateral inferior prefrontal region (Talairach coordinates x = ±40, y = 21, z = 21) around the inferior frontal sulcus. (ii) The module responsible for constructing imagined representations is associated with a parietal region centered at x = ±23, y = −64, z = 34, on the border of the intraparietal sulcus. (iii) The module associated with setting controlling goals is associated with the anterior cingulate cortex centered at x = ±5, y = 10, z = 38 in the medial frontal cortex. (iv) The module associated with procedural execution is associated with the head of the caudate nucleus, part of the basal ganglia, centered at x = ±15, y = 9, z = 2 (Anderson et al., 2008, p. 136).

Anderson, J.R., Fincham, J.M., Qin, Y., & Stocco, A. (2008). A central circuit of the mind. Trends in Cognitive Sciences. 12(4), pp. 136–143.


We do not expect that the branding of cognitive architectures will disappear or that all researchers will flock to a single one. But we may expect an emergence of shared mechanisms and principles that will gradually unify the field. The chasms between the different paradigms in cognitive modeling are gradually mellowing with the recognition that no single theory can be right at all levels, restoring a balance between incremental and revolutionary science. Given the current interest in neuroimaging, the correspondence between model and brain activity will become more important. Eventually, cognitive models have to live up to the expectations of strong scientific theories, in that they are both general and are able to predict (Taatgen & Anderson, 2010, p. 702).

Taatgen, N., & Anderson, J.R. (2010). The past, present, and future of cognitive architectures. Topics in Cognitive Science. 2, pp. 693-704.


The BICA journal focuses on biologically inspired cognitive architectures. A cognitive architecture can be thought of as a computational formalism that implements a unified theory of cognition in the sense of Newell (1990). Such cognitive architectures must aspire to account for the full range of cognitive processing from sensory input to motor output. A biologically inspired cognitive architecture must draw its insights from what is known from animal (including human) cognition. That is, the structures and processes comprising the cognitive architecture should be tested against empirical studies of humans and other animals. Such studies are the products of cognitive science and cognitive neuroscience. To be truly biologically inspired, such architectures should faithfully model the high-level modules and processes of cognitive neuroscience, though they need not model the low-level neural representations and mechanisms. Whereas cognition in humans and other animals is implemented in brains, cognitive architectures typically do not attempt to model neural systems per se, but rather work from functional conceptual models. This stance leaves cognitive modelers, the designers of cognitive architectures, with the problem of explaining how their high-level structures and processes might correspond to those in an underlying neural system. Finally, biologically inspired cognitive architectures are expected to contribute to the BICA "challenge of creating a real-life computational equivalent of the human mind" (Franklin et al., 2012, p. 32).

Supported by considerable empirical evidence (e.g. Baars, 2002), one such unified theory of cognition, Global Workspace Theory (GWT) (Baars, 1988) has emerged as the most widely accepted theory of the role of consciousness in cognition (Connor & Shanahan, 2010; Dehaene & Naccache, 2001; Glazebrook & Wallace, 2009; Schutter & van Honk, 2004; Sergent & Dehaene, 2004; Seth, 2007; Shanahan & Baars, 2005; Wallace, 2007). Recent experimental studies reveal rich cortical connectivity capable of supporting a large-scale dynamic network (Hagmann et al., 2008; Shanahan, 2010; van den Heuvel & Sporns, 2011). We propose that brains in fact cyclically and dynamically form such a network according to GWT, allowing for highly flexible, rapid reorganization of the neural state in accordance with the demands of an open, unpredictable, and at times dangerous environment. (Franklin et al., 2012, p. 33).

The biologically inspired LIDA2 cognitive architecture (Franklin, Baars, Ramamurthy, & Ventura, 2005; Franklin & Patterson, 2006) implements GWT conceptually (Baars & Franklin, 2003; Baars & Franklin, 2007) and computationally (Snaider, McCall, & Franklin, 2011), as well as other theories from cognitive science and neuroscience including situated (embodied) cognition (Glenberg & Robertson, 2000; Varela, Thompson, & Rosch, 1991), perceptual symbol systems (Barsalou, 1999), working memory (Baddeley & Hitch, 1974), memory by affordances3 (Glenberg, 1997), long-term working memory (Ericsson & Kintsch, 1995), and transient episodic memory (Conway, 2002) (Franklin et al., 2012, p. 33).

The LIDA model is a comprehensive, conceptual and computational model that covers a large portion of human cognition while implementing and fleshing out GWT. The model and its ensuing architecture are grounded in the LIDA cognitive cycle. The cycle is based on the fact that every autonomous agent (Franklin & Graesser, 1997), be it human, animal, or artificial, must frequently sample (sense) its environment and select an appropriate response (action). The agent’s ‘‘life’’ can be viewed as consisting of a continual sequence of these cognitive cycles. Each cycle is comprised of phases of understanding, attending and acting. Neuroscientists call this three-part process the action-perception cycle. A cognitive cycle can be thought of as a cognitive ‘‘moment’’. Sophisticated agents such as humans process (make sense of) the input from such sampling in order to facilitate their decision making. Higher-level cognitive processes are composed of many of these cognitive cycles, each a cognitive ‘‘atom’’. (Franklin et al., 2012, p. 35).

Franklin, S., Strain, S., Snaider, J., McCall, R., & Faghihi, U. (2012). Global Workspace Theory, its LIDA model and the underlying neuroscience. Biologically Inspired Cognitive Architectures. 1, pp. 32-43.
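The cognitive cycle Franklin et al. describe (sense, understand, attend, act) can be caricatured as a loop. The stimuli, salience values, and action table below are invented for illustration and have nothing to do with the actual LIDA codebase; only the cycle's shape, with one coalition winning the attention competition per cognitive "moment", reflects the text.

```python
# Schematic rendering of one cognitive cycle, repeated as a stream of
# cognitive "moments". All contents are toy values for illustration.
def sense(env):
    return env["stimuli"]

def understand(percepts):
    # assign each percept a salience score (toy: a fixed table)
    salience = {"loud noise": 0.9, "wall color": 0.1, "moving shape": 0.6}
    return {p: salience.get(p, 0.0) for p in percepts}

def attend(coalitions):
    # the most salient coalition wins the competition and is "broadcast"
    return max(coalitions, key=coalitions.get)

def act(broadcast):
    return {"loud noise": "orient", "moving shape": "track"}.get(broadcast, "ignore")

env = {"stimuli": ["wall color", "loud noise", "moving shape"]}
for _ in range(3):  # three cognitive "moments" in the agent's "life"
    winner = attend(understand(sense(env)))
    action = act(winner)
print(winner, action)  # → loud noise orient
```

Higher-level cognition, in this picture, is composed of many such cycles, each one a cognitive "atom".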


Vision

(1) The visual cortex consists of many different areas, each one part of a chain or system that consists of several stations or nodes (Zeki & Bartels 1999). [...] There are therefore several parallel, distributed systems in the visual brain. The presence of several nodes within each processing system raises the question of whether activity at each is always implicit and not perceived until a “terminal” stage of processing, where perception is enshrined, is reached (Zeki, 2001, p. 59).

(2) Apart from V1, all these areas reside within an expanse of cytoarchitectonically uniform cortex consisting of the basic six layers. This cytoarchitectonic uniformity naturally prompts speculation about whether, in addition to the specialized functions imputed to each, there is any common operation that all areas perform. The notion of a uniform operation, repetitively applied in all cortical areas, has been especially championed by Mountcastle (1998). (Zeki, 2001, p. 59).

(3) There is compelling evidence that the different parallel systems, and the nodes comprising them, are specialized for different visual functions (Zeki, 1978, DeYoe & van Essen, 1988, Livingstone & Hubel, 1988, Zeki & Shipp, 1988). [...] I have traced this specialization to the brain’s need to undertake different operations to acquire knowledge about different attributes and believe that it has found it more efficient to separate the different machineries for these operations into separate areas or systems (see below) (Zeki 1993). The knowledge-acquiring system of the visual brain is therefore distributed throughout much of the cerebral cortex. (Zeki, 2001, p. 59).

(4) Clinical evidence shows that damage to one processing system need not affect the other systems and, conversely, that a spared system can still function when much of the other systems are damaged or inactive. I interpret this to mean that the different systems have fair autonomy in their operations (Zeki & Bartels 1999) (Zeki, 2001, pp. 59-60).

(5) Recent psychophysical evidence shows that some attributes, e.g. color, are perceived before others, e.g. motion (Moutoussis & Zeki 1997a). I interpret this to mean that different systems reach a perceptual end point at different times, and independently of each other, thus supporting further the notion of autonomy (Zeki, 2001, p. 60).

In trying to account for conscious vision, we thus have two competing sets of facts that have somehow to be reconciled. On the one hand are the facts of anatomy, physiology, pathology, and psychophysics which tell that activity in the specialized processing systems and the nodes within them can have a conscious correlate, even in a vastly impoverished cortex. On the other, we have the knowledge that a greatly enhanced and sophisticated repertoire is the preserve of a hugely expanded and complexly interconnected cerebral cortex. On the one hand, we have to account for the microorganizing principles that underlie the activity at individual nodes and result in a conscious correlate, and on the other we have to try to understand whether there is an overall general organizing principle that not only controls the microorganizing principles but also enhances their capacity (Zeki, 2001, p. 80).

Zeki, S. (2001). Localization and globalization in conscious vision. Annual Review of Neuroscience. 24, pp. 57-86.


Visual perception

The hypothesis that has been guiding our research is that appreciation of an object's qualities and of its spatial location depends on the processing of different kinds of visual information in the inferior temporal and posterior parietal cortex, respectively (Ungerleider & Mishkin, 1982, p.578).

On the assumption that both systems can indeed be followed stepwise to our target areas, not only in the temporal but also in the parietal lobe, a major question for the future will be how the object and spatial information carried in these two separated systems are subsequently integrated into a unified visual percept (Ungerleider & Mishkin, 1982, p.579).

Ungerleider, L.G., & Mishkin, M. (1982). Two cortical visual systems. In D.J. Ingle, M.A. Goodale, & R.J.W. Mansfield (Eds.), Analysis of visual behavior (pp. 549-586). Cambridge, MA: MIT Press.


The model proposed by the authors of two cortical systems providing "vision for action" and "vision for perception", respectively, owed much to the inspiration of Larry Weiskrantz (Milner & Goodale, 2008, p.774).

When we first set out our account of the division of labour between the ventral and dorsal visual pathways in the cerebral cortex, our distinction between vision for perception and vision for action was intended to capture the idea that visual information is transformed in different ways for different purposes (Milner & Goodale, 2008, p.775).

The model we have developed was inspired by, and to some extent depends on, a set of partial or complete double dissociations that have been observed between patients like D.F., who has ventral-stream damage, and patients with optic ataxia, who have damage to the dorsal stream (Milner & Goodale, 2008, p.781).

Milner, A.D., & Goodale, M.A. (2008). Two visual systems re-viewed. Neuropsychologia. 46, pp. 774–785.


Relation to Functional Organization in V4 : One hint comes from the association of gamma band oscillation with hemodynamic signals. Hemodynamic signals are thought to be more closely related to local field potentials (LFPs) than to action potentials (Logothetis et al., 2001). In fact, Niessing et al. (2005) reported that optically imaged hemodynamic response strength correlated better with the power of high-frequency LFPs than with spiking activity. Optical imaging of attentional signals in V4 in monkeys has shown enhancement of the hemodynamic response during spatial attention tasks (Tanigawa and A.W.R., unpublished data). This is consistent with reported enhancements in gamma band synchrony (Fries et al., 2001) and predicts that spatial attention acts by elevating response magnitude in all functional domains within the attended locale (Figure 8A). This study also showed that feature-based attention (e.g., attention to color) may be mediated, not via enhancement of imaged domain response, but rather via enhanced correlations between task-relevant functional domains (e.g., color domains) in V4. Thus, feature attention may be mediated via correlation change across the visual field, but only within domains encoding the attended feature (Figure 8B). These differential effects of spatial and feature attention suggest that domain-based networks are dynamically configured in V4 (Roe et al., 2012, p.23).

Top-Down Influences : We briefly give some consideration to how attentionally mediated reconfiguration of networks in V4 might be directed by top-down influences. V4 receives feedback influences from temporal (DeYoe et al., 1994; Felleman et al., 1997), prefrontal, and parietal areas (Stepniewska et al., 2005; Ungerleider et al., 2008; Pouget et al., 2009). In this sense, V4 is well positioned for integrating top-down influences with information about stimuli from the bottom-up direction.

Causal Interactions between Frontal and Visual Cortical Areas? : Although imaging and neuropsychological studies strongly suggested that feedback signals from fronto-parietal cortex interact with sensory signals in visual areas such as V4, it has been difficult to prove a causal link between activity in frontal (or parietal) cortex and modulation of visually driven activity. One area in prefrontal cortex that has been proposed as a source of top-down influence is the frontal eye fields (FEF), a cortical area responsible for directing eye movements. During overt attention, FEF initiates circuits which direct the center of gaze toward salient objects. During covert attention, similar neuronal mechanisms may be at play (which has led to the ‘‘pre-motor theory of attention’’) (Corbetta et al., 1998; Corbetta, 1998; Hoffman and Subramaniam, 1995; Kustov and Robinson, 1996; Moore et al., 2003; Moore and Armstrong, 2003; Moore and Fallah, 2001; Moore and Fallah, 2004; Nobre et al., 2000; Rizzolatti et al., 1987). If so, then FEF should play a causal role in directing attention and in influencing V4 activity (Roe et al., 2012, p.23).

In sum, existing data indicate that top-down feedback modulates activity in V4 in a way that parallels spatial attention effects, and, furthermore, the magnitude of effect depends on specifics of bottom-up stimuli (i.e., presence/absence of distractors, salience). This is clear evidence that V4 integrates both sensory and attentional effects. It remains unknown how such specificity is achieved via anatomical feedback which is described as diffuse, broad and divergent (cf. Rockland and Drash, 1996; Pouget et al., 2009; Anderson et al., 2011b) (Roe et al., 2012, p.23).

We conclude by trying to link the feature encoding and attentional encoding (cf. Reynolds and Desimone, 2003; Qiu et al., 2007) aspects of V4 with its functional organization. We have seen that V4 encodes a range of stimulus properties (contour, color, motion, disparity) and have proposed that these contribute to figure-ground segregation processes. We have also seen that V4 is prime real estate for mediating bottom-up and top-down attentional effects. We propose (1) as suggested by studies cited in this review, that these feature representations are tied to feature-specific domains within V4, (2) that domains of shared feature selectivity are anatomically and/or functionally linked into feature-specific networks, and (3) that attentional mechanisms map onto these domain networks and shape them in spatially and featurally specific ways (Roe et al., 2012, p.24).

We suggest that the unifying function of V4 circuitry is to enable selective extraction, whether it be by bottom-up feature-specified shape or by attentionally driven spatial or feature-defined selection (Figure 9). Thus, during bottom-up driven processes, stimulus features select which domains to modulate. During top-down attentional processes, feedback influences select which domains to modulate. This selective modulation creates an active network of functional domains that can be dynamically configured. Under what conditions such selection is mediated by enhancement of activity versus domain-domain correlation requires further investigation. For example, in case of spatial attention, all domains within a restricted region of V4 are networked. In the case of color constancy, a color network is selected. In case of shape representation, orientation domains are networked. In case of color search a color network is also selected, albeit driven by top-down sources. Subsets of color, shape, depth, and motion domains can all be dynamically reconfigured into stimulus-specific or task-specific networks. Shifting attention from one feature to another would be implemented by enhancement of one feature domain network and suppression of another (Roe et al., 2012, p.24).

Roe, A.W., Chelazzi, L., Connor, C.E., Conway, B.R., Fujita, I., Gallant, J.L., & Lu, H. (2012). Toward a unified theory of visual area V4. Neuron. 74, pp. 12–29.


So what is visual cognition? On the large scale, visual processes construct a workable simulation of the visual world around us, one that is updated in response to new visual data and which serves as an efficient problem space in which to answer questions. The representation may be of the full scene or just focused on the question at hand, computing information on an as-needed basis (O’Regan, 1992; Rensink, 2000). This representation is the basis for interaction with the rest of the brain, exchanging descriptions of events, responding to queries. How does it all work? Anderson’s work on production systems (cf. Anderson et al., 2004, 2008) is a good example of a possible architecture for general cognitive processing. This model has sets of ‘‘productions’’, each of them in an ‘‘if X, then Y’’ format, where each production is equivalent to the routines mentioned earlier. These respond to the conditions in input buffers (short term memory or awareness or both) and add or change values in those buffers or in output buffers that direct motor responses. This production system architecture is Turing-machine powerful and biologically plausible. Would visual processing have its own version of a production system that constructs the representation of the visual scene? Or is there a decentralized set of processes, each an advanced inference engine on its own that posts results to a specifically visual ‘‘blackboard’’ (van der Velde & de Kamps, 2006) constructing, as a group, our overall experience of the visual world? This community approach is currently the favored hypothesis for overall mental processes (Baars, 1988; Dehaene & Naccache, 2001) and we might just scale it down for visual processes, calling on multiple specialized routines (productions) to work on different aspects of the image and perhaps different locations.
On the other hand, the very active research on visual attention hints that there may be one central organization for vision at least for some purposes (Cavanagh, 2011, p.1548).
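The ‘‘if X, then Y’’ production format that Cavanagh describes can be sketched as a minimal interpreter: productions fire when their condition matches the buffers, and firing updates the buffers until no production matches. This is an illustrative toy, not Anderson's ACT-R; all buffer and production names are invented.

```python
# Minimal sketch of a production-system cycle in the "if X, then Y" format
# described above (hypothetical names; a toy, not ACT-R itself).

def run_productions(buffers, productions, max_cycles=10):
    """Repeatedly fire the first production whose condition matches the buffers."""
    for _ in range(max_cycles):
        for condition, action in productions:
            if condition(buffers):          # "if X"
                action(buffers)             # "then Y"
                break
        else:
            break  # no production matched: a stable state has been reached
    return buffers

# Toy example: a visual routine that tags a red item as salient,
# then issues an orienting response to the salient item.
productions = [
    (lambda b: b.get("color") == "red" and "salient" not in b,
     lambda b: b.update(salient=True)),
    (lambda b: b.get("salient") and "response" not in b,
     lambda b: b.update(response="orient")),
]

result = run_productions({"color": "red"}, productions)
```

Each production only reads and writes the shared buffers, so the same loop could serve either the centralized or the ‘‘blackboard’’ arrangement the quote contrasts.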

Cavanagh, P. (2011). Visual cognition. Vision Research. 51, pp. 1538–1551.


Attention

Every one knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others (James, 1890, pp. 403-404).

James, W. (1890). Principles of psychology. Vol. 1. New York: Dover.


Three fundamental findings are basic to this chapter. First, the attention system of the brain is anatomically separate from the data processing systems that perform operations on specific inputs even when attention is oriented elsewhere. In this sense, the attention system is like other sensory and motor systems. It interacts with other parts of the brain, but maintains its own identity. Second, attention is carried out by a network of anatomical areas. It is neither the property of a single center, nor a general function of the brain operating as a whole (Mesulam 1981, Rizzolatti et al 1985). Third, the areas involved in attention carry out different functions, and these specific computations can be specified in cognitive terms (Posner et al 1988). To illustrate these principles, it is important to divide the attention system into subsystems that perform different but interrelated functions. In this chapter, we consider three major functions that have been prominent in cognitive accounts of attention (Kahneman 1973, Posner & Boies 1971): (a) orienting to sensory events; (b) detecting signals for focal (conscious) processing, and (c) maintaining a vigilant or alert state (Posner & Petersen, 1990, p. 26).

Posner, M.I., & Petersen, S.E. (1990). The attention system of the human brain. Annual Review of Neuroscience, 13, pp. 25-42.


We ran an event-related fMRI experiment using the ANT to find brain areas active for the three attentional networks. We hypothesized that a pattern of separable activity would emerge with specific attentional functions loading heavily on segregated anatomical areas. We explored the following specific hypotheses based on previous studies that have isolated each network within separate tasks on separate subjects: (1) alerting would activate the frontal and parietal areas of the right and/or left hemisphere and thalamic areas that are potentially related to norepinephrine (Coull et al., 2000, 2001; Marrocco and Davidson, 1998). [...] ; (2) orienting would activate a superior parietal region and the temporal parietal junction, with a right hemisphere bias (Corbetta et al., 2000); and (3) conflict would activate anterior cingulate cortex (ACC) and a left lateral frontal bias might be suggested for the importance of dopamine for this system (Bush et al., 2000; MacDonald et al., 2000) (Fan et al., 2005, p. 472).

Overall, results support the initial hypotheses that specific attention networks operating within the same subjects and within the same task-blocks are associated with separable activation patterns loading on specific anatomical regions (Fan et al., 2005, p. 475).

Fan, J., McCandliss, B.D., Fossella, J., Flombaum, J.I., & Posner, M.I. (2005). The activation of attentional networks. Neuroimage, 26, pp. 471-479.


We consider here top-down processes in attention, and how they interact with bottom-up processing, in a model of visual attentional processing which has multiple hierarchically organized modules in the architecture [...]. The model shows how the dorsal (sometimes called where) visual stream (reaching the posterior parietal cortex, PP) and the ventral (what) visual stream (via V4 to the inferior temporal cortex, IT) could interact through early visual cortical areas (such as V1 and V2) to account for many aspects of visual attention (Deco, 2001; Deco and Zihl, 2001; Rolls and Deco, 2002; Deco and Lee, 2002; Corchs and Deco, 2002; Heinke et al., 2002; Deco and Lee, 2004). The system modelled is essentially composed of six modules (V1 (the primary visual cortex), V2–V4, IT, PP, ventral prefrontal cortex v46, and dorsal prefrontal cortex d46). These six modules are reciprocally connected in a parallel (dorsal and ventral) hierarchy in accord with anatomical data (Felleman and Van Essen, 1991) (Deco & Rolls, 2005, p. 239).

Deco, G., & Rolls, E.T. (2005). Attention, short-term memory, and action selection: a unifying theory. Progress in Neurobiology, 76, pp. 236-256.


These networks carry out the functions of alerting, orienting, and executive attention (Posner & Fan 2007). [...] Alerting is defined as achieving and maintaining a state of high sensitivity to incoming stimuli; orienting is the selection of information from sensory input; and executive attention involves mechanisms for monitoring and resolving conflict among thoughts, feelings, and responses (Posner & Rothbart, 2007, p. 7).

The alerting system has been associated with thalamic as well as frontal and parietal regions of the cortex (Fan et al. 2005) (Posner & Rothbart, 2007, p. 7).

Orienting involves aligning attention with a source of sensory signals. This may be overt, as when eye movements accompany movements of attention, or may occur covertly, without any eye movement. The orienting system for visual events has been associated with posterior brain areas, including the superior parietal lobe and temporal parietal junction, and in addition, the frontal eye fields (Corbetta & Shulman 2002) (Posner & Rothbart, 2007, p. 7).

Executive control of attention is often studied by tasks that involve conflict, such as various versions of the Stroop task. In the Stroop task, subjects must respond to the color of ink (e.g., red) while ignoring the color word name (e.g., blue) (Bush et al. 2000). Resolving conflict in the Stroop task activates midline frontal areas (anterior cingulate) and lateral prefrontal cortex (Botvinick et al. 2001, Fan et al. 2005). (Posner & Rothbart, 2007, p. 7).

Posner, M.I., & Rothbart, M.K. (2007). Research on attention networks as a model for the integration of psychological science. Annual Review of Psychology, 58, pp. 1-23.

Attention can be captured in a bottom-up fashion, by a salient stimulus. For example, brightly colored or fast moving objects are often important and are therefore salient. But intelligent behavior depends on top-down control signals that can modulate bottom-up sensory processing in favor of inputs more relevant to achieving long-term goals. Neurophysiological studies have begun to distinguish the circuitry, within a shared frontal–parietal network, that guides top-down and bottom-up attention (Miller & Buschman, 2013, p. 216).

Bottom-up attention signals may be first extracted in, and therefore flow from, the parietal cortex. One particular region, lateral intraparietal cortex (LIP), seems to contain saliency maps sensitive to strong sensory inputs [1] (Figure 2). Highly salient, briefly flashed, stimuli capture both behavior and the response of LIP neurons [2,3]. Microstimulating LIP biases visual search toward the corresponding location in the presumptive LIP saliency map [4]. Saliency maps are thought to automatically select the strongest sensory input via competition between map locations. This may result from interactions between excitatory receptive field centers (ERFCs) and inhibitory surrounds (Figure 2). The planning of a saccade to a location outside the ERFC suppresses LIP activity to a stimulus in the ERFC, reflecting LIP's center-surround structure [5]. Saliency maps are also seen in the frontal cortex [6,7] as are center/surround interactions [8]. However, LIP neurons signal salient stimuli with a short latency [9,10], shorter than the frontal cortex [11,12]. This suggests the neural signals reflecting the bottom-up capture of attention flow from parietal, not frontal, cortex. The saliency maps in parietal cortex may, in turn, be partially inherited from midbrain structures [13]: local inactivation of superior colliculus disrupts an animal’s ability to select a salient stimulus [14]. (Miller & Buschman, 2013, p. 216).

By contrast, the network interactions for top-down shifts of attention seem to flow in a different direction: originating in frontal cortex, the brain region most associated with ‘executive’ brain functions. Deactivation of the frontal eye fields (FEF) disrupts planned (top-down) saccades but has no effect on bottom-up stimulus detection [15]. Similarly, removing the top-down influence of the frontal cortex on visual cortex by combining unilateral PFC lesions with a split-brain transection results in monkeys that cannot flexibly switch their attention to constantly changing targets but has no effect when attention can be automatically grabbed by a salient, pop-out, target [16] (Miller & Buschman, 2013, p. 216).

Neurophysiological studies also suggest that top-down signals may originate from the frontal cortex (Figure 1). Frontal cortical neurons reflect shifts of top-down attention with a shorter latency than parietal area LIP [11,12]. When attention is focused, the FEF and visual cortex go into rhythmic synchrony (more below) with a phase offset that suggests the former is driving the latter [17]. If internal control of attention originates in frontal cortex, artificial activation of frontal cortex should induce the type of top-down modulation of visual cortex seen during volitional shifts of attention. Indeed, microstimulation of the FEF produces top-down attention-like modulation of visual area V4 [18]. This can also be seen by modulating dopamine in the FEF, the neurotransmitter system most associated with reward and goal-directed behavior [19] (Miller & Buschman, 2013, p. 217).

Miller, E.K., & Buschman, T.J. (2013). Cortical circuits for the control of attention. Current Opinion in Neurobiology, Vol. 23, pp. 216-222.



Selective attention

To summarize the conclusions: it seems that we can detect and identify separable features in parallel across a display (within the limits set by acuity, discriminability, and lateral interference); that this early, parallel process of feature registration mediates texture segregation and figure-ground grouping; that locating any individual feature requires an additional operation; that if attention is diverted or overloaded, illusory conjunctions may occur (Treisman et al., 1977). Conjunctions, on the other hand, require focal attention to be directed to each relevant location; they do not mediate texture segregation, and they cannot be identified without also being spatially localized (Treisman & Gelade, 1980, pp. 131-132).

The findings also suggest a convergence between two perceptual phenomena - parallel detection of visual targets and perceptual grouping or segregation. Both appear to depend on a distinction at the level of separable features. Neither requires focal attention, so both may precede its operation. This means that both could be involved in the control of attention. The number of items receiving focal attention at any moment of time can vary. Visual attention, like a spotlight or zoom lens, can be used over a small area with high resolution or spread over a wider area with some loss of detail (Eriksen & Hoffman, 1972). We can extend the analogy in the present context to suggest that attention can either be narrowed to focus on a single feature, when we need to see what other features are present and form an object, or distributed over a whole group of items which share a relevant feature (Treisman & Gelade, 1980, p. 132).

To conclude: the feature-integration theory suggests we become aware of unitary objects in two different ways - through focal attention or through top-down processing. We may not know on any particular occasion which has occurred, or which has contributed most to what we see. In normal conditions, the two routes operate together, but in extreme conditions we may be able to show either of the two operating almost independently of the other (Treisman & Gelade, 1980, p. 134).

The first route to object identification depends on focal attention, directed serially to different locations, to integrate the features registered within the same spatio-temporal "spotlight" into a unitary percept. This statement is of course highly oversimplified; it begs many questions, such as how we deal with spatially overlapping objects and how we register the relationships between features which distinguish many otherwise identical objects. These problems belong to a theory of object recognition and are beyond the scope of this paper (Treisman & Gelade, 1980, p. 134).

The second way in which we may "identify" objects, when focused attention is prevented by brief exposure or overloading, is through top-down processing. In a familiar context, likely objects can be predicted. Their presence can then be checked by matching their disjunctive features to those in the display, without checking how they are spatially conjoined. If the context is misleading, this route to object recognition should give rise to errors; but in the highly redundant and familiar environments in which we normally operate, it should seldom lead us astray. When the environment is less predictable or the task requires conjunctions to be specified, we are in fact typically much less efficient. Searching for a face, even one as familiar as one's own child, in a school photograph can be a painstakingly serial process, and focused attention is certainly recommended in proof reading and instrument monitoring (Treisman & Gelade, 1980, p. 134).
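The behavioral signature that distinguishes the two search modes above can be sketched numerically: feature search is parallel, so detection time is flat across set size, while conjunction search proceeds serially under focal attention and grows linearly. This is an illustrative toy, not the authors' model; all timing constants are invented.

```python
# Toy reaction-time sketch of feature vs. conjunction search under
# feature-integration theory (hypothetical constants, illustration only).

def feature_search_rt(set_size, base=400):
    # Parallel feature registration: detection time is flat in set size.
    return base

def conjunction_search_rt(set_size, base=400, per_item=50):
    # Serial, self-terminating scan with focal attention: on average
    # half the items are inspected before the target is found.
    return base + per_item * set_size / 2

rts_feature = [feature_search_rt(n) for n in (4, 8, 16)]
rts_conj = [conjunction_search_rt(n) for n in (4, 8, 16)]
```

The flat versus linear slopes (in ms per display item) are the kind of search functions Treisman and Gelade used to argue that only conjunctions demand serial focal attention.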

Treisman, A.M. & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology. 12, pp. 97-136.


Our model is limited to the bottom-up control of attention, i.e. to the control of selective attention by the properties of the visual stimulus. It does not incorporate any top-down, volitional component. Furthermore, we are here only concerned with the localization of the stimuli to be attended (‘where’), not their identification (‘what’) (Itti & Koch, 2000, p. 1492).

(1) First, visual input is represented, in early visual structures, in the form of iconic (appearance-based) topographic feature maps. Two crucial steps in the construction of these representations consist of center-surround computations in every feature at different spatial scales, and within-feature spatial competition for activity (Itti & Koch, 2000, p. 1492).

(2) Second, information from these feature maps is combined into a single map which represents the local ‘saliency’ of any one location with respect to its neighborhood (Itti & Koch, 2000, p. 1492).

(3) Third, the maximum of this saliency map is, by definition, the most salient location at a given time, and it determines the next location of the attentional searchlight (Itti & Koch, 2000, p. 1492).

(4) And fourth, the saliency map is endowed with internal dynamics allowing the perceptive system to scan the visual input such that its different parts are visited by the focus of attention in the order of decreasing saliency (Itti & Koch, 2000, p. 1492).
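The four steps can be sketched on a toy 1-D "image": center-surround contrast per feature map, summation into a saliency map, winner-take-all selection of the maximum, and inhibition of return so that attention visits locations in order of decreasing saliency. This is a drastic simplification of the Itti & Koch architecture; window sizes and values are arbitrary.

```python
# Toy 1-D sketch of the four steps above (illustration only, not the
# authors' multi-scale implementation).

def center_surround(feature_map, radius=1):
    """Step 1: response = center value minus local surround mean (rectified)."""
    n = len(feature_map)
    out = []
    for i in range(n):
        surround = [feature_map[j]
                    for j in range(max(0, i - radius), min(n, i + radius + 1))
                    if j != i]
        out.append(max(0.0, feature_map[i] - sum(surround) / len(surround)))
    return out

def saliency_scanpath(feature_maps, fixations=3):
    # Step 2: combine per-feature contrast maps into a single saliency map.
    contrast_maps = [center_surround(m) for m in feature_maps]
    saliency = [sum(vals) for vals in zip(*contrast_maps)]
    visited = []
    for _ in range(fixations):
        # Step 3: winner-take-all picks the currently most salient location.
        winner = max(range(len(saliency)), key=lambda i: saliency[i])
        visited.append(winner)
        # Step 4: inhibition of return suppresses the attended location, so
        # the searchlight moves on in order of decreasing saliency.
        saliency[winner] = 0.0
    return visited

# Two toy feature maps (say, intensity and color) over 5 locations.
path = saliency_scanpath([[0, 0, 9, 0, 0], [0, 5, 0, 0, 1]])
```

Note that the model is purely bottom-up, as the quote stresses: nothing in the loop consults a task or a target template.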

Itti, L. & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research. 40, pp. 1489-1506.


First of all, what is the source that determines the location and shape of a spatially selective attentional focus? More than 10 years ago, due to the lack of detailed electrophysiological data, attention has been described as a selection or winner-takes-all process within a saliency (or master) map (Koch & Ullman, 1985; Treisman & Gelade, 1980; Wolfe, 1994). Such a map has been defined to indicate potentially relevant locations by an enhanced activity at the corresponding spatial location. In the search for the saliency map a number of brain areas have been identified. Among those are the frontal eye field (Schall, 2002; Thompson & Schall, 2000), the superior colliculus (Ignashchenkova, Dicke, Haarmeier, & Thier, 2004; Muller, Philiastides, & Newsome, 2005) and LIP (Bisley & Goldberg, 2006). However, area V4 has also been shown to reflect aspects of a saliency map (Bichot, Rossi, & Desimone, 2005; Mazer & Gallant, 2003; Ogawa & Komatsu, 2004), which suggests that saliency alone might not be a sufficient criterion for defining the source of spatial attention in the brain. In fact, we suggested a model in which the information of saliency in V4 can be task relevant immediately after the presentation of a visual scene regardless of spatial attention (Hamker, in press). This selective enhancement at intermediate levels of the cortical hierarchy could be used to guide visuomotor processes such as eye movements (Hamker & Zirnsak, 2006, p. 1371).

The model consists of visual areas V4, inferotemporal (IT) cortex, prefrontal areas that contain the frontal eye field (FEF) for saccade planning and more ventrolateral parts for implementing functions of working memory (Hamker & Zirnsak, 2006, p. 1372).

Model for visual attention. First, information about the content and its low level stimulus-driven salience is extracted, as indicated by the map “Salience”. This information is sent further downstream to V4 and to IT cells which are broadly tuned to location. A target template is encoded in PF memory (PFmem) cells. Feedback from PFmem to IT increases the strength of all features in IT matching the template. Feedback from IT to V4 gain sends the information about the target downwards to cells with a higher spatial tuning. FEF visuomovement (FEFv) cells combine the feature information across all dimensions and indicate salient or relevant locations in the scene. The FEF movement (FEFm) cells compete for the target location of the next eye movement. The activity of the FEF movement cells is also sent to V4 gain and IT for gain modulation. The IOR map memorizes recently visited locations and inhibits the FEF visuomovement cells. However, this map is only required for the simulation of a scanpath but not for the receptive field dynamics simulated here. (Hamker & Zirnsak, 2006, p. 1373).

Attention in the model has two different sources, one is stimulus-driven and the other is task-driven (Fig. 1). In order to compute the stimulus-driven source we (i) create multiresolution feature maps, (ii) compute multi-resolution contrast maps using center-surround operations and (iii) combine both in feature conspicuity maps. For computing the first two steps we largely follow Itti, Koch, and Niebur (1998) but see Hamker (2005c) for differences in these early processing stages. The initial conspicuity is then continuously updated to reflect the task-relevance. The relevance of each feature is determined by the search template (target). Feedback enhances the gain of feedforward processing and the network ultimately settles onto a final response and a specific attentional state. Thus, attention emerges by the dynamics of vision (Hamker & Zirnsak, 2006, p. 1378).
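The key difference from a purely bottom-up model is the task-driven source: feedback from the target template multiplicatively enhances the gain of feedforward conspicuity for matching features. A minimal sketch of that gain idea, with invented feature names and an arbitrary gain factor (not the Hamker & Zirnsak dynamics themselves):

```python
# Hypothetical sketch of template-driven gain modulation: feedback boosts
# feature channels that match the search template (illustration only).

def apply_feedback_gain(conspicuity, template, gain=2.0):
    """Multiply conspicuity of template-matching feature channels by `gain`."""
    return {feature: value * (gain if feature in template else 1.0)
            for feature, value in conspicuity.items()}

# Bottom-up conspicuity of three feature channels at one location.
bottom_up = {"red": 0.4, "green": 0.6, "vertical": 0.5}

# Task: search for a red vertical item; matching channels are enhanced,
# so the initially strongest channel ("green") no longer wins.
modulated = apply_feedback_gain(bottom_up, template={"red", "vertical"})
best = max(modulated, key=modulated.get)
```

As in the quoted account, the initial stimulus-driven conspicuity is not replaced but continuously reweighted by task relevance, and the attentional state emerges from the updated competition.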

Hamker, F.H. & Zirnsak, M. (2006). V4 receptive field dynamics as predicted by a systems-level model of visual attention using feedback from the frontal eye field. Neural Networks. 19. pp. 1371–1382.


Attentional control

This paper discussed the distinction between automatic activation processes which are solely the result of past learning and processes which are under current conscious control. Automatic activation processes are those which may occur without intention, without any conscious awareness and without interference with other mental activity. They are distinguished from operations performed by the conscious processing system since the latter system is of limited capacity and thus its commitment to any operation reduces its availability to perform any other operation. Many current cognition tasks were analyzed in terms of the interaction of automatic activation processes with strategies determined by task instructions. Concentration on a source of signals serves to reduce interference from outside that source. Outside signals still intrude, particularly when they are classified by the memory system as having emotional significance (Posner & Snyder, 1975, p. 221).

Posner, M.I., & Snyder, C.R.R. (1975). Attention and cognitive control. Chapter 12 (pp. 205-223). In R. Solso (Ed.), Information processing and cognition: The Loyola symposium. Potomac: Lawrence Erlbaum Associates.


We have presented a theory of information processing that emphasizes the roles of automatic and controlled processing. Automatic processing is learned in long-term store, is triggered by appropriate inputs, and then operates independently of the subject's control. An automatic sequence can contain components that control information flow, attract attention, or govern overt responses. Automatic sequences do not require attention, though they may attract it if training is appropriate, and they do not use up short-term capacity. [...] or memory load. Controlled processing is a temporary activation of nodes in a sequence that is not yet learned. It is relatively easy to set up, modify, and utilize in new situations. It requires attention, uses up short-term capacity, and is often serial in nature. Controlled processing is used to facilitate long-term learning of all kinds, including automatic processing (Schneider & Shiffrin, 1977, pp. 52-53).

Schneider, W., & Shiffrin, R.M. (1977). Controlled and Automatic Human Information Processing: I. Detection, Search, and Attention. Psychological Review, Vol. 84(1), pp. 1-66.


To some extent, task-set reconfiguration is endogenously (internally) driven. That is, we can adopt task-sets at will, in advance of the stimulus, and without foreknowledge of the stimulus identity other than that it will be a member of a specified class. The responsibility for this intentional component of task control is typically attributed to a special executive mechanism-the Will (James, 1890), Controlled Processing (Atkinson & Shiffrin, 1968; Shiffrin & Schneider, 1977), the Central Executive (e.g., Baddeley, 1986), or a Supervisory Attentional System (Norman & Shallice, 1986; Shallice, 1988, 1994b)-that is widely supposed to be unitary, resource-limited, functionally distinct from the processes it organizes, and intimately associated with conscious awareness. Although the endogenous component of task-set undoubtedly exists, we refrain from making any such assumptions about it (Rogers & Monsell, 1995, p. 208).

In addition to the endogenous component of task-set, there is also ample evidence that stimuli can of themselves activate or evoke in a person a tendency to perform actions (or tasks) habitually associated with them, irrespective of prior intention, and sometimes in conflict with prior intention. We refer to this as exogenous control. (The endogenous-exogenous terminology is borrowed from its application to a similar distinction between two mechanisms for the spatial orienting of attention-e.g., Briand & Klein, 1987). In a clinical setting, striking illustrations of exogenous control are observed following damage to the frontal lobes (Shallice, 1988) (Rogers & Monsell, 1995, pp. 208-209).

The relation between endogenous and exogenous control of task-set is such that the endogenous SAS modulates, biases, and perhaps even restarts, when appropriate, the competition between task-sets driven by exogenous input. In this formulation, deliberate adoption of a task-set would be seen as an anticipatory biasing of task-set activations (Rogers & Monsell, 1995, p. 209).

Rogers, R.D., & Monsell, S. (1995). Costs of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General. Vol. 124, No. 2, pp. 207-231.


Psychological experiments (with adult human subjects) typically depend on their subjects' ability to adopt "at will" now one task set, now another, according to the experimenter's instructions. Very often, experiments require the subject to maintain a given task over many trials, in which the same cognitive operations are to be performed repeatedly. In everyday activity, on the contrary, subjects often shift rapidly, and repeatedly, from one intended set of cognitive operations to another, frequently without any immediate external cues. We are interested in the mechanisms of voluntary control responsible for implementing these intentional shifts of set (Allport, Styles, & Hsieh, 1994, pp. 421-422).

The dynamic shifting or orienting of visual attention from one spatial location to another has been studied extensively in recent years (Posner and Petersen, 1990). In contrast, research on other (generally non-spatial) aspects of cognitive control has tended to focus on the efficiency with which a given task set can be maintained, either in the face of potentially conflicting stimuli, as in the many forms of "Stroop-like" interference (MacLeod, 1991), or under conditions of "divided attention" with multiple task sets in concurrent, dual-task performance (Hirst, 1986). Dynamic shifting of set has received much less attention (Allport, Styles, & Hsieh, 1994, p. 432).

Our intention in these experimental manipulations was to explore task variables that might be related to current ideas about "controlled" (or attentional or "supervisory mode") processing. Voluntary shift of task set we took to be a prototypical function of executive or intentional control. According to a number of popular accounts, such control is postulated to be the responsibility of a unitary central executive or supervisory attentional system (Baddeley, 1986; Johnson-Laird, 1988; Norman and Shallice, 1986; Posner, 1982; Shiffrin, 1988; Umiltà et al., 1992). An underlying theoretical distinction is made between a controlled system, which is essentially stimulus driven, and an autonomous control system, which does not depend on stimulus triggering (Shiffrin and Schneider, 1977) (Allport, Styles, & Hsieh, 1994, p. 432).

In contrast to this conception of a unitary central executive, we may contrapose a hypothesis of distributed control. That is, we suggest that voluntary or intentional control of task set is realized through interactions among a variety of functionally specialized components, each responsible for specific features of executive control (Allport, 1989, 1993) (Allport, Styles, & Hsieh, 1994, p. 432).

In this respect, at least, non-spatial attention resembles spatial attention, which has been shown to depend on a number of both functionally and anatomically distinct subsystems (e.g., Desimone et al., 1990; Posner and Presti, 1987; Rizzolatti, Gentilucci, & Matelli, 1985) (Allport, Styles, & Hsieh, 1994, p. 432).

Could it be that the underlying and widely held assumption, formulated most influentially perhaps by Shiffrin and Schneider (1977) and by Baddeley and Hitch (1974; Baddeley, 1986), and from which these explorations began - the idea of a fundamental distinction between a controlled system (or systems) and a separate, autonomous controller (an executive system) - is misconceived? In these and related formulations, an essential feature of the controlled (or slave) system(s) is that they are exogenously triggered, or stimulus driven. In contradistinction, the postulated central executive, or supervisory attentional system, is not dependent on external triggering; its control operations are autonomous, initiated (in some way) from within. The problem is that the prototypical control operation of a shift of set appears here to depend on - to await triggering by - appropriate external stimuli (Allport, Styles, & Hsieh, 1994, pp. 449-450).

Allport, D.A., Styles, E.A., & Hsieh, S. (1994). Shifting intentional set: Exploring the dynamic control of tasks. In C. Umilta & M. Moscovitch (Eds.), Attention and performance XV (pp. 421-452). Cambridge, MA: MIT Press.


One paradigm to study cognitive control is task switching, in which participants rapidly switch between two or more choice reaction-time (RT) tasks. In most circumstances, switching tasks is associated with a sizable decrement in performance (called switching cost) (Allport, Styles, & Hsieh, 1994; Biederman, 1972; de Jong, in press; Fagot, 1994; Gopher, Armony, & Greenshpan, in press; Jersild, 1927; Meiran, 1996; Rogers & Monsell, 1995; Rubinstein, Meyer, & Evans, submitted). Two explanations have been suggested for this cost. The first explanation is based on the concept of preparatory reconfiguration, presumably an organizational-executive process. The second explanation is based on the concept of task set inertia, a mechanism not necessarily related to executive processing (Meiran, Chorev, & Sapir, 2000, p. 211).

Rubinstein et al. (submitted) suggested that reconfiguration is composed of two components. One is goal activation, presumably related to the updating of the contents of declarative memory where task demands are represented. The other component is rule activation, related to the activation of procedural memory aspects related to task performance (Meiran, Chorev, & Sapir, 2000, p. 212).

In contrast, Allport et al. (1994, see also Allport & Wylie, in press) emphasized processes that are unrelated to intentional control. They suggested ‘‘that . . . the [task] switch cost. . . . reflects a kind of proactive interference from competing S-R mappings with the same stimuli, persisting from the instruction set on preceding trials. We [Allport et al.] might call this phenomenon task set inertia’’ (p. 436) (Meiran, Chorev, & Sapir, 2000, p. 212).

The present results reconcile two opposing views regarding the reduction in switching costs by prolonging the preparatory interval. According to one view (Rogers & Monsell, 1995; but see also De Jong, in press; Fagot, 1994; Goschke, in press; Meiran, 1996, in press-a, in press-b) the reduction in switching costs reflects preparatory reconfiguration. According to the alternative view (Allport et al., 1994) switching cost reduction reflects passive dissipation of the previous task set. We capitalized on the advantages of the cueing paradigm (e.g., Shaffer, 1965; Sudevan & Taylor, 1987) and our results indicate that both these processes operate in the present task-switching paradigm. Prolonging the RCI resulted in cost reduction, in line with the set dissipation hypothesis. In addition, prolonging the CTI resulted in a further reduction in switching costs, indicating reconfiguration (Meiran, Chorev, & Sapir, 2000, p. 247).

The present work concentrated on switching cost. This component consisted of two subcomponents in Fagot’s (1994) formulation, a preparatory component and a residual component (to avoid confusion, we use our terms, although Fagot’s terms were different). The preparatory component reflects the reduction in switching cost by preparation, while the residual component reflects switching cost given plenty of time to prepare (Meiran, Chorev, & Sapir, 2000, p. 248).

Meiran, N., Chorev, Z., & Sapir, A. (2000). Component processes in task switching. Cognitive Psychology. 41, pp. 211-253.
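The two-component decomposition of switching cost described above lends itself to a simple arithmetic illustration. The sketch below is hypothetical: the RT values are invented for illustration and are not data from Meiran, Chorev, and Sapir (2000); only the decomposition logic (the total cost at a short cue-target interval splits into a preparatory component, removed by preparation, and a residual component, which remains) follows the text.

```python
# Hypothetical decomposition of task-switching cost into a preparatory
# component and a residual component, in the spirit of Fagot's (1994)
# formulation as described by Meiran, Chorev, & Sapir (2000).
# All RT values below are illustrative, not data from the paper.

def switch_cost(rt_switch_ms, rt_repeat_ms):
    """Switching cost = mean RT on switch trials minus repeat trials."""
    return rt_switch_ms - rt_repeat_ms

# Mean RTs (ms) at a short and a long cue-target interval (CTI).
cost_short_cti = switch_cost(rt_switch_ms=950, rt_repeat_ms=700)   # 250 ms
cost_long_cti = switch_cost(rt_switch_ms=780, rt_repeat_ms=680)    # 100 ms

residual_component = cost_long_cti                     # cost left after full preparation
preparatory_component = cost_short_cti - cost_long_cti # cost removed by preparation

print(f"total cost (short CTI): {cost_short_cti} ms")
print(f"preparatory component:  {preparatory_component} ms")
print(f"residual component:     {residual_component} ms")
```

On these invented numbers, preparation removes 150 ms of the 250 ms cost, and 100 ms of residual cost persists even with ample preparation time.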


Shared attention

.... (James, 1890, pp. 403-404).

James, W. (1890). Principles of psychology. Vol. 1. New York: Dover.


Sustained attention

.... (James, 1890, pp. 403-404).

James, W. (1890). Principles of psychology. Vol. 1. New York: Dover.


Memory

Add text.


Language

Add text.


Metacognition

Metacognition of learning and remembering has been extensively studied by psychologists [1–9]. The consensus in the field is that metacognitive monitoring is inferential rather than direct, and is grounded in a variety of sensorily accessible cues. For example, a feeling of knowing can be grounded in the familiarity of the cue or recall of facts closely related to the target item, and a judgement of learning can be based on the fluency with which the item to be learned is processed. While these findings might be consistent with the claim that there is nevertheless an evolved system or ‘module’ for metacognition, in fact, there is no reason to believe that any cognitive mechanism is employed other than the same mindreading faculty that we use for attributing mental states to other people, enhanced with some acquired first-person strategies and recognition abilities [10] (Fletcher & Carruthers, 2012, p. 1366).

Indeed, when brain scans of people engaged in such metacognitive activities are conducted (and appropriate first-order tasks are used for purposes of subtraction), the very same network of regions that has been found to be involved in mindreading tasks is seen to be active. This includes medial prefrontal cortex, posterior cingulate cortex and the temporo-parietal junction [11–13] (Fletcher & Carruthers, 2012, p. 1366).

Fletcher, L., & Carruthers, P. (2012). Metacognition and reasoning. Philosophical Transactions of the Royal Society B. 367, pp. 1366-1378.


Consciousness

(1) Conscious perception involves more than sensory analysis; it enables access to widespread brain sources, whereas unconscious input processing is limited to sensory regions (Baars, 2002, p. 47).

(2) Consciousness enables comprehension of novel information, such as new combinations of words. (Baars, 2002, p. 48).

(3) Working memory depends on conscious elements, including conscious perception, inner speech and visual imagery, each mobilizing widespread functions. (Baars, 2002, p. 49).

(4) Conscious information enables many types of learning, using a variety of different brain mechanisms. (Baars, 2002, p. 50).

(5) Voluntary control is enabled by conscious goals and perception of results (Baars, 2002, p. 50).

(6) Selective attention enables access to conscious contents, and vice versa (Baars, 2002, p. 50).

(7) Consciousness enables access to "self": executive interpretation in the brain. (Baars, 2002, p. 50).

Baars, B. J. (2002). The conscious access hypothesis: origins and recent evidence. Trends in Cognitive Sciences. 6(1), pp. 47-52.


Voluntary action and free will: The hypothesis of an attentional control of behavior by supervisory circuits including AC and PFC, above and beyond other more automatized sensorimotor pathways, may ultimately provide a neural substrate for the concepts of voluntary action and free will (Posner, 1994). One may hypothesize that subjects label an action or a decision as "voluntary" whenever its onset and realization are controlled by higher-level circuitry and are therefore easily modified or withheld, and as "automatic" or "involuntary" if it involves a more direct or hardwired command pathway (Passingham, 1993). One particular type of voluntary decision, mostly found in humans, involves the setting of a goal and the selection of a course of action through the serial examination of various alternatives and the internal evaluation of their possible outcomes. This conscious decision process, which has been partially simulated in neural network models (Dehaene & Changeux, 1991, 1997), may correspond to what subjects refer to as "exercising one's free will". Note that under this hypothesis free will characterizes a certain type of decision-making algorithm and is therefore a property that applies at the cognitive or systems level, not at the neural or implementation level. This approach may begin to address the old philosophical issue of free will and determinism. Under our interpretation, a physical system whose successive states unfold according to a deterministic rule can still be described as having free will, if it is able to represent a goal and to estimate the outcomes of its actions before initiating them (Dehaene & Naccache, 2001, pp. 29-30).

According to the workspace hypothesis, a large variety of perceptual areas can be mobilized into consciousness. At a microscopic scale, each area in turn contains a complex anatomical circuitry that can support a diversity of activity patterns. The repertoire of possible contents of consciousness is thus characterized by an enormous combinatorial diversity: each workspace state is "highly differentiated" and of "high complexity", in the terminology of Tononi and Edelman (1998). Thus, the flux of neuronal workspace states associated with a perceptual experience is vastly beyond accurate verbal description or long-term memory storage. Furthermore, although the major organization of this repertoire is shared by all members of the species, its details result from a developmental process of epigenesis and are therefore specific to each individual. Thus, the contents of perceptual awareness are complex, dynamic, multi-faceted neural states that cannot be memorized or transmitted to others in their entirety. These biological properties seem potentially capable of substantiating philosophers' intuitions about the "qualia" of conscious experience, although considerable neuroscientific research will be needed before they are thoroughly understood (Dehaene & Naccache, 2001, pp. 29-30).

Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition. 79, pp. 1–37.


Figure 1. Proposed distinction between subliminal, preconscious, and conscious processing. Three types of brain states are schematically shown, jointly defined by bottom-up stimulus strength (on the vertical axis at left) and top-down attention (on the horizontal axis). Shades of color illustrate the amount of activation in local areas, and small arrows the interactions among them. Large arrows schematically illustrate the orientation of top-down attention to the stimulus, or away from it (‘task-unrelated attention’). Dashed curves indicate a continuum of states, and thick lines with separators indicate a sharp transition between states. During subliminal processing, activation propagates but remains weak and quickly dissipating (decaying to zero after 1–2 seconds). A continuum of subliminal states can exist, depending on masking strength, top-down attention, and instructions (see Box 1). During preconscious processing, activation can be strong, durable, and can spread to multiple specialized sensori-motor areas (e.g. frontal eye fields). However, when attention is oriented away from the stimulus (large black arrows), activation is blocked from accessing higher parieto-frontal areas and establishing long-distance synchrony. During conscious processing, activation invades a parieto-frontal system, can be maintained ad libitum in working memory, and becomes capable of guiding intentional actions including verbal reports. The transition between preconscious and conscious is sharp, as expected from the dynamics of a self-amplified non-linear system [4] (Dehaene et al., 2006, p. 206).

(1) Subliminal processing. We define subliminal processing (etymologically ‘below the threshold’) as a condition of information inaccessibility where bottom-up activation is insufficient to trigger a large-scale reverberating state in a global network of neurons with long range axons. Simulations of a minimal thalamo-cortical network [4] indicate that such a nonlinear self-amplifying system possesses a well-defined dynamical threshold. A processing stream that exceeds a minimal activation level quickly grows until a full-scale ignition is seen, while a slightly weaker activation quickly dies out. Subliminal processing corresponds to the latter type (Dehaene et al., 2006, p. 207).
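The dynamical threshold described here can be illustrated with a toy one-unit simulation. This is a minimal sketch, not the thalamo-cortical network model the quote cites as [4]: the parameters (sigmoid slope, threshold, time step) are arbitrary choices that merely reproduce the qualitative bistability, where a slightly stronger input ignites to a full-scale reverberating state and a slightly weaker one dies out.

```python
import math

def simulate(a0, steps=300, dt=0.1, slope=20.0, theta=0.5):
    """Toy self-amplifying unit: activation decays passively (-a) but
    re-excites itself through a sigmoidal recurrent loop, giving two
    stable states (near 0 and near 1) separated by a sharp threshold."""
    a = a0
    for _ in range(steps):
        recurrent = 1.0 / (1.0 + math.exp(-slope * (a - theta)))
        a += dt * (-a + recurrent)
    return a

weak = simulate(a0=0.40)     # just below threshold: activation dies out
strong = simulate(a0=0.60)   # just above threshold: full-scale ignition
print(f"weak input   -> final activation {weak:.3f}")
print(f"strong input -> final activation {strong:.3f}")
```

A 0.20 difference in initial activation yields an all-or-none difference in outcome, which is the sharp subliminal/conscious transition the taxonomy turns on.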

(2) Preconscious processing. Freud [36] noted that ‘some processes [...] may cease to be conscious, but can become conscious once more without any trouble’, and he proposed that ‘everything unconscious that behaves in this way, that can easily exchange the unconscious condition for the conscious one, is therefore better described as “capable of entering consciousness” or as preconscious.’ (Dehaene et al., 2006, p. 207).

Instead of the classical binary separation between non-conscious and conscious processing, we introduce here a tripartite distinction between subliminal, preconscious, and conscious processing. The key idea is that, within non-conscious states, it makes a major difference whether stimulus invisibility is achieved by a limitation in bottom-up stimulus strength, or by the temporary withdrawal of top-down attention. The first case corresponds to subliminal processing, the second to preconscious processing. We have shown how this distinction is theoretically motivated and helps make sense of neuroimaging data (Dehaene et al., 2006, p. 208).

Our proposal could also lead to a reconciliation of several major theories of conscious perception. The distinction between preconscious and conscious processing is consistent with Lamme’s proposal of a progressive build-up of recurrent interactions, first locally within the visual system, and second more globally into parieto-frontal regions [3]. It is also consistent with Zeki’s hypothesis of an asynchronous construction of visual perception in multiple distributed sites before binding into a ‘macro-consciousness’ [2]. Our only source of disagreement – but an important one – resides in their attribution of ‘phenomenal consciousness’ or ‘micro-consciousness’ to what we have termed pre-conscious processing. (Dehaene et al., 2006, pp. 208-209).

Dehaene, S., Changeux, J.-P., Naccache, L., Sackur, J. and Sergent, C., (2006). Conscious, preconscious, and subliminal processing: a testable taxonomy. Trends in Cognitive Sciences. 10 (5), pp. 204–211.


Decision

Add a citation.


Development

Add text.


Learning

Add text.



Perception and Action

Add text.


Perception-to-Action

We propose a new theoretical framework for the cognitive underpinnings of perception and action planning, the Theory of Event Coding (TEC). [...] According to TEC, the core structure of the functional architecture supporting perception and action planning is formed by a common representational domain for perceived events (perception) and intended or to-be-generated events (action). (Hommel et al., 2001, p. 849).

Hommel, B., Müsseler, J., Aschersleben, G., & Prinz, W. (2001). The Theory of Event Coding (TEC): A framework for perception and action planning. Behavioral and Brain Sciences. 24, pp. 849–937.


Selective attention and Memory

In this article, I argue that the two interpretations of the term focus of attention in fact point to two different functional states of information in working memory. Therefore, I propose to conceptualize working memory as a concentric structure of representations with three functionally distinct regions (see Figure 1). 1. The activated part of long-term memory can serve, among other things, to memorize information over brief periods for later recall. 2. The region of direct access holds a limited number of chunks available to be used in ongoing cognitive processes. 3. The focus of attention holds at any time the one chunk that is actually selected as the object of the next cognitive operation (Oberauer, 2000, p. 412).

The limits of working memory capacity, as measured by various tasks (see, e.g., Cowan, 2001; Oberauer, Süß, Schulze, Wilhelm, & Wittmann, 2000), presumably reflect the limited number of independent elements that can be held in the region of direct access at the same time. This region, therefore, corresponds most closely to what Cowan (1995, 1999) named the focus of attention. The capacity limit of working memory probably arises from two factors, partial overwriting of representations in working memory and crosstalk between the elements in the region of direct access when one of them must be selected for processing (Oberauer & Kliegl, 2001). Overwriting means that representations sharing features tend to overwrite each other’s feature codes (e.g., as in the feature model of Nairne, 1990). Crosstalk refers to the competition among items in the region of direct access when it comes to selectively retrieving one of them at the exclusion of others. Elements held in the activated part of long-term memory do not contribute to crosstalk because they are not part of the set from which the focus selects one element (Oberauer, 2000, p. 412).

Retrieving an item from working memory, either for recall or for manipulation, means bringing this item into the focus of attention. The focus of working memory therefore has a function with respect to memory that is equivalent to the function of a focus of attention in perception. Following Allport (1987), we can characterize this function as “selection for (cognitive) action” (p. 395). Whereas the focus of attention can directly retrieve items from the region of direct access, recall of items from the activated part of long-term memory must be mediated by retrieval structures that help to bring the to-be-recalled chunks into the region of direct access (Ericsson & Kintsch, 1995). Only objects within the region of direct access are regarded as selection candidates by the focus of attention. Therefore, only objects in the direct access region contribute to crosstalk, thereby slowing down the selection process. (Oberauer, 2000, p. 412).

To summarize, I see working memory as an organized set of representations characterized by their increased state of accessibility for cognitive processes. Representations belonging to the contents of working memory can be distinguished with respect to their access status. Capacity limits on “simultaneous storage and processing” arise not from the need to share a limited resource, but from the difficulty of selective access when several distinct mental objects must be held immediately available. (Oberauer, 2000, pp. 412-413).

Working memory contents that must be held available for an ongoing processing task are kept in a functionally different state than those remembered in the background. Only the former have a substantial effect on the speed of the processing operations. This suggests that there is a capacity limit for holding items in a state of direct accessibility for cognitive operations. When more items are held in the selection set (i.e., the set of candidates for access), the selection of the required item takes more time, thus slowing the completion of the cognitive operation applied to it. Within the selection set, the one item selected for processing at any moment has a special status. When this item is selected again for the next processing step, the operation is executed several hundred milliseconds faster than when a new item must be drawn from the selection set (Oberauer, 2000, p. 420).

These three states of information in working memory are captured by the concentric model outlined in the introduction. The model specifies three regions in which memory contents can be held: the activated part of long-term memory, the region of direct access, and the focus of attention. I do not regard the three regions of this model as structurally (or even anatomically) separate subsystems, such that information must be transferred from one place to another when it is “moved” into another region. Rather I think of the three regions as functionally different states of representations in working memory. They differ with respect to how their contents are related to the processes executed in working memory. Memory elements in the focus of attention are already selected for whatever cognitive operation is set up in the system to be executed next. Elements in the region of direct access form the selection set; when a new item must be retrieved from working memory as input to a process, it is selected from this set. Memory contents in the activated part of long-term memory are held available in the background. They can be retrieved only indirectly through associations with items in the more central regions. Activated representations in long-term memory can influence ongoing processes indirectly. For example, when a probe in a Sternberg task matches an item held in the activated part of long-term memory, RTs are slowed (Oberauer, 2001). Presumably the activated information in long-term memory can also prime or bias the processes executed on the element in the focus (Oberauer, 2000, p. 420).

Oberauer, K. (2002). Access to information in working memory: Exploring the focus of attention. Journal of Experimental Psychology: Learning, Memory, and Cognition. 28(3), pp. 411-421.
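Read computationally, the concentric model predicts an ordering of access times: fastest for the item already in the focus of attention, slower for items in the direct-access region (with crosstalk growing with set size), and slowest for items that must first be retrieved from activated long-term memory. The sketch below is a hypothetical illustration of that ordering; every time constant is made up, and the function names are not from Oberauer's papers.

```python
# Toy sketch of the concentric model of working memory: three functional
# states (activated LTM, region of direct access, focus of attention).
# Selection time grows with the number of candidates in the direct-access
# region (crosstalk); re-selecting the item already in the focus is
# fastest. All time constants are made-up illustrations.

BASE_MS = 300           # hypothetical base operation time
CROSSTALK_MS = 50       # hypothetical per-competitor slowing
SELECTION_MS = 200      # hypothetical cost of moving an item into the focus

def access_time(item, focus, direct_access, activated_ltm):
    if item == focus:
        return BASE_MS  # already selected for the next operation
    if item in direct_access:
        # selection from the direct-access set; competitors cause crosstalk
        return BASE_MS + SELECTION_MS + CROSSTALK_MS * (len(direct_access) - 1)
    if item in activated_ltm:
        # must first be brought into the direct-access region via retrieval
        return BASE_MS + SELECTION_MS + CROSSTALK_MS * len(direct_access) + 300
    raise ValueError("item not in working memory")

direct = {"A", "B", "C"}
ltm = {"D", "E"}
print(access_time("A", focus="A", direct_access=direct, activated_ltm=ltm))  # fastest
print(access_time("B", focus="A", direct_access=direct, activated_ltm=ltm))
print(access_time("D", focus="A", direct_access=direct, activated_ltm=ltm))  # slowest
```

The point of the sketch is only the qualitative pattern: focus repetition is cheapest, and enlarging the direct-access set slows every non-focus access, which is the model's account of capacity limits without a shared resource.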


Working memory is a system that provides selective access to a small set of representations for goal-directed processing. Its capacity is severely limited—we can hold only small amounts of information immediately accessible at the same time. Several hypotheses have been suggested on why working memory capacity is limited, among them the idea of a limited resource of activation (Just & Carpenter, 1992), a limit to the ability to control attention (Kane, Bleckley, Conway, & Engle, 2001), time-based decay (Page & Norris, 1998), and interference between representations held in working memory (Oberauer & Kliegl, 2001, 2006; Saito & Miyake, 2004). The purpose of this article is to investigate one such mechanism, interference, in verbal working memory. We will be concerned with two cases of interference—interference between items to be held in working memory simultaneously, and interference between memory items and representations involved in a concurrent processing task. The latter case is of interest because working memory is often studied with tasks requiring concurrent storage and processing, as for instance the family of complex span tasks (Conway et al., 2005). The term interference can refer to various mechanisms by which representations get in each other’s way. Here, we consider three of them, confusion between items, feature migration, and feature overwriting (Oberauer & Lange, 2008, pp. 730-731).

Besides similarity-based confusion, at least one further mechanism, feature overwriting, contributes to interference in working memory. Feature overwriting presupposes distributed representations of items, and therefore this finding has strong implications for models of working memory. Models using localist representations of items in working memory, such as the network model of Burgess and Hitch (1999, 2006), the primacy model (Page & Norris, 1998), and the start-end model (Henson, 1998), will have difficulty with explaining the feature overlap effect. Models using distributed representations, such as the feature model (Nairne, 1990) and SOB (Farrell & Lewandowsky, 2002) are better suited to handle effects on the feature level. (Oberauer & Lange, 2008, p. 743).

Oberauer, K., & Lange, E. (2008). Interference in verbal working memory: Distinguishing similarity-based confusion, feature overwriting, and feature migration. Journal of Memory and Language, 58, pp. 730-745.
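Feature overwriting, as discussed above, presupposes that items are distributed bundles of features and that a later item knocks out the features it shares with an earlier one. The toy sketch below illustrates that mechanism; the feature bundles are hypothetical examples, not stimuli or representations from Oberauer and Lange (2008) or Nairne's (1990) model.

```python
# Toy illustration of feature overwriting (cf. Nairne's 1990 feature
# model): items are distributed bundles of features, and encoding a later
# item overwrites the features it shares with an earlier item's trace,
# degrading that trace in proportion to feature overlap.

def overwrite(earlier, later):
    """Return the earlier item's trace after the later item is encoded:
    shared features are knocked out (marked None)."""
    return [None if f in later else f for f in earlier]

item1 = ["b", "ee", "stressed", "onset-stop"]   # hypothetical feature bundle
item2 = ["p", "ee", "stressed", "onset-stop"]   # overlaps on 3 of 4 features

trace1 = overwrite(item1, set(item2))
intact = sum(f is not None for f in trace1)
print(f"features of item 1 surviving item 2: {intact}/{len(item1)}")
```

High feature overlap leaves the earlier trace almost entirely degraded, which is why distributed-representation models predict the feature overlap effect that localist models struggle with.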


Attentional control and Memory


Attention and Language

Add text.


Attention and Intelligence

Add text.


Attention and Consciousness

Since the time of William James, it has been known that selection is based on either bottom-up exogenous or top-down endogenous factors. Exogenous cues are image-immanent features that transiently attract attention or eye gaze, independent of a particular task. Thus, if an object attribute (e.g. flicker, motion, color, orientation, depth or texture) differs significantly from a neighboring attribute, the object will be salient. This definition of bottom-up saliency has been implemented into a suite of neuromorphic vision algorithms that have at their core a topographic saliency map that encodes the saliency or conspicuity of locations in the visual field, independent of the task [18]. Such algorithms account for a significant proportion of scanning eye movements [19,20] (Koch & Tsuchiya, 2007, p. 16).
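The saliency-map idea sketched in this quote can be illustrated in a few lines. This is a deliberately minimal, single-channel toy, not the multi-scale, multi-feature neuromorphic algorithm cited as [18]: a location is salient to the extent that its feature value differs from its immediate 3x3 surround, and attention is assumed to be drawn to the map's maximum.

```python
# Toy bottom-up saliency map: one feature channel, 3x3 center-surround
# contrast. A location is salient when its value differs from the local
# mean of its neighbors; the map's maximum is the first attended location.

def saliency_map(feature):
    h, w = len(feature), len(feature[0])
    sal = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            surround = [feature[j][i]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if (j, i) != (y, x)]
            sal[y][x] = abs(feature[y][x] - sum(surround) / len(surround))
    return sal

# A uniform field with one odd element (e.g. a unique orientation):
field = [[0.1] * 5 for _ in range(5)]
field[2][3] = 0.9   # the 'pop-out' item

sal = saliency_map(field)
winner = max(((y, x) for y in range(5) for x in range(5)),
             key=lambda p: sal[p[0]][p[1]])
print("most salient location:", winner)
```

The odd element wins because its center-surround contrast dwarfs everyone else's, which is the task-independent "pop-out" behavior the quote describes.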

However, under many conditions, subjects disregard salient, bottom-up cues when searching for particular objects in a scene, by dint of top-down, task-dependent control of attention. Bringing top-down, sustained attention to bear on an object or event in a scene takes time. Top-down attention selects input defined by a circumscribed region in space (focal attention), by a particular feature (feature-based attention) or by an object (object-based attention). It is on the relationship between these volitionally controlled forms of selective, endogenous attention and consciousness that this article focuses (Koch & Tsuchiya, 2007, p. 16).

Consciousness is surmised to have substantially different functions from attention. These include summarizing all information that pertains to the current state of the organism and its environment and ensuring this compact summary is accessible to the planning areas of the brain, and also detecting anomalies and errors, decision making, language, inferring the internal state of other animals, setting long-term goals, making recursive models and rational thought (Koch & Tsuchiya, 2007, p. 17).

To the extent that one accepts that attention and consciousness have different functions, one must also accept that they cannot be the same process. It follows, then, that any conscious or unconscious percept or behavior can be classified in one of four ways, depending on whether top-down attention is required and whether it necessarily gives rise to consciousness (Koch & Tsuchiya, 2007, p. 17).

Koch, C., & Tsuchiya, N. (2007). Attention and consciousness: two distinct brain processes. Trends in Cognitive Sciences. 11 (1), pp. 16–22.


A previous paper on this topic (Posner, 1994) argued that the mechanisms of attention form the basis for an understanding of consciousness. Since that time the study of attention has greatly advanced (Petersen and Posner, 2012; Posner, 2012). While the intervening years have provided evidence of dissociations between brain networks involved in attention and aspects of consciousness (Koch and Tsuchiya, 2007), I still believe that much can be learned about consciousness from an understanding of attention (Posner, 2012, p. 1).

In this paper I first summarize the relation of attention and consciousness and illustrate how the study of attentional networks might help illuminate dissociations. Because attention involves different brain networks (Posner and Petersen, 1990; Posner and Rothbart, 2007) and because consciousness has a wide variety of definitions, it is necessary to illustrate their constraints and inter-relations rather than provide a single unified account. I try to do this by dealing first with the conscious state, second with consciousness of sensory qualities and finally with volition (Posner, 2012, p. 1).

The study of attention has made great strides in the last several years. It has been possible to combine imaging, genetics, and even cellular studies in humans, monkeys, and rodents to examine aspects of networks involved in the various functions of attention (Posner and Rothbart, 2007; Posner, 2012). One way to proceed involves continuing the development of models of attention. We can then determine the constraints upon various definitions of consciousness they might provide. We need also to keep in mind that in the end these constraints may not be sufficient to entirely answer the many issues related to consciousness. It is important to realize that mapping of attention and consciousness is not one to one, but rather a mapping that involves several attentional functions or networks in addition to the several meanings of consciousness (Posner, 2012, p. 3).

Posner, M.I. (2012). Attentional networks and consciousness. Frontiers in Psychology, 3 (64), pp. 1–4.


Attention and Development

Add text.


Attention and Learning

Add text.


Attention and Meditation


The current study examined whether intensive meditation can affect the distribution of limited attentional resources, as measured by performance in an attentional-blink task and scalp-recorded brain potentials. A major ingredient of meditation is mental training of attention. Such mental training is thought to produce lasting changes in brain and cognitive function, significantly affecting the way stimuli are processed and perceived. In line with this view, recent studies have reported cognitive and neural differences in attentional processing between expert meditators and novices [6,7]. (Slagter et al., 2007, p. 1228).

This study examined whether intensive mental training can affect one of the major capacity limits of information processing in the brain: the brain’s limited ability to process two temporally close meaningful items. Using performance in an attentional-blink task and scalp-recorded brain potentials, we found, as predicted, that 3 mo of intensive mental training resulted in a smaller attentional blink and reduced brain resource allocation to the first target, as reflected by a smaller T1-elicited P3b. Of central importance, those individuals that showed the largest decrease in brain-resource allocation to T1 generally showed the greatest reduction in attentional blink size. These novel observations indicate that the ability to accurately identify T2 depends upon the efficient deployment of resources to T1 and provide direct support for the view that the attentional blink results from suboptimal resource sharing [5,13,15,16]. Importantly, they demonstrate that through mental training, increased control over the distribution of limited brain resources may be possible. (Slagter et al., 2007, p. 1233).

The current findings allow us to speculate on candidate brain structures that intensive Vipassana meditation training may affect. Previous neuroimaging studies have implicated a network of frontal, parietal, and temporal brain areas in the generation of the scalp-recorded P3b [23]. Activation of a similar network of brain areas has been associated with conscious target processing in the attentional-blink task [24]. Three months of intensive mental training may thus have affected the recruitment of this distributed neural network. (Slagter et al., 2007, p. 1234).

In summary, the results presented here are consistent with the idea that the ability to accurately identify T2 depends upon the efficient processing of T1. They furthermore demonstrate that, through mental training, increased control over the allocation of limited processing resources may be possible. Our study corroborates the idea that plasticity in brain and mental function exists throughout life, and illustrates the usefulness of systematic mental training in the study of the human mind. (Slagter et al., 2007, p. 1234).

Slagter, H.A., Lutz, A., Greischar, L.L., Francis, A.D., Nieuwenhuis, S., Davis, J.M., & Davidson, R.J. (2007). Mental training affects distribution of limited brain resources. PLoS Biology, 5 (6), pp. 1228-1235.


In prior work (24), ANT has been used to measure skill in the resolution of mental conflict induced by competing stimuli. It activates a frontal brain network involving the anterior cingulate gyrus and lateral prefrontal cortex (35, 36). Our underlying theory was that IBMT should improve functioning of this executive attention network, which has been linked to better regulation of cognition and emotion (22, 37) (Tang et al., 2007, p. 17153).

In previous work, executive attention has been shown to be an important mechanism for self-regulation of cognition and emotion (22, 37, 39). The current results with the ANT indicate that IBMT improves functioning of this executive attention network. Studies designed to improve executive attention in young children showed more adult-like scalp electrical recordings related to an important node of the executive attention network in the anterior cingulate gyrus (22, 37, 39). We expect that imaging studies with adults would show changes in the activation or connectivity of this network after IBMT (Tang et al., 2007, p. 17155).

In summary, IBMT is an easy, effective way for improvement in self-regulation in cognition, emotion, and social behavior. Our study is consistent with the idea that attention, affective processes, and the quality of moment-to-moment awareness are flexible skills that can be trained (55, 56) (Tang et al., 2007, p. 17155).

IBMT belongs to body–mind science in the ancient Eastern tradition. Chinese tradition and culture is not only a theory of being but also (most importantly) a life experience and practice. The IBMT method comes from traditional Chinese medicine, but also uses the idea of human in harmony with nature in Taoism and Confucianism, etc. The goal of IBMT is to serve as a self-regulation practice for body–mind health and balance and well being and to promote body–mind science research (Tang et al., 2007, p. 17156).

IBMT has three levels of training: (i) body–mind health, (ii) body–mind balance, and (iii) body–mind purification for adults and one level of health and wisdom for children. In each level, IBMT has theories and several core techniques packaged in compact discs or audiotapes that are instructed and guided by a qualified coach. A person who achieves the three levels of full training after theoretical and practical tests can apply for instructor status. (Tang et al., 2007, p. 17156).

Tang, Y.-Y., Ma, Y., Wang, J., Fan, Y., Feng, S., Lu, Q., Yu, Q., Sui, D., Rothbart, M.K., Fan, M., & Posner, M.I. (2007). Short-term meditation training improves attention and self-regulation. Proceedings of the National Academy of Sciences, 104 (43), pp. 17152-17156.





Education

Add a citation.


Cognitive Architecture

Our schools may be wasting precious years by postponing the teaching of many important subjects on the ground that they are too difficult… the foundations of any subject may be taught to anybody at any age in some form (Bruner, 1960, p. 11).

Bruner, J. (1960). The Process of Education. Cambridge: Harvard University Press.


Consciousness

Add text.


Vision

Add text.


Action

Add text.


Learning

Add text.


Decision

Add a citation.


Development

Add text.




Neuroeducation

At the interface between neuroscience, psychology and education, neuro-education is a new inter-disciplinary emerging field that aims at developing new education programs based on results from cognitive neuroscience and psychology (Martínez-Montes, Chobert, & Besson, 2016, p. 342).

Martínez-Montes, E., Chobert, J., & Besson, M. (2016). Neuro-education and neuro-rehabilitation. Lausanne: Frontiers Media.


The aim at the moment, therefore, is to determine what limitations anatomy places to educational process, and thus to obtain a rational basis from which to attack many of the pedagogical problems (Donaldson, 1895, p. 342).

Donaldson, H. H. (1895). The growth of the brain: A study of the nervous system in relation to education. The Contemporary Science Series. Havelock Ellis (Ed.). London: Walter Scott.


This article examines those results, interpretations, and conclusions, a set of claims that I will call the neuroscience and education argument. The negative conclusion is that the argument fails. The argument fails because its advocates are trying to build a bridge too far. Currently, we do not know enough about brain development and neural function to link that understanding directly, in any meaningful, defensible way to instruction and educational practice. We may never know enough to be able to do that. The positive conclusion is that there are two shorter bridges, already in place, that indirectly link brain function with educational practice. There is a well-established bridge, now nearly 50 years old, between education and cognitive psychology. There is a second bridge, only around 10 years old, between cognitive psychology and neuroscience. This newer bridge is allowing us to see how mental functions map onto brain structures. When neuroscience does begin to provide useful insights for educators about instruction and educational practice, those insights will be the result of extensive traffic over this second bridge. Cognitive psychology provides the only firm ground we have to anchor these bridges. It is the only way to go if we eventually want to move between education and the brain (Bruer, 1997, p. 4).

Bruer, J. T. (1997). Education and the brain: A bridge too far. Educational Researcher, 26 (8), pp. 4-16.



NeuroEthics

Add text.


NeuroPsychology

Add text.


NeuroEducation

Add text.


Educational Psychology

Add text.


Specialized Education

Add text.


Reeducation

Add text.


Cognitive Rehabilitation

Add text.


Cognitive Remediation

Add text.


Pedagogical Reeducation

Add text.






Computer Science

If an a-machine prints two kinds of symbols, of which the first kind (called figures) consists entirely of 0 and 1 (the others being called symbols of the second kind), then the machine will be called a computing machine (Turing, 1936, p. 232).

Turing, A.M. (1936). On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42 (1), pp. 230-265.
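Turing's definition above can be made concrete with a small simulator. The sketch below is a modern reconstruction, not Turing's own notation: the `run` function, the sparse-dictionary tape, and the four-row transition table are all assumptions introduced here for illustration. The table encodes Turing's first example machine, which prints the figures 0 and 1 on alternate squares.

```python
# Minimal sketch of an a-machine (an illustrative reconstruction, not
# Turing's notation).  The machine prints symbols of the first kind
# (the figures 0 and 1) on alternate squares, leaving the squares
# between them for symbols of the second kind.

def run(table, steps):
    """Run an a-machine for a fixed number of steps.

    table maps (m-configuration, scanned symbol) to
    (symbol to print, head move 'L'/'R'/'N', next m-configuration).
    Returns the tape contents, in order, as a string.
    """
    tape = {}                     # sparse tape; blank squares read as ' '
    head, state = 0, 'b'
    for _ in range(steps):
        sym, move, state = table[(state, tape.get(head, ' '))]
        tape[head] = sym
        head += {'R': 1, 'L': -1, 'N': 0}[move]
    return ''.join(tape[i] for i in sorted(tape))

# Turing's first example machine (1936, p. 233): it computes the
# sequence 0 1 0 1 ..., printing a figure on every other square.
table = {
    ('b', ' '): ('0', 'R', 'c'),
    ('c', ' '): (' ', 'R', 'e'),
    ('e', ' '): ('1', 'R', 'f'),
    ('f', ' '): (' ', 'R', 'b'),
}

print(run(table, 7))              # -> 0 1 0 1
```

Because the figures this machine prints consist entirely of 0 and 1, it is, in Turing's terms, a computing machine; anything else it printed would be a symbol of the second kind.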


Computing machine

The ideal computing machine must then have all its data inserted at the beginning, and must be as free as possible from human interference to the very end. This means that not only must the numerical data be inserted at the beginning, but also all the rules for combining them, in the form of instructions covering every situation which may arise in the course of the computation. Thus the computing machine must be a logical machine as well as an arithmetic machine and must combine contingencies in accordance with a systematic algorithm (Wiener, 1948, p. 118).

Wiener, N. (1948). Cybernetics: or control and communication in the animal and the machine. Second Edition (1961). Cambridge: The MIT Press.
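Wiener's description, with all data and instructions inserted at the beginning and the machine combining contingencies on its own, is essentially the stored-program model. As a toy illustration under assumptions of my own (the three-opcode instruction set ADD/SUB/JPOS and the example program are invented for this note, not drawn from Wiener):

```python
# Toy stored-program machine (an invented illustration; the opcodes are
# not from Wiener).  Data and instructions are both supplied before the
# run starts, and the conditional jump lets the machine "combine
# contingencies" without human interference during the computation.

def execute(program, data):
    """program: list of (opcode, argument) pairs; data: initial value."""
    acc, pc = data, 0
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == 'ADD':
            acc += arg
        elif op == 'SUB':
            acc -= arg
        elif op == 'JPOS' and acc > 0:    # contingency: branch on the data
            pc = arg
    return acc

# Repeatedly subtract 3 while the accumulator stays positive, then add
# 3 back: leaves a value in {1, 2, 3} congruent to data modulo 3.
program = [('SUB', 3), ('JPOS', 0), ('ADD', 3)]

print(execute(program, 7))                # -> 1
```

The point of the conditional jump is Wiener's "logical machine as well as an arithmetic machine": the rules inserted at the start must cover every situation that can arise, so branching on intermediate results replaces human intervention.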


Consciousness

Add text.


Vision

Add text.


Action

Add text.


Learning

Add text.


Decision

Add a citation.


Development

Add text.




Robotics

Add a citation.


Computing machine


Consciousness

Add text.


Vision

Add text.


Action

Add text.


Learning

Add text.


Decision

Add a citation.


Development

Add text.




Applications

Add text.

ResearchApplications

Add text.


NeuroApplications

Add text.


PsyApplications

Add text.


EducationApplications

Add text.








Art

Add text.

Sport

Add text.