Philosophy of Mind

Three questions have dominated the philosophy of mind in the analytic tradition since Descartes. They are: what are thoughts and thinking? How can the mind represent the world? What is consciousness? Most contemporary analytic philosophers attempt to answer these questions within a broadly materialistic framework since they think that there is overwhelming reason to believe that human beings are biological organisms entirely composed of ordinary matter. Recently the central questions in the philosophy of mind have been given some new twists and partial answers by developments within cognitive science. This article reviews some of the main ideas in cognitive science and its impact on these issues in the philosophy of mind.
Introduction
Jon Leefmann, Elisabeth Hildt, in The Human Sciences after the Decade of the Brain, 2017
While theoretical philosophy, and especially the philosophy of mind, provides widely acknowledged examples of interaction between empirical neuroscience and the a priori reasoning of philosophy, the possible interactions of neuroscience with other disciplines from the humanities are much less investigated. In the last chapter of Part I, Mattia Della Rocca therefore turns to historiography as another field of research that has been affected by neuroscience in the past decade. In his contribution he shows how the field of neurohistory has formed alongside the more established history of neuroscience and how the two approaches differ from one another. The former presents itself as a methodology for doing historiography, based on the findings of current cognitive and brain sciences, and aims to explain how historical and cultural changes developed out of the confrontation of the nervous system with an ever-changing physical and cultural environment. The latter, in contrast, occupies itself with fitting neuroscientific triumphs into the chronological record of history and with the discovery of “precursors” of the discipline to justify and celebrate current research in neuroscience. Both approaches, Della Rocca argues, are one-sided and do not suffice for a fruitful interaction between history and neuroscience. Neurohistory, by explaining historical shifts on the basis of the knowledge dominant in neuroscience at a given time, tends toward presentism, ahistoricism, and the neglect of the sociocultural embeddedness of neuroscientific explanation. The history of neuroscience, by contrast, is unable to account for the influence of the cognitive foundations that guide human behavior in history. Referring to the example of neuroplasticity, Della Rocca shows how a third possibility, a critical interaction of neuroscience and history, could avoid both kinds of restriction.
Philosophical Puzzles Evade Empirical Evidence
I. Sarıhan, in The Human Sciences after the Decade of the Brain, 2017
Abstract
This chapter analyzes the relation between the brain sciences and the philosophy of mind in order to clarify how philosophy can contribute to neuroscience and how neuroscience can contribute to philosophy. Especially since the 1980s and the emergence of “neurophilosophy,” more and more philosophers have been drawing morals from neuroscience to settle philosophical issues. I mention examples from the problem of consciousness and the philosophy of perception, and I argue that such attempts do not succeed in settling questions such as whether psychology can be reduced to neuroscience or whether we perceive the external world directly. The failure results from the ability of the philosophical questions to evade the data. What makes these questions persistent philosophical questions is precisely that there is no way to settle them through empirical evidence: they are conceptual questions, and their solution lies in conceptual analysis.
Consciousness and Mind as Philosophical Problems and Psychological Issues
George Mandler, in Perception and Cognition at Century’s End, 1998
II WHAT IS A PHILOSOPHY OF MIND ABOUT?
There are two major problems confronting any attempt to delimit a philosophy of mind. First, some philosophers are not at all sure that it is possible to arrive at any understanding of mind, whatever it is. Second, there is no agreement on whether “mind” refers to the contents of consciousness or whether something else, or something more, is implied.
Thomas Nagel is an excellent example of a philosopher who, though implicitly claiming otherwise, denies the possibility of understanding the mind, without quite telling us what this “mind” might be. It is described as a “general feature of the world” like matter (1986, p. 19) that cannot be understood by any physical reduction and which also is beyond any evolutionary explanation. Nagel assures us that “something else” must be going on, and he is sure that whatever it may be, it is taking us to a “truer and more detached” understanding of the world (p. 79). Whereas I do not wish to advertise any great advance in contemporary psychology, it is difficult to follow someone who on the one hand refuses to examine current psychological knowledge and on the other hand insists that “the methods needed to understand ourselves do not yet exist” (p. 10). Nagel contends that “the world may be inconceivable to our minds” (p. 10). Humans are by no means omniscient, but one cannot truly claim to know or to prejudge what knowledge is or is not attainable. There surely are aspects of the world that are currently inconceivable, and others that were so centuries ago, but many of the latter are not now and the former may not be in the future.
There seems to be no public agreement as to the referent for the ubiquitous term mind, and dictionaries are not particularly helpful. For example, Webster’s is quite catholic in admitting “the complex of elements in an individual that feels, perceives, thinks, wills, and esp. reasons” AND “the conscious mental events and capabilities in an organism” AND “the organized conscious and unconscious adaptive mental activity of an organism.” Philosophers rarely tell us which of these minds they have in mind. One wonders how obscure these deliberations must appear to a French or German reader who has no exact equivalent for our “mind” and must rely on esprit, Sinn, Seele, Geist, or Psyche. Apart from the public display of disunity, it is likely that most philosophers would agree to a use of “mind” as a quasi-theoretical entity that is causally involved in mental events, including consciousness. I will return to the conflict between seeing “mind” as representing the contents (and sometimes functions) of consciousness compared with using “mind” as a summary term for the various mechanisms that we assign to conscious and unconscious processes. First, some considerations of the questions that puzzle us about consciousness.
Cognitive Science: Overview
G. Strube, in International Encyclopedia of the Social & Behavioral Sciences, 2001
2.2 Philosophical Foundations: Functionalism and the Computational Theory of Mind
Mental states have been analyzed as ‘intentional attitudes’ in the philosophy of mind, consisting of a propositional content (e.g., P = the sun is shining) and an attitude that characterizes one’s own relation to that proposition (e.g., I wish that P would become true). Fodor (1975) developed this approach further, arriving at a ‘language of thought’ that treats the propositional content as data and the intentional relation as an algorithmic one.
If we accept these as the elements of a ‘language of thought,’ then the question arises of how mental states relate to brain states: a well-known problem in philosophy. Following Putnam (1960), Fodor and others conceptualize the relation between brain states and mental states as parallel to the relation between a computer (i.e., the hardware) and a program running on that computer: the mind as the software of the brain. This approach is known as the computational theory of mind. It fits well with the physical symbol systems hypothesis (PSSH), and it soon became the dominant framework in cognitive science. However, it addresses (potentially) conscious thought only, ignoring lower cognitive processes.
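To make the data/algorithm split concrete, here is a minimal sketch in Python; the names, the “belief box” framing, and the world-state representation are illustrative assumptions, not Fodor’s own formalism. It treats a propositional content as inert data and each attitude as a distinct computational relation an agent bears to that data.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# A propositional content as inert data: something true or false of a world.
@dataclass(frozen=True)
class Proposition:
    description: str                      # e.g., "the sun is shining"
    holds_in: Callable[[Dict], bool]      # evaluates the content against a world state

# An intentional attitude as an algorithmic relation the agent bears to that data.
@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    desires: set = field(default_factory=set)

    def believes(self, p: Proposition) -> bool:
        # "believing that P" = P being stored in the belief store
        return p in self.beliefs

    def wishes(self, p: Proposition) -> None:
        # "wishing that P" = a different relation to the very same content
        self.desires.add(p)

# One content, two different attitudes toward it:
sun = Proposition("the sun is shining", lambda w: w.get("weather") == "sunny")
me = Agent()
me.wishes(sun)            # I wish that P
print(me.believes(sun))   # False: wishing that P does not make me believe that P
```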
Wittgenstein, Ludwig (1889–1951)
E. von Savigny, in International Encyclopedia of the Social & Behavioral Sciences, 2001
Ludwig Wittgenstein (1889–1951) was a leading figure in twentieth-century philosophy of language and philosophy of mind. In his Tractatus Logico-Philosophicus, he devised a theory of logical truth based on a naturalistic metaphysics and a referential, speaker-oriented theory of the meaning of declarative sentences. Since its fundamental postulates turned out to be inapplicable to ordinary language, Wittgenstein made a fresh start, in his Philosophical Investigations, by grounding linguistic meaning in a sign’s role in social interaction. Relevant ways of using signs have to be socially established; therefore, meaning is essentially public rather than intentional. Applied to first-person sensation language, this resulted in the public accessibility and social determination of subjective psychological states, all the more so since Wittgenstein treated verbal expressive behavior as a special case of expressive behavior in general. While very influential in philosophy, his ideas have had little substantial effect in other areas. Semantic theories of meaning were influenced by the Tractatus, whereas the later ideas are still waiting to be transformed into research programs.
Thinking with the Body
Ricardo Sanz, … Idoia Alarcón, in Handbook of Cognitive Science, 2008
Hierarchical control
A real plant can be very simple or extremely complex. Room thermostats, a favorite in the philosophy of mind, are bang-bang controllers of extreme simplicity that control a single magnitude in the plant: room temperature. A real temperature control system in an industrial chemical reactor can involve tens of sensors and actuators and heterogeneous nested control loops to achieve the desired performance.
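As a hedged illustration of just how simple such a controller is, here is a minimal bang-bang thermostat in Python; the setpoint, hysteresis band, and toy room dynamics are assumptions for the sketch, not values from the chapter.

```python
def bang_bang(temperature: float, heater_on: bool,
              setpoint: float = 21.0, band: float = 0.5) -> bool:
    """One step of an on/off (bang-bang) room thermostat.

    The controller has exactly two output states; the hysteresis band
    around the setpoint prevents rapid on/off chattering.
    """
    if temperature < setpoint - band:
        return True           # too cold: switch the heater on
    if temperature > setpoint + band:
        return False          # too warm: switch it off
    return heater_on          # inside the band: keep the current state

# Toy closed loop with crude, assumed room dynamics:
temp, heater = 18.0, False
for _ in range(30):
    heater = bang_bang(temp, heater)
    temp += 0.4 if heater else -0.2
print(f"temperature after 30 steps: {temp:.1f}")
```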
A real industrial plant can have thousands of magnitudes under control, and the organization of all these control loops is a major challenge of control system design. This is because not only must the different magnitudes be under control, but they must be controlled in a coordinated way in order to achieve the global objectives of plant operation.
The strategy used to accomplish this is to organize the control loops in a hierarchy in which the low-level references for controllers are computed by upper-layer controllers that pursue more abstract and general setpoints. For example, in the reaction unit of a chemical plant, many low-level controllers regulate individual temperatures, pressures, flows, etc., to fulfill the higher-level control objectives of the unit, such as production and quality levels (Figure 20.11).
Figure 20.11. A hierarchical distributed control system (DCS) of an industrial plant is structured in many different control layers. Control objectives become more abstract at higher levels of control. The lower the level, the greater the importance of temporal criticality and precision.
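A minimal sketch of such nesting, assuming a two-layer cascade with made-up gains and setpoints (none of the numbers come from the chapter): the upper layer translates an abstract quality objective into a temperature reference that the lower layer then tracks.

```python
def quality_controller(quality_error: float,
                       base_temp: float = 350.0, gain: float = 5.0) -> float:
    """Upper layer: turns an abstract objective (product quality) into a
    concrete reference for the layer below (a reactor temperature setpoint)."""
    return base_temp + gain * quality_error

def temperature_controller(temp_setpoint: float, temp_measured: float,
                           gain: float = 0.8) -> float:
    """Lower layer: proportional controller that tracks the temperature
    setpoint by adjusting heater power."""
    return gain * (temp_setpoint - temp_measured)

# One pass through the cascade: abstract objective -> setpoint -> actuation.
t_ref = quality_controller(quality_error=0.3)               # slow, abstract loop
power = temperature_controller(t_ref, temp_measured=348.0)  # fast, concrete loop
print(f"setpoint: {t_ref}, heater power: {power:.2f}")
```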
What is most interesting in studying the phenomenon of control in biological systems and in large-scale process plant controllers is the striking similarity between the two. While robot control systems in many cases try to mimic biosystems by exploiting what is known (or hypothesized) about their control systems, in the case of process plants the bioinspired movement is yet to arrive (except possibly at the level of expert process control; Åström et al., 1986).
Industrial control systems technology has developed along its own evolutionary path, from the early analog controllers of the mid-20th century to the fully computerized, zillion-lines-of-code whole-plant controllers of today. Different types of organization have appeared in the structuring of the core processes, in the structuring of control architectures, and, quite recently, in the co-structuring of process and control.
From the perspective of the control system we can observe an evolution that somewhat parallels the development of mental capabilities in biosystems:
1. The simplest control mechanism is a purely reactive one that triggers some activity when certain conditions are met. Examples include a large part of all protection and safety mechanisms in industrial systems. The overall behavior is similar to the multitude of safety reflexes in biosystems.
2. An additional level of complexity is achieved when the raw sensory information is minimally processed to extract information that is meaningful for behavior triggering. This is done in elementary control and protection systems. In the case of biological systems, a well-known study in this field is the work of Lettvin et al. (1959) on retinal processing in the frog’s eye.
3. The next layer appears when it is possible to conceptualize the operation of the controller and feed it specific parametric values (e.g., setpoints or controller parameters). This layer can hence be integrated with upper-level controls, opening the possibility of a control hierarchy. It is also well known in biosystems that some motor actions commanded from upstream in the CNS are executed by low-level controllers (core examples are the homeostatic control systems of the body; Cannon, 1932).
4. Using the conceptual openness of the control loop, it is possible to layer control loop over control loop (this is called control loop nesting) so that upper-level behavior relies on the robust performance of lower-level behavior provided by the integrated controllers. In this way it is possible to run production quality control in a chemical reactor with a plethora of lower-level controllers underneath keeping flows, pressures, and temperatures at suitable levels. Following the homeostatic example of the previous case, large systemic processes, for example digestion, rely on lower-level processes to keep bodily magnitudes stable. Another interesting example is how gait control relies on lower-level muscular control (Grillner, 1985).
5. An interesting step forward occurs when engineers realize that it is in general possible to separate a controller into two parts: a universal engine and data that specify the particular control strategy to follow. This opens new possibilities for the reuse of engines (see the sketch after this list). Clear examples of this are the MPC controllers mentioned in section “Model-Predictive Control” and the controllers based on expert systems technology (Sanz et al., 1991).
6. The next and most interesting step in the development of complex control systems is the realization that a conceptualization of this separability (engine + knowledge) renders a new level of controller openness to metacognitive processes (Meystel & Sanz, 2002). In the case of human control systems this gives rise to introspective capabilities and the well-known phenomena of memetics and culture (Blackmore, 1999).
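The engine-plus-knowledge separation of step 5 can be sketched as follows. The rule format and the example strategies are illustrative assumptions and do not reflect the MPC or expert-system interfaces cited above; the point is only that one generic engine executes whichever strategy is loaded into it as data.

```python
# The "universal engine": all it knows is how to evaluate condition/action
# rules. The control strategy itself is pure data and can be swapped out.
def control_engine(rules, measurements):
    for condition, action in rules:
        if condition(measurements):
            return action
    return "hold"  # default action when no rule fires

# Two interchangeable bodies of knowledge, expressed as data:
safety_rules = [
    (lambda m: m["pressure"] > 9.0, "open_relief_valve"),
    (lambda m: m["temperature"] > 400.0, "emergency_shutdown"),
]
quality_rules = [
    (lambda m: m["quality"] < 0.95, "raise_temperature_setpoint"),
]

# The same engine, reused with different knowledge:
print(control_engine(safety_rules, {"pressure": 9.5, "temperature": 390.0}))
print(control_engine(quality_rules, {"quality": 0.90}))
```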
What is most interesting about the parallelism between technical industrial control systems and biological controllers is that they have come about in almost complete isolation from each other. Certainly, the evolution of technical controllers has not substantially affected the evolution of control mechanisms in biosystems; the converse is also largely true, with the possible exception of knowledge-based control, where human expertise does figure in the technical system.
This could be interpreted as meaning that evolutionary pressure on control/cognition points in the direction of layered metacognitive controllers; that is, in the direction of consciousness (Sanz et al., 2002). In order to fully grasp this phenomenon, a deeper analysis of the model-based nature of the control capability is needed.
Comparative Reasoning
Daniel Krawczyk, Aaron Blaisdell, in Reasoning, 2018
Testing for a Concept of Self in Animals
Theory of mind is a term describing the capacity to attribute states of mind, or intentions, to oneself and to others. It originates in the philosophy of mind, a field of inquiry concerned with, among other things, the ability to read the intentions of other beings. Theory of mind has been widely adopted by developmental psychology to describe the capacity of young children to show empathy or understand others. It has also been used to characterize the differences in the perception of others observed in individuals with autism or schizophrenia. Such individuals often have difficulty comprehending the likely views, intentions, or thoughts of other people, which makes social interactions more difficult.
For an individual to have a theory of mind, he or she must appreciate that there is a difference between oneself and others. Such an appreciation can then lead to the ability to simulate or imagine that the way one sees the world is similar to the way another individual views it. Alternatively, one may use evidence gained from social interactions to conclude that another individual is perceiving things differently.

A concept of self has been tested in numerous species using a technique that has become known as the “mirror test,” originally carried out by Gordon Gallup (1970) to determine whether chimpanzees could differentiate themselves from other individuals. Gallup presented the mirror to two chimpanzees, and a variety of behaviors were noted, such as making threat gestures and faces. The key test occurred when Gallup placed a mark on the brow ridge of the chimps. When provided with a mirror, they scratched at the marks on their own bodies rather than attempting to investigate the mark on the strange chimpanzee in the mirror (Fig. 4.15). Since that original test, several other species, including elephants and magpies (a member of the crow family), have passed the mirror test. In one of the more interesting variations, captive dolphins were tested after a trainer drew tattoo-like patterns of lines and shapes on their backs. After a mirror was placed outside the glass of their tank, the dolphins passed the mirror test by swimming over to the mirror and twisting and turning at angles that would have allowed them to view their newly decorated bodies (Tschudin, Call, Dunbar, Harris, & van der Elst, 2001). This rigorous test included “sham” markings made with a non-marking pen; the sham markings ensured that the dolphins were not responding to mere touch sensations but rather appeared interested in viewing the new visual markings on their bodies.

Having an appreciation of oneself is an important stepping-stone toward being able to understand others and reason about their intentions. While the mirror test offers an interesting clue about an organism’s abilities, it may not be sufficient to determine whether an animal has a self-concept: we cannot fully appreciate how the animal is responding in this particular case, or what cues may influence performance on the task.
Figure 4.15. Passing the mirror test is considered to be an indication that an individual has a sense of self. This is a gateway skill leading to elaborated social reasoning abilities.
From Shutterstock.com
Phenomenology: Philosophical Aspects
K. Mulligan, in International Encyclopedia of the Social & Behavioral Sciences, 2001
5 Types of Coexistence
Scheler’s taxonomy of forms of social coexistence draws on distinctions found in the classical writings of Tönnies, Weber, and Simmel and on his own philosophy of mind and ethical psychology. In particular, four types of coexistence are distinguished with reference to different types of collective intentionality: masses, communities, societies, and superordinate entities such as nation-states, state-nations, empires such as the Belgian Empire, and the Catholic Church. A mass is characterized by affective contagion.