
Theory of consciousness

  • 06-08-2014 1:41pm
    #1
    Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭


    Howzers.

    The subject title is a slight misnomer- I was going to call it 'Academia and non-academics'.
    The problem lies in getting a non-academic's philosophical work (such as mine) read by an academic in a particular field- in this case philosophy of mind. Although not trained in philosophy, over years of studying philosophy of mind and philosophy of science I have developed a theory of consciousness from an evolutionary point of view. Essentially it posits a cognitive architecture, such that the mechanisms and systems of cognition are the framework upon which consciousness 'appeared'. As such it is an abstract hypothesis of the neurobiology of cognition, using Eliminativism and Folk Psychology as reference points.

    I have a very rough draft of a book finished, and subsequent to this have an academic paper (26 pages) written. I would like to get the paper published first, but before that I'd like it read by a philosopher with an interest in philosophy of mind. However, the professors of philosophy I have asked (one each in Galway, Dublin and England) to read the paper say they are far too pressed for time (especially, I'd assume, for a non-expert; and who could blame them, I suppose). Having said that, I did get a favourable response to the synopsis of the book from one of them.

    Does anybody know of someone who would be willing to take the time to have a look at the paper and give some pointers? Here's the abstract from it:

    "This paper hypothesizes about the evolution of the neurobiology of cognition, and its relationship to consciousness. The cognitive mechanisms and systems described herein are theorized as having evolved as the basis of learning in all animals. The relationship between the development of cognition and learning on the one hand, and consciousness on the other, is laid out, such that consciousness is seen not solely as a gradual linear progression of this evolutionary development, but as the sudden acquisition of a distinct cognitive capacity. The hypothesis initially takes what appears to be a saltationist stance with regard to the evolution of consciousness, but in the end rejects it, while singling out a sudden cognitive capacity- and it is this unique capacity we call consciousness.
    Neural heuristics is postulated as the common basis of learning in all animals, including humans. A genetic anomaly within this specific cognitive capacity is hypothesized as bringing about what we call consciousness (or the capacity thereof). This then allowed the cognitive move from heuristics- which all animals, including us, use- into semantics, which we alone use, as the basis of language and human culture. The anomalous source of this new cognitive capacity is further hypothesized."

    Someone might know someone who knows someone, as they say.
    Thanks.

    Carl.

    p.s. in case the question pops up, let me cut it off in advance- I have never wanted to do a philosophy degree. Besides, I've been far too busy trying to earn a crust.


Comments

  • Closed Accounts Posts: 1,014 ✭✭✭Baked.noodle


    I think one big hurdle you face is academic ignorance. Unfortunately, most academic philosophers in universities are indeed pressed for time, have a very general understanding of a few fields, and only specialise in a few specific interests. Understandably, they are often unwilling to supervise and review papers that include significant unfamiliar material. This problem may be further compounded by the scientific aspects of neurobiology. If you can, contact PhD students who are producing work in the field of epistemology and philosophy of mind. They should have more time on their hands, a hunger for ideas, and a good foundation in academic philosophy. In the end perhaps you could submit your work to the journals of philosophy interested in this field. You may have to reduce your work a great deal and be persistent, but if you have a novel idea it's worth a shot. Good luck :)


  • Registered Users, Registered Users 2 Posts: 3,831 ✭✭✭Torakx


    When can I read it :D
    Or I would like to hear more about your theory. Unless you are protecting it right now, then.. ah well..
    Either way, I wish you luck with it.
    I think, though, a self-taught person has to die before they are recognized as a true genius :D
    Otherwise the currently living would be shamed lol


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    If you can, contact PhD students who are producing work in the field of epistemology and philosophy of mind. They should have more time on their hands, a hunger for ideas, and a good foundation in academic philosophy. In the end perhaps you could submit your work to the journals of philosophy interested in this field.

    Thanks b.n.

    That's certainly helpful. I could contact some university philosophy departments to see if there are such rare breeds around who might be interested.

    I was thinking of contacting the philosophical societies as well, or is that ridiculous?


  • Closed Accounts Posts: 1,014 ✭✭✭Baked.noodle


    You're welcome. I don't know about the societies, but I guess it can't do any harm.


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    I contacted NUIG for any PhD students who may be interested in the subject matter, but to be honest I don't hold out much hope. Maybe. However, here's an additional note:

    I’ve changed the opening line of the abstract. Where it read: “This paper hypothesizes about the evolution of the neurobiology of cognition, and its relationship to consciousness”, I’ve replaced it with: “This paper hypothesizes about the evolutionary source of consciousness in terms of the possible cognitive architecture which underlies it”.
    Although the hypothesis concerns the neurobiology of cognition, I wouldn’t in any way be an expert on the matter- more informed perhaps than your average punter, but certainly not, as I said, an expert. For that reason, it may be both misleading and off-putting for someone to see a paper with ‘neurobiology’ as its thrust. The nature of the paper is philosophy of mind; a speculative hypothesis on the cognitive architecture of animals, which talks in broad terms about how cognition might work, rather than any technical aspect of neurobiology.
    It essentially seeks to bridge what is called the explanatory gap- more commonly the mind/body problem i.e. thoughts cannot be physically located (whereas neurons can), so have the appearance of non-physicality. The question then is: Can there be a mechanics of mind, based on the mechanics of the brain? This is what I hope to show: that the mechanics of cognition leads to the experiences we call a mental life.
    After the ‘fall’ of Behaviourism in the fifties, and the view that the ‘black box’ of the mind was off limits (the coup de grace dealt by Noam Chomsky’s Generative Grammar theory of linguistics), the Cognitive Revolution of mind mechanics was ushered in. Unfortunately, it was dominated by computationalism- mind as machine. There have been more recent attempts in the past couple of decades to counter this, such as embodied cognition, but the actual mechanics of mind is still neglected (except maybe connectionism, but this is still an offshoot of computationalism).
    The paper, then, is an attempt to proffer a physicalist account of the mechanics of mind.

    p.s. Mr Reville had a piece in his science column in the IT on Thursday (7/8/14) about consciousness that relates to this post (I tried to add the url but I'm a newbie so it wouldn't let me).

    I sent a response, which wasn't printed, but if anyone is interested I could post it here.


  • Registered Users, Registered Users 2 Posts: 3,831 ✭✭✭Torakx


    http://www.youtube.com/watch?v=6nTQnvGxEXw
    I found this today. It seems to cover this topic fairly well, as an overview at least.

    I'm not convinced by the idea that behaviour cannot explain consciousness. Considering the connectivity between different functions in the brain, and the ability to funnel all these senses into a unified field: that, to me, is consciousness.
    Well, maybe behaviour is the result of the process of creating consciousness, but I still think that, when combined with a physical measuring of these processes, it could be possible to follow some path to the cause.
    I did forget his points on behaviour and why it can't be that, so I admit I probably am misguided here in my thoughts.


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Good find Torakx. Very apropos. It is basic, but covers good ground- great for beginners. I listened to it just now- Searle has a very relaxed and informal style; I read a book of his years back (Mind), and have listened to some of his podcast lectures. I wouldn't agree with some of his stuff, but that's kinda irrelevant; he's a great example of a great teacher.

    Torakx wrote: »
    I'm not convinced of the idea that behaviour cannot explain consciousness.

    One crucial thing he only briefly touched on was qualitativeness, or in the jargon 'qualia'. This is the main problem with the so-called explanatory gap: science not being able to handle the 'feel' of subjective experience. Qualia, in this regard, cannot be explained by behaviour; for some, it is the core aspect of consciousness, and is the hardest nut to crack (or gap to be bridged). David Chalmers refers to science being able to deal with aspects of consciousness, but explaining qualia is "the hard problem". He feels it is impossible to overcome the hard problem because consciousness is non-material. The video does a good job of highlighting this. The column piece by Reville that I referred to in the earlier post is very relevant here:

    Mind and matter: tensions between the body and the soul
    Materialists claim that the self is illusory and free will doesn’t exist, but many scientists disagree
    Take a deep breath and ponder the following: scientific orthodoxy claims that, in one very important sense, you do not exist because your inner self is an illusion. This is a shocking idea, because most people believe that our capacity to think our own independent thoughts is the last human freedom. The idea that we are merely complex meat-robots is repellent. Well, take heart. Many scientists, myself included, do not accept this claim, and neither does philosopher Mary Midgley, whose recent book Are You an Illusion? I have just read.
    Science is materialistic in its method: it discovers material reasons to explain the material world of matter and energy. But science is not materialistic in its philosophy: it neither confirms nor denies the existence of the non-material or supernatural. Many scientists are materialists, believing that the universe is composed exclusively of matter and energy, and therefore that all of reality must be explicable in terms of the properties of matter and energy.
    To the strict materialist, it is inconceivable that non-material thoughts could cause material consequences, and therefore they hold that our experiences of the mind and free will must be illusions. This interpretation contradicts common sense. We all experience our thoughts as real and believe we can freely decide to take actions that have material consequences. We all behave on this basis, including materialists, who claim that the self is illusory and free will doesn’t exist.
    Matter and mind (spirit) were traditionally believed to have different natures, matter being inert and spirit causing human animation. All the heavy lifting necessary to explain human characteristics was done by the spirit. This interpretation changed with the advent of the theory of evolution, and modern science requires that all heavy lifting be attributed to natural selection acting on inert matter. Midgley doesn’t fully accept this, arguing that, since our concept of matter remains unchanged, modern scientific interpretation asks too much of such inert matter.

    Life options
    Science now believes that inert matter spontaneously sparked into life 3.8 billion years ago and evolved to produce mind. But matter cannot therefore be that “inert”, since it must intrinsically bear the potential for life. Also, Midgley holds that natural selection alone cannot fully explain the prodigious fruitfulness of life. Although it filters the life options that confront it, it cannot account for the intrinsic nature of these options. Therefore, life itself, and its most complex manifestation, the human mind, is far from being fully explained by science.
    Tension between science, broadly representing the body, and religion, representing the soul, has been ongoing since the Enlightenment began in the late 17th century. God has retreated from the public stage in the developed world to be replaced by secularism. Midgley sees attempts to banish the idea of the self as a continuation of this conflict. Noting that many people can live adequate lives while denying God, she wonders how they will fare while also denying themselves.
    How much of reality can be explained by the physical sciences? Materialists believe that reality is ultimately only what can be revealed by physics at the fundamental level of atoms. But this is to deny that there are different levels of explanation, which we study with different tools and in different contexts. A carpenter studies a table as an item of furniture. A physicist studies it as a collection of atoms, finding that the atoms are mostly composed of empty space. Both approaches are correct and one approach is not more “real” than the other. Similarly, because we can now study the neurons that make up our brains doesn’t mean our thoughts, memories and sense of self are not real. Thoughts and memories are a crucial part of reality for humans. I recently solved a riddle after several attempts. At each attempt I had to mentally rehearse my previous efforts and learn from my mistakes. I would never solve the riddle by studying neuronal activity in my brain. I predict, as a scientist, that the mind is an emergent property of the vastly complicated architecture of nerve cells in the brain. I doubt we will ever succeed in explaining mind in terms of atomic events.
    Midgley believes scientists get questions like the self so wrong because they are no longer trained in philosophy. It is scandalous that the standard eight-year university training in science does not include a single mandatory lecture in the history or philosophy of science.

    Here was my response:
    In his Science column (07/08/14) Mr Reville rightly insists that the history and philosophy of science should be an inherent feature of scientific approaches to any interpretation of reality. His idea is that this would be an antidote to the misguided thinking of the scientific orthodoxy that “your inner self is an illusion”. It is not, however, scientific orthodoxy, nor is it orthodoxy among philosophers of mind.
    His dismissal of the denial of the ‘reality’ of the self is based on opposition to the stated materialist position: “Materialists believe that reality is ultimately only what can be revealed by physics at the fundamental level of atoms”, which is mostly a true position. However, this is followed up by “But this is to deny that there are different levels of explanations, which we study using different tools in different contexts“. Materialism does not entail this denial. Contrariwise, many materialist Cognitive Scientists expressly search for the appropriate levels (and modes) of explanation, in an attempt to bridge the so-called ‘explanatory gap’ of the mind/body problem, while still holding that mental life is fundamentally physical/material, or plain made of atoms.
    Not only that, but a scientist (who might believe in a God) can moreover believe that a bunch of neurons (made of atoms) in the form of a grey gelatinous mass encased in a skull is capable of producing seemingly non-physical mental life, without additionally believing in a non-material basis for it (applying the principle known as Occam’s Razor). I don’t see how this could be a matter of further secular encroachment- merely the rigours of inquiry.
    Regarding the common sense approach to mental life, “(s)trictly speaking”, a common sense interpretation of the sun appearing to traverse the sky would be that the sun revolves around the earth. Common sense is a great rule-of-thumb to live by, but a more fine-grained analysis of life is sometimes impeded by the inertia of common sense in our attempts to understand ‘reality’.


  • Registered Users, Registered Users 2 Posts: 3,831 ✭✭✭Torakx


    For now I'm going with the meat robot theory; it makes the most sense to me. I've always considered it that way, the same for animals. I think we just have a more complex built-up set of functions. Like the prefrontal cortex, IIRC, where that bunch of nerves is apparently responsible for "the self".


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Torakx wrote: »
    For now I'm going with the meat robot theory; it makes the most sense to me. I've always considered it that way, the same for animals. I think we just have a more complex built-up set of functions. Like the prefrontal cortex, IIRC, where that bunch of nerves is apparently responsible for "the self".

    I must say, I’m very sympathetic to the postmodern take on notions of the self, so have no difficulty in giving serious weight to ideas of ‘the illusion of the self’, and other such concepts that have a hefty slice of subjectivity inherent in them.
    The issue of the robot metaphor, however, suffers from the explanatory gap problem, and simply will not go away. If you’ll forgive the pun, this is the meat of the issue.
    Close your eyes. Picture the blue of the sky. Now, while your eyes are closed (so that you have no access to external stimuli, such as the colour blue in this instance), tell me this: Where is the colour ‘blue’ located in your brain? That chunk of grey gelatinous mass encased in your skull stores information, but nowhere in the brain is anything stored as the experiences that give us what Ned Block calls ‘Mental Paint’. So there is something going on in our brains that is more than mere behaviour.
    Just as we might not understand something and ascribe a god to its occurrence, some people believe that this factor cannot be explained from a materialist point of view, and invoke a non-material basis for it- as in Reville’s piece, where he takes the easy way out in terms of explanation. Explaining qualia is, as Chalmers said, the hard problem.

    The robot metaphor itself is interesting in terms of AI- in relation to qualia. Ned Block again has interesting terminology (which I like, and use), when he talks of access vs phenomenal consciousness. wiki-
    "In his more recent work on consciousness, he has made a distinction between phenomenal consciousness and access consciousness, where phenomenal consciousness consists of subjective experience and feelings and access consciousness consists of that information globally available in the cognitive system for the purposes of reasoning, speech and high-level action control. He has argued that access consciousness and phenomenal consciousness might not always coincide in human beings."
    Now, I don't agree with his definitions within the split, but I like the notion of access. It leads to the interesting question of access by AI entities i.e. would they be conscious if they had the capacity for such 'access'? In relation to the meat robot metaphor it reminds me of the Wittgenstein quote: "If a lion could talk, we could not understand him."


  • Registered Users, Registered Users 2 Posts: 3,831 ✭✭✭Torakx


    Most likely I am just too far behind to fully get these concepts.
    Imagining a blue sky for example.
    It would be my view that the blue sky in my mind is a representation, through memory, of the colours I see when my brain processes light.
    I am guessing that my brain remembers the pathways that build up this mental image, and I just need to run my neurons, or whatever, through that same learned pattern in order to remember that thing and bring up my interpretation of it as experienced.
    The same applies for music and all other senses.
    This to me explains why I have issues remembering things that I did a long time ago: because I haven't been exercising those pathways or patterns, and they fade as the brain changes. I must maintain any patterns in order to remember them.
    Being a visual thinker, I can use visual symbols, like creating a pointer in memory when programming. A name like Kat could be seen as a cat and better remembered in my case.

    With all that said, my view is that the self is a product of millions or more patterns laced through the brain. It is an illusion to me. A product of imitation from my senses.
    "In his more recent work on consciousness, he has made a distinction between phenomenal consciousness and access consciousness, where phenomenal consciousness consists of subjective experience and feelings and access consciousness consists of that information globally available in the cognitive system for the purposes of reasoning, speech and high-level action control."

    Considering this from the perspective of AI:
    I might say the access consciousness is the foundational engine, responsible for all the basic functions and for linking those functions.
    Like freeing up memory, assigning functions.

    The phenomenal consciousness in that scenario might be seen as the results of these functions brought together.
    I'm a game developer in the making, by the way, so it's the easiest example to use right now.

    In a game engine I will create my own functions for jumping, pooling objects into memory for use later, etc.
    The result when it's exported together and run is a phenomenon, right?
    A result of all those processes creating a temporary state, combining visuals, audio and an interaction/experience.
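    The engine analogy above can be put in code as a toy sketch (purely illustrative; every function name and value here is invented, not from any real engine): the "access" layer is the tick loop that runs the subsystems and makes their outputs globally available, while the "phenomenal" result is the single combined frame they produce.

```python
# Toy sketch of the game-engine analogy (all names and values invented).
# "Access": the engine loop that runs subsystems and shares their outputs.
# "Phenomenal": the single unified frame those outputs combine into.

def visual_system(world):
    return f"image of {world['scene']}"

def audio_system(world):
    return f"sound of {world['ambient']}"

def motion_system(world):
    return f"jump height {world['jump_velocity'] * 0.5}"

def engine_tick(world):
    # The "access" layer: each subsystem's output is made globally
    # available, so any other function could reason over it.
    outputs = {
        "vision": visual_system(world),
        "hearing": audio_system(world),
        "action": motion_system(world),
    }
    # The "phenomenal" result: everything funnelled into one unified frame.
    return " | ".join(outputs.values())

world = {"scene": "beach", "ambient": "waves", "jump_velocity": 4.0}
print(engine_tick(world))  # image of beach | sound of waves | jump height 2.0
```

    Nothing about this settles the philosophical question, of course; it only shows what "globally available information combined into one result" might look like as a program.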

    For me, I'm having trouble finding this gap at all... I can't fully understand it.
    Because to me, there must be a part or many parts in the brain that act as the "build" of the engines and game created.
    This is what I referred to when talking about that bunch of nerves in the prefrontal cortex.
    I suppose these functions "build" the experience on the fly, and we change the game code on the fly also. The engine then, I suppose, would be the common things all brains share to function.
    A broken or different engine may bring different results/experiences.

    But... I am guessing I am still missing a glaringly obvious thing, which is a tiny bit frustrating as I am trying to understand lol

    Not sure I get that lion quote either... maybe for the same reason?
    Lions already speak and some people can hear them.
    I talk to the dogs in my house using their language. We get along really well for that reason. Of course I am not as fluent as them! But I understand a lot. So yeah, that one is lost on me. Maybe he means we can't have the same experiences? Which is true.


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Torakx wrote: »
    It would be my view that the blue sky in my mind is a representation, through memory, of the colours I see when my brain processes light.
    I am guessing that my brain remembers the pathways that build up this mental image, and I just need to run my neurons, or whatever, through that same learned pattern in order to remember that thing and bring up my interpretation of it as experienced.
    The same applies for music and all other senses.

    I think you missed the point, Torakx. If you close your eyes and remember a sound, say a crashing wave on the beach, how do neurons store that sound such that it 'sounds' the way it does? Remember, if there are only neurons inside a skull, the brain can't retain a sound as a representation of it, just as it can't retain colours or a sense of touch. It can only recognise a stimulus once it has encountered it already. In that way notions of higher-order representations fall down, because they cannot account for the 'magical' appearance of such a representation.
    In terms of the robot/behaviouristic metaphor, a representation-forming system runs into what is called the frame problem. Does everything that your visual system perceives need to be represented? Everything? Everything?! Down to the finest detail? What kind of computing power would that take? OK, let's say that it is technically feasible. That's only if every object that you perceive on a day-to-day basis remains static. But take the example of a single solitary table: we all know that its perspective changes when you move relative to it- which means the representation needs constant updating, and that's only in terms of perspective; what about a changing object, like, say, clouds crossing the sky? It would need the computer system in The Matrix to keep up. This frame problem is a major stumbling block for AI. Jerry Fodor, one of the most notable philosophers connected to AI, reckons it can't be solved.
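    The updating worry can be made concrete with a toy count (all figures invented for illustration, not taken from any AI system): if a naive scheme recomputes every object's representation after every change of perspective, the work grows multiplicatively with scene size and viewpoint changes.

```python
# Toy illustration of the frame-problem worry discussed above:
# in a naive scheme, every represented object is re-derived whenever
# the viewpoint moves, so cost grows as objects x viewpoint changes.
# All numbers are invented for illustration.

def updates_needed(num_objects, viewpoint_changes):
    # Naive scheme: every object's representation is recomputed
    # after every change of perspective.
    return num_objects * viewpoint_changes

# A modest visual scene, with the viewpoint shifting 60 times a second
# for one minute:
objects_in_view = 10_000
changes = 60 * 60
print(updates_needed(objects_in_view, changes))  # 36000000 recomputations
```

    Real vision systems obviously do not work this naively (they cache, approximate, and attend selectively); the toy arithmetic only shows why "represent everything, always" is held to be intractable.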

    A question posed by Take Everything (The mind and the limits of understanding itself thread) relates to AI minds, and the capacity to know themselves. Even apart from the frame problem, I would argue that a computational program, no matter how advanced, could never gain consciousness and have subjectivity. The reason, I would argue, relates to Ned Block's notion of access (and phenomenal) consciousness. You see, I think the Embodied Cognition group have got a large part of consciousness right; it holds that consciousness is not the product of a representation-using system. The vast part of philosophy of mind and cognitive science is Representational- always has been, since Aristotle (or thereabouts). The question then is: OK, can we build an AI system that is not representational? The point is, though, that any AI system we build would have to be non-representational, because representationalism doesn't work!
    The body does what it does, and arrived at consciousness, so why not a binary-based system? The binary system has no way of prioritizing, through learning, things that are important or not, because it is rule-based (similar to the frame problem). Biological systems, however, are not rule-based- they are 'instinct' based, and learn a zillion times better than computers in real-life situations. What consciousness hinges on, then (and this is another argument), to me is access to the 'machinery' of our animal ancestors, which we alone have. (Neanderthals showed access capacity as well- they were also conscious- now we're like ET; all alone.)


  • Registered Users, Registered Users 2 Posts: 3,831 ✭✭✭Torakx


    Do we know for sure that the brain can't hold a sound as a representation?
    I did a quick bit of research on hearing to get more up to date on this.

    http://www.medicalnewstoday.com/releases/95546.php
    "Your entire sensory apparatus is there to make successful representations of the outside world," said Dr. Zador, who is director of the CSHL Swartz Center for Computational Neuroscience. "Sparse representations may make sensory stimuli easier to recognize and remember." Recognizing the brain's ability to distinguish "optimal stimuli" could help scientists find ways to improve how sounds are learned. Prior research has already yielded similar results when measuring sight, movement, and smell. This is the first evidence of a correlation between sparse representations and hearing.
    In the article above, they seem to be saying that different neurons react to different levels of energy from sounds and that's how the brain can distinguish the different sounds. But they also acknowledge that the brain uses representations of sounds through different neurons.
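    The "sparse representation" idea the article mentions can be illustrated with a toy encoding (a sketch only, not the study's actual model; the parameters are invented): out of a large population of units, only the few whose preferred level best matches a stimulus become active, so each stimulus is coded by a handful of units rather than the whole population.

```python
# Toy sketch of a sparse representation (not the study's model).
# Each "neuron" prefers one stimulus level; only the few neurons whose
# preference is closest to the actual stimulus respond, so the code
# for any given sound is a small set of active units.

def sparse_code(stimulus_level, num_neurons=100, active=3):
    # Each neuron prefers one level in 0..num_neurons-1.
    preferences = range(num_neurons)
    # The few best-matching neurons form the sparse code.
    ranked = sorted(preferences, key=lambda p: abs(p - stimulus_level))
    return sorted(ranked[:active])

# A stimulus at level 42 activates only the neurons tuned near 42:
print(sparse_code(42))  # [41, 42, 43]
```

    The point of the sketch is just the economy: 3 active units out of 100 still pick out the stimulus, which is roughly what "sparse representations may make sensory stimuli easier to recognize and remember" is getting at.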

    My view (which could well be way off), with all senses, is that these patterns are recorded for later use, using neuron pathways.
    Our whole perception of reality is a representation of our senses.
    We don't actually experience the real world. We, as our selves, are a collection of behaviours in a shell template.
    Using the old Occam's Razor analogy, I don't see why this can't be the case just because we have not confirmed it scientifically. It seems the simplest explanation for consciousness.

    When I imagine the sea I hear the crashing sounds of the waves fairly clearly. I presume that over my life, as I listened to different versions of this sound, I built up a representation of it, as a pattern of neuron pathways. My thinking of it and bringing back the sound could just be my brain routing through those pathways with the correct neurons responsible.
    I believe a child can learn language so much faster because their brain is still building fresh pathways. To me that is building the foundations of the brain. It's advised that children learn an instrument before the age of 12 or so, in order to develop important parts of the brain, and even then, if they choose guitar over piano, there is a different effect on that person's brain function in some areas.

    I just can't see a gap. If my brain can bring back a visual image I had "recorded" in my head, to me it is like a projector on the wall of my mind's eye. The projection is not real, but a representation of my memories of the experience I recorded through those neuron pathways.
    Like recording a home movie and then playing it on your projector at home: the image on the wall is not real. It is a representation of the recording of the real event. It may be low quality (say VHS) and not give as clear an image as we experienced ourselves. It's a phenomenon? Only that we know it is the product of light through lenses, etc.

    It would indeed take a lot of computing power to simulate these senses and I think the brain is an amazing thing for that.
    It would be very impressive if we humans managed to mimic millions of years of evolution over a couple of hundred years of technology.
    As the article shows, there is not just one type of neuron, and there are possibly other things going on between them, or other types of neuron yet to be found.
    We are still in the stone ages really with understanding everything.
    Was it that video I posted where he said we are able to account for 4% of the universe? That's nothing ^^
    I agree with you that this professor is being a bit lax with his interpretation of what's happening with all this. Anyone jumping to a metaphysical stance seems to be excluding all I just said, and all I just said has not been disproven, I think.
    More so proven than the metaphysical stance? I think it is.

    I hope I have not once again missed your point... I don't intend to frustrate!
    I just "see" a clear way these processes could work together, and so I don't have a mysterious gap available to me.
    The sounds and pictures are not literally in my head. They are stimulated to appear that way.
    Sound for example I would imagine to the brain, is not actually sound at all.
    It could be a series of electrical pulses. The same for vision.
    This would be a representation or symbol, allowing the brain to save memory really. It could have a language of electrical pulses that it uses to symbolize different senses. And/Or different neurons using the same paths to bring the sound of the wave and the picture of the beach together.
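Purely as an illustration of that "shorthand" idea, here is a toy sketch (in Python, entirely my own invention- not a claim about how neurons actually encode anything): raw sensory patterns get stored as compact symbols, and two encoded streams can later be replayed together, like the wave-sound and beach-picture arriving as one.

```python
# Illustrative only: a toy "codebook" showing how raw sensory patterns
# could be stored as compact symbols and replayed together later.

def build_codebook(patterns):
    """Assign each distinct pattern a small integer code (the 'shorthand')."""
    codebook = {}
    for p in patterns:
        if p not in codebook:
            codebook[p] = len(codebook)
    return codebook

def encode(patterns, codebook):
    return [codebook[p] for p in patterns]

def decode(codes, codebook):
    reverse = {c: p for p, c in codebook.items()}
    return [reverse[c] for c in codes]

# Two "senses" recorded as pattern sequences (names are made up).
sound = ["crash", "hiss", "crash", "crash"]
vision = ["wave", "foam", "wave", "wave"]

book = build_codebook(sound + vision)
sound_codes = encode(sound, book)    # compact symbols, not the sounds themselves
vision_codes = encode(vision, book)

# "Imagining the sea": replay both encoded streams in step.
replay = list(zip(decode(sound_codes, book), decode(vision_codes, book)))
print(replay[0])  # ('crash', 'wave')
```

The stored codes carry none of the "sound" themselves; only the decoding step brings the pattern back, which is roughly the saving-space point being made above.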

    Which part of all this would hold a gap? They seem all linked together in the case I set out.


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Torakx wrote: »
    My view(which could well be way off) with all senses, is that these patterns are recorded for later use, using neuron pathways.
    Our whole perception of reality, is a representation of our senses.
    We don't actually experience the real world. We, as ourselves, are a collection of behaviours in a shell template.
    Using the old Occam's Razor analogy, I don't see why this can't be the case just because we have not confirmed it scientifically. It seems the simplest explanation for consciousness.

    When I imagine the sea I hear the crushing sounds of the waves fairly clearly. I presume over my life as I listened to different versions of this sound, I built up a representation of it, as a pattern of neuron pathways. My thinking of it and bringing back the sound, could just be my brain routing through those pathways with the correct neurons responsible.

    I just can't see a gap. If my brain can bring back a visual image I had "recorded" in my head, then to me it is like a projector on the wall of my mind's eye. The projection is not real, but a representation of my memories of the experience I recorded through those neuron pathways.

    I think this is where you stand (taken from the Internet Encyc. of Phil.; the Hard Problem of C.) url - iep.utm.edu/hard-con/

    "A further point serves to support the claims of first-order representationalism. When we reflect on our experience in a focused way, we do not seem to find any distinctively mental properties. Rather, we find the very things first-order representationalism claims we represent: the basic sensory features of the world. If I ask you to reflect closely on your experience of a tree, you do not find special mental qualities. Rather, you find the tree, as it appears to you, as you represent it. This consideration, known as “transparency,” seems to undermine the claim that we need to posit special intrinsic qualia, seemingly irreducible properties of our experiences (Harman 1990, though see Kind 2003). Instead, we can explain all that we experience in terms of representation. We have a red experience because we represent physical red in the right way. It is then argued that representation can be given a reductive explanation. Representation, even the sort of representation involved in experience, is no more than various functional/physical processes of our brains tracking the environment. It follows that there is no further hard problem to deal with."

    Again though- with great access comes great imagination; the problem is: what is imagination made of? The representationalist argument (being reductionist) seems to say neurons. No matter how complicated the set-up though, there are no green, pink, violet etc. neurons that can be referred to once you close your eyes and imagine something. OK, you might say it is an illusion (which I would agree with), but how is the illusion created, such that even if you know you are experiencing an illusion (of pink elephants with orange eyelashes), your illusions are so coloured that you actually see, in your mind's eye, these colours? It’s not enough to say you remember them and recall them- where is the paint pot for the mental paint? Neurons don’t have one, no matter how you stack them.

    This is just to explain the difficulty that philosophy of mind has with qualitative character and subjectivity, which you said you thought you were missing. In the end, I actually agree with you; even the fundamentals of qualia, and more complicated stuff like the self, are just an illusion, which according to Reville is “repellent”. As Schopenhauer said though:
    “All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident. “

    I like this quote in relation to the question of qualia- where in this case it relates to the self-evidency of what are taken to be the intrinsic qualities of our mental experiences. Yes, our memories can be mistaken, but if I think I can see blue in my ‘mind’s eye’ (which fails as a Homunculus Argument), then I am experiencing blue- period. That is my experience- it is self-evident. How could one be wrong about an experience of a quality that I or anyone else experiences subjectively? As we have seen though, the blue (etc.) doesn’t exist in the brain (nor therefore could it exist as part of a simulated projected image- it has to draw on something physically stored). The hard problem then, is not explaining the self-evidency problem- it is explaining how the self-evidency is ‘presented’ as an illusion, i.e. the cognitive mechanics that lead to a mental life. That is what the academic paper I have written looks to explain from an evolutionary point of view.


  • Closed Accounts Posts: 3,780 ✭✭✭Frank Lee Midere


    I am not sure why there needs to be a specific neuron for each represented object.

    I tend to believe that the self is software, running on the brain's hardware. Unlike a general-purpose CPU, however, it depends mostly on specific parts of the brain to get stuff done and garner inputs. However, the brain can compensate- as stroke victims testify, parts of the brain can be trained to do something they weren't designed for or traditionally used for. Like an OS using the CPU when the GPU is borked.

    Just as software can be modelled as abstracted from hardware but is dependent on it, so the self runs on the brain but is abstracted from it.

    So trees are represented as software, not hardware. There are no neurons designed per qualia. It's abstracted out. Same with the self.


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Hi Frank,
    I am not sure why there needs to be a specific neuron for each represented object.
    Well, I would imagine that what is proposed is a cluster of neurons doing the representing, not a one-to-one affair.
    I tend to believe that the self is software, running on the brain's hardware.
    I can picture that OK (even though I don't accept computationalism as the basis of human cognition).
    Just as software can be modelled as abstracted from hardware but is dependent on it, so the self runs on the brain but is abstracted from it.

    So trees are represented as software, not hardware. There are no neurons designed per qualia. It's abstracted out. Same with the self.

    I don't understand what you mean by 'abstracted out' in terms of qualia; in the case of software and hardware, I can see what you mean in relation to the self, in that it could be argued that a program could lead us (as our biology does) to 'know' ourselves. But how would colour, for example, be abstracted out of binary electronics (or neurons, in the brain's case)?


  • Closed Accounts Posts: 3,780 ✭✭✭Frank Lee Midere


    Hi Frank,


    Well, I would imagine that what is proposed is a cluster of neurons doing the representing, not a one-to-one affair.


    I can picture that OK (even though I don't accept computationalism as the basis of human cognition).



    I don't understand what you mean by 'abstracted out' in terms of qualia; in the case of software and hardware, I can see what you mean in relation to the self, in that it could be argued that a program could lead us (as our biology does) to 'know' ourselves. But how would colour, for example, be abstracted out of binary electronics (or neurons, in the brain's case)?

    What I mean is this:

    There are two ways of handling complex tasks on a modern device. Let's take facial recognition as an example. It can be done in the CPU/GPU or you can have a dedicated chip. A facial recognition chip. Mostly the former of course.

    Let's take that as an analogue for "colour recognition". In the first case there are no obvious parts of the device which do the facial recognition. It will use the general CPU and or the GPU to do it. The memory will be standard RAM, and any part of it. If you are looking for a subsystem you won't find it. What's doing the recognition is software.

    In the second case you will find a subsystem.
    Neuroscientists keep looking for case 2.

    The brain of course is not a computer. It mostly is specialised but this doesn't stop some algorithms being distributed across the brain, existing at a "software" level.

    The self could be like that. And some qualia.
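The two cases can be caricatured in code (a toy sketch I am adding in Python; nothing here resembles real face recognition, and all the names are invented): in case 2 there is a component you can point to, while in case 1 the "recogniser" is nowhere in particular- it is just software running on generic parts.

```python
# Toy analogy for the two cases (not a claim about real face recognition).

# Case 2: a dedicated "subsystem" you can point to.
class RecognitionChip:
    def recognise(self, image):
        return "face" in image  # stands in for specialised hardware

# Case 1: recognition done "in software" on general-purpose parts.
def recognise_in_software(image, cpu_ops):
    # Uses only generic operations; no single component is "the recogniser".
    return any(op(image) for op in cpu_ops)

generic_ops = [lambda img: "face" in img, lambda img: "eyes" in img]

print(RecognitionChip().recognise("face photo"))          # True: the chip did it
print(recognise_in_software("face photo", generic_ops))   # True: but which part did it?
```

In case 1, dissecting the "device" finds only general-purpose pieces, which is the point being made about neuroscientists looking for case 2.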


  • Registered Users, Registered Users 2 Posts: 3,831 ✭✭✭Torakx


    In relation to the eye seeing colour and the brain reading it: I presume when we imagine a colour we are not actually seeing it in our mind's eye. We are getting the sensation of colour- the brain's interpretation.
    When imagining, we are stimulating the brain to mimic the sensations and neuron pathways etc. that we created when actually experiencing light going through our eyes.
    So it might be that the brain routes through a part in the visual cortex that is used to distinguish different colours and shapes in order to make a symbolic representation.
    The more we run these pathways the better we can represent them.
    I can picture a cup very easily, but trying to picture something I have less experience with might bring a hazier image, where the brain is probably using the right side (global random access, as opposed to the left, which is local random access) to fill in the gaps based on our idea of what a thing would look like.
    Imagine a Seal with a hat. What colour is the hat?
    For each of us it depends on many variables and I believe each of us would have a fairly common image represented of a seal, but the hat has more variables and so the brain must have to sort colour and shape based on our own lifetime experiences with hats.

    So I don't think there is any "paint" at all. It could just use some of the conscious functions.
    Also, if I try hard to visualize something while keeping my eyes open, I find it hard to also concentrate on details in front of me, because I presume the brain is temporarily taking control of parts of the visual cortex.
    Much like day dreaming. I day dream quite a lot as a sort of meditation.
    And I am very right side dominant, at the least visually and probably with a lot of other functions too.


  • Moderators, Category Moderators, Science, Health & Environment Moderators, Society & Culture Moderators Posts: 47,539 CMod ✭✭✭✭Black Swan


    Consciousness, cognition, and the biology of behaviour?


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Black Swan wrote: »
    Consciousness, cognition, and the biology of behaviour?

    I'm a newbie here: does this mean there is a thread with that title?

    I suppose, originally, this was just looking for someone in the area of philosophy of mind who could assess the paper before I submitted it- to see if it was rubbish or innovative (or even both, possibly). In a way, I'm now defending my position from a particular theoretical aspect (mine), rather than this being a general discussion about C, C and B of B, I suppose (at least so far- not many contributors).
    I've actually put the paper aside for a year or so, so I'm a bit rusty coming back to it. Anybody got any questions about the subject matter?


  • Moderators, Category Moderators, Science, Health & Environment Moderators, Society & Culture Moderators Posts: 47,539 CMod ✭✭✭✭Black Swan


    I'm a newbie here: does this mean there is a thread with that title?
    This new thread is good. We can discuss philosophical aspects along with the other content.


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Let's take that as an analogue for "colour recognition". In the first case there are no obvious parts of the device which do the facial recognition. It will use the general CPU and or the GPU to do it. The memory will be standard RAM, and any part of it. If you are looking for a subsystem you won't find it. What's doing the recognition is software.

    In the second case you will find a subsystem.
    Neuroscientists keep looking for case 2.

    The brain of course is not a computer. It mostly is specialised but this doesn't stop some algorithms being distributed across the brain, existing at a "software" level.

    The self could be like that. And some qualia.

    Personally, I find computer analogies distracting- the mechanics of mind is difficult enough to get to grips with without using a binary-based analogy to hold up as a simplified metaphor. However, the ‘functions’ rather than the ‘meat and gristle’ is what we can look at I suppose.

    It’s difficult to tell though, from the above, whether you believe that qualia ‘exist’ or not; I think you are saying they don’t.

    The key here, I think, as I have agreed before, is tracking- which uses recognition. People get caught up with the notion of representation as the means to do this. But from an evolutionary perspective, representation wasn’t there at the start, and it remained that way. By this I mean the way simple organisms- as the basis for life as we know it now- exploited their environment; on a chemical level, let’s say, crossing gradients in search of nutrients. Representation is not needed here- the behaviour is just to follow the trail to the nutrients. Perhaps the evolving species developed a crude visual perceptual system next, one that simply responded to specific shapes, or tactile edges, reflexively engulfing particular shapes.

    My point is that no representational modelling was needed as the species developed its increasingly complex response/behaviour systems in tandem with its perceptual systems. This development is how evolution works on the cognitive apparatus (and in general); it never starts again- it just adds bits on. At no point in such development, then, was a cognitive system version 2 developed; a reboot, as it were. All animals, including us, were stuck with the cognitive trajectory we initially started off with.
    The reason I put ‘functions’ in quotes above relates to my view on our assumptions about the notion of representation (and is, I suppose, a postmodernist angle): it is what we tend to project onto this process (because we are conscious, and automatically interpreting things is what we do). In other words, from the feel of representation, we assume there is representation. I would argue then that the illusion of representation is a huge impediment that needs to be overcome, though it seems pretty hard to shake as the well-entrenched view.

    If we ignore the whole issue of representation then, I would think you are saying (although you say, for some reason, ‘some’ qualia) that qualia are just as ‘unreal’ as notions of the self, and that you are thus a thorough materialist- hence the computer analogy. I suppose you like Gilbert Ryle’s expression- the "Ghost in the Machine". What no theory has come up with, then, is how our brains are able to come up with ‘ghostly apparitions’ in our heads.


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Torakx wrote: »
    In relation to the eye seeing colour and the brain reading it: I presume when we imagine a colour we are not actually seeing it in our mind's eye. We are getting the sensation of colour- the brain's interpretation.
    When imagining, we are stimulating the brain to mimic the sensations and neuron pathways etc. that we created when actually experiencing light going through our eyes.
    So it might be that the brain routes through a part in the visual cortex that is used to distinguish different colours and shapes in order to make a symbolic representation.

    Ok, so there is no mental paint. When you say "we are getting the sensation of colour", it might sound like we are getting nowhere- but then you say the brain can "mimic the sensations", which I think is spot on (but then you go on to use symbolic representation: using Occam's Razor, this is unnecessary, as I've already said (though I hold a tiny minority view)). The hard work is to convince people of how this mimicking actually works, i.e. that it is possible for the brain to produce a "Ghost in the Machine" from neurons, because it is the supreme ontological and metaphysical irony: the ghost in the machine is real, because our neurons have produced it, and we experience it, and it's self-evident (Ryle, in league with the Behaviourists, said to ignore it).
    There is no theory of consciousness that I know of that accounts for this (except for one).


  • Closed Accounts Posts: 3,780 ✭✭✭Frank Lee Midere


    Personally, I find computer analogies distracting- the mechanics of mind is difficult enough to get to grips with without using a binary-based analogy to hold up as a simplified metaphor. However, the ‘functions’ rather than the ‘meat and gristle’ is what we can look at I suppose.

    It’s difficult to tell though, from the above, whether you believe that qualia ‘exist’ or not; I think you are saying they don’t.

    The key here, I think, as I have agreed before, is tracking- which uses recognition. People get caught up with the notion of representation as the means to do this. But from an evolutionary perspective, representation wasn’t there at the start, and it remained that way. By this I mean the way simple organisms- as the basis for life as we know it now- exploited their environment; on a chemical level, let’s say, crossing gradients in search of nutrients. Representation is not needed here- the behaviour is just to follow the trail to the nutrients. Perhaps the evolving species developed a crude visual perceptual system next, one that simply responded to specific shapes, or tactile edges, reflexively engulfing particular shapes.

    My point is that no representational modelling was needed as the species developed its increasingly complex response/behaviour systems in tandem with its perceptual systems. This development is how evolution works on the cognitive apparatus (and in general); it never starts again- it just adds bits on. At no point in such development, then, was a cognitive system version 2 developed; a reboot, as it were. All animals, including us, were stuck with the cognitive trajectory we initially started off with.
    The reason I put ‘functions’ in quotes above relates to my view on our assumptions about the notion of representation (and is, I suppose, a postmodernist angle): it is what we tend to project onto this process (because we are conscious, and automatically interpreting things is what we do). In other words, from the feel of representation, we assume there is representation. I would argue then that the illusion of representation is a huge impediment that needs to be overcome, though it seems pretty hard to shake as the well-entrenched view.

    If we ignore the whole issue of representation then, I would think you are saying (although you say, for some reason, ‘some’ qualia) that qualia are just as ‘unreal’ as notions of the self, and that you are thus a thorough materialist- hence the computer analogy. I suppose you like Gilbert Ryle’s expression- the "Ghost in the Machine". What no theory has come up with, then, is how our brains are able to come up with ‘ghostly apparitions’ in our heads.

    I said "some qualia" because I wasn't sure whether this was universal or not.

    I think that a lot of philosophers don't really understand software and its abstraction. I am using the term "software" not to mean something somebody coded, but a higher emergent property above the hardware. Dependent on it but not it. No ghost in a machine, but software running on it.

    If a complex program is running across a number of, say, ten different CPUs or computers to balance its load, you can take away from and add to any of them, or all of them, provided you don't remove them all at once. The program, if written correctly, just balances itself across the other computers. So if the new computers are compatible, you can replace the entire hardware. The software still runs.
    So where is the software? Is it a ghost? Someone who didn't understand how this worked would see it like that.
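A minimal sketch of that rebalancing idea (my own toy example, assuming a trivially simple round-robin assignment- real load balancers are far more involved):

```python
# Toy load balancing: work reassigns itself when machines come and go.

def assign(tasks, machines):
    """Spread tasks across whatever machines currently exist."""
    return {t: machines[i % len(machines)] for i, t in enumerate(tasks)}

tasks = [f"task{i}" for i in range(6)]

layout1 = assign(tasks, ["m1", "m2", "m3"])
layout2 = assign(tasks, ["m2", "m3", "m4"])  # m1 removed, m4 added

# The "program" (the full set of tasks) survives the hardware change:
# every task is still assigned, just to different machines.
assert set(layout1) == set(layout2) == set(tasks)
```

Asking which machine "is" the program is the wrong question here, which is roughly the ghost point above.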

    I take it the mind is software (or analogous to it), because people have a sense of self even after injury to the brain, unless that injury is too deep. At the early stages of Alzheimer's, before the effect on the hardware affects the software too much, the software self is aware that something is wrong.

    Here's another software example. The hard drive on your computer looks to you like it has folders and files within those folders. This is a lie*: on the hard disk the files are laid side by side, and two files in a folder may be well separated on the disk, while a file which is incredibly deep in the hierarchy may be right beside one at the top. Worse, files may be fragmented- broken up on the hard drive so they don't even exist as continuous files. Even worse, folders don't really exist at all.

    So a folder hierarchy is a myth right? It doesn't exist right?

    It does exist. It exists in software, and from any point of view above the level of the filesystem driver, files and folders exist. I take this as being real, at that level of abstraction. Software can have multiple levels of abstraction.


    * at the literalist hardware level.
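The folder "lie" can be made concrete with a toy model (my own sketch; no real filesystem is coded like this): a flat "disk" of blocks plus a software table, with the hierarchy existing only in the table's path strings.

```python
# Toy disk: just a flat list of blocks; no folders anywhere in "hardware".
disk = ["chunk-B2", "chunk-A1", "chunk-B1", "chunk-A2"]

# The "filesystem driver": a software table mapping paths to block indices.
# Files can be fragmented and out of order; folders exist only in the paths.
table = {
    "/docs/a.txt": [1, 3],   # blocks scattered across the disk
    "/docs/b.txt": [2, 0],
}

def read(path):
    return [disk[i] for i in table[path]]

def listdir(folder):
    # "Folders" are reconstructed from path strings; nothing on disk matches them.
    return sorted(p for p in table if p.startswith(folder + "/"))

print(listdir("/docs"))      # ['/docs/a.txt', '/docs/b.txt']
print(read("/docs/a.txt"))   # ['chunk-A1', 'chunk-A2']
```

Above the `table`, files and folders are perfectly real; below it, there are only blocks- which is the multiple-levels-of-abstraction point.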


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    The post ends clearly and concretely:
    It does exist. It exists in software, and from any point of view above the level of the filesystem driver, files and folders exist. I take this as being real, at that level of abstraction.
    I take it to mean that what is real is the experience of the (illusory, or fabricated) Ghost in the Machine; a mental life.

    A couple of points about the lead up to this though:
    I take it the mind is software

    Maybe I read too much into your division here (possibly because of an unnecessary comparison with a computer analogy?). Is this to say that the hardware has nothing to do with mind because it just computes, whereas mind is only the software? The mind, in a materialist account, has to be based in the brain. The cognitive processing part of the brain is obviously analogous to the hardware; what would the analogous part of the software be?

    The second point is more sticky;
    I am using the term "software" to not mean something somebody coded but a higher emergent property above the hardware.

    I would be wary of using the concept ‘emergent’:

    From Stanford Encyclopedia on Emergentism:
    "In philosophy, emergentism is the belief in emergence, particularly as it involves consciousness and the philosophy of mind, and as it contrasts (or not) with reductionism. A property of a system is said to be emergent if it is in some sense more than the "sum" of the properties of the system's parts. An emergent property is said to be dependent on some more basic properties (and their relationships and configuration), so that it can have no separate existence. However, a degree of independence is also asserted of emergent properties, so that they are not identical to, or reducible to, or predictable from, or deducible from their bases. The different ways in which the independence requirement can be satisfied lead to variant types of emergence."

    From IEP on Supervenience and mind:
    "mental characteristics are in some sense dependent, or supervenient, on physical characteristics. Such supervenience might be taken to mean that there cannot be two events alike in all physical respects but differing in some mental respect, or that an object cannot alter in some mental respect without altering in some physical respect” (1970, p. 88).
    Not long after the appearance of Davidson’s essay, discussions of Non-Reductive Physicalism became the major locus of supervenience-talk in the philosophy of mind and the philosophical literature in general. Due to the popularity of functionalist accounts of mentality and the wide degree of multiple realizability they entail (that is, that the same mental property can be realized by events of different physical, chemical, and neural types), a dominant view in the philosophy of mind for the past few decades is the belief that mental properties are not identical with neural properties or any other properties of the natural sciences. Yet, many of those who reject psychophysical property-identities also claim to support Physicalism regarding mentality, the view that all mental phenomena obtain solely by virtue of physical phenomena. One wonders: how can mentality obtain solely by virtue of physical phenomena, as Physicalism maintains, if mental properties are not physical? The most popular answer is that while mental properties are not identical with physical properties, they nonetheless depend on nothing other than physical properties — that is, they supervene on physical properties."


    It’s opening up a whole new can of conceptual worms, which can be explored at a later point. However, perhaps we can look at it this way, tying in a similar approach, in order to temporarily bypass it:
    The experiences of introspection that lead to notions of self and qualia are just that- experiences, where we can all agree that we undergo such experiences, despite knowing that they are based on neuronal activity; so they ‘exist’, as it were, in our heads. They are ‘useful fictions’ (I would argue indispensable, if malleable, fictions), which can be interpreted as emergent. This then is an apparent emergence, just as all ‘mental’ activity is apparent.
    Additionally, if software supposedly produces something that it is not coded to produce, then what you are talking about is not anything to do with the software (which you then take to be an emergent property), but the human interaction with it: the computer’s output, which we experience, usually as something on a screen. (For the nerds: this leads to issues of the Turing Test, and Searle’s Chinese Room argument in semantic theory, but this is about emergence.) Emergence from a programming analogy, so far as I can see, offers nothing in terms of exploring how we could investigate, even theoretically, the essence of consciousness. It's fine if you want to talk about outputs, but that is Behaviourism and Functionalism. Is it my turn to miss something here?


  • Registered Users, Registered Users 2 Posts: 3,831 ✭✭✭Torakx


    Ok, so there is no mental paint. When you say "we are getting the sensation of colour", it might sound like we are getting nowhere- but then you say the brain can "mimic the sensations", which I think is spot on (but then you go on to use symbolic representation: using Occam's Razor, this is unnecessary, as I've already said (though I hold a tiny minority view)). The hard work is to convince people of how this mimicking actually works, i.e. that it is possible for the brain to produce a "Ghost in the Machine" from neurons, because it is the supreme ontological and metaphysical irony: the ghost in the machine is real, because our neurons have produced it, and we experience it, and it's self-evident (Ryle, in league with the Behaviourists, said to ignore it).
    There is no theory of consciousness that I know of that accounts for this (except for one).

    I may have used incorrect words (I am unsure really; I had to teach myself English through experience for the most part). By symbolic, I mean a placeholder that holds information for the "reader", an abstract way of storing the information to save space. A sort of shorthand, like binary in computers.
    The computer reads all those ones and zeros as just that. Actually, a computer will use a sequence or pattern of those to form different functions. In the brain I see these neuron signals as shorthand- speedy ways to transfer complicated information in the most efficient way possible. But I think this is just stating the obvious.
    By representation I mean that it represents something, using a different form.
    I'm unsure of other philosophical fields, like maybe representationalism, if it exists..
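To make the "pattern of ones and zeros forming different functions" point concrete, here is a toy sketch (my own, in Python; the "opcodes" and their meanings are invented purely for illustration):

```python
# Toy illustration: the same two symbols (0 and 1), read in patterns,
# stand for different "functions" depending on the sequence.
OPCODES = {
    (0, 0): "rest",
    (0, 1): "hear-wave",
    (1, 0): "see-beach",
    (1, 1): "recall-both",
}

signal = [0, 1, 1, 0, 1, 1]  # a stream of pulses

# Read the stream two pulses at a time and interpret each pattern.
meaning = [OPCODES[(signal[i], signal[i + 1])] for i in range(0, len(signal), 2)]
print(meaning)  # ['hear-wave', 'see-beach', 'recall-both']
```

The pulses themselves carry no sound or picture; only the interpreting step gives the patterns their "meaning".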

    If what I just said makes sense and is agreeable, the whole argument seems to be a materialist one. Or what I mean is, based in the biology of the brain itself: neuroscience.
    I can intuitively "see" clear ways this whole process can work including the "ghost in the machine".
    But to fully answer your problem I would need to do the work of a neuroscientist and discover new functions in the brain that explain this. Which they are striving to do still.

    If a certain neuron, or some function in and/or between neurons, was needed to explain this, I would have to invent it in theory and wait to see if neuroscientists prove it. For me to do so, I would need to know all about how neurons function and interact. I am far from that :(
    As are all neuroscientists maybe.
    It seems like you are trying to apply a metaphysical (in the common sense of the word) property to the brain, when I do not acknowledge one.

    Also we may not understand each other, because we have not laid out our foundations for each other.

    My foundation, as it stands, on the evolution of consciousness- in a not so scientific way... more a general view, due to ignorance of the details of all these things.

    At the stage of our evolution where we started cooking our food, we became much more efficient at getting the energy we need to function and live (evolve).
    As we could cook meat and digest it more easily, the digestive tract became smaller. It had less work to do. The energy required to digest could then be put to other uses. Due to the function of evolution (use it or lose it), we started diverting resources to the brain. We no longer needed as many physical traits as we did mental traits. Tools then became the precursor to farming and so on. The evolution of consciousness in my view began when our bodies changed to adapt to our environment, which we began to change. Since we now changed our environment to make intelligence the main factor in survival, we will evolve more intelligence.

    The evolution of the self could be put down, very generally, to the need to function in a changing environment, where intelligence and interaction with intelligence is the key to survival.
    Considering all animals (that we are in any way related to, with the brain in mind) are in some way social (I hope I am not misrepresenting that word at the core), to survive, our evolution would naturally work on the social aspect. Part of the evolution of this social interaction, I think, involves the development of the self in relation to the group. Sociology or anthropology might be a place to look into that more.
    One reason the self might have naturally been needed could be the need to retain variety in the area of cognition and social interaction, so it may evolve and not die out.
    Now that we more or less completely cover this planet, there is a great need for social progress in our evolution, and I believe it is constantly happening. As soon as we mess with it, it will change course and update generations. We may still be tens of thousands of years from evolving to "get along". Possibly we need to wipe ourselves out a bit more to instill that evolutionary need to get along. We are possibly too many on this planet. This could be why we still war. It might be necessary to cull our numbers when we insist on lengthening lives and mollycoddling the weak.
    This seems a very Luciferian view lol, but it is a logical one too, I think.
    As a side note, this is one of the major issues I have with transhumanism and extending our lives indefinitely. If we can make more room or leave the planet, then I will consider it a viable option.

    So that's my general view on how consciousness and the self evolved.
    I do not fully understand the biological process involved in that evolution. I research the brain on and off out of interest, especially in relation to behaviour. But I do not know enough to explain how the neurons would have interacted inside the various parts of the brain, to the point where I could say "this here is the process that creates the 'ghost in the machine'".

    Are you looking for the biological process? And if you have a theory on the biological process, could you share it, or give me something to go on? Then I can research in my own time and see how the information matches up with your theory (which I don't know yet) and mine.
    From that I would hope to evolve my own perception of this whole topic.
    Ugh, I guess I am saying there, without realizing it: "I need to see your work to know what you are talking about".

    I am unsure if you have studied, for example, all about neurons and everything scientists know about them so far, or about how the visual cortex works, and maybe most importantly of all, how that bunch of nerves in the prefrontal cortex works that is responsible for decisions relating to the self and so many other things in conscious thought.

    Going back to philosophy. This may be a distraction, as I cannot explain the "feeling" or intuition it gives me on this topic.
    But this picture kind of reminds me of this conversation, with regard to the ghost and the way I was trying to explain the brain using our modes of processing our senses to recreate memories as pictures, and likewise for all the other senses.
    [image: a single object casting a square shadow under one light and a circular shadow under another]

    If the object casting the shadows is the brain's machinery for perceiving the world around us, the square could be our experience and the circle could be our memory and imagination.
    The circle, created by the same original function that casts the square shadow, could be said to be a secondary result of using the same process in a different way.
    The object is real and so is the light. Both shadows are real, because we perceive them as real (through abstraction), and also not real, when using this as an analogy for the brain and mind.
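    The geometry the picture trades on can actually be checked. As a rough numerical sketch (assuming the pictured object is a cylinder whose height equals its diameter, one common way the illusion is drawn), sampling its surface shows that the one object really does cast both shadows:

    ```python
    import math

    # Sketch of the shadow illusion, assuming the pictured object is a
    # cylinder of radius 1 and height 2 (height equal to diameter).
    # Sample points on its curved surface.
    points = [
        (math.cos(t), math.sin(t), z)
        for t in (2.0 * math.pi * i / 200 for i in range(200))
        for z in (j / 10.0 - 1.0 for j in range(21))
    ]

    # Shadow from a light along the y-axis: the (x, z) outline is a square.
    side_shadow = [(x, z) for x, y, z in points]
    # Shadow from a light along the z-axis: the (x, y) outline is a circle.
    top_shadow = [(x, y) for x, y, z in points]

    # The side shadow spans the full square [-1, 1] x [-1, 1] ...
    is_square = (min(x for x, _ in side_shadow) < -0.999
                 and max(x for x, _ in side_shadow) > 0.999
                 and min(z for _, z in side_shadow) == -1.0
                 and max(z for _, z in side_shadow) == 1.0)
    # ... and every top-shadow point sits on the unit circle.
    is_circle = all(abs(x * x + y * y - 1.0) < 1e-9 for x, y in top_shadow)
    print(is_square, is_circle)  # True True
    ```

    One object, one light at a time, two incompatible-looking shadows: which is the sense in which both shadows are "real" without either being the object itself.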


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Torakx wrote: »
    It seems like you are trying to apply a metaphysical (in the common sense of the word) property to the brain, when I do not acknowledge one.
    I have no idea how you came to see that I have a metaphysical slant. Please show me.
    Torakx wrote: »
    At the stage of our evolution where we started cooking our food, we became much more efficient at getting the energy we need to function and live(evolve).
    As we could cook meat and digest it more easily, the digestive tract became smaller. It had less work to do. The energy required for digestion could then be put to other uses. Due to the way evolution works (use it or lose it), we started diverting resources to the brain. We no longer needed physical traits as much as we needed mental traits. Tools then became the precursor to farming and so on. The evolution of consciousness, in my view, began when our bodies changed to adapt to our environment, which we in turn began to change. Since we now change our environment so that intelligence is the main factor in survival, we will evolve more intelligence.

    Here's the thing: there is an assumption that an increase in intelligence inevitably led to consciousness in humans (or, alternatively, that all animals are conscious). These days this is mainly because comparative neuroscience studies between human and other animals' brains show specific and unique structures that seem to correlate with our vastly enhanced intelligence relative to other animals (particularly structures to do with language). But the neuroscience behind this is like looking for lost keys in the dark under the lamppost. Nothing about the neurobiology of the brain will ever lead us to figure out how the capacity for introspection came about, and that is not just down to its complexity. It is putting the cart before the horse.
    Back to basics, then. My approach is through philosophy of mind, via speculation about the cognitive mechanisms and systems that evolved from the start of life itself to where we (all animals) are now. This is where the assumptions about the nature of intelligence come in: just as there are no agreed criteria for what consciousness actually is, neither are there for intelligence (just look up the Stanford Encyclopedia of Philosophy to see how difficult it is to demarcate), so a theoretical underpinning of the cognitive structures can at least make a more a priori stab at finding the possible point of appearance of what we roughly agree is consciousness.
    Looking for this 'point of appearance' in terms of neuroscience, then, is like a mechanic tinkering with the engine of a car (under a lamppost!), or an IT guy with a CPU, if either the car or the computer were thought to be conscious (i.e. passing the Turing test). Embodied Cognition has it right (in this sense): mind, and hence consciousness, is biologically based (in the jargon, it is not multiply realizable; looking from a functionalist POV is again putting the cart before the horse). The philosophy of mind/biological evolution doesn't concentrate on biology per se, but on the principles of cognitive evolution.
    Essentially, 'Cognitive Science' is a misnomer; 'Cognitive Studies' is more apt. Here the scientific approach is de-emphasized, because it emphasizes brute biology, where if all you have is a hammer, everything else starts to look like a nail.


  • Registered Users, Registered Users 2 Posts: 3,831 ✭✭✭Torakx


    Sorry, you are indeed right. I think I lazily came to that statement when trying to understand how representations could fail to be an inherent part of the brain's way of interpreting the outside world.
    If we don't represent it in another form, then must we record it in its actual form? But after posting it I kind of knew in the back of my head that you weren't leaning towards the metaphysical. But I also could not see how we could function without representations.
    The issue is possibly that I am taking the word too literally? And in philosophy there is some other meaning? Or the true meaning fully embraced?

    So please forget I made that metaphysical statement :)
    It didn't make sense.
    I might say that the mind is a representation of the outside world. A reflection. The self might be the representation of Will, possibly linking to Nietzsche's "will to power", but I am honestly unsure what he meant by that. It seemed more like a replacement for God than a description of our own actual will.

    I see self-awareness as a natural progression in our evolution from apes. Well, they are also self-aware, right? If that's the case, I suppose this self-awareness happened earlier than I first suggested when talking about cooking food.
    http://news.discovery.com/animals/zoo-animals/chimpanzees-self-awareness-110504.htm
    OK, so they most likely are self-aware.
    I wonder is this because they started eating less veg and more meat? Grasping at straws there :D

    I get what you are saying about neuroscience being a hammer: forcing one method of investigation in what might be seen as a linear pattern or single direction.
    I often think to myself: what is it that makes a cell want to reproduce, and a collection of cells (animals, humans, etc.) want to reproduce the collection?
    So I like your suggestion to go back to basics as it were.
    But I can't help feeling this is going to go in circles, because having this conversation/discussion involves you telling us your theory. And I still don't know how you feel about that.

    I'm unable to do things the "left-brain" way. I don't go in logical, linear directions/patterns. I have one of those seemingly chaotic "right-brain" minds. So I start at the very end of everything and piece things together as they appear to be useful, based on the end goal and what it "looks" like. I need the end goal to know which pieces fit, if that makes sense.
    So I do see I can be trying sometimes :D

    But I hope you will continue to explain your ideas on how your chosen method of investigation could yield answers, and in exactly which areas of cognitive science those answers could be found.
    Do you have a theory that answers the question of consciousness itself? Or more a theory on how to answer the question?


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Torakx; my hypothesis might be complete hokum, or marvellously innovative. If it has not yet been officially 'lodged', and it turns out to be the latter, I'm hardly going to publish it on a web forum first. Sorry to disappoint.

    The initial purpose of the thread, as I said in the first post, was to see if I could get it read by someone in the field of expertise before submitting it to a peer-reviewed journal pertaining to philosophy of mind. I would imagine I'll end up just submitting it anyway, but a quick perusal by an expert prior to this would be very beneficial.
    In the meantime, I am enjoying the interactions- it's stimulating.
    Torakx wrote: »
    I might say that the mind is a representation of the outside world. A reflection.
    I recognised some common ground with my hypothesis and Embodied Cognition some time back, but never really investigated EC, until this forum rejuvenated my interest. I downloaded and have been reading:
    "The Phenomenological Mind- an introduction to philosophy of mind and cognitive science", by Gallagher and Zahavi. (very readable- good stuff in it).
    Here's a philosophical quagmire you can look up: internalism/externalism. It'll melt your brain. Then try a thought experiment I use, and compare your idea of 'representation' with 'mirror'.
    Picture cognition, in connection with perception, as an internal hall of windows housed as part of the brain proper. The cognitive biology is genetically geared towards reacting dynamically to the environment: the environment changes the window configuration so as to make it more sensitive to those elements of the environment that get the body what it needs. The cognitive part is in this way distinct from the sensory part of perception, in that it is the 'engine' that sorts out the various stimuli in order to prioritize what to look for using the senses (no easy task; see the frame problem in an earlier post). EC, like my hypothesis, shows there is no need of representation; the brain has malleable mirrors (really windows) that use cognition to act in the world in order to get what it is genetically programmed to get, in a (biologically) dynamic way. (You can also see how this is minimalist biology, so that it is mainly in the abstract.)
    Note also, this is in relation to animal cognition, which I hypothesize as being the same as human cognition. Novel brain biology in humans is the result of a genetic anomaly that brought about a cognitive change which then allowed what we call consciousness, not the other way around. It additionally follows that: A) animals are not conscious (but have the potential to be), and B) a very tricky point here (related to point A), environmental pressure is not how consciousness appeared; the anomaly is predicted to be fantastically rare.
    Torakx wrote: »
    The self awareness I see as being a natural progression in our evolution from apes. Well they are also self aware right? If that's the case, I suppose this self awareness happened earlier than I had first suggested, when talking about cooking food.
    From the above: the question of self-awareness and consciousness. If, as I have said, animals are not conscious (according to my hypothesis), how do they display what we would see as self-recognition? The self-recognition test is assumed to be integral to consciousness, because it is taken to stem from self-reflection, a feature of consciousness, and not merely from a behavioural capacity. However, those in the field of Behavioural Ecology would contest that (as would I). Highly intelligent animals such as chimps have a much richer repository of body schemata to draw on, especially since they are such socially tactile animals. They display what is wrongly assumed to be a necessary component of consciousness. What, then, would a necessary component of consciousness be? I think the capacity to display semantic content and syntax is the only litmus test; aesthetics and language.


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Update:

    NUIG got back to me. Apparently a lecturer was approached with my request for a PhD student to read the paper, but I was told they couldn't help. The advice was just to submit it, which, as I said, I'll probably end up doing. Still though, if anyone knows someone....


  • Registered Users, Registered Users 2 Posts: 3,831 ✭✭✭Torakx


    Is the concern copyright?
    There must be protections you can use to publish it and have your name associated with the article and the date it was released into the public "arena".

    You have probably heard of people sending theories and books to themselves by registered post, to prove they wrote something first, when ownership of an idea was a concern.
    There must be a modern version of that, or maybe the old-school way is still the best?
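    A modern version does exist, in the form of cryptographic hashing (an illustrative sketch only, not legal advice): you compute a short fingerprint of the file and record that fingerprint somewhere independently dated, such as an email to a solicitor or a public post. The fingerprint reveals nothing about the content, but later producing the file that matches it shows you held that exact text at the time.

    ```python
    import hashlib

    def fingerprint(path: str) -> str:
        """Return the SHA-256 hex digest of a file: a short fingerprint
        that changes completely if even one byte of the file changes."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large manuscripts need not fit in memory.
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()
    ```

    Any later edit to the manuscript, however small, produces a completely different digest, so the recorded fingerprint pins down one exact version of the text.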

    If it is wrong, that's not a bad thing at all.
    It is just as important to teach the mistakes as it is the answers.
    The answers often tell us how; the mistakes often tell us why.

    I'll have to do some reading on Embodied Cognition after your last reply :)


  • Registered Users, Registered Users 2 Posts: 1,158 ✭✭✭Joe1919


    In terms of security and protecting the paper, I would think about two options.

    The first would be to only ever send a copy as a locked PDF, or better still as a scanned PDF, so that chunks of your text won't be easily plagiarized. When I was a student, I saw students copy and paste big chunks of other people's work (and essays online) in a hurry to complete essays. Could a problem occur in the future if a plagiarizer copies some of your work and is the first to run it through 'Turnitin'?

    A second possibility would be to submit your finished work to 'Turnitin' yourself and keep a copy of the result. This would prove that your work was original, as suggested below.
    http://www.gcu.edu/Academics/Academic-Integrity/About-Turnitin/Turnitin-FAQs.php

    However, take the above as a suggestion and possibly inquire further as I have only limited experience of the above.


  • Registered Users, Registered Users 2 Posts: 1,158 ✭✭✭Joe1919


    PS. I see 'Turnitin' have a link to a service called 'iThenticate'. It costs 50 dollars for a single paper of up to 25,000 words. But whether this is necessary or not, I don't know.
    http://www.ithenticate.com/


  • Registered Users, Registered Users 2 Posts: 1,991 ✭✭✭cdgalwegian


    Joe1919 wrote: »
    In terms of security and protecting the paper, I would think about two options.

    The first would be to only ever send a copy as a locked PDF, or better still as a scanned PDF, so that chunks of your text won't be easily plagiarized. When I was a student, I saw students copy and paste big chunks of other people's work (and essays online) in a hurry to complete essays. Could a problem occur in the future if a plagiarizer copies some of your work and is the first to run it through 'Turnitin'?

    A second possibility would be to submit your finished work to 'Turnitin' yourself and keep a copy of the result. This would prove that your work was original.

    Thanks for that (numeral) Joe. ;)

    I read your post a couple of days ago, but have been mad busy- long work hours over the weekend, a funeral, studying a thesis on heuristics; ya know- the usual weekend stuff.

    Anyway, I've waited this long: I might as well tidy the paper up and change a few bits around before submitting it to a peer-reviewed journal. The synopsis of the book upon which the paper is based got a favourable review, so I suppose that indicates it won't be laughed at if I submit it. It's a pity no-one seems to know any academics in philosophy who might enjoy getting their teeth into some juicy philosophy-of-mind theorization, and who could then give me some pointers on said tidying up. Another factor is advice on the right journal to submit to, as there is a lot to consider there.

    Thanks again,

    And in the meantime, if anyone knows anyone...


  • Registered Users, Registered Users 2 Posts: 2,651 ✭✭✭ShowMeTheCash


    Hey, I read some of the posts but not all of them.
    I don't really want to get into a debate around the ideas in your paper, as what matters here is not the ideas themselves so much as how original they are.

    But I will tell you some of the things I was told at uni about getting something published.

    1. You are at a slight disadvantage, as this is a bit of a who-you-know club: if you get someone who is an expert in the field to support your argument, then you are on your way.

    2. Probably the biggest and hardest element is how original your ideas are.
    I read some of the ideas in the synopsis, and while I am no expert, I know the subject matter to be fairly well talked and written about; Antonio Damasio has written and spoken on it in great detail.
    Like a lot of people, you may have come up with the ideas in your paper yourself, but you may or may not have been the first to document or publish them. (This is the hardest part.)

    Universities will not be interested in something that has been churned out before; it will need to challenge existing theories and not be similar to pre-existing challenges.

    Generally, if you publish something that is too similar to something someone else has published, it can be embarrassing for the university, no matter how obscure the overlap.


This discussion has been closed.