Quote from: jpinkman on January 22, 2013, 11:51 am

A computerized gadget, say using nanotechnology, could be programmed to perform all of the functions of bacteria, but it's hard to consider it self-aware, or is it? How about a human android with advanced artificial intelligence that gives it all of our sensory perceptions and the ability to make intelligent decisions based on reason and logic like us? Is it sentient? Is it self-aware? Why not? I think these are the sorts of questions -- what is sentience? what is consciousness? what is awareness? what is this very concept of "qualia," as they call it in philosophy -- that hold the key as to whether the soul can survive death.

I agree completely.

The way I see it, there are two possibilities. The first is that consciousness is the result of a form of matter or energy that we've had no other evidence of yet. It's entirely possible, but the problem is that it doesn't really matter what sort of energy or matter consciousness arises from: presumably, if that energy or matter changes form, our consciousness ends. Now maybe there's a kind of energy or matter that simply never changes form. Sure, it's possible... I find it unlikely, personally, but it's entirely possible. Then consciousness, in some form, would continue. But what sort of consciousness there could really be without the human brain, I'm not sure. Not much, I'd wager. And certainly not one that feels like anything, I should think. Drugs and their effects on our feelings are about all the proof of that I really need, personally.

The other possibility is that somehow, in some bizarre and almost wholly inexplicable way, extreme complexity gives rise to consciousness. This is such a vague, bizarre, ill-defined concept that it's almost like saying the tooth fairy is real after all; but logically, it's the only other possibility I've ever been able to come up with.
So let's say this is the case, and somehow immense complexity and neuronal interaction give rise to our consciousness. Then when our brain dies, we die. Period.

The second possibility has some... very... well, "uncomfortable" implications. If it's true, then apparently at some point a totally lifeless, unconscious object suddenly becomes "alive" in a way. Just add one more little connection, one more tiny little piece of complexity, and BOOM, IT LIVES! ...seems kind of unlikely. Possible, but unlikely. So if that isn't the case, then all objects, all matter, everything is conscious on a tiny, imperceptible, almost meaningless scale. But that's... sort of... I mean... the CPU in my computer is conscious, just a tiny itsy-bitsy little bit? Well damn, sucks to be it, I guess. So what about planets? I mean, is there something to the whole "Earthmother" thing after all? The implications are just... well, if it's complexity itself and not some additional constraint, I don't care for the implications. But who gives a fuck if I care for them, really -- what's true is true. Just a personal comment.

I've always been somehow fascinated by the concept of paradox. There almost seems to be something magical about self-reference. "This sentence is a lie." That isn't true. But it isn't false, either. It just... well, it's a paradox. But how can something be neither true nor false? And just what makes it neither true nor false? It appears to be the quality of self-reference. Any formal system that's capable of performing the equivalent of basic arithmetic, and that also allows for self-reference, will always contain such statements (that's roughly Gödel's incompleteness theorem): statements that can be formed using the perfectly valid rules of the system, but that the system has no answer for.

We're self-aware. We can consider ourselves. Almost like... self-reference. Complexity + self-reference = consciousness? LOL... Fuck if I know... I'm not done pondering it yet.
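Just to make the liar sentence concrete (this is purely my own little illustration, not anything from the quote): treat the sentence's truth value as a boolean x. "This sentence is a lie" asserts its own falsehood, i.e. x == (not x), and a quick brute-force check shows no boolean assignment satisfies that -- which is exactly the sense in which it's neither true nor false.

```python
def consistent_assignments():
    """Return every boolean truth value x that the liar sentence could
    consistently have. The sentence claims it is false, so a consistent
    value must satisfy x == (not x)."""
    return [x for x in (True, False) if x == (not x)]

# Neither True nor False works, so the list comes back empty.
print(consistent_assignments())  # []
```

By contrast, a non-self-referential sentence like "2 + 2 = 4" would check out fine under exactly one assignment; it's the self-reference that empties the list.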
But I like this thread -- thought I'd ramble and maybe get some comments or something.