Honestly, I believe the discussion of consciousness, except when used as a convenient descriptor/analogy for a function or state of an intelligent (artificial or not) entity, should be left out of the science of artificial intelligence and left to the philosophers, at least for now.
Work should be focused on creating a machine/software entity capable of consistently passing the Turing test when tested against humans who have never spoken to it before. Once that is accomplished, then move towards a machine/entity that can maintain a continued "friendship" while still passing the Turing test, demonstrating its ability to learn and make use of the new information involved in the ongoing relationship between it and the tester.
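As a rough illustration of what "consistently passing" could mean operationally, here is a minimal sketch of a repeated blind-trial setup, assuming each trial uses a fresh judge who has never spoken to the AI before; the Judge interface and every name here are hypothetical, not an established benchmark:

```python
import random

class Judge:
    """Hypothetical interface for a human judge who has never spoken to the AI."""
    def ask(self, label, transcript): ...
    def guess_human(self, transcripts): ...  # returns "A" or "B"

def blind_trial(judge, ai, human, rounds=10):
    """One blind trial: the judge chats with two anonymous candidates,
    then guesses which one is the human."""
    assignment = {"A": ai, "B": human}
    if random.random() < 0.5:  # randomize which label the AI sits behind
        assignment = {"A": human, "B": ai}
    transcripts = {"A": [], "B": []}
    for _ in range(rounds):
        for label, candidate in assignment.items():
            question = judge.ask(label, transcripts[label])
            transcripts[label].append((question, candidate.reply(question)))
    # The AI "wins" this trial if the judge mistakes it for the human.
    return assignment[judge.guess_human(transcripts)] is ai

def fooling_rate(ai, judge_human_pairs):
    """"Consistently passing" could mean this rate staying near chance (~0.5)
    across many trials, each run with a judge the AI has never met."""
    results = [blind_trial(judge, ai, human) for judge, human in judge_human_pairs]
    return sum(results) / len(results)
```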
*smuggles philosophy into science forum...*
Personally, until evidence shows something to the contrary, I'm of the belief that the consciousness we experience as "ourself" is simply a by-product (whether intentional or not) of our underlying biological machinery, and that there is nothing special about it.
Even if a hypothetical "consciousness" object could, for instance, be transferred from one physical object to another without carrying over with it all the memories/experiences/learned information/behaviors, etc., of the brain, it wouldn't even recognize it had been transferred to a new object.
That raises the next question: if it logically could not determine it had been transferred, because all the state data is stored in the previous physical form, then you could hypothetically transfer all the state data from one physical entity to another and "convince" it that its consciousness had been transferred from another physical body.
Which would lead to the logic-based conclusion that consciousness is not a separate entity at all, but nothing other than the abstract representation/view of all the information/actions/behaviors of the underlying physical machine.
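To make the thought experiment concrete, here is a toy sketch, assuming (as argued above) that an entity's sense of self is a pure function of its stored state; the Agent class and its methods are purely hypothetical illustrations:

```python
import copy

class Agent:
    """Toy stand-in for a physical entity whose sense of 'self'
    is computed purely from its stored state (memories)."""
    def __init__(self, memories):
        self.memories = list(memories)

    def who_am_i(self):
        # The identity report is a pure function of stored state;
        # nothing else contributes to it.
        return f"I am the one who remembers: {self.memories}"

original = Agent(["learned to ride a bike", "spoke with the tester yesterday"])
blank_body = Agent([])

# "Transfer" the full state from one body to the other.
blank_body.memories = copy.deepcopy(original.memories)

# From the inside, the copy's self-report is indistinguishable from the
# original's: nothing in its state marks it as having been transferred.
print(original.who_am_i() == blank_body.who_am_i())  # True
```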