
Hackers and the brain...

Migrated topic.
Writing a couple of constants and imaginary functions for chemicals seems a little out of place, unless you want to somehow discover and compile LOTS of complex biochemical interactions of a human body. In that case I'd wait for quantum computing or something similar and hire an incredibly bright team. However, even leading scientists are unsure of how these things work entirely. Ask Dr. Nichols.

I would start with programming a 'brain' that learns from 'simulated' sensory input first. When or if that is accomplished, then you can maybe think about testing its function with simulated 'drugs'.
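As a point of reference, a "brain that learns from simulated sensory input" can be sketched in its most stripped-down form as a single perceptron trained on toy stimuli. This is purely illustrative (the data, names, and "bright stimulus" task are invented for the example), not anyone's actual proposal here:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Tiny perceptron: learns to classify 2-D 'sensory' inputs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Simulated sensory input: "bright" stimuli should trigger a response.
data = [((0.9, 0.8), 1), ((0.1, 0.2), 0), ((0.7, 0.9), 1), ((0.2, 0.1), 0)]
w, b = train_perceptron(data)
responds = lambda x: w[0] * x[0] + w[1] * x[1] + b > 0
```

After training, `responds` fires for bright stimuli it has never seen, which is the "learning from simulated input" loop in miniature.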
 
InMotion said:
I would start with programming a 'brain' that learns from 'simulated' sensory input first. When or if that is accomplished, then you can maybe think about testing its function with simulated 'drugs'.

That code was end-goal-type code, not something to be started with. Just an idea to translate the concept simply, nothing more.

As far as learning things from simulated input goes, that's a goal as well, but I intend to create a simulation of the human brain specifically, not just translate its functionality to software. The idea I have in mind is a simulated biological organism capable of learning and being subjected to experiences with a direct real-world analog, and of responding exactly as the real-world organism would.

Unfortunately this community is the closest I've ever gotten to discussing these topics with someone who actually knows what they're talking about.

Just watched the entire 5-part video on Hierarchical Temporal Memory used in computer learning, posted in another thread yesterday... I would do anything to be able to learn from the people working on that stuff, and get to a point where I could contribute.

This whole subject matter has me as excited as I first was when I started programming 12 years ago, when what I was learning was still completely magic.
 
Hacking the brain doesn't require DMT. There's a whole world of ways of doing it, but none are really pleasant. Traumatic experiences can alter the mind, and DMT is considered by some to be slightly traumatic :p . Now just experiment with the variables and log what happens; in a few years you will have a roadmap of your mind and know it well, or go insane. I chose both.
 
vovin said:
Hacking the brain doesn't require DMT.

DMT has yet to be very insightful to me in this regard; it's primarily been LSD. Shrooms/DMT/ketamine all make me feel way too spiritual to care about science, only philosophy. Course that's probably why I love them so much, because I never have spiritual feelings without them.
 
Interesting conversation. Perhaps as a stepping stone towards building a software version of the human brain you could build software versions of less complex forms of consciousness first? I imagine building the function library for an ant would be much easier than jumping right into the human experience. Perhaps even take it further back than that. Why not climb the entire evolutionary tree? Start with a single cell and build functional code on top of it, making it more complex with each iteration.

In a sense I think part of what this will end up coming down to is trying to program free will. Not to get too much philosophy into your science, but this experiment can only reach its logical conclusion if complex consciousness can be reduced to a set of functions. This is another reason I think to start with less complex brains. I won't claim to know if ants have less free will than humans, but they seem to act like it and so modeling that would seem to be easier.
 
The easiest way to gain an understanding of the malleability of the brain is to study abnormalities. A great introduction is the book Phantoms in the Brain; also look at studies of multiple personality disorder and other brain variations. It lets you know what the mind is capable of. From there you can work on mastering such things.
 
I do not know if it fully pertains, but in regards to manipulation of the brain: I once watched something, on Discovery maybe, and the only thing I remember at the moment is that a researcher set up an experiment with random people, who were separated and put in front of a computer.

There they had the task of just clicking on a red circle that would pop up in various places on the screen; when they clicked it, it disappeared.

They started out with each circle popping up at 5-second intervals, for maybe 5 minutes or so, and then, without them knowing, the interval was lowered to 4 seconds. What they found was that when the interval was changed, the tester would think he was hitting the circle, but in actuality the circle had already popped up and disappeared, leaving them to hit it one second late.

Well, this is what I remember from it ^ so don't fully quote me on it, lol. I think this study or test involved tricking and/or manipulating the brain in some manner that I don't fully remember; hopefully someone here just so happened to catch this show and knows what I am talking about.

But anyway, if you can make any sense out of what I wrote, I hope it helps toward your studies, because I was honestly very intrigued after reading the first few posts. I'll be following this thread and am interested in what you come up with.
 
I saw that documentary too, and my memories of it are just as vague as yours, but I do remember it was a very interesting show. I am going to have to look that one back up.
 
Are you familiar with the work of Ben Goertzel? He's a mathematician-turned-intelligence-researcher working to create a form of AI he calls "Artificial General Intelligence", which attempts to model the activity of the whole human brain, in contrast with AI research that aims toward implementing software capable of solving specific, bounded problems in an "intelligent" manner. There is a chest of gems in his books, online talks, etc.

I'm curious what language you would use to implement a simulation of the brain? It seems that a functional style of programming, such as is used in a LISP-derived language, would be necessary for modelling the self-referential features of the mind/brain: "I" can think about "myself", and can consider my behaviors, interests, properties of my mind/body, etc. In turn, "I" can think about {"myself" thinking about "myself"}, "I" can think about {"myself" thinking about {"myself" thinking about "myself"}}, etc. RAW suggests labelling the first "I" as I_1, the "I" who thinks about "me" as I_2, etc. A side question: what is the limit of the sequence I_1, I_2, etc., if there is one? Perhaps it's what the Vedas call ATMAN?
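Whatever one makes of the limit question, the regress itself is mechanically generable; a toy sketch (purely illustrative, and not specific to LISP) that builds the nested "thinking about myself" expression for any I_n:

```python
def self_model(depth):
    """Build the nested '"I" thinking about "myself"' expression to a given depth."""
    expr = '"myself"'
    for _ in range(depth):
        # Each wrap is one more level of the I_1, I_2, ... regress.
        expr = '{"myself" thinking about %s}' % expr
    return '"I" can think about ' + expr

print(self_model(0))
print(self_model(2))
```

The interesting (and open) question is not generating the strings but whether anything in the system actually *has* the experience each level describes.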

Another question: is the human brain capable of activities which a Turing-equivalent machine (ie one that behaves in a mechanical manner) cannot compute? How can one model the intuition necessary for solving all but the most basic of problems, and how can one model second-order or higher-level thinking?

Perhaps a fundamental insight akin to Turing's simple yet potent concept of the "Turing Machine" and "Universal Turing Machine" is necessary before we can implement software capable of modelling the whole brain.
 
lysergify said:
Are you familiar with the work of Ben Goertzel? He's a mathematician-turned-intelligence-researcher working to create a form of AI he calls "Artificial General Intelligence", which attempts to model the activity of the whole human brain, in contrast with AI research that aims toward implementing software capable of solving specific, bounded problems in an "intelligent" manner. There is a chest of gems in his books, online talks, etc.

No, I have not. I will definitely check him out; sounds like the information I'm looking for.

lysergify said:
I'm curious what language you would use to implement a simulation of the brain? It seems that a functional-style of programming such as is used in a LISP-derived language would be necessary for modelling the self-referential features of the mind/brain: "I" can think about "myself", and can consider my behaviors, interests, properties of my mind/body, etc. In turn, "I" can think about {"myself" thinking about "myself"}, "I" can think about {"myself" thinking about {"myself" thinking about "myself"}}, etc. RAW suggests labelling the first "I" as I_1, the "I" who thinks about "me" as I_2, etc. A side question: what is the limit of the sequence I_1, I_2, etc., if there is one? Perhaps it's what the Vedas call ATMAN?

I'm not sure which language would be used, but I know some traits I think would be very beneficial. I think a dynamically typed, object-oriented language would be required. The end result would also be a polymorphic application capable of rewriting its own code to account for behavior modification, but the application itself would only be able to modify the parts the human brain could modify in doing its job. Initial compilation would generate a blank-slate type of mind, and once it starts running, it will never be the same again due to self-modification.
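In a dynamically typed language, "rewriting the modifiable parts of its own behavior" can at least be gestured at with runtime rebinding. A minimal sketch, with entirely invented names (`Mind`, `respond`, `learn`) standing in for the real design:

```python
class Mind:
    """Toy 'blank slate' whose response behavior can be rewritten at runtime."""

    def __init__(self):
        # The designated modifiable slot; everything else stays fixed,
        # mirroring the idea that the brain can rewrite some parts of
        # itself but not others.
        self.respond = self._default_response

    def _default_response(self, stimulus):
        return "noted: " + stimulus

    def learn(self, new_behavior):
        self.respond = new_behavior


m = Mind()
before = m.respond("light")
m.learn(lambda s: "flinch at " + s)   # self-modification: the slot is rebound
after = m.respond("light")
```

Once `learn` has run, the instance never behaves as the freshly "compiled" one did, which is the blank-slate-then-divergence property in miniature.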

lysergify said:
Another question: is the human brain capable of activities which a Turing-equivalent machine (ie one that behaves in a mechanical manner) cannot compute? How can one model the intuition necessary for solving all but the most basic of problems, and how can one model second-order or higher-level thinking?

Perhaps a fundamental insight akin to Turing's simple yet potent concept of the "Turing Machine" and "Universal Turing Machine" is necessary before we can implement software capable of modelling the whole brain.

I believe a Turing machine is already theoretically capable of implementing every possible function. The whole idea, however, is that it's not so simple to implement the processes it's imitating.
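For concreteness, the machine itself really is simple to state; the hard part is the program, not the model. A minimal one-tape simulator, running a toy unary incrementer (the machine and its rules are illustrative, nothing more):

```python
def run_turing_machine(tape, rules, state="start", accept="halt", max_steps=1000):
    """Run a one-tape Turing machine. rules: (state, symbol) -> (new_state, write, move)."""
    tape = dict(enumerate(tape))            # sparse tape, '_' is the blank symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip("_")

# Toy machine: scan right past the 1s, then append one more 1 (unary increment).
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
result = run_turing_machine("111", rules)   # unary 3 -> unary 4
```

Everything computable can in principle be written as such a rule table; the question raised below is whether everything the brain does is computable in this sense at all.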
 
lysergify said:
Are you familiar with the work of Ben Goertzel?

Looks interesting.

Aetherius Rimor said:
Initial compilation would generate a blank slate type mind, and once it starts running, it will never be the same again due to self modification.

This brings up another interesting point: should the simulation start as a blank slate? Do you strictly believe in the tabula rasa? Perhaps a few variations on the simulation would be required (i.e. various genetic seeds).

This is a few stages ahead, but I would imagine that getting various seed versions to be able to talk to each other would lead to some interesting forms of learning.
 
onethousandk said:
This brings up another interesting point: should the simulation start as a blank slate? Do you strictly believe in the tabula rasa? Perhaps a few variations on the simulation would be required (i.e. various genetic seeds).

This is a few stages ahead, but I would imagine that getting various seed versions to be able to talk to each other would lead to some interesting forms of learning.

An initial blank slate that functions as anticipated would be ideal. However, the blank slate would probably be best modeled after a complete neurotypical human who has just finished cognitive development.

However, over the course of using the hypothetical application, you would start building "seeds" that could be used for more advanced simulations.

The blank slate also wouldn't be entirely blank. It would still have a host of other variables that coincide with certain cognitive development differences, whether caused by genetics or other acquired behavioral tendencies, assuming we model a brain finished with cognitive development.

Later on, it would be possible to create a "true" blank slate like that of a newborn, but in doing so you have to be able to simulate the growth/development of the brain in a correct manner. That would be a far more difficult functionality to implement, and for an already complex and lofty goal, not something that'd be done first.
 
I think you guys are dancing around the concept of 'ego death'. Such a thing can be accomplished through several avenues, usually bringing the individual considerable stress.
 
Aetherius Rimor said:
I believe a turing machine is already theoretically capable of implementing every possible function. The whole idea however is that it's not as simple to implement these processes it's imitating.

It's not clear whether this is true - a Turing machine is theoretically capable of implementing every algorithmic or computable process. It's not clear that it's possible to create a Turing machine to do everything we're capable of doing, such as proving mathematical theorems, composing symphonies, or interacting with other people (not to mention "DMT entities"). There are reasons to suspect our brains are more powerful than a Turing machine, such as the difficulty in emulating the "intuition" we use for human activities such as math and music, the impossibility of constructing an algorithm for deciding whether any mathematical statement is true or false (ie Gödel's Incompleteness Theorems), and, more fundamentally, the fact that our hardware/bodies are non-discrete, whereas Turing machines are discrete in nature.

The theory suggests that in practice, to emulate the brain, perhaps we should be looking for ways to improve *in theory* on a Turing machine. Other than the attempts to implement "quantum computing" using the states of an electron, I don't know what work is being done in this regard - though apparently a quantum computer would be computationally equivalent to a Turing machine, and thus wouldn't be more powerful - though it would be more potent/efficient - than a binary computer. BRB
 
Aetherius Rimor said:
But the TLDR

A part of the human brain is capable of performing the complex mathematical formulas to take arcs, velocity, and gravity all into account to predict the location of a thrown object, to determine where to place your hand to catch it.

A tensor equation is the underlying formula to compute that. You can unconsciously compute these formulas, but you can not consciously without great effort (it's college-level trig I -think-?) Tensor - Wikipedia


Are you sure it's not just plain everyday experience or something else? I know your explanation is more interesting, but is it true?
 
Tona said:
Aetherius Rimor said:
But the TLDR

A part of the human brain is capable of performing the complex mathematical formulas to take arcs, velocity, and gravity all into account to predict the location of a thrown object, to determine where to place your hand to catch it.

A tensor equation is the underlying formula to compute that. You can unconsciously compute these formulas, but you can not consciously without great effort (it's college-level trig I -think-?) Tensor - Wikipedia


Are you sure it's not just plain everyday experience or something else? I know your explanation is more interesting, but is it true?

I don't know if this is the argument Aetherius is making, but I don't think the brain is literally running through the tensor equation every time you catch a ball. In a sense it is, in that it makes judgements based on the same information that goes into the mathematical representation, but I don't think the brain is actually crunching numbers. I think a closer metaphor is that your brain takes all of the experience it has with predicting arcs (the previous times you tried to throw/catch a ball, etc.) and spits a guesstimated answer into your hand. This answer then gets revised as you watch the object come towards you and adjust your position in relation to it. You are going through the tensor equation, but with memories instead of numbers.
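That "guesstimate, then revise as the ball comes in" picture can be sketched as a prediction plus a feedback loop. The physics here is ordinary projectile kinematics and the gain/step numbers are invented for illustration; no claim that the brain computes this way:

```python
def catch(x0, vx, vy, g=9.81, steps=10, gain=0.5):
    """Predict where a projectile lands, then iteratively move a hand toward it."""
    # Rough initial prediction: time aloft for a launch from ground level.
    t_flight = 2 * vy / g
    landing = x0 + vx * t_flight
    hand = 0.0
    for _ in range(steps):
        # Each 'glance' closes a fraction of the remaining error,
        # like revising the estimate as the ball approaches.
        hand += gain * (landing - hand)
    return hand, landing

hand, landing = catch(x0=0.0, vx=5.0, vy=9.81)
```

The point of the loop is that a crude first guess plus repeated correction converges on the target without ever solving the trajectory exactly at any single step - closer to "memories instead of numbers" than to one closed-form calculation.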
 
lysergify said:
There are reasons to suspect our brains are more powerful than a Turing machine, such as the difficulty in emulating the "intuition" we use for human activities such as math and music,

Such as? Software is capable of recognizing beats per minute, and of separating the incoming audio into its component frequencies to identify patterns in the music and act or predict on those. I've spoken to people with education in the mathematical construction of music who could explain numerically how to invoke certain emotions using a set of notes with defined frequencies for each, and others who have shown me the formulaic breakdown of a musical track that the majority of our music follows.

Every really intelligent musician I've spoken to says it's all math and correlating the desired emotional responses to the frequencies/beats/melodies. Sure, there is evolution and there are discoveries made, but that can also be implemented in an AI using supervised learning. Have it follow the patterns/formulas for musical track creation, experiment with various modifications/examples, and then have humans respond with a numerical rating system, and you now have an AI capable of discovering new musical techniques or ideas. If you have an AI that can mimic the human response perfectly, it could easily be unsupervised learning, as it could test its experiments itself.
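The "separate audio into its component frequencies" part is genuinely mechanical. One minimal way to do it in pure Python is the Goertzel algorithm, a single-bin DFT; the signal below is a synthesized A440 tone and the sample rate is chosen for the example:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Relative power of one frequency in a signal (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2     # second-order resonator
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

rate = 8000
# One second of a pure A440 sine wave.
signal = [math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]
a440 = goertzel_power(signal, rate, 440)
c523 = goertzel_power(signal, rate, 523)
```

The detector reports overwhelmingly more power at 440 Hz than at 523 Hz, which is the building block behind note/beat recognition; the hard open question is the emotional-response mapping layered on top, not this part.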

lysergify said:
the impossibility of constructing an algorithm for deciding whether any mathematical statement is true or false (ie Gödel's Incompleteness Theorems),

Reading the wiki page, but I'm not quite sure how you mean to refer to it. Are you saying that we can determine if something is inherently true, but a computer can not, since it can't use an algorithm to do so? Unfortunately I am unable to comment on this issue since I don't understand the concept fully, and even if I did, it still might just be a very complex but solvable problem through some abstraction. It also could be that all these "unprovable truths" are learned through observation and formalized when an attempt at creating a mathematical model of the universe is performed.

Too complex of a subject for me to do anything but speculate on, however.

lysergify said:
and, more fundamentally, the fact that our hardware/bodies are non-discrete, whereas Turing machines are discrete in nature.

What exactly is your definition of discrete in this statement? My understanding of discrete is that there is not an infinite number of possible states. I'd consider them both technically continuous, not discrete. I don't see how they're different from each other no matter which way you define them.

lysergify said:
The theory suggests that in practice, to emulate the brain, perhaps we should be looking for ways to improve *in theory* on a Turing machine. Other than the attempts to implement "quantum computing", using the states of an electron I don't know what work is being done in this regard- though apparently a quantum computer would be equivalent to a Turing machine and thus wouldn't be more powerful - though would be more potent/efficient - than a binary computer. BRB

What other capabilities of the brain are you aware of that you think a Turing machine is incapable of performing?
 
Aetherius Rimor said:
lysergify said:
There are reasons to suspect our brains are more powerful than a Turing machine, such as the difficulty in emulating the "intuition" we use for human activities such as math and music,

Such as?

What other capabilities of the brain are you aware of that you think a Turing machine is incapable of performing?

I've been thinking about this a bit more, and I'm now more agnostic about the possibility that the human brain is equivalent to a Turing machine. This would be strange, since the Turing machines are countably infinite, ie: they can be listed in order, based on the starting sequence and list of instructions/moves; if the human brain is equivalent to a Turing machine, then it (or a Turing machine isomorphic to the human brain) appears somewhere on this list. There is no known way to show whether the human brain is or is not Turing-equivalent (ie: whether everything the human brain can do is in effect a mechanical operation).

However, every instance of a computer/robot emulating the human brain - such as the examples you give of software recognizing BPM, or invoking certain emotions using notes with pre-defined frequencies - implements what David Chalmers describes as the "easy problems of consciousness", such as "the ability to discriminate, categorize, and react to environmental stimuli; the focus of attention; the difference between wakefulness and sleep", ie: those capabilities which can be explained in terms of physical mechanisms. Approaches to AI such as those you cite show that we are able to implement these attributes of consciousness in a machine.

It is what Chalmers describes as the "hard problem of consciousness", ie: experience or subjectivity, that I think is where the difficulty arises in implementing a human brain mechanically. Is it possible to create a computer that experiences itself as an "I", and if it is possible, how can we be sure we've done it?
 