cross-posted from: https://lemmy.ml/post/14869314

“I want to live forever in AI”

110 points

Even if it were possible to scan the contents of your brain and reproduce them in a digital form, there’s no reason that scan would be anything more than bits of data in a digital system. You could have a database of your brain… but it wouldn’t be conscious.

No one has any idea how to replicate the activity of the brain. As far as I know, there aren’t any practical proposals in this area. All we have are vague theories about what might be going on, and a limited grasp of neurochemistry. It will be a very long time before reproducing the functions of a conscious mind is anything more than fantasy.

51 points

Counterpoint, from a complex systems perspective:

We don’t fully know, and can’t yet model, the details of neurochemistry, but we do know some essential features which we can model: action potentials in spiking neuron models, for example.

It’s likely that the details don’t actually matter much. Take traffic jams as an example. There are lots of details involved (driver psychology, the physical mechanics of the car, etc.), but you only need a handful of very rough parameters to reproduce traffic jams in a computer.

That’s the thing with “emergent” phenomena: they are less complicated than the sum of their parts, which means you can achieve the same dynamics using different parts.
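
To make that concrete, here is a minimal sketch of the Nagel-Schreckenberg cellular-automaton traffic model (illustrative only; the parameter values are arbitrary). A ring road, a speed limit, and one random-braking probability are enough for stop-and-go jams to emerge, with no driver psychology or vehicle mechanics modeled at all:

```python
import random

def nagel_schreckenberg(length=100, density=0.3, v_max=5, p_brake=0.3, steps=100):
    """Minimal Nagel-Schreckenberg cellular automaton: cars on a ring road."""
    # Random initial placement; each car is a (position, velocity) pair.
    positions = random.sample(range(length), int(length * density))
    cars = sorted((pos, 0) for pos in positions)
    for _ in range(steps):
        updated = []
        for i, (pos, v) in enumerate(cars):
            ahead = cars[(i + 1) % len(cars)][0]   # next car on the ring
            gap = (ahead - pos - 1) % length       # free cells in front
            v = min(v + 1, v_max)                  # accelerate
            v = min(v, gap)                        # brake to avoid a crash
            if v > 0 and random.random() < p_brake:
                v -= 1                             # random slowdown
            updated.append(((pos + v) % length, v))
        cars = sorted(updated)
    return cars

# Stop-and-go waves (jams) appear even though no rule says "form a jam".
jammed = sum(1 for _, v in nagel_schreckenberg() if v == 0)
print(f"{jammed} cars are standing still")
```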

32 points

Even if you ignore all the neuromodulatory chemistry, much of the interesting processing happens at sub-threshold depolarizations, depending on millisecond-scale coincidence detection from synapses distributed throughout an enormous, slow-conducting dendritic network. The simple electrical-transmission model, where an input neuron causes reliable spiking in an output neuron, comes from skeletal muscle, which served as the model for synaptic transmission for decades simply because it was a lot easier to study than actual inter-neuronal synapses.

But even that doesn’t matter if we can’t map the inter-neuronal connections, and so far that’s only been done for the ~300 neurons of the C. elegans ganglia (i.e., not even a ‘real’ brain), after a decade of work. We’re nowhere close to mapping the neuroscientists’ favorite model, Aplysia, which has only about 20,000 neurons. Maybe statistics will wash out some of those details by the time you get to humans’ 10^11-neuron systems, but considering how poorly current network models predict even simple behaviors, I’m going to say more details matter than we will discover any time soon.
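
To make the coincidence-detection point concrete, here is a toy sketch (my own illustration; the millivolt and time-constant values are made up, not measured): a leaky membrane that two identical sub-threshold inputs can only push over threshold when they arrive within a couple of milliseconds of each other.

```python
def membrane_response(spike_times_ms, epsp_mv=6.0, tau_ms=5.0,
                      rest_mv=-70.0, threshold_mv=-60.0,
                      dt_ms=0.1, t_end_ms=50.0):
    """Toy leaky integrator: each input adds an instantaneous EPSP that then
    decays back toward rest; returns True if the summed potential ever
    crosses threshold (the cell 'detects' the coincidence)."""
    arrival_steps = {int(round(s / dt_ms)) for s in spike_times_ms}
    v = rest_mv
    for step in range(int(t_end_ms / dt_ms)):
        v += (rest_mv - v) * (dt_ms / tau_ms)  # passive leak toward rest
        if step in arrival_steps:
            v += epsp_mv                       # synaptic input arrives
        if v >= threshold_mv:
            return True
    return False

# Two identical sub-threshold inputs: only near-coincident arrival fires the cell.
print(membrane_response([10.0, 11.0]))   # 1 ms apart  -> True
print(membrane_response([10.0, 25.0]))   # 15 ms apart -> False (EPSPs decay apart)
```

The point is just that the output depends on input timing, not merely on which inputs are wired to which cell.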

15 points

Thanks, fellow traveller, for punching holes in computational stupidity. Everything you said is true, but I also want to point out that the brain is an analog system, so the information in a neuron is infinite relative to a digital system (cf. digitizing analog recordings). As I tell my students: if you are looking for a binary event to start modeling, look to individual ions moving across the membrane.

2 points

Yes, the connectome is kind of critical. But other than that, sub-threshold oscillations can be and are being modeled. It also doesn’t really matter that we are digitizing here: fluid dynamics are continuous, and we can still study, model, and predict them using finite lattices.

There are some things still missing, but we very clearly won’t need to model individual ions, and there is lots of other complexity that won’t affect the outcome.
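
As a concrete version of the finite-lattice analogy (my own sketch, not from the thread): the diffusion equation is continuous, but a crude finite-difference update on a small grid already reproduces and predicts its behavior.

```python
def diffuse_1d(u, alpha=0.2, steps=100):
    """Explicit finite-difference update for the 1D diffusion equation on a
    discrete ring lattice: u_i <- u_i + alpha * (u_{i-1} - 2*u_i + u_{i+1}).
    Stable for alpha <= 0.5; refining the grid recovers the continuous PDE."""
    u = list(u)
    for _ in range(steps):
        u = [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[(i + 1) % len(u)])
             for i in range(len(u))]
    return u

# A sharp temperature spike on a 20-cell ring smooths out, as the PDE predicts.
ring = [0.0] * 20
ring[10] = 100.0
print([round(x, 1) for x in diffuse_1d(ring)])
```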

9 points

I heard a hypothesis that the first human-made consciousness will be an AI algorithm designed to monitor and coordinate other AI algorithms, which makes a lot of sense to me.

Our consciousness is just the monitoring system for all of our body’s subsystems. It is most certainly an emergent phenomenon of the interaction and management of different functions competing or coordinating for resources within the body.

To me it seems very likely that the first human-made consciousness will not be designed to be conscious. It also seems likely that we won’t be aware of the first consciousnesses, because we won’t be looking for them. Consciousness won’t be the goal of the development that makes it possible.

2 points

I’d say the details do matter, based on the PEAR laboratory’s findings that consciousness can affect the outcomes of chaotic systems.

Perhaps the reason evolution selected for enormous brains is that this is the minimum complexity needed to get a system chaotic enough to be sensitive to, and hence swayed by, conscious will.

3 points

PEAR? Where staff participated in trials rather than running double-blind experiments? Whose results could not be reproduced by independent research groups? Who were found to employ p-hacking and cherry-picking of data?

You might as well argue that simulating a human mind is not possible because it wouldn’t have a zodiac sign.

25 points

We don’t even know what consciousness is, let alone whether it’s technically “real” (as in physical in any way). It’s perfectly possible an uploaded brain would be just as conscious as a real brain, because there may be no physical thing making us conscious; it may simply be a result of our ability to think at all.
Similarly, I’ve heard people argue a machine couldn’t feel emotions because it doesn’t have the physical parts of the brain that allow that, so it could only ever simulate them. That argument has the same hole: we don’t actually know that we need those parts to feel emotions, or whether the final result is all that matters. If we replaced the whole “this happens, release this hormone to cause these changes in behavior and physical function” with a simple “this happened, change behavior and function”, maybe there isn’t really enough of a difference to call one simulated and the other real. They’re just different ways of achieving the same result.

My point is, we treat all these things (consciousness, emotions, etc.) like they’re special things that can’t be replicated, but we have no evidence to suggest this. It’s basically the scientific equivalent of mysticism, like the insistence that free will must exist even though all evidence points to the contrary.

8 points

Also, some of what happens in the brain is just storytelling. Take the moment the doctor hits your patellar tendon, just under your knee, with a reflex hammer: your knee jerks, but the signals telling it to do that never even reach the brain. The signal only gets as far as your spinal cord, which “instructs” your knee muscles.

But studies of similar situations have found that in many cases where the brain isn’t involved in making a decision, it makes up a story afterwards to explain why you did something, to make it seem like a decision rather than a mere reaction to a stimulus.

1 point

That seems like a lot of wasted energy, to produce that illusion. Doesn’t nature select out wasteful designs ruthlessly?

2 points

let alone if it’s technically “real” (as in physical in any way.)

This right here might already be a flaw in your argument. Something doesn’t need to be physical to be real. In fact, there’s scientific evidence that physical reality itself is an illusion created through observation. That implies (although it cannot prove) that consciousness may be a higher construct that exists outside of physical reality itself.

If you’re interested in the philosophical questions this raises, there’s a great summary article that was published in Nature: https://www.nature.com/articles/436029a

15 points

On the contrary, it’s not a flaw in my argument, it is my argument. I’m saying we can’t be sure a machine could not be conscious because we don’t know that our brain is what makes us conscious. Nor do we know where the threshold is where consciousness arises. It’s perfectly possible all we need is to upload an exact copy of our brain into a machine, and it’d be conscious by default.

3 points

That’s pseudoscientific bullshit. Quantum physics absolutely does tell us that there is a real physical world. It’s incredibly counterintuitive and impossible to fully describe, but it does exist.

1 point

Physical reality exists inside consciousness. Consciousness is the thing that can be directly observed.

15 points

Consciousness might not even be “attached” to the brain. We think with our brains but being conscious could be a separate function or even non-local.

14 points

I read that, and the summary is: “Here are current physical models that don’t explain everything. Therefore, because science doesn’t have an answer, it could be magic.”

We know consciousness is attached to the brain because physical changes in the brain cause changes in consciousness; physical damage can cause complete personality changes. We also have a continuous spectrum of observed consciousness, from the flatworm with 300 neurons to the chimpanzee with 28 billion. Chimps have emotions, self-reflection, and everything but full language. We can step backwards from chimps to simpler animals and it’s a continuous spectrum of consciousness; there isn’t a hard divide, there’s only less of it. Humans aren’t magical.

3 points

I understand your point. But science has also shown us over time that things we thought were magic were actually things we can figure out. Consciousness is definitely up there in that category of us not fully understanding it. So what might seem like magic now, might be well-understood science later.

Not able to provide links at the moment, but there are also examples on the other side of the argument that lead us to think that maybe consciousness isn’t fully tied to physical components. Sure, the brain might interface with senses, consciousness, and other parts to give us the whole experience as a human. But does all of that equate to consciousness? Is the UI of a system the same thing as the user?

1 point

And we know the flatworm and the chimp don’t have non-local consciousness because…?

I’m just saying, it didn’t seem like anyone was arguing that humans are special, just that consciousness may be non-local. Many quantum processes are, and we still haven’t ruled out the possibility of quantum phenomena happening in the brain.

5 points

Thank you for this. That was a fantastic survey of some non-materialistic perspectives on consciousness. I have no idea what future research might reveal, but it’s refreshing to see that there are people who are both very interested in the questions and also committed to the scientific method.

8 points

I think we’re going to learn how to mimic a transfer of consciousness before we learn how to actually do one. Basically we’ll figure out how to boot up a new brain with all of your memories intact. But that’s not actually a transfer, that’s a clone. How many millions of people will we murder before we find out the Zombie Zuckerberg Corp was lying about it being a transfer?

3 points

What’s the difference between the two?

3 points

A. You die and a copy exists

B. You move into a new body

7 points

You could have a database of your brain… but it wouldn’t be conscious.

Where is the proof of your statement?

7 points

Well, there’s no proof; it’s all speculative, and even the concept of scanning all the information in a human brain is fantasy, so there isn’t going to be a real answer for a while.

But just as a conceptual argument: how do you figure that a one-time brain scan would be able to replicate active processes that occur over time? Or would you expect the brain scan to be done over the course of a year or something like that?

5 points

You make a functional model of a neuron that can behave over time the way real neurons do. Then you get all the synapses and their weights. The synapses and their weights are a starting point, and your neuron model is the function that produces subsequent states.

The problem is that brains don’t have “clock cycles”, at least not as strictly as artificial neural networks do.
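
A rough sketch of that recipe (purely illustrative; the leaky integrate-and-fire update and the random weight matrix are stand-ins, not a claim about how an actual upload would work): the scanned synaptic weights are the initial data, and the neuron model is the function that maps the current state to the next one. Note the fixed time step dt, which is exactly the “clock cycle” real brains don’t have.

```python
import numpy as np

def step(v, spiked, W, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0, i_ext=None):
    """One update of a leaky integrate-and-fire network.
    v      : membrane potentials (the state), shape (N,)
    spiked : boolean vector of which neurons spiked on the previous step
    W      : synaptic weight matrix from a hypothetical scan, W[i, j] = j -> i
    Returns the next (v, spiked) pair: the model is just state -> next state."""
    syn_input = W @ spiked.astype(float)         # input from last step's spikes
    if i_ext is not None:
        syn_input += i_ext                       # external drive
    v = v + dt / tau * (v_rest - v) + syn_input  # leak toward rest, plus input
    spiked = v >= v_thresh                       # threshold crossing = spike
    v = np.where(spiked, v_rest, v)              # reset neurons that spiked
    return v, spiked

# Toy "connectome": 50 neurons with sparse random weights and a little drive.
rng = np.random.default_rng(0)
N = 50
W = rng.normal(0.0, 0.3, (N, N)) * (rng.random((N, N)) < 0.1)
v = np.zeros(N)
spiked = np.zeros(N, dtype=bool)
for t in range(100):
    v, spiked = step(v, spiked, W, i_ext=rng.normal(0.12, 0.05, N))
print(int(spiked.sum()), "neurons spiked on the final step")
```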

3 points

Why would bits not be conscious?

-13 points
Deleted by creator
23 points

ChatGPT is not conscious; it’s just a probabilistic language model. What it says makes no sense to it, and it has no sense of anything. That might change in the future, but currently it isn’t conscious.

7 points

And it doesn’t have any internal state of mind. It can’t “remember” or learn anything from experience. You always have to feed everything into the context, or stop and retrain it to incorporate “experiences”. So I’d say that rules out consciousness, at least without further systems extending it.
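
That statelessness is easy to see in how such models are typically used: the model itself keeps nothing between calls, so the caller has to resend the entire conversation every turn. A hypothetical sketch (generate() is a placeholder, not a real API):

```python
# Hypothetical sketch: `generate` stands in for any stateless LLM call.
# The model retains nothing between calls, so all "memory" lives in the
# transcript we keep appending to and resending.

def generate(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned reply here."""
    return f"(model reply to {len(prompt)} chars of context)"

transcript = []  # the only "memory" in the system

def chat(user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    # Every turn, the *entire* history is fed back in as context.
    reply = generate("\n".join(transcript))
    transcript.append(f"Assistant: {reply}")
    return reply

chat("Hello!")
chat("What did I just say?")  # only answerable because we resent the transcript
```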

2 points

That reads like something ChatGPT wrote.

1 point

Dumbed down, your brain is also just a probability model.

7 points
Deleted by creator
3 points

This is from a lecture by Roger Penrose.

Wikipedia has an article, and he has some videos on YouTube:

https://en.m.wikipedia.org/wiki/Orchestrated_objective_reduction

3 points

🥱

The only people with this take are people who don’t understand it. Plus, growth and decline are an inherent part of consciousness; unless the computer can be born, change, and then die in some way, it can’t really achieve consciousness.
