Decisions

Published March 16, 2022

The factors that influence the decisions we make are limited in number. This is something I've already brought into discussion in the past. There is this idea floating in the air that we are self-aware and that machines can't possess this self-awareness. If you ask me, I would trim the expression ‘self-aware’ to plain ‘aware’. Awareness means that you take into account all relevant factors when making a decision. Awareness means educated decision making.


Comments

JoeJ

I'm sure almost any decision we make is also influenced by unawareness, the unconscious mind. Maybe that's important for understanding intelligence.

I mean, if we try to self-analyze, we reduce our thoughts to logic models we could eventually model with a computer program.
But at the same time, if we do this, we also ignore the unconscious part. So maybe we block the path to enlightenment by trying too hard.

It's quite an old question among AI researchers - ‘Does true intelligence require consciousness / self-awareness? Or not?’
But the question is just rhetorical. We cannot define any of those terms in that question, so it's probably pointless.

The problem with AI is that the problem itself is unspecified. We know we can't do it, but we don't know precisely what it is we cannot do.

So I would think we need to work on the definition of the problem first. And regarding that, I can formulate a promising question:
‘What is the simplest problem which requires intelligence to solve?’ - Why can we not answer such a simple question? Can you? Anybody?
If we could answer this, we could work on AI for real. But personally I can't. If I try, I come up with a simple puzzle game, but the algorithm to play it won't be generally intelligent. I can generalize the algorithm so it can also tackle other, similar problems. ML would be an example. It replaces strict data structures and functions with flexible and dynamic pattern detection, curve fitting, and whatnot. I don't know, but it's not intelligent on its own. It's fed with intelligent input and mimics it; that seems to be all.
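
To make the curve-fitting point concrete, here is a minimal sketch (Python with numpy assumed; the data is made up):

```python
# "Learning" as curve fitting: the model mimics the samples it was fed.
import numpy as np

x = np.linspace(0.0, 1.0, 20)     # made-up inputs
y = np.sin(2.0 * np.pi * x)       # the "intelligent input" being mimicked

coeffs = np.polyfit(x, y, deg=5)  # fit a degree-5 polynomial to the samples
model = np.poly1d(coeffs)

print(model(0.25))  # close to 1.0 inside the training range
print(model(2.0))   # way off outside it: mimicry, not understanding
```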

I feel stuck here. But I have a belief on what made us more intelligent than animals: communication (ignoring the chicken-and-egg issue of that claim).
This gave us the ability to empathize, so we learned to be self-aware as well, by consciously differentiating between you and me.
It also gave us the ability to formulate questions, so we learned to build mental models of what is and what could eventually be.
Which, in turn, allows a much more complex model of a potentially advantageous reality. So we can affect reality in clever ways to get more of what we want.
It turned out intelligence is beneficial to our primal motivation of survival and improving our standard of living. So we kept thinking harder, and this evolved a more and more capable human brain.
Notice: my dogs can't do any of this.
Notice as well: the flexible mental modeling skill can also turn against us. We can use it to scroll TikTok all day, smiling, but no longer using our skills to improve our standards. So it evolves backwards, and we become dumber again instead of smarter.

So maybe what we need is not definitions of terms or answers to questions, but a simulation of agents of competing species, which can communicate but also have the option to improve their communication on their own.
If we simulate life, eventually combined with the AI tech we already have, maybe general intelligence emerges even unintentionally. Maybe Esenthel was right assuming my web3 predictions would lead to Skynet.
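
A toy version of that simulation could look like this (a hypothetical sketch; Agent, speak and mutate are made-up names, not an existing framework):

```python
# Toy sketch: agents exchange signals and can extend their own "vocabulary",
# so communication improves on its own. Purely illustrative.
import random

class Agent:
    def __init__(self, vocab):
        self.vocab = list(vocab)              # signals this agent can emit

    def speak(self):
        return random.choice(self.vocab)

    def mutate(self):
        if random.random() < 0.05:            # rarely invent a new signal
            self.vocab.append(f"sig{random.randint(0, 10**6)}")

agents = [Agent(["a", "b"]) for _ in range(10)]
for step in range(1000):
    speaker, listener = random.sample(agents, 2)
    signal = speaker.speak()
    if signal not in listener.vocab:          # crude imitation learning
        listener.vocab.append(signal)
    speaker.mutate()

print(max(len(a.vocab) for a in agents))      # the shared vocabulary grows
```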

It's not that I'm so serious about our geeky amateurish AGI discussions, but I do think they can give us two outcomes: new science fiction stories, and better AI for our games.

March 16, 2022 10:26 PM
Calin

I'm sure almost any decision we make is also influenced by unawareness, the unconscious mind.

When I take a decision I know exactly why I'm taking it. Like I never take decisions for reasons I can't explain.

We are rational beings. We take decisions as a result of thinking. It's true at the same time that I might have a tough time expressing my reasons, but they are always logical and their origin can be traced back in time. People also take irrational decisions driven by feelings (which can be good feelings and bad feelings), but even feelings are driven by a logical process. Like when I'm taking a decision driven by a good feeling (gratefulness) rather than reason (like giving something back when I don't have to), I'm aware my answer is not logical, but I choose to be on the losing side for the greater good. So when I'm taking a decision driven by a good feeling, I'm being irrational on purpose. Being a Christian is being irrational on purpose.

Also, I could be driven by fear that was triggered by an event that happened in the past. That fear is a thought that can be expressed in logical terms, but sometimes the fear is there for so long that it becomes a theme of our personality, and if we are questioned about our reasons behind an action we made, we might give the wrong answer even if we have no intention of lying. It's always cause and effect; we don't take decisions for no reason.

What is the simplest problem, which requires intelligence to solve

1+1 or extracting the kernel from the nut shell would be the starting point for problems requiring intelligence to solve. 1+1 can be solved by a machine. Extracting the kernel could be solved artificially (by means of a machine) as well, but with a limited degree of autonomy; some human input would still be required.
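
The machine half of that split is easy to show (a sketch; crack_shell is a hypothetical stand-in for the human input that would still be required):

```python
print(1 + 1)  # 2 - a closed, fully specified computation

# Extracting the kernel is under-specified: perception and physical action
# are left out. crack_shell stands in for that missing human input.
def extract_kernel(crack_shell):
    shell_broken = crack_shell()
    return "kernel" if shell_broken else None

print(extract_kernel(lambda: True))  # works only once a human fills the gap
```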

March 17, 2022 08:45 AM
Calin

the AGI he develops

I don't know what that is, and I'm not sure if I want to know what that is, but when you quote it, it gets me thinking about augmented reality (which is something I understand).

March 17, 2022 09:53 AM
JoeJ

When I take a decision I know exactly why I'm taking it. Like I never take decisions for reasons I can't explain.

Unconscious decisions just slip below your radar. You do not notice them, so you never consider there might be a need for explanation.

1+1 or extracting the kernel from the nut shell

1+1 is no problem at all. Cracking the nut is one. But it requires knowledge. We need to know there is a tasty kernel inside, and it is possible to destroy the shell. Knowledge comes from memorizing observations. And we may have seen the other guy cracking the shell with a stone. It was an accident at first, but was memorized and adopted by the herd.

So that's more an application of learning, which ML can actually do quite well. It could successfully label the nut as a nut, then associate whatever related properties we have in memory, like cracking and eating it. There is no intelligence involved?
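
That ‘label, then associate’ step could be as dumb as a lookup (a hypothetical sketch; the classifier is stubbed out, no real vision model):

```python
# "Label the nut, then associate related properties from memory."
memory = {
    "nut":   ["tasty kernel inside", "crack shell with stone", "edible"],
    "stone": ["hard", "can crack shells"],
}

def classify(observation):
    return "nut"  # stand-in for an ML classifier; assume it labels correctly

label = classify("small round brown object")
print(label, "->", memory.get(label, []))  # association, not intelligence
```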

So do we need a harder problem? And, assuming we would increase the difficulty of our problems just gradually, would it turn out that intelligence (as I expect) does not exist, but learning is good enough?

March 17, 2022 10:32 AM