
Predicting the Future Changes the Future


khan2012

I feel that anything that can hold a moderately decent conversation with a human could pass for intelligent. I had an "AI" program called HAL that I downloaded, and one of the features was that it could be used in an instant messenger program. It also learned to communicate from whoever was talking to it. I used it on my cousin and several of my co-workers for weeks, and they were none the wiser. I finally had to clue them in, and it took a while to get them to believe me.
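Chatterbots of that era typically learned by recording which words follow which in whatever text they were fed, then replaying those transitions. Here is a minimal, hypothetical sketch of the idea in Python (a toy bigram model, not the actual HAL program mentioned above):

```python
# Toy sketch of a chatbot that "learns from whoever talks to it"
# by recording bigram (word-pair) transitions and replaying them.
import random
from collections import defaultdict

class ChatBot:
    """Learns word transitions from every sentence it is told."""

    def __init__(self):
        # Maps a word to the list of words seen immediately after it.
        self.transitions = defaultdict(list)

    def learn(self, sentence):
        words = sentence.lower().split()
        for current, following in zip(words, words[1:]):
            self.transitions[current].append(following)

    def reply(self, seed_word, max_words=8):
        # Walk the learned transitions, picking a random continuation.
        word = seed_word.lower()
        out = [word]
        while len(out) < max_words and self.transitions[word]:
            word = random.choice(self.transitions[word])
            out.append(word)
        return " ".join(out)

bot = ChatBot()
bot.learn("time travel is fun")
bot.learn("travel is hard work")
print(bot.reply("time"))
```

The more you talk to it, the more transitions it accumulates, which is why such a program can start to sound like the person it chats with.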

 

It provided me some good entertainment while it lasted.

 

 


Some people are more left brain oriented and some more right brain oriented.

 

No one mentioned the hemispheres or how they work together, interesting.

 

Creativity vs. logic =

 

potential vs. known variable calculation?

 

Would anyone buy a painting created and thought of 100% by a machine?

 

Just curious.

 

(yes I realize just posting "thought of" above will open up huge criticisms, this is in off topic now after all, indulge me - though if a person never thought of the end result...who would we say did?).

 

 


I read that when Newton lectured at university, no one would show up because they all thought he was nuts.

There's an article in the current Scientific American that I glanced at a few hours ago which outlines some of the ideas of a quantum physicist who went in the direction of Newton rather than Einstein. The physicist proposed a theory which "unhooked" time from space (so relativistic space-time is set aside), to provide a more Newtonian approach.

 

Relativity may be valid as far as it goes, but no one has been able to really go much beyond it. Many believe that we are on the verge of a seminal breakthrough, but there are apparently some blocks in our thinking that have trapped progress.

 

:)

 

 


So now that you have unequivocally answered my question, the follow-up should be expected:

 

Precisely where in the list of intelligent entities do you draw the "line" of the "threshold" between what is intelligent and what is not?

 

 

I thought long and hard.....and came up with the following criteria for intelligence.

 

1) The ability to purposefully ( with a specific goal in mind ) solve problems without external instruction to do so.

 

2) The ability to respond to problems with a response that is not pre-programmed.

 

3) The ability to ponder what intelligence is.

 

Some birds can perform tasks that comply with (1) and (2). I would say there are almost two levels of intelligence: a lower level, defined by such creatures ( or machines ) that can match (1) and (2)..........and a higher level (3) for those entities that are sufficiently 'self aware' to be able to ponder such things as 'what is intelligence ?'.....'how the hell did I get here ?'...and so on.

 

I think this division into two removes the need for a linear scale. So intelligence becomes a case of passing certain thresholds.

 

 


I thought long and hard.....and came up with the following criteria for intelligence.

Well, Twighlight, you are a pretty durned smart guy... but I am not surprised that even after your deep thought you couldn't come up with definitions of intelligence that do not have problems. But it is not easy. I don't think anyone has really come up with satisfactory definitions as yet, so it is nothing to be ashamed of.

 

1) The ability to purposefully ( with a specific goal in mind ) solve problems without external instruction to do so.

Still a bit nebulous, although I would presume we could agree on what does (and does not) constitute "external instruction". The need to solve any problem comes from an assessment of external conditions, and I assume you do not mean that. I assume you mean only some other intelligent being defining the problem set and potentially valid solution criteria.

 

2) The ability to respond to problems with a response that is not pre-programmed.

This is problematic and simply would not pass muster with cognitive scientists. The main reason is that you are not taking the "black box" approach to evaluating intelligence independent of the means for achieving it. You are prescribing what is in the box (or rather, in this case, what cannot be in the box). For example, a great deal of your autonomic nervous system is "pre-programmed" (in a genetic sense), and one can argue that the functions it performs are partially necessary to exhibit intelligence at conscious levels. But the biggest problem with this part of your definition is that you cannot adequately define something as visceral as intelligence by exclusion. Worse still, this criterion positively excludes any belief that one can create something intelligent. Hence, you are eliminating anything created by man as having any hope of being deemed intelligent before you even have a de facto definition for when it has been achieved. That won't fly.

 

3) The ability to ponder what intelligence is.

This is problematic for what should be an obvious reason: circular logic. But one would also need a sufficiently rigorous definition of what constitutes "pondering". For example, I have always maintained "exploring relationships between objects and their functions" as being one element of what is needed to exhibit intelligence. That could well be a satisfactory example of "pondering", and many AI programs possess such capabilities.

 

I think this division into two removes the need for a linear scale. So intelligence becomes a case of passing certain thresholds.

I respect your belief, but I am not buying it. It is still too flimsy. And I know of several AI programs that could (arguably) pass your three criteria, leaving out your need to exclude a certain type of action-reaction connection (pre-programmed). And just for the sake of completeness of argument, let's review what wiki says regarding definitions of intelligence:

 

Wikipedia: Definitions of Intelligence

 

"Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen somewhat different definitions."

 

 

 

snip

 

 

 

"A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on", "making sense" of things, or "figuring out" what to do."

 

Even these are terribly nebulous. One wonders if even the ability to recognize intelligence at all is itself biased by possessing intelligence.

 

RMT

 

 


Without thinking too hard about it, my reaction would be to say;

 

Potential factoring, logical deduction, communication, and unassisted speculation which stores information in a memory bank and utilizes it in a loop, starting over - which over time grows, evolves, and uses the same basis to become more complex.

 

Although if comparing human qualities to a machine, most would probably say something about emotion, which to logical people, in terms of efficiency, would probably be undesirable in the larger scheme of things to get things done.

 

Some would say to sort through "garbage" is a complete waste of time - no pun intended, though sometimes there was something that could have been recycled...not necessarily useful in its present form, but with some work can be used nonetheless.

 

As well, I've heard a German Shepherd has the approximate comprehension of a four-year-old human child, and if you asked most law enforcement officers or military that have them as part of their teams, they'd probably tell you that with a lot of work and bonding that is a very low approximation.

 

Sometimes "gut" reaction is better than second guessing but then is that not partly a function of instinct?

 

I personally think my only redeeming quality at times, when it comes to complex problems or situations that I am perhaps not familiar with, is having an extremely open mind and realizing that, yes, I may very well be wrong.

 

That tends to stop any bias from forming and placing limitations on seeing a larger scope of potential.

 

Bias-forming habits are what hold us back as humans in terms of intelligence as a whole at times, I would imagine.

 

Though that's just my opinion and it's very open to change :)

 

Is it possible to not have a bias about anything? Probably not, but I'm referring to exploring the unknown or an innovation.

 

Interesting food for thought. Sometimes you guys make my mind very tired, but I usually come away with more than I arrived with, thanks, I mean that.

 

 


Still a bit nebulous, although I would presume we could agree on what does (and does not) constitute "external instruction". The need to solve any problem comes from an assessment of external conditions, and I assume you do not mean that. I assume you mean only some other intelligent being defining the problem set and potentially valid solution criteria.

 

Specifically, what I mean is without any instructions, hints, or solution assistance from an external intelligent device or person. In other words, the intelligent 'solution' to a problem is arrived at entirely within the framework of the entity's own cognitive abilities. I think this could be furthered by adding the proviso that the entity actually arrive of its own accord at the conclusion that there IS a problem to be solved. A good example of this would be the ability of crows to drop pebbles into a container of water to raise the water level and get to a worm that is floating on the surface.
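The crows' pebble trick works by simple displacement: each submerged pebble raises the level by its own volume divided by the container's cross-section. A quick sketch of the arithmetic (the numbers here are made up purely for illustration):

```python
def water_level(container_area_cm2, water_volume_cm3,
                pebble_volume_cm3, n_pebbles):
    """Level = total volume (water + submerged pebbles) / cross-section."""
    total = water_volume_cm3 + n_pebbles * pebble_volume_cm3
    return total / container_area_cm2

# Hypothetical tube: 10 cm^2 cross-section, 50 cm^3 of water, 2 cm^3 pebbles.
print(water_level(10, 50, 2, 0))   # 5.0 cm, before any pebbles
print(water_level(10, 50, 2, 10))  # 7.0 cm, worm within reach
```

The point being that the crow arrives at this solution with no one ever showing it the physics.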

 

This is problematic and simply would not pass muster with cognitive scientists. The main reason is that you are not taking the "black box" approach to evaluating intelligence independent of the means for achieving it. You are prescribing what is in the box (or rather, in this case, what cannot be in the box). For example, a great deal of your autonomic nervous system is "pre-programmed" (in a genetic sense), and one can argue that the functions it performs are partially necessary to exhibit intelligence at conscious levels. But the biggest problem with this part of your definition is that you cannot adequately define something as visceral as intelligence by exclusion. Worse still, this criterion positively excludes any belief that one can create something intelligent. Hence, you are eliminating anything created by man as having any hope of being deemed intelligent before you even have a de facto definition for when it has been achieved. That won't fly.

 

Hmm. I think what I meant more clearly was 'without any pre-programmed solution to that particular problem'. Obviously, if you program a device to do what the crow does with the pebbles, the device is simply following a set of instructions and I really don't see how that is intelligence. By the same token..I would question whether a machine that can assemble a car is 'intelligent'. It is merely following a set of orders. If it seems too complex to be lacking intelligence....then consider a weaving loom. It can create the most amazingly complex patterns, but nobody would argue that it has its own intelligence.

 

What I am fundamentally getting at here is the question....in what sense is any pre-programmed activity ( no matter how complex and seemingly intelligent ) any different to the weaving loom ? My argument is essentially one against complexity being a criterion for intelligence, because there is an invariable tendency to think that the more complex a problem is, the more intelligent a device must be to solve it. Sure...let's go into the 'black box' and argue that a supercomputer that can ( as is now possible ) 'recognise cars, animals, people, etc' is merely shuffling 0s and 1s in memory according to a preset algorithm. In what sense is it doing anything different to the weaving loom ? My hypothetical weaving loom is a key instrument in deciding what is intelligent.
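The loom analogy can be made literal: a Jacquard-style loom simply maps each punch-card row to a fixed set of lifted threads, with no internal model of the overall pattern. A toy sketch of that mechanism (my own illustrative encoding, with 1 = hole = lifted thread shown as '#'):

```python
def weave(cards):
    """A 'loom' that blindly executes punch cards: each card is a row of
    0/1 holes, and each hole simply lifts the corresponding warp thread.
    Pure table lookup - nowhere does it represent the pattern as a whole."""
    return ["".join("#" if hole else "." for hole in card) for card in cards]

for row in weave([(1, 0, 1, 0), (0, 1, 0, 1), (1, 0, 1, 0)]):
    print(row)
```

However complex the card deck, the mechanism never changes - which is exactly the point being argued.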

 

This is problematic for what should be an obvious reason: circular logic. But one would also need a sufficiently rigorous definition of what constitutes "pondering". For example, I have always maintained "exploring relationships between objects and their functions" as being one element of what is needed to exhibit intelligence. That could well be a satisfactory example of "pondering", and many AI programs possess such capabilities.

 

Well...'pondering' was simply a replacement for self-awareness, as I was trying to bring in consciousness without seeming to do so ( the ability to evade the issue would be a key sign of intelligence...lol ). What I mean specifically is that a truly intelligent device must not just solve a problem....but be AWARE that it is solving a problem. This is what separates us from the weaving loom or the supercomputer. Bearing in mind the weaving loom...I am of the view that only self-awareness can break that mechanistic 'following preset instructions' dilemma.

 

But let me turn all this around and pose you with a problem. Let's take the weaving loom, which is set up to make a carpet. A really complex carpet. The operator of the loom has no idea what the carpet will look like.....he just applies the right colour threads at the right time, and operates the foot pedals to make the loom work. The end result is an amazingly complex carpet, which clearly must have required intelligence to bring about. Yet the loom had no idea what it was doing, and the operator merely pressed the pedals and applied the threads, with no idea of the end result. So......where in this did the intelligence lie ?

 

It seems to me that to answer THAT question is to answer what intelligence is.

 

 


Specifically, what I mean is without any instructions, hints, or solution assistance from an external intelligent device or person.

I think you're being too hardline on the concept, but that's just my POV.

 

Do a google search about a man raised as a chicken. ;)

 

Quote:

 

When he was found in a chicken coop in 1979, 'Sujit would mostly hop around like a chicken, peck at his food, on the ground, perch and make a noise like the calling of a chicken. He would prefer to roost on the floor to go to sleep rather than sleep in a bed.'

 

So again, what constitutes intelligence without input ?

 

... it appears our memory banks do require some instruction...

 

 


I think you're being too hardline on the concept, but that's just my POV.

 

Intelligence is extremely hard to pin down.....so one needs as rigorous a definition as possible.

 

The real dilemma with intelligence is to isolate a component that is not the result of pre-programming or falsely perceived complexity.

 

My computing experience goes back to the days of paper tape and punch cards. When you actually physically SEE a program in such form....rather than nicely tucked away as 0s and 1s in computer memory.....you realise it's no different to the weaving loom instructions, which are often held in that same punch card format. This very much removes the 'black box' mystique that can often give a false perception of 'intelligence', because the equivalent of weaving loom instructions are nicely hidden away from view.

 

Anything that is simply automatically responding to holes in a piece of paper is clearly NOT intelligent......any more than some old-style music box with its pins rotating on a drum is a musician.

 

A crucial criterion is that intelligence requires not just solving a problem, but UNDERSTANDING the problem. I would say that is probably the single most important definition of intelligence. Anything that can solve a problem, but which has not fundamentally understood or perceived the nature of the problem.....is simply responding as per the weaving loom and cannot be said to be intelligent.

 

This is where I do not believe that the Japanese robot ASIMO is intelligent. Sure it can walk, talk, serve coffee, etc, etc. But nowhere in all that circuitry and programming do I see any sense in which it comprehends its environment...let alone itself. It's just a glorified weaving loom....which may appear to be 'intelligent' because we ourselves are programmed to recognise behaviour as such......but which fundamentally is merely an illusion of complexity.

 

I think if you carry along that line of reasoning, then it is inescapable that intelligence requires self awareness........consciousness. Such an entity can stand back and mentally 'look' at a problem within its own mental space, as if being an external observer to the process. This is crucial. Whereas a weaving loom simply responds to one hole in the instruction card at a time, and is not aware of 'the whole'.......true intelligence requires seeing the pattern as a whole.

 

 


But let me turn all this around and pose you with a problem. Let's take the weaving loom, which is set up to make a carpet. A really complex carpet. The operator of the loom has no idea what the carpet will look like.....he just applies the right colour threads at the right time, and operates the foot pedals to make the loom work. The end result is an amazingly complex carpet, which clearly must have required intelligence to bring about. Yet the loom had no idea what it was doing, and the operator merely pressed the pedals and applied the threads, with no idea of the end result. So......where in this did the intelligence lie ?

By the way, I think IBM programming started with the type of punch cards used to control looms, didn't it?

 

So, the conclusion seems to be that the intelligence is in the program. I am half afraid that is true. We might like to think that intelligence is a personal attribute, but we always seem to be getting the answer that we don't run our lives but are along for the ride. Or are we like 'our hero Arnold' in Total Recall, who is experiencing a recorded memory ?

 

 


So, the conclusion seems to be that the intelligence is in the program. I am half afraid that is true. We might like to think that intelligence is a personal attribute, but we always seem to be getting the answer that we don't run our lives but are along for the ride. Or are we like 'our hero Arnold' in Total Recall, who is experiencing a recorded memory ?

 

Well, one can run into real difficulties there. For intelligence to be in the program, one has to allow for the 'imparting' of that intelligence from an external source.....and thus assume the existence of that external intelligence source. Which, of course, is precisely what creationists and 'intelligent design' people advocate.

 

Clearly, if there is no intelligent designer, then the only option is to consider that intelligence may evolve. Though even there, one could argue that this process itself is the result of intelligent design. The problem with an intelligent designer is that one is forever stuck in a loop......who designed the designer ?

 

As for running our lives.....there was a fascinating documentary I saw recently in which it was scientifically shown that a brain scanner could know the outcome of a decision a person was going to make....a full 6 seconds before the person being scanned was themselves consciously aware of having made a decision.

 

On one level that appears to knock 'free will' on the head. But I don't see it quite so gloomy. I think that decisions are probably made at some deep quantum level some seconds before emerging at a higher, deterministic level.

 

I wish they'd get a move on with making quantum computers, as I suspect that some amazing insights will arise from that.

 

 

