Computers learn much faster and deeper than humans

Started by Scowler, March 12, 2018, 12:19:20 PM


Scowler

I had the opportunity to have conversations about AI with some Christians. Every one of them argued that computers only do what you teach them, so they are inherently inferior to humans: they cannot come up with something new on their own. The example of Watson did not convince them. So how about AlphaZero, the system which started from zero in Go, Shogi, and Chess? It had no access to textbooks on opening strategy or endgame theory, or to the millions of previous games analyzed by human grandmasters. AlphaZero was given the rules of the moves, and then it started to play against itself. The result is more than breathtaking: in a few hours it overtook the best programs.

https://en.chessbase.com/post/the-future-is-here-alphazero-learns-chess

Daniel

#1
Interesting. I think I may have to look into machine learning some day.

But as impressive as this is, it doesn't really prove that machines have the capacity to come up with something "new". As you say, the computer started off with "the rules of the moves". And given enough trial and error, all good strategies can be arrived at from only the rules. But start with nothing, and you end up with nothing.

Scowler

Quote from: Daniel on March 12, 2018, 03:24:38 PM
Interesting. I think I may have to look into machine learning some day.

But as impressive as this is, it doesn't really prove that machines have the capacity to come up with something "new". As you say, the computer started off with "the rules of the moves". And given enough trial and error, all good strategies can be arrived at from only the rules. But start with nothing, and you end up with nothing.

For the system everything was new. It started with the rules only and discovered everything else. Moreover, it improved on several openings, too - and that is definitely something "new". The point is that this was not a "brute force" trial-and-error strategy. The system employed a "deep learning" method, just as the best humans do - except it performed the task incredibly fast. How long would it take for an isolated human to become better than the world champion, even if she were a prodigy?
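For the skeptics: the core of the idea fits in a few lines of Python. This is only a toy sketch of learning from self-play - the real AlphaZero couples a deep neural network with Monte Carlo tree search, and every name below (the `game` object and its methods) is made up for illustration:

import random

def self_play_training(game, episodes=10_000, epsilon=0.1, alpha=0.5):
    """Toy self-play learner: no opening books, no human games -
    only the rules (supplied by `game`) and trial after trial."""
    values = {}  # state -> estimated value for the player to move there
    for _ in range(episodes):
        state, history = game.initial_state(), []
        while not game.is_terminal(state):
            moves = game.legal_moves(state)
            if random.random() < epsilon:  # occasionally explore at random
                move = random.choice(moves)
            else:  # otherwise pick the move that leaves the opponent worst off
                move = min(moves, key=lambda m: values.get(game.apply(state, m), 0.0))
            state = game.apply(state, move)
            history.append(state)
        target = game.outcome(state)  # +1 / 0 / -1 for the player to move, by the rules alone
        for s in reversed(history):  # propagate the final result back through the game
            v = values.get(s, 0.0)
            values[s] = v + alpha * (target - v)
            target = -target  # one ply earlier, the other side is to move
    return values

Run enough episodes and the value table encodes opening "theory" that nobody ever typed in. That is all I mean by the system teaching itself.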

Of course, we humans are being "taught" from the cradle. Parents, teachers, friends, acquaintances ... every second of our life is filled with learning. Yet observe a "feral child" - like Mowgli in The Jungle Book - and see how far he would develop in isolation.

It is not just the speed that is incredible. It is the fact that there was no one else to learn from; the system achieved everything on its own. Sic itur ad astra. :)

cgraye

Quote from: Scowler on March 12, 2018, 12:19:20 PM
I had the opportunity to have conversations about AI with some Christians. Every one of them argued that computers only do what you teach them, so they are inherently inferior to humans: they cannot come up with something new on their own. The example of Watson did not convince them. So how about AlphaZero, the system which started from zero in Go, Shogi, and Chess? It had no access to textbooks on opening strategy or endgame theory, or to the millions of previous games analyzed by human grandmasters. AlphaZero was given the rules of the moves, and then it started to play against itself. The result is more than breathtaking: in a few hours it overtook the best programs.

https://en.chessbase.com/post/the-future-is-here-alphazero-learns-chess

Comparing a computer program to a human being is a rather apples to oranges comparison. Human beings are "superior" to computer programs in that they perform genuine intellectual activity - that is, they possess abstractions and can reason from one to another. Computer programs cannot do this even in principle. But they can process a huge amount of data very quickly, compared to a human being. That might be enough to outperform a human being at a game like chess or go. So in that sense a computer program might be "superior" to a human being at chess or go. But then again, a computer program will never know what a game even is in the same way as a human being. Which is superior? It depends on what you want.

I am a computer engineer, and though AI is not the field I work in, I have studied it in some depth for possible application in my field.

Kreuzritter

And every output of a computer's "thought" is utterly without meaning in the absence of a real subject to interpret its abstract symbols and translate them into the real ideas they are intended to signify. Material objects only create more material objects, and "information", to a machine, is only "information" in relation to how the machine "processes" it by a set of mechanical rules - rules which cannot, out of unconscious matter, create a single conscious perception, which is essential to meaning. But when one realises how those who deny God are so often the same who deny the real existence of subjects as ontologically distinct from the mere abstract identity of a complex material object like a machine, one realises how pointless it is to engage in any discussion with them.

Daniel

#5
By nothing "new", what I meant was that all chess strategies are derivable solely from the rules of chess.

Perhaps I am misunderstanding how machine learning works (again, I haven't really looked into it much), but I am under the impression that it's basically a kind of "evolution". The computer starts with some rules, then does a lot of trial and error, and generates results which might be favourable or unfavourable. It eliminates all the "strategies" which produce unfavourable results and is left with only the favourable ones. After doing this enough times it eventually ends up with some extremely effective strategies.
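In code, the picture in my head is roughly the following - and I stress that every name in it is my own invention, a caricature rather than any real algorithm:

import random

def evolve_strategies(random_strategy, mutate, play_match,
                      population=50, generations=100):
    """My naive picture of "machine evolution":
    try things, discard what loses, vary what wins."""
    pool = [random_strategy() for _ in range(population)]
    for _ in range(generations):
        # score each strategy by how it fares against the rest of the pool
        scored = [(sum(play_match(s, t) for t in pool), s) for s in pool]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        survivors = [s for _, s in scored[:population // 2]]  # keep the favourable
        pool = survivors + [mutate(random.choice(survivors))  # vary the winners
                            for _ in range(population - len(survivors))]
    return pool[0]  # the most effective strategy found in the last round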

If I am mistaken on any of that, feel free to ignore me. Or better yet, correct me.

But my point is, nowhere along the line does the computer ever learn anything "new" that wasn't already implicitly contained within the rules it started with. Unlike animals, the computer has no senses or awareness. It has no way of acquiring knowledge on its own. Somebody must directly give it the rules of the pieces before it can "learn" any strategies. (And this isn't exactly comparable to humans learning from teachers either, since a human can learn at least the rules of the pieces by observation alone, without a teacher.)

Moreover, as cgraye pointed out, this is more comparable to a man's memory than to his understanding. Even when the computer has "learned" some effective strategies, the computer really has no clue why those strategies are effective. It doesn't even know what a strategy is, or what chess is for that matter.

Scowler


Quote from: cgraye on March 12, 2018, 09:14:37 PM
Comparing a computer program to a human being is a rather apples to oranges comparison. 

You missed the point. I was comparing the method of being pre-programmed vs. self-improving development. Humans are capable of changing their internal structures (neural activity), and so are computers / robots / AIs. (Suggested reading: "Cellular Automata" by E. F. Codd, the father of relational databases. Codd describes the construction of self-reproducing automata in a much simpler form than von Neumann did.)
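If cellular automata are unfamiliar, here is the flavor of the idea in a few lines of Python - a toy one-dimensional automaton, nothing like Codd's actual eight-state self-reproducing construction:

def step(cells, rule=110):
    """One update of an elementary 1-D cellular automaton: each cell's next
    state depends only on itself and its two neighbours (wrapping around)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31 + [1] + [0] * 31  # a single live cell in the middle
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)

Purely local rules, nothing else - and yet complex structure emerges. That is the phenomenon Codd and von Neumann made rigorous.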

Quote from: cgraye on March 12, 2018, 09:14:37 PM
Human beings are "superior" to computer programs in that they perform genuine intellectual activity - that is, they possess abstractions and can reason from one to another.

"Genuine" intellectual activity? How is that defined? Some autistic people are incapable of thinking in abstractions, and yet, they can be very intelligent / productive / creative in their field.

Quote from: cgraye on March 12, 2018, 09:14:37 PM
Computer programs cannot do this even in principle. 

You are not qualified to make such a pronouncement. No one is.

Quote from: cgraye on March 12, 2018, 09:14:37 PM
But they can process a huge amount of data very quickly, compared to a human being. That might be enough to outperform a human being at a game like chess or go. So in that sense a computer program might be "superior" to a human being at chess or go. But then again, a computer program will never know what a game even is in the same way as a human being. Which is superior? It depends on what you want.

I am a computer engineer, and though AI is not the field I work in, I have studied it in some depth for possible application in my field.

Well, I used to be a math professor, a linguist, and a computer engineer before retirement. I wonder what will happen when an AI "beats" the Turing test. It will have interesting ramifications, especially in theology. As for doing it the "same" way as humans do - that is irrelevant. Cars do not employ the same method of locomotion as humans do, but they can get from A to B much faster. And they don't even need a human driver to do it.

If you are interested in science fiction and futurology, I suggest you pick up Stanislaw Lem's book "Summa Technologiae". It is not a coincidence that the title echoes Aquinas's Summa Theologiae.

---------------

Quote from: Daniel on March 13, 2018, 07:06:17 AM
By nothing "new", what I meant was that all chess strategies can be derived solely from the rules of chess.

I see. But that is a very narrow understanding of "new". By the same token, you could argue that human writers cannot create anything "new" either: everything they "create" is already embedded in the letters of the alphabet and the rules of grammar. :) Is that how you understand "new"? I hope not. ;)

Quote from: Daniel on March 13, 2018, 07:06:17 AM
Perhaps I am misunderstanding how machine learning works (again, I haven't really looked into it much), but I am under the impression that it's basically a kind of "evolution". The computer starts with some rules, then does a lot of trial and error, and generates results which might be favourable or unfavourable. It eliminates all the "strategies" which produce unfavourable results and is left with only the favourable ones. After doing this enough times it eventually ends up with some extremely effective strategies.

If I am mistaken on any of that, feel free to ignore me. Or better yet, correct me.

But my point is, nowhere along the line does the computer ever learn anything "new" that wasn't already implicitly contained within the rules it started with. The computer must already have something to work with before it can "learn".

That is exactly how it happens... both with humans and with AI. I presented the example of "feral children" on purpose. Humans cannot function in a sterile environment either.

Quote from: Daniel on March 13, 2018, 07:06:17 AM
Moreover, as cgraye pointed out, this is more comparable to a man's memory than to his understanding. Even when the computer has "learned" some effective strategies, the computer really has no clue why those strategies are effective. It doesn't even know what a strategy is, or what chess is for that matter.

Ah, that is another question, and a very interesting one. The basic problem is: how do you differentiate between the "real McCoy" and a good imitation of it? How can you know whether your conversation partner "really" understands the subject or merely "fakes" it? There is only one method I am aware of: the "proof of the pudding" combined with the "duck principle". If the other party exhibits something that we would call "understanding", then he/she/it DOES understand. Or we can use the Forrest Gump way of putting it: "stupid is as stupid does" - or "genius is as genius does".

---------------

Quote from: Kreuzritter on March 13, 2018, 06:28:20 AM
And every output of a computer's "thought" is utterly without meaning in the absence of a real subject to interpret its abstract symbols and translate them into the real ideas they are intended to signify. Material objects only create more material objects, and "information", to a machine, is only "information" in relation to how the machine "processes" it by a set of mechanical rules - rules which cannot, out of unconscious matter, create a single conscious perception, which is essential to meaning. But when one realises how those who deny God are so often the same who deny the real existence of subjects as ontologically distinct from the mere abstract identity of a complex material object like a machine, one realises how pointless it is to engage in any discussion with them.

Obviously you did not realize it, since you attempted to engage in a "pointless" conversation with me. :) 


Jacob

Quote
Well, he acts like he has genuine emotions. Um, of course he's programmed that way to make it easier for us to talk to him, but as to whether or not he has real feelings is something I don't think anyone can truthfully answer.
"Arguing with anonymous strangers on the Internet is a sucker's game because they almost always turn out to be—or to be indistinguishable from—self-righteous sixteen-year-olds possessing infinite amounts of free time."
--Neal Stephenson

Scowler

Quote from: Jacob on March 13, 2018, 06:09:06 PM
Well, he acts like he has genuine emotions. Um, of course he's programmed that way to make it easier for us to talk to him, but as to whether or not he has real feelings is something I don't think anyone can truthfully answer.

Indeed. But this is the same problem that we have with humans. :) All you have access to is the "seemingly" genuine emotions. Whether they are really genuine or not cannot be decided. No one has access to the neural network of a human either. And we do not consider it a problem: the default response is to accept the displayed emotions as genuine - unless there is a very serious reason to suspect otherwise.

This is a special subset of the more generic question: how do you differentiate between the "original" (or "real") anything and a very convincing emulation of it? The rational answer is: if there is no way to tell them apart, then the question is irrelevant. Let's put it this way: if a question cannot be answered, even in principle, then it is an irrelevant question.

Non Nobis

If someone is totally unaware of reality then he is not conscious (let alone intelligent) no matter what conclusions others may rightly make about him given only all external facts.  If he is conscious, he is so regardless of conclusions that others may come to.
[Matthew 8:26]  And Jesus saith to them: Why are you fearful, O ye of little faith? Then rising up he commanded the winds, and the sea, and there came a great calm.

[Job  38:1-5]  Then the Lord answered Job out of a whirlwind, and said: [2] Who is this that wrappeth up sentences in unskillful words? [3] Gird up thy loins like a man: I will ask thee, and answer thou me. [4] Where wast thou when I laid up the foundations of the earth? tell me if thou hast understanding. [5] Who hath laid the measures thereof, if thou knowest? or who hath stretched the line upon it?

Jesus, Mary, I love Thee! Save souls!

cgraye

#10
Quote from: Scowler on March 13, 2018, 10:07:53 AM
"Genuine" intellectual activity? How is that defined? Some autistic people are incapable of thinking in abstractions, and yet, they can be very intelligent / productive / creative in their field.

It is defined as possessing a form (in the Aristotelian sense) without being an instance of it. A being with an intellect can possess forms intentionally (in the intellect) rather than entitatively (by being that kind of thing). I can possess the form of a chess piece when I grasp what a chess piece is, but I do not become a chess piece. Even severely autistic people do the same thing. A computer, no matter what program it is running, only possesses the form of a computer, and never a chess piece, unless it is melted down and made into a chess piece, at which point it is no longer a computer.

Quote
You are not qualified to make such a pronouncement. No one is.

It follows from the fact that a computer is an entirely material thing, but the intellect is immaterial. Material things can only possess one substantial form at a time, but an intellect can possess multiple substantial forms intentionally. An intellect abstracts from concrete particulars and possesses these forms. A computer program can only emulate this activity by pattern matching. Human beings also perform a kind of pattern matching in the brain, but this is not the same thing as the possessing of abstractions and reasoning from one to another, so a computer will never be able to do the same thing as a human intellect.

That is why Amazon's Alexa is currently laughing at people at random. It is wrongly parsing some spoken phrases as "Alexa, laugh," and it is programmed to play a laughing sound on that command. This could be fixed by changing the command to something less likely to be wrongly parsed as the command phrase, by additional learning that factors in phrases likely to occur around a genuine command to laugh, or by other things of that nature. But Alexa will never possess the abstractions "laughter," "humor," or anything else, and so it will never learn not to laugh at the wrong time the way a human being would.
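To make the kind of failure concrete: if the recognizer in effect does something like the toy fuzzy match below - my own invented sketch, not Amazon's actual pipeline - then phrases that merely sound alike will sometimes cross the threshold and trigger the command:

import difflib

COMMANDS = {"alexa laugh": "play laughing sound"}

def dispatch(heard, threshold=0.7):
    """Toy command matcher: fire a command if the heard phrase is close enough."""
    for phrase, action in COMMANDS.items():
        if difflib.SequenceMatcher(None, heard, phrase).ratio() >= threshold:
            return action  # may fire on phrases that were never the command
    return None

print(dispatch("alexa laugh"))  # fires, as intended
print(dispatch("alexa graph"))  # also fires - a false positive above the threshold

No amount of threshold tuning gives the device the concept of a joke; it only reshuffles which sounds map to which canned response.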

Quote
I wonder what will happen when an AI "beats" the Turing test. It will have interesting ramifications, especially in theology. As for doing it the "same" way as humans do - that is irrelevant. Cars do not employ the same method of locomotion as humans do, but they can get from A to B much faster. And they don't even need a human driver to do it.

I don't imagine anything will happen.  When CGI got good enough to create realistic scenery in movies, people started using that instead of scouting for actual locations.  And that served the movies well enough.  But you still couldn't go to that forest you saw in that movie and cut down a tree.  If you need something that looks like a tree, you've got it, but if you need lumber, you will have to look for the genuine article.

Tales

Computers are just complex systems of gates. AI and Deep Learning and whatever comes next is just more of the same: more gates, in more complex and useful ways. If you think your mind is only a system of gates as well, then sure, those computers have surpassed you. But I do not think my mind is just a system of gates, so I see this as apples and oranges.

Perhaps off topic, but given that you did computer engineering for a career, perhaps this analogy will be relevant to you.

Two objects are created and the same variables assigned to them, but with differing values attributed to each object.  The code is run.  What happens?  Nothing.

Now a function is created, and the objects with their differing variables are run through it. The code is run. What happens? Something (what, exactly, depends on what the function orders to be done - but the point is that something, not nothing, happened).

Two pieces of matter are created, and variables like mass and charge are assigned to them. One is very heavy, the other very light; the former has a positive charge, the latter a negative charge. The pieces are placed next to each other. What happens? Nothing.

Now a law is created which tells matter of opposite charge to attract. The matter is then brought into proximity. What happens? The pieces of matter move closer together.

Unless the function to parse the objects literally exists - somewhere, somehow, in some form - nothing happens when the objects are called into existence. Likewise, unless "laws" that "parse" matter literally exist somewhere, somehow, in some strange form, matter would do nothing simply by being created and attributed with a variety of variables.
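In deliberately toy Python (the names are mine; the shape of the thing is the point):

class Particle:
    def __init__(self, mass, charge):
        self.mass = mass
        self.charge = charge

heavy = Particle(mass=1000.0, charge=+1.0)
light = Particle(mass=1.0, charge=-1.0)
# Two objects now exist, each carrying its variables.
# Run the program to this point and... nothing happens.

def law_of_attraction(a, b):
    """The "law": opposite charges attract."""
    if a.charge * b.charge < 0:
        return "the particles move toward each other"
    return "nothing happens"

# Only once the function exists, and the objects are passed through it,
# does anything occur.
print(law_of_attraction(heavy, light))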

While I do see said functions existing on my computer (I can manipulate them via software or, if I were very adventurous, via hardware), I do not see said laws existing anywhere in Nature (I cannot slice off a piece of the 2nd Law, or take a bite out of relativity). But logically the laws must exist, since we see matter taking action, and we are clearly mapping out the design of those laws via scientific study. From this I see that not only does Nature exist, but there is also the supernatural.

Aquila

AI can do some incredible things, and I expect it to only get more advanced in the future. That being said, an AI will never "think" like a human (although it will be able to give a fairly compelling simulation of it). Searle's "Chinese Room Argument" demolishes that idea pretty effectively, IMO:

https://www.iep.utm.edu/chineser/

Note: I'm a programmer who has done a lot of work with robotics and some (basic) machine learning.
Extra SSPX Nulla Salus.
Dogmatic Sedeplenist.

Tales

The computer can mimic the brain in so far as both are complex networks of gating systems.  But the mind is not the brain - the brain is the tool within nature through which the supernatural mind expresses itself.

Scowler


Quote from: Non Nobis on March 13, 2018, 08:25:48 PM
If someone is totally unaware of reality then he is not conscious (let alone intelligent) no matter what conclusions others may rightly make about him given only all external facts.  If he is conscious, he is so regardless of conclusions that others may come to.

Newborns and those in a persistent vegetative state fit into this category just fine (also people with seriously damaged brains). :) How do you know whether the other party is "conscious" or not? By attempting to interact with them. The proof of the pudding is that it is edible. If their answers to your questions indicate that they are conscious and intelligent, then they ARE conscious and intelligent.

-----------------------------------

Quote from: cgraye on March 13, 2018, 09:42:08 PM
It is defined as possessing a form (in the Aristotelian sense) without being an instance of it. A being with an intellect can possess forms intentionally (in the intellect) rather than entitatively (by being that kind of thing). I can possess the form of a chess piece when I grasp what a chess piece is, but I do not become a chess piece. Even severely autistic people do the same thing. A computer, no matter what program it is running, only possesses the form of a computer, and never a chess piece, unless it is melted down and made into a chess piece, at which point it is no longer a computer.

Again, you bypass the question: what is intellect? Severely autistic people cannot fathom the concept of a "dog". When they hear the word "dog", their mind creates a continuously running internal "film" of all the dogs they have encountered during their life. For them there is no abstract dog. Newborns and toddlers are unable to conceptualize at all, until they learn the process. Seriously mentally handicapped people cannot learn it at all. Some "primitive" peoples can only conceptualize the numerals "one" and "two"; everything else is "many" for them. They simply cannot understand the difference between "three" and "four" apples.

So, what is that conceptualization? Using the Aristotelian terminology, it is an internal "form" for an object in our mind. Or, using more precise terminology, it is a "model" of the objective reality. Now, what encoding is used to contain the "model" is not relevant. It can be encoded in "wetware" (a neural network composed of neurons) or in "hardware" (a silicon-based system). The method of encoding is irrelevant, as long as it works the same way.

What is "chess"? It comprises the description of the chessboard, the pieces, the rules of movement for each piece, and the desired outcome. AlphaZero is certainly aware of all these details, so we can say that AlphaZero understands chess. Furthermore, AlphaZero also knows the rules of Go, of Shogi, and potentially of any other game. As a matter of fact, if I presented you with an 8x8 board and the usual chess pieces, you could not figure out what my intent is - since the rules of movement and the desired outcome are not obvious. Many different games can be played with the same physical pieces. (Some of these are called "fairy chess", and they are excellent games in their own right.) Whether you can play them at master level, or only know how the pieces move, is irrelevant.

It is true that the board and the pieces are composed of matter, but the rules are not. The rules are directives describing the legal motions of the pieces. As such they are immaterial, but that does not make them "supernatural".

The whole point of this thread was that the "empty AI", which knew only the board, the pieces, and the laws of motion, learned on its own how to play the game at an unheard-of level, surpassing not just the best humans but also the best computer programs. And to add "insult to injury", it performed this mind-boggling task in a few hours! The point is that the AI taught itself. Of course we humans can do the same thing, not necessarily as fast as the AI, but that is not relevant. The AI performed the human task (learning!), and did it faster and better than humans could. The reason for this thread is to "rub the noses" of those people (in their own stinking excrement :) ) who are so haughty as to assert that computers only know what they are programmed to do.


Quote from: cgraye on March 13, 2018, 09:42:08 PM
It follows from the fact that a computer is an entirely material thing, but the intellect is immaterial.

You use the word "immaterial" without a proper definition. Of course the objective physical reality also contains many immaterial aspects. There are the "attributes" (like heavy, light, near, far, and zillions of others). Then there are the "relationships" between physical objects (like before, after, in between, next to, and zillions of others). Then there are the "activities" performed by physical objects (like thinking, walking, growing, and yet more zillions). None of these are composed of material building blocks, so they are "immaterial". But none of them can exist without a physical framework. There is no such "thing" as heaviness without a physical object AND someone who wishes to lift that object. There is no such "thing" as "between" without three objects residing on a straight line. There is no such "thing" as motion without a physical object changing its physical location.

I suggest that you ALL devote some time to understanding the concepts of "material" vs. "immaterial" and "natural" vs. "supernatural" (more correctly called "unnatural"), because these are not interchangeable. Without clear definitions of these categories there cannot be a meaningful conversation about them. This thread is an example: I have to correct your erroneous assumptions over and over again.

Your brain is also fully material. It consists of neurons (and some auxiliary stuff, like I/O organs, blood vessels, etc.) and networks of neurons, which process information via electrochemical interactions. Of course the AI running on silicon-based hardware does the same thing - it processes information. Obviously Aristotle et al. had no idea about this; all they could do was speculate about reality.

Quote from: cgraye on March 13, 2018, 09:42:08 PM
Material things can only possess one substantial form at a time, but an intellect can possess multiple substantial forms intentionally. An intellect abstracts from concrete particulars and possesses these forms. A computer program can only emulate this activity by pattern matching. Human beings also perform a kind of pattern matching in the brain, but this is not the same thing as the possessing of abstractions and reasoning from one to another, so a computer will never be able to do the same thing as a human intellect.

Another unsubstantiated "proclamation". First you need to show that there is a fundamental difference between the "emulation" and the "real thing". What argument can you provide for this? And you also need to present some argument that the emulation is always "deficient" in some significant aspect.

Quote from: cgraye on March 13, 2018, 09:42:08 PM
That is why Amazon's Alexa is currently laughing at people at random. It is wrongly parsing some spoken phrases as "Alexa, laugh," and it is programmed to play a laughing sound on that command. This could be fixed by changing the command to something less likely to be wrongly parsed as the command phrase, by additional learning that factors in phrases likely to occur around a genuine command to laugh, or by other things of that nature. But Alexa will never possess the abstractions "laughter," "humor," or anything else, and so it will never learn not to laugh at the wrong time the way a human being would.

How many people have no sense of humor? There are no "jokes" - and especially no "puns" - that are funny to everyone, even among those who speak the language. It is true that the AI of Alexa is still very simple and rudimentary. But you are not qualified to make assessments about the future development of AI. "Never" is a very long time. :)

Quote from: cgraye on March 13, 2018, 09:42:08 PM
I don't imagine anything will happen.  When CGI got good enough to create realistic scenery in movies, people started using that instead of scouting for actual locations.  And that served the movies well enough.  But you still couldn't go to that forest you saw in that movie and cut down a tree.  If you need something that looks like a tree, you've got it, but if you need lumber, you will have to look for the genuine article.

What is your point? Of course we cannot "eat" or "drink" an emulation of food and water, and no one suggested otherwise. But the majority of our actions do not happen in the fields, growing wheat, etc. The fastest-growing endeavors are in information processing and the service sector, and those are ripe for automation. The simplest existing example is the AI called Watson. It was originally designed to be a Jeopardy contestant, and it could beat the living daylights out of the best humans. Of course that was only the first step. Today Watson consults thousands of physicians worldwide, providing diagnoses for thousands and potentially millions of patients. That is serious work; diagnosticians are a very important part of the medical profession. And it is performed by never-tiring AIs, "who" (yes, who!) are always attentive, who do not bicker about pay raises, who do not backstab their colleagues... and, of course, "who" do not need some undefined and unspecified "soul" to perform their tasks. :D That is what will happen, and AlphaZero is but the first step on this road.

-----------------------------------

Quote from: Davis Blank - EG on March 14, 2018, 01:06:46 AM
Computers are just complex systems of gates. 

That is not precise. There is also the operating system. And the mind is the "operating system" of the wetware we call the brain. Countless physical experiments substantiate this model. The brain-mind complex is an embedded parallel-processing system, where the operating system cannot be separated from the underlying wetware. (Of course this is not a big problem. If you studied information theory, you would know that every computer - be it silicon- or neuron-based - can be reduced to a Turing machine: https://en.wikipedia.org/wiki/Turing_machine)
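And a Turing machine is almost embarrassingly simple to write down. This toy simulator (the encoding of the transition table is my own, for illustration) is the whole idea:

def run_turing_machine(table, tape, state="start", steps=1000):
    """Minimal Turing machine: a transition table, a tape, a head, a state.
    `table` maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells, head = dict(enumerate(tape)), 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")  # "_" is the blank symbol
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A three-rule machine that flips every bit and halts at the first blank:
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flipper, "10110"))  # prints 01001_

Whether the "gates" underneath are silicon or neurons changes nothing about what such a machine can, in principle, compute.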

Quote from: Davis Blank - EG on March 14, 2018, 01:06:46 AM
AI and Deep Learning and whatever comes next is just more of the same: more gates, in more complex and useful ways.

This reminds me of the old joke: "What is an elephant?" asks one ant of the other. "It is a very big, huge ant," answers the other. You forget that quantitative changes frequently turn into qualitative changes. Keep piling up radioactive atoms into an ever-growing pile, and as soon as the critical mass is reached, a huge explosion occurs. Reality is NOT linear. AlphaZero is NOT an overblown pocket calculator.

Quote from: Davis Blank - EG on March 14, 2018, 01:06:46 AM
AI and Deep Learning and whatever comes next is just more of the same: more gates, in more complex and useful ways. If you think your mind is only a system of gates as well, then sure, those computers have surpassed you. But I do not think my mind is just a system of gates, so I see this as apples and oranges.

What you believe is irrelevant until you can present arguments for it. Do you have any arguments?

Quote from: Davis Blank - EG on March 14, 2018, 01:06:46 AM
While I do see said functions existing on my computer (I can manipulate them via software or, if I were very adventurous, via hardware), I do not see said laws existing anywhere in Nature (I cannot slice off a piece of the 2nd Law, or take a bite out of relativity). But logically the laws must exist, since we see matter taking action, and we are clearly mapping out the design of those laws via scientific study. From this I see that not only does Nature exist, but there is also the supernatural.

You really need to study harder. The so-called "laws of nature" (and capitalizing the word is useless) are observed regularities, which we can use to make predictions about nature. The "second law of thermodynamics" is an especially poor example, because it is not even a deterministic "law", but a "stochastic" one. What you say - namely, that there is a creator who is responsible for the laws of nature - is so "far out" that it is not even wrong. ;)

-----------------------------------

Quote from: Aquila on March 15, 2018, 06:38:49 AM
AI can do some incredible things, and I expect it to only get more advanced in the future. That being said, an AI will never "think" like a human (although it will be able to give a fairly compelling simulation of it). Searle's "Chinese Room Argument" demolishes that idea pretty effectively, IMO:

https://www.iep.utm.edu/chineser/

Note: I'm a programmer who has done a lot of work with robotics and some (basic) machine learning.

Again, what is the fundamental difference between an emulation (or simulation) and the "real thing" - a difference which makes the emulation inferior? (Is the "Six Million Dollar Man" inferior to biological humans?) The Chinese room experiment refuted nothing. The individual "neurons" - the participating individuals - have no understanding, obviously, but that does not mean that the whole collection of them does not understand. As for the AI not doing its "thinking" the same way as humans do: that is true but irrelevant. Cars don't use the same method of locomotion as humans do, but they can perform the same action much better and faster. :)

Treat humans and computers as "black boxes". If you interact with the black boxes and cannot discern any difference in their behavior, then there is no rational reason to assume that they are fundamentally different. That is what the Turing test is designed to probe. And AI is getting better and better at "emulating" the real McCoy. :)

-----------------------------------

Quote from: Davis Blank - EG on March 15, 2018, 07:08:27 AM
The computer can mimic the brain in so far as both are complex networks of gating systems.  But the mind is not the brain - the brain is the tool within nature through which the supernatural mind expresses itself.

The brain is not the mind, just as "walking" is not the legs. But no one says that the legs are the material framework through which supernatural walking expresses itself. ;D There is nothing "supernatural" about walking. Read the paragraphs above where I described the "immaterial" aspects of physical existence: the "attributes", the "relationships", and the "activities". None of these are composed of quarks and electrons (matter), but that fact does not make them "supernatural".