The overall argument is that John Searle is attempting to distinguish between strong AI and weak AI. His demonstration imagines a computer able to pass the Turing test in Chinese. The issue arises when an English-speaking person, following the program's instructions in English, manipulates Chinese symbols and produces convincing Chinese output for an unknowing bystander without understanding a word of it. Thus, the illusion is that the machine is able to think and understand, but Searle claims this is not the case. His paper attacks Schank's story-understanding programs and broader presumptions about AI by claiming that machines will never possess an intangible aspect of the human brain that allows them to understand certain subjects and to think in intentional terms.
Throughout the paper, Searle presents several counterarguments to his theory, along with his responses to each. These include the Systems Reply, the Robot Reply, the Brain Simulator Reply, the Combination Reply, the Other Minds Reply, and the Many Mansions Reply. Yet as Searle laid out his initial set-up and introduction, I found myself sorting through my own biases and trying to form logical reasons why Searle might be viewing this problem through the wrong lens.
First, my initial response to Searle arose from the terms he uses throughout the paper, such as 'understanding' and 'intentional'. He does offer definitions, but they read like dictionary entries and fail to do the work his arguments demand of them. Without a real definition of understanding, one cannot make a case either for Searle or against him. My predisposition, it turns out, was eerily similar to the Other Minds Reply from Yale. My reasoning was that if Searle cannot explain how humans understand something, then he can never assert whether machines are capable of understanding. What makes my ability to communicate in English mean that I truly understand English? How am I, accepting input (reading) and dispensing output (blogging), any different from a machine that accepts Chinese characters as input and outputs a response to a story in Chinese characters?
Searle counters the Other Minds Reply with some hand-waving about cognitive states. In no way does he address the main question. Instead, he asserts a few statements that never engage the underlying issue of what makes humans understand a language, or anything else for that matter. If Searle could not only define understanding but also delineate how and whether people come to understand something, then I could take his viewpoint into consideration. Until that point, his entire paper falls flat.
Second, Searle launches a continual attack on the lack of intentionality in machines. He states that they are simply instantiations of a program that mimic human behavior. Although this is true to a degree, he fails to look at the other side of the spectrum: Searle does not analyze humans. Countless studies in ethics question whether humans are merely the right combination of biological parts or something more, and frankly, that question is still debated at many prestigious institutions. It is analogous to asking whether machines are capable of reaching strong AI status on every dimension. Thus, until mainstream philosophers answer whether humans are more than chemical bonds, the question of whether machines can 'understand' anything is impossible to answer.
Lastly, I have a few responses to Searle's assertion that machines will never truly have intentions. To begin, I claim that humans are merely pattern-recognizing entities. We see something in the environment, note the inputs, take a course of action, and then store the result in memory for later reference. If the outcome was favorable, we repeat that action in similar situations with identical environmental stimuli. If the outcome was harmful, then when we see similar inputs, we choose a different course of action. I assert that machines use pattern recognition in their programs to serve the same function. Although the field of pattern recognition has much room to grow, it is essentially the same process humans perform in day-to-day activities (I sketch this loop in code below).

Second, I claim that animals are extremely similar to humans on many levels. There is not much differentiating a human from a monkey on a philosophical level. What does distinguish us from animals is that we write books about them, and they don't write books about us; this is the classic move in ethics for separating 'us' from 'them'. Machines, however, are not devoid of this ability: they can write other programs to accomplish certain tasks. Compilers in particular show that machines and programs are quite capable of writing other programs, just as humans wrote the original programs themselves. In this regard, one cannot assert that computers or machines will never achieve strong AI status.
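To make the pattern-recognition claim concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the PatternAgent class, the 'rustling bush' stimulus, the numeric outcomes); it only illustrates the note-the-input, act, store-the-outcome loop I described, not any particular theory of mind.

```python
from collections import defaultdict

class PatternAgent:
    """A minimal sketch of the loop described above: note the input,
    take an action, store the outcome, and reuse whatever worked."""

    def __init__(self, actions):
        self.actions = actions
        # memory[stimulus][action] -> running score of past outcomes
        self.memory = defaultdict(dict)

    def act(self, stimulus):
        seen = self.memory[stimulus]
        # untried actions default to 0.0, so an action that ended badly
        # before loses out to one we have not yet attempted
        return max(self.actions, key=lambda a: seen.get(a, 0.0))

    def record(self, stimulus, action, outcome):
        # favorable outcomes raise an action's score; harmful ones lower it
        seen = self.memory[stimulus]
        seen[action] = seen.get(action, 0.0) + outcome

# Hypothetical environment: avoiding the rustling bush pays off.
agent = PatternAgent(actions=["approach", "avoid"])
for _ in range(5):
    action = agent.act("rustling bush")
    agent.record("rustling bush", action, 1.0 if action == "avoid" else -1.0)

print(agent.act("rustling bush"))  # the agent settles on "avoid"
```

And for the second claim, a trivial example of a program writing and then running another program, in the spirit of (though far simpler than) a compiler:

```python
# A program that writes another program, then compiles and runs it --
# a toy version of what compilers do all the time.
source = "def greet(name):\n    return 'Hello, ' + name\n"
namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)
print(namespace["greet"]("Searle"))  # Hello, Searle
```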
In conclusion, my reply to Searle is that the question of machine understanding is currently unanswerable. By today's standards, machines should at least be eligible to be considered understanding beings rather than ruled out from the start. In essence, certain philosophical questions about humans, such as what we are composed of, must be answered before we can transpose the analogous questions to machines. Searle wrote a thought-provoking paper, but he drew his conclusions prematurely. In the end, the evolution of computers will open new doors to answering whether they will truly possess the capacity to learn and to make decisions intentionally.