Field computation and nonpropositional knowledge
Authors
MacLennan, Bruce J.
Subjects
Neurocomputers, neural networks, optical computers, molecular
computers, field computers, universal field computer, associative
memory, parallel processing, massive parallelism
Advisors
Date of Issue
1987-09
Date
Publisher
Monterey, California. Naval Postgraduate School
Language
eng
Abstract
Most current AI technology has been based on propositionally represented theoretical knowledge. It is argued that if AI is to accomplish its goals, especially in the tasks of sensory interpretation and sensorimotor coordination, then it must solve the problem of representing embodied practical knowledge. Biological evidence shows that animals use this knowledge in a way very different from digital computation. This suggests that if these problems are to be solved, then we will need a new breed of computers, which the author calls field computers. Examples of field computers are neurocomputers, optical computers, molecular computers, and any kind of massively parallel analog computer. The author claims that the principal characteristic of all these computers is their massive parallelism, but he uses this term in a special way: true massive parallelism comes when the number of processors is so large that it can be considered a continuous quantity. Designing and programming these computers requires a new theory of computation, one version of which is presented in this paper. A universal field computer is described, that is, a field computer that can emulate any other field computer; it is based on a generalization of Taylor's theorem to continuous-dimensional vector spaces. A number of field computations are illustrated, including several transformations useful in image understanding and a continuous version of Kosko's bidirectional associative memory.
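As an illustrative aside (not taken from the report), the associative memory mentioned in the abstract can be sketched in its ordinary discrete form: Kosko's bidirectional associative memory stores pattern pairs in a sum of outer products and recalls them by alternating threshold updates. The report's continuous version would replace the finite bipolar vectors below with fields and the weight matrix with an integral kernel; all names and patterns here are assumptions chosen for the example.

import numpy as np

def train_bam(pairs):
    # Build the BAM weight matrix as a sum of outer products of bipolar pattern pairs.
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def recall(W, x, steps=10):
    # Bidirectional recall: alternate x -> y and y -> x threshold updates until stable.
    x = np.sign(x)
    for _ in range(steps):
        y = np.sign(W.T @ x)
        x_new = np.sign(W @ y)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y

# Two associated bipolar pattern pairs (hypothetical data).
pairs = [
    (np.array([1, -1, 1, -1]), np.array([1, 1, -1])),
    (np.array([-1, 1, 1, 1]), np.array([-1, 1, 1])),
]
W = train_bam(pairs)
print(recall(W, np.array([1, -1, 1, 1])))  # noisy cue settles to the first stored pair

In a field computer the same computation would be carried out in parallel over a continuum of processing sites, which is the sense of "massive parallelism" the abstract describes.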
Type
Technical Report
Description
Series/Report No
Department
Computer Science
Identifiers
NPS Report Number
NPS52-87-040
Sponsors
supported by the Office of Naval Research
Funder
N0001487WR-24037
Format
Citation
Distribution Statement
Approved for public release; distribution is unlimited.