CA1042109A - Adaptive information processing system - Google Patents

Adaptive information processing system

Info

Publication number
CA1042109A
CA1042109A (Application CA225,516A)
Authority
CA
Canada
Prior art keywords
information processing
output
aij
transfer function
input
Legal status
Expired
Application number
CA225,516A
Other languages
French (fr)
Inventor
Leon N. Cooper
Charles Elbaum
Current Assignee
Nestor Associates Inc
Original Assignee
Nestor Associates Inc
Priority date
Filing date
Publication date
Application filed by Nestor Associates Inc
Application granted
Publication of CA1042109A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F 7/02 Comparing digital values
    • G06F 7/023 Comparing digital values adaptive, e.g. self learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N 3/065 Analogue means

Abstract

ABSTRACT OF THE DISCLOSURE
An adaptive information processing system includes a module, called a Nestor TM module, having a plurality (N) of input terminals 1, 2 ..., j ..., N, adapted to receive N input signals s1, s2 ..., sj ..., sN, respectively, and a plurality (n) of output terminals 1, 2 ..., i ..., n, adapted to present n output responses r1, r2 ..., ri ..., rn, respectively. A plurality of junction elements, called mnemonders, couple various ones (or a multiplicity) of the input terminals with various ones (or a multiplicity) of the output terminals.
These mnemonders provide a transfer of information from an input terminal j to an output terminal i in dependence upon the signal sj appearing at the input terminal j and upon the mnemonder transfer function Aij. Means are provided for modifying the transfer function Aij of the mnemonders in dependence upon the product of at least one of the input signals and one of the output responses of the Nestor module. In a preferred embodiment of the invention, the modification to the transfer function of each mnemonder takes the form:

ΔAij = η ri sj

where η is a constant of proportionality.

Description

FIELD OF INVENTION

The present invention relates to adaptive information processing systems, which systems are also known as learning machines, neuron networks, trainable systems, self-organizing devices, and/or adaptive memory systems or devices.

BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram of a typical prior art information processing network.
Fig. 2 is a block diagram of a nouveron network according to the present invention.
Fig. 3 is a block diagram of a Nestor TM module according to the present invention employing numerous nouveron networks of the type illustrated in Fig. 2. For clarity the feedback lines from the summers have been omitted.
Fig. 4 is a block diagram of an information processing system incorporating a Nestor module of the type illustrated in Fig. 3.
Fig. 5 is a representational diagram illustrating the response of a Nestor module according to the present invention to an external fabric of events.
Fig. 6 is a representational diagram illustrating the signal flow in an ideal Nestor module according to the present invention.

Fig. 7 is a representational diagram illustrating a particular principle of operation in a Nestor module according to the present invention.

Fig. 8 is a representational diagram of an optical-auditory system incorporating a plurality of Nestor modules according to the present invention.
Fig. 9 is a block diagram illustrating output response-determining apparatus which may be used with a Nestor module according to the present invention.
Fig. 10 is a representational diagram showing the response of two Nestor modules to the same external fabric of events.
Fig. 11 is a block diagram showing apparatus which may be employed with a Nestor module according to the present invention to achieve a specific type of output response.
Fig. 12 is a block diagram illustrating a portion of a nouveron network incorporating a charge storage mnemonder according to a preferred embodiment of the present invention.
Fig. 13 is a schematic diagram of a summation circuit which may be employed in the nouveron network of Fig. 12.
Fig. 14 is a schematic diagram of a mnemonder which may be employed in the nouveron network of Fig. 12.
Fig. 15 is a block diagram of apparatus, which may be employed with the nouveron network of Fig. 12, for processing network input signals by means of the network output.

BACKGROUND OF THE INVENTION
Adaptive information processing systems have been extensively explored during the past several years. Some of the most notable systems include the Adaline and Madaline systems at Stanford Electronics Laboratories, the Perceptron at Cornell Aeronautical Laboratories, and the Minos I and II at Stanford Research Institute. Some of the U.S. patents which relate to adaptive information processing systems are Patent No. 3,287,649 to Rosenblatt; Patent No. 3,408,627 to Kettler et al.; Patent No. 3,435,422 to Gerhardt et al.; Patent No. 3,533,072 to Clapper; and Patent No. 3,601,811 to Yoshino. This list of references is merely exemplary and constitutes only a small part of the large body of prior art in existence to date.

Such prior art adaptive information processing systems operate, in general, to produce an output response for a given input signal, which response is measured against some predetermined (correct) output response. These prior art systems are caused to modify themselves, or "learn", often in dependence upon the difference between the actual and the predetermined output response, until the predetermined output response is achieved. The object of such a system is to have the system find its own way (by some algorithm) to a predetermined relation:

input signal → output response.


It should be noted here that whenever the term "input signal" is used in this discussion it is intended to include the possibility of a set of separate input signals which are applied, substantially simultaneously, to a corresponding set of input terminals of an information processing system. Similarly, the term "output response" is intended to define the entire system response to a given input signal, although this response may comprise a plurality of individual output responses appearing substantially simultaneously at a set of system output terminals.
A typical prior art adaptive system is illustrated in Fig. 1. This system comprises, as its essential elements, a network of inputs 1, 2, 3 ..., N, which are respectively connected to a plurality of variable weighting elements G1, G2, G3 ..., GN having variable weights which, for example, may be variable gains in the case of weighting amplifiers or variable resistances in the case of variable resistors. The outputs of the weighting elements G are applied to a summer S which produces a single network output in proportion to the sum of the weighting element outputs. The weighting value of each weighting element G1, G2, G3 ..., GN is individually controlled by means of a so-called "training algorithm" T that conditions the network to respond to a particular input signal with a desired output response.
In operation of the network, a particular signal is repetitively applied at the network inputs 1, 2, 3 ..., N. After each application of the specific input signal, the network output response is compared to a predetermined desired output response, for example by means of a subtractor D, and the difference, or error, is utilized in the training algorithm to modify the weights of the individual weighting elements G1, G2, G3 ..., GN.


Each application of the specific input signal, and the subsequent modification of the weighting elements G, is called a "training cycle". As successive training cycles occur, the network output response approaches more closely the desired output response, until the network is conditioned to respond uniquely to the particular input signal which is to provide the desired output response.
In the adaptive information processing systems of the prior art, emphasis has been given to finding a suitable training algorithm which permits a system to "learn" or adapt to the applied input signals at a rapid rate. Needless to say, numerous ingenious algorithms have been devised; however, in all cases the training algorithm has been made dependent in some way upon the predetermined desired output which is to be generated in response to a given input.
It is an object of the present invention to provide an adaptive information processing system which has the ability to construct its own distinctive output response for any given input signal. In particular, it is an object of the present invention to provide a system with the striking characteristic that it can modify itself to construct an internal mapping (input signal → output response) that functions as a memory or a program without any outside intervention or choice as to what output response is desired or what input pattern is presented. This type of training procedure or self-modification of the adaptive information processing system will hereinafter be called "passive learning" or "passive modification".
The importance of this ability of a system to passively modify itself will be appreciated by considering a simple example. Because it is not necessary with such a system to know, beforehand, a predetermined, desired output
response for a given input signal, it is possible to apply input signals with unknown content to the system and, after a period of training, determine the informational content of the input signals by considering the output responses.
For instance, if the unknown input signals happen to be informational signals (having some unknown structure) that are buried in noise, then, since the structure of the output responses is isomorphic to that of the buried informational signals, the unknown structure will be mapped into and be represented by the output responses. In this way the unknown informational content of any input signals may be deciphered by the information processing system.
It is also an object of the present invention to provide an adaptive information processing system which, like the systems of the prior art, can produce a predetermined, desired output response to any given input signal. This procedure, which will hereinafter be called "active learning" or "active modification", requires knowledge on the part of the human operator of the desired output response to be associated with each individual input signal.
It is a further object of the present invention to provide an adaptive information processing system in which the learning growth rate - that is, the rate at which the system trains itself to produce a particular output response in terms of the number of presentations of an input signal - is very rapid. In particular, it is an object of the present invention to provide an information processing system having an exponential, rather than linear or other slower, learning growth rate.
It is a further object of the present invention to provide an adaptive information processing system that is capable of functioning as a memory which is distributed and
highly invulnerable to the malfunction of individual components.
Such a memory will be an adaptive and self-organizing memory that has the ability to acquire information solely as a result of experience. In addition, this distributed memory in general will have the capacity, reliability and accuracy of a conventional digital computer memory (such as a ferrite core) of the type that stores information at a local site.
It is a further object of the present invention to provide an adaptive information processing system which is capable of great density of storage. For example, it is noted that the information processing system is capable of realization by integrated circuitry and does not require discrete elements such as ferrite cores.
It is a further object of the present invention to provide an adaptive information processing system which is capable of great rapidity of operation; more particularly, a system in which of the order of, or more than, 2n bits of information can be recalled and/or processed in a single electronic operation (where n is the number of system output terminals).
Finally, and perhaps most importantly, it is an object of the present invention to provide an adaptive information processing system which is capable of exhibiting each of the following properties:


1. Recognition: The ability to produce a strong output response to an event or input signal that the system has seen before. Obviously, the information processing system will initially respond diffusely to a particular input signal.
However, after successive presentations of that input signal the system will learn to "recognize" the input signal by producing a characteristic output response.
2. Recollection: The ability to produce a unique output response for each of a number of particular input signals. This characteristic provides the function of memory, since the system is thereby able to produce a unique output response on its (n) output terminals (containing of the order of or more than 2n bits of information) upon receipt of a particular input signal on its set of input terminals.
3. Generalization: The ability to extract a common element from a number of different events or input signals.
In particular, if a number of differing input signals are successively applied to the information processing system, the system will learn to recognize a feature that is common to these input signals. For example, if a particular informational signal that is buried in noise is repeatedly applied to the system input terminals, the system will extract, retain, and subsequently recognize the informational signal.

4. Association: The ability to recall a first input signal upon receipt of a second after the two input signals have been applied to the information processing system more or less concurrently. That is, when two input signals are simultaneously applied, the system will not only learn these input signals, but will "associate" one with the other.
Thus, at a future time, the system will be able to recall either one or both of the input signals if only one of the input signals is applied. This characteristic of association can be effective, for example, in the analysis of unknown signals. If two unknown input signals are applied to the system, the system will be able to determine whether one is related to the other in any way.
5. Retrieval From Partial (Fragmentary) Inputs: The ability to retrieve an entire input signal from a portion of that input signal. This characteristic may be viewed as a "self-association"; that is, "association" between parts of the same signal. If a particular input signal is applied to the system until it is "learned", the system will "associate" any portion of this signal with the entire signal so that, at a later time, the application of a portion of the input signal will result in the production by the system of the entire signal (usually with a reduced signal-to-noise ratio).


In accordance with the foregoing objects, there is provided an information processing module comprising, in combination: a plurality (N) of input terminals 1, 2 ..., j ..., N adapted to receive N input signals s1, s2 ..., sj ..., sN, respectively; a plurality (n) of output terminals 1, 2 ..., i ..., n adapted to present n output responses r1, r2 ..., ri ..., rn, respectively; a plurality of junction elements, called mnemonders, each mnemonder coupling one of said input terminals (input j) with one of said output terminals (output i) and providing a transfer of information from input j to output i in dependence upon the signal sj appearing at the input j and upon the mnemonder transfer function Aij; and means for modifying the transfer function Aij of at least one of said mnemonders, when in a learning mode, in dependence upon the product of at least one of said input signals and one of said output responses; whereby modifications to the transfer functions of the mnemonders, when in a learning mode, take the form:

ΔAij = f(s1, s2 ..., sj ..., sN; r1, r2 ..., ri ..., rn)

There is also provided an information processing element comprising, in combination: a plurality (N) of input terminals 1, 2 ..., j ..., N adapted to receive N input signals s1, s2 ..., sj ..., sN, respectively; an output terminal i adapted to present an output response ri; a plurality (N) of junction elements, called mnemonders, each mnemonder coupling one of said input terminals (input j) with said output terminal (output i) and providing a transfer of information from input j to output i in dependence upon the signal sj appearing at the input j and upon the mnemonder transfer function Aij; and means for modifying the transfer function Aij of at least one of said mnemonders, when in a learning mode, in dependence upon the product of at least one of said input signals and said output response; whereby the modifications to the transfer function of the mnemonders, when in a learning mode, take the form:

ΔAij = f(s1, s2 ..., sj ..., sN; ri)

There is also provided a method of processing information comprising the steps of: receiving a plurality of input signals s1, s2 ..., sj ..., sN; producing a plurality of intermediate signals s'i1, s'i2 ..., s'ij ..., s'iN, each s'ij of which is dependent upon a respective one of said input signals sj and an associated transfer function Aij; producing a plurality of output responses r1, r2 ..., ri ..., rn, each of which is dependent upon a plurality of said intermediate signals s'ij; modifying at least one of said transfer functions Aij, when in a learning mode, in dependence upon the product of at least one of said input signals sj and one of said output responses ri; and setting at least one of said transfer functions Aij, when in a memory mode, to a particular desired value representing stored information, thereby to create and utilize a distributed memory.
There is further provided a method of processing information comprising the steps of: receiving a plurality of input signals s1, s2 ..., sj ..., sN; producing a plurality of intermediate signals s'i1, s'i2 ..., s'ij ..., s'iN, each s'ij of which is dependent upon a respective one of said input signals sj and an associated transfer function Aij; producing an output response ri which is dependent upon at least one of said intermediate signals s'ij; and modifying at least one of said transfer functions Aij, when in a learning mode, in dependence upon the product of at least one of said input signals sj and said output response ri.

SUMMARY OF THE INVENTION
The various objects of the present invention, set forth above, may be achieved by providing an information processing module having a plurality (N) of input terminals 1, 2 ..., j ..., N, adapted to receive N input signals s1, s2 ..., sj ..., sN, respectively; a plurality (n) of output terminals 1, 2 ..., i ..., n, adapted to present n output responses r1, r2 ..., ri ..., rn, respectively; and a plurality of junction elements, called mnemonders, coupling various ones (or a multiplicity) of the input terminals with various ones (or a multiplicity) of the output terminals. According to the invention, each mnemonder effects a transfer of information from an input terminal j to an output terminal i in dependence upon the signal sj appearing at the input terminal j and upon a so-called "transfer function" Aij of the mnemonder. Means are provided for modifying the matrix of transfer functions of the mnemonders in dependence upon the product of at least one of the input signals and one of the output responses of the module.
The term "transfer function", as it appears through-out the specification and claims herein, is used in its broad sense to define merely a function which modifies, in some way, the transfer of information (a signal) from the input to the output of a mnemonder. In the simplest case, the transfer function is equivalent to the gain or amplification of a mnemonder; however, it will be appreciated that a signal appearing at the input of a mnemonder may be modified in other ways to produce an appropriate mnemonder output signal.
In any case, we shall specify the mnemonder output signal s'ij as being the product of its input signal sj and the mnemonder transfer function Aij, thus:

s'ij = Aij sj

A module of the above-described type, which will hereinafter be called a Nestor TM module, is illustrated in Fig. 3. Fig. 3 represents a particular module in which each of the N input terminals is connected to each of the n output terminals by a single mnemonder. For purposes of clarity, the feedback lines from the summers have been omitted and only the mnemonders coupled to the input terminal j (mnemonders 1j, 2j ..., ij ..., nj) and to the output terminal i (mnemonders i1, i2 ..., ij ..., iN) are shown. It will be understood, however, that an array of N x n mnemonders is provided in this module, so that the matrix of transfer functions Aij will be:

A11  A21  ...  An1
A12  A22  ...  An2
 .    .         .
 .    .         .
A1N  A2N  ...  AnN
The particular Nestor module, in which each of the N input terminals is connected to each of the n output terminals by a single mnemonder, exhibits what is called (N, n) connectivity.
In actual practice, one or more of the mnemonder connections between the input and output terminals may be severed without harmful degradation of module function. In fact, the connections between the input terminals and output terminals via the mnemonders may be entirely random, provided that a sufficient number of connections (mnemonders) are present to provide for the transmission of information from input to output and for the storage of information in the mnemonders.

In a preferred embodiment of the Nestor module according to the present invention, the response ri at each output terminal is a linear function of the inputs applied thereto so that:

ri = Σ(j=1 to N) Aij sj

This linear relation, which is indicated in Fig. 3 by means of the summers Σ1, Σ2 ..., Σn, is the simplest relationship which gives the desired results. It will be understood, however, that the Nestor module is not restricted to this linear relation and that other powerful results may be obtained if the output response ri at one or more output terminals i is made a different function of the inputs supplied thereto. For example, the output response ri may be made proportional to the product of the inputs Aij sj, over all j.
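To make the preferred linear relation concrete, the following is a minimal numerical sketch of the module's response computation. Python with NumPy is used here purely for illustration; the patent describes analog circuitry, and all names and sizes below are assumptions, not part of the disclosure.

    import numpy as np

    N, n = 8, 4                      # numbers of input and output terminals (illustrative)
    rng = np.random.default_rng(0)

    A = rng.normal(size=(n, N))      # matrix of mnemonder transfer functions Aij
    s = rng.normal(size=N)           # input signals s1 ... sN

    # Each summer forms ri = sum over j of Aij * sj (the preferred linear relation).
    r = A @ s

    print(r.shape)                   # (n,) -- one response per output terminal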
In the Nestor module illustrated in Fig. 3, the numbers of input terminals N and output terminals n, respectively, may assume any values. The number of input terminals may exceed the number of output terminals, or vice versa, or they may be equal (N=n). It will be understood that the amount of information which may appear at any given time at the output terminals of the Nestor module is of the order of or larger than 2n bits and, further, that the higher the value of n, the greater will be the signal to noise ratio of the module. It is therefore desirable to make n reasonably large, taking into consideration the necessity for a commensurate increase in the number of circuit elements. Thus, for convenience, in a later discussion n will be assumed to be equal to N.

The Nestor module shown in Fig. 3 may be represented by, or considered to be comprised of, a plurality (n) of summers Σi, each having, associated therewith, a plurality (N) of input terminals and mnemonders and a single output terminal i. Such a subcomponent of the Nestor module, hereinafter called a "nouveron", is illustrated in Fig. 2. By comparing Fig. 2 with Fig. 3, it may be noted that the Nestor module comprises a plurality (n) of nouverons of the type illustrated in Fig. 2.
As shown in Fig. 2, each nouveron produces a single output response ri at an output terminal i. This output response is generated by the nouveron summer Σi in dependence upon the outputs s'i1, s'i2 ..., s'ij ..., s'iN of the N mnemonders i1, i2 ..., ij ..., iN, respectively.
As mentioned above, it is not necessary that the mnemonders of the Nestor module couple every input terminal to every output terminal of the module. Consequently, the nouveron illustrated in Fig. 2 may comprise fewer than N mnemonders, so that not every one of the input terminals 1, 2 ..., j ..., N will be coupled to the summer Σi.
Also as mentioned above, in the learning mode the transfer function Aij of at least one (and preferably all) of the mnemonders ij of the Nestor module is modified in dependence upon the product of at least one of the input signals and one of the output responses of the module. This algorithm for the modifications to A (apart from uniform decay unrelated to the inputs and outputs) may be written:

ΔAij = f(s1, s2 ..., sj ..., sN; r1, r2 ..., ri ..., rn)

where it is understood that the function f may be dependent upon only one of the input signals s1, s2 ..., sj ..., sN and only one of the output responses r1, r2 ..., ri ..., rn.
To avoid any unnecessary complication in wiring the Nestor module, we prefer to make the modifications to the transfer function Aij of a mnemonder dependent only upon the input signals to, and output response of, the nouveron of which that mnemonder is a part. Therefore, in a preferred embodiment of the present invention, we make the modifications to Aij dependent upon the signal sj and the output response ri associated with the particular mnemonder; i.e.:

ΔAij = f(sj, ri)

where it is understood here that the function f is always dependent upon both of the variables sj and ri.
The function f(sj, ri) may be expanded using Taylor's formula for functions of several variables. If this is done, we have:

f(sj, ri) = a00 + a01 sj + a10 ri + a11 ri sj + a21 ri² sj + a31 ri³ sj + ... + amn ri^m sj^n + ...

The first three terms of the Taylor expansion are of no immediate interest to us, since we require the modifications to Aij to be dependent upon the product of the input signal sj and output response ri. It may be proven that a modification to Aij which is a function of one or more of the first three terms only of the Taylor expansion does not result in an information processing system having the properties of intelligence exhibited by the Nestor module.
The lowest term of the Taylor expansion which does result in the desired properties according to the present invention is the fourth term, namely a11 ri sj. This term (which, because it is readily amenable to rigorous analytic treatment, will be treated in depth in the discussion that follows)


yields a modification to Aij in the following form:

ΔAij = η ri sj

where η is the constant of proportionality.
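The preferred modification rule lends itself to an equally short sketch. Again, Python/NumPy stands in for the analog mnemonder circuits; the function name and learning-rate value are invented for illustration.

    import numpy as np

    def learn_step(A, s, eta=0.1):
        """One learning-mode update: delta Aij = eta * ri * sj.

        A minimal sketch, assuming the preferred linear response: the
        response is formed first, then every mnemonder transfer function
        is incremented by the product of its nouveron's output response
        ri and its own input signal sj."""
        r = A @ s                    # ri = sum over j of Aij sj
        A += eta * np.outer(r, s)    # delta Aij = eta * ri * sj
        return r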
It will be understood, however, that other terms of the Taylor expansion above the third term may also produce powerful results when employed in the Nestor module. Terms with even powers of the variables sj or ri do not provide polarity discrimination in the modifications. Terms containing odd powers of these variables, such as the sixth term a31 ri³ sj, do provide this discrimination. Either can impart interesting properties to the Nestor module. In particular, since the various terms of the Taylor expansion yield different weightings in the modifications to A, these weightings can be used to advantage to obtain specific desired properties.
Fig. 4 illustrates one way in which a Nestor module according to the present invention may be connected and utilized in an adaptive information processing system. The Nestor module shown in this figure exhibits (N, N) connectivity; that is, the module comprises N input terminals and N output terminals and each of the input terminals is connected to each of the output terminals via a mnemonder as represented in Fig. 3.
The input signals s1, s2 ..., sj ..., sN to the Nestor module characterize an "event" in the environment, designated in Fig. 4 as the "input". This event can be an optical event, such as the sight of a pattern, an auditory event, such as the hearing of a tone, or any other imaginable or unimaginable event, for example the receipt of radiation signals from outer space. The only requirement for the event is that it be translatable in some way into a plurality of input signals s1, s2 ..., sj ..., sN which retain sufficient detail about the event to be of interest.
The signals s1, s2 ..., sj ..., sN are generated by a translator which performs some kind of analysis of the event and produces signals in response to this analysis. As an example, if the input is an optical event or "scene", the translator may divide the scene into a plurality of raster elements and produce signals s1, s2 ..., sj ..., sN in proportion to the optical density at each respective raster element. If the input is an auditory event, the translator may perform a Fourier analysis of the auditory information and produce signals s1, s2 ..., sj ..., sN in proportion to the amplitude of sound at each of the Fourier frequencies. It will be understood, however, that the translator to be used with the Nestor module is entirely a matter of choice and that numerous types of translators are well known in the art. Furthermore, since the translator per se forms no part of the present invention, it will not be discussed herein in detail.
As noted above, the Nestor module produces a plurality of output responses r1, r2 ..., ri ..., rN in response to a set of input signals s1, s2 ..., sj ..., sN. In a preferred embodiment of the present invention these output responses are continuous variables; that is, they may assume values from zero to any positive and/or negative maximum value determined by real world constraints that are dictated by the construction of the Nestor module.

If desired, in order to "force" the Nestor module to assume a particular output response (set of individual responses r1, r2 ..., ri ..., rN) upon the presentation of a particular input signal (set of individual signals s1, s2 ..., sj ..., sN), the information processing system may be provided with a suitable arrangement for impressing or applying specific responses (e.g., voltages) r1A, r2A ..., riA ..., rNA to selected ones or all of the output terminals 1, 2 ..., i ..., N. In this way, the Nestor module may be operated in the "active learning" mode and caused to provide a predetermined desired output response to any given input signal.
Also, if desired, the output terminals 1, 2 ..., i ..., N of the Nestor module may be connected to a plurality of threshold elements T1, T2 ..., Ti ..., TN, respectively, such as Schmitt triggers or the like, which produce an output signal if the output response applied thereto exceeds an adjustable threshold level θ1, θ2 ..., θi ..., θN, respectively. These threshold elements effectively convert the analog output response of the module into a digital output signal which may be conveniently used in subsequent processing. In addition, these threshold elements serve a decisional function to determine if and when a particular output response has been generated.
The threshold elements T1, T2 ..., Ti ..., TN may also be utilized in a mode of operation, which may be termed a "suppression mode", that assists in training the Nestor module. As will be discussed in detail hereinafter, this mode of operation requires the output of each of the threshold elements to be fed back to the Nestor module to inactivate all summers Σ
except the summer producing the output. In this way, all of the output responses r1, r2 ..., rN, except that response ri applied as an input to the threshold element Ti which is producing an output signal, will be suppressed. The advantage of this is that the Nestor module will thereby rapidly train itself to produce a pattern of output responses r1, r2 ..., ri ..., rN (and therefore output responses from the threshold elements T1, T2 ..., TN) in which only one of these responses is non-zero upon the presentation of a given set of input signals s1, s2 ..., sj ..., sN.
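A minimal sketch of the threshold elements and the suppression mode described above. The threshold values and the winner-selection rule are illustrative assumptions, not circuit details from the patent.

    import numpy as np

    def threshold_outputs(r, theta):
        """Schmitt-trigger-like elements: output 1 where a response
        exceeds its adjustable threshold, else 0."""
        return (r > theta).astype(int)

    def suppress(r):
        """Suppression mode: the firing threshold element is fed back to
        inactivate every summer except the one producing the output.
        Keeping only the strongest response is an assumed tie-break."""
        out = np.zeros_like(r)
        winner = int(np.argmax(r))
        out[winner] = r[winner]
        return out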
1'he output terminals of the Nestor module, or of the threshold elements Tl, T2 --, TN, if these are provided, may be connected to any type of output device or processor depending upon the action to be taken in dependence upon the output responses of the module. If the Nestor module is utilized-to identify visual patterns for example (such as bank check signatures), the outputs of the threshold elements may simply be connected to an alarm device which advises a human operator when a particular pattern has, or has not, been detected (e.g., a valid or a fraudulent signature). If the .
Nestor modul'e is utilized as a pure distributed memory for example, it may be coupled directly to a conventional digital computer (i.e., without a separate translator at the input side or threshold elements at the output side). A digital-to-analcg converter must, of course, be provided at the input interface of the module to convert the digital computer output to analog input signals sl, s2 ..., SN; and an analog-to-digital converter must be provided at the output interface of the module to quantize the analog output responses rl, r2 ~ rN for input to the computer.

~1 04~10~

Obviously, the Nestor module is capable of being used for a practically infinite number of purposes and the particula~ output device or processor employed will be determined by the particular circumstances of each case.
Since the output device or processor per se forms no part of the present invention it will not be described herein in any further detail.
'The Nestor module, according to the present invention, may be employed in an information processing system in which several modules are connected together either in series or in parallel, or in series/parallel. For example, the output terminals of two modules may be connected to the input terminals of a third module so that the first two modules may "pre-process" informa~ion received from the environment and pass this inform~tion ~o tha thir~ module for uLti~at~ proc~s--~ing an~
st,orage. Series connections and parallel connections between modules may thus increase the intellectual power of the infor-mation processing system.
The Nestor module, according to the present invention, may be constructed to "learn" at a desired rate. In the learning mode, the modifications to the transfer functions Ai of the mnemonders should be as defined above; in a p.eferred embodiment, these modifications take the form:

~Aij = n ri Sj By adjusting the value Of n , for example, it is possible to control the rate of modification, or rate of "learning" of the module. By setting n = 0 (~Aij = 0) it is also possible to completely "turn off" the learning mode of the module so ~0~;~10~ , that the module operates as a pure distributed memory. The use of the Nestor module in this memory mode, in which the transfer functions Aij are predetermined and fixed, is one of the important features of the present invention.
The Nestor module, according to the present invention, may also be constructed to "forget" at a predetermined rate, as well as to learn. Such operation may be accomplished by permitting the values of the transfer functions Aij to decay, for example at a constant rate. When in a learning mode such a loss of stored information is helpful, since the Nestor module may thereby "forget" details of its previous experience and thus generalize more rapidly. Conversely, after the Nestor module has been trained and it is operating in a memory mode, it is desirable to reduce any decay of the transfer functions Aij to "zero" (that is, the lowest value possible with existing components) so that the information stored in the Nestor module may be retained as long as possible without use of a buffer.
When utilizing a plurality of Nestor modules connected in series or in parallel, different modules may be operated in different modes to carry out different functions within an information processing system. For example, one or more modules may be operated in an information processing or learning mode (e.g., where η as well as the rate of uniform decay are reasonably large) while one or more modules may be operated in a pure memory mode (ΔAij and the rate of decay are zero). Fig. 8 shows an example of one such system consisting of three Nestor modules. Referring to that figure, it is apparent that the number of inputs of the bank H can be equal to, larger than,


or smaller than the sum of the outputs of Ro and RA, and that each output of Ro and RA can be connected to one or more inputs of H in an ordered or random fashion.
Finally, it will be appreciated that once a Nestor module has been "trained" after a period of operation in the learning mode, the values of the transfer functions Aij may be stored and used to instantly train another Nestor module. This "instant training" is accomplished simply by setting the values of the transfer functions Aij of a Nestor module to initial values A(0)ij before the module is placed into operation.
In a preferred embodiment of the present invention, the information processing system is provided with a conventional buffer storage device (either analog or digital) to which the values of the transfer functions Aij may be transferred from a trained module, and from which these values may be taken when the transfer functions of the same or a different module are set to their initial values A(0)ij.
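A minimal sketch of this "instant training" (Python; a file stands in for the buffer storage device, which is an assumption made purely for illustration since the patent only calls for a conventional analog or digital buffer):

    import numpy as np

    rng = np.random.default_rng(0)
    A_trained = rng.normal(size=(4, 8))   # stands in for a trained module's Aij

    # Transfer the trained values to the buffer storage device.
    np.save("trained_A.npy", A_trained)

    # "Instant training": a second module starts from A(0)ij = stored values.
    A_new_module = np.load("trained_A.npy")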
Having stated and described the basic structure of the Nestor module and of the information processing system according to the present invention, it is now necessary to consider the nature and operation of this structure in detail.
Accordingly, in the Detailed Description of the Invention that follows, we will explore the various properties exhibited by this module and this system.

DETAILED DESCRIPTION OF THE INVENTION
The present invention will now be described in detail with reference to Figs. 5 - 14 of the drawings. Presented immediately below is a discussion of the theoretical basis for the invention; there follows a description of a specific preferred embodiment for realizing the invention.
I. THEORETICAL EXPLANATION
A. Space of Events and Representations
Reference is made to Fig. 5, which illustrates a Nestor module that is subjected to an environment constituted by a number of "events". The duration and extent of an "event"
will finally be defined self-consistently in terms of the interaction between the environment and the adaptive system containing the Nestor module. For easier description, however, we proceed initially as though an event were a well-defined objective happening, and envision a space of events E labeled e1, e2, e3 ... ek. These events are "mapped" by the sensory and early processing devices of the adaptive system through an external mapping P (for "processing") into a signal distribution in the Nestor input space S labeled s1, s2, s3 ... sk. The external mapping P is denoted by the double arrow in Fig. 5.
For the time being, we assume that this external mapping is not modified by experience.
Although we need not describe the mapping P in any detail, since the particular type of translation from the environment to the input space is not important for our discussion, we note that this external mapping should be rich and detailed enough so that a sufficient amount of information is preserved to be of interest. In particular, the set of inputs S should reflect the degree of "separation" between events: that is, the degree of similarity (or other relationship) in any aspect of two or more events. We thus assume that the external mapping P from E to S has the fundamental property of preserving, in a sense, the "closeness" or "separateness" of events.
We now define a set of input signals sν which correspond to the νth incoming event eν, and a set of input signals sμ which correspond to the μth incoming event eμ. In this notation two events eν and eμ map into inputs sν and sμ whose separation is related to the separation of the original events.
In a vector representation, which will be employed throughout this discussion, we imagine that two events as similar as a white cat and a grey cat map into vectors which are close to parallel, while two events as different as the sound of a bell and the sight of food map into vectors which are close to orthogonal to each other.
Given the input signal distribution in S which is the result of an event in E, we imagine that this signal distribution is internally mapped onto a set R of output responses by an internal mapping A, denoted by the single arrow in Fig. 5. This latter type of mapping is modifiable in a manner to be described in detail below.
The actual connections between the inputs s and outputs r of the Nestor module may be random and redundant; there may be many or no connections between a particular input and output. However, for the purposes of discussion we idealize the network by replacing any multiplicity of connections between an input and output by a single junction, called a mnemonder, that summarizes logically the effect of all of the information transferred forward between the input terminal j in the S bank and the output terminal i in the R bank. As is illustrated in


Fig. 6, each of the N inputs in S is connected to each of the N outputs in R by a single mnemonder. The summers work so that the response, or the signal on any output terminal, say i in R, namely ri, is mapped from the signals sj on all the input terminals in S by:

ri = Σ(j=1 to N) Aij sj

where Aij is the transfer function of the ijth junction or mnemonder. This is the fundamental relation which gives the influence of input signals in S on the output signals in R. Although the satisfactory functioning of the Nestor module does not require so specific an assumption (i.e., ri need not be a linear function of all N inputs), the simplicity of this relation makes it easier to display the results in an explicit analytic form.
B. Associative Mapping, Memory and Logical Processes
It is in modifiable, internal mappings of the type A that the experience and memory of the Nestor module are stored.
In contrast with present machine memory which is local (an event stored in a specific place) and addressable by locality (requiring some equivalent of indices and files) the Nestor module memory is distributed and addressable by content or by association. We shall show below that the mapping A can have the properties of a memory that is non-local, content addressable and in which "logic" is a result of association and an outcome of the nature of the memory itself.
The mapping A is most easily written in the basis of the mapped vectors the system has experienced. In a preferred algorithm, A may be defined as:

A = Σμν cμν rμ × sν


where the corresponding sets of output signals for the νth and μth events eν and eμ are rν and rμ, respectively, and the parameter cμν is the coefficient of coupling between the νth input signals sν and the μth output signals rμ. As we shall see, the coefficient c normally grows with time as successive events e are mapped into the inputs s.
The ijth element of A gives the strength of the mnemonder between the incoming signal sj in the S bank and the outgoing response ri in the R bank. Thus, if only sj is non-zero:

ri = Aij sj

Since

Aij = Σμν cμν riμ sjν

the ijth mnemonder strength is composed of the entire experience of the system as reflected in the input and output signals connected to this mnemonder. Each experience or association (μν), however, is stored over the entire array of N x N mnemonders.
This is the essential meaning of a distributed memory. Each event is stored over a large portion of the system, while at any particular local point many events are superimposed.
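To make the outer-product construction concrete, here is a small sketch in Python/NumPy. The stored patterns, coefficients, and sizes are invented for illustration and are not taken from the patent.

    import numpy as np

    rng = np.random.default_rng(1)
    N, k = 64, 3                         # terminals and number of stored events

    S = rng.normal(size=(k, N))          # mapped input vectors s^nu
    S /= np.linalg.norm(S, axis=1, keepdims=True)
    R = rng.normal(size=(k, N))          # associated response vectors r^nu

    # A = sum over nu of c_nunu * (r^nu x s^nu): each association is an
    # outer product spread over the whole N x N array of mnemonders.
    c = np.ones(k)
    A = sum(c[v] * np.outer(R[v], S[v]) for v in range(k))

    r = A @ S[0]                         # recall: dominated by the r^0 term
    print(np.corrcoef(r, R[0])[0, 1])    # near 1 when the s^nu are nearly orthogonal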
1. Recognition and Recollection: The fundamental problem posed by a distributed memory is the address and accuracy of recall of the stored events. Consider first the "diagonal" portion of A, which is defined as follows:

R = (A)diagonal = Σν cνν rν × sν

(where the script R stands for "recognition and recollection"). An arbitrary event, e, mapped into the input signals s, will generate the response in R:

r = A s

If we equate recognition with the strength of this response r, say the value of

(r, r) = Σ(i=1 to N) ri²

(the "inner product" of the vector r with itself, i.e., the square of the length of r), then the mapping A will distinguish between those events it contains (the sν, ν = 1, 2 ... k) and other events which are separated from these.
Thè word "separated" used in this context now requires a more precise definition. In a type of argument used by J. A.
Anderson, Math. Bio-sc~ences 8, 137 ~1970), in analyzing a dis-tributed memory, the vectors s~ are assumed to be independent of one another and to satisfy the requirements that on the average N
Si =
i--1 N ~ 2 (Si) - 1-Any two such vectors have components which are random with respect to one another so that a new vector, s, presented to R above gives a noise-like response since on the average (s~, s) is small. The presentation of a vector seen previously, say s~, however, gives the response R sA = c~ r~ + noise.
It is then shown that if the number of imprinted events, k, is small compared to N, the signal-to-noise ratios are reasonable.
If we define separated events as those which map into orthogonal vectors, then clearly a recognition matrix composed of k orthogonal vectors s1, s2 ... sk,

R = Σ(ν=1 to k) cνν rν × sν,

will distinguish between those vectors contained, s1 ... sk, and all vectors separated from (perpendicular to) these. Further, the response of R to a vector previously recorded is unique and completely accurate:

R sλ = cλλ rλ.

In this special situation the distributed memory is as precise as a localized memory.
In addition, as has been pointed out by H. C. Longuet-Higgins, Proc. R. Soc. Lond. B 171, 327 (1968), a distributed memory may have the interesting property of recalling an entire response vector rλ even if only part of the signal sλ is presented. This is the case for the distributed memory discussed here. Let

sλ = s1λ + s2λ.

If only part of sλ, say s1λ, is presented, we obtain

R s1λ = cλλ (s1λ, sλ) rλ + noise.

The result is thus the entire response to the full sλ with a reduced coefficient, plus noise.
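A small numerical check of this partial-input recall (Python; the vectors are random and purely illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    N = 256
    s = rng.normal(size=N) / np.sqrt(N)    # stored signal s^lambda
    r = rng.normal(size=N)                 # its associated response r^lambda
    A = np.outer(r, s)                     # one imprinted association

    s_part = s.copy()
    s_part[N // 2:] = 0.0                  # present only part of the signal

    recalled = A @ s_part                  # = (s_part, s) * r: the full response,
    scale = s_part @ s                     #   scaled down by the overlap
    print(np.allclose(recalled, scale * r))   # True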
2. Association: The presentation of the event eν which generates the vector sν results in recognition and recollection if

R sν = cνν rν + noise.

Then the off-diagonal terms

A = (A)off-diagonal = Σ(μ≠ν) cμν rμ × sν

(where the script A stands for "association") may be interpreted as leading to the association of events initially separated from one another:

eν → sν → rν
eμ → sμ → rμ

where (sν, sμ) = 0.
For with such terms the presentation of the event eν will generate not only rν (which is equivalent to recognition of eν) but also (perhaps more weakly) rμ, which should result with the presentation of eμ. Thus, for example, if rμ will initiate some response (originally a response to eμ), the presentation of eν when cμν ≠ 0 will also initiate this response.
We, therefore, can write the association matrix:

A = Σμν cμν rμ × sν = R + A,

where

R = (A)diagonal = Σν cνν rν × sν (recognition)

and

A = (A)off-diagonal = Σ(μ≠ν) cμν rμ × sν (association).

The cμν are then the "direct" recognition and association coefficients.
3. Generalization: In actual experience the events to which the system would be exposed would not in general be highly separated or independent in a statistical sense. There is no reason, therefore, to expect that all vectors, sν, printed into A would be orthogonal or even very far from one another. Rather, it seems likely that often large numbers of these vectors would lie close to one another. Under these circumstances a distributed memory of the type contained in A will become confused and make errors. It will "recognize" and "associate" events never in fact seen or associated before.
To illustrate, assume that the system has been exposed to a class of non-separated events {e1 ... ek} = {eα} which map into the k vectors {s1 ... sk} = {sα}. The closeness of the mapped events can be expressed in a linear space by the concept of "community". We define the community of a set of vectors, such as {sα} above, as the lower bound of the inner products (su, st) of any two vectors in this set. Specifically, the community of the set of vectors {sα} is Γ, c[sα] = Γ, if Γ is the lower bound of (su, st) for all su and st in {sα}.
If each exposure results in an addition to A (or to R) of an element of the form cνν rν × sν, then the response to an event eu from this class, su ∈ {sα}, is

R su = r = Σν cνν rν (sν, su) = cuu ru + Σ(ν≠u) cνν (sν, su) rν,

where (sν, su) ≥ Γ.

If Γ is large enough, the response to su is, therefore, not very clearly distinguished from that of any other s contained in {sα}.
If a new event, ek+1, not seen before, is presented to the system, and this new event is close to the others in the class α (for example, suppose that ek+1 maps into sk+1 which is a member of the community {sα}), then R sk+1 will produce a response not too different from that produced for one of the vectors su ∈ {sα}. Therefore, the event ek+1 will be recognized though not seen before.
This, of course, is potentially a very valuable error.
For the associative memory recognizes, and then attributes properties to, events which fall into the same class as events already recognized. If, in fact, the vectors in {sα} have the form

sν = s0 + nν,

where nν, the noise factor, varies randomly, s0 will eventually be recognized more strongly than any of the particular sν actually presented. In this way, for example, a repeated signal can be extracted from random noise.
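A toy demonstration of this noise-extraction effect (Python; the common vector s0, the noise level, and all sizes are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(3)
    N, k, eta = 128, 50, 0.1

    s0 = rng.normal(size=N) / np.sqrt(N)      # common element of the class
    r0 = rng.normal(size=N)                   # response the class becomes tied to

    A = np.zeros((N, N))
    for _ in range(k):
        noise = 0.3 * rng.normal(size=N) / np.sqrt(N)
        A += eta * np.outer(r0, s0 + noise)   # imprint each noisy particular s^nu

    # The central vector s0 is now recognized (projects strongly onto r0)
    # even though s0 itself was never presented; pure noise is not.
    probe = 0.3 * rng.normal(size=N) / np.sqrt(N)
    print((A @ s0) @ r0 > (A @ probe) @ r0)   # True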


We have here an explicit realization of what might loosely be called a "logic" -- which, of course, is not logic at all. Rather, what occurs might be described as the result of a built-in directive to generalize. The associative memory by its nature takes the step

s0 + n1, s0 + n2 ... s0 + nk → s0

which may be described in language as passing from particulars (e.g., cat1, cat2, cat3 ...) to the "general" ("cat").
How fast this step is taken depends (as we will see in the next section) on the parameters of the system. By altering these parameters, it is possible to construct mappings which vary from those which retain all particulars to which they are exposed, to those which lose the particulars and retain only common elements -- the central vector of any class.
In addition to "errors" of recognition, the associa-tive memory also makes errors of association. If, for example, all (or many) of the vectors of the class {s} with a reason-ably large community associate some particular r~ so that the mapping A contains terms of the form ~ C~v r~ x SV

with c~ 0 over much of v = 1,2 ... k, then the new event ek which maps into sk+l as in the previous example will not only be recognized (Rsk~l R sk+l) large but will also associate r~
A sk 1 = c r~ + ....
as strongly as any of the vectors in {sa;.


If errors of recognition lead to the process described in language as going from particulars to the general, errors of association might be described as going from particulars to a "universal": cat1 meows, cat2 meows ... → all cats meow.
There is, of course, no "justification" for this process. It is performed as a consequence of the nature of the system. Whatever efficacy it has will depend on the order of the world in which the system finds itself.
By a sequence of mappings of the form above (or by feeding the output of A back to itself) one obtains a fabric of events and connections (depicted in the original by a diagram of chained event-to-response mappings, not recoverable from this scan)
which is rich as well as suggestive. One easily sees the possibility of a flow of electrical activity influenced both by internal mappings of the form A and the external input.
This flow is governed not only by direct association coefficients cμν (which can be explicitly learned, as described next) but also by indirect associations due to the overlapping of the mapped events, as indicated in Fig. 7. In addition, one can easily imagine situations arising in which direct access to an event, or a class of events, has been lost (cνν = 0 in Fig. 7) while the existence of this event or class of events in A influences the flow of electrical activity.
4. Separation of Vectors: Any state in a distributed memory is generally a superposition of various vectors. Thus one has to find a means by which events (or the entities into which they are mapped) are distinguished from one another.


There are various possibilities. It is not at all difficult to imagine non-linear or threshold devices that would separate one vector from another. But the occurrence of a vector in the class {sα} in a distributed memory results in a set of output responses over a large number of outputs ri, each of which is far from threshold. A basic problem, therefore, is how to associate the threshold of a single response with such a distributed signal. How this might be done will be described in a later section.
In addition to the appearance of such threshold outputs, there can be a certain separation of mapped signals due to actual localization of the areas in which these signals occur. For example, optical and auditory signals could be subjected to much processing before they actually meet in one Nestor module. It is possible to permit the identification of optical or auditory signals (as optical or auditory) to take place first; connections between an optical and an auditory event might then occur subsequently in a second level of processing, from the response bank R to a second response bank H, as suggested in Fig. 8.

C. Module Modification, Learning
The ijth element of the associative mapping A,

Aij = Σμν cμν riμ sjν     (1)

is a weighted sum over the j components of all mapped signals, sν, and the i components of the responses, rμ, appropriate for recollection or association.
Such a mapping may, of course, be attained by adjusting the weight of each mnemonder so that its value is equal to the corresponding Aij above. This is the simplest mode in which the Nestor module can function.


A most important characteristic of the Nestor module is its self-modification capacity. When functioning in a learning mode, the Nestor module modifies the weights of its mnemonders so that (apart from a uniform decay described later):

ΔAij ∝ ri sj     (2)

This ΔAij is proportional to the product of the input sj and the output ri. Alterations in junction strengths proportional only to sj, or only to the immediate junction response s'ij, are also possible; however, such modifications do not result in the various properties discussed here. The addition of such changes in A, indicated by the proportionality (2) above, for all associations rμ × sν results, also, in a mapping with the properties discussed in the previous section.
To make the modifications to Aij,

    δA ∝ rω × sν ,     (3)

by the self-modification procedure of the Nestor module the system should have the signal distribution sν in its S bank and rω in its R bank, where sν is mapped in from the event eν by P.
In what we denote as "active learning" the Nestor module may be presented with an input sν and be forced to produce the "correct" response, say rω. This can be done, for example, with apparatus of the type illustrated in Fig. 9 in which the desired response values r1A, r2A ..., riA ..., rNA may be applied to the outputs 1, 2 ..., i ..., N to constrain the output signals r1, r2 ..., ri ..., rN to equal these desired values. Since the output signals are utilized in the Nestor module in the modification of the elements Aij according to proportionality (2) above; i.e., according to

    δA = η rω × sν

(where η is the constant of proportionality), upon repeated application of the input sν, the module very rapidly builds up a detailed and accurate memory of the output response rω to the input sν.
Active learning also describes a type of learning in which a system response to a set of inputs is matched against an expected or desired response and judged correct or incorrect. In this case, if the system is presented with some input sν, its output response rν thereto may be compared to the "right" response rω and the elements Aij caused to be incremented in a direction which would result in a response that is closer to rω if sν were applied again.
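Both variants of active learning may be sketched as follows (a hypothetical illustration; the learning rate, the error-correction step and the convergence test are our own choices, not the patent's only form):

    import numpy as np

    rng = np.random.default_rng(2)
    eta, N = 0.5, 6
    A = 0.01 * rng.standard_normal((N, N))
    s = rng.standard_normal(N); s /= np.linalg.norm(s)
    r_correct = rng.standard_normal(N)         # the "correct" response r^omega

    # Variant 1: force the outputs to the desired values and apply (2)
    for _ in range(20):
        A += eta * np.outer(r_correct, s)      # delta A = eta * r^omega x s^nu

    # Variant 2: increment A toward a response closer to r^omega
    for _ in range(50):
        A += eta * np.outer(r_correct - A @ s, s)

    print(np.allclose(A @ s, r_correct, atol=1e-6))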
It is apparent that with active learning the human operator of the system is required to know the appropriate response rω to the various inputs sν. However, the Nestor module is capable of another mode of operation, which we will call "passive learning", that does not require human intervention. In particular, the Nestor module is capable of identifying, recognizing, recalling, generalizing, or associating features of the environment to which it is exposed even though no previous analysis or identification of the features in the external environment has been made. In this type of learning, the human operator need not even be aware of the relevant features in the environment which are being extracted and processed in the Nestor module.
To arrive at an algorithm which produces passive learning, we utilize a distinction between forming an internal representation of events in the external world as opposed to producing a response to these events which is matched against what is expected or desired in the external world.
The simple but important idea is that the internal electrical activity which in one module signals the presence of an external event is not necessarily the same electrical activity which signals the presence of that same event for another module. There is nothing that requires that the same external event be mapped into the same signal distributions by different modules. The event eν, which for one module is mapped into the signal distributions rν and sν, in another module may be mapped into r′ν and s′ν. What is required for eventual agreement between modules in their description of the external world is not that the mapped electrical signals be identical but rather that the relation of the signals to each other and to events in the external world be the same. Figure 10 illustrates this principle in graphic form.
1. Passive Learning: Call A(t) the A matrix (that is, the set of Aij's) after the presentation of t events ("time" t).
We write:
    A(t) = γ A(t-1) + δA(t) , where δA(t) = η rt × st .

In this equation, as mentioned above, η is the constant of proportionality and γ is a dimensionless "decay constant" which is a measure of the uniform decay of information at every site (a type of forgetting). Usually, 0 < γ < 1.
We also now introduce the parameter ε, defined as the value of η when the inputs st are normalized. ε, which is a measure of the rate at which modifications are made to the A's (a rate of learning), will be used in illustrative calculations made for normalized inputs s. The values of the parameters η, ε and γ can be adjusted at the discretion of the user to produce the desired system properties. For example,


during the period of acquisition of information (learning, or program writing), η or ε might be reasonably larger than zero (e.g., η ≈ ε ≈ 1/10) and γ might be reasonably smaller than one (e.g., γ ≈ 9/10) so that the system will acquire information and lose details. After the period of acquisition, it may be useful to set η = ε = 0 and γ ≈ 1 so that the system will no longer "learn" but will retain for an arbitrary period of time all the information it has acquired.
In a functioning module this storage time is determined by time constants characteristic of the circuits. For reasons of economy or convenience these may be chosen to allow storage for periods of the order of minutes to hours. For longer storage, under such circumstances, one could transfer the contents of the distributed memory (for example, the values of the Aij) to a buffer memory for transfer back when needed.
In general, a system in which γ < 1 loses details and has a greater capacity to generalize. It turns out that values of γ slightly less than or equal to 1 are of the most interest.
In order to keep the system from becoming saturated, it is also convenient to make the modification zero (let η = 0) when the output, r = As, exceeds a specified maximum, i.e., when

    (r, r) = (As, As) > specified maximum.
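One learning step with uniform decay and the saturation cut-off might be sketched as follows (all parameter values illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    gamma, eta, r_max = 0.9, 0.1, 100.0        # decay, learning rate, saturation
    A = rng.standard_normal((5, 5))
    s = rng.standard_normal(5); s /= np.linalg.norm(s)

    for t in range(200):
        r = A @ s
        step = 0.0 if r @ r > r_max else eta   # let eta = 0 beyond the maximum
        A = gamma * A + step * np.outer(r, s)  # A(t) = gamma A(t-1) + delta A(t)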
In what follows we normalize all vectors, (s, s) = 1, so that ε, which is now taken to be constant, becomes dimensionless.
If we now say that the total response is

    rt = γ A(t-1) st + rR + rA ,

we see that it is composed of three terms: a passive response, γ A(t-1) st, an active but random term, rR, and an active response, rA. For purely passive learning we consider only the first term, so that

    δA(t) = ε rt × st = ε (γ A(t-1) st) × st ,

where the responses are just those produced by the existing mapping, A(t-1), when the vector st in S is mapped into R:


    rt = γ A(t-1) st .

The passive learning algorithm is then

    A(t) = γ A(t-1) (1 + ε st × st) ,

where ε in general would usually be much smaller than one. Before any external events have been presented, A has the form A(0), which could be random. The effect of A(0) on the internal mapping will be analyzed below.
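A sketch of the passive algorithm (illustrative parameters; the final comparison of directions is our own check): repeated presentation of a normalized vector s drives A toward the form rν × sν with rν = A(0) sν, a response the network generates for itself:

    import numpy as np

    rng = np.random.default_rng(4)
    gamma, eps, N = 1.0, 0.1, 6
    A = rng.standard_normal((N, N))            # random initial mapping A(0)
    s = rng.standard_normal(N); s /= np.linalg.norm(s)
    r0 = A @ s                                 # r^nu = A(0) s^nu, self-generated

    for t in range(40):
        # A(t) = gamma * A(t-1) * (1 + eps s^t x s^t)
        A = gamma * A @ (np.eye(N) + eps * np.outer(s, s))

    u = A @ s
    print(np.allclose(u / np.linalg.norm(u), r0 / np.linalg.norm(r0)))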
With this algorithm, after k events A has the form:
    A(k) = γ^k A(0) Π(ν=1..k) (1 + ε sν × sν) ,

where Π is an ordered product in which the factors with lower indices stand to the left:

    Π(ν=1..k) s(ν) = s(1) s(2) ... s(k) .

This can also be written:
v=l This can also be written:
    A(k) = γ^k A(0) [1 + ε Σ(ν=1..k) sν × sν + ε² Σ(μ<ν) sμ × sν (sμ, sν)
           + ... + ε^k s1 × sk (s1, s2)(s2, s3) ... (sk-1, sk)] .

The passive learning algorithm generates its own response A(0) sν to the incoming vector sν, a response that depends on the original configuration of the network through A(0) and on the vector sν mapped from the event eν. For example, if sν is the only vector presented, A eventually takes the form

    A ∝ rν × sν , where rν = A(0) sν .

2. Special Cases of A: We now display the form of A in several special cases; in all of these ε is assumed to be constant and small.


(a) If the k vectors are orthogonal, A becomes

    A(k) = γ^k (A(0) + ε A(0) Σ(ν=1..k) sν × sν) .

Letting A(0) sν = rν, the second term takes the form of the "diagonal" part of A

    (A)diagonal = R = Σ(ν=1..k) rν × sν

and will serve for the recognition of the vectors s1 ... sk.
(It should be observed that the associated vectors rν are not given in advance; they are generated by the network.) If ε is small, however, this might be inadequate for recognition since the recognition term would be weak. Further, it will usually be more useful if recognition is set to occur only after repeated exposure to the same event.
(b) The following example demonstrates that the passive learning algorithm does build up recognition coefficients at an exponential rate for repeated inputs of the same event. If the same vector s is presented ℓ times, A eventually becomes

    A(ℓ) = γ^ℓ A(0) (1 + e^(ℓε) s × s) .
If ℓ is large enough so that e^(ℓε) >> 1, the recognition term will eventually dominate. When e^(ℓε) becomes large enough it may be desirable to adjust the value of ε so that there is no further growth. This can be accomplished by making ε a function of the response to the incoming vector so that beyond some maximum value there is no further increase of the coefficient.
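This exponential build-up is easily checked numerically (ε and ℓ below are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(5)
    eps, N, ell = 0.1, 6, 100
    A0 = rng.standard_normal((N, N))
    s = rng.standard_normal(N); s /= np.linalg.norm(s)

    A = A0.copy()
    for _ in range(ell):
        A = A @ (np.eye(N) + eps * np.outer(s, s))

    # the recognition coefficient grows as (1 + eps)^ell ~ e^(ell * eps)
    growth = np.linalg.norm(A @ s) / np.linalg.norm(A0 @ s)
    print(growth, (1.0 + eps) ** ell)          # the two values agree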
(c) The presentation of m orthogonal vectors ℓ1, ℓ2 ... ℓm times results in a simple generalization of the second result. Taking γ = 1 for simplicity:


    A(m) = A(0) (1 + Σ(ν=1..m) e^(ℓν ε) sν × sν) ,

which is just a separated associative recognition and recall matrix

    A = Σ(ν) cνν rν × sν , with e^(ℓν ε) ↔ cνν .

(d) Some of the effect of non-orthogonality can be displayed by calculating the result of an input consisting of ℓ noisy vectors distributed randomly around a central vector s:

    sν = s + nν .
Here nν is a "stochastic vector" (i.e., a vector that varies randomly) whose magnitude is small compared to that of s.
We obtain

    A(ℓ) ≈ γ^ℓ A(0) (1 + e^(ℓε(1 + n̄²)) s × s) ,

where n̄ is the average magnitude of nν. We see that the generated A(ℓ), with the additional factor due to the noise, is just of the form for recognition of s. Thus the repeated application of a noisy vector of the form above results in an A which recognizes the central vector s. This again provides a means of separating signal from noise.
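A numerical illustration of this noise-averaging property (the noise level and the repetition count are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(6)
    eps, N, ell = 0.1, 8, 200
    s = rng.standard_normal(N); s /= np.linalg.norm(s)
    A = rng.standard_normal((N, N))            # A(0)
    r0 = A @ s                                 # response to the central vector

    for _ in range(ell):
        sv = s + 0.1 * rng.standard_normal(N)  # noisy input s^nu = s + n^nu
        A = A @ (np.eye(N) + eps * np.outer(sv, sv))

    # the grown part of A recognizes the central vector s, not the noise
    u = A @ s
    print(u @ r0 / (np.linalg.norm(u) * np.linalg.norm(r0)))   # close to 1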
3. Structure of the Mapped Space: The communities or separated classes of the signal or external spaces, E or S, will be the same as those of the mapped space, R, if

    (rα, rβ) = (sα, sβ) , where rα = A(0) sα .

This will be the case if A(0) satisfies the relation

    Ã(0) A(0) = I (the identity matrix)     (4)

or

    Σ(i=1..N) A(0)ij A(0)ik = δjk ,

where δjk = 1 for j = k and δjk = 0 for j ≠ k, for then it follows that

    (rα, rβ) = (A(0) sα, A(0) sβ) = (sα, sβ) .

This can easily be arranged. If, for example, we choose A(0) = I, then (4) is satisfied and the S space maps into itself: rα = sα.
It is interesting to note that even a random A(0) will on the average satisfy the requirement (4). Suppose that A(0) is a random symmetric matrix which satisfies the conditions

    Σ(i=1..N) A(0)ij ≈ 0 for all j, and Σ(i=1..N) (A(0)ij)² ≈ 1 ;

then

    Σ(i=1..N) A(0)ij A(0)ik ≈ 0 for j ≠ k ,

while

    Σ(i=1..N) A(0)ij A(0)ik = Σ(i=1..N) (A(0)ij)² ≈ 1 for j = k .


Thus the condition (4) is satisfied; therefore a random A(0), as above, will lead to a mapped space with the same communities and classes as the original signal space.
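This may be checked numerically; the sketch below uses, for brevity, a general (not symmetrized) random matrix scaled so that Σi (A(0)ij)² ≈ 1, which has the same average behavior:

    import numpy as np

    rng = np.random.default_rng(7)
    N = 2000                                   # large N makes the average sharp
    A0 = rng.standard_normal((N, N)) / np.sqrt(N)   # sum_i (A0_ij)^2 ~ 1

    sa = rng.standard_normal(N); sa /= np.linalg.norm(sa)
    v = rng.standard_normal(N)
    v -= (v @ sa) * sa; v /= np.linalg.norm(v)
    sb = 0.8 * sa + 0.6 * v                    # (s^a, s^b) = 0.8 by construction

    # (r^a, r^b) ~ (s^a, s^b): the mapped space keeps the same structure
    print(sa @ sb, (A0 @ sa) @ (A0 @ sb))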
4. Association Terms: Off-diagonal or associative terms can be generated as follows. Assume that A has attained the form

    A = Σ(ν=1..k) A(0) sν × sν = Σ(ν=1..k) rν × sν .
v=l v=l Now present the events e and e~ so that they are "associated", so that the vectors s and s~ occur or "map" together. (The precise conditions which result in such a simultaneous mapping of Sa and s~ will depend on the construction of the system.
The simplest situation to imagine is that in which the vector (sa + sB) is mapped if e and e~ are presented to the syst~m ciose enough to each other in ti~e.) iie may assume Lhat e~
and eB are separated so that (sa, s~) = 0. In the S bank, if the vector is normalized for convenience, we then have i/~ (sa + s~) ' After one such presentation of e~ and e~ , A becomes:
(again for simplicity setting~ = 1) A( )~ k r~ x s~ + 2(r~ x s~ + r~ x s~ ).
The second term given the association between ~ and ~ with the cocfficient c~ = c~ = /2 which generally (except ln special circumstances) would be most useful if small. If Sa and s~ do not occur again in association, c ~ or c~ (although they do grow upon the presentation of s~


or sβ separately) remain small compared to the respective recognition coefficients cαα or cββ. However, if (1/√2)(sα + sβ) is a frequent occurrence (appearing for example ℓ times), the coefficient of the cross term becomes

    cαβ ~ e^(ℓε)

and becomes as large as the recognition coefficient.
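A sketch of the build-up of such a cross term (orthogonal unit vectors and an arbitrary repetition count are assumed):

    import numpy as np

    rng = np.random.default_rng(8)
    eps, N = 0.1, 8
    sa = np.zeros(N); sa[0] = 1.0              # s^alpha
    sb = np.zeros(N); sb[1] = 1.0              # s^beta, orthogonal to s^alpha
    A0 = rng.standard_normal((N, N))
    A = A0.copy()

    paired = (sa + sb) / np.sqrt(2)            # e^alpha and e^beta occur together
    for _ in range(100):
        A = A @ (np.eye(N) + eps * np.outer(paired, paired))

    # presenting s^alpha alone now also evokes r^beta = A(0) s^beta
    rb = A0 @ sb
    u = A @ sa
    print((u @ rb) / (np.linalg.norm(u) * np.linalg.norm(rb)))  # substantial overlap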
With the previous results we have established that the signal and response spaces, along with the mapping that connects them, contain a structure that is analogous to the original structure in the external environment or the event space, E. This means the following:
(1) The classes or the communities of the response space, R, are the same as those of the external or signal spaces, E and S.
(2) Classes or events which are associated in the external space (those which occur in association during a learning period) become associated in the response space so that, after the learning period, the occurrence of one member of the associated classes or events in the external space E, and therefore in the signal space S, will map both members of the associated classes or events in the response space R, even though they are very different types of events.

5. Separation of Events - Threshold Devices: We have dealt above with linear mappings and spaces. As a consequence, a state is in general a superposition of several vectors. To distinguish the events -- or the signals into which they are
mapped -- from one another we can incorporate a threshold or other non-linear device into the system. There can also be a separation of mapped signals due to localization of the areas in which these signals occur. For example, as illustrated in Fig. 8, optical and auditory signals can be subjected to processing in separate modules before they actually meet in a common module. Thus the identification of optical or auditory signals would occur first from the module into which they are initially mapped. Associations between an optical and an auditory event would then arise in the common module.
An example of a threshold device is described below.
Since a signal in a distributed memory is spread over a large number of inputs or outputs, even a large signal [(s, s) large] might be composed of components, si, each of which is quite small. A basic problem, therefore, is how to associate the threshold of a single device with such a distributed signal. This can be accomplished by adding threshold devices T1, T2 ..., Ti ..., TN to the basic module as illustrated in Fig. 11. For example, the threshold device Ti gives an output if the absolute value of its input |ri| exceeds some predetermined value:

    |ri| > θi ,

where θi is the signal threshold of the device. It is important to note that the original input s could either be an original input from E or the output of a previous module.
A repeated input of the pattern sα maps into the output pattern rα which, by a repetition of the argument given above, grows exponentially. As a result an arbitrary but repeated input, sα, will eventually activate a threshold device which will respond to that input. It is important to note that:
(a) The input need not be known beforehand;

(b) The input might be submerged in random noise;
and (c) Which threshold device responds to the pattern sα also need not be known in advance. (With the algorithm above, the largest component of the response rα determines which device responds.) By a simple variation, a particular threshold device could be designed to respond to a particular pattern.
With the addition of lateral connections as indicated in Fig. 11, the firing of a single threshold device in response to the pattern sα would then suppress the response of the other threshold devices to this pattern. If the parameter γ < 1 during the period of acquisition, and sα → Ti, then the response to sα would be modified due to the combined action of the decay (γ < 1) and the lateral suppression so that only the ith component of the response rα to the input sα would remain substantially larger than zero. In a final state we would have riα > θi → output signal.
Thus a single threshold device (or as many as desired) could respond to a single pattern.
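A threshold bank with lateral suppression may be sketched as follows (a uniform threshold and a simple winner-take-all rule are our assumptions; the lateral connections could be realized in other ways):

    import numpy as np

    def threshold_bank(r, theta):
        # each device T_i fires when |r_i| > theta; lateral connections
        # then suppress every response except the strongest one
        fired = np.abs(r) > theta
        out = np.zeros_like(r)
        if fired.any():
            i = np.argmax(np.abs(r))           # largest component wins
            out[i] = r[i]
        return out

    r = np.array([0.1, 2.5, -0.4, 1.9])
    print(threshold_bank(r, theta=1.0))        # only device 1 responds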
In addition it is useful not to modify further (allowing, however, the decay) the mnemonders associated with the ith threshold element (1i, 2i ..., Ni) when this ith element produces an output signal in excess of a specified maximum.
This may be accomplished by ending the modifications in the above-mentioned mnemonders for some fixed number of events every time the said output exceeds the specified maximum.
If such a system is presented with separated or orthogonal signals during a learning period, the threshold devices will, with the exponential rapidity described previously, come to be activated by the different patterns.


Thus, for example, N orthogonal repeated incoming signals would, after the learning period, produce a response in N different threshold devices.
In this way the threshold devices could learn to respond to repeated features in the environment even though these were not known to the user.
In addition, the association of these devices with output patterns of a prior module would serve for the separation of events or vectors mentioned above.

* * *

In conclusion, from the theoretical explanation of the present invention set forth above, it will be appreciated that the Nestor module is an extremely powerful tool for processing information. In particular, this module is capable of exhibiting recognition, recollection, generalization and association, defined earlier, without the necessity for human intervention in any decisional or learning process. A
specific preferred embodiment of the Nestor module will now be described which utilizes only conventional circuit elements and which lends itself to realization with known techniques of micro-miniaturization.

II. A SPECIFIC REPRESENTATIVE EMBODIMENT
It will be appreciated, from the structural and theoretical explanation of the present invention set forth above, that the invention may be realized in a number of ways. The following is a description of what is at present a preferred embodiment of apparatus for realizing the present invention, which apparatus utilizes only standard electrical components such as resistors, capacitors, diodes and transistors.
It will be understood, however, that this preferred embodiment is described for purposes of explanation only, and is not intended to limit the scope of the invention.
It will be recalled that the ith nouveron (Fig. 2) of a Nestor module comprises N inputs s1, s2 ..., sN leading to N mnemonders i1, i2 ..., iN connected to a summer Σi which produces an output ri. The ijth mnemonder has a transfer function Aij; that is, the output of this mnemonder is sj' = Aij sj, where sj is the jth input to the Nestor module.
In the preferred embodiment to be described, it will be assumed that the information signals sj, sj' and ri are, in all cases, represented by voltage levels. Again, at the risk of appearing repetitious, it will be understood that the information signals may also be represented by variations in some other aspect of real world electrical signals. For example, the information signals may be represented by frequencies (pulse repetition frequencies or sinusoidal frequencies), by pulse widths, by currents, by magnetic fields, by magnetizations or even a combination of these.
However, assuming that the information signals are to be represented by voltages, the transfer function Aij may be viewed as "amplification" (sometimes amplification, sometimes diminution, sometimes change of sign) and, in particular,

. namplification" which is to be modified in dependence upon the - input 8 j to the mnemonder and the output ri f the ith nouveron.
In one preferred algorithm, the modification function is described by the following equation:

    δAij = η ri sj .     (5)

In order to realize an amplifier, the gain of which is controlled by increments to some previous gain, it is necessary to provide a means to store information of the previous gain and a means for adding and/or subtracting increments to or from this stored value. The storage function in the preferred embodiment of the invention is realized by an element, such as a capacitor, for storing an electrical charge Q; the incrementing function in this embodiment is therefore realized by apparatus for varying the charge Q from zero to +Q0 or -Q0, the limits of the storage element.
Referring now to Fig. 12, there is shown the ijth mnemonder 100 of the ith nouveron 102. A voltage source 104 applies a voltage Vij via line 106 to an amplifier 108 to control its gain Aij. The amplifier 108 therefore changes the voltage signal sj appearing on line 110 and produces an output voltage sj' = Aij sj on line 112.
The voltage Vij, which regulates the gain, is derived from and is therefore proportional to Qij, the charge stored in the ijth mnemonder. The dashed lines 114 and 116 indicate, respectively, that this charge can be varied (increased or decreased) in dependence upon the input signal sj appearing on line 110 and the output response ri appearing on line 118.
(Line 116 is shown as going to the source 104 to convey symbolically the idea of modification by ri; in the present embodiment the feedback ri is actually directed to a device which modifies the width of the input pulses of heights s1, s2 ..., sN. See Fig. 14

and the description thereof.) To modify Qij, and therefore Vij and Aij = sj'/sj in accordance with the equation (5), above, it is necessary to change the stored charge Qij by the product ri sj. [Note that the N charges of the ith nouveron (namely, Qi1, Qi2 ..., Qij ..., QiN) must be modified in proportion to the respective inputs they receive (namely, s1, s2 ..., sj ..., sN) and the common output ri of the nouveron.] To achieve the desired modification, we use as a signal sj the height of a voltage pulse 120, while the width ti of this pulse is made proportional to |ri|. If ri < 0, the voltage pulse 120 is inverted.
Thus:
    δQij ∝ sj ti , ti ∝ |ri| ; ri/|ri| = +1 → no inversion, ri/|ri| = -1 → inversion.

Specific circuits for the summing aspect of nouveron 102 and the mnemonder 100 are shown schematically in Figs. 13 and 14. The summer 122 of the nouveron 102 is represented in Fig. 13 as a classic summation circuit employing an operational amplifier 124 connected to receive input signals s1', s2' ..., sj' ..., sN' via a plurality of equally valued resistors 126, 128 ..., 130 ..., 132, respectively. The operational amplifier is provided with a resistor 134, in a known manner, and provides an output signal ri on line 136 that is proportional to the sum of the various signals s1', s2' ..., sj' ..., sN' which are the outputs of the individual mnemonders of the nouveron 102.
The output signal ri is supplied via a feedback line 138 to each mnemonder of the nouveron.
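The pulse scheme may be summarized in a small model (hypothetical; the constant k stands for circuit constants not enumerated here): the pulse height carries sj, the pulse width carries |ri|, and inversion carries the sign of ri, so that the transferred charge is proportional to the product ri sj:

    def charge_increment(s_j, r_i, k=1e-6):
        # height of pulse 120 carries s_j; inversion encodes the sign of r_i
        height = s_j if r_i >= 0 else -s_j
        width = abs(r_i)                       # t_i proportional to |r_i|
        return k * height * width              # delta Q_ij ~ r_i * s_j

    print(charge_increment(0.5, -2.0))         # k * 0.5 * 2.0, with a minus sign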

The ijth mnemonder shown in Fig. 14 receives the input signal on line 110 and produces its output signal sj' on line 112. The jth input is in the form of a pulse of amplitude sj (which may be either positive or negative) and a standard pulse width determined by the input processor or buffer that connects the mnemonder, nouveron and Nestor module to the outside world (environment). As stated above, the input and output devices employed with the Nestor module form no part of the present invention and are consequently not described herein in detail. The construction of the input processor will depend, of course, upon the type of events which are to be mapped into the Nestor module, while the output processor will be constructed in accordance with the ultimate objectives of the system; that is, the action to be taken in response to the outputs of the Nestor module.
Suffice it to say, for the purposes of this discussion, that the input processor supplies a plurality of input pulses to the input terminals 1, 2 ..., j ..., N of the Nestor module. The input pulses, which are supplied simultaneously for each event, all have a standard width, say several μsec., and have amplitudes proportional to the variables s1, s2 ..., sj ..., sN, respectively, which are to be processed.
The pulse amplitudes can be negative to reflect negative values of the signals s1, s2 ..., sj ..., sN.
Referring again to Fig. 14, the input pulse of amplitude sj is applied to a pulse width modulator 140 which simultaneously receives a signal ri on line 138 that is taken from the output of the summer Σi (i.e., the apparatus illustrated in Fig. 13). The signal ri results from the summation performed by the summer Σi on the signals sj' put out by the mnemonders.
The pulse width modulator transmits a pulse on line 142 having the same absolute amplitude as the input pulse on line 110 (but inverted if ri < 0) and having a pulse width

proportional to the variable |ri|.
The positive and negative pulses appearing on line 142 are amplified and separated according to polarity (sign) by means of an operational amplifier 144 and its associated resistors 146, 148 and 150 and diodes 152 and 154. If a pulse appearing on line 142 is positive, it is channeled to a line 156 as a positive pulse; if negative, it is channeled to a line 158 as a negative pulse. Whether positive or negative, it will be appreciated that the area under the pulse (as viewed graphically) is proportional to the absolute magnitude of the product of sj and ri.
The positive pulses on line 156 are applied to an inverting operational amplifier 160, which includes resistors 162, 164 and 166, and ultimately to the base of a PNP transistor 168. The pulses arriving at the base of the transistor 168 thus have the appropriate polarity (sign) to activate the transistor, and to charge or discharge a capacitor 170 through a resistor 172. The amount of charge deposited on or removed from the capacitor 170 by each positive pulse is proportional to the product of the amplitude of the pulse, which determines the effective conductance of the transistor 168, and the pulse width, which determines the duration of the charging or discharging operation.
The negative pulses on line 158 are supplied to an inverting operational amplifier 174, with its resistors 176, 178 and 180. In a manner analogous to that described for the PNP transistor 168, an NPN transistor 182 is thereby activated by the inverted negative (i.e., positive) pulses applied to the NPN transistor base. The capacitor 170 is consequently discharged

or charged through a resistor 184. The amount of charge removed from or added to the capacitor 170 by each negative pulse is proportional to the product of the amplitude of the pulse, which determines the effective conductance of the transistor 182, and the pulse width, which determines the duration of the discharging or charging operation.
As a result of the operation described above, the charge across the capacitor 170, Qij, and, in turn, the voltage Vij, is the result of an initial charge state, which may be applied at an input terminal 186 before the mnemonder is placed into operation, and the sum total of all the increments and decrements which occur as the result of the repeated application of pulses to the input line 110. It should be appreciated that the capacitor 170 can be charged with either polarity, as well as change polarity, within the limits of the positive and
negative voltage capabilities of the power supply.
In order to permit the mnemonder to "forget" the stored information over a period of time, the voltage Vij may be allowed to decay at an appropriate rate. This decay rate is related to the decay constant γ discussed above in the theoretical explanation of the invention, where an infinite decay time (open circuit) is equivalent to γ = 1 and a zero decay time (short circuit) is equivalent to γ = 0. As mentioned, values of γ close to 1 are of greatest interest in practice, so that the decay time constant should be made quite large. To this end, appropriate values of the capacitance of capacitor 170 and resistance or impedance for all the elements are selected to yield the desired time constants.
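As a rough worked example (all component values hypothetical): if the charge on capacitor 170 leaks with time constant τ = RC, the fraction retained between events spaced Δt apart is γ = e^(-Δt/RC):

    import math

    C = 1e-6              # capacitor 170, assumed 1 uF
    R = 1e9               # assumed effective leakage/input impedance, 1 Gohm
    tau = R * C           # decay time constant: 1000 s, i.e. minutes
    dt = 100.0            # assumed spacing between events, in seconds
    gamma = math.exp(-dt / tau)
    print(gamma)          # ~0.905, i.e. gamma slightly smaller than 1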

Finally, the voltage Vij across the capacitor 170 is applied via a line 192 to the control input of a gain controlled amplifier 194. If necessary, an amplification stage may be inserted between the capacitor 170 and the gain controlled amplifier 194. Here again an appropriately high input impedance of this amplifier is selected to obtain the desired decay constant γ. The amplifier 194 also receives the pulses of amplitude sj from the line 110 and "amplifies" (again, "amplification" includes amplification, diminution and changes of polarity) these pulses, in accordance with the controlled level of gain, to produce output pulses of amplitude sj' on line 112.
The processing of the inputs sj by means of the summed outputs ri is now considered in some detail with reference to Fig. 15. It is noted first that ri may be positive, negative, or zero, and that in the absence of provisions to the contrary, ri as obtained at the output of the summer 122 of Fig. 13 is in the form of pulses. While this form may be satisfactory for purposes of the processors to be attached to line 136 of Fig. 13, the feedback operation is more conveniently accomplished in the present embodiment by means of a quasi-continuous form of ri. To this end the feedback indicated on line 138 in Figures 13 and 14 is connected to a "track-and-hold"
device 196 in Fig. 15. The purpose of this device is to extend the duration of each pulse of ri for a period approximately equal to the time separation between consecutive pulses, without changing the pulse amplitude. As shown schematically in Fig. 15 the "track-and-hold" device 196 is triggered by the


inputs (pulsed sj) through the trigger 198. (The trigger 198 may include provisions for signal amplification, shaping, etc., as will be apparent to those skilled in the art.) The pulsed output ri is thus converted to an essentially continuous signal of time varying amplitude, hereinafter called ri'.
This signal ri' is in turn fed into a "full wave" rectifier 200 and then into a "voltage-to-pulse width" converter 202.
Thus the quasi-continuous signal to the converter 202, at location 204, is positive (or zero). The converter 202, when triggered by the trigger 198, as indicated in Fig. 15, produces pulses of a standard height and width, the width being small compared to that of the pulsed inputs sj. The function of the signal ri' is to broaden these narrow pulses (produced by 202) in proportion to the amplitude of ri'. The pulses (of constant amplitude) produced by the converter 202 are fed to another "track-and-hold" device 206 and to a gate 208, as shown in Fig. 15. The incoming signals sj, initially of a standard pulse width, enter first the "track-and-hold" device 206 which extends this pulse width to the duration, proportional to ri', of the pulse generated by the converter 202. The input signals sj then enter the gate 208 which is kept open for the same duration, also determined by the pulse width from the converter 202. If, due to a small amplitude of ri', the width of the latter pulses is reduced below the standard width of sj, the gate 208 remains open only for the reduced duration of the pulses from the converter 202. It is this time of "open" gate 208 which then determines the width of the input signal sj.

In order to retain the algebraic sign of the product sj ri, as required by the theory, the signal ri' is channeled from location 210 (where it still appears with both polarities) to a switch 212. In this switch the incoming signal sj is inverted if ri' is negative, and allowed to go through with its incoming polarity if ri' is positive. The switch 212 has been located in Fig. 15 after the gate 208 for the convenience of sequential description. A technically preferable location for this switch is before the "track-and-hold" device 206.
Finally, an "AND" gate 214 is included betweenlocation 204 and the gate 208 to suppress the passage of the incoming signal (and thereby of the product Sj ri) when ri' iS 80 small as to call for a pulse width narrower tnan the i~ standard puise widin g~neraied by 202. It is noied tnat a discontinuity thus occurs in the product s; ri between the value determined by the narrowest pulses obtained from the converter 202, and zero. Such a feature is merely an aspect of what one can generally refer to as noise, and as may be inferred from the theory the Nestor module is particularly 'nvulnerable to noise or in fact to the imperfect functioning of individual components. Furthermore, it is well within the established art-in the field of electronic circuitry to bring about improvements of the signal-to-noise ratio in general and particularly a reduction of the discontinuity mentioned.
It should also be noted that since modifications to the Aij depend only on the input signals to and the output response of the nouveron of which that mnemonder is a part, the

pulse widths and possible inversion due to |ri| and ri/|ri| apply in the same way to each of the pulsed signals s1, s2 ..., sN entering the nouveron. Therefore most of the electronics indicated in Fig. 15, as will be evident to those skilled in the art, serves at the same time all of the mnemonders of a nouveron and thus is required only once for each nouveron.
The description of the specific preferred embodiment of the present invention is now complete. Although this embodiment has been described with reference to electrical signals and charges bearing the informational content of an information processing system, numerous other techniques for representing signals and for storing information will occur to those skilled in the art. It will also be understood that the present invention itself is susceptible to various modifications, changes, and adaptations which fall within its spirit and scope. For example, the algorithm for the modifications to A of the mnemonders need not be restricted to the fourth term of the Taylor series - namely, that term embodied in equation (5) above; rather, other terms such as the sixth term may provide equally powerful results. Accordingly, it is intended that the present invention be limited only by the following claims and their equivalents.

Claims (103)

WE CLAIM:
1. An information processing module comprising, in combination:
(a) a plurality (N) of input terminals 1, 2..., j ..., N adapted to receive N input signals s1, s2 ..., sj ..., sN, respectively;
(b) a plurality (n) of output terminals 1, 2..., i ..., n adapted to present n output responses r1, r2 ..., ri ..., rn, respectively;
(c) a plurality of junction elements, called mnemonders, each mnemonder coupling one of said input terminals (input j) with one of said output terminals (output i) and providing a transfer of information from input j to output i in dependence upon the signal sj appearing at the input j and upon the mnemonder transfer function Aij; and (d) means for modifying the transfer function Aij of at least one of said mnemonders,when in a learning mode, in dependence upon the product of at least one of said input signals and one of said output responses;
whereby modifications to the transfer functions of the mnemonders, when in a learning mode, take the form:

.delta.Aij = f (s1, s2 ..., sj ..., sN ;
r1, r2 ..., ri ..., rn).
2. The information processing module defined in claim 1, wherein the number of output terminals equals the number of input terminals (n = N).
3. The information processing module defined in claim 1, wherein the number of output terminals is less than the number of input terminals (n < N).
4. The information processing module defined in claim 1, wherein the number of output terminals is greater than the number of input terminals (n > N).
5. The information processing module defined in claim 1, wherein at least one mnemonder is coupled to each one of said input terminals.
6. The information processing module defined in claim 1, wherein each one of said output terminals is coupled to at least one mnemonder.
7. The information processing module defined in claim 1, wherein at least one mnemonder is coupled to each input terminal and wherein each output terminal is coupled to at least one mnemonder.
8. The information processing module defined in claim 1, wherein each one of said input terminals is coupled to at least one of said output terminals through a mnemonder.
9. The information processing module defined in claim 1, wherein each one of said input terminals is coupled to each one of said output terminals through a mnemonder.
10. The information processing module defined in claim 1, wherein the output sj' of each mnemonder equals the product of its transfer function Aij and the signal sj applied at its input (sj' = Aij sj).
11. The information processing module defined in claim 1, wherein said transfer function modifying means includes means for modifying the transfer function Aij of each of said mnemonders.
12. The information processing module defined in claim 1, wherein said transfer function modifying means includes means for modifying the transfer function Aij of at least one of said mnemonders in dependence upon the input signal sj applied thereto and the output response ri at the output terminal to which the mnemonder is coupled;
whereby the modifications to the transfer function of at least one of the mnemonders take the form:
.delta.Aij = f (sj, ri) .
13. The information processing module defined in claim 12, wherein said transfer function modifying means includes means for modifying the transfer function Aij of each one of said mnemonders in dependence upon the input signal sj applied thereto and the output response ri at the output terminal to which the mnemonder is coupled;
whereby the modifications to the transfer function of each of the mnemonders take the form:
.delta.Aij = f (sj, ri) .
14. The information processing module defined in claim 12, wherein said transfer function modifying means includes means for modifying the transfer function Aij of at least one of said mnemonders in proportion to the product of the input signal sj applied thereto and the output response ri at the output terminal to which the mnemonder is coupled;
whereby the modifications to the transfer function of at least one of the mnemonders take the form:
.delta.Aij = .eta. sj ri, where .eta. is the constant of proportionality.
15. The information processing module defined in claim 14, wherein said transfer function modifying means includes means for modifying the transfer function Aij of each one of said mnemonders in proportion to the product of the input signal sj applied thereto and the output response ri at the output terminal to which the mnemonder is coupled;
whereby the modifications to the transfer function of each of the mnemonders take the form:
.delta.Aij = .eta. sj ri, where .eta. is the constant of proportionality.
16. The information processing module defined in claim 1, further comprising logic element means, connected between each output terminal and the mnemonders coupled thereto, for receiving mnemonder outputs sj' and producing an output response ri in dependence upon said mnemonder outputs.
17. The information processing module defined in claim 16, wherein each of said logic element means produces said output response ri in proportion to the sum of the mnemonder outputs sj' applied thereto;
whereby said output response ri takes the form:

ri ∝ s1' + s2' + ... + sN' .
18. The information processing module defined in claim 1, further comprising:
a plurality (n) of threshold means T1, T2 ..., Ti ..., Tn, each connected to one of said output terminals, for producing an output signal if the output response ri applied thereto is greater than a threshold level .THETA.i (ri > .THETA.i or |ri| > .THETA.i).
19. The information processing module defined in claim 18, further comprising:
signal suppression means, connected to the outputs of each of said threshold means, for suppressing all of the output responses r1, r2 ..., rn except that response ri applied as an input to the threshold means Ti which is pro-ducing an output signal.
20. The information processing module defined in claim 1, further comprising:
source means for selectively applying a specific desired output response riA to at least one of said output terminals;
whereby said information processing module may be caused to provide a "correct" output response r.omega. during its operation.
21. An information processing element comprising, in combination:
(a) a plurality (N) of input terminals 1, 2 ..., j ..., N adapted to receive N input signals s1, s2 ..., sj ..., sN, respectively;
(b) an output terminal i adapted to present an output response ri;
(c) a plurality (N) of junction elements, called mnemonders, each mnemonder coupling one of said input terminals (input j) to said output terminal (output i) and providing a transfer of information from input j to output i in dependence upon the signal sj appearing at the input j and upon the mnemonder transfer function Aij; and (d) means for modifying the transfer function Aij of at least one of said mnemonders,when in a learning mode, in dependence upon the product of at least one of said input signals and said output response;
whereby the modifications to the transfer function of the mnemonders, when in a learning mode, take the form:
.delta.Aij = f (s1, s2 ..., sj ..., sN; ri).
22. The information processing element defined in claim 21, wherein the output sj' of each mnemonder equals the product of its transfer function Aij and the signal sj applied at its input (sj' = Aij sj).
23. The information processing element defined in claim 21, wherein said transfer function modifying means includes means for modifying the transfer function Aij of each of said mnemonders.
24. The information processing element defined in claim 21, wherein said transfer function modifying means includes means for modifying the transfer function Aij of at least one of said mnemonders in dependence upon the input signal sj applied thereto and the output response ri at the output terminal;
whereby the modifications to the transfer function of at least one of the mnemonders take the form:
.delta.Aij = f (sj, ri).
25. The information processing element defined in claim 24, wherein said transfer function modifying means includes means for modifying the transfer function Aij of each one of said mnemonders in dependence upon the input signal sj applied thereto and the output response ri at the output terminal;

whereby the modifications to the transfer function of each of the mnemonders take the form:
.delta.Aij = f (sj, ri) .
26. The information processing element defined in claim 24, wherein said transfer function modifying means includes means for modifying the transfer function Aij of at least one of said mnemonders in proportion to the product of the input signal sj applied thereto and the output response ri at the output terminal;
whereby the modifications to the transfer function of at least one of the mnemonders take the form:
.delta.Aij = .eta. sj ri ,
where .eta. is the constant of proportionality.
27. The information processing element defined in claim 26, wherein said transfer function modifying means includes means for modifying the transfer function Aij of each one of said mnemonders in proportion to the product of the input signal sj applied thereto and the output response ri at the output terminal;
whereby the modifications to the transfer function of each of the mnemonders take the form:

.delta.Aij = .eta. sj ri , where .eta. is the constant of proportionality.
28. The information processing element defined in claim 21, further comprising logic element means, connected between said output terminal and the mnemonders coupled thereto, for receiving mnemonder outputs sj' and producing an output response ri in dependence upon said mnemonder outputs.
29. The information processing element defined in claim 28, wherein said logic element means produces said output response ri in proportion to the sum of the mnemonder outputs sj' applied thereto;
whereby said output response ri takes the form:

ri ∝ s1' + s2' + ... + sN' .
30. The information processing element defined in claim 21, further comprising:
a threshold means Ti, connected to said output terminal, for producing an output signal if the output response ri applied thereto is greater than a threshold level .THETA.i (ri>
.THETA.i or |ri| > .THETA.i).
31. The information processing element defined in claim 21, further comprising:
source means for selectively applying a specific desired output response riA to said output terminal;
whereby said information processing module may be caused to provide a "correct" output response r.omega. during its operation.
32. The information processing element defined in claim 21, wherein said modifying means includes means for reducing the value of the transfer function Aij of at least one of said mnemonders with a decay at a predetermined rate.
33. The information processing element defined in claim 21, wherein the signals s1, s2 ..., sj ..., sN and ri are represented by voltage levels.
34. The information processing element defined in claim 33, wherein each of said mnemonders includes a gain controlled amplifier, the transfer function Aij of a mnemonder being represented by the gain of the mnemonder amplifier.
35. The information processing element defined in claim 21, wherein each mnemonder includes means for storing its transfer function Aij.
36. The information processing element defined in claim 35, wherein said storage means comprises a device for storing an electrical charge.
37. The information processing element defined in claim 36, wherein said device is a capacitor.
38. The information processing element defined in claim 35, wherein said modifying means includes means for adding said increments .delta.Aij to the stored transfer function Aij of a mnemonder.
39. An information processing system comprising, in combination:
(a) at least one information processing module as defined in claim 1; and (b) means for adjusting the transfer function Aij of each mnemonder of said module to a predetermined desired value, whereby the system may be trained in a single operation.
40. An information processing system comprising in combination:

(a) at least one information processing module as defined in claim 1; and (b) means for selectively interrupting the operation of said modifying means of said module so that no modifications are made to the transfer function Aij of the mnemonders (.delta.Aij = 0), whereby the system may be operated as a pure memory.
41. An information processing system comprising, in combination:
(a) at least one information processing module as defined in claim 1;
(b) buffer storage means for storing the transfer function Aij of each mnemonder of said module; and (c) means for selectively transferring the transfer function Aij of each mnemonder to said buffer storage means, whereby the transfer functions may be preserved for later use after a period of training of said module.
42. An information processing system comprising, in combination:
(a) at least one information processing module as defined in claim 1;
(b) buffer storage means for storing a plurality of transfer functions Aij; and (c) means for selectively transferring a transfer function Aij to at least one of said mnemonders of said module, whereby said module can be trained in a single operation.
43. An information processing system as defined in claim 42, wherein said transferring means includes means for selectively transferring the transfer function Aij of each mnemonder to said buffer storage means.
44. An information processing system comprising, in combination:
(a) a plurality of information processing modules as defined in claim 1;
(b) means for connecting the output terminals of one of said modules to the input terminals of another, whereby at least two modules are connected in series.
45. An information processing system comprising, in combination:
(a) a plurality of information processing modules as defined in claim 1;
(b) means for connecting the output terminals of one of said modules to the output terminals of another, whereby at least two modules are connected in parallel.
46. An information processing system comprising, in combination:
(a) a plurality of information processing modules as defined in claim 1, (b) means for connecting the input terminals of one of said modules to the input terminals of another, whereby at least two modules are connected in parallel.
47. A method of processing information comprising the steps of:
(a) receiving a plurality of input signals s1, s2...,sj...,sN;
(b) producing a plurality of intermediate signals s'i1,s'i2,...,s'ij...,s'iN, each s'ij of which is dependent upon a respective one of said input signals sj and an associated transfer function Aij;
(c) producing a plurality of output responses r1,r2 ...ri...rn, each of which is dependent upon a plurality of said intermediate signals s'ij;
(d) modifying at least one of said transfer functions Aij, when in a learning mode, in dependence upon the product of at least one of said input signals sj and one of said output responses ri; and (e) setting at least one of said transfer functions Aij, when in a memory mode, to a particular desired value represent-ing stored information, thereby to create and utilize a distributed memory.
48. The process defined in claim 47, wherein the number of output responses of the information processing method equals the number of input signals thereof (n = N).
49. The process defined in claim 47, wherein the number of output responses of the information processing method is less than the number of input signals thereof (n < N).

50. The process defined in claim 47, wherein the number of output responses of the information processing method is greater than the number of input signals thereof (n > N).
51. The process defined in claim 47, wherein at least one transfer function of the information processing method is associated with each one of said input signals thereof.
52. The process defined in claim 47, wherein each of said transfer functions Aij are set to a particular desired value representing stored information when in a memory mode.
53. The process defined in claim 47, wherein each of said transfer functions Aij are modified, when in a learning mode, in dependence upon the product of at least one of said input signals sj and one of said output responses ri.
54. The process defined in claim 47, wherein each one of said input signals of the information processing method is coupled to at least one of said output responses thereof through a transfer function.
55. The process defined in claim 47, wherein each one of said input signals of the information processing method is coupled to each one of said output responses thereof through a transfer function.
56. The process defined in claim 47, wherein each intermediate signal s'ij of the information processing method equals the product of its associated transfer function Aij and an input signal sj (sj' = Aij sj).
57. The process defined in claim 47, wherein said transfer functions Aij are modified, when in a learning mode, in dependence upon the product of an associated input signal sj and output response ri.
58. The process defined in claim 47, wherein each of said output responses ri is proportional to the sum of a plurality of intermediate signals s'ij;
whereby said output responses ri take the form:

ri ∝ s'i1 + s'i2 + ... + s'iN .

59. The information processing module defined in claim 1, further comprising:
source means for selectively applying a specific desired output response riA to each one of said output terminals whereby said information processing module may be caused to provide a "correct" output response r.omega. during its operation.
60. The information processing module defined in claim 1, wherein said modifying means includes means for reducing the value of the transfer function Aij of at least one of said mnemonders with a decay at a predetermined rate.
61. The information processing element defined in claim 21, wherein said modifying means includes means for reducing the value of the transfer function Aij of each one of said mnemonders with a decay at a predetermined rate.
62. The information processing module defined in claim 1, wherein said modifying means includes means for reducing the value of the transfer function Aij of each one of said mnemonders with a decay at a predetermined rate.
63. The information processing module defined in claim 1, wherein the input signals s1, s2 ..., sj ..., sN
and the output responses r1, r2 ..., ri ..., rn are represented by voltage levels.
64. The information processing module defined in claim 63, wherein each of said mnemonders includes a gain controlled amplifier, the transfer function Aij of a mnemonder being represented by the gain of the mnemonder amplifier.
65. The information processing module defined in claim 1, wherein each mnemonder includes means for storing its transfer function Aij.
66. The information processing module defined in claim 65, wherein said storage means comprises a device for storing an electrical charge.
67. The information processing module defined in claim 66, wherein said device is a capacitor.
68. The information processing module defined in claim 65, wherein said modifying means includes means for adding said increments .delta.Aij to the stored transfer function of a mnemonder.
69. The information processing module defined in claim 68, wherein electrical charge stored in said storage means of the ij'th mnemonder is Qij;
wherein each of said increments .delta.Aij is proportional to a respective increment .delta.Qij;
and wherein said means for adding increments .delta.Aij to the stored transfer function comprises, in combination:
(a) means for producing a first pulse having an amplitude proportional to the input signal sj and having a width ti proportional to the absolute value of the output response ri; and (b) means, connected to said pulse producing means, for applying a second pulse to said device for storing an electrical charge, the pulse height and pulse width of said second pulse being proportional to the pulse height and pulse width of said first pulse and the polarity of said second pulse being equal to ri/|ri|.
70. The information processing element defined in claim 38, wherein electrical charge stored in said storage means of the ij'th mnemonder is Qij;
wherein each of said increments .delta.Aij is proportional to a respective increment .delta.Qij;
and wherein said means for adding increments .delta.Aij to the stored transfer function comprises, in combination:
(a) means for producing a first pulse having an amplitude proportional to the input signal sj and having a width ti proportional to the absolute value of the output response ri; and (b) means, connected to said pulse producing means, for applying a second pulse to said device for storing an electrical charge, the pulse height and pulse width of said second pulse being proportional to the pulse height and pulse width of said first pulse and the polarity of said second pulse being equal to ri/|ri|.
71. The information processing system defined in claim 44, wherein said output terminals of one of said modules are connected to the input terminals of another in a random fashion.
72. The information processing system defined in claim 44, wherein the output terminals of one of said modules are connected to the input terminals of another in an ordered fashion, whereby each one of the output terminals of said one module is connected to one of the input terminals of said other module.
73. An information processing system comprising in combination:
(a) a plurality of information processing modules as defined in claim 1;
(b) means connecting the output terminals of at least one of said modules to the input terminals of at least one other of said modules, whereby at least two modules are connected in series.
74. The information processing system defined in claim 73, wherein the connecting means (b) includes means for connecting the output terminals of at least two of said modules to the input terminals of one other module, whereby at least two modules are connected in series to a third.
75. The information processing system defined in claim 73, wherein the connecting means (b) includes means for connecting the output terminals of one of said modules to the input terminals of at least two other modules, whereby one module is connected in series to at least two other modules.
76. The information processing system defined in claim 73, wherein the total number of output terminals of said at least one module is greater than the total number of input terminals of said at least one other module.
77. The information processing system defined in claim 73, wherein the total number of output terminals of said at least one module is equal to the total number of input terminals of said at least one other module.
78. The information processing system defined in claim 73, wherein the total number of output terminals of said at least one module is less than the total number of input terminals of said at least one other module.
79. A method of processing information comprising the steps of:
(a) receiving a plurality of input signals s1, s2..., sj..., sN;
(b) producing a plurality of intermediate signals s'i1, s'i2..., s'ij..., s'iN, each s'ij of which is dependent upon a respective one of said input signals sj and an associated transfer function Aij;
(c) producing an output response ri which is dependent upon at least one of said intermediate signals s'ij; and (d) modifying at least one of said transfer functions Aij, when in a learning mode, in dependence upon the product of at least one of said input signals sj and said output response ri.
80. The method defined in claim 79, wherein each intermediate signal s'ij equals the product of the respective input signal sj and the associated transfer function Aij (s'ij = Aij sj).
81. The method defined in claim 79, wherein said output response ri is dependent upon all of said intermediate signals s'ij.
82. The method defined in claim 79, wherein said output response ri is proportional to the sum of a plurality of said intermediate signals s'ij.
83. The method defined in claim 82, wherein said output response ri is proportional to the sum of all of said intermediate signals s'ij.
84. The method defined in claim 79, wherein step (d) includes the step of modifying each of said transfer functions Aij in dependence upon the product of at least one of said input signals sj and said output response ri.
85. The method defined in claim 79, wherein step (d) includes the step of modifying at least one of said transfer functions Aij in dependence upon the product of the input signal sj, associated therewith, and said output response ri.
86. The method defined in claim 85, wherein step (d) includes the step of modifying each of said transfer functions Aij in dependence upon the product of the input signal sj, associated therewith, and said output response ri.
87. The method defined in claim 79, wherein step (d) includes the step of modifying at least one of said transfer functions Aij in proportion to the product of the input signal sj, associated therewith, and said output response ri.
88. The method defined in claim 79, wherein step (d) includes the step of modifying each one of said transfer functions Aij in proportion to the product of the input signal sj, associated therewith, and the output response ri.
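Read together, claims 79, 80, 88 and 92 reduce the method to a compact procedure: form s'ij = Aij·sj, sum over j to obtain ri, and, in the learning mode, increment every Aij in proportion to sj·ri. A minimal sketch, with a hypothetical learning constant eta standing in for the unspecified proportionality factor:

import numpy as np

def process_and_learn(A, s, eta=0.05, learning=True):
    s_prime = A * s[np.newaxis, :]   # step (b): s'_ij = A_ij * s_j (claim 80)
    r = s_prime.sum(axis=1)          # step (c): r_i = sum over j of s'_ij (claim 92)
    if learning:                     # step (d): deltaA_ij = eta * s_j * r_i (claim 88)
        A = A + eta * np.outer(r, s)
    return A, r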
89. The method defined in claim 79, wherein step (c) includes the step of producing a plurality of output responses r1, r2..., ri..., rn, each response ri of which is dependent upon at least one of said intermediate signals s'ij, where j varies from 1 to N.
90. The method defined in claim 89, wherein step (c) includes the step of producing a plurality of output responses r1, r2..., ri..., rn, each response ri of which is dependent upon all of said intermediate signals s'ij, where j varies from 1 to N.
91. The method defined in claim 89, wherein step (c) includes the step of producing a plurality of output responses r1, r2..., ri...rn, each response ri of which is proportional to the sum of a plurality of said intermediate signals s'ij, where j varies from 1 to N.
92. The method defined in claim 91, wherein step (c) includes the step of producing a plurality of output responses r1, r2..., ri...rn, each response ri of which is proportional to the sum of all of said intermediate signals s'ij, where j varies from 1 to N.
93. The method defined in claim 89, further comprising the step of suppressing all the output responses r1, r2..., rn except that response ri which exceeds a prescribed threshold.
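Claim 93 is a winner-take-all step: every output response is suppressed except the one exceeding a prescribed threshold. A sketch, assuming that suppression means forcing the losing responses to zero (the claim does not fix the suppressed value):

import numpy as np

def suppress_below_threshold(r, theta):
    # Keep only the response(s) exceeding theta; zero the rest (claim 93).
    return np.where(r > theta, r, 0.0)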
94. The method defined in claim 89, further comprising the step of setting at least one of said output responses ri equal to a specific desired output response riA.
95. The method defined in claim 89, further comprising the step of setting each of said output responses r1, r2..., ri..., rn equal, respectively, to a specific desired output response r1A, r2A..., riA..., rnA.
96. The method defined in claim 79, further comprising the step of storing each of said transfer functions Aij in a storage device.
97. The method defined in claim 96, further comprising the step of transferring at least one of the stored transfer functions Aij to another storage device.
98. The method defined in claim 96, further comprising the step of transferring each of the stored transfer functions Aij to another storage device.
99. The method defined in claim 79, further comprising the steps of:
(1) receiving a second plurality of input signals s21, s22..., s2j..., s2N;
(2) producing a second plurality of intermediate signals s'2i1, s'2i2..., s'2ij..., s'2iN, each s'2ij of which is dependent upon a respective one of said second input signals s2j and upon said transfer function Aij that is associated with a respective one of the first intermediate signals s'ij produced in step (b); and (3) producing a second output response r2i which is dependent upon at least one of said second intermediate signals s'2ij.
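Claim 99 describes recall: a second set of inputs is processed through the same stored transfer functions Aij that were shaped during learning, so the second response reflects what was learned. A sketch combining this with the forced-output training of claims 94-95 (the learning constant 0.1 and the particular vectors are hypothetical):

import numpy as np

A = np.zeros((2, 3))                # stored transfer functions A_ij
s1 = np.array([1.0, 0.0, 1.0])      # first plurality of input signals
r1A = np.array([1.0, 0.0])          # desired output response (claim 94)
A += 0.1 * np.outer(r1A, s1)        # learning-mode increment: deltaA_ij ~ s_j * r_iA

s2 = np.array([1.0, 0.0, 1.0])      # second plurality of inputs (claim 99, step 1)
r2 = A @ s2                         # steps (2)-(3): r2 = [0.2, 0.0], proportional to r1A

Because s2 here matches s1, r2 comes out proportional to the trained response r1A; an input uncorrelated with s1 would evoke only a weak response.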
100. The information processing element defined in claim 32, wherein said rate is constant, thereby to provide a uniform decay.
101. The information processing module defined in claim 60, wherein said rate is constant, thereby to provide a uniform decay.
102. The information processing element defined in claim 61, wherein said rate is constant, thereby to provide a uniform decay.
103. The information processing module defined in claim 62, wherein said rate is constant, thereby to provide a uniform decay.
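Claims 100-103 add a passive, uniform decay: every stored transfer function decays at the same constant rate, independently of the signals. A sketch, assuming the constant rate means a fixed fractional loss per time step (a fixed absolute decrement per step is an equally valid reading; the rate gamma is hypothetical):

def decay_uniform(A, gamma=0.01):
    # Every A_ij loses the same fraction gamma per step: uniform decay.
    return A * (1.0 - gamma)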
CA225,516A 1974-06-06 1975-04-25 Adaptive information processing system Expired CA1042109A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US05/477,080 US3950733A (en) 1974-06-06 1974-06-06 Information processing system

Publications (1)

Publication Number Publication Date
CA1042109A true CA1042109A (en) 1978-11-07

Family

ID=23894450

Family Applications (1)

Application Number Title Priority Date Filing Date
CA225,516A Expired CA1042109A (en) 1974-06-06 1975-04-25 Adaptive information processing system

Country Status (11)

Country Link
US (1) US3950733A (en)
JP (1) JPS6012671B2 (en)
CA (1) CA1042109A (en)
CH (1) CH620307A5 (en)
DE (1) DE2524734C3 (en)
ES (3) ES436945A1 (en)
FR (1) FR2274088A1 (en)
GB (1) GB1457338A (en)
IT (1) IT1036906B (en)
MX (1) MX143269A (en)
NL (1) NL176313C (en)

Families Citing this family (131)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5272504A (en) * 1975-12-15 1977-06-17 Fuji Xerox Co Ltd Device for recognizing word audio
US4044243A (en) * 1976-07-23 1977-08-23 Nestor Associates Information processing system
US4218733A (en) * 1978-11-13 1980-08-19 Sybron Corporation Adaptive gain controller
US4308522A (en) * 1979-03-19 1981-12-29 Ncr Corporation Identity verification apparatus and method
US4254474A (en) * 1979-08-02 1981-03-03 Nestor Associates Information processing system using threshold passive modification
US4326259A (en) * 1980-03-27 1982-04-20 Nestor Associates Self organizing general pattern class separator and identifier
JPS5752261A (en) * 1980-09-11 1982-03-27 Canon Inc Character processor
US4450530A (en) * 1981-07-27 1984-05-22 New York University Sensorimotor coordinator
US4479241A (en) * 1981-08-06 1984-10-23 Buckley Bruce S Self-organizing circuits for automatic pattern recognition and the like and systems embodying the same
JPH0658665B2 (en) * 1982-05-21 1994-08-03 株式会社日立製作所 Image signal processor
US5297222A (en) * 1982-05-04 1994-03-22 Hitachi, Ltd. Image processing apparatus
US4518866A (en) * 1982-09-28 1985-05-21 Psychologics, Inc. Method of and circuit for simulating neurons
US4620286A (en) * 1984-01-16 1986-10-28 Itt Corporation Probabilistic learning element
US4599692A (en) * 1984-01-16 1986-07-08 Itt Corporation Probabilistic learning element employing context drive searching
US4599693A (en) * 1984-01-16 1986-07-08 Itt Corporation Probabilistic learning system
US4593367A (en) * 1984-01-16 1986-06-03 Itt Corporation Probabilistic learning element
US4648044A (en) * 1984-06-06 1987-03-03 Teknowledge, Inc. Basic expert system tool
US4697242A (en) * 1984-06-11 1987-09-29 Holland John H Adaptive computing system capable of learning and discovery
JPS619729A (en) * 1984-06-26 1986-01-17 Toshiba Corp Reasoning system
US4660166A (en) * 1985-01-22 1987-04-21 Bell Telephone Laboratories, Incorporated Electronic network for collective decision based on large number of connections between signals
US4760604A (en) * 1985-02-15 1988-07-26 Nestor, Inc. Parallel, multi-unit, adaptive, nonlinear pattern class separator and identifier
US4670848A (en) * 1985-04-10 1987-06-02 Standard Systems Corporation Artificial intelligence system
US4967340A (en) * 1985-06-12 1990-10-30 E-Systems, Inc. Adaptive processing system having an array of individually configurable processing components
US5077807A (en) * 1985-10-10 1991-12-31 Palantir Corp. Preprocessing means for use in a pattern classification system
US5060277A (en) * 1985-10-10 1991-10-22 Palantir Corporation Pattern classification means using feature vector regions preconstructed from reference data
JPH0634236B2 (en) * 1985-11-02 1994-05-02 日本放送協会 Hierarchical information processing method
DE3683847D1 (en) * 1985-11-27 1992-03-19 Univ Boston PATTERN CODING SYSTEM.
WO1987003399A1 (en) * 1985-11-27 1987-06-04 The Trustees Of Boston University Pattern recognition system
US4773024A (en) * 1986-06-03 1988-09-20 Synaptics, Inc. Brain emulation circuit with reduced confusion
US4752890A (en) * 1986-07-14 1988-06-21 International Business Machines Corp. Adaptive mechanisms for execution of sequential decisions
GB8619452D0 (en) * 1986-08-08 1986-12-17 Dobson V G Signal generating & processing
US4852018 (en) * 1987-01-07 1989-07-25 Trustees Of Boston University Massively parallel real-time network architectures for robots capable of self-calibrating their operating parameters through associative learning
US5224066A (en) * 1987-03-16 1993-06-29 Jourjine Alexander N Method and apparatus for parallel implementation of neural networks
US5053974A (en) * 1987-03-31 1991-10-01 Texas Instruments Incorporated Closeness code and method
US4881178A (en) * 1987-05-07 1989-11-14 The Regents Of The University Of Michigan Method of controlling a classifier system
US4807168A (en) * 1987-06-10 1989-02-21 The United States Of America As Represented By The Administrator, National Aeronautics And Space Administration Hybrid analog-digital associative neural network
US5526298A (en) * 1987-06-10 1996-06-11 Hamamatsu Photonics K.K. Optical associative memory
US4858147A (en) * 1987-06-15 1989-08-15 Unisys Corporation Special purpose neurocomputer system for solving optimization problems
US4809223A (en) * 1987-06-18 1989-02-28 West Virginia University Apparatus and method for long term storage of analog signals
WO1988010474A1 (en) * 1987-06-18 1988-12-29 University Of West Virginia State analog neural network and method of implementing same
US4914708A (en) * 1987-06-19 1990-04-03 Boston University System for self-organization of stable category recognition codes for analog input patterns
US5133021A (en) * 1987-06-19 1992-07-21 Boston University System for self-organization of stable category recognition codes for analog input patterns
US5251269A (en) * 1987-07-15 1993-10-05 Research Development Corporation Of Japan Multi-layer neural network modelled after the striate cortex for recognizing visual patterns
US5072452A (en) * 1987-10-30 1991-12-10 International Business Machines Corporation Automatic determination of labels and Markov word models in a speech recognition system
US5285522A (en) * 1987-12-03 1994-02-08 The Trustees Of The University Of Pennsylvania Neural networks for acoustical pattern recognition
US5040230A (en) * 1988-01-11 1991-08-13 Ezel Incorporated Associative pattern conversion system and adaptation method thereof
US4874963A (en) * 1988-02-11 1989-10-17 Bell Communications Research, Inc. Neuromorphic learning networks
US4958375A (en) * 1988-02-17 1990-09-18 Nestor, Inc. Parallel, multi-unit, adaptive pattern classification system using inter-unit correlations and an intra-unit class separator methodology
US4931868A (en) * 1988-05-31 1990-06-05 Grumman Aerospace Corporation Method and apparatus for detecting innovations in a scene
US5050095A (en) * 1988-05-31 1991-09-17 Honeywell Inc. Neural network auto-associative memory with two rules for varying the weights
US4893255A (en) * 1988-05-31 1990-01-09 Analog Intelligence Corp. Spike transmission for neural networks
US4926064A (en) * 1988-07-22 1990-05-15 Syntonic Systems Inc. Sleep refreshed memory for neural network
WO1990002381A1 (en) * 1988-08-31 1990-03-08 Fujitsu Limited Neurocomputer
US5063601A (en) * 1988-09-02 1991-11-05 John Hayduk Fast-learning neural network system for adaptive pattern recognition apparatus
US4979124A (en) * 1988-10-05 1990-12-18 Cornell Research Foundation Adaptive, neural-based signal processor
US5093781A (en) * 1988-10-07 1992-03-03 Hughes Aircraft Company Cellular network assignment processor using minimum/maximum convergence technique
US4930099A (en) * 1988-10-07 1990-05-29 Hughes Aircraft Company Wavefront vector correlation processor and method
US4914604A (en) * 1988-10-07 1990-04-03 Hughes Aircraft Company Processor for analyzing angle-only data
US5001631A (en) * 1988-10-07 1991-03-19 Hughes Aircraft Company Cellular network assignment processor using randomly triggered adaptive cell thresholds
US4920506A (en) * 1988-10-07 1990-04-24 Hughes Aircraft Company Ultra-high speed two-dimensional coordinate transform processor
US5003490A (en) * 1988-10-07 1991-03-26 Hughes Aircraft Company Neural network signal processor
US4951239A (en) * 1988-10-27 1990-08-21 The United States Of America As Represented By The Secretary Of The Navy Artificial neural network implementation
US4906865A (en) * 1988-12-09 1990-03-06 Intel Corporation Sample and hold circuit for temporal associations in a neural network
US4912655A (en) * 1988-12-14 1990-03-27 Gte Laboratories Incorporated Adjusting neural networks
US4912652A (en) * 1988-12-14 1990-03-27 Gte Laboratories Incorporated Fast neural network training
US4912653A (en) * 1988-12-14 1990-03-27 Gte Laboratories Incorporated Trainable neural network
US4912649A (en) * 1988-12-14 1990-03-27 Gte Government Systems Corporation Accelerating learning in neural networks
US4912654A (en) * 1988-12-14 1990-03-27 Government Systems Corporation Gte Neural networks learning method
US4914603A (en) * 1988-12-14 1990-04-03 Gte Laboratories Incorporated Training neural networks
US4912651A (en) * 1988-12-14 1990-03-27 Gte Laboratories Incorporated Speeding learning in neural networks
US4912647A (en) * 1988-12-14 1990-03-27 Gte Laboratories Incorporated Neural network training tool
JP2676397B2 (en) * 1989-01-05 1997-11-12 株式会社エイ・ティ・アール視聴覚機構研究所 Dynamic trajectory generation method for dynamic system
US4941122A (en) * 1989-01-12 1990-07-10 Recognition Equipment Incorp. Neural network image processing system
US4974169A (en) * 1989-01-18 1990-11-27 Grumman Aerospace Corporation Neural network with memory cycling
US5033020A (en) * 1989-02-08 1991-07-16 Grumman Aerospace Corporation Optically controlled information processing system
US5222195A (en) * 1989-05-17 1993-06-22 United States Of America Dynamically stable associative learning neural system with one fixed weight
US5119469A (en) * 1989-05-17 1992-06-02 United States Of America Neural network with weight adjustment based on prior history of input signals
US5041976A (en) * 1989-05-18 1991-08-20 Ford Motor Company Diagnostic system using pattern recognition for electronic automotive control systems
US4939683A (en) * 1989-05-19 1990-07-03 Heerden Pieter J Van Method and apparatus for identifying that one of a set of past or historical events best correlated with a current or recent event
US5146542A (en) * 1989-06-15 1992-09-08 General Electric Company Neural net using capacitive structures connecting output lines and differentially driven input line pairs
US5479578A (en) * 1989-06-15 1995-12-26 General Electric Company Weighted summation circuitry with digitally controlled capacitive structures
US5274745A (en) * 1989-07-28 1993-12-28 Kabushiki Kaisha Toshiba Method of processing information in artificial neural networks
US5319587A (en) * 1989-08-07 1994-06-07 Rockwell International Corporation Computing element for neural networks
US5361328A (en) * 1989-09-28 1994-11-01 Ezel, Inc. Data processing system using a neural network
JP2724374B2 (en) * 1989-10-11 1998-03-09 株式会社鷹山 Data processing device
JPH03167655A (en) * 1989-11-28 1991-07-19 Toshiba Corp Neural network
US5299284A (en) * 1990-04-09 1994-03-29 Arizona Board Of Regents, Acting On Behalf Of Arizona State University Pattern classification using linear programming
JPH0438587A (en) * 1990-06-04 1992-02-07 Nec Corp Input area adaptive type neural network character recognizing device
US5247605A (en) * 1990-07-02 1993-09-21 General Electric Company Neural nets supplied synapse signals obtained by digital-to-analog conversion of plural-bit samples
US5151970A (en) * 1990-07-02 1992-09-29 General Electric Company Method of generating, in the analog regime, weighted summations of digital signals
US5251626A (en) * 1990-07-03 1993-10-12 Telectronics Pacing Systems, Inc. Apparatus and method for the detection and treatment of arrhythmias using a neural network
US5181171A (en) * 1990-09-20 1993-01-19 Atlantic Richfield Company Adaptive network for automated first break picking of seismic refraction events and method of operating the same
US5615305A (en) * 1990-11-08 1997-03-25 Hughes Missile Systems Company Neural processor element
JP3088171B2 (en) * 1991-02-12 2000-09-18 三菱電機株式会社 Self-organizing pattern classification system and classification method
US5239594A (en) * 1991-02-12 1993-08-24 Mitsubishi Denki Kabushiki Kaisha Self-organizing pattern classification neural network system
US5204872A (en) * 1991-04-15 1993-04-20 Milltech-Hoh, Inc. Control system for electric arc furnace
US5263122A (en) * 1991-04-22 1993-11-16 Hughes Missile Systems Company Neural network architecture
US5671335A (en) * 1991-05-23 1997-09-23 Allen-Bradley Company, Inc. Process optimization using a neural network
US5170071 (en) * 1991-06-17 1992-12-08 Trw Inc. Stochastic artificial neuron with multilayer training capability
US5963930A (en) * 1991-06-26 1999-10-05 Ricoh Company Ltd. Apparatus and method for enhancing transfer function non-linearities in pulse frequency encoded neurons
US5226092A (en) * 1991-06-28 1993-07-06 Digital Equipment Corporation Method and apparatus for learning in a neural network
US5337371A (en) * 1991-08-09 1994-08-09 Matsushita Electric Industrial Co., Ltd. Pattern classification system
US10361802B1 (en) 1999-02-01 2019-07-23 Blanding Hovenweep, Llc Adaptive pattern recognition based control system and method
US8352400B2 (en) 1991-12-23 2013-01-08 Hoffberg Steven M Adaptive pattern recognition based controller apparatus and method and human-factored interface therefore
US5299285A (en) * 1992-01-31 1994-03-29 The United States Of America As Represented By The Administrator, National Aeronautics And Space Administration Neural network with dynamically adaptable neurons
US5438629A (en) * 1992-06-19 1995-08-01 United Parcel Service Of America, Inc. Method and apparatus for input classification using non-spherical neurons
DE69329218T2 (en) * 1992-06-19 2001-04-05 United Parcel Service Inc Method and device for input classification with a neural network
GB9214514D0 (en) * 1992-07-08 1992-08-19 Massachusetts Inst Technology Information processing
US5479574A (en) * 1993-04-01 1995-12-26 Nestor, Inc. Method and apparatus for adaptive classification
US6167390A (en) * 1993-12-08 2000-12-26 3M Innovative Properties Company Facet classification neural network
JPH07239938A (en) * 1994-02-28 1995-09-12 Matsushita Electric Ind Co Ltd Inspection method
US7469237B2 (en) * 1996-05-02 2008-12-23 Cooper David L Method and apparatus for fractal computation
US6041322A (en) * 1997-04-18 2000-03-21 Industrial Technology Research Institute Method and apparatus for processing data in a neural network
US7096192B1 (en) * 1997-07-28 2006-08-22 Cybersource Corporation Method and system for detecting fraud in a credit card transaction over a computer network
US7403922B1 (en) * 1997-07-28 2008-07-22 Cybersource Corporation Method and apparatus for evaluating fraud risk in an electronic commerce transaction
US6052679A (en) * 1997-09-11 2000-04-18 International Business Machines Corporation Artificial neural networks including Boolean-complete compartments
JP4147647B2 (en) 1998-11-09 2008-09-10 ソニー株式会社 Data processing apparatus, data processing method, and recording medium
JP4517409B2 (en) * 1998-11-09 2010-08-04 ソニー株式会社 Data processing apparatus and data processing method
US8364136B2 (en) 1999-02-01 2013-01-29 Steven M Hoffberg Mobile system, a method of operating mobile system and a non-transitory computer readable medium for a programmable control of a mobile system
US7966078B2 (en) * 1999-02-01 2011-06-21 Steven Hoffberg Network media appliance system and method
JP4344964B2 (en) * 1999-06-01 2009-10-14 ソニー株式会社 Image processing apparatus and image processing method
US7865427B2 (en) 2001-05-30 2011-01-04 Cybersource Corporation Method and apparatus for evaluating fraud risk in an electronic commerce transaction
JP2003010762A (en) * 2001-06-28 2003-01-14 Konica Corp Coating apparatus and coating method
US7398260B2 (en) * 2003-03-24 2008-07-08 Fiske Software Llc Effector machine computation
US8712942B2 (en) * 2003-03-24 2014-04-29 AEMEA Inc. Active element machine computation
US8010467B2 (en) * 2003-03-24 2011-08-30 Fiske Software Llc Active element machine computation
US8019705B2 (en) * 2003-03-24 2011-09-13 Fiske Software, LLC. Register and active element machines: commands, programs, simulators and translators
US20060112056A1 (en) * 2004-09-27 2006-05-25 Accenture Global Services Gmbh Problem solving graphical toolbar
US7904398B1 (en) 2005-10-26 2011-03-08 Dominic John Repici Artificial synapse component using multiple distinct learning means with distinct predetermined learning acquisition times
US9152779B2 (en) 2011-01-16 2015-10-06 Michael Stephen Fiske Protecting codes, keys and user credentials with identity and patterns
US10268843B2 (en) 2011-12-06 2019-04-23 AEMEA Inc. Non-deterministic secure active element machine

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3275986A (en) * 1962-06-14 1966-09-27 Gen Dynamics Corp Pattern recognition systems
US3408627A (en) * 1964-12-28 1968-10-29 Texas Instruments Inc Training adjusted decision system using spatial storage with energy beam scanned read-out

Also Published As

Publication number Publication date
JPS5121749A (en) 1976-02-21
DE2524734B2 (en) 1980-03-20
FR2274088A1 (en) 1976-01-02
US3950733A (en) 1976-04-13
NL176313C (en) 1985-03-18
CH620307A5 (en) 1980-11-14
ES436945A1 (en) 1977-04-01
JPS6012671B2 (en) 1985-04-02
GB1457338A (en) 1976-12-01
IT1036906B (en) 1979-10-30
NL7506761A (en) 1975-12-09
NL176313B (en) 1984-10-16
DE2524734C3 (en) 1981-01-08
ES453378A1 (en) 1977-12-16
ES453377A1 (en) 1977-11-16
FR2274088B1 (en) 1979-05-25
DE2524734A1 (en) 1975-12-18
MX143269A (en) 1981-04-13

Similar Documents

Publication Publication Date Title
CA1042109A (en) Adaptive information processing system
Pal et al. Genetic algorithms for pattern recognition
US5263097A (en) Parameter normalized features for classification procedures, systems and methods
Wang et al. Complex temporal sequence learning based on short-term memory
Hinton Preface to the special issue on connectionist symbol processing
US20190311291A1 (en) Artificial intelligent system including pre-processing unit for selecting valid data
Hsieh et al. A neural network model which combines unsupervised and supervised learning
Pal et al. A new shape representation scheme and its application to shape discrimination using a neural network
Camargo Learning algorithms in neural networks
Xie et al. Learning winner-take-all competition between groups of neurons in lateral inhibitory networks
Denning The science of computing: Neural networks
Cumminsy Representation of temporal patterns in recurrent networks
Hampson et al. Representing and learning boolean functions of multivalued features
Osana et al. Successive learning in chaotic neural network
Mitchell et al. Optimising memory usage in n-tuple neural networks
Markert et al. Detecting sequences and understanding language with neural associative memories and cell assemblies
Ryan The resonance correlation network
Hoskins An iterated function systems approach to emergence
De Waard Neural techniques and postal code detection
Yovcheva et al. A generalized net model of the deep learning algorithm
Bouzerdoum Convergence of Symmetric Shunting
Howells et al. BCN: A novel network architecture for RAM-based neurons
Tang et al. A model of neurons with unidirectional linear response
Dempsey Neural network implementation of the Hough transform
Osana Chaotic associative memory using distributed patterns for image retrieval