CA2041922C - Dense aggregative hierarchical techniques for data analysis - Google Patents

Dense aggregative hierarchical techniques for data analysis

Info

Publication number
CA2041922C
CA2041922C CA002041922A CA2041922A CA2041922C CA 2041922 C CA2041922 C CA 2041922C CA 002041922 A CA002041922 A CA 002041922A CA 2041922 A CA2041922 A CA 2041922A CA 2041922 C CA2041922 C CA 2041922C
Authority
CA
Canada
Prior art keywords
level
data items
data
value
attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002041922A
Other languages
French (fr)
Other versions
CA2041922A1 (en)
Inventor
James V. Mahoney
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xerox Corp
Original Assignee
Xerox Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xerox Corp filed Critical Xerox Corp
Publication of CA2041922A1 publication Critical patent/CA2041922A1/en
Application granted granted Critical
Publication of CA2041922C publication Critical patent/CA2041922C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/422Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation for representing the structure of the pattern or shape of an object therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/457Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices

Abstract

A body of data is operated upon hierarchically in such a way that, at one or more levels of the hierarchy, the number of aggregative data items produced is not substantially less than the number produced at the preceding level.
The body of data can be an image, so that each aggregative data item indicates an attribute of a distinct image region. Such attributes include presence of a single connected component or properties of a component such as width, orientation and curvature. A class of abstract computation structures, called exhaustive hierarchical structures, is introduced in which such dense or exhaustive hierarchical aggregative data analysis processes can be embedded. The embedding of exhaustive hierarchical analysis in a computation structure of this class is analogous, and in some implementations similar in processing efficiency, to the embedding of conventional hierarchical aggregative data analysis processes in tree structures. The exhaustive hierarchical embedding introduced analyzes extensively overlapping regions in a manner that places minimum demands on the number of communication links, memory resources, and computing power of the individual processing units. Specifically, the embedding scheme is a general scheme for mapping locations in an array into nodes at two adjacent levels of a binary exhaustive hierarchical structure. The scheme establishes positional relations in the array that correspond to parent-child relations at a given level in the exhaustive hierarchical computing structure; these positional relations are uniform power-of-two offsets in each array dimension at a given hierarchical level. Consequently, this exhaustive hierarchical analysis can be implemented efficiently using conventional communication techniques, including hypercube and grid techniques, on a massively parallel processor. To minimize memory requirements, hierarchical results at each location can be encoded across all levels.

Description

DENSE AGGREGATIVE HIERARCHICAL
TECHNIQUES FOR DATA ANALYSIS

Background of the Invention

The present invention relates to techniques for analyzing a body of data, such as data defining an image or another signal. More specifically, the invention relates to techniques that produce results indicating overall attributes of extended subsets of a body of data.

A number of techniques have been proposed for producing results indicating attributes of a body of data.

Tanimoto et al., US-A-4,622,632, describe a pyramidal array of processors that can operate on a pyramidal hierarchical data structure with a number of image resolution levels. Fig. 1 shows such a data structure in which the neighborhood of each unit cell includes four unit cells on the next lower level, and in the pyramidal processing system of Fig. 2 a data element corresponds exactly with a unit cell in the pyramidal data structure. For each unit cell, the pyramidal processing unit comprises registers, a virtual processor, and external storage. The neighborhood of a cell and relationships within neighborhoods are described at cols. 5-6. The association of actual and virtual processors and communication between processors is described in relation to Figs. 3-5b. Control of the pyramidal processing unit and matching operations are described in relation to Figs. 11-13a.


Miller, R., and Stout, Q.F., "Simulating Essential Pyramids," IEEE Transactions on Computers, Vol. 37, No. 12, December 1988, pp. 1642-1648, describe pyramid techniques that are useful when an image contains multiple objects of interest. These techniques simulate a separate "essential" pyramid over each object. Fig. 1 shows a standard pyramid computer, while Fig. 5 illustrates an essential pyramid, defined at pages 1644-1645. Implementations of essential pyramids are described at pages 1645-1646.

Conventional multigrid image processing techniques are also hierarchical, employing substantially fewer processing units at each level than at the next lower level. Frederickson, P.O., and McBryan, O.A., "Parallel Superconvergent Multigrid," Multigrid Methods: Theory, Applications and Supercomputing, McCormick, Marcel-Dekker, April 1988, pp. 195-210, describe a parallel superconvergent multigrid (PSMG) algorithm for the solution of large sparse linear systems on massively parallel supercomputers. Section 2.1 describes the use of the PSMG algorithm for a discretization problem by constructing two coarse grid solutions and combining them to provide a fine grid correction that is better than the coarse grid solutions. If the number of processors on a massively parallel machine is comparable to the number of fine grid points, the two coarse grid solutions can be solved simultaneously. For a problem in d dimensions, 2^d coarse grids are obtained and the fine grid solution is defined by performing a suitable linear interpolation of the coarse grid points. This technique therefore obtains more results at each level of the hierarchy.

Kent, US-A-4,601,055, describes an iconic-to-iconic low-level image processor with a sequence of identical intermediate stages between an input stage and an output stage, as shown and described in relation to Fig. 1. As shown in Figs. 1-3, forward, recursive, and retrograde paths allow various operations, including forward and retrograde neighborhood operators. A discussion of the neighborhood operator begins at col. 6, line 42, and Fig. 5 illustrates the neighborhood operators, path functions, and combining logic for a plurality of neighborhood operations. A region of interest operator is described beginning at col. 13, line 48. The discussion beginning at col. 13, line 59 explains how the processor can construct multiresolution, pyramid, sequences of images by sampling and pixel doubling, and an example of pyramid-based processing is described in relation to Fig. 9.

Summary of the Invention

The techniques of the present invention analyze a body of data items to obtain information about attributes of groups of the data items. For example, data defining an image can be processed to obtain information about geometric structures in regions of the image. In general, the techniques of the invention can be efficiently implemented with a parallel processor.

One aspect of the invention originated in the recognition of a problem with conventional techniques for analyzing an image to obtain information about its geometric structures. Pavlidis, T., Algorithms for Graphics and Image Processing, Computer Science Press, Rockville, Md., 1982, pp. 99-127, describes an example of a conventional hierarchical image analysis technique, a data structure called the binary image tree. A node at a given level in the tree corresponds to a respective rectangular region in the image, and each child of a node corresponds to half of the node's respective region, as defined by a vertical or horizontal split. The root node corresponds to the entire image. The data item in a node at a given level in the hierarchy is aggregative in the sense that it is produced by operating on the data of a group of child nodes from the next lower level such that the parent node's data item indicates an attribute of the image region corresponding to the parent node. An operation at a node in the hierarchy which produces a data item that indicates an attribute of the corresponding subset of the input data set is subsequently referred to as an aggregative operation.
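As a concrete point of reference (this sketch is an illustration added here, not part of the patent text), the following Python fragment carries out this kind of conventional aggregative operation over a one-dimensional row of binary pixel values, assuming the row length is a power of two; each level has half as many nodes as the level below it, and the aggregative operation is a logical OR indicating whether any pixel in a node's region is on.

    # A minimal, hypothetical one-dimensional analogue of the binary image
    # tree: each level has half as many nodes as the next lower level, and a
    # node's data item is the OR of its two children, so it indicates whether
    # any pixel in the node's region is on. Assumes len(pixels) is a power of 2.
    def binary_tree_levels(pixels):
        levels = [list(pixels)]
        while len(levels[-1]) > 1:
            child = levels[-1]
            parent = [child[2 * i] | child[2 * i + 1]
                      for i in range(len(child) // 2)]
            levels.append(parent)
        return levels

    # The adjacent on pixels at positions 3 and 4 fall into different
    # level-1 nodes, so no single node of that level describes the pair.
    print(binary_tree_levels([0, 0, 0, 1, 1, 0, 0, 0]))
    # [[0, 0, 0, 1, 1, 0, 0, 0], [0, 1, 1, 0], [1, 1], [1]]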

Since the binary image tree includes fewer nodes at higher levels in the hierarchy, the resulting description is more and more sparse at higher levels. In other words, the sizes and positions of the respective regions corresponding to the nodes similarly become progressively more constrained at the higher levels, reducing the geometric structures that can be detected.
A geometric structure in an image can occur at any size and position within the image, so that the sizes and positions of the regions of interest cannot be known in advance. In other words, if an analysis technique constrains the regions of interest to particular sizes or positions, as conventional hierarchical techniques based on trees or pyramids do, the technique is unable to detect geometric structures that do not fit into the constrained regions.

This problem is not unique to two-dimensional image analysis and other analysis of signals that map into two dimensions. It can also affect analysis of signals that map into one dimension, such as time varying signals, or of signals that map into more than two dimensions, such as a sequence of images over time or a depth image such as from a laser range finder.
Furthermore, the problem could affect analysis of any body of data in which operations are performed on groups of data items in the aggregate to obtain information about large-scale attributes of the data.

This aspect of the invention is further based on the discovery that it is possible to operate hierarchically on a body of data in such a way that, at one or more levels of the hierarchy, the number of aggregative data items produced (e.g., the number of distinct image regions described) is not substantially less than the number described at the next lower level. For example, it is possible to hierarchically describe image regions in such a way that while the size of the image regions doubles, say, from one level to another (as in the binary image tree), the regions described at a higher level are defined at nearly as many distinct positions as the regions at the next lower level. (This implies that the regions described at the higher level will partially overlap.) The set of data items produced by such an operation will subsequently be referred to as a dense hierarchy of aggregative data items. That is, a hierarchical collection of data items is dense if, for one or more levels of the hierarchy, the number of distinct aggregative data items is not substantially less than the number of data items at the next lower level. A dense hierarchy of data items can be exhaustive in the sense that the number of distinct aggregative data items at each level is equal to the number of data items at the next lower level. Producing a dense or exhaustive hierarchy of aggregative data items substantially alleviates the problem of constrained regions described above by addressing directly the issue of constrained positions.
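As a rough sketch of the difference (again an added illustration, with wrap-around at the edges assumed to avoid edge effects), the following fragment produces an exhaustive hierarchy over a one-dimensional body of data: every level has exactly one aggregative data item per position, and the item at a given position describes a region whose size doubles with each level while the number of positions stays the same.

    # A hypothetical exhaustive aggregative hierarchy over a one-dimensional
    # body of data, assuming wrap-around at the edges. The item at position p
    # on each new level combines the items at p and p + offset on the level
    # below, with the offset doubling from level to level, so the number of
    # data items never shrinks while the region size doubles.
    def exhaustive_levels(data, combine=lambda a, b: a | b):
        n = len(data)
        levels = [list(data)]
        offset = 1
        while offset < n:
            prev = levels[-1]
            levels.append([combine(prev[p], prev[(p + offset) % n])
                           for p in range(n)])
            offset *= 2
        return levels

    # Unlike the tree sketch above, regions of size two exist at every
    # position, so the adjacent on pixels at positions 3 and 4 are described
    # by a single data item (the one at position 3 on the first higher level).
    print(exhaustive_levels([0, 0, 0, 1, 1, 0, 0, 0])[1])
    # [0, 0, 1, 1, 1, 0, 0, 0]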

This technique can be applied to image analysis, for example, with pixel values of an image being the data items of the lowest level of the hierarchy.
The aggregative operations can operate on groups of data items that together relate to a region of the image. In this way, an aggregative data item indicates an attribute of a respective larger region and the lower level data items from which it is produced indicate attributes of respective smaller regions that together form the respective larger region.

This aspect of the invention is further based on the discovery of a class of abstract computation structures in which exhaustive hierarchical aggregative data analysis processes can be embedded. This class of computation structures is referred to as exhaustive hierarchical computation structures. The embedding of exhaustive hierarchical analysis in a computation structure of this class is analogous, and in some implementations (notably certain parallel implementations) similar in processing efficiency, to the embedding of conventional hierarchical aggregative data analysis processes in tree structures. (The latter has been illustrated above through the example of image analysis using the binary image tree.) The idea of embedding hierarchical analysis in a computation structure may be understood by considering it in the familiar context of the binary tree. A
binary tree computation structure may be viewed as a set of nodes divided into subsets each corresponding to a particular level of hierarchical processing, such that (i) there exists a top level consisting of a single node;
(ii) there exists a base level; (iii) for each level P except the base level, there exists one other level C (possibly the base level) consisting of twice as many nodes as P, and each node of P is uniquely associated with two nodes of C.
The base level nodes of such a structure can be mapped to the elements of any body of data, such as a two-dimensional image. Through an appropriate mapping of the base level nodes of such a structure to locations in a two-dimensional image, a correspondence between the nodes at non-base levels and certain rectangular image regions may be established, as seen above in the case of the binary image tree. Under such a mapping, the analysis of the rectangular image regions corresponding to the tree nodes may be said to be embedded in the tree structure in the sense that a node in the tree and all its descendants together carry out the analysis of the node's corresponding region.

One exhaustive hierarchical computation structure that is closely related to the preceding binary tree structure may be viewed as a set of nodes divided into subsets each corresponding to a particular level of hierarchical processing, such that (i) there exists a top level; (ii) there exists a base level;
(iii) for each level P except the base level, there exists one other level C
(possibly the base level) containing the same number of nodes as P, and each node of P is associated with two nodes of C, each of which is associated with just one other node of P. This structure is subsequently referred to as a binary computational jungle, or just a binary jungle. Through an appropriate mapping of the nodes of this structure to the locations of a one-dimensional array, or of a variant of this structure (one that is two-dimensional in a particular sense) to locations in a two-dimensional array, a correspondence between the nodes at non-base levels and rectangular array regions may be established. The ideal structure as defined above may be appropriately modified for such a mapping to deal with edge effects and the like.

One aspect of the invention pertains to this mapping. It is a general scheme for mapping locations in an array into nodes at two adjacent levels of a binary exhaustive hierarchical structure. In other words, this is a scheme for establishing positional relations in the array that correspond to parent-child relations at a given level in the exhaustive hierarchical computation structure. The scheme defines a node and its first child to correspond to the same location in the array. The scheme defines the array location corresponding to a node's second child to have a non-zero offset in one dimension from the node's array location and to have a zero offset in the other dimensions; the non-zero offset's absolute value is a power of two determined by level in the hierarchy. For instance, in a one-dimensional array, the offset at the first level can be 2^0 = 1, so that each second level data item is produced by operation on adjacent first level data items; the offset at the second level can be 2^1 = 2, so that each third level data item is produced by operation on the second level data items that are separated by another data item; the offset at the third level can similarly be 2^2 = 4, and so forth. In a two-dimensional array, the non-zero offsets at even levels are in one dimension, and the non-zero offsets at odd levels are in the other dimension.
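The following fragment sketches this offset scheme for a two-dimensional array (the helper names are assumptions of this illustration, and wrap-around is used at the array edges in place of the edge-effect modifications mentioned above): levels are numbered from one upward, odd levels offset in one dimension and even levels in the other, and the offset doubles every two levels.

    # Sketch of the power-of-two offset scheme for a two-dimensional array.
    # A node and its first child share an array location; the second child
    # lies at the offset returned here. Which dimension is offset first is
    # an arbitrary choice of this illustration.
    def second_child_offset(level):
        step = 2 ** ((level - 1) // 2)
        return (0, step) if level % 2 == 1 else (step, 0)

    def exhaustive_level_up(grid, level, combine=lambda a, b: a | b):
        rows, cols = len(grid), len(grid[0])
        dy, dx = second_child_offset(level)
        return [[combine(grid[y][x], grid[(y + dy) % rows][(x + dx) % cols])
                 for x in range(cols)]
                for y in range(rows)]

    # Levels 1, 2, 3, 4 use offsets (0, 1), (1, 0), (0, 2), (2, 0), and so on.
    print([second_child_offset(level) for level in range(1, 5)])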

In what will be referred to as a fully parallel implementation of hierarchical analysis using a binary tree, each node in the tree corresponds to a distinct processing unit, so that all computations at a given level can take place simultaneously. In addition, each processing unit is able to communicate directly, for example is connected by a wire, to each processing unit corresponding to one of its child nodes at the next lower level. A single pass of fully parallel hierarchical analysis in a tree can therefore be completed in time proportional to the logarithm of the number of leaves of the tree, where at each level, each processing unit operates upon a fixed amount of data read from its child processing units. Binary hierarchical analysis can, of course, be implemented by a serial processor, or other kinds of parallel processors. The importance here of the time performance of the fully parallel implementation is that it represents the theoretical ideal for hierarchical analysis using a binary tree structure. The attributes of a fully parallel implementation that enable this ideal to be achieved are (i) a one to one correspondence between processing units and nodes in the tree structure; (ii) simultaneous operation of all processing units at a given level; and (iii) direct communication links between processing units, which means that communication is completed in a small fixed amount of time. The primary advantage of embedding the analysis of two-dimensional image regions, for example in a binary tree structure, derives from the computational principle of divide-and-conquer: the amount of computational effort expended by each processing unit is constant across all levels of the hierarchy, but the size of the region described by this computational effort doubles with every step up the hierarchy. This advantage ensues even in a serial implementation.

In a fully parallel implementation of exhaustive hierarchical analysis in a binary jungle, each node in the jungle corresponds to a distinct processing unit, so that all computations at a given level can take place simultaneously. Fully parallel hierarchical analysis in a jungle can therefore also be completed in time proportional to the logarithm of the number of leaves of the tree. Again, this time performance represents the theoretical ideal for hierarchical analysis using a binary jungle structure. The divide-and-conquer advantage claimed above for the binary tree also ensues with exhaustive hierarchical analysis embedded in a jungle.

A further important advantage follows from the fact that the regions described at any given level of the tree overlap. Since the areas of the regions at a given level increase exponentially as level in the hierarchy increases, and since at each level the regions are defined at every possible position, the maximum overlap between regions at a given level also increases exponentially with hierarchical level. However, at any given level of the hierarchy, a given node corresponds to the union of two regions that do not overlap, just as in the binary image tree. In other words, the overall hierarchical process describes regions that overlap extensively at all levels, but the individual operations of the process done by each processing unit never involve overlapping regions, which simplifies the computations.

To fully appreciate this advantage, it is worthwhile to imagine a non-hierarchical computation structure in which the analysis of densely overlapping regions could be embedded. This structure would consist of just two sets of nodes R and I. Each node in I is mapped to a distinct location in a two dimensional image. Each node of R is associated with the elements of a distinct subset of the nodes in I, and thus with a distinct region of the image. In a fully parallel implementation, there would be a distinct processing unit for each node in R and I, and the association between a node in R and a node in I would be implemented by a direct communication link. If two nodes in R correspond to overlapping regions in the image, then there is a set of nodes in I, corresponding to the intersection of the regions, which are associated with both nodes. Therefore, the processing units corresponding to locations in the regions' intersection would each have communication links to both processing units in R corresponding to the regions. The number of communication links into a processor would be equal to the number of regions that include the corresponding image location.

This is a problematic situation because it encounters several practical engineering limits in the design of the individual processing units of a massively parallel computer. First, the number of direct communication links into a processing unit is limited. Second, the amount of memory storage for incoming values in a processing unit is also limited. In addition, the computing power of an individual processor is limited, so it could not process a large number of incoming values all at once, even if it could receive and buffer them simultaneously; that is, the processing unit is not itself a powerful, parallel processor. Although they have been illustrated in the extreme case of a non-hierarchical computation structure, these difficulties also arise in a hierarchical computation structure to the extent that an individual processor is associated with multiple parent processors. The exhaustive hierarchical embedding introduced in the invention represents the computational ideal in so far as regions with extensive overlap are analyzed in a manner that places minimum demands on the number of communication links, memory resources, and computing power of the individual processing units.

Another aspect of the invention is based on the recognition of the difficulty of realizing a fully parallel implementation. Such an implementation would employ a hierarchical parallel processor with a respective processing unit for each data item, from the body of data at the lowest level up to the highest level of aggregative data items. In such a processor, the processing units of each level could be directly connected to provide their results to the processing units of the next higher level, for very fast communication. This structure would be difficult and expensive to build on a large scale. The invention solves this problem based on the recognition that the operations that produce the hierarchy are parallel at each level of the hierarchy but are serial between levels, so that the aggregative data items at one of the higher levels can only be produced after the data items at the next lower level are produced. Therefore, a satisfactory emulation of the hierarchical parallel processor can be obtained with a single level parallel processor if at each step the necessary communication of results between the processing units can be completed in a small, fixed time. This solution can be implemented based on the discovery that conventional communication techniques for a massively parallel processor, including hypercube and grid techniques, can under some conditions provide sufficiently fast power-of-two communication. This type of implementation is subsequently referred to as a flat parallel implementation.
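A rough sketch of such a flat parallel emulation follows (an added illustration: numpy's roll operation stands in for the grid-shift communication of a mesh-connected machine, and a square image with power-of-two side is assumed); every array element plays the role of one processing unit, and each level of the hierarchy is a single step in which every unit combines its own value with the value shifted in from the current power-of-two offset.

    # Hypothetical flat parallel emulation of the exhaustive hierarchy.
    # np.roll models a uniform grid shift; each iteration is one level, with
    # the shift alternating between the two array dimensions and its size
    # doubling every two levels.
    import numpy as np

    def flat_parallel_hierarchy(image, combine=np.logical_or):
        level = image.astype(bool)
        results = [level]
        n = image.shape[0]
        step, axis = 1, 1
        while step < n:
            shifted = np.roll(level, -step, axis=axis)  # power-of-two shift
            level = combine(level, shifted)
            results.append(level)
            if axis == 0:            # after the second dimension, double the step
                step *= 2
            axis = 1 - axis          # alternate dimensions between levels
        return results

    image = np.zeros((4, 4), dtype=int)
    image[1, 1:3] = 1
    print(len(flat_parallel_hierarchy(image)))  # lowest level plus four higher levels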

A further aspect of the invention is based on the observation that for a flat parallel implementation, in which each processing unit produces all the results for all levels of the hierarchy, the local memory of each processing unit may not be sufficient to store all of the results it produces at all of the levels of the hierarchy. This problem can be solved by encoding or selecting among the data items of lower levels after operations on them are completed. For example, the level on which a result changes from one binary value to another can be stored. A flat parallel implementation in which the processing units do not store the results at all levels is referred to as an in-place implementation.
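The encoding can be sketched as follows (an added illustration that assumes a binary attribute whose value changes at most once going up the hierarchy, as in one of the claims below): each unit keeps a single level number rather than one result per level.

    # Hypothetical in-place encoding: instead of storing a result for every
    # level, each processing unit stores only the level at which its binary
    # result first changes from 0 to 1, or None if it never does.
    def encode_transition_levels(per_level_results):
        # per_level_results[level][p] is the binary result of unit p at that level.
        num_units = len(per_level_results[0])
        encoded = [None] * num_units
        for level, row in enumerate(per_level_results):
            for p, value in enumerate(row):
                if value and encoded[p] is None:
                    encoded[p] = level
        return encoded

    # Unit 0 turns on at level 2, unit 1 at level 0, and unit 2 never does.
    print(encode_transition_levels([[0, 1, 0], [0, 1, 0], [1, 1, 0]]))
    # [2, 0, None]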

Although in principle a fully parallel implementation can most closely approach ideal time performance in exhaustive hierarchical analysis, under certain conditions serial implementations and other parallel implementations may in practice provide suitable performance. Exhaustive hierarchical analysis may be usefully employed in any system in which the maximum amount of time to carry out any basic operation of the analysis, such as arithmetic or logical operations, or parent-child communication, for all nodes at any given level of the hierarchy, is fixed and sufficiently small. In a machine without dedicated communication links at all levels of the hierarchy, this maximum time would most likely be dominated by the long-range communication operations involved at the highest levels. The critical general requirement, therefore, for implementing exhaustive hierarchical analysis in a form other than the parallel implementations discussed above, is a rate of data transfer high enough to allow communication at the highest levels to be treated as a basic, or unit time, operation.

A further aspect of the invention is based on the discovery of techniques that employ a dense or exhaustive hierarchy to obtain attributes of regions in an image. These techniques are not only useful for square and oblong rectangular regions, but can also be used for one-pixel wide slices of the image, by idling at appropriate levels of the hierarchy.

Another aspect of the invention is based on the recognition of a problem in propagating an attribute of a region downward through a dense hierarchy of data items. In such a hierarchy, each region is included in a number of larger regions, and the larger regions frequently have different values for the attribute.

In general, this problem can be solved by operating on a set of higher level data items to produce data items at the next lower level that indicate an attribute of a subregion that is included in each of the regions in the set.

In one specific technique, the next lower level data item could include a central value, e.g. the mode, mean, or median, or an extremum, i.e. the maximum or minimum, of the attribute values of the regions that include it. This solution can be implemented by a downward pass through the hierarchy, choosing at each region the maximum (or minimum or average) of the attribute values from its parent regions. This solution is appropriate for attributes such as topological class, local extent, and local width.
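In the one-dimensional exhaustive structure sketched earlier (with wrap-around assumed), such a downward pass can be written as follows; each location takes the maximum of the values of its two parents at the level above, and min or an average could be substituted for max.

    # Hypothetical downward propagation using a maximum criterion. In the
    # wrap-around structure above, location p has parents at p and p - offset
    # on the level above, so each step takes the maximum over those two and
    # the offset halves on the way down.
    def propagate_down(top_values, top_level):
        n = len(top_values)
        values = list(top_values)
        offset = 2 ** (top_level - 1)
        while offset >= 1:
            values = [max(values[p], values[(p - offset) % n])
                      for p in range(n)]
            offset //= 2
        return values

    # Each pixel ends up with the maximum attribute value among the level-2
    # regions (of four positions each) that contain it.
    print(propagate_down([5, 0, 7, 0, 0, 0, 0, 0], top_level=2))
    # [5, 5, 7, 7, 7, 7, 0, 0]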

In another specific technique, the next lower level data item could include an attribute value of one of the regions that include it based on a criterion that selects one of the regions. For example, for orientation, a pixel can be given the value from the minimum-scale connected component that includes it and that has sufficient pixels to provide a good orientation estimate. This technique can be implemented by a downward pass through the hierarchy.

Another aspect of the invention is based on the recognition of a general problem in analysis of an image or other body of data. It is often desirable to obtain general information about an image or other body of data quickly, but a body of data as large as an image of a practical size includes too much information to permit rapid identification of its global properties. To create a histogram, for example, requires time-consuming construction of a global distribution.

This aspect of the invention is further based on the discovery of a technique for rapidly obtaining general information about a body of data by hierarchically processing local information. This technique detects one or more values that have a relatively high local frequency within the image, called prominent values. The technique also provides a rough estimate of the relative global frequencies of the prominent values.

The technique can be implemented by producing a hierarchy of prominent values in which each prominent value is selected from a set of prominent values at the next lower level of the hierarchy. Each prominent value at the next lower level has a count roughly indicating its frequency up to that level in the hierarchy. The prominent value at the higher level is the more frequent of the prominent values at the next lower level, as indicated by the counts. If two of the prominent values at the next lower level are sufficiently similar, their counts are summed to obtain a count for the prominent value at the higher level, but otherwise the count of the more frequent value is used.

When a prominent value is obtained in this manner, the data items in the body of data with that value can be ignored in subsequent operations, so that other prominent values can be obtained. This can be repeated until insufficient additional values remain.
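The combination rule can be sketched as follows (an added illustration in which values are numbers, "sufficiently similar" is taken to mean equal within a fixed tolerance, and for brevity the counts are combined up a minimal, tree-shaped hierarchy; the dense version applies the same rule at every position).

    # Hypothetical prominent-value combination: each data item is a
    # (value, count) pair, where the count roughly indicates frequency up to
    # that level of the hierarchy.
    def combine_prominent(item_a, item_b, tolerance=0):
        value_a, count_a = item_a
        value_b, count_b = item_b
        if abs(value_a - value_b) <= tolerance:
            # Sufficiently similar: keep one value and sum the counts.
            return (value_a, count_a + count_b)
        # Otherwise keep the more frequent value with its own count.
        return item_a if count_a >= count_b else item_b

    def prominent_value(data, tolerance=0):
        items = [(value, 1) for value in data]   # lowest level: count of 1 each
        while len(items) > 1:
            items = [combine_prominent(items[2 * i], items[2 * i + 1], tolerance)
                     for i in range(len(items) // 2)]
        return items[0]

    # The locally frequent value 3 surfaces, with a rough (not exact) count.
    print(prominent_value([3, 3, 7, 3, 3, 3, 9, 9]))   # (3, 4)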

Other aspects of this invention are as follows:
A method of analyzing a body of data that includes a plurality of data items; the method being performed by operating a system that includes memory and a processor connected for accessing the memory; the method comprising steps of:

storing in the memory the data items of the body of data; and operating the processor to produce a hierarchy of data items, the hierarchy including the data items of the body of data as a lowest level and at least one higher level of aggregative data items, each higher level having a respective next lower level, each level of the hierarchy having a respective number of data items;

the step of operating the processor to produce the hierarchy comprising, for each of the higher levels, a respective substep of producing each aggregative data item of the level by operating on a respective group of the data items of the respective next lower level, each aggregative data item of the level being for indicating an attribute of the respective group of the data items of the respective next lower level;

the respective number of aggregative data items of a first one of the higher levels being not substantially less than the respective number of data items of the next lower level.

A system for analyzing a body of data that includes a plurality of data items; the system comprising:
memory for storing the data items of the body of data; and a processor connected for accessing the data items stored in the memory, the processor further being for producing a hierarchy of data items, the hierarchy including the data items of the body of data as a lowest level and a series of higher levels of aggregative data items, each of the higher levels having a respective next lower level, each level of the hierarchy having a respective number of data items;

the processor producing each of the higher levels of the hierarchy by producing each aggregative data item of the level by operating on a respective group of the data items of the respective next lower level, each aggregative data item of the level indicating an attribute of the respective group of the data items of the respective next lower level;

the respective number of aggregative data items of a first one of the higher levels being not substantially less than the respective number of data items of the next lower level.

A method of operating a system that includes a plurality of processing units and, for each processing unit, respective local memory, each processing unit being connected for accessing its respective local memory, the processing units being connected for communication; the method comprising steps of:

storing in the respective local memory of each of the processing units a respective lowest level data item, the respective lowest level data items together forming a body of data; and producing a hierarchy of data items, the hierarchy including the data items of the body of data as a lowest level and at least one higher level of aggregative data items, each higher level having a respective next lower level, each level of the hierarchy having a respective number of data items;
the step of producing the hierarchy comprising, for each higher level, substeps of:

operating each of the processing units to produce a respective aggregative data item of the higher level by operating on a respective group of the data items of the respective next lower level; each aggregative data item of the respective higher level indicating an attribute of the respective group of the data items of the respective next lower level; and operating each of the processing units to store the respective aggregative data item produced by each processing unit in its local memory;

the substep of operating each of the processing units to produce a respective aggregative data item comprising a substep of communicating with a connected processing unit to obtain a first one of the respective group of the data items, the first one of the respective group of data items being stored in the local memory of the connected processing unit;

the respective number of aggregative data items of each of the higher levels being equal to the respective number of data items of the lowest level.

A method of operating a system that includes memory and a processor connected for accessing the memory, the method comprising steps of:

storing in the memory a body of data defining an image that includes a plurality of pixels, the body of data including a plurality of data items, each including a pixel value for a respective one of the pixels; and operating the processor to produce a dense hierarchy of levels of attribute data items by operating on the data items in the body of data, each attribute data item indicating an attribute of a respective region of the image; the levels further including a lowest level and a sequence of higher levels, each of the higher levels having a respective next lower level in the hierarchy; each level of the hierarchy having a respective number of attribute data items; the step of operating the processor comprising substeps of:

operating on each of the data items in the body of data to produce a respective starting attribute data item for each pixel, the lowest level of the hierarchy including the respective starting attribute data items; and for each of the higher levels, producing each attribute data item of the level by combining a respective set of the attribute data items of the respective next lower level; the respective number of attribute data items of each of the higher levels being not substantially less than the respective number of attribute data items of the respective next lower level.


A method of operating a system that includes a plurality of processing units and, for each processing unit, respective local memory; each processing unit being connected for accessing its respective local memory; the method comprising steps of:

storing in the respective local memory of each of the processing units a respective lowest level data item, the respective lowest level data items together forming a body of data; and producing a hierarchy of data items, the hierarchy including the data items of the body of data as a lowest level and a plurality of higher levels of data items, each higher level having a respective next lower level; the lowest level being the respective next lower level of one of the higher levels;

the step of producing the hierarchy comprising, for each of the higher levels, a substep of operating each of the processing units to produce a respective data item of the higher level by operating on a respective set of the data items of the respective next lower level; each data item of the respective higher level indicating an attribute of the respective set of the data items of the respective next lower level; the attribute being a binary attribute having a first value and a second value; each processing unit producing a respective sequence of data items, the respective sequence including one data item for each of the higher levels; each respective sequence including at most one transition from the first value to the second value; the respective sequence of a first one of the processing units including a transition from the first value to the second value between first and second ones of the levels;

the step of producing the hierarchy further comprising a substep of operating the first processing unit to store level data indicating the transition from the first value to the second value between the first and second levels.

A method of operating a system that includes memory and a processor connected for accessing the memory, the method comprising steps of:

storing in the memory a body of data defining an image that includes a plurality of pixels, the body of data including a plurality of data items, each including a pixel value for a respective one of the pixels;

operating the processor to produce a hierarchy of levels of upward attribute data items by operating on the data items in the body of data, each upward attribute data item indicating an upward attribute of a respective region of the image; the levels further including a lowest level and a sequence of higher levels, each of the higher levels having a respective next lower level in the hierarchy; each level of the hierarchy having a respective number of upward attribute data items; the respective number of upward attribute data items of each of the higher levels being not substantially less than the respective number of upward attribute data items of the respective next lower level; and

for a first one of the higher levels in the hierarchy, operating the processor to produce a number of downward attribute data items at a second level in the hierarchy by operating on a respective set of the upward attribute data items of the first higher level, the second level being the respective next lower level of the first higher level, each downward attribute data item indicating a downward attribute of a respective region of the image; each of the upward attribute data items of the second level having a respective one of the downward attribute data items.

A method of operating a system that includes memory and a processor connected for accessing the memory, the method comprising steps of:

storing in the memory a body of data that includes a plurality of data items, each including one of a set of respective values; and operating the processor to produce a first hierarchy of levels of prominent value data items by operating on the data items in the body of data, each prominent value data item including one of the respective values and a count; the levels further including a lowest level and a sequence of higher levels, each of the higher levels having a respective next lower level in the first hierarchy; the step of operating the processor comprising substeps of:

operating on each of the data items in the body of data to obtain a respective starting prominent value data item for each data item in the body of data, each data item's respective starting prominent value data item including the data item's respective value; the lowest level of the first hierarchy including the respective starting prominent value data items; and for each of the higher levels, producing each prominent value data item of the higher level by combining a respective set of the prominent value data items of the respective next lower level, each prominent value data item including one of the respective values from the prominent value data items in the respective set, the count in each prominent value data item being produced by operating on the counts in the prominent value data items in the respective set.

The following description, the drawings and the claims further set forth these and other objects, features and advantages of the invention.

Brief Description of the Drawings

Fig. 1 shows an image array that includes a geometric structure.

Fig. 2 is a flow chart of general steps of a method of analyzing a body of data according to the invention.

Fig. 3 is a schematic block diagram showing general components of a system that performs the steps of Fig. 2.


Fig. 4 is a schematic diagram showing a one-dimensional body of data and levels of an exhaustive aggregative hierarchy produced by performing operations on the body of data.


Fig. 5 is a schematic diagram showing a series of images tessellated into regions as indicated in the levels of a binary hierarchical tree.

Fig. 6 is a schematic diagram showing an extension of a tree machine according to the invention.

Fig. 7 is a schematic block diagram showing components of a system that simulates the machine of Fig. 6.

Fig. 8 is a schematic diagram showing communications between processing units in the system of Fig. 7.

Fig. 9 shows a series of shifts on a grid network that provide the communications of Fig. 8.

Fig. 10 is a flow chart showing general steps in producing a dense or exhaustive hierarchy according to the invention.

Fig. 11 shows a sequence of respective regions of increasing size for each of several pixels in an image.

Fig. 12 is a simple image of sixteen pixels.

Fig. 13 illustrates a technique for hierarchically counting black pixels according to the invention.

Fig. 14 illustrates a technique for counting pixels of horizontal slices of an image according to the invention.

Fig. 15 illustrates a technique for encoding region attribute data.

Fig. 16 illustrates a technique for propagating information downward within a hierarchy.

Fig. 17 illustrates the process of obtaining prominent values.

Fig. 18 shows general steps in an image analysis operation based on chunking.

Fig. 19 illustrates an operation which finds the results of various tests for the respective region at every pixel at all hierarchical levels.

Fig. 20 shows an operation which obtains an orientation at each pixel that satisfies constraints according to the invention.

Fig. 21(a)-21(c) illustrate three single connected black component slices intersecting a component.

Fig. 22 shows a downward propagation operation that uses a maximum criterion.

Fig. 23 shows a chunk coloring operation that continues until no new pixels are colored.

Fig. 24 illustrates general steps in applying a selection criterion to perform a selection operation.

Fig. 25 illustrates an operation that labels all pixels that are similar to the current focus pixel according to the invention.

Fig. 26 illustrates an operation that propagates a globally salient maximum value to the upper left corner of an image, after which it can be propagated downward as in Fig. 25.

Fig. 27 shows an operation which finds a hierarchical mode and stores the result for further operations.

Fig. 28 shows how prominent values of the distribution can be established in sequence by an iterative process.

Detailed Description

A. Conceptual Framework

The following conceptual framework is helpful in understanding the broad scope of the invention, and the terms defined below have the meanings indicated throughout this application, including the claims.

A "data processor" or "processor" is any component, combination of components, or system that can process data, and may include one or more central processing units or other processing components. A "processing unit" is a processor that is a component within another processor. Two processing units are "connected" by any combination of connections between them that permits communication of data from one of the processing units to the other.

"Memory" is any component, combination of components, or system that can store data, and may include local and remote memory and input/output devices.

A processor "accesses" data or a data structure by any operation that retrieves or modifies the data or data included in the data structure, such as by reading or writing data at a location in memory. A processor can be "connected for accessing" data or a data structure by any combination of connections with memory that permits the processor to access the data or the data structure.

A "data structure" is any combination of interrelated items of data. An item of data is "included" in a data structure when it can be accessed using the locations or data of other items in the data structure; the included item of data may be another data structure. An "array of data" or "data array" or "array" is a data structure that includes items of data that can be mapped into an array. A "two-dimensional array" is a data array whose items of data can be mapped into an array having two dimensions.

A processor "operates on" data or a data structure by performing an operation that includes obtaining a logical or numerical result that depends on the data or data structure.

To "obtain" or "produce" data or a data structure is to perform any combination of operations that begins without the data or the data structure and that results in the data or data structure. Data or a data structure can be "obtained from" or "produced from" other data or another data structure by any combination of operations that obtains or produces the data or data structure by operating on the other data or on data in the other data structure. For example, an array can be obtained from another array by operations such as producing a smaller array that is the same as a part of the other array, producing a larger array that includes a part that is the same as the other array, copying the other array, or modifying data in the other array or in a copy of it.

A "hierarchy" of data items includes data items, each of which is at one of a series of levels within the hierarchy. To "produce" a hierarchy of data items is to perform a combination of operations that begins without the complete hierarchy of data items and that includes the production of all of the data items of the hierarchy that are not present at the beginning. In other words, a hierarchy may be produced by a combination of operations that ends when all of the data items of the hierarchy have been produced, whether or not all of the data items are still stored. All of the data items of all of the levels could still be stored at the end of the operations, but the hierarchy is produced even though some of the data items are not stored after being used to produce data items at a higher level.
To produce a hierarchy "sequentially" is to produce the hierarchy by a sequence of substeps in which the first substep produces a first higher level of data items from a lowest level of data items, the second substep produces a second higher level of data items from the first higher level, and so forth.

Data "indicates" an attribute when the data indicates the presence of the attribute or a measure of the attribute. An "aggregative data item" is an item of data that indicates an attribute of a group of other data items. In a hierarchy of data items, a given level can include aggregative data items, each of which indicates an attribute of a respective group of data items of the next lower level of the hierarchy.

An "aggregative operation" is an operation on a set of data items, called input data items below, that produces a set of aggregative data items, called resulting data items below, with each of the aggregative data items being produced by operating on a respective set of the input data items. The respective sets of input data items are "evenly distributed" in relation to the complete set of input data items if each of the input data items is included in roughly the same number of respective sets of input data items as every other input data item and if no two of the respective sets are identical.

If the respective sets of input data items on which an aggregative operation is perforrned are all of the same size a, the "aggregation degree" of the aggregative operation is equal to a. More generally, the respective sets of -. ~ 204 1 922 input data items could each have one of a small number of different sizes al, a2, . . . For the aggregative operations discussed below, a is generally greater than 1 and small comp~red to the number of input data items, except as otherwise indicated.

The "density" of an aggregative operation is the ratio c of the number of resulting data items to the number of input data items. This ratio can be related to the aggregatio~ degree a as follows, assuming in each case that the respective sets are evenly distributed: A "minim~l aggregative operation" is one for which c is approximately equal to 1/a, so that each of the input data items is in one of the respective sets of input data items. A
"dense aggregative operation" is one for which c is not substantially less than 1, so that each of the input data items is in not substantially less than a respective sets of input data items. An exhaustive aggregative operation is a dense aggregative operation for which c is equal to 1, so that each of the input data items is in a respective sets of input data items.

A "hierarchical aggregative operation" is a combin~tion of operations that sequentially produce a hierarchy and in which each substep of the sequence is an aggregative operation. An "aggregative hierarchy" is a hierarchy produced by a hierarchical aggregative ope~ti-~n An agg.~
hierarchy can be described as "minim~l," "exhaustive," or "dense" if all of the substeps of the hierarchical aggregative operation that produces it are minim~l, exhaustive, or dense, respectively. A "mixed aggregative hierarchy" is produced by a hierarchical aggregative operation that includes aggregative operations of varying densities, possibly including minim~l, ~ ~ 4 ~
exhaustive, and other densities that are between minimP~l and exhaustive or greater than exhaustive.

An "image" is a pattern of light. Data "defines" an image or another signal when the data includes sufficient information to produce the image or signal. For example, an array can define all or any part of an image, with each item of data in the array providing a value indicating the color of a respective location of the image.

A "dimensioned body of data" is a body of data that maps into a space that includes one or more dimensions. For example, an array that defines a two--limensional image is a dimensioned body of data. A "geometric structure" is a configuration of data items that occurs in a dimensioned body of data. Examples of geometric structures include points; relations ~mong points; properties of points, such as color, surface orientation, or depth;
configurations of points, such as lines and curves, line junctions, corners, angles, connected regions, region boundaries, surfaces, solids; and so forth.

Each location in an image may be called a "pixel." In a body of data definin~
an image in which each item of data provides a value, each value indicating the color of a location may be called a "pixel value." Each pixel value is a bitin the "binary form" of the image, a grey-scale value in a "grey-scale form" of the image, or a set of color space coordinates in a "color coordinate form" of the image, the binary for~n, grey-scale form, and color coordinate form each being a body of data defining the image.

A "connected cornponent" or "blob" is a set of pixels in an image, all of which have pixel values that meet a criterion and all of which are pairwise connected through an appropriate rule such as that the pixels in a pair are connected by a chain of neighbors within the set. For example, a connected component of a binary form of an image can include a connected set of pixels that have the same binary value, such as black.

A "data space" is a space into which the data items of a dimensioned body of data can be mapped. In general, a number of bodies of data can be mapped into the same data space. For example, arrays defining many different images can all be mapped into the same two-~limensional data space.

An "analysis region" or "region" of a data space or of any of the bodies of datathat can be mapped into the data space is a bounded part of the data space, defined without regard to the values of the data items mapped into the analysis region. A region of the array defining an image defines an analysis region of the image, so that an aggregative data item defines an attribute of an analysis region of an image when it indicates an attribute of the data items in an analysis region of the array defining the image. The attribute could, for example, be the presence of exactly one connected component in a respective analysis region. The size and position of the aggregative data item's respective analysis region do not depend on the presence or absence of a connected component, but rather on the set of data items on which operations are performed to produce the aggregative data item. An image is therefore divided into analysis regions by the aggregative operations perforrned on an array defining the image in a way that does not depend on the pixel values in the image. Typically, each pixel value is in at least one 204 t 922 analysis region at the lowest level of the hierarchy, and the analysis regions of each higher level are formed by combining analysis regions of the next lower level. Analysis regions "overlap" if they share one or more pixels.

A slice" is a rectangular analysis region with a width of one pixel, and a slice "extends" in the direction of its length.

An item of data is produced by "combining" other items of data when logical or arithmetic operations are performed on the other items of data that yield an item of data of the same type. For exarnple, if the other items of data are simple booleans, the c~mhined item of data is a simple boolean. If the other items of data are numbers, the comhine~ item of data could be a nurnber, produced by ~d~in~ the other items of data, calc~ t;ng the mean of the other items of data, select;n~ one of the other items of data, or a simil~r operation that produces a number.

A "power-of-two offset" within an array that defines a ~iimensioned body of data is an offset that spans one of the integral exponential powers of two, e.g.
20--1,21=2,22=4,etc.

2~1g2~
An operation "encodes" data items when performing the operation on the data items produces different data from which the encoded data items can subsequently be recovered.

An image input device" is a device that can receive an image and provide a signal ~lefining a version of the image. A "scanner" is an image input device that receives an image by a sc~nning operation, such as by sc~nning a document. A "user input device" is a device such as a keyboard or a mouse that can provide signals based on actions of a user. The data from the user input device may be a "request" for an operation, in which case the system may perform the requested operation in response. An "image output device"
is a device that can provide an image as output. A "display" is an image output device that provides information in visual forrn, such as on the screen of a cathode ray tube.

Pixels are "neighbors" or "neighboring" within an image when there are no other pixels between them and they meet an appropriate criterion for neighboring. If the pixels are rectangular and appear in rows and columns, each pixel may have 4 or 8 neighboring pixels, depending on the criterion used.

An edge" occurs in an image when two neighboring pixels have differentpixel values. The term "edge pixel" may be applied to one or both of the two neighboring pixels.

A "border" of a polygonal region, such as a rectangle, is the line of pixels at the perimeter of the region along one of its sides. A '~oundary" of a region is 2 ~ 2 a perimeter, defined by the portions of the boundaries of its pixels along which those pixels either have no neighboring pixels or have neighboring pixels that are not in the region. A connected component "crosses" a boundary of a region if the connected component includes a pair of neighboring pixels that are on opposite sides of the boundary, one being in the region and the other not being in the region.

B. General Features of Dense Aggregative Hierarchies

Figs. 1-4 illustrate general features of the invention. Fig. 1 shows an image array that includes a geometric structure. Fig. 2 shows general steps in a method of analyzing data by producing a dense aggregative hierarchy that includes a data item indicating an attribute of a group of data items in a body of data. Fig. 3 shows general components in a system that operates on a body of data and produces a dense aggregative hierarchy from which data indicating a group attribute can be obtained. Fig. 4 shows how a body of data and levels of an exhaustive aggregative hierarchy produced by operating on it can be related.

Fig. 1 shows image array 10, which is a square binary image array including 16 pixels, each of which is either dark, meaning it is on, or light, meaning it is off. Pixels 12, 14, 16, and 18 form one row of image array 10, with pixels 12 and 18 being off and pixels 14 and 16 being on.

One of the geometric structures in image array 10 is the adjacent two on pixels 14 and 16. Although this structure is visible to a human viewing Fig. 1, it is not explicitly indicated by any one of the data items in image array 10. Image array 10 can be analyzed to produce a data item indicating this geometric structure, but not all analysis techniques would be sure to detect this geometric structure. For example, a technique that pairs pixels 12 and 14 with each other and pixels 16 and 18 with each other would not find the pair of adjacent on pixels 14 and 16. On the other hand, a technique that pairs pixels 14 and 16 with each other would find the pair of adjacent on pixels.

Fig. 2 illustrates general steps in a method of obtaining data indicating group attributes. This method can be used in image processing to alleviate the problem illustrated by Fig. 1. The step in box 30 stores the data items of a body of data. The step in box 32 then operates on the stored body of data to produce a dense aggregative hierarchy including a data item that indicates an attribute. If the attribute is a geometric structure in an image, the data item could indicate its presence in an analysis region or could indicate some other measure of it. The dense aggregative hierarchy can be an exhaustive aggregative hierarchy that includes, at each level, a respective aggregative data item for each pixel position of the image.

Fig. 3 illustrates general components of a system that can perform the method of Fig. 2. Processor 50 is connected for accessing stored body of data 52, which can be a data array that defines an image. Processor 50 operates on body of data 52 to produce dense aggregative hierarchy 54, including a data item indicating an attribute of body of data 52. Processor 50 also produces data 56 indicating the attribute, which could be all or part of hierarchy 54 or could be other data produced by operating on hierarchy 54.

Fig. 4 illustrates a general technique by which a processor such as processor 50 could operate on one-dimensional body of data 60 to produce levels 62 and 64 of an exhaustive aggregative hierarchy. The processor operates on two adjacent data items of body of data 60 to produce each aggregative data item in level 62. Similarly, the processor operates on two of the data items in level 62 to produce each aggregative data item in level 64, but the data items in level 62 are separated by another data item, an offset of two. In other words, level 62 is produced by operating on data items that are offset by 2^0 = 1, while level 64 is produced by operating on data items that are offset by 2^1 = 2. The general technique of operating on data items with increasing power-of-two offsets can be extended to higher levels and can be generalized to two or more dimensions as discussed in greater detail below.
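As an illustration only, the following sketch simulates this one-dimensional technique sequentially in Python (the implementation described below uses *Lisp on a parallel machine); the function name, the choice of addition as the combining operation, and the treatment of positions near the right edge as combining with zero are illustrative assumptions rather than details of this patent.

    def build_hierarchy(data, num_levels):
        # data: list of numbers; level 0 of the hierarchy is the data itself.
        # At level l, each position i combines the result at i with the result
        # at i + 2**(l-1) from the level below, so every position receives an
        # aggregative data item at every level.
        levels = [list(data)]
        for l in range(1, num_levels + 1):
            offset = 2 ** (l - 1)
            below = levels[-1]
            current = []
            for i in range(len(below)):
                j = i + offset
                partner = below[j] if j < len(below) else 0   # edge positions see a zero
                current.append(below[i] + partner)            # "combine" by addition
            levels.append(current)
        return levels

    # Each level-2 value is the sum of four consecutive level-0 values.
    print(build_hierarchy([0, 1, 1, 0, 1, 0, 0, 1], 2)[2])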

C. An Implementation of Dense Aggregative Hierarchies

The invention has been implemented on a Connection Machine from Thinking Machines Corporation. The Connection Machine implementation can be viewed as a two-dimensional simulation of a three-dimensional network of processing units, referred to herein as a "binary image jungle" (BIJ).

1. Binary Image Jungle

The BIJ contrasts with conventional tree or pyramid structures, illustrated in Fig. 5. Fig. 6 illustrates part of a BIJ in one dimension.

Fig. 5 shows a series of images 80, 82, 84, 86, and 88, each tessellated into one or more rectangular analysis regions of a respective level of a binary tree 90, part of which is shown. Image 80 shows the analysis regions at level 0 of tree 90; each analysis region of image 80 could, for example, be a square pixel. Image 82 shows level 1, at which pairs of pixels from image 80 are combined to form analysis regions of a larger size; each pair includes a top pixel and a bottom pixel, so that the resulting rectangular analysis regions are twice as long in the y-direction as in the x-direction. At level 2, shown in image 84, pairs of the rectangular analysis regions from image 82 combine to form square analysis regions of a larger size; each pair includes a left analysis region and a right analysis region. Image 86 shows level 3, at which pairs of square analysis regions from image 84 combine to form rectangular analysis regions, and image 88 shows level 4, at which rectangular analysis regions from image 86 combine to form a square analysis region that is the entire image.

Fig. 5 thus illustrates that, for a square image of width N, the binary tree has 2 log N + 1 levels. The nodes on the lowest level can be pixel values of an image. At odd levels of the hierarchy, rectangular analysis regions are formed by the union of adjacent square analysis regions at the next lower level, and, at even levels of the hierarchy, square analysis regions are formed by the union of adjacent rectangular analysis regions. The analysis regions combined at odd levels can be called the "top child" and the "bottom child," while those combined at even levels can be called the "left child" and the "right child." The region formed by combining two child regions can be called the "parent". For simplicity, the term "first child" will be used to mean the top child at odd levels and the left child at even levels, and the term "second child" will be used to mean the bottom child at odd levels and the right child at even levels.

A tree or pyramid of processors, called a tree or pyramid machine, could be used to implement hierarchical tree 90. This implementation would include a processing unit for each node of tree 90, and each processing unit would have a small, fixed amount of memory, the exact amount depending on the operations to be performed. The primitive operations of a simple tree or pyramid machine would be arithmetic and logical operations performed by the processing unit at each node of the tree on data stored in local memory at that node or read from its children. Such a machine can be controlled to loop sequentially through all the levels of the hierarchy, either upward or downward, performing the same operations in parallel at all the nodes at each level, storing each node's results in its local memory. At any given time, only the processing units of one level are performing operations on data, and the only communication of data is between those processing units and their children. As the level in the hierarchy increases, the results of the operations become more and more sparse.

Fig. 6 shows one-dimensional hierarchical network 100. Hierarchical network 100 has local connectivity similar to the tree or pyramid machine, but the results of the operations do not become more sparse as the level increases. In other words, each level of the network includes as many processing units as the number of data items in the body of data at the lowest level of the network, so that the network can produce an exhaustive aggregative hierarchy of data items. If the body of data is an image, each level could include as many processing units as the number of pixels in the image.

Fig. 6 shows levels 102, 104, 106, and 108 of network 100. Each level includes 11 processing units, shown as circles, and the processing units of adjacent levels are connected, the connections being shown as lines. The processing units at lowest level 102 store the data items of the body of data.

Each processing unit at each of higher levels 104, 106, and 108 is directly connected to two child processing units in the next lower level, and operates on the results of the two connected child processing units to produce an aggregative data item. The resulting structure may be viewed as a collection of intertwined tree machines that share processing units at their lower levels, and is therefore referred to herein as a "jungle." Because each higher level processing unit in network 100 is connected to two child processing units, network 100 is a "binary jungle."

Fig. 6 illustrates connections at power-of-two offsets. The first child of each processing unit on levels 104, 106, and 108 is shown directly below its parent, while the second child is offset by an integer power of two. For level 104, the offset is 2^0 = 1; for level 106, 2^1 = 2; and for level 108, 2^2 = 4. If the levels are numbered from 0 to 3, the power-of-two offset at level l can be calculated as 2^(l-1). In a two-dimensional binary jungle, the x- and y-offsets could be (2^(l/2 - 1), 0) at even levels and (0, 2^((l-1)/2)) at odd levels.

As a result of the connections shown in Fig. 6, each aggregative data item produced by a processing unit at level 108 indicates an attribute of a set of data items at level 102 that extend from directly below the processing unit at level 108 to the right. For an implementation in two dimensions, each processing unit at the highest level could produce an aggregative data item indicating an attribute of a set with a corner value stored by the lowest level processing unit directly below the highest level processing unit.

If the two-dimensional implementation is used to process an image whose pixels are stored at its lowest level, it operates as a "binary image jungle" or "BIJ." If the two-dimensional implementation as described above were used as a BIJ, with pixel values defining an image being stored by the processing units at the lowest level, each higher level processing unit in network 100 would produce an aggregative data item indicating an attribute of a respective analysis region, and the respective analysis regions of the processing units at a given level would all be of the same size, with a processing unit for every possible positioning of an analysis region of that size within the image. In other words, network 100 produces an exhaustive hierarchy of aggregative data items.

The study of BIJ techniques is rooted in the hypothesis that the computational power and efficiency requirements of middle-level vision could be met almost entirely by a BIJ. The term "middle-level vision" describes those visual processes that, on the one hand, are practically independent of the observer's immediate goals or current situation, and, on the other hand, are essentially independent of prior knowledge of particular objects or configurations. In particular, processes for initially separating figure from ground and for establishing useful shape properties and relations are in the realm of middle-level vision. Ullman, S., "Visual Routines," Cognition, Vol. 18, 1984, pp. 97-157, describes routines that can be thought of as performing middle-level vision functions.

2. The Connection Machine System

Despite their promise, BIJ networks would be difficult and expensive to build for a practical image size. But a BIJ of a practical size can be simulated on a single instruction, multiple data (SIMD) parallel processing machine, an example of which is the Connection Machine from Thinking Machines Corporation. A SIMD machine can be thought of as including a powerful central controller and a collection of processing units, usually many simple processing units. The controller broadcasts a series of instructions, constituting the program, to the processing units, which synchronously execute each instruction.

Fig. 7 shows components of a Connection Machine system that performs image processing by simulating a BIJ. System 120 in Fig. 7 includes front end processor 122, such as a Symbolics or Sun workstation, connected for receiving image input from image input device 124, which could be a scanner or other input/output device capable of providing data defining an image. Front end processor 122 is also connected for receiving user signals from user input device 126, which could be a keyboard and mouse, so that an input image could be interactively drawn by the user. Front end processor 122 is connected for providing image output to image output device 128, which could be a display.

Front end processor 122 is connected to Connection Machine 130 in the conventional manner, and controls Connection Machine 130 by executing instructions from program memory 140, in the process accessing data memory 160 for storage and retrieval of data. Front end processor 122 can initialize variables, cold boot Connection Machine 130, and allocate storage to Connection Machine 130 in the conventional manner. To cause processing units in Connection Machine 130 to perform arithmetic and logical operations, front end processor 122 makes appropriate *Lisp calls in the conventional manner.

Program memory 140, in the illustrated implementation, stores *Lisp code 142, used to execute high level routines 144 which, in turn, call aggregative subroutines 146 which are written in *Lisp. Aggregative routines 146 call children's results subroutines 148, also written in *Lisp, and also perform encoding of data items in the hierarchy.

Examples of high level routines 144 and aggregative subroutines 146 are described in detail below and in coassigned U.S. Patent No. 5,305,395, entitled "Exhaustive Hierarchical Near Neighbor Operations on an Image," and 5,231,676, entitled "Hierarchical Operations on Border Attribute Data for Image Regions".

Data memory 160 includes image files 162, in which front end processor 122 can store an image for subsequent analysis or as a result of analysis. Front end processor 122 can also store geometric results 164 in data memory 160 as a result of analysis.

Connection Machine 130 provides hypercube network communications, which can be used to simulate grid network communications. Both communication techniques could be used for power-of-two offset communication, as discussed in more detail below.

3. Exhaustive Hierarchical Computation on the Connection Machine

Fig. 8 illustrates one way of using Connection Machine 130 to simulate a BIJ for a two-dimensional image. Fig. 9 illustrates communication by power-of-two shifts on a grid network. Fig. 10 illustrates steps followed by system 120 in making a pass through a dense hierarchy of aggregative data items.

Fig. 8 shows a part of array 180 of processing units in Connection Machine 130. As can be understood from Fig. 8, in order to simulate a BIJ, array 180 requires a mechanism for communicating across any power-of-two offset in the x- or y-dimensions in a small, fixed time and sufficient local storage for each processing unit. If these requirements are met, array 180 can simulate a BIJ with a plane of processing units, called a "flat" implementation. A flat implementation conserves processing units but sacrifices the opportunity for pipelining.

Fig. 8 shows the connections that lead to processing unit 182 with solid arrows and other connections that feed into the results received by processing unit 182 with dashed arrows. Connections at each level are indicated by l1, l2, and so forth. At level 0, processing unit 182 does not receive results from other processing units, but stores a pixel value of the image being analyzed. At level 1, processing unit 182 receives a pixel value from processing unit 184; at level 2, a level 1 result from processing unit 186; at level 3, a level 2 result from processing unit 188; at level 4, a level 3 result from processing unit 190; at level 5, a level 4 result from processing unit 192; and at level 6, a level 5 result from processing unit 194.

The Connection Machine provides uniform parallel grid communication as a primitive operation, and automatically optimizes communication across power-of-two offsets by employing a hypercube interconnect.

A hypercube network can lead to run time proportional to log N, because the network can be set up in such a way that processors corresponding to pixels at power-of-two offsets in the x- and y-dimensions are always separated in the network by exactly two wires.

Fig. 9 shows how the connections in Fig. 8 could be provided with a series of shifts on a grid network whose length is a power of two. At level 1, results from processing units 202 are shifted to processing units 204; at level 2, results from processing units 206 to processing units 208; at level 3, results from processing units 212 to processing units 214; at level 4, results from processing units 216 to processing units 218; at level 5, results from processing units 222 to processing units 224; and at level 6, results from processing units 226 to processing units 228.

Parallel communication across a given offset is implemented in such a network by sequentially shifting values between adjacent nodes in array 180. For such a network, the communication time at level l of the hierarchy is proportional to 2^l. For an N x N array, the communication time for one complete pass through the hierarchy is proportional to N + N/2 + N/4 + . . . + 1. This arrangement would be suitable for exhaustive aggregative operations

if shifts between adjacent processing units were fast enough that the maximum communication time--the time to shift data across the full width N of array 180--could be treated as a single time step.

A single image-sized array of processing units can simulate a BIJ exactly if each processing unit in the array has h times as much local memory as a BIJ processing unit, where h = 2 log N + 1 is the number of levels in the BIJ. At level l of such a simulation, each processing unit of the array simulates one processing unit of the BIJ and also simulates a child processing unit at the next lower level. Each processing unit's local memory can be divided into h segments, and the communication of results from the child processing unit can be simulated by a local memory transfer from segment l - 1 to segment l.

The amount of local memory required per processing unit can be reduced to a small fixed multiple of the memory in each BIJ processing unit with an in-place technique. The useful results at intermediate levels are encoded across levels to reduce the amount of memory required. For example, a processing unit's local memory could be divided into two segments--a parent segment and a child segment--each the size of the memory of each BIJ processing unit being simulated. At the end of the aggregative operation at each level, the roles of the two segments are swapped, such as by swapping contents or by swapping pointers to the segments. A small amount of additional local memory is required to hold the encoded results across all levels.
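A minimal sketch of this in-place idea, written as a sequential numpy simulation rather than as the *Lisp implementation described below: each location keeps only a child segment, a parent segment, and a small encoded record, and the segment roles are exchanged by swapping references after each level. The combine and encode callables, the wrap-around of np.roll at the image border, and the variable names are assumptions made for illustration.

    import numpy as np

    def in_place_pass(image, combine, encode):
        # image: square 2-D array of level-0 results, one per simulated processing unit
        child = image.astype(int)            # "child segment": results from the level below
        parent = np.zeros_like(child)        # "parent segment": results produced at this level
        encoded = np.zeros_like(child)       # small per-unit record kept across all levels
        levels_above = 2 * int(np.log2(image.shape[0]))
        for l in range(1, levels_above + 1):
            # the second child sits at a power-of-two offset:
            # vertical at odd levels, horizontal at even levels
            if l % 2 == 1:
                dy, dx = 2 ** ((l - 1) // 2), 0
            else:
                dy, dx = 0, 2 ** (l // 2 - 1)
            shifted = np.roll(child, shift=(-dy, -dx), axis=(0, 1))  # read the second child's result
            parent[:, :] = combine(child, shifted)
            encoded[:, :] = encode(encoded, parent, l)
            child, parent = parent, child    # swap roles instead of copying segments
        return encoded

A call such as in_place_pass(img, np.add, lambda enc, res, l: np.maximum(enc, res)) would, for example, retain the largest aggregative result seen at each location, standing in loosely for the encoding step.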

Fig. 10 shows general steps followed by front end processor 122 in producing a dense or exhaustive hierarchy, taking into account the above discussion.


The test in box 250 begins an iterative loop that produces one level of the hierarchy during each iteration. If levels remain to be produced, the step in box 252 sets up the processing units of Connection Machine 130 to produce the next level. Then, the step in box 254 causes each processing unit to operate on the results from the next lower level of its first and second children, C1 and C2, calling children's results subroutines 148 and encoding subroutines 150 as necessary. Children's results subroutines 148 can employ grid network or hypercube network communication as appropriate.
When all the levels have been produced in this manner, production of the hierarchy is completed.

More generally, steps like those in Fig. 10 can be followed in making an upward or downward pass through an existing hierarchy, with several differences. If a downward pass is being made, the step in box 254 would operate on C1 and C2 results by writing them. If an upward pass is being made, the step in box 254 could operate on C1 and C2 results by reading them. Furthermore, the step in box 254 could include decoding previously encoded results from intermediate levels.

Additional steps could be added to the steps in Fig. 10 to obtain a sequence of steps that includes both upward and downward movements within a hierarchy. For example, each iterative loop could be like the steps in Fig. 10, but additional preliminary steps could determine whether each iterative loop made an upward or a downward movement.

The techniques described above can be used to provide, in effect, a programming language for analyzing images. This approach includes decomposing visual processes into basic sequences of simple operations; performing the basic operations in a fixed time for an image of a given size; and performing the basic operations with a machine that includes simple processing units of limited speed and memory and in which the other processing units to which a given processing unit is directly connected are small in number in relation to the total number of processing units, so that communications are local. The requirement of fixed time operations is especially important for real time response, such as in robotic vision or document analysis. The fixed time should not depend on the complexity of the image, and preferably would not depend on the size of the image.

Specific applications of these techniques to image analysis are described below and in coassigned U.S. Patent No. 5,305,395, entitled "Exhaustive Hierarchical Near Neighbor Operations on an Image"; 5,231,676, entitled "Hierarchical Operations on Border Attribute Data for Image Regions"; 5,239,596, entitled "Labeling Pixels of an Image Based on Near Neighbor Attributes"; 5,255,354, entitled "Comparison of Image Shapes Based on Near Neighbor Data"; and U.S. Patent No. 5,193,125, entitled "Local Hierarchical Processing Focus Shift Within an Image". More generally, the techniques can be used to perform three types of general operations: (1) select and shift, or focus of attention, operations move among features in an image; (2) figure/ground separation operations produce parallel results, such as new images of figures; (3) property description operations produce scalar results, such as numbers or boolean values indicating shape properties or spatial relations in an image.

More advanced applications of these techniques could include image editing, to modify an analyzed image.

D. General Features of Hierarchical Operations on Regions

Figs. 11-15 illustrate general features of some applications of the invention. Fig. 11 shows, for each of several pixels in an image, a sequence of respective regions of increasing size. Fig. 12 is a simple image used for the examples in Figs. 13-14. Fig. 13 illustrates a technique for hierarchically counting the black pixels of respective regions of increasing size. Fig. 14 illustrates a technique for hierarchically counting the black pixels of respective horizontal slices of increasing size. Fig. 15 illustrates a technique for encoding region attribute data. Fig. 16 illustrates a downward propagation of region attribute data. Fig. 17 illustrates obtaining prominent values.

Fig. 11 shows fragment 310 of a two-dimensional binary image, each of whose pixels can be designated as (m, n) using coordinates as shown. For each pixel, a sequence of respective regions of increasing size can be defined, within each of which the pixel occupies the same position, such as the upper left-hand corner. For pixel (M, N) in fragment 310, the respective regions include two-pixel region 312 and four-pixel region 314; for pixel (M + 1, N), two-pixel region 316 and four-pixel region 318; for pixel (M + 2, N), two-pixel region 320 and a four-pixel region (not shown); and so forth. Each pixel's respective two-pixel region includes the neighboring pixel below it in fragment 310, so that, for example, two-pixel region 312 includes the two neighboring pixels (M, N) and (M, N + 1). Similarly, each pixel's respective four-pixel region includes its own respective two-pixel region and the respective two-pixel region of the neighboring pixel to its right in fragment 310, so that, for example, four-pixel region 314 includes the pixels in regions 312 and 316, including pixels (M, N), (M, N + 1), (M + 1, N), and (M + 1, N + 1).

Fig. 12 shows binary image 330, a simple image in which black pixels are shown shaded. Binary image 330 can be used to illustrate several techniques by which attributes of regions can be determined with local hierarchical operations. Boundary 332 surrounds four pixels, (1, 1), (1, 2), (2, 1), and (2, 2), to which Figs. 13-14 relate.

Each of Figs. 13-14 shows a sequence of three data item arrays, each array including a respective data item for each of these pixels. The first array in each sequence shows starting data items upon which a processor can operate to produce the other arrays. The starting data items each indicate attribute values for the respective pixel. The data items of the second array in each sequence indicate attribute values for the pixel's respective two-pixel region, and the data items of the third array indicate attribute values for the pixel's respective four-pixel region.

Array 340 in Fig. 13 shows starting data items that are equal to the pixel values of the pixels within boundary 332 in Fig. 12, with black pixels having the value 1 and white pixels having the value 0. Array 342 shows, for each pixel, the sum of its pixel value in array 340 with the pixel value in array 340 of the pixel immediately below it in image 330, so that each value in array 342 indicates the number of black pixels in a respective two-pixel region. For example, the respective two-pixel region of pixel (1, 2) includes two black pixels. Similarly, array 344 shows, for each pixel, the sum of its value in array 342 with the value in array 342 of the pixel immediately to its right in image 330. For example, the respective four-pixel region of pixel (1, 2) includes four black pixels.

Fig. 13 thus illustrates several general features: Each value in an array is produced by operating on two values, one a previously obtained value for the same pixel and the other a previously obtained value for another pixel. The techniques described in Section C above could therefore be applied, with a power-of-two offset between the pixels whose results are operated on to produce each value. Also, the operation performed on the two values is generally an operation that combines the values, as defined above. In Fig. 13, each operation combines two numbers by addition, but other combining operations could be used. Finally, the sequence of steps in Fig. 13 produces a hierarchy of data items, each indicating an attribute of a respective region of image 330.
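The two steps of Fig. 13 can be sketched with array shifts, as below; numpy, the zero treatment of neighbors outside the image, and the function name are illustrative choices, and the arrays are indexed by row and column rather than by the coordinates used in the figure.

    import numpy as np

    # Level 1 adds the value of the pixel immediately below; level 2 then adds the
    # level-1 value of the pixel immediately to the right, so each location ends up
    # with the black-pixel count of its respective 2 x 2 region.
    def count_black(image):
        counts = image.astype(int)
        below = np.zeros_like(counts)
        below[:-1, :] = counts[1:, :]     # value of the pixel below (0 outside the image)
        level1 = counts + below           # counts for the two-pixel regions
        right = np.zeros_like(level1)
        right[:, :-1] = level1[:, 1:]     # level-1 value of the pixel to the right
        return level1 + right             # counts for the four-pixel regions

    img = np.array([[1, 1, 0],
                    [1, 1, 1],
                    [0, 1, 0]])
    print(count_black(img))               # the value for the top-left pixel is 4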

Array 350 in Fig. 14 again shows starting data items that are equal to the pixel values. Array 352 shows, for each pixel, the same value as in array 350, so that array 352 can be viewed as produced by performing a null or "idle" operation on array 350. In effect, array 352 indicates the number of black pixels in the uppermost horizontal slice of the respective pixel's two-pixel region. Array 354 shows, for each pixel, the sum of its value in array 352 with the value in array 352 of the pixel immediately to its right in image 330. For example, the respective slice for pixel (1, 2) includes two black pixels.

If the sequence of Fig. 14 were continued to larger regions, every other operation would be a null or idle operation, so that the values produced would indicate attributes of progressively longer horizontal slices. If Fig. 14 shows idling at odd steps in the sequence, idling at even steps in the sequence would produce values indicating attributes of progressively longer vertical slices, each the leftmost vertical slice of a respective region of one of the pixels.
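A sketch of this idling scheme for horizontal slices, again as a sequential Python simulation; the step numbering, the use of addition, and the zero contribution of offsets that fall outside the image are illustrative assumptions.

    import numpy as np

    # Odd steps idle; even steps add the value at a power-of-two offset to the
    # right, so after 2k steps each location holds the number of black pixels in
    # the horizontal slice of length 2**k starting at that location.
    def horizontal_slice_counts(image, num_steps):
        counts = image.astype(int)
        for l in range(1, num_steps + 1):
            if l % 2 == 1:
                continue                    # idle at odd steps
            offset = 2 ** (l // 2 - 1)      # 1, 2, 4, ... at successive even steps
            shifted = np.zeros_like(counts)
            shifted[:, :-offset] = counts[:, offset:]
            counts = counts + shifted
        return counts

    row = np.array([[1, 1, 1, 0, 1, 1, 0, 0]])
    print(horizontal_slice_counts(row, 4))  # counts for slices of length 4: [[3 3 3 2 2 1 0 0]]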

As discussed in Section C above, exhaustive hierarchical operations like those described above can be performed with an in-place implementation on a parallel processor like the Connection Machine from Thinking Machines Corporation. As can be understood from Figs. 13-14, for a hierarchy that has a large number of levels, each processing unit in an in-place implementation can produce a large amount of data relating to the respective regions of a pixel. Fig. 15 illustrates one technique for reducing the amount of data through an encoding technique.

In Fig. 15, each data item includes two values indicating respectively the highest levels for which the respective region is full, with no white pixels, and vacant, with no black pixels. Because fullness and vacantness are binary attributes that make only one transition from one value to the other during a given sequence of increasingly large regions, the level at which the transition takes place can be encoded with a number. In array 390, each starting data item for a black pixel indicates that its highest full level is the first level and its highest vacant level is the zeroth level. In array 392, each data item of a two-pixel region that includes two black pixels indicates that its highest full level is the second level. The data item for pixel (2, 1) indicates that its highest vacant level is the first level, because its respective two-pixel region has one white pixel and one black pixel. In array 394, only the data item of pixel (1, 2) is changed, because its respective four-pixel region has all black pixels, while the other four-pixel regions each have both white and black pixels.
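The encoding of Fig. 15 can be sketched for a single location as follows; here levels are numbered from zero and -1 is used as the "never" value, following the convention of the implementation in Section E below rather than the first-level/zeroth-level numbering used in the figure, and the function name is an illustrative choice.

    # Rather than a boolean per level, keep only the highest level at which the
    # respective region is full and the highest at which it is vacant.  Fullness
    # and vacantness can only switch off as regions grow, so each is summarized
    # by the level of its single transition.
    def encode_full_vacant(region_is_full, region_is_vacant):
        highest_full = -1
        highest_vacant = -1
        for level, (full, vacant) in enumerate(zip(region_is_full, region_is_vacant)):
            if full:
                highest_full = level
            if vacant:
                highest_vacant = level
        return highest_full, highest_vacant

    # A black pixel whose two-pixel and four-pixel regions are all black:
    print(encode_full_vacant([True, True, True], [False, False, False]))   # (2, -1)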

Fig. 16 illustrates a technique for propagating information such as attribute values downward within a dense hierarchy of data items. Node 400 is at the next lower level in the hierarchy from nodes 402 and 404. For each node, an upward attribute value U and a downward attribute value D are produced.
Each upward attribute value U is produced by operating on a set of upward attribute values from the next lower level. The upward attribute values can then be used to produce the downward attribute values, such as by starting at one of the levels of the hierarchy with downward attribute values equal to the upward attribute values of that level. In propagating downward, each downward attribute value D is similarly produced by operating on a set of downward attribute values from the next higher level. Fig. 16 illustrates one way to do this.

Node 400 begins with downward attribute value D0, which could, for example, be equal to its upward attribute value U. One step operates on D0 and the downward attribute value D from node 402 to produce D1, a preliminary downward attribute value shown in box 406. Then, another step operates on D1 and the downward attribute value D from node 404 to produce D2, the final downward attribute value shown in box 408. D2 could, for example, be the maximum or minimum of the values operated on, or could be a central value. Also, D2 could be chosen by applying a criterion to nodes 402 and 404 to select a downward attribute value for node 400.

Fig. 17 illustrates a technique for obtaining information about prominent values in a body of data. The body of data could, for example, be an array of pixel values or other attribute values for the pixels of an image. Array 420 is produced from the body of data by preparing, for each pixel, a starting data item that includes the pixel's attribute value and a count that is initialized to one. In the manner illustrated in Fig. 11, a sequence of operations is then performed to produce arrays 422, 424, and 426. To produce array 422, each value in array 420 is compared with the neighboring value immediately below it. If the neighboring value is different, the value is given a count of one, but if the neighboring value is the same, the value is given a count of two, as shown. Similarly, to produce array 424, each value in array 422 is compared with the neighboring value immediately to its right. If the neighboring value is different and its count is not greater, the value is left unchanged. If the neighboring value is different and its count is greater, the neighboring value and its count replaces the previous value. And if the neighboring value is the same, the counts are added. To produce array 426, each value in array 424 is compared with the value offset by two below it, and the same rules are applied as above.

When array 426 is produced, the counts are compared to find the largest count, a count of three for the value b. The pair (b, 3) is produced in box 428 to indicate this result. Then, a new starting array 430 is produced in which locations that had the value b are inactive. To produce arrays 432, 434, and 436, the same rules are applied as above except that inactive locations are always treated as having a different value and a lower count. Again, the counts are compared to find the largest count, a count of 2 for the value c.
The pair (c, 2) is produced in box 438 to indicate this result.

The technique of Fig. 17 produces prominent values, but not necessarily in their order of frequency in the image. The count provided with each prominent value tends to indicate the size of the largest group of a given value rather than the total number of occurrences of the value. Nonetheless, in the simple example of Fig. 17, the technique successfully identifies the two most frequent values, b and c.
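The combining rule of Fig. 17 can be sketched as follows, with initial counts of one, inactive locations modeled as None, and offsets outside the row treated as inactive; these representational choices and the function names are illustrative.

    # Each location holds a (value, count) pair; at each step it is compared with
    # the pair at a power-of-two offset.
    def combine_prominent(pair, neighbor):
        if neighbor is None:                      # inactive neighbors never win
            return pair
        if pair is None:                          # an inactive location stays inactive
            return pair
        value, count = pair
        n_value, n_count = neighbor
        if n_value == value:
            return (value, count + n_count)       # same value: add the counts
        if n_count > count:
            return (n_value, n_count)             # different value with greater count wins
        return (value, count)                     # otherwise leave the value unchanged

    def prominent_step(pairs, offset):
        return [combine_prominent(p, pairs[i + offset] if i + offset < len(pairs) else None)
                for i, p in enumerate(pairs)]

    row = [(v, 1) for v in "abbba"]
    step2 = prominent_step(prominent_step(row, 1), 2)
    print(max((p for p in step2 if p), key=lambda p: p[1]))   # ('b', 3)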

E. An Implementation of Operations for Regions

The invention has been implemented on a Connection Machine from Thinking Machines Corporation, using the in-place implementation techniques described above to produce exhaustive hierarchies of data items.

1. Image Chunking

The implementation provides image analysis based on simple local analysis regions called chunks. Chunks are defined across a wide range of scales for a given image size, and chunks at each scale are positioned so that they overlap densely. The chunks at a given scale may include one chunk at every possible position within the image, providing an exhaustive set of chunks.

Fig. 18 shows general steps in an image analysis operation based on chunking. The step in box 470 stores an image to be analyzed, with each processing unit's local memory containing the respective pixel's value.
Label bits in each processing unit's local memory are initialized. The step in box 472 finds chunks that meet a validity criterion by producing an exhaustive hierarchy of data items. The data items can optionally be encoded by storing the highest valid level at each processing unit. The step in box 474 finds one or more attributes of valid chunks.

2. Finding Valid Chunks

For every location in the image in parallel, a hierarchical process can classify corresponding rectangular regions at a series of scales as (i) containing no black connected components; (ii) containing a single connected component; or (iii) possibly containing more than one connected component. Under this chunk validity criterion, a region known to contain a single connected component is referred to as valid. A region with no connected component is referred to as vacant. A region possibly containing more than one connected component is referred to as invalid. A valid region with no holes (no white components) is referred to as full. Vacancy, validity, and fullness can be established hierarchically based on the following five rules:

1. A white pixel is vacant and a black pixel is initially both valid and full.

2. The union of two adjacent vacant regions is vacant.

3. The union of two adjacent full regions is full.
4. The union of an adjacent vacant region and valid region is valid.
5. The union of two adjacent valid regions is valid if the components in the two subregions are connected.
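These five rules can be sketched as a combining function for the labels of two adjacent child regions; the dictionary representation, the function names, and the separate connected flag (which would come from the connectedness rules described next) are illustrative assumptions rather than details of the *Lisp implementation.

    # Each label is a dict with boolean entries "vacant", "valid", and "full";
    # "connected" says whether the children's components touch across the shared border.
    def combine_labels(c1, c2, connected):
        vacant = c1["vacant"] and c2["vacant"]                      # rule 2
        full = c1["full"] and c2["full"]                            # rule 3
        valid = ((c1["vacant"] and c2["valid"]) or                  # rule 4
                 (c1["valid"] and c2["vacant"]) or
                 (c1["valid"] and c2["valid"] and connected))       # rule 5
        return {"vacant": vacant, "valid": valid, "full": full}

    def pixel_label(is_black):
        # rule 1: a white pixel is vacant; a black pixel is initially valid and full
        return {"vacant": not is_black, "valid": is_black, "full": is_black}

    # Two adjacent black pixels whose components touch form a valid, full region:
    print(combine_labels(pixel_label(True), pixel_label(True), connected=True))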

Note that under these classification rules, some regions containing a single connected component may be labeled invalid. This uncertainty in the classification arises because the classification process is local, whereas connectivity is a global relation.

Whether or not the components in two adjacent regions are connected can in turn be expressed locally and hierarchically. The condition is met if any black pixel in one region is adjacent to a black pixel in the other region. For example, 4-adjacency can be the criterion for adjacency of black pixels. A
region with a black pixel in its right border that is adjacent to a black pixel outside the region is referred to as right-connected. A region with a black pixel in its bottom border that is adjacent to a black pixel outside the region is referred to as down-connected. Right-connectedness and down-connectedness can be established hierarchically based on the following rules:

1. A black pixel is right-connected if the neighboring pixel with offsets (1,0) is also black. A black pixel is down-connected if the neighboring pixel with offsets (0,1) is also black.

2. At odd levels:

(a) A region is right-connected if either of its children is right-connected.

(b) A region is down-connected if its bottom child is down-connected.

3. At even levels:

(a) A region is right-connected if its right child is right-connected.

(b) A region is down-connected if either of its children is down-connected.
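These rules can be sketched in the same style; the second child is the bottom child at odd levels and the right child at even levels, as in Fig. 5, and the dictionary representation is again only illustrative. Presumably the connected flag needed by the validity rule above is, at odd levels, the down-connectedness of the first (top) child and, at even levels, the right-connectedness of the first (left) child, since those are exactly the connections across the shared border; that reading is an inference from the definitions rather than a statement from the text.

    # Base case (rule 1), computed from the pixel and its right and lower neighbors.
    def pixel_connectedness(is_black, right_neighbor_black, lower_neighbor_black):
        return {"right": is_black and right_neighbor_black,
                "down": is_black and lower_neighbor_black}

    # Combining rule for a region from its first child c1 and second child c2.
    def combine_connectedness(c1, c2, level):
        if level % 2 == 1:                                    # odd level: children stacked vertically
            return {"right": c1["right"] or c2["right"],      # rule 2(a)
                    "down": c2["down"]}                       # rule 2(b): bottom child only
        else:                                                 # even level: children side by side
            return {"right": c2["right"],                     # rule 3(a): right child only
                    "down": c1["down"] or c2["down"]}         # rule 3(b)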

These rules lend themselves to exhaustive hierarchical operations like those described above. To process a square image of width N in a grid of the same width, each processing unit can produce the data items at all levels for a respective pixel. Let l = 0 at the base level of the hierarchy, with top level h = 2 log N + 1. The computation is applied to rectangular regions ranging in size from one pixel to the entire image. A processing unit at the current level l of the computation is denoted by P, and the region for which P produces a data item is denoted by R, with subregions r1 and r2 as described in the linking application. The processing unit at level l - 1 that produced a data item for subregion r2 of R is denoted by P2. P itself produced a data item for subregion r1 of R at level l - 1. At each step, communication between processing units is implemented by shifting the array. The offsets of P2 from P, (xo2, yo2), are (0, 2^((l-1)/2)) at odd levels and (2^(l/2 - 1), 0) at even levels.
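The offset formulas can be written directly as a small helper; this is a sketch, and the function name and the (x, y) return order are illustrative.

    def p2_offsets(level):
        # offsets (xo2, yo2) of P2 from P at the given level
        if level % 2 == 1:
            return (0, 2 ** ((level - 1) // 2))   # odd level: bottom child
        return (2 ** (level // 2 - 1), 0)         # even level: right child

    # Levels 1..6 give (0, 1), (1, 0), (0, 2), (2, 0), (0, 4), (4, 0).
    print([p2_offsets(l) for l in range(1, 7)])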

The operation in Fig. 19 finds, for every pixel at all hierarchical levels, the results of the valid, vacant, and full tests for the respective region. In addition, it encodes the results by recording the maximum level at which each test succeeds.

The step in box 500 begins by storing the image and initializing label bits and level bits for each processing unit. Label bits can include bits for the results of the valid, vacant, full, right-connected, and down-connected tests.
Level fields can include Lvalid, Lvacant, and Lfull.

The step in box 502 branches at each processing unit based on whether its pixel is white or black, to produce the appropriate data for the lowest level of the hierarchy. If white, the step in box 504 sets the vacant label to "on," sets the other labels to "off," and changes Lvacant to zero. If black, the step in box 506 sets the valid and full labels to "on," sets the vacant label to "off," and changes Lvalid and Lfull to zero. The step in box 508 then shifts the pixel values to provide data so that each processing unit can determine whether it has a black pixel that is right- or down-connected, with each label being set accordingly.

Operations relating to a topological class and local extent are described in detail in coassigned U.S. Patent No. 5,231,676, entitled "Hierarchical Operations on Border Attribute Data for Image Regions".

The step in box 510 then begins an iterative loop that produces each of the higher levels of the hierarchy. To begin, the step in box 512 obtains the level's offsets to P2, which are used by each processing unit to read the labels in its P2. Then, in box 514, each processing unit applies the validity criterion, saving the result in its valid label. The step in box 516 increases Lvalid to the current level if the criterion was met. The step in box 518 applies the other tests, saving the results in the labels and increasing Lvacant and Lfull if successful with the respective tests.

The operation in Fig. 19 produces a complete encoding of the results across scales from which it is easy to later recover results at any given scale.
Specifically, suppose Lvalid, Lvacant, and Lfull are the maximum valid, vacant, and full levels at a processing unit P. Suppose the null value for these measures is -1; that is, Lvalid = -1 at P if P does not find a valid region at any scale. Then, P would be labeled valid at level l if (i) l > Lvacant and (ii) l ≤ Lvalid. P would be labeled vacant at level l if l ≤ Lvacant. P would be labeled full at level l if l ≤ Lfull.
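Recovering the per-level labels from these encoded maxima is then a simple comparison, as the following sketch shows (the function name and dictionary form are illustrative):

    # -1 is the null value for each encoded maximum, as described above.
    def labels_at_level(l, l_valid, l_vacant, l_full):
        return {"valid": l_vacant < l <= l_valid,
                "vacant": l <= l_vacant,
                "full": l <= l_full}

    # A location whose region is vacant up to level 1 and valid from level 2 to level 4:
    print(labels_at_level(3, l_valid=4, l_vacant=1, l_full=-1))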


3. Orientation

The orientation of local segments of the boundary of the component in a valid region can also be measured hierarchically.

Eight configurations of pixels in a two pixel by two pixel neighborhood are defined in Table I. The top-left, top-right, bottom-left, and bottom-right positions of the pixels are indicated by tl, tr, bl, and br, respectively. The eight configurations are referred to as edge pairs.

    Quadrant:   tl  tr  bl  br
    N-edge       0   0   1   1
    NE-edge      1   0   1   1
    E-edge       1   0   1   0
    SE-edge      1   1   1   0
    S-edge       1   1   0   0
    SW-edge      1   1   0   1
    W-edge       0   1   0   1
    NW-edge      0   1   1   1

    TABLE I

Based on the number of instances of each of these configurations whose origin, which can be the top-left pixel, falls within a valid region, it is possible to estimate the straightness and, if appropriate, the orientation of a component region. The number of instances of a particular edge pair is

referred to as an edge pair count. Each edge pair count for a region can be obtained by summing the corresponding counts for its child regions.

It follows from the properties of digitized arcs that a 1-exit region with a perfectly straight boundary gives rise to either one or two positive edge-pair counts. There is a simple correspondence between the quadrants and the non-zero edge counts which leads to the following formulas for estimating the displacements of a straight edge in x and y, from which the orientation can be calculated using table lookup or an arctan procedure. The respective edge-pair counts are denoted by N, S, E, W, NE, NW, SE and SW, as indicated in Table I. In the first quadrant dx=N+NW, dy = W+NW. In the second quadrant, dx=S+SW, dy=W+SW. In the third quadrant, dx=S+SE, dy=E+SE. In the fourth quadrant, dx=N+NE, dy=E+NE.

The population of a quadrant is defined to be max(|dx|, |dy|). A pixel p may be assigned an orientation if for some valid region R with upper left corner at p (i) the maximum quadrant population in R exceeds some constant, the smallest x or y displacement for which a useful orientation estimate may be measured; and (ii) the non-maximal populations are all small enough relative to the maximum population (i.e., the edge in the region is straight enough).

The operation in Fig. 20 obtains an orientation at each pixel that satisfies all of these constraints. The step in Box 630 begins by finding, for each pixel, the edge pair counts for the two pixel by two pixel region of which the pixel is in the top-left corner. The edge pair counts will all be zero except one. This step also initializes an orientation field for each processing unit.
The step in Box 632 begins an iterative loop that continues from l = 1 to 2 log N. The step in Box 634 begins the iterative loop by obtaining the next level's P2 offsets and using them to read the edge pair counts from P2. The step in Box 636 obtains edge pair counts by summing the edge pair counts from the next lower level. The step in Box 638 obtains each quadrant's population using the formulas above, and then obtains the maximum population s of the quadrants. The step in Box 640 determines whether the current level is valid and whether s is greater than the constant described above. If so, and if the step in Box 642 determines that no previous orientation has been stored for the processing unit, the step in Box 644 obtains the region's orientation from the edge pair counts, such as by table lookup or an arctan procedure.
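The per-region part of this computation can be sketched as follows; the two threshold values, the use of math.atan2 as the arctan procedure, and the angle convention are illustrative assumptions rather than values taken from the text.

    import math

    # Quadrant displacements are formed from the edge-pair counts using the
    # formulas above, and the dominant quadrant's (dx, dy) is turned into an angle.
    def estimate_orientation(counts, min_population=4, straightness_ratio=0.75):
        n, s, e, w = counts["N"], counts["S"], counts["E"], counts["W"]
        ne, nw, se, sw = counts["NE"], counts["NW"], counts["SE"], counts["SW"]
        quadrants = {
            1: (n + nw, w + nw),   # first quadrant:  dx = N + NW, dy = W + NW
            2: (s + sw, w + sw),   # second quadrant: dx = S + SW, dy = W + SW
            3: (s + se, e + se),   # third quadrant:  dx = S + SE, dy = E + SE
            4: (n + ne, e + ne),   # fourth quadrant: dx = N + NE, dy = E + NE
        }
        populations = {q: max(abs(dx), abs(dy)) for q, (dx, dy) in quadrants.items()}
        best = max(populations, key=populations.get)
        if populations[best] <= min_population:
            return None            # displacement too small for a useful estimate
        if any(p > straightness_ratio * populations[best]
               for q, p in populations.items() if q != best):
            return None            # edge in the region is not straight enough
        dx, dy = quadrants[best]
        return math.degrees(math.atan2(dy, dx)) % 360

    print(estimate_orientation({"N": 4, "NW": 3, "W": 0, "S": 0,
                                "E": 0, "NE": 0, "SE": 0, "SW": 0}))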

4. Curvature

The curvature of local segments of the boundary of the component in a valid region can also be measured hierarchically.

The orientation change along the boundary in a 1-exit region may be modeled in terms of an abrupt change between two straight segments of the boundary. Suppose each quadrant is bisected into two octants. Assuming that both straight segments do not fall in the same octant, there is a simple correspondence between the octants and the non-zero edge counts which leads to the following formulas for estimating the displacements of a straight segment in x and y, from which the orientation of each segment can be calculated using table lookup or an arctan procedure.


The respective edge-pair counts are again denoted by N, S, E, W, NE, NW, SE and SW. Let the octants corresponding to a given quadrant q be denoted by qa and qb, where the a octant includes the horizontal and the b octant includes the vertical. In octant 1a, dx=N+NW, dy=NW. In octant 1b, dx=NW, dy=W+NW. In octant 2a, dx=S+SW, dy=SW. In octant 2b, dx=SW, dy=W+SW. In octant 3a, dx=S+SE, dy=SE. In octant 3b, dx=SE, dy=E+SE. In octant 4a, dx=N+NE, dy=NE. In octant 4b, dx=NE, dy=E+NE.

The population of an octant is defined to be max(|dx|, |dy|). A pixel p may be assigned an orientation in a given octant if for some valid region R with upper left corner at p, the population in R for that octant exceeds some constant, the smallest x or y displacement for which a useful orientation estimate may be measured. p may in addition be assigned an orientation difference with respect to R if two octants account for close to half of the edge pixels in R. A useful measure of orientation difference for estimating curvature is the acute orientation difference between two orientations θ1 and θ2, 0 ≤ θ1, θ2 < 360, which is defined as min(θh - θl, (θl + 180) - θh), where θh = max(mod(θ1, 180), mod(θ2, 180)) and θl = min(mod(θ1, 180), mod(θ2, 180)).
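In code, the acute orientation difference is a short function (a sketch; the name is illustrative):

    def acute_orientation_difference(theta1, theta2):
        # orientations in degrees, 0 <= theta < 360; the result is at most 90
        high = max(theta1 % 180, theta2 % 180)
        low = min(theta1 % 180, theta2 % 180)
        return min(high - low, (low + 180) - high)

    print(acute_orientation_difference(10, 350))   # 20
    print(acute_orientation_difference(95, 275))   # 0: opposite directions, same orientation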

An operation similar to that of Fig. 20 can be used to measure local orientation differences.

5. Thin Chunks

For some geometric measures and operations, highly elongated chunks are more effective than compact chunks such as the square or one-to-two rectangles encountered above. One pixel wide vertical and horizontal single-component regions across a range of lengths--referred to as slices--support more robust computation of local width of elongated image components than compact rectangular regions do. This is because where narrow components merge or are closely spaced, maximal single component regions are forced to be small, making a region-based measure of local width, such as area/perimeter, ineffective. The local width of an elongated figure at a given point is better approximated by the minimum of the lengths of vertical and horizontal slices of the figure that include that point.

Figs. 21(a)-21(c) illustrate the simplicity of the geometry of single connected black component slices by showing three vertical slices intersecting component 660. Edges in a slice correspond to border-edges in a compact chunk. A single-component-slice can have zero, one, or two edges. Slice 662 is completely within component 660 and therefore has zero edges. Slice 664 emerges from component 660 at one end and therefore has one edge. Slice 666 emerges from component 660 at both ends and therefore has two edges.

The number of exits in a single component slice is defined to be the number of edges. There is only one geometric property associated with a single-component-slice--length, which is the pixel count over the slice.
Slices with two edges support the computation of local width.


Slices, like compact rectangular chunks, map well to binary hierarchical processes in two dimensions operating on a rectangular image tessellation.
In general, the exhaustive hierarchical operations described herein can be used for slices with minor modifications. Operations for vertical slices are variants of the corresponding operations for compact chunks except that they idle at even levels. Operations for horizontal slices idle at odd levels.
To idle, in the case of in-place computations, means to do nothing at all, as illustrated in Fig. 14. In the case of computations on a BIJ or BIJ emulation, processing units at an idling level simply read the results computed by their child processing units and store them for use by their parent. By idling at even levels, the computation skips all horizontal communication in the BIJ, and vice versa.

6. Propagation

The above operations all fit within the general chunking operation described in relation to Fig. 18. Furthermore, although described and implemented separately, all of the above operations could be performed at once in a single upward hierarchical pass, provided that the memory available to each processing unit is sufficient to store its results. Additional operations are required to propagate data such as chunk attributes within an image.

Propagation operations can perform several functions. A chunk attribute can be propagated to the pixels of the chunk, referred to as labeling. All of the pixels of a component can be labeled, referred to as component labeling.
A set of processing units can be selected by propagating data.

7. Labeling

In general, the operations described above establish topological and geometric properties for regions at a series of scales and store the results for any given region at a particular image location--the upper left corner pixel of the region. "Label propagation" or "labeling" is the operation of transmitting such results from the pixels where they are originally stored to the other pixels in each respective region. Given such an operation, it is possible to label each black pixel with the topological class and geometric properties of some component to which it belongs. Subsequently, it is possible to select, or single out, all pixels labeled with similar values of a given property in parallel.

Label propagation is complicated by the fact that in an exhaustive hierarchical representation, each pixel belongs to many regions, not just one. Thus, a given black pixel will in general be part of many components defined by valid regions. Some criterion is therefore required for deciding from which of all these components a pixel should take its label for a given property. The term "saliency criterion" is used herein to refer to such a criterion, and the choice of a saliency criterion may be made differently for different properties. In the following paragraphs, saliency criteria and propagation operations are presented for several measures described above.

It is often useful for a pixel to take on the maximum (or minimum) of the attribute values associated with the components that include it, so that the maximum criterion and minimum criterion are both useful saliency criteria.

The maximum criterion is appropriate, for example, for topological class, local extent, and local width.

If there were independent storage at each level of the hierarchy, component properties could be propagated to the included pixels by a straightforward hierarchical process, downward through the hierarchy from parent to child.
For each valid region, the saliency criterion would be applied to choose between that region's value of the attribute A in question and the parent value. That is, A(c1) = max(A(p), A(c1)) and A(c2) = max(A(p), A(c2)).

In an exhaustive hierarchical process, however, care must be taken with processes that move information downwards in the hierarchy, because there is potential for data-clobbering. Unlike a tree, each processing unit has two parents--it is the first child of one parent and the second child of another. Therefore, if the processing units at level l of a computation modify a variable v in both children, a processing unit P modifying variable v in its second child P2 must respect the fact that v in P2 holds a result computed by some other processing unit for which P2 was the first child.

Fig. 22 shows a downward propagation operation that uses a maximum criterion. The step in Box 680 begins by initializing Result and Temp fields for each processing unit to a null value and by starting at the top level of the hierarchy, where l = h. Each processing unit also has a Value field that contains the value of some attribute at each pixel.

The step in Box 682 begins an iterative loop that operates on each level of the hierarchy down to l = 1. The step in Box 684 begins the iterative loop by obtaining the offsets to P2 on the next lower level and by using the offsets to read P2's Temp field. The step in Box 690 then determines whether the current level l is Lvalid. If so, the step in Box 692 applies the maximum criterion by setting the Temp field to the maximum of its previous value and the Value field from this processing unit, which can introduce new values into the operation at each level that is Lvalid for some processing unit. If the step in Box 694 determines that the Temp field has the null value, no propagation is performed, and the operation returns to the step in Box 682.
If Temp does not have the null value, the step in Box 700 branches based on whether the value of Temp from P2 has the null value. If so, the step in Box 702 writes the value in Temp to P2's Temp field. But if not, the step in Box 704 applies the m~ximum criterion by setting P2's Temp field to the m~ximum of its previous value and the value from Temp.

When all the levels of the hierarchy have been handled, the step in Box 710 determines whether this processing unit's respective pixel is black. If so, and if the step in Box 712 determines that Temp is not the null value, the step in Box 714 sets the Result field to the value in the Temp field.

The step in Box 710 could be omitted to label all pixels with the propagated value, not just the black pixels. The operation of Fig. 22 could also be changed to use a minimum criterion simply by modifying the steps in boxes 692 and 704 to apply the minimum criterion.

In some cases, it might be useful for a pixel to take its value for a given attribute from the component that maximizes or minimizes some other property, such as area or scale, which could be implemented by a simple extension of the above operation. For example, for orientation it is preferable for a pixel to take its value from the minimum-scale including component that has a non-null value. As described above, orientation is non-null for regions with a pixel count above some minimum required for getting a good orientation estimate. A simple modification of the operation in Fig. 22 above would propagate minimum scale geometric labels to the corresponding component pixels, using a non-null saliency criterion, simply eliminating the step in Box 704. A more complex criterion for orientation is that a pixel should take on the value associated with the largest including component that is straight enough, i.e., has a curvature below some threshold while maximizing pixel count.
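The downward propagation of Fig. 22 can be sketched as a sequential simulation over the encoded hierarchy; the array representation, the use of -1 as the null value, and the skipping of offsets that fall outside the array are assumptions made for illustration, not details of the *Lisp implementation.

    import numpy as np

    # value[y, x] is the attribute stored at the location's maximal valid level
    # l_valid[y, x] (-1 meaning none); -1 also plays the role of the null Temp value.
    def propagate_down_max(image, value, l_valid, h):
        rows, cols = image.shape
        temp = np.full((rows, cols), -1)
        for l in range(h, 0, -1):
            at_level = (l_valid == l)                      # introduce new values here
            temp[at_level] = np.maximum(temp[at_level], value[at_level])
            if l % 2 == 1:
                xo, yo = 0, 2 ** ((l - 1) // 2)            # bottom child
            else:
                xo, yo = 2 ** (l // 2 - 1), 0              # right child
            written = temp.copy()                          # read old values, write new ones
            for y in range(rows - yo):
                for x in range(cols - xo):
                    if temp[y, x] >= 0:                    # Temp is not null: propagate to P2
                        y2, x2 = y + yo, x + xo
                        written[y2, x2] = max(written[y2, x2], temp[y, x])
            temp = written
        return np.where(image.astype(bool), temp, -1)      # label only the black pixels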
8. Component Labeling

Component labeling, or coloring, provides the capacity for figure/ground separation. Coloring operations make it possible to pull out a particular curve or region in the image. Coloring can be viewed as an operation on a graph. In a conventional implementation of binary image component labeling, each black pixel may be treated as a node of the graph, and each four-adjacency between black pixels, in the four-connected case, may be treated as an edge. Coloring can then be performed iteratively by, at each step, coloring all uncolored nodes that share an edge with a colored node.
Note that at each step information is transmitted across each edge, in some sense. Therefore, coloring requires propagation of data.
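A minimal serial Python sketch of this conventional pixel-by-pixel scheme, assuming a 2-D list of black pixels and a set of seed coordinates, is shown below; each sweep colors uncolored black pixels that touch a colored pixel, so the number of sweeps grows with the distance from the seeds.

def color_from_seeds(black, seeds):
    """Iteratively color every black pixel 4-connected to a seed pixel."""
    h, w = len(black), len(black[0])
    colored = [[(y, x) in seeds for x in range(w)] for y in range(h)]
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if black[y][x] and not colored[y][x]:
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and colored[ny][nx]:
                            colored[y][x] = True
                            changed = True
                            break
    return colored

# Example: one seed pixel colors the whole connected component containing it.
black = [[1, 1, 0, 0],
         [1, 0, 0, 1],
         [1, 1, 0, 1]]
print(color_from_seeds(black, {(0, 0)}))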

Coloring image chunks can be more efficient than pixel-by-pixel coloring because fewer steps are required. Viewing chunk coloring as an operation on a graph, each chunk can be treated as a node in the graph and each intersection between the components of two chunks can be treated as an edge. It can be seen that the number of iterations for chunk coloring is essentially independent of scale and is determined mainly by the shape of the figure being colored.

In each iteration of chunk coloring, information is transmitted by a hierarchical process consisting of one upward pass and one downward pass.
During the upward pass, a valid parent becomes colored if either of its children is colored. During the downward pass, a colored parent colors both of its children. Given any set of initially colored locations, repeated iterations of this basic coloring operation will color all pixels connected to the starting pixels. The operation may be terminated when no new pixels, or not enough new pixels, are colored at a step.

Fig. 23 shows a chunk coloring operation that continues until no new pixels are colored. The step in Box 730 begins by initializing labels. In addition to a Colored label, each processing unit has Previous Label, Utemp and Dtemp labels in this implementation. The step in Box 730 initializes the Colored label of each seed pixel to be "on" and clears each Previous Label.

The step in Box 732 determines whether every processing unit's Colored label is equal to its Previous Label, which is the test for termination of the operation. If not, the step in Box 734 copies each processing unit's Colored label to its Previous Label, and prepares for an upward pass starting at l = 1.

The step in Box 740 begins an iterative loop that performs the upward hierarchical pass. The step in Box 742 begins by obtaining the next level's offsets to P2 and using the offsets to read P2's Utemp label. The step in Box 744 determines whether this level l is at or below the level indicated by LValid, in which case the step in Box 746 sets the Utemp label to the maximum of the previous Utemp label and the Utemp label read from P2, applying a maximum criterion.

When the upward pass completes the top level of the hierarchy, the step in Box 748 prepares for a downward pass starting at l = h. The step in Box 750 begins an iterative loop that performs the downward hierarchical pass. The step in Box 752 determines whether this level l is the level indicated by LValid, in which case the step in Box 754 introduces data from the upward pass by setting the processing unit's Dtemp label to the value of its Utemp label. Then the step in Box 756 obtains the next level's offsets to P2 and reads P2's Dtemp label. The step in Box 758 sets P2's Dtemp label to the maximum of Dtemp and the value from P2 read in Box 756.

When the downward pass is completed, the step in Box 760 sets each processing unit's Colored label to its Dtemp label before returning to the test in Box 732.

The operation in Fig. 23 could be modified to color white regions by using LVacant instead of LValid. The operation in Fig. 23 is not limited to coloring a single component, but could be performed to color a number of components, each including one or more of the seed pixels.

9. Selection

Selective processing refers to the preferential allocation of processing resources to a set of image components that have similar local geometric properties. If each pixel has been labeled with the geometric properties of an including component, selective processing may be achieved by first performing a selection operation to select a set of pixels labeled with similar values of the given property, and then performing the processing in question on the selected set of pixels. A pixel may be selected by setting a single bit label "on" in its processing unit.

If S denotes the single bit label indicating selection in each processing unit, and v denotes the value of an attribute with a range of interest between low limit l and high limit h, a selection criterion can be defined as follows:

S = 1 if l <= v < h, and S = 0 otherwise.

Selection criteria can be nested because the result of selection is the labeling of each pixel with a value that can be treated as an attribute. If S1 and S2 denote respective single bit labels of two selection criteria, the first selection criterion can be defined as above for a first attribute and the second selection criterion can be defined as follows for a second attribute:

S2 = 1 if S1 = 1 (i.e., 1 <= S1 <= 1) and l <= v < h, and S2 = 0 otherwise.
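A minimal Python sketch of these two criteria follows; the attribute names width and orientation and the limit values are illustrative assumptions, not part of the described implementation.

def select(values, low, high):
    """First criterion: S = 1 where low <= v < high."""
    return [[1 if low <= v < high else 0 for v in row] for row in values]

def select_nested(s1, values2, low, high):
    """Second criterion: S2 = 1 where S1 = 1 and low <= v2 < high."""
    return [[1 if s and low <= v < high else 0
             for s, v in zip(srow, vrow)]
            for srow, vrow in zip(s1, values2)]

# Example: select pixels whose width lies in [2, 5) and, among those, the
# pixels whose orientation lies in [80, 100).
width = [[1, 3], [4, 7]]
orientation = [[90, 85], [170, 95]]
s1 = select(width, 2, 5)
s2 = select_nested(s1, orientation, 80, 100)
print(s1)   # [[0, 1], [1, 0]]
print(s2)   # [[0, 1], [0, 0]]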

Fig. 24 illustrates general steps in applying a selection criterion to perform a selection operation. The step in Box 770 performs preliminary operations that produce the value of the attribute of interest at each pixel. This step can include operations like those discussed above in relation to attributes of valid chunks as well as labeling. Then, the step in Box 772 obtains the low and high limits of the range of interest, and propagates the limits to all of the processing units. The step in Box 774 uses the low and high limits to apply the selection criterion to the value at each processing unit, setting S to its appropriate value as a result.

The step in Box 772 can be performed in a number of ways, including deriving limits from the attribute value at a current focus pixel or from salient values of the global distribution of attribute values. These two approaches may be called locally-driven selection and globally-driven selection.

Locally-driven selection is grounded in the notion of a processing focus, a particular pixel in the image upon which certain image analysis operations may be applied. Dots in an image can be counted using a processing focus to examine each dot in turn with shift and mark operations as follows: while unseen dots remain---(i) shift to an unseen dot; (ii) mark this dot seen and add 1 to the count; (iii) repeat.
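A minimal serial sketch of this counting loop is given below; it assumes each dot has already been reduced to a single representative location, and it picks the next unseen dot with an arbitrary deterministic rule rather than an actual hierarchical focus shift.

def count_dots(dots):
    """dots: iterable of (y, x) dot locations; returns the number of dots."""
    unseen = set(dots)
    count = 0
    while unseen:               # while unseen dots remain
        focus = min(unseen)     # (i) shift the processing focus to an unseen dot
        unseen.discard(focus)   # (ii) mark this dot seen ...
        count += 1              #      ... and add 1 to the count
    return count                # (iii) repeat until none remain

print(count_dots([(0, 0), (3, 7), (5, 2)]))   # 3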


Locally-driven selection singles out components of the image similar to the one that includes the current focus pixel. That is, in locally-driven selection the range of selection is determined by the attribute's value at the focus pixel. For example, if the focus pixel is on a curve, selection could be based on a value defined at that location for an attribute such as topological class (e.g., 2-exit), width, orientation, and so on.

The current focus pixel can be labeled by a single bit F, which is "on" at the current focus pixel's processing unit and "off" at all other processing units.
Fig. 25 illustrates an operation that labels all pixels that are similar to the current focus pixel with respect to an attribute A by propagating a, the value of A at the current focus pixel, to every other pixel, and then doing a comparison at every pixel to determine whether to set the selection label S to "on." The propagation includes one upward pass and one downward pass through the hierarchy, with the upward pass relaying a to the upper left corner pixel of the image and the downward pass distributing a from there to every other pixel. The operation in Fig. 25 thus performs the steps in boxes 772 and 774 in Fig. 24.

The step in Box 800 initializes each processing unit, clearing S to "off" and, for the processing unit at which F is "on," setting a Result field to the value a. Result is set to a null value in all other processing units. The operation begins with l = 1, in preparation for the upward pass.

The step in Box 802 begins an iterative loop that performs the upward pass, with the step in Box 804 obtaining the next level's offsets to P2 and using them to read P2's Result field. The step in Box 806 determines whether P2's Result has the null value and, if not, the step in Box 808 sets the Result field to the value from P2's Result.

When the upward pass is completed, the step in Box 810 prepares for the downward pass with l = h. The step in Box 820 begins an iterative loop that performs the downward pass, with the step in Box 822 obtaining the next level's offsets to P2. The step in Box 824 determines whether the Result field has the null value and, if not, the step in Box 826 uses the offsets from Box 822 to set P2's Result field to the value from Result.

When both passes are completed, the step in Box 830 compares the value in Result with each processing unit's value for A. The comparison can test for equality or for a difference that is less than a defined amount. If Result, which has the value a in every processing unit, is sufficiently similar to the value of A, the step in Box 832 sets S "on" to indicate that the processing unit is selected.
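A minimal serial Python sketch of this locally-driven selection follows; the hierarchical broadcast of a is replaced by a direct read of the focus pixel's attribute value, and the tolerance is an illustrative assumption.

def select_like_focus(attr, focus, tolerance=0):
    """attr: 2-D list of attribute values; focus: (y, x) of the focus pixel.
    Returns S with S = 1 where the value differs from a by at most tolerance."""
    a = attr[focus[0]][focus[1]]
    return [[1 if abs(v - a) <= tolerance else 0 for v in row] for row in attr]

# Example: select pixels whose orientation is within 5 degrees of the focus pixel's.
orientations = [[90, 92, 10],
                [88,  0, 12]]
print(select_like_focus(orientations, (0, 0), tolerance=5))   # [[1, 1, 0], [1, 0, 0]]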

In contrast to locally-driven selection, globally-driven selection is selection based on a salient value of the global distribution of a local attribute, such as the maximum, minimum, or a prominent value. As explained more fully below, a prominent value is one whose population is large compared to most other values. Fig. 26 illustrates an operation that propagates a globally salient maximum value to the upper left corner of an image, after which it can be propagated downward as in Fig. 25.

The step in Box 850 initializes each processing unit, setting its Result field to its value for the attribute A. The level is set to l = 1 to begin an upward pass through the hierarchy.

The step in Box 852 begins an iterative loop that performs the upward pass, with the step in Box 854 obtaining the offsets to P2 and using the offsets to read P2's Result field. The step in Box 856 then applies the maximum criterion to set Result to the maximum of its previous value and P2's Result.
When all the levels have been handled, the operation can continue as in Box 810 in Fig. 25.

The operation in Fig. 26 could readily be modified to apply a minimum criterion instead of a maximum criterion.
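A minimal serial sketch of globally-driven selection follows; the limits are derived from the global maximum of the attribute, and the slack parameter is an illustrative assumption.

def global_max_limits(attr, slack):
    """Return (low, high) selection limits bracketing the global maximum."""
    m = max(v for row in attr for v in row if v is not None)
    return m - slack, m + 1

widths = [[2, 5, None], [7, 7, 3]]
low, high = global_max_limits(widths, slack=1)
print(low, high)   # 6 8
print([[1 if v is not None and low <= v < high else 0 for v in row]
       for row in widths])   # [[0, 0, 0], [1, 1, 0]]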

Various operations can be performed on selected pixels once the selection operation is completed. For example, a coloring operation can be performed to propagate values from the selected pixels to other pixels in the same components. When the coloring operation of Fig. 23 is applied to the result of selection, performance can be moderately improved by computing LValid for the set of selected pixels and passing it to the operation in Fig. 23. This is usually worthwhile since it takes less time than a single coloring step. A simple modification to the operation in Fig. 23 applies it selectively--a pixel's Colored label is set "on" in Box 760 only if the pixel is in the selected set.
10. Finding Prominent Values

As noted above, a prominent value for an attribute can be used in establishing a range of limits for a selection criterion. One example of a prominent value is a mode, which can be defined as the most common value of a distribution. A local hierarchical operation can establish the mode when the mode is sufficiently prominent, that is, when the population associated with the mode is large enough compared to the populations of other values. Because the result of such a process is not necessarily the true mode, we refer to it as the hierarchical mode. The basic idea is that given a region R with two subregions r1 and r2, the hierarchical mode in R is simply the subregion mode with the greater associated population.

The operation in Fig. 27 finds a hierarchical mode, storing the result in the upper left corner of the image so that, for example, it can be propagated downward in a selection operation as in Fig. 25. The step in Box 870 begins by initializing, which includes setting a Mode field in each processing unit to equal the value in the Value field for the attribute being analyzed. Also, if the value in the Value field is non-null, a Count field in each processing unit is set to one. The level is set to l = 1 to start an upward pass through the hierarchy.

The step in Box 872 begins an iterative loop, with the step in Box 874 obtaining the next level's P2 offsets and reading P2's Mode and Count fields. The step in Box 880 branches based on whether a processing unit's Mode field has a value sufficiently similar to P2's Mode field. The similarity measure used depends on the property in question, and may even be allowed to depend on the value of the property. For extent measures, such as width, more slop may be acceptable in comparing large values than small ones. For orientation, on the other hand, slop should not vary with value, but values must be allowed to wrap around.
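The following Python sketch illustrates, under assumed tolerance values, similarity measures of the two kinds just described: a relative tolerance for an extent measure such as width, and a fixed tolerance with wrap-around at 180 degrees for orientation.

def similar_extent(a, b, rel_tol=0.25):
    """Widths are similar if they differ by at most rel_tol of the larger value,
    so more slop is allowed between large values than between small ones."""
    return abs(a - b) <= rel_tol * max(a, b)

def similar_orientation(a, b, tol=15):
    """Orientations in degrees (0-179) are similar within a fixed tolerance,
    measured around the circle so that 178 and 2 count as close."""
    d = abs(a - b) % 180
    return min(d, 180 - d) <= tol

print(similar_extent(40, 48), similar_extent(4, 8))               # True False
print(similar_orientation(178, 2), similar_orientation(90, 120))  # True False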

If the Mode field values are sufficiently similar, the step in Box 882 sets the processing unit's Count field to be equal to the sum of its previous value and P2's Count field. But if the values in the Mode fields are not sufficiently similar, the step in Box 884 compares the Count fields to determine which Mode field value has a larger count. If P2's Count field is greater, the step in Box 886 changes the Mode and Count fields to have the values in P2's Mode and Count fields.

The steps in boxes 882 and 884 are biased to the mode in the first child, in the event of similar values of Mode or equal values of Count for the two children. This bias can be removed in Box 882 by setting Mode to the result of randomly choosing between its previous value and P2's value for Mode. The bias can be removed in Box 884 by introducing a step that explicitly handles the case of equal values of Count by randomly choosing which of the two values of Mode to accept.

The operation in Fig. 27 does not necessarily detect the true mode because incomplete information about the distribution is transmitted up the hierarchy at each step. The tendency of this operation to detect the true mode may be characterized in terms of the prominence of the mode because, as the population of the mode grows in relation to other populations, the likelihood increases that the mode will outnumber other values in a majority of regions at each scale.
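A minimal serial Python sketch of the merging rule follows, applied recursively to a one-dimensional list split in half at each level; the similarity predicate is supplied by the caller, and the example shows how the pooled count can understate the true population of the winning value.

def hierarchical_mode(values, similar=lambda a, b: a == b):
    """Return a (mode, count) pair for the hierarchical mode of values."""
    if len(values) == 1:
        v = values[0]
        return (v, 1) if v is not None else (None, 0)
    mid = len(values) // 2
    m1, c1 = hierarchical_mode(values[:mid], similar)
    m2, c2 = hierarchical_mode(values[mid:], similar)
    if m1 is None:
        return m2, c2
    if m2 is None:
        return m1, c1
    if similar(m1, m2):
        # Boxes 880-882: similar subregion modes pool their populations
        # (biased toward the first child's value).
        return m1, c1 + c2
    # Boxes 884-886: otherwise keep the subregion mode with the larger population.
    return (m1, c1) if c1 >= c2 else (m2, c2)

# The value 3 is the true mode (population 5) and wins here, but the pooled
# count of 2 shows that only partial population information survives.
print(hierarchical_mode([3, 3, 5, 3, 7, 3, 3, 5]))   # (3, 2)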


As described above, selection operations require the capacity to establish prominent values of the global distribution. The prominence of the hierarchical mode can be heuristically established by comparing its population to the total population---the hierarchical mode is prominent if it corresponds to a sufficiently large fraction of the total population.

Fig. 28 shows how prominent values of the distribution can be established in sequence by an iterative process which at each step detects a prominent remaining value and deselects values in some range around that value. Failure to detect the true mode at each step only affects the order in which these prominent peaks are found, and the modes are found in roughly descending order of prominence. The operation in Fig. 28 can be used in scene summarization, because the overall geometric organization of a scene is often largely captured in the global distributions of local topological and geometric attributes. The computational problem of summarization involves characterizing the prominent peaks in the global distribution of each local attribute for a given image.

The step in Box 900 obtains values for the attribute of interest, saving them in a Value field at each processing unit. Other fields are initialized, including a Seen label indicating whether a pixel's Value field is included in a mode that has been found and a Selected label indicating whether a pixel is in the set of the current mode. The step in Box 902 determines whether enough modes have been found, which can be done by keeping a count of modes found and comparing it with a limit. Similarly, the step in Box 904 determines whether enough pixels remain to find additional modes, which can be done by counting the number of pixels with Seen labels "off" and comparing the count with a limit. If enough pixels remain, the step in Box 906 obtains the next mode from among the pixels with Seen labels "off" by following the steps in Fig. 27.

When a mode has been found, the step in Box 910 can apply a similarity test to determine whether each pixel with its Seen label "off" has a Value field sufficiently similar to the current mode to be included in the mode. If so, the step in Box 912 sets the pixel's Selected label "on." The step in Box 914 sums the pixels with Selected labels "on" and stores the value of the mode and the sum of pixels. The step in Box 916 prepares for the next iteration by setting the Seen label equal to the Selected label for each of the pixels whose Seen label was "off" and by then clearing the Selected label.

When enough modes are found or insufficient pixels remain to find further modes, the step in Box 920 provides a list of pairs, each including a mode value and a pixel sum. This list indicates the prominent modes of the attribute of interest, providing a summary of the scene.
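A minimal serial Python sketch of the Fig. 28 loop follows; it uses the true mode of the remaining values as a stand-in for the hierarchical mode of Fig. 27, and the limits on the number of modes and on the remaining population are illustrative assumptions.

from collections import Counter

def prominent_values(values, similar, max_modes=4, min_remaining=2):
    """Return (mode value, pixel sum) pairs in roughly descending prominence."""
    remaining = list(values)
    modes = []
    while len(modes) < max_modes and len(remaining) >= min_remaining:
        mode, _ = Counter(remaining).most_common(1)[0]   # stand-in for Fig. 27
        selected = [v for v in remaining if similar(v, mode)]
        modes.append((mode, len(selected)))              # value and pixel sum
        remaining = [v for v in remaining if not similar(v, mode)]
    return modes

orientations = [0, 0, 2, 88, 90, 90, 92, 90, 45]
wraps = lambda a, b: min(abs(a - b) % 180, 180 - abs(a - b) % 180) <= 5
print(prominent_values(orientations, wraps))   # [(90, 5), (0, 3)]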

The values used in the operation of Fig. 28 could be near neighbor values obtained as described in coassigned U.S. Patent No. 5,305,395, entitled "Exhaustive Hierarchical Near Neighbor Operations on an Image".

F. Further Applications

The basic operations of scene analysis can be grounded effectively in simple local geometric units which are defined across a wide range of scales and in a densely overlapping manner at each scale. These image chunks are a critical missing link between traditional pointwise low-level visual representation---such as edge and texture images---and the high-level representation of scene objects and relations. They provide the representational power needed to support processes for controlling visual attention, separating figure from ground, and analyzing shape properties and spatial relations, and yet can be computed in fixed time even on simple, local parallel machines.

One main aspect of the representational power of image chunks is that they provide a useful peephole view of the objects in a scene, often obviating the cost and complications of extensive a priori segmentation into objects or other non-local spatial units. The other main aspect is that they can function as prefabricated spatial building blocks, leading to dramatic speedups in figure/ground separation rates--on simple local machines, chunks seem to provide the primary mechanism for exploiting parallelism in the context of inherently serial processes such as connected component labeling. Both of these aspects rely critically on the fact that image chunks are defined with dense overlap at multiple scales.

Various basic scene analysis operations can be performed on image chunks or representations derived from them. Schemes for selective analysis, summarization, visual indexing, and so on, can be implemented as operations on iconic (or image-like) representations. A local account of visual representation for scene analysis may eventually lead to a perspective in which higher-level vision, abstract spatial reasoning, and behaviors guided by vision are also understood as intrinsically spatial computations grounded directly in image-like representations.

It may be possible to extract the striking curves in a figure, and describe the background qualitatively, without first performing a full segmentation of the image. To distinguish between foreground and background curves in complicated scenes, more extended segments of the foreground curves must be described than the chunking method introduced here would be able to define directly. An efficient local iterative computation could label each point on an image curve with a measure of more global saliency in such situations. If globally salient structures are labeled in this way in a preprocessing stage, chunking operations could be applied selectively to the more salient subset of the image or, for that matter, to the less salient subset.

The techniques described above can be extended to handle color images, rather than binary images, when every object in the scene is uniformly colored, giving rise to a uniformly colored region in the image. (Color, in this context, includes uniform shades of grey.) Single-component region chunking could be extended to label a region as (i) being of a first color; (ii) having one connected component of a second color and a specified number of connected components, up to four, of the first color; or (iii) having components of more than two colors. The idea here is to let the components in a two-color region play the roles that black and white did in the original method, as long as one of the colors is present in a single connected component. The first and second colors are defined on a per-region basis; in other words, the roles of figure and ground are assigned locally by the chunking process. One simple and appealing way of managing the figure/ground assignments is to use a predefined partial order on the set of colors to guide them. Given any two colors, this ordering defines one of them to stand out in relation to the other.

A valid region could be defined to be a region with two colors, the first of which occurs in a single connected component, termed the foreground component. A vacant region is one with only the second color---it lacks a foreground component, so the first color is null. A region with more than two colors is invalid. The notion of a full region vanishes; every single-color region is treated as background. Vacancy and validity can be computed hierarchically based on the following rules:

1. Every pixel is initially vacant.

2. The union of two adjacent vacant regions is vacant if the colors in the two regions are the same. The union of two adjacent vacant regions is valid if the colors in the two regions are different. In that case, the color labels c1 and c2 are assigned arbitrarily to the two colors.

3. The union of a vacant region r1 and a valid region r2 is valid if the color c2 in r1 is the same as one of the colors in r2. The color assignment for the union is the same as the assignment for the valid subregion. (If the color c2 in r1 is the same as the foreground color c1 in r2, the color assignment in r1 must be reversed---c1 becomes c2 and c2 becomes nil---to make the color assignments in the union of the regions consistent with each subregion.)

4. The union of two adjacent valid regions is valid if the colors in r1 are the same as the colors in r2, and the foreground component in r1 is connected to the foreground component in r2.

The foreground components in two adjacent regions are connected if any foreground pixel in one region is adjacent to a foreground pixel of the same color in the other region. This relation may be established hierarchically by virtually the same process that was introduced earlier for the case of a binary image.
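A minimal Python sketch of rules 1 through 4 follows; each region is summarized as a tuple, the foreground-connectivity test is left as a caller-supplied predicate, and both the representation and the default connected placeholder are assumptions made for illustration.

def union_regions(r1, r2, connected=lambda a, b: True):
    """Combine two adjacent region summaries: ('vacant', bg),
    ('valid', fg, bg), or ('invalid',)."""
    if r1[0] == 'invalid' or r2[0] == 'invalid':
        return ('invalid',)
    if r1[0] == 'vacant' and r2[0] == 'vacant':
        # Rule 2: same color stays vacant; two different colors become valid,
        # with the foreground/background roles assigned arbitrarily.
        return r1 if r1[1] == r2[1] else ('valid', r1[1], r2[1])
    if r1[0] == 'vacant' or r2[0] == 'vacant':
        vac, val = (r1, r2) if r1[0] == 'vacant' else (r2, r1)
        # Rule 3: the vacant region's color must match one of the valid
        # region's colors; the valid subregion's assignment is kept.
        return val if vac[1] in (val[1], val[2]) else ('invalid',)
    # Rule 4: two valid regions with the same colors whose foreground
    # components are connected.
    if r1[1] == r2[1] and r1[2] == r2[2] and connected(r1, r2):
        return r1
    return ('invalid',)

print(union_regions(('vacant', 'white'), ('vacant', 'red')))          # ('valid', 'white', 'red')
print(union_regions(('valid', 'red', 'white'), ('vacant', 'white')))  # ('valid', 'red', 'white')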

The techniques described above generally apply to binary images. Most of the time, however, it is not feasible to transform an image of a natural scene into a binary image in which black regions correspond to the meaningful scene objects. The objects of interest may correspond, at least in part, to image regions that vary slowly or are uniform in one or more local properties, but the properties defining each object are not fixed across all objects in the scene, and may even depend on the task.

Sometimes, pixel properties such as intensity or color, or aggregates of these properties over collections of pixels, are adequate for defining meaningful image regions. This is the case, for example, when every object in the scene is uniformly colored. In such cases, the extension to color or discrete grey images described above may be applied. More often, however, the appropriate local properties for defining image regions correspond to properties of physical markings on surfaces in the scene, or to individual scene objects that appear small in the image. This is the case, for example, when a scene object is painted with polkadots, or when the object of interest is the shape formed on a tabletop by an arrangement of jellybeans.

Texture analysis is the computational problem of detecting the geometric counterparts in the image of these surface markings or scene items, along with the image regions they define. Even in the domain of binary graphic representations (diagrams and text), it is important to be able to define regions on the basis of texture. Regions are often explicitly rendered by halftone shading, cross-hatching, and so on. In running text, regions defined implicitly by the layout of the characters correspond to paragraphs and other text units useful in the reading process.

Single-connected-component chunking techniques may play a useful role in defining texture elements of an image---the geometric events corresponding to surface markings, etc. A textured image breaks up into many minute chunks, and these may provide the texture elements. As such, single-connected-component chunking cannot directly yield information about the texture regions themselves.

Some texture regions have a very salient boundary. This boundary can be a sequence of sharp local differences in the properties of texture elements, and it is immediately striking in the same sort of way that foreground curves can be. Such boundaries may easily be detected in parallel by comparing the properties of neighboring texture elements. If texture boundaries are found in this way in a preprocessing stage, the techniques described above could be applied selectively to these boundaries just as they could to other salient image curves.

Not all discriminable texture regions, however, have salient or easily detectable boundaries. Thus, a scheme for detecting chunks of texture regions directly would be useful in addition to the boundary-based method above. It is possible to imagine processes closely analogous to the single-connected-component chunking method described here that apply criteria other than connectivity for defining image components. A scheme for defining texture chunks would probably involve a criterion of near-uniformity or slow variation in specified properties. Such a scheme might in addition characterize the texture by specifying how exactly the properties of texture elements vary within a chunk.

Most scenes give the strong impression that one is immediately aware of some of the objects as units, not piecemeal. It is probably not possible to define a process that pulls out objects in a scene in fixed time by uniform, local parallel computations. It may be possible, however, to define an extremely fast serial, focused object extraction process, one that is minimally serial in the way that scene summarization and selective processing, for example, can be. There are, in that case, two computational issues to account for. The first issue is attention or indexing--the rapid direction of processing resources to the likely locations of important objects. The second issue is figure/ground separation--the process of rapidly making distinct that subset of the image corresponding to a particular scene object.

The figure/ground separation aspect of figural analysis is supported in part by single-connected-component chunking techniques, or their extensions to textured regions, since they provide an effective representation for the rapid component labeling required. For many purposes, proximity relations are involved in defining objects, in addition to connectivity relations; therefore, near neighbor techniques as described in the linking application may also be necessary.

G. Source Code Appendix

Appendix A is source code for implementing some of the features described above. The source code in Appendix A may differ in some respects from the above description, but execution of the source code in Appendix A on a Connection Machine provides substantially all the features described above. In some cases, the code has been optimized or includes additional features. Source code implementing other features is included in coassigned U.S. Patent No. 5,305,395, entitled "Exhaustive Hierarchical Near Neighbor Operations on an Image"; 5,231,676, entitled "Hierarchical Operations on Border Attribute Data for Image Regions"; 5,239,596, entitled "Labelling Pixels of an Image Based on Near Neighbor Attributes"; 5,255,354, entitled "Comparison of Image Shapes Based on Near Neighbor Data"; and U.S. Patent No. 5,193,125, entitled "Local Hierarchical Processing Focus Shift Within an Image".

In general, before executing the code in Appendix A, it is necessary to boot the Connection Machine so that it is configured as a square grid with a size that is an integer power of two. The size determines the image size that can be processed.

Some of the functions in Appendix A perform operations in relation to a specified quadrant, performing operations in relation to the fourth quadrant by default. Other functions that do not specify a quadrant operate in relation to the fourth quadrant.

The following are included in Appendix A:

The macro function QCHILD-REF!! is called to read a result from a child.
This macro is called with the field in which the result is stored, a value indicating the first or second child, a level of the hierarchy, and a quadrant.
If a result is to be read from the first child, the field in local memory is returned. If a result is to be read from the second child, the *Lisp function news-border!! is called to copy the result from the same field in another processing unit whose location is indicated by offsets. The function news-border!! automatically exploits the hypercube network of the Connection Machine at run time if it finds that the offsets are powers of two.

The set function QCHILD-REF!! is called to write a value to a child. This function is called with the field, a value indicating first or second child, a level, a quadrant, and with a value to be written. If a value is to be written to the first child, the *Lisp function *set is called to write the value to the field in local memory. If a value is to be written to the second child, the *Lisp function *news is called to write the value to the same field in another processing unit whose location is indicated by offsets.

The macro function CHILD-REF!! and the set function CHILD-REF!!
operate in the same manner as the macro function QCHILD-REF!! and the set function QCHILD-REF!!, except only in the fourth quadrant.

The functions VALID-LEVEL?? and VALID-LEVEL=?? decode maximum valid level encodings.

The functions PROJECT-MAX-VALID-VALUE and PROJECT-PARENT-VALUE-TO-NULL-CHILD propagate information downward in the hierarchy, resolving collisions.

The function LABEL-VALID-SCALE establishes maximal valid regions and encodes maximum valid levels.

The functions LABEL-LEFT-VALID-SCALE and LABEL-TOP-VALID-SCALE establish maximal valid slices.

Further functions establish properties of chunks and slices such as orientation and curvature.

The function SELECT-VALUE performs the basic selection operation.

The functions named READOUT-XXXX can be used to implement local or global selection.

The function READOUT-MODE provides the hierarchical mode.

The function LIST-MODES provides a list of hierarchical modes.

The function COLOR-FROM-SEED labels connected components.

H. Miscellaneous

The invention has been described in terms of operations on binary images, but could be applied to images of all types and, more generally, to bodies of data that map into arrays similar to those described.

The implementation described above divides an image into regions that each contain up to one connected component. The invention could also be implemented for regions that contain up to two or some higher number of connected components, or for regions of uniform color or texture.

Although described in terms of a Connection Machine, a SIMD machine with a powerful central controller, the invention could also be implemented on a machine with local distributed control, eliminating the need for a powerful central controller, for direct global communication of instructions, and for strict synchronous operation. This technique, in which all processing units would be simple and all communication would be local, might be more parsimonious than the standard SIMD model of the Connection Machine. The standard SIMD model is, however, a straightforward, familiar convention supported by a number of existing large scale parallel processors.

Furthermore, the invention could be implemented with special purpose hardware, including specialized integrated circuits. Some features described above in terms of software could then be provided through appropriate hardware.

The code in Appendix A invokes hypercube network communication by news-border!! calls. In some implementations, grid network communication might be sufficiently fast for power-of-two offset communication, in which case a loop could be performed for the number of steps necessary to move data by the appropriate power-of-two offset within the grid network.

In general, power-of-two offset communication could be implemented without a hypercube. If one processing unit is employed for each pixel of an image, for example, and if each processing unit has a bidirectional wire to each other processing unit with a power-of-two offset in each dimension, power-of-two offset communication is available. This arrangement can, of course, be mapped onto a hypercube just as a mesh can be embedded in a hypercube.

Although the invention has been described mainly in terms of image processing, the techniques of the invention might also be useful for other applications.

Although the invention has been described in relation to various implementations, together with modifications, variations and extensions thereof, other implementations, modifications, variations and extensions are within the scope of the invention. The invention is therefore not limited by the description contained herein or by the drawings, but only by the claims.

APPENDIX A

=======================================
=======================================
;;;-*- Syntax: Common-lisp; mode: lisp; package: REVERSE-GRAPHICS; base: 10 -*-
'defvaranalyze-dim 512.) defvar base-chunking-level 0) ,defvar top-chunking-level) (defun initialize-machine-parameters (&key (size analyze-dim)) (setq analyze-dim size top-chunking-level (round (* (log analyze-dim 2) 2)) base-chunking-level 0)) . . .
, "
(*proclaim '(ftype (function (t) (unsigned-byte 9)) child-2-xoff child-2-yoff)) (defsubst child-2-xoff (level) (if (oddp level) 0 (expt 2 (1- (truncate level 2))))) (defsubst child-2-yoff (level) (if (oddp level) (expt 2 (truncate (1- level) 2)) 0)) ;;
(*proclaim '(ftype (function (tttt) (field-pvar*))child-ref!! qchild-ref!!)) (defmacroCHlLD-REF!! (pvarchild-numberlevel &optional (border-val 0)) (case child-number (1 `,pvar) (2 `(news-border! ! ,pvar (! !f ,border-val) (child-2-xoff ,level) (child-2-yoff ,level))))) (defseff CHILD-REF! ! (pvar child-number level) (newval) `(case ,child-number (1 (*locally 'declare (type (field-pvar (pvar-length ,pvar)) ,pvar ,newval)) ignore ,level) ~*set ,pvar ,newval))) (2 (let ((c2x (child-2-xoff ,level)) (c2y (child-2-yoff ,level))) (*locally (declare (type fixnum c2x c2y)) (*unless (off-grid-border-relative-p! ! (!! c2x) (! ! c2y)) (*news ,newval ,pvar c2x c2y))))))) ;;
(*proclaim '(ftype (function (t t) fixnum) sign-quad-x-offset sign-quad-y-offset)) (defun sign-quad-x-offset (xoff quad) (select quad ((1 4) xoffl ((2 3) (- xoff)))) (defun sign-quad-y-offset (yoff quad) (select quad ((1 2~ (- yoff)) ((3 4) yoff))) (defmacro QCHILD-REF! ! (pvar child-number level quad &optional (border-val O)) (case child-number (1 `,pvar) (2 ` (news-border! !
,pvar (! !f ,border-val) (sig n-q uad-x-offset (ch i Id -2-xoff, level) ,quad) (sign-quad-y-offset (child-2-yoff ,level) ,quad))))) (defsetf QCHILD-REF! ! (pvar child-number level quad) (newval) ` (case ,child-number (1 (*locally 'declare (type (field-pvar (pvar-length ,pvar)) ,pvar ,newval)) ignore ,level ,quad) ~*set ,pvar ,newval))) (2 (let ((c2x (sign-quad-x-offset (child-2-xoff ,level) ,quad) ) (c2y (sign-quad-y-offset (child-2-yoff ,level) ,quad) )) (*locally (declare (type fixnum c2x c2y)) (*unless (off-grid-border-rerative-p! ! (! ! c2x) (! ! c2y)) (*news ,newval ,pvar c2x c2y))))))) ,, _ _ _ _ _ _ ===========================--===========
=

;; The functions below are utilities pertaining to, but not central to,the simulated BIJ.
. . .
", ", (defun layer-image-region-width (level &optional (primary-dim :y) (base-region-size (ash base-region-size (floor (if (eql primary-dim :x) (1 + level) level) 2))) (defun layer-image-region-height (level &optional (primary-dim :y) (base-region-size 1)) (ash base-region-size (floor (if (eql primary-dim :x) level (1 + level)) 2))) (defun layer-nbr-x (level &optional (primary-dim :y)) 204 ~ 922 ~ash 1 tfloor (if (eql primary-dim :x) (1 + level) level) 2))) (defun layer-nbr-y (level &optional (primary-dim :y)) (ash 1 (floor (if (eql primary-dim :x) level (1 + level)) 2))) (defsubst parent-width (level) (expt 2 (truncate level 2))) (defsubst parent-height (level) (expt 2 (truncate (1 + level) 2))) (defsubst parent-perimeter (I) (- (+ (* (parent-width 1) 2) (* (parent-height 1) 2)) 4)) (defun parent-diameter (I) (parent-width 1)) (defun parent-radiùs (I) (truncate (parent-width 1) 2)) (defun parent-level-from-width (w) (if (zerop w) O (* (ceiling (log w 2)) 2))) (defun parent-level-from-width* (w) (if (zerop w) O (~ (floor (log w 2)) 2))) (defun parent-level-from-radius (r) (parent-level-from-width (* r 2))) (defmacro ! !f (x) `(! ! (the fixnum ,x))) , "
,,, (*proclaim '(ftype (function (t t t) (field-pvar *)) cref ! !)) ;; this should use new-border! ! when it gets fixed.
(def macro cref ! ! (pva r xoff yoff) ` (*locally (declare (type (field-pvar (pvar-length ,pvar)) ,pvar) (type fixnum ,xoff ,yoff)) (nevvs-border! ! ,pvar (! ! O) ,xoff ,yoff))) (def~ etf CREF! ! (pvar xoff yoff) (value) `(* ocally (c eclare (type (field-pvar (pvar-length ,pvar)) ,pvar ,value) (type fixnum ,xoff ,yoff)) (*unless (off-grid-border-relative-p! ! (! ! ,xofffl (! ! ,yoffl) (*news ,value ,pvar ,xoff ,yoff)))) ;;;-*-Syntax: Common-lisp;mode: lisp; package: REVERSE-GRAPHICS; base: 10-*-. . .
, " This file contains propagation operations.
, "

' ' , "
;;; decoding max-valid-level at a given level ;; These functions are for decoding max-valid-level at a given level.
;; These functions are at the top of this file for reasons of ;; compilation order.
(*proclaim '(ftype (function (t t) boolean-pvar) valid-level?? valid-level = ??)) (*defun VALID-LEVEL?? (I max-valid-level) (declare (type (field-pvar (pvar-length max-valid-level)) max-valid-level)) (*let ((result nil ! !)) (declare (type boolean-pvar result)) (*set result (and! ! (~ = ! ! max-valid-level (! !f top-chunking-level)) (c = ! ! (! !f 1) max-valid-level))) result)) (*defun VALID-LEVEL=?? (I max-valid-level) (declare (type (field-pvar (pvar-length max-valid-level)) max-valid-level)) (*let ((result nil ! !)) (declare (type boolean-pvar result)) (*setresult(and!! (<=!! max-valid-level(!!ftop-chunking-level)) (= ! ! (! !f 1) max-valid-level))) result)) . . .
, ,, ;;; Propagating maximal value (*defun PROJECT-MAX-VALID-VALUE
(value result mvl &key dir active (combiner 'max! !) (top-level top-chunking-level)) (declare (type (field-pvar (pvar-length value)) value result) (type (field-pvar (pvar-length mvl)) mvl)) (*let ((ftemp (! ! 0)) (ptemp (! ! 0)) (nullval (! ! 0))) (declare (type (field-pvar (pvar-length value)) ftemp ptemp nullval)) (case combiner (min! ! (*set nullval (1 + ! ! (! !f (*max value)))))) '*setftemp (if!! (zerop!! value) nullval value)) :*set ptemp nullval) ~loop for I from top-level downto 1 do (*when (and!! (valid-level=?? I mvl) (=!! ptemp nullval)) (*set ptemp ftemp)) (when (or (null dir) (o (and (eq dir :x) (evenD 1)) :and (eq dir :y) (oddp l~))) (*let (ctemp (child-ref!! ptemp 21))) (dec are (type 'field-pvar (pvar-length value)) ctemp)) (setf (child-ref ! ptemp 21) (fpv (*funca I combiner ptemp ctemp)))))) 2~ ~ 922 (*if (--! ! ptemp nullval) (*set ptemp (! ! 0))) (if active (*set result (if ! ! (plusp! ! (fpvl active 1 )) ptemp (! ! 0))) (*set result ptemp)))) . . .
,,.
;;; Propagating minimum-scale non-null value (*defun PROJECT-PARENT-VALUE-TO-NULL-CHILD (value max-valid-level) (declare (type (field-pvar (pvar-length value)) value) (type (field-pvar (pvar-length max-valid-level)) max-valid-level)) (*let ((proj-value value)) (declare ~type (field-pvar (pvar-length value)) proj-value)) (loop for I from top-chunking-leve downto 1 do (*when (valid-level?? I max-valid- evel) (*let ((cvalue (child-ref! ! proj-value 2 I))) (declare (type (field-pvar (pvar-length value)) cvalue)) (setf (child-ref!! proj-value 21) (if! ! (zerop! ! cvalue) proj-value cvalue))))) (*set value proj-value))) ;; The remaining functions on this page are variations.
;; takes start-level rather than mvl (*defun PROJECT-PARENT-VALUE-TO-NULL-CHILD1 (value start-level &optional (q 4)) (declare (type (field-pvar (pvar-length value)) value)) (*let ((proj-value value)) (declare ~type (field-pvar (pvar-length value)) proj-value)) (loop for I from start-level downto 1 do (*let ((cvalue (qchild-ref!! proj-value 21 q))) (declare (type ffield-pvar (pvar-length value)) cvalue)) (setf (qchild-ref!! proj-value 21 q) (if!! (zerop!! cvalue) proj-value cvalue)))) (*set value proj-value))) ;; maybe generalize this to project-combine-value and pass combiner.
(*defun PROJECT-MAX-VALUE
(value start-level &optional (q 4)) (declare (type (field-pvar (pvar-length value)) value)) (*let ((proj-value value)) (declare ~type (field-pvar (pvar-length value)) proj-value)) (loop for I from start-level downto 1 do (*let ((cvalue (qchild-ref!! proj-value 21 q))) (declare (type (field-pvar (pvar^length value)) cvalue)) (setf (qchild-ref!! proj-value 21 q) (max! ! proi-value cvalue)))) (*set value proJ-value))) 204 ~ 922 ~ ...
. ,, . . .
,,, ;;;-*-Syntax: Common-lisp;mode: lisp; package: REVERSE-GRAPHICS; base: 10-*-... .
" This file contains chunking and labeling operations.
. . .
.,, ;; For all operations in this file, the arguments bitmap and ;; max-valid-level are pvars. bitmap is a 1-bit field pvar.
;; max-valid-level is a field with enough bits to hold (1 +
;; top-chunking-level), which is by convention the null value.
;; The top-level operations in this file are all named with the prefix
;; LABEL-. Usually they take two arguments---the input bitmap and the
;; result pvar. The caller is responsible for allocating these pvars
;; appropriately. There are some utilities at the very end of the file
;; for doing this allocation, but they are not central to this
;; file---allocation may be done in any manner appropriate to the
;; application.

;; Most of the top-level LABEL-XXX operations in this file (except for
;; the ones on the next page) are self-contained chunk-measure-and-label
;; processes. They involve computing maximal valid regions, computing
;; some measures on their contents, and then propagating the results to
;; pixels across each region.

;; The operations in this file are in-place implementations of the
;; chunking and labeling processes. They were, however, derived from
;; independent-storage implementations in such a way that getting back
;; to a working independent-storage version should be easy. The
;; conversion to the in-place form was done by commenting out lines of
;; code that involve only the first child, since these computations are
;; redundant in the in-place case. (Also, the child-ref!! macro was
;; modified so that for the first child it simply expands to its
;; argument pvar.)
,,, ;;; maximal valid regions (defun LABEL-VALID-SCALE-1 tbitmap max-valid-level &optional (start-level base-chunking-level) &aux (nullv (1 + top-chunking-level~)) (*locally (declare (type (field-pvar 1) bitmap) (type (field-pvar (pvar-length max-valid-level)) max-valid-level)) (*let (rexit?? dexit?? vacant?? valid??) 20~ 1 922 'declare (type (field-pvar 1) rexit?? dexit?? vacant?? valid??)) *set rexit?? (connections! ! bitmap: right)) *set dexit?? (connections! ! bitmap :down)) *set vacant?? (notO1 ! ! bitmap)) *set valid?? (if! ! ( = ! ! max-valid-level (! !f nullv)) (! ! O) bitmap)) loop for I from 1 to top-chunking-level do (*set valid??
(orO1 ! !
'andO1 !! 'child-ref!! valid?? 1 I) (child-ref!! vacant?? 21)) andO1 ! ! child-ref! ! vacant?? 1 I) (child-ref! ! valid?? 2 I)) ~andO1 ! ! ~child-ref! ! valid?? 1 I) (child-ref! ! valid?? 2 I) (child-ref!! (fpvl (if (oddp 1) dexit?? rexit??) 1)1 I)))) (*when (plusp!! valid??) (*set max-valid-level (! !f 1))) (when (oddp 1) (if (< = I start-level) (*set rexit?? (andO1 ! ! valid?? (child-ref ! ! valid?? 2 (1 + I))) dexit?? (andO1 ! ! valid?? (child-ref ! ! valid?? 2 ( + 1 2))) ) (*set rexit?? (orO1 !! (child-ref!! rexit?? 1 I) (child-ref!! rexit?? 21)) dexit?? (child-ref! ! dexit?? 21)))) (when (evenp 1) (if (< = I start-level) (*set dexit?? (andO1 ! ! valid?? (child-ref ! ! valid?? 2 (1 + I))) rexit?? (andO1 !! valid?? (child-ref!! valid?? 2 (+ 12))) ) (*set dexit?? (orO1 !! (child-ref!! dexit?? 1 I) (child-ref!! dexit?? 2 I)) rexit?? (child-ref! ! rexit?? 2 I)))) (*set vacant?? (andO1 ! ! (child-ref! ! vacant?? 1 I) (child-ref! ! vacant?? 2 I))))))) ;; assumes max-valid-level already initialized appropriately.
(defun LABEL-VALID-SCALE
(bitmap max-valid-level &optional (start-level base-chunking-level) &aux (nullv (1 + top-chunking-level))) (*locally (declare (type (field-pvar 1) bitmap) (type (field-pvar (pvar-length max-valid-level)) max-valid-level)) (*set max-valid-level (if ! ! (plusp! ! bitmap) (! ! O) (! !f nullv))) (label-valid-scale-1 bitmap max-valid-level start-level))) (defun LABEL-VACANT-SCALE
(bitmap max-vacant-level &aux (nullv (1 + top-chunking-level))) (*locally (declare (type (field-pvar 1) bitmap) (type (field-pvar (pvar-length max-vacant-level)) max-vacant-level)) (*let (vacant??) ~declare (type (field-pvar 1) vacant??)) *set vacant?? (notO1 ! ! bitmap)) *set max-vacant-level (if ! ! (plusp! ! vacant??) (! ! O) (! !f nullv))) ~loop for I from 1 to top-chunkinq-level do (*set vacant?? (andO1 ! ! (child-ref! ! vacant?? 1 I) (child-ref! ! vacant?? 2 I))) (*when (plusp!! vacant??)(*set max-vacant-level (!!f 1))))))) (defun LABEL-FULL-SCALE
(bitmap max-full-level &aux (nullv (1 + top-chunking-level))) (~locally 2 0 4 t 9 2 2 (declare (type (field-pvar 1) bitmap) (type (field-pvar (pvar-length max-full-level)) max-full-level)) (tlet ~full??) 'declare (type (field-pvar 1) full??)) *set full?? bi~map) *set max-full-level (if! ! (plusp! ! full??) (! ! O) (! !f nullv))) ;loop for I from 1 to top-chunkinq-level do (*set full?? (andO1 ! ! (child-ref! ! full?? 1 I) (child-ref! ! full?? 2 I))) (*when (plusp!! full??) (~set max-full-level (!!f 1))))))) . . .
;;; Thin chunks and local width.

;; The operations on this page are the thin chunk (slice) counterparts
;; of LABEL-VALID-SCALE. Slicing has been implemented here and below by
;; commenting out the computations at odd levels or at even levels, as
;; appropriate.
(*defun LABEL-LEFT-VALID-SCALE
(bitmap max-valid-level &optional (top-level top-chunking-level) &aux (nullv (1 + top-chunking-level))) (declare (type (field-pvar 1) bitmap) (type (~ield-pvar (pvar-length max-valid-level)) max-valid-level)) (*let ~dexit?? Yacant?? valid??) 'declare (type (field-pvar 1) dexit?? vacant?? valid??)) *set dexit?? (connections! ! bitmap :down)) *setvacant?? (notO1!! bitmap) valid?? bitmap) ~set max-valid-level (if!! (plusp!! valid??) (!! O) (!!f nullv))) ;loop for I from 1 to top-level do (when (oddp 1) (*set valid??
(orO1 ! !
'anc 01 ! ! 'c ~i c -ref! ! valid?? 1 I' (child-ref! ! vacant?? 2 I)) ,anc' 01 ! ! c 1i c -ref! ! vacant?? I) (child-ref! ! valid?? 2 1)) ~anc 01 ! ! ~c li c -ref! ! valic ?? 1 I, (child-ref! ! valid?? 21~
(child-ref!! dexit?? 1 Il))) (*set dexit?? (child-ref! ! dexit?? 2 I)) (*set vacant?? (andO1 ! ! (child-ref! ! vacant?? 1 I) (child-ref! ! vacant?? 2 I)))) (when (evenp 1) ~*set valid?? (child-ref! ! valid?? 1 I)) *sét dexit?? (child-ref! ! dexit?? 11)) ;*set vacan~?? (child-ref! ! vacant?? 1 I)) ,*when (plusp!! valid??) (*set max-valid-level (!!f 1)))))) (*defun LABEL-TOP-VALID-SCALE
(bitmap max-valid-level &optional (top-level top-chunking-level) &aux (nullv (1 + top-chunking-level))) (declare (type (field-pvar 1) bitmap~
(type (field-pvar (pvar-length max-valid-level)) max-valid-level)) , ., j 204 ~ 922 - ~ (*let (rexit?? vacant?? valid??) 'declare (type (field-pvar 1) rexit?? vacant?? valid??)) *set rexit?? (connections!! bitmap :right)) *set vacant?? (notOt! ! bitmap) valid?? bitmap) *set max-valid-level (if ! ! (plusp! ! valid??) (! ! O) (! !f nullv))) ,loop for I from 1 to top-level do (when (oddp 1) '*set valid?? (child-ref! ! valid?? 1 I)) :*set rexit?? (child-ref ! ! rexit?? 1 I)) ,*set vacant?? (child-ref ! ! vacant?? 1 I)) (when (evenp 1) (*set valid??
(orO1!!
'andO1 ! ! 'child-ref! ! valid?? 1 I) (child-ref ! ! vacant?? 2 I)) andO1 ! ! child-ref! ! vacant?? 1 I) (child-ref! ! valid?? 2 I)) ,andO1 ! ! ~child-ref! ! valid?? 1 I) (child-ref! ! valid?? 2 I) (child-ref!! rexit?? 11)))) (*set rexit?? (child-ref! ! rexit?? 2 I)) (*setvacant??(andO1!!(child-ref!!vacant7?1 I)(child-ref!!vacant??21)))) (*when (plusp! ! valid??) (*set max-valid-level (! !f 1)))))) ;; The operations on this page are the thin chunk (slice) counterparts ;; of SUM-BORDER-EDGES.
(*defun SUM-LEFT-EDGES
(bitmap max-valid-level max-bedge2-level &optional (top-level top-chunking-level) &aux (nullv (1 + top-chunking-level))) (declare (type (field-pvar 1) bitmap) (t,vpe ~field-pvar (pvar-length max-valid-level)) max-valid-level max-bedge2-(*set max-bedqe2-level (!!f nullv)) (*let (left top-left bot-left) (declare (~pe (field-pvar 16) left) (type ,field-pvar 1) top-left bot-left)) (*set left ,! ! O) top-left bitmap bot-left bitmap) (loop for from 1 to top-level do (when (oddp 1) (*set left ( + ! ! (child-ref ! ! left 1 I) (child-ref ! ! left 2 I) (not= O1 ! ! (child-ref ! ! bot-lef- 1 I) (child-ref ! ! top-left 2 I)))) (*settop-left(child-ref!! top-left 1 I) (*set bot-left (child-ref! ! bot-left 2 I), ) (when (evenp 1) '*set left 'child-ref ! ! left 11)) *settop- eft(child-ref!! top-left 1 I)) *set bot- eft (child-ref! ! bot-left 1 I)) ~*when (valid-level=?? I max-valid-level) (*case left (2 (*set max-bedge2-level (! !f 1)))))))) (*defun SUM-TOP-EDGES

- (bitmap max-valid-level max-bedge2-level &optional (top-level top-chunking-level) &aux (nullv (1 + top-chunking-level))) (declare (type ~field-pvar 1) bitmap~
(type ~field-pvar (pvar-length max-valid-level)) max-valid-level max-bedge2-level)) (*set max-bedqe2-level (!!f nullv)) (*let (top top-left top-right) (declare (type (field-pvar 16) top) (type /field-pvar 1 ) top-left top-right)) (*set top ,! ! 0) top-left bitmap top-right bitmap) (loop for from 1 to top-level do (when (oddp 1) '*set top (child-ref ! ! top 1 I)) *settop-left(child-ref!! top-left 1 I)) *set top-right (child-ref! ! top-right 1 I)) ~when (evenp 1' (*set top (+ ! ! 'child-ref! ! top 1 I) (child-ref! ! top 2 I) (not=0' !! (child-ref!! top-ri~ht 1 I) (child-ref!! top-left 21)))) (*settop-left(child-ref!! top-left 1 I)) (*set top-right (child-ref! ! top-riqht 2 I))) (*when (valid-level--?? I max-valid-level) (*case top (2 (*set max-bedge2-level (! !f 1)))))))) ;; The operations on this page are the thin chunk (slice) counterparts ;; of SUM-BORDER-PIXELS.
(*defun SUM-LEFT-PIXELS (bitma~ max-valid-level sum &optional (top-level top-chunking-level)) (declare (type (field- ~var 1' bitmap) 'type (field-pvar 'pvar- ength max-valid-level)) max-valid-level) *l t~vlpfe )field-pvar ;pvar- ength sum)) sum)) ',declare (type (field-pvar 16) left)) '*set sum bitmap left bitmap) loop for I from 1 to top-level do (when (oddp 1) (*set left (+ ! ! ~child-ref! ! left 1 I) (child-ref! ! left 2 I)))) (when (evenp 1) '*set left (child-ref!! left 1 I)) ~*when (valid-level?? I max-valid-level) (*set sum left))))) (*defun SUM-TOP-PIXELS (bitmap max-valid-level sum &optional (top-level top-chunking-level)) (declare ~tvpe ',field- ~var 1' bitmap) ',type (fielc-pvar ',pvar- ength max-valid-level)) max-valid-level) :type (fielc-pvar ,pvar- ength sum)) sum)) (declare (tvpe (field-pvar 16) top)) (*set sum bltmap top bitmap) - (loop for I from 1 to top-level do 2 0 4 t 9 2 2 (when (oddp 1) ~; '*set top (child-ref ! ! top 1 I)) ,when (eYenp 1) (*set top (+ ! ! (child-ref! ! top 1 I) (child-ref! ! top 2 I)))) (*when (valid-level?? I max-valid-level) (*set sum top))))) ;; This is the only LABEL-X~ op~ that makes use of ;; thin ~h~lnl~, (defun LABEL-WIDTH (bitmap width) (*locally (declare (type (field-pvar (pvar-length width)) width)) (*let ((count (! ! O)) (max-bedge2-level (! ! O)) (max-valid-level (! ! O))) (declare (type (field-pvar (pvar-length width)) count) (type (field-pvar 8) max-valid-level max-bedge2-level)) 'label-left-valid-scale bitmap max-valid-level) 'sum-left-edges bitmap max-valid-level max-bedge2-level) sum-left-pixels bitmap max-bedge2-level count) 'project-max-valid-value count count max-bedge2-level :dir :y) '*set width count) 'label-top-valid-scale bitmap max-valid-level) 'sum-top-edges bitmap max-valid-level max-bedge2-level) 'sum-top-pixels bitmap max-bedge2-level count) 'project-max-valid-value count count max-bedge2-level :dir :x) ,*cond ((and!! (plusp!! width) (plusp!! count)) ' (*setwidth (min!! width count))) (t!!
(*set width (max! ! width count)))) (*when (zerop!! bitmap) (*setwidth (!! O)))))) ... .
", ;;; orientation ;; l~ese opç~ti~n co~ t~g on~nt~tinn (defvar min-orientation-edqe-pixels 8.) (defvar max-orientation-edge-pixels 8.) (* proclaim '(ftype (function (t t t t t t t t~ (f ield-pva r 9)) li ne-orientation ! ! )) (*defun line-orientation!!
(top top-left left bot-left bot bot-right right top-right &optional (minpixels min-orientation-edge-pixels)) (declare (type (field-pvar 16) top left bot right top-left bot-left bot-right top-right)) (*let((q1sum1 (+!!toptop-leftleft))(q1sum3(+!! bot~ot-riqhtriqht)) (q2sum2 (+ ! ! left bot-left bot)) (q2sum4 (+ ! ! right top-right top~)) A-t1 ~i ` .

, ~

204 l 922 (declare (type (field-pvar 16) q1sum1 q1sum3 q2sum2 q2sum4)) (*let ((q1sum (max!! q1sum1 q1sum3)) (q2sum (max!! q2sum2 q2sum4)) (result (!!
o))) (declare (type (field-pvar 16) q1sum q2sum) (type (field-pvar 9) result)) (*if (~ ! ! q 1sum q2sum) (*if (> =!! q1sum1 q1sum3) (*when (> =!! q1sum1 (!!f minpixels)) (*set result (atand*!! (crcs!! (+ !! top-left left)) (crcs! ! ( + ! ! top top-left))))) (*when (> =!! q1sum3 (!!f minpixels)) (*set result (mod!! (atand*!! (-!! (+!! bot-rightright)) (-! ! ( + ! ! bot bot-right))) (!! 180.))))) (*if (> = ! ! q2sum2 q2sum4) (*when (> =!! q2sum2 (!!f minpixels)) (*set result (mod!! (atand*!! (crcs!! (+!! left bot-left)) (-! ! (+ ! ! bot-left bot))) (!! 180.)))) (*when (> = ! ! q2sum4 (! !f minpixels)) (*set result (mod! ! (atand*! ! (-! ! (+ ! ! right top-right)) (crcs! ! ( + ! ! top-right top))) (!! 180.)))))) result))) (*defun GET-ORIENTATIONS
(bitmap orientation &optional (q 4) (top-level (parent-level-from-width* max-orientation-edge-pixels))) (declare ~type (field-~var 1) bitmap) (type (field-pvar 'pvar-length orientation)) orientation)) (*let (top bot left rig lt top-left bot-left top-right bot-right) (declare (type (field-pvar 16) top reft bot right top-left bot-left bot-right top-right)) (*set orientation (! ! O)) (let ((sum-list (list top bot left right top-left bot-left top-right bot-right))) (*let ((up-edges (directed-edges! ! bitmap: up)) 'left-edges (directed-edges! ! bitmap: left~) down-edges(directed-edges!! bitmap :down)) right-edges(directed-edges!! bitmap :right))) (declare (type ffield-pvar 1) up-edges left-edges down-edges right-edges)) (*set top (andO1 ! ! up-edges (news-border! ! up-edges (! ! O) 1 O)) left (andO1 ! ! left-edges (news-border! ! left-edges (! ! O) 0 1 )) bot (andO1 ! ! down-edges (news-border! ! down-edges (! ! O) 1 O)) right (andO1 !! right-edges (news-border! ! right-edges (! ! O) 0 1)) top-left 'andO1 ! ! up-edges (news-border! ! left-edges (! ! O) 1 -1)) bot-left andO1 ! ! down-edges (news-border! ! left-edges (! ! 0)1 1 )) bot-righ . (andO1 ! ! rig ht-edges (news-border! ! down-edges (! ! 0)1 - 1 )) top-right(andO1!! right-edges(news-border!! up-edges(!! 0)1 1))) (loop for I from 1 to top-level do (loop for sum in sum-list do (*locally (declare ~type (field-pvar 16) sum)) (*set sum ~+ ! ! (qchild-ref!! sum 1 I q) (qchild-ref!! sum 2 I q)))))) (~set orientation (line-orientation ! !
top top-left left bot-left bot bot-right right top-right)))))) (defun LABEL-ORIENTATION (bitmap orientation 8Optional (q 4)) (get-orientations bitmap orientation q) (project-parent-value-to-null-child1 orientation (parent-level-from-width* max-orientation-edge-pixels) q)) (defun LABEL-ORIEN~ATIONS (bitmap orientations) (loop for orientation in orientations forq from 1 do (get-orientations bitmap orientation q) (project-parent-value-to-null-child 1 orientation (parent-level-from-width* max-orientation-edge-pixels) q))) . . .
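The quadrant logic in line-orientation!! reduces, in effect, to turning a handful of directed-edge counts into an angle modulo 180 degrees. As a loose, hedged stand-in (not a transcription of the *Lisp code), the Python sketch below uses the standard doubled-angle trick for axial data to get a comparable coarse estimate; the direction-vector table, the default threshold, and the name coarse_orientation are assumptions made for illustration.

# Illustrative sketch (not the patent's *Lisp code): turn the eight
# directed-edge counts accumulated over a region into a single coarse
# orientation estimate, expressed in degrees in [0, 180).
import math

# Unit vectors for the eight edge directions (x to the right, y downward).
DIRECTIONS = {
    "right": (1, 0), "bot_right": (1, 1), "bot": (0, 1), "bot_left": (-1, 1),
    "left": (-1, 0), "top_left": (-1, -1), "top": (0, -1), "top_right": (1, -1),
}

def coarse_orientation(counts, min_pixels=8):
    """counts maps direction name -> number of directed-edge pixels.
    Returns an angle in [0, 180), or None if there is too little evidence."""
    if sum(counts.values()) < min_pixels:
        return None
    # Orientation is only defined modulo 180 degrees, so accumulate the
    # doubled-angle vector (a standard trick for axial data).
    sx = sy = 0.0
    for name, (dx, dy) in DIRECTIONS.items():
        theta = math.atan2(dy, dx)
        w = counts.get(name, 0)
        sx += w * math.cos(2 * theta)
        sy += w * math.sin(2 * theta)
    return (math.degrees(math.atan2(sy, sx)) / 2.0) % 180.0

if __name__ == "__main__":
    c = dict(top=5, top_left=6, left=4, bot_left=1, bot=0,
             bot_right=0, right=1, top_right=2)
    print(round(coarse_orientation(c), 1))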
", ;;; curvature ;; These operations compute a measure related to Cu~

(defvar min-curvature-edge-pixels 8.) (defvar max-curvature-edge-pixels 16.) ;8 doesn't work, understandably.
(defvar cusp-ratio1 0.7) ; below this, terms can show up in result.
(defvar cusp-angl 60.) ; below this, high curvature interior ;points show up (defvar cusp-low 0.5) (defvar cusp-high 0.6) ;even 0.5 works (so does 0.7) (defmacro check-curvature! ! (qasum qbsum thresh minpixels low high) `(*let ((qabsum (+ ! ! ,qasum ,qbsum)~
(maxsum (max! ! ,qasum ,q ~sum))) (and!! (~=!! qabsum,thres~) '> = ! ! maxsum (round!! (*! ! qabsum (! !r ,low)))) < = ! ! maxsum (round! ! (*! ! qabsum (! !r ,high)))) ,> = ! ! (min ! ! ,qasum ,qbsum) (! !f ,minpixels))))) (*defun label-curvature-1 (top top-left left bot-left bot bot-riqht right top-right label &optional (minpixels min-cusp-edge-pixels) ~ratio1 cusp-ratio1) (low cusp-low) ~high cusp-high)) (declare (tvpe (field-pvar 8) top ~eft bot right top-left bot-left bot-riaht top-right) (type (field-pvar (pvar-length label)) label)~
(*set label (! ! 0)) (*let((q1asum(+!!toptop-left))(q1bsum(+!! lefttop-left)) .

204 ~ 922 'q3asum '+ !! bot bot-right)' (q3bsum (+ !! bot-right right)) 'q2asum + ! ! bot-left bot)) q2bsum (+ ! ! left bot-left)) ;q4asum '+ ! ! top-right top)t (q4bsum ( + ! ! right top-right))) (declare (~ype (field-pvar 8) q1asum q1bsum q2asum q2bsum q3asum q3bsum q4asum q4bsum)) (*let((q1atheta(atand*!! (crcs!! top-left)(crcs!! (+!! toptop-left)))) 'q1btheta (atand*!! (crcs!! (+!! top-left left)) (crcs!! top-left))) 'q2atheta (atand*!! (crcs!! bot-left) (-!! (+!! bot-left bot)')) 'q2btheta (atand*! ! (crcs! ! (+ !! left bot-left)) (-! ! bot-left )) q3atheta (atand* ! ! (-! ! bot-right) (-! ! ( + ! ! bot bot-right), )) 'q3btheta (atand * ! ! - ! ! ( + ! ! bot-rig ht right)) (- ! ! bot-rig ht))) q4atheta (atand*!! '-!! top-right) (crcs!! (+!! top-righttop)))) ~q4btheta (atand*!! ,-!! (+!! righttop-right)) (crcs!! top-right)))) (declare (type (field-pvar 9) q1atheta q1btheta q2atheta q2btheta q3atheta q3btheta q4atheta q4btheta)) (*let ((thresh (round! ! (* ! ! (+ ! ! top top-left left bot-left bot bot-right right top-right) (! ! r ratio1 ))))) (declare (type (field-pvar 16) thresh)) (loopforsum1 in (listq1asum q1bsum q2asum q2bsum q3asum q3bsum q4asum q4bsum) for theta1 in (list q 1 atheta q 1 btheta q2atheta q2btheta q3atheta q3btheta q4atheta q4btheta) do (loop for sum2 in (list q 1 asum q 1 bsum q2asum q2bsum q3asum q3bsum q4asum q4bsum) for theta2 in (list q 1atheta q 1 btheta q2atheta q2btheta q3atheta q3btheta q4atheta q4btheta) do (unless (eql sum1 sum2) (*when (check-curvature! ! sum1 sum2 thresh minpixels low high) (*set label (orientation-difference-180!! theta1 theta2)))))))))) (*defun GET-CURVATURE
(pixel?? curvature &optional (q 4) (top-level (parent-level-from-width* max-curvature-edge-pixels))) (declare (type (field-~var 1) pixel??) (type (field-pvar 'pvar-length curvature)) curvature)) (*let ~top bot left riq lt top-left bot-left top-right bot-right) (declare (type (field-pvar 8) top ~eft bot right top-left bot-left bot-right top-right)) (*set curvature (! ! O)) (let ((sum-list (list top bot left right top-left bot-left top-right bot-right))) (*let((up-edges(directed-edges!! pixel?? :up)) 'left-edges (directed-edges!! pixel?? :left)) 'down-edges(directed-edges!! pixel?? :down)) ,right-edges(directed-edges!! pixel?? :right))) (declare (type (field-pvar 1) up-edges left-edges down-edges right-edges)) (*set top (andO1 ! ! up-edges (news-border! ! up-edges (! ! O) 1 O)) left (andO1 ! ! left-edges (news-border! ! left-edges (! ! O) 0 1 )) bot ~andO1 ! ! down-edges (news-border! ! down-edges (! ! O) 1 O)) righ-(andO1!! right-edqes(news-border!! right-edges(!! O)0 1)) top- eft(andO1!! up-edges(news-border!! left-edges(!! 0)1-1)) bot- eft (andO1 ! ! down-edqes (news-border! ! left-edges (! ! O) 1 1)) bot-right (andO1 ! ! right-edges (news-border! ! down-edges (! ! O) 1 -1)) 20~ 1 922 top-right(andO1!! right-ed3es(news-border!! up-edges(!! 0)1 1))) (loop for I from 1 to top-level o (loop for sum in sum-list do (*locally (declare (type (field-pvar 8) sum)) (*set sum (+ ! ! (qchild-ref! ! sum 1 I q) (qchild-ref! ! sum 2 I q)))))) (label-curvature-1 top top-left left bot-left bot bot-right right top-right curvature))))) (defun LA8EL-CURVATURES (pixel?? curvatures) (loop for curvature in curvatures forqfrom 1 to4do (get-curvature pixel?? curvature q) (project-parent-value-to-null-child 1 curvature (parent-level-from-width* max-curvature-edge-pixels) q))) ... .
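The cusp test in check-curvature!! and the pairwise comparison in label-curvature-1 amount to accepting a pair of edge-vector estimates only when the pair dominates the region, both members have enough pixels, and the split between them is balanced, and then reporting their orientation difference modulo 180 degrees. The Python sketch below is loosely modeled on that logic; the scalar (rather than per-pixel) formulation and the helper names are illustrative assumptions.

# Illustrative sketch (not the patent's *Lisp code) of the cusp-style
# curvature test: two edge-vector estimates from opposite sides of a
# region are compared, and the orientation difference (mod 180) is
# reported when both sides have enough supporting pixels.

def orientation_difference_180(theta1, theta2):
    """Smallest difference between two orientations given modulo 180 degrees."""
    d = abs(theta1 - theta2) % 180.0
    return min(d, 180.0 - d)

def curvature_label(sum_a, theta_a, sum_b, theta_b,
                    total, min_pixels=8, ratio=0.7, low=0.5, high=0.6):
    """Return an orientation-difference label, or 0 when the evidence does
    not pass the balance/threshold checks (loosely modeled on check-curvature!!)."""
    pair = sum_a + sum_b
    if pair < ratio * total:
        return 0                          # the pair does not dominate the region
    if min(sum_a, sum_b) < min_pixels:
        return 0                          # one side has too little support
    if not (low * pair <= max(sum_a, sum_b) <= high * pair):
        return 0                          # the two sides are too unbalanced
    return orientation_difference_180(theta_a, theta_b)

if __name__ == "__main__":
    print(curvature_label(10, 30.0, 9, 115.0, total=22))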
, "

;;; -*- Syntax: Common-lisp; mode: lisp; package: REVERSE-GRAPHICS; base: 10 -*-

;; This file contains operations that are applications of image chunking and labeling.
selected, active, focus, and value are conventional names often used below as pvar parameters and internal pvars of functions. They should be interpreted as pvars unless otherwise indicated. The suffixes -pvar and -scalar are used when there is any ambiguity.
selected, active, and focus are always 1-bit pvars. value is always a field pvar with enough bits to hold the relevant result.

Selection operations
All selection processes involve two stages. The first stage establishes a salient value to be selected, on either a local or global basis. The second stage distributes this value to all pixels, where a local selection decision is made.
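As a rough illustration of this two-stage pattern outside *Lisp (the helper name select_value, the NumPy formulation, and the default tolerance are assumptions, not part of the original code), the sketch below takes an already-established salient value, broadcasts it, and lets every location make its own accept/reject decision.

# Illustrative sketch (not the patent's *Lisp code) of the two-stage
# selection pattern: first establish a salient scalar value, then
# broadcast it so every pixel can make a local accept/reject decision.
import numpy as np

def select_value(value, value_map, slop=0.25, active=None):
    """Stage 2: mark locations whose stored value is within a tolerance of
    the broadcast scalar `value`. `slop` is a fraction of `value` (the
    ':variable' style of tolerance); `active` optionally restricts the test."""
    low = value - round(value * slop)
    high = value + round(value * slop)
    selected = (value_map >= low) & (value_map <= high)
    if active is not None:
        selected &= active.astype(bool)
    return selected.astype(np.uint8)

if __name__ == "__main__":
    widths = np.array([[3, 3, 9], [4, 10, 3]])
    salient = 3                      # stage 1: e.g. the most common width
    print(select_value(salient, widths, slop=0.25))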
This hierarchical operation will distribute a value from the upper-left corner of the image to all pixels. It distributes conditionally to a given set of locations defined by cond-pvar.

(*defun DISTRIBUTE-VALUE-honest (value value-pvar cond-pvar)
  (declare (type (field-pvar (pvar-length value-pvar)) value-pvar)
           (type (field-pvar 1) cond-pvar)
           (type fixnum value))
  (*let ((projpvar (!! 0)))
    (declare (type (field-pvar (pvar-length value-pvar)) projpvar))
    (setf (pref-grid projpvar 0 0) value)
    (loop for l from top-chunking-level downto 1 do
      (setf (child-ref!! projpvar 2 l)
            (max!! projpvar (child-ref!! projpvar 2 l))))
    (*set value-pvar (if!! (plusp!! cond-pvar) projpvar (!! 0)))))
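Before the more direct version given next, here is a rough Python/NumPy rendering (not part of the original appendix) of what the honest hierarchical pass above does: the value planted at the upper-left location is copied to partners at successively halved power-of-two offsets until every location holds it, and a condition mask then gates where it is kept. The name distribute_value and the explicit levels argument are illustrative.

# Illustrative sketch (not the patent's *Lisp code) of the "honest"
# hierarchical distribution: a value placed at location 0 is copied to a
# partner at a halving power-of-two offset on each downward step until
# every location holds it; a conditional mask then gates where it lands.
import numpy as np

def distribute_value(value, cond, levels):
    """Return an array the size of `cond` in which locations with cond != 0
    receive `value` after a top-down, power-of-two spreading pass."""
    n = cond.size
    proj = np.zeros(n, dtype=int)
    proj[0] = value
    offset = 2 ** (levels - 1)
    while offset >= 1:
        proj[offset:] = np.maximum(proj[offset:], proj[:-offset])  # copy to partner
        offset //= 2
    return np.where(cond.astype(bool), proj, 0)

if __name__ == "__main__":
    cond = np.array([1, 0, 1, 1, 0, 1, 1, 1])
    print(distribute_value(7, cond, levels=3))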
;; The preceding *Lisp operation, DISTRIBUTE-VALUE-honest, is equivalent to the
;; following, which is slightly more efficient and simply makes a direct call
;; to a primitive *LISP operation, the !! function.

(*defun DISTRIBUTE-VALUE (value value-pvar cond-pvar)
  (declare (type (field-pvar (pvar-length value-pvar)) value-pvar)
           (type (field-pvar 1) cond-pvar)
           (type fixnum value))
  (*set value-pvar (if!! (plusp!! cond-pvar) (!! value) (!! 0))))

;; The following operation calls DISTRIBUTE-VALUE on its scalar argument
;; VALUE.
;; If the value of the method argument is :fixed, slop and high-slop are
;; interpreted as allowed ranges below and above value, respectively.
;; If the value of the method argument is :variable (the default), slop
;; and high-slop are interpreted as fractions which, when multiplied by
;; value, give the allowed ranges below and above value, respectively.
;; In either case, high-slop defaults to slop.
;; Locations selected by this function are labeled 1 in the argument pvar
;; SELECTED. Locations not selected are labeled 0. Given the optional
;; keyword argument ACTIVE, this function implements nested selection
;; (i.e., it subselects locations that are labeled 1 in ACTIVE).
(defvar default-similarity-scale-factor 0.25)

(defun SELECT-VALUE (value value-pvar selected
                     &key active (method :variable) (diff-fun '-!!)
                          (slop default-similarity-scale-factor) high-slop (null-value 0.))
  (*locally
    (declare (type (field-pvar (pvar-length value-pvar)) value-pvar)
             (type (field-pvar 1) selected))
    (*let ((temp (!! 0)))
      (declare (type (field-pvar (pvar-length value-pvar)) temp))
      (distribute-value value temp (or active (!! 1)))
      (*set selected (!! 0))
      (let ((lslop (case method (:fixed slop) (t (round (* value slop)))))
            (hslop (case method (:fixed (or high-slop slop))
                     (t (round (* value (or high-slop slop)))))))
        (*let ((diff (!! 0)))
          (declare (type (signed-pvar 17) diff))
          (*set diff (*funcall diff-fun temp value-pvar))
          (*unless (=!! temp (!!f null-value))
            (*when (or!! (and!! (not!! (minusp!! diff)) (<=!! diff (!!f lslop)))
                         (and!! (minusp!! diff) (<=!! (abs!! diff) (!!f hslop))))
              (*set selected (!! 1)))))))))

;;; Readout operations extract a scalar value from a pvar.
;; Read out the value in the upper left corner location of the image.
(defun READOUT-HOME (pvar)
  (pref pvar (cube-from-grid-address 0 0)))

;; The pvar focus is 1 at exactly one location, and zero elsewhere.
;; This function implements the upward pass, and then reads the
;; upper-left corner value into the front end processor.

(defun READOUT-FOCUS-honest (value focus)
  (*locally
    (declare (type (field-pvar (pvar-length value)) value)
             (type (field-pvar 1) focus))
    (*let ((temp (!! 0)) (result (!! 0)))
      (declare (type (field-pvar (pvar-length value)) result)
               (type (field-pvar 1) temp))
      (*set temp focus result value)
      (loop for i from 1 to top-chunking-level do
        (*cond ;((plusp!! (child-ref!! temp 1 i))
               ; (*set temp (!! 1) result (child-ref!! result 1 i)))
               ((plusp!! (child-ref!! temp 2 i))
                (*set temp (!! 1) result (child-ref!! result 2 i)))))
      (pref-grid result 0 0))))

;; The following is equivalent to READOUT-FOCUS-honest and slightly more
;; efficient; it directly calls the *LISP operation *max.

(defun READOUT-FOCUS (value focus)
  (*locally
    (declare (type (field-pvar (pvar-length value)) value)
             (type (field-pvar 1) focus))
    (*let ((result (!! 0)))
      (declare (type (field-pvar (pvar-length value)) result))
      (*when (plusp!! focus)
        (*set result value))
      (or (*max result) 0))))
(defun READOUT-FOCUS-LIST (value-pvars focus)
  (loop for value-pvar in value-pvars
        collect (readout-focus value-pvar focus)))

(defmacro PARENT<-CHILD-1+CHILD-2 (pvar combination-function)
  `(*locally
     (declare (type (field-pvar (pvar-length ,pvar)) ,pvar))
     (loop for l from 1 to top-chunking-level do
       (*set ,pvar (,combination-function (child-ref!! ,pvar 1 l)
                                          (child-ref!! ,pvar 2 l))))))

;; This macro operation is a utility for defining functions that read
;; out a globally salient value (max, min) or a global aggregate value
;; (sum, or, and).

(defmacro READOUT-FUNCTION (pvar fun active)
  `(*locally
     (declare (type (field-pvar (pvar-length ,pvar)) ,pvar)
              (type (field-pvar 1) ,active))
     (*let (result)
       (declare (type (field-pvar 32) result))
       (*set result (if!! (plusp!! ,active) ,pvar (!! 0)))
       (parent<-child-1+child-2 result ,fun)
       (readout-home result))))

(defun READOUT-SUM (value-pvar active)
  (readout-function value-pvar +!! active))

(defun READOUT-MIN (value-pvar active)
  (*locally (readout-function (unzero!! value-pvar) min!! active)))

(defun READOUT-MAX (value-pvar active)
  (readout-function value-pvar max!! active))
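A rough Python/NumPy sketch (not part of the original appendix) of the readout pattern used by READOUT-FUNCTION and PARENT<-CHILD-1+CHILD-2: inactive locations are first replaced by an identity value, each location is then repeatedly combined with a partner at a doubling power-of-two offset, and the single value left at location 0 is read out. The function name readout and the identity argument are illustrative assumptions.

# Illustrative sketch (not the patent's *Lisp code) of the readout idea:
# repeatedly combine each value with a partner at a power-of-two offset
# until location 0 holds the global aggregate, then read that one cell.
import numpy as np

def readout(values, active, combine=np.add, identity=0):
    """Reduce `values` over the locations where `active` is nonzero, using
    about log2(n) pairwise combining steps, and return the scalar at location 0.
    `identity` must be neutral for `combine` (0 for add; fine for max of
    nonnegative data)."""
    work = np.where(active.astype(bool), values, identity).astype(values.dtype)
    offset = 1
    n = work.size
    while offset < n:
        shifted = np.full_like(work, identity)
        shifted[:-offset] = work[offset:]          # partner value
        work = combine(work, shifted)
        offset *= 2
    return work.flat[0]

if __name__ == "__main__":
    v = np.array([5, 2, 9, 4, 7, 1, 3, 8])
    a = np.array([1, 1, 0, 1, 1, 0, 1, 1])
    print(readout(v, a))                            # sum of active values
    print(readout(v, a, combine=np.maximum))        # max of active values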
;; This operation establishes the hierarchical mode.

(defun FLOAT-MODE
    (mode count slop method diff-fun sum-fun
     &key (null-value 0) (stop-level top-chunking-level))
  (*locally
    (declare (type (field-pvar (pvar-length mode)) mode)
             (type (field-pvar (pvar-length count)) count))
    (*unless (=!! mode (!!f null-value))
      (*set count (!! 1)))
    (loop for l from 1 to stop-level do
      (*let (c2mode c2count)
        (declare (type (field-pvar (pvar-length mode)) c2mode)
                 (type (field-pvar (pvar-length count)) c2count))
        (*set c2mode (child-ref!! mode 2 l)
              c2count (child-ref!! count 2 l))
        (*if (zerop!! count)
             (*set mode c2mode count c2count)
             (*if (sloppy=!! mode c2mode slop method diff-fun sum-fun)
                  (progn
                    (*set mode (fpvl (random-choose (list mode c2mode)) (pvar-length mode)))
                    (*set count (+!! count c2count)))
                  (*when (<!! count c2count)
                    (*set mode c2mode count c2count))))))))

(defun READOUT-MODE-AND-COUNT
    (pvar active &optional (slop 0) (method :variable)
     (diff-fun '-!!) (sum-fun '+!!) (null-value 0))
  (*locally
    (declare (type (field-pvar (pvar-length pvar)) pvar)
             (type (field-pvar 1) active))
    (*let (mode (count (!! 0)))
      (declare (type (field-pvar 32) mode count))
      (*set mode (if!! (plusp!! active) pvar (!!f null-value)))
      (float-mode mode count slop method diff-fun sum-fun :null-value null-value)
      (list (readout-home mode) (readout-home count)))))

(defun READOUT-MODE
    (pvar active &optional (slop 0) (method :fixed)
     (diff-fun '-!!) (sum-fun '+!!) (null-value 0))
  (let ((mode-and-count (readout-mode-and-count pvar active slop method
                                                diff-fun sum-fun null-value)))
    (first mode-and-count)))
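A hedged Python sketch (not part of the original appendix) of the approximate-mode idea behind FLOAT-MODE and READOUT-MODE: each merge keeps a (value, count) pair, equal values pool their counts, and otherwise the better-supported value survives. Unlike the *Lisp version it uses exact equality rather than a slop tolerance and merges a plain list rather than a pvar hierarchy; the names merge and approximate_mode are illustrative.

# Illustrative sketch (not the patent's *Lisp code) of the hierarchical
# "prominent value" (approximate mode) computation: each merge step keeps
# one value and a count; equal values add their counts, otherwise the
# value with the larger count survives.

def merge(a, b):
    """a and b are (value, count) pairs; count 0 marks an empty cell."""
    va, ca = a
    vb, cb = b
    if ca == 0:
        return b
    if cb == 0:
        return a
    if va == vb:
        return (va, ca + cb)
    return a if ca >= cb else b

def approximate_mode(values):
    """Pairwise-merge (value, 1) items level by level; the survivor's value
    approximates the most frequent value, with a lower-bound count."""
    items = [(v, 1) for v in values]
    while len(items) > 1:
        nxt = []
        for i in range(0, len(items) - 1, 2):
            nxt.append(merge(items[i], items[i + 1]))
        if len(items) % 2:                 # odd element passes through
            nxt.append(items[-1])
        items = nxt
    return items[0]

if __name__ == "__main__":
    print(approximate_mode([7, 7, 3, 7, 3, 7, 7, 2]))   # -> (7, count)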
;;; Set and integer operations.
;; The operations on this page are simple utilities for set operations
;; used in operations described below.
;;
;; Set operations ;;
(defun S<-0 (s)
  (*locally
    (declare (type (field-pvar 1) s))
    (*set s (!! 0))))

(defun S<-S- (s)
  (*locally
    (declare (type (field-pvar 1) s))
    (*set s (not01!! s))))

(defun S1<-S2 (s1 s2)
  (*locally
    (declare (type (field-pvar 1) s1 s2))
    (*set s1 s2)))

(defun S1<-S1-S2 (s1 s2)
  (*locally
    (declare (type (field-pvar 1) s1 s2))
    (*set s1 (if!! (plusp!! s2) (!! 0) s1))))

(defun S1<-S1+S2 (s1 s2)
  (*locally
    (declare (type (field-pvar 1) s1 s2))
    (*set s1 (or01!! s1 s2))))

(defun S1<-S1*S2 (s1 s2)
  (*locally
    (declare (type (field-pvar 1) s1 s2))
    (*set s1 (and01!! s1 s2))))

(defun S<-P-> (s p v)
  (*locally
    (declare (type (field-pvar 1) s)
             (type (field-pvar (pvar-length p)) p))
    (*set s (>01!! p (!!f v)))))

(defun S<-P-< (s p v)
  (*locally
    (declare (type (field-pvar 1) s)
             (type (field-pvar (pvar-length p)) p))
    (*set s (<01!! p (!!f v)))))

(defun S<-P-= (s p v)
  (*locally
    (declare (type (field-pvar 1) s)
             (type (field-pvar (pvar-length p)) p))
    (*set s (=01!! p (!!f v)))))

;; integer operations

(defun P<-0 (p)
  (*locally
    (declare (type (field-pvar (pvar-length p)) p))
    (*set p (!! 0))))

(defun P<-P*S (p s)
  (*locally
    (declare (type (field-pvar (pvar-length p)) p)
             (type (field-pvar 1) s))
    (*when (zerop!! s)
      (*set p (!! 0)))))

(defun P1<-P2 (p1 p2)
  (*locally
    (declare (type (field-pvar (pvar-length p1)) p1)
             (type (field-pvar (pvar-length p2)) p2))
    (*set p1 p2)))

(defun P1<-P1+P2 (p1 p2)
  (*locally
    (declare (type (field-pvar (pvar-length p1)) p1)
             (type (field-pvar (pvar-length p2)) p2))
    (*set p1 (+!! p1 p2))))

(defun P1<-P1-P2 (p1 p2 &optional (diffun #'-!!))
  (*locally
    (declare (type (field-pvar (pvar-length p1)) p1)
             (type (field-pvar (pvar-length p2)) p2))
    (*set p1 (abs!! (*funcall diffun p1 p2)))))

(defun P1<-P1-MAX-P2 (p1 p2)
  (*locally
    (declare (type (field-pvar (pvar-length p1)) p1)
             (type (field-pvar (pvar-length p2)) p2))
    (*set p1 (max!! p1 p2))))

(defun P1<-P1-MIN-P2 (p1 p2)
  (*locally
    (declare (type (field-pvar (pvar-length p1)) p1)
             (type (field-pvar (pvar-length p2)) p2))
    (*set p1 (min!! p1 p2))))

;; transfer utils

(defun s<-0-multiple (lst)
  (loop for s in lst do (s<-0 s)))

(defun s1<-s2-list (l1 l2)
  (loop for s1 in l1
        for s2 in l2
        do (s1<-s2 s1 s2)))
;;; Scene summarization.
;; This function establishes hierarchical modes in sequence.
;; value-or-list is a value pvar or a list of value pvars.
(defun SELECT-VALUE-LIST
    (value pvar-list selected
     &key active (cond-pvars (circular-list nil)) (method :variable)
          (diff-fun '-!!) (slop default-similarity-scale-factor))
  (declare (special selected active value pvar-list slop method diff-fun))
  (let-s (temp temp1)
    (s<-0 selected)
    (loop for value-pvar in pvar-list
          for cond-pvar in cond-pvars
          do (s1<-s2 temp1 active)
             (if cond-pvar (s1<-s1*s2 temp1 cond-pvar))
             (select-value value value-pvar temp
                           :active temp1 :method method
                           :diff-fun diff-fun :slop slop)
             (s1<-s1+s2 selected temp))))

(defvar default-selection-termination-steps 5)
(defvar default-selection-termination-factor 0.05)

(defun LIST-MODES
    (value-or-list selected
     &key (diff-fun '-!!) (method :variable)
          (steps default-selection-termination-steps)
          (slop default-similarity-scale-factor)
          (cutoff default-selection-termination-factor)
          (cond-pvars (circular-list nil)))
  (declare (special value-or-list selected cond-pvars cutoff steps slop method diff-fun))
  (let ((vlist (if (listp value-or-list) value-or-list (list value-or-list)))
        (total 0) (modes ()) (counts ()) (step 0))
    (declare (special vlist total modes counts step))
    (let-s (active temp)
      (s1<-s2 active selected)
      (let ((remaining (readout-sum active active))
            mode count)
        (declare (special remaining mode count))
        (setq total remaining)
        (loop while (and (> (/ remaining total) cutoff) (< step steps))
              do (setq step (1+ step))
                 (setq mode (readout-mode-list vlist active 0 :fixed))
                 (push mode modes)
                 (select-value-list mode vlist temp
                                    :active selected :cond-pvars cond-pvars
                                    :method method :diff-fun diff-fun :slop slop)
                 (s1<-s1-s2 active temp)
                 (setq count (readout-sum temp temp))
                 (push count counts)
                 (setq remaining (readout-sum active active)))
        (list (reverse modes) (reverse counts))))))

;;; Component labeling.

;; This function implements the basic coloring process.

(defun SPREAD-BY-CHUNKS
    (active mvl figure &optional disp? step?
     &aux (nullv (1+ top-chunking-level)))
  (*locally
    (declare (type (field-pvar 1) active figure)
             (type (field-pvar (pvar-length mvl)) mvl))
    (let ((top-level (*max (value-when!! mvl (not=01!! mvl (!!f nullv))))))
      (*let ((ftemp (!! 0)) (ptemp (!! 0)))
        (declare (type (field-pvar 1) ftemp ptemp))
        (*set ftemp figure)
        (loop for l from 1 to top-level do
          (*when (valid-level?? l mvl)
            (*set ftemp (or01!! (child-ref!! ftemp 1 l) (child-ref!! ftemp 2 l)))))
        (loop for l from top-level downto 1 do
          (*when (valid-level=?? l mvl)
            (*set ptemp ftemp))
          (*let ((ctemp (child-ref!! ptemp 2 l)))
            (declare (type (field-pvar 1) ctemp))
            (setf (child-ref!! ptemp 2 l) (or01!! ptemp ctemp))))
        (*set figure (or01!! figure (and01!! ptemp active)))))))

;; a1, a2, and RESULT are binary pvars. Pixels in a1 adjacent to pixels
;; of a2 are set to 1 in RESULT. Other pixels are set to 0 in RESULT.

(defun FILTER-ADJACENT (a1 a2 result)
  (let-s (c2)
    (s1<-s2 c2 a2)
    (s<-s- c2)
    (filter-edges c2 c2)
    (s1<-s1*s2 c2 a1)
    (s1<-s2 result c2)))

;; This function colors a connected component from one or more seed
;; locations in figure. Note that it computes the maximum valid level
;; for the given selected set, and passes the result to SPREAD-BY-CHUNKS.

(defun COLOR-FROM-SEED
    (selected figure &key (start-level 0) disp? step?)
  (declare (special selected figure start-level disp? step?))
  (let-s (active new)
    (let-p (maxl-valid)
      (s1<-s2 active selected)
      (if (zerop (readout-max active figure)) (s<-s- active))
      (label-valid-scale active maxl-valid start-level)
      (filter-adjacent active figure new)   ;display purpose only
      (loop while (plusp (readout-sum new active))
            do (spread-by-chunks active maxl-valid figure disp? step?)
               (s1<-s1-s2 active figure)
               (filter-adjacent active figure new)))))
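For comparison only, here is a conventional queue-based flood fill in Python; it is a stand-in, not the hierarchical chunk-based spreading implemented by SPREAD-BY-CHUNKS and COLOR-FROM-SEED above, but it produces the same kind of result: the connected component of the figure containing the seed locations is colored. The name color_from_seed and the 4-connected neighborhood are assumptions for illustration.

# Illustrative stand-in (not the hierarchical *Lisp coloring above): a
# conventional queue-based flood fill that colors the connected component
# of `figure` containing the seed locations.
from collections import deque

def color_from_seed(figure, seeds):
    """figure: 2-D list of 0/1; seeds: iterable of (row, col) inside the
    component. Returns a same-shaped 0/1 grid marking the colored component."""
    rows, cols = len(figure), len(figure[0])
    colored = [[0] * cols for _ in range(rows)]
    queue = deque((r, c) for r, c in seeds if figure[r][c])
    for r, c in queue:
        colored[r][c] = 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and figure[nr][nc] and not colored[nr][nc]:
                colored[nr][nc] = 1
                queue.append((nr, nc))
    return colored

if __name__ == "__main__":
    img = [[1, 1, 0, 1],
           [0, 1, 0, 1],
           [0, 1, 1, 0]]
    for row in color_from_seed(img, [(0, 0)]):
        print(row)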

Claims (61)

WHAT IS CLAIMED:
1. A method of analyzing a body of data that includes a plurality of data items; the method being performed by operating a system that includes memory and a processor connected for accessing the memory; the method comprising steps of:

storing in the memory the data items of the body of data; and operating the processor to produce a hierarchy of data items, the hierarchy including the data items of the body of data as a lowest level and at least one higher level of aggregative data items, each higher level having a respective next lower level, each level of the hierarchy having a respective number of data items;

the step of operating the processor to produce the hierarchy comprising, for each of the higher levels, a respective substep of producing each aggregative data item of the level by operating on a respective group of the data items of the respective next lower level, each aggregative data item of the level being for indicating an attribute of the respective group of the data items of the respective next lower level;

the respective number of aggregative data items of a first one of the higher levels being not substantially less than the respective number of data items of the next lower level.
2. The method of claim 1 in which the body of data defines an image, each data item indicating an attribute of a respective region of the image, the respective region of each aggregative data item including the respective regions of each data item in the respective group of the data items of the next lower level.
3. The method of claim 2 in which the higher levels of the hierarchy include a second level of the aggregative data items, a first one of the aggregative data items of the second level indicating that a geometric structure occurs in its respective region, the method further comprising a step of operating the processor to produce geometric data indicating the geometric structure by operating on the aggregative data items of the second level.
4. The method of claim 3 in which the higher levels of the hierarchy include a top level, all of the levels of the hierarchy except the top level having a next higher level, the second level being the top level.
5. The method of claim 2 in which the image has a plurality of pixels, each of the data items of the body of data including a pixel value for a respective one of the pixels of the image.
6. The method of claim 1 in which the respective number of aggregative data items of the first higher level is equal to the respective number of data items of the respective next lower level.
7. The method of claim 1 in which the data items at each level of the hierarchy form an array having a dimension, the data items in the respective group being separated from each other in the dimension by a power-of-two offset.
8. The method of claim 1 in which the data items at each level form a two-dimensional array having first and second dimensions, the levels of the hierarchy including alternating first and second types of levels; the data items in the respective group on each level of the first type being separated from each other by a power-of-two offset in the first dimension; the data items in the respective group on each level of the second type being separated from each other by a power-of-two offset in the second dimension.
9. The method of claim 1 in which the respective substeps of producing the aggregative data items for the higher levels of the hierarchy are performed in a sequence, the substep of producing the aggregative data items for each higher level being, in the sequence, subsequent to the substep of producing the data items for the respective next lower level.
10. The method of claim 9 in which the respective substep of producing the aggregative data items for the first higher level comprises a substep of encoding the data items for the first higher level.
11. A system for analyzing a body of data that includes a plurality of data items; the system comprising:

memory for storing the data items of the body of data; and a processor connected for accessing the data items stored in the memory, the processor further being for producing a hierarchy of data items, the hierarchy including the data items of the body of data as a lowest level and a series of higher levels of aggregative data items, each of the higher levels having a respective next lower level, each level of the hierarchy having a respective number of data items;

the processor producing each of the higher levels of the hierarchy by producing each aggregative data item of the level by operating on a respective group of the data items of the respective next lower level, each aggregative data item of the level indicating an attribute of the respective group of the data items of the respective next lower level;

the respective number of aggregative data items of a first one of the higher levels being not substantially less than the respective number of data items of the next lower level.
12. The system of claim 11 in which the body of data defines an image, each data item indicating an attribute of a respective region of the image, the respective region of each aggregative data item including the respective regions of the respective group of the data items of the next lower level.
13. The system of claim 12 in which the higher levels of the hierarchy include a second level of the aggregative data items, a first one of the aggregative data items of the second level indicating a geometric structure that occurs in its respective region, the processor further being for operating on the aggregative data items of the second level to produce geometric data indicating the geometric structure of the image.
14. The system of claim 12 in which the image has a plurality of pixels, the data items of the body of data including, for each of the pixels of the image, a respective data item, the respective data item including a respective pixel value.
15. The system of claim 14 in which the respective next lower level of the first higher level of the hierarchy is the lowest level with the respective data items that include the respective pixel values for the pixels, the first higher level including a respective first level aggregative data item for a first one of the pixels; the processor comprising a plurality of processing units including a first respective processing unit for the first pixel; the first respective processing unit of the first pixel producing the first pixel's respective first level aggregative data item by operating on a respective group of pixel values from the respective data items of the lowest level, the respective group including the first pixel's respective pixel value.
16. The system of claim 15 in which the respective group includes only the first pixel's respective pixel value and a second pixel's respective pixel value.
17. The system of claim 15 in which the hierarchy of levels includes a second one of the higher levels, the respective next lower level of the second higher level being the first higher level, the second higher level including a respective second level aggregative data item for the first pixel; the processor further being for producing the first pixel's respective second level aggregative data item by operating on a respective group of data items from the first higher level, the respective group including the first pixel's respective first level aggregative data item.
18. The system of claim 17 in which the first higher level includes a respective first level aggregative data item for a third pixel, the respective group including only the first pixel's respective first level aggregative data item and the third pixel's respective first level aggregative data item.
19. The system of claim 17 in which the plurality of processing units further includes, for the first pixel, a second respective processing unit; the first pixel's second respective processing unit producing the first pixel's respective second respective aggregative data item by operating on the first pixel's first respective aggregative data item.
20. The system of claim 19 in which the first pixel's first respective processing unit produces the first pixel's first respective aggregative data item by operating on the respective pixel values of the first pixel and of a second pixel and the first pixel's second respective processing unit produces the first pixel's second respective aggregative data item by operating on the first pixel's first respective aggregative data item and on another aggregative data item of the first higher level.
21. The system of claim 14 in which each of the higher levels of the hierarchy includes a respective aggregative data item for a first one of the pixels, the processor comprising a plurality of processing units including a first pixel processing unit for the first pixel; the first pixel processing unit producing the first pixel's respective aggregative data item at each of the higher levels of the hierarchy by operating on a respective group of data items from the respective next lower level.
22. The system of claim 21 in which the respective group includes the first pixel's respective data item at the respective next lower level.
23. The system of claim 22 in which the respective group further includes only one additional data item at the respective next lower level other than the first pixel's respective data item.
24. The system of claim 23 in which the additional data item is a respective data item of a paired pixel that is separated from the first pixel by a power-of-two offset in the image.
25. The system of claim 14 in which each of the higher levels of the hierarchy includes a respective aggregative data item for each of the pixels, the processor comprising a plurality of processing units including a respective processing unit for each of the pixels; the respective processing unit producing each pixel's respective aggregative data item at each of the higher levels of the hierarchy by operating on a respective group of data items from the respective next lower level.
26. The system of claim 25 in which, for each pixel, the respective group of data items from the respective next lower level includes the pixel's respective data item at the respective next lower level and at most one additional data item from the respective next lower level.
27. The system of claim 26 in which, for each pixel whose respective group includes one additional data item, the additional data item is a respective data item of a paired pixel, the paired pixel being separated from the pixel by a power-of-two offset in the image.
28. The system of claim 11 in which the respective number of aggregative data items of the first higher level is equal to the respective number of data items of the next lower level.
29. The system of claim 11, further comprising an image input device for receiving input image data defining an image, the body of data being the input image data.
30. The system of claim 11, further comprising an image output device for providing an output image and a user input device for receiving signals from a user requesting modifications in the output image to produce a modified image, the body of data being data defining the modified image.
31. A method of operating a system that includes a plurality of processing units and, for each processing unit, respective local memory, each processing unit being connected for accessing its respective local memory, the processing units being connected for communication; the method comprising steps of:

storing in the respective local memory of each of the processing units a respective lowest level data item, the respective lowest level data items together forming a body of data; and producing a hierarchy of data items, the hierarchy including the data items of the body of data as a lowest level and at least one higher level of aggregative data items, each higher level having a respective next lower level, each level of the hierarchy having a respective number of data items;
the step of producing the hierarchy comprising, for each higher level, substeps of:

operating each of the processing units to produce a respective aggregative data item of the higher level by operating on a respective group of the data items of the respective next lower level; each aggregative data item of the respective higher level indicating an attribute of the respective group of the data items of the respective next lower level; and operating each of the processing units to store the respective aggregative data item produced by each processing unit in its local memory;

the substep of operating each of the processing units to produce a respective aggregative data item comprising a substep of communicating with a connected processing unit to obtain a first one of the respective group of the data items, the first one of the respective group of data items being stored in the local memory of the connected processing unit;

the respective number of aggregative data items of each of the higher levels being equal to the respective number of data items of the lowest level.
32. The method of claim 31 in which the processing units are connected to form a hypercube network, the substep of communicating comprising a substep of communicating through the hypercube network.
33. A method of operating a system that includes memory and a processor connected for accessing the memory, the method comprising steps of:

storing in the memory a body of data defining an image that includes a plurality of pixels, the body of data including a plurality of data items, each including a pixel value for a respective one of the pixels; and operating the processor to produce a dense hierarchy of levels of attribute data items by operating on the data items in the body of data, each attribute data item indicating an attribute of a respective region of the image; the levels further including a lowest level and a sequence of higher levels, each of the higher levels having a respective next lower level in the hierarchy;
each level of the hierarchy having a respective number of attribute data items; the step of operating the processor comprising substeps of:

operating on each of the data items in the body of data to produce a respective starting attribute data item for each pixel, the lowest level of the hierarchy including the respective starting attribute data items;
and for each of the higher levels, producing each attribute data item of the level by combining a respective set of the attribute data items of the respective next lower level; the respective number of attribute data items of each of the higher levels being not substantially less than the respective number of attribute data items of the respective next lower level.
34. The method of claim 33 in which the respective number of attribute data items of each of the higher levels is equal to the respective number of data items of the respective next lower level.
35. The method of claim 33 in which each pixel's respective starting attribute data item is equal to the pixel's respective pixel value.
36. The method of claim 33 in which each pixel has a set of neighboring pixels, each pixel's respective starting attribute data item indicating whether the image includes an edge between the pixel and any of its neighboring pixels, the substep of operating on each of the data items in the body of data comprising a substep of operating on each pixel's respective data item and on the respective data item of each of the pixel's neighboring pixels to produce the pixel's respective starting attribute data item.
37. The method of claim 33 in which each attribute data item is a number;
each substep of producing each attribute data item comprising adding the attribute data items in the respective set of the attribute data items of the next lower level.
38. The method of claim 33 in which the higher levels of the hierarchy include alternating first and second types of levels, the substep of producing each attribute data item comprising, for each of the first type of level, producing each attribute data item by operating on a respective set of data items from the next lower level that includes only one respective data item, the next lower level being of the second type of level; each attribute data item at one of the higher levels of the first type being the same as the one respective data item included in its respective set of data items from the next lower level.
39. The method of claim 38 in which the image has a first dimension and a second dimension, the respective region of each attribute data item in the hierarchy being a slice extending in the first dimension.
40. A method of operating a system that includes a plurality of processing units and, for each processing unit, respective local memory, each processing unit being connected for accessing its respective local memory; the method comprising steps of:

storing in the respective local memory of each of the processing units a respective lowest level data item, the respective lowest level data items together forming a body of data; and producing a hierarchy of data items, the hierarchy including the data items of the body of data as a lowest level and a plurality of higher levels of data items, each higher level having a respective next lower level; the lowest level being the respective next lower level of one of the higher levels;

the step of producing the hierarchy comprising, for each of the higher levels, a substep of operating each of the processing units to produce a respective data item of the higher level by operating on a respective set of the data items of the respective next lower level; each data item of the respective higher level indicating an attribute of the respective set of the data items of the respective next lower level; the attribute being a binary attribute having a first value and a second value; each processing unit producing a respective sequence of data items, the respective sequence including one data item for each of the higher levels; each respective sequence including at most one transition from the first value to the second value; the respective sequence of a first one of the processing units including a transition from the first value to the second value between first and second ones of the levels;

the step of producing the hierarchy further comprising a substep of operating the first processing unit to store level data indicating the transition from the first value to the second value between the first and second levels.
41. The method of claim 40 in which the level data indicates that the first level is the highest level at which the binary attribute has the first value.
42. The method of claim 40 in which the body of data defines an image, each respective lowest level data item including a pixel value for a respective pixel of the image.
43. The method of claim 42 in which the data items of each of the higher levels are aggregative data items, each of the aggregative data items indicating the binary attribute for a respective region of the image.
44. A method of operating a system that includes memory and a processor connected for accessing the memory, the method comprising steps of:

storing in the memory a body of data defining an image that includes a plurality of pixels, the body of data including a plurality of data items, each including a pixel value for a respective one of the pixels;

operating the processor to produce a hierarchy of levels of upward attribute data items by operating on the data items in the body of data, each upward attribute data item indicating an upward attribute of a respective region of the image; the levels further including a lowest level and a sequence of higher levels, each of the higher levels having a respective next lower level in the hierarchy; each level of the hierarchy having a respective number of upward attribute data items; the respective number of upward attribute data items of each of the higher levels being not substantially less than the respective number of upward attribute data items of the respective next lower level; and for a first one of the higher levels in the hierarchy, operating the processor to produce a number of downward attribute data items at a second level in the hierarchy by operating on a respective set of the upward attribute data items of the first higher level, the second level being the respective next lower level of the first higher level, each downward attribute data item indicating a downward attribute of a respective region of the image; each of the upward attribute data items of the second level having a respective one of the downward attribute data items.
45. The method of claim 44 in which the step of operating the processor to produce the downward attribute data items comprises a substep of operating on each of the upward attribute data items of the second level and on the respective set of the upward attribute data items of the first higher level, the method further comprising a step of replacing each upward attribute data item of the second level in the hierarchy with its respective downward attribute data item.
46. The method of claim 45 in which each of the respective sets of the upward attribute data items of the first higher level includes respective first and second ones of the upward attribute data items, the substep of operating on each of the upward attribute data items of the second level comprising substeps of:

operating on each upward attribute data item and on its respective first upward attribute data item to produce a respective preliminary downward attribute data item; and operating on each respective preliminary downward attribute data item and on the respective second upward attribute data item to produce the respective downward attribute data item.
47. The method of claim 44 in which the second level is one of the higher levels, the method further comprising a step of operating the processor to produce a number of downward attribute data items at a third level in the hierarchy by operating on a respective set of the downward attribute data items of the second level, the third level being the respective next lower level of the second level; each downward attribute data item at the third level indicating a downward attribute of a respective region of the image; each of the upward attribute data items at the third level having a respective one of the downward attribute data items at the third level.
48. The method of claim 44 in which each of the upward attribute data items at the first higher level includes a respective value of the attribute, each respective set of the upward attribute data items including a maximum data item whose respective value is a maximum of the respective values of the data items in the respective set; the step of operating the processing unit to produce the downward attribute data items comprising a substep of producing each downward attribute data item so that it includes a respective value of the attribute that is equal to the respective value of the maximum data item.
49. The method of claim 44 in which each of the upward attribute data items at the first higher level includes a respective value of the attribute, each respective set of the upward attribute data items including a minimum data item whose respective value is a minimum of the respective values of the data items in the respective set; the step of operating the processing unit to produce the downward attribute data items comprising a substep of producing each downward attribute data item so that it includes a respective value of the attribute that is equal to the respective value of the minimum data item.
50. The method of claim 44 in which each of the upward attribute data items at the first higher level includes a respective value of the attribute;
the step of operating the processing unit to produce the downward attribute data items comprising a substep of producing each downward attribute data item so that it includes a respective value of the attribute that is a central value selected from the respective values of the attribute in the upward attribute data items.
51. The method of claim 44 in which the respective region of a first one of the upward attribute data items at the first higher level meets a criterion, the respective set of upward attribute data items of a second one of the downward attribute data items including the first upward attribute data item; the step of operating the processing unit to produce the downward attribute data items comprising a substep of producing the second downward attribute data item so that it indicates the attribute of the respective region that meets the criterion.
52. The method of claim 51 in which the criterion is that the respective region be included in a component for which a value for the attribute has been produced.
53. The method of claim 52 in which the attribute is orientation, the value for orientation being produced if the component includes a sufficient number of pixels.
54. A method of operating a system that includes memory and a processor connected for accessing the memory, the method comprising steps of:

storing in the memory a body of data that includes a plurality of data items, each including one of a set of respective values; and operating the processor to produce a first hierarchy of levels of prominent value data items by operating on the data items in the body of data, each prominent value data item including one of the respective values and a count; the levels further including a lowest level and a sequence of higher levels, each of the higher levels having a respective next lower level in the first hierarchy; the step of operating the processor comprising substeps of:

operating on each of the data items in the body of data to obtain a respective starting prominent value data item for each data item in the body of data, each data item's respective starting prominent value data item including the data item's respective value; the lowest level of the first hierarchy including the respective starting prominent value data items; and for each of the higher levels, producing each prominent value data item of the higher level by combining a respective set of the prominent value data items of the respective next lower level, each prominent value data item including one of the respective values from the prominent value data items in the respective set, the count in each prominent value data item being produced by operating on the counts in the prominent value data items in the respective set.
55. The method of claim 54 in which the data items of the body of data define an image, the respective value of each data item being a pixel value of a respective pixel of the image.
56. The method of claim 54 in which, for a first one of the prominent value data items at one of the higher levels, the substep of producing each prominent value data item comprises, if first and second ones of the prominent value data items in the respective set of prominent value data items of the respective next lower level are similar, a substep of adding the counts of the first and second next lower level prominent value data items to produce the count in the first higher level prominent value data item.
57. The method of claim 56 in which the count of the first next lower level prominent value data item is greater than the count of the second next lower level prominent value data item, the substep of producing each prominent value data item further comprising, if the first and second next lower level prominent value data items are not similar, a substep of producing the first higher level prominent value data item so that it includes the respective value and count of the first next lower level prominent value data item.
58. The method of claim 54 in which the count in each respective starting prominent value data item is one.
59. The method of claim 54 in which the first hierarchy includes a top level that does not have a next higher level, the top level including a top level prominent value data item that includes a top one of the respective values and a top level count, the method further comprising a step of producing the top respective value and the top level count from the top level prominent value data item.
60. The method of claim 59 in which the data items of the body of data include a subset of data items that have respective values other than the top respective value, the method further comprising a step of:

operating the processor to produce a second hierarchy of levels of prominent value data items by operating on the data items in the subset, each prominent value data item in the second hierarchy including one of the respective values other than the top respective value and a count; the levels further including a lowest level and a sequence of higher levels, each of the higher levels having a respective next lower level in the second hierarchy;
the step of operating the processor to produce the second hierarchy comprising substeps of:

operating on each of the data items in the subset to produce a respective starting prominent value data item for each data item in the subset, each data item's respective starting prominent value data item including the data item's respective value; the lowest level of the second hierarchy including the respective starting prominent value data items;
and for each of the higher levels, producing each prominent value data item of the higher level by combining a respective set of the prominent value data items of the respective next lower level, each prominent value data item including one of the respective values from the prominent value data items in the respective set, the count in each prominent value data item being produced by operating on the counts in the prominent value data items in the respective set.
61. The method of claim 54 in which the data items of the body of data define an image, each data item of the body of data including a pixel value for a respective pixel of the image; each of the prominent value data items indicating a prominent value of a respective region of the image.
CA002041922A 1990-06-08 1991-05-07 Dense aggregative hierarchical techniques for data analysis Expired - Fee Related CA2041922C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/535,796 US5280547A (en) 1990-06-08 1990-06-08 Dense aggregative hierarhical techniques for data analysis
US535,796 1990-06-08

Publications (2)

Publication Number Publication Date
CA2041922A1 CA2041922A1 (en) 1991-12-09
CA2041922C true CA2041922C (en) 1995-07-18

Family

ID=24135802

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002041922A Expired - Fee Related CA2041922C (en) 1990-06-08 1991-05-07 Dense aggregative hierarchical techniques for data analysis

Country Status (4)

Country Link
US (1) US5280547A (en)
EP (1) EP0460970A2 (en)
JP (1) JPH04296985A (en)
CA (1) CA2041922C (en)

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5359724A (en) * 1992-03-30 1994-10-25 Arbor Software Corporation Method and apparatus for storing and retrieving multi-dimensional data in computer memory
GB9217361D0 (en) * 1992-08-14 1992-09-30 Crosfield Electronics Ltd Methods and apparatus for generating graphics patterns
US5450603A (en) * 1992-12-18 1995-09-12 Xerox Corporation SIMD architecture with transfer register or value source circuitry connected to bus
US5408670A (en) * 1992-12-18 1995-04-18 Xerox Corporation Performing arithmetic in parallel on composite operands with packed multi-bit components
US5651121A (en) * 1992-12-18 1997-07-22 Xerox Corporation Using mask operand obtained from composite operand to perform logic operation in parallel with composite operand
US5375080A (en) * 1992-12-18 1994-12-20 Xerox Corporation Performing arithmetic on composite operands to obtain a binary outcome for each multi-bit component
EP0603425A1 (en) * 1992-12-22 1994-06-29 International Business Machines Corporation Representation of n-ary trees
US5533148A (en) * 1993-09-30 1996-07-02 International Business Machines Corporation Method for restructuring physical design images into hierarchical data models
JP2812168B2 (en) * 1993-12-27 1998-10-22 松下電器産業株式会社 Shape data compression method and shape data decompression method
JP3335009B2 (en) * 1994-09-08 2002-10-15 キヤノン株式会社 Image processing method and image processing apparatus
US5842004A (en) 1995-08-04 1998-11-24 Sun Microsystems, Inc. Method and apparatus for decompression of compressed geometric three-dimensional graphics data
US6747644B1 (en) 1995-08-04 2004-06-08 Sun Microsystems, Inc. Decompression of surface normals in three-dimensional graphics data
US5793371A (en) 1995-08-04 1998-08-11 Sun Microsystems, Inc. Method and apparatus for geometric compression of three-dimensional graphics data
US5832475A (en) * 1996-03-29 1998-11-03 International Business Machines Corporation Database system and method employing data cube operator for group-by operations
US6216168B1 (en) * 1997-03-17 2001-04-10 Cabletron Systems, Inc. Perspective-based shared scope address resolution method and apparatus
AUPO886397A0 (en) * 1997-08-29 1997-09-25 Canon Kabushiki Kaisha Utilisation of caching in image manipulation and creation
US6816618B1 (en) * 1998-03-03 2004-11-09 Minolta Co., Ltd. Adaptive variable length image coding apparatus
US6130676A (en) * 1998-04-02 2000-10-10 Avid Technology, Inc. Image composition system and process using layers
US6081624A (en) 1998-06-01 2000-06-27 Autodesk, Inc. Spatial index compression through spatial subdivision encoding
US6446109B2 (en) 1998-06-29 2002-09-03 Sun Microsystems, Inc. Application computing environment
US20020177911A1 (en) * 1998-09-30 2002-11-28 Lawrence Waugh Method and apparatus for determining an arrangement of components
JP3648089B2 (en) * 1999-03-18 2005-05-18 富士通株式会社 Design system and recording medium
IT1311443B1 (en) * 1999-11-16 2002-03-12 St Microelectronics Srl METHOD OF CLASSIFICATION OF DIGITAL IMAGES ON THE BASIS OF THEIR CONTENT.
WO2002029719A2 (en) * 2000-10-06 2002-04-11 Generic Vision Aps Method and system for multi-dimensional segmentation
EP2249284B1 (en) * 2001-01-22 2014-03-05 Hand Held Products, Inc. Optical reader having partial frame operating mode
US7268924B2 (en) 2001-01-22 2007-09-11 Hand Held Products, Inc. Optical reader having reduced parameter determination delay
GB2377205B (en) * 2001-07-04 2004-09-08 Hewlett Packard Co Lenticular images
US6961803B1 (en) * 2001-08-08 2005-11-01 Pasternak Solutions Llc Sliced crossbar architecture with no inter-slice communication
US7249125B1 (en) * 2003-10-09 2007-07-24 Computer Associates Think, Inc. System and method for automatically determining modal value of non-numeric data
WO2007000999A1 (en) * 2005-06-27 2007-01-04 Pioneer Corporation Image analysis device and image analysis method
US7539343B2 (en) * 2005-08-24 2009-05-26 Hewlett-Packard Development Company, L.P. Classifying regions defined within a digital image
US7609891B2 (en) * 2005-08-31 2009-10-27 Sony Corporation Evaluation of element distribution within a collection of images based on pixel scatterness
US7734575B1 (en) * 2005-11-16 2010-06-08 Amdocs Software Systems Limited System, method, and computer program product for scaleable data collection and audience feedback
US8037404B2 (en) * 2009-05-03 2011-10-11 International Business Machines Corporation Construction and analysis of markup language document representing computing architecture having computing elements
WO2015174395A1 (en) 2014-05-14 2015-11-19 新日鐵住金株式会社 Continuous casting method for slab
GB2528669B (en) * 2014-07-25 2017-05-24 Toshiba Res Europe Ltd Image Analysis Method
US10423693B2 (en) * 2014-09-15 2019-09-24 Autodesk, Inc. Parallel processing using a bottom up approach
US9704056B2 (en) * 2015-04-02 2017-07-11 Qualcomm Incorporated Computing hierarchical computations for computer vision calculations
WO2019005175A1 (en) 2017-06-30 2019-01-03 Intel Corporation Magnetoelectric spin orbit logic with displacement charge
WO2019005098A1 (en) 2017-06-30 2019-01-03 Go Logic Decision Time, Llc Methods and systems of assertional projective simulation

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4174514A (en) * 1976-11-15 1979-11-13 Environmental Research Institute Of Michigan Parallel partitioned serial neighborhood processors
US4224600A (en) * 1979-03-26 1980-09-23 The Perkin-Elmer Corporation Arrays for parallel pattern recognition
US4363104A (en) * 1980-09-22 1982-12-07 Hughes Aircraft Company Imaging system having multiple image copying and hierarchical busing
US4507726A (en) * 1982-01-26 1985-03-26 Hughes Aircraft Company Array processor architecture utilizing modular elemental processors
US4622632A (en) * 1982-08-18 1986-11-11 Board Of Regents, University Of Washington Data processing system having a pyramidal array of processors
US4601055A (en) * 1984-04-10 1986-07-15 The United States Of America As Represented By The Secretary Of Commerce Image processor
US4850027A (en) * 1985-07-26 1989-07-18 International Business Machines Corporation Configurable parallel pipeline image processing system
JPH0634236B2 (en) * 1985-11-02 1994-05-02 日本放送協会 Hierarchical information processing method
JPH0785248B2 (en) * 1986-03-14 1995-09-13 株式会社東芝 Data Isle System
US4809346A (en) * 1986-07-18 1989-02-28 Hughes Aircraft Company Computer vision architecture for iconic to symbolic transformation
US4860201A (en) * 1986-09-02 1989-08-22 The Trustees Of Columbia University In The City Of New York Binary tree parallel processor
US4802090A (en) * 1987-06-24 1989-01-31 General Electric Company Histogramming of pixel values on a distributed processing system
US5022091A (en) * 1990-02-28 1991-06-04 Hughes Aircraft Company Image processing technique
US5193125A (en) * 1990-06-08 1993-03-09 Xerox Corporation Local hierarchical processing focus shift within an image

Also Published As

Publication number Publication date
CA2041922A1 (en) 1991-12-09
EP0460970A2 (en) 1991-12-11
JPH04296985A (en) 1992-10-21
US5280547A (en) 1994-01-18
EP0460970A3 (en) 1994-04-27

Similar Documents

Publication Publication Date Title
CA2041922C (en) Dense aggregative hierarchical techniques for data analysis
Weems Architectural requirements of image understanding with respect to parallel processing
Shapiro Connected component labeling and adjacency graph construction
Cabaret et al. Parallel Light Speed Labeling: an efficient connected component algorithm for labeling and analysis on multi-core processors
EP0460971B1 (en) System for labeling picture elements
Havel et al. Efficient tree construction for multiscale image representation and processing
Borgefors et al. Hierarchical decomposition of multiscale skeletons
Jiang et al. High-level feature based range image segmentation
Braquelaire et al. Representation of segmented images with discrete geometric maps
US5231676A (en) Hierarchical operations on border attribute data for image regions
Meer et al. A fast parallel method for synthesis of random patterns
Zhang et al. A gamma-signal-regulated connected components labeling algorithm
He et al. A fast algorithm for integrating connected-component labeling and euler number computation
Proietti An optimal algorithm for decomposing a window into maximal quadtree blocks
Davy et al. A note on improving the performance of Delaunay triangulation
Ibrahim Image understanding algorithms on fine-grained tree-structured simd machines (computer vision, parallel architectures)
Narayanan et al. Replicated data algorithms in image processing
Mahoney et al. Position-invariant hierarchical image analysis
Brun et al. A new split and merge algorithm based on discrete map
Embrechts et al. MIMD divide-and-conquer algorithms for the distance transformation. Part I: City Block distance
Meer et al. Processing of line drawings in a hierarchical environment
Celenk et al. Parallel implementation of the split and merge algorithm on hypercube processors for object detection and recognition
Halabi New algorithm-simulation connected components labeling for binary images
Willersinn et al. Dual irregular voronoi pyramids and segmentation
Skourikhine Dilated contour extraction and component labeling algorithm for object vector representation

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed