US20050044056A1 - Searching for object images with reduced computation - Google Patents
Searching for object images with reduced computation
- Publication number
- US20050044056A1 US10/643,467 US64346703A
- Authority
- US
- United States
- Prior art keywords
- image
- database
- query
- object images
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
- G06F16/532—Query formulation, e.g. graphical querying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
Description
- The present invention relates generally to pattern recognition and, more particularly, to searching a database for object images with relatively reduced computational overhead.
- Recognition of a human face is one of the most fundamental and seemingly effortless human activities. Imparting this capability to a machine has generated much interest in the field of automated face recognition, and a number of implementation approaches have been proposed. Human face recognition is a challenging area of development in biometrics for positive identification of a person. Such a machine has a broad range of applications, from photo identification for personal identification, credit card verification, and criminal identification to real-time matching of video images under different processing constraints.
- Search mechanisms for images from a huge image database are crucial for the success of such a biometric or recognition system. Typical exhaustive search mechanisms are counterproductive for practical applications.
- Thus, there is a continuing need for better ways to search for an image of an object in a database, and especially to recognize similar images with relatively reduced computational overhead.
-
FIG. 1 is a block diagram of a system in accordance with one embodiment of the present invention. -
FIG. 2 is a front view of a human face showing fiducial points for indexing an image thereof using contours in accordance with one embodiment of the present invention. -
FIG. 3 is a side view of the human face shown in FIG. 2 according to one embodiment of the present invention. -
FIG. 4A is a flow diagram of a method in accordance with one embodiment of the present invention. -
FIG. 4B is a flow diagram of a searching method in accordance with one embodiment of the present invention. -
FIG. 5 is a flow diagram of an image searching application for the system shown in FIG. 1 according to one embodiment of the present invention. -
FIG. 6 is a schematic depiction of a query image and search results from the image database shown in FIG. 1 consistent with one embodiment of the present invention. - In various embodiments, a system may take advantage of explicit information inherently present in both a front and a side view of human face images. A side-profile strategy may be used to obtain an outline of the face profile and extract discrete features from it. In such manner, a feature set calculated from the front view may be enriched, providing richer and more explicit information to assist in face recognition.
- In certain embodiments, an active contour or snake algorithm (herein a “snake contour”, “snakes algorithm”, or “snakes”) may be used to detect certain boundaries of images. Such active contours may be obtained as discussed in Xu, et al., Snakes, Shapes, and Gradient Vector Flow, IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 359-369 (March 1998), in one embodiment. In one embodiment, a face boundary may suffice in the case of a side profile. In a front view, the snakes algorithm may be used to detect the face boundary and eye boundaries, eyebrow boundaries, a nose boundary, and a lip boundary, although the scope of the present invention is not limited in this regard. These snakes are curves defined within an image domain that can move under the influence of internal forces coming from within the curve itself and external forces computed from the image data. These internal and external forces are defined so that snakes may conform to an image boundary or other desired feature within an image. In various embodiments, these forces may be calculated from an energy-based model, and a final curve may evolve from an initial curve under the influence of external potentials while being constrained by internal energies. The initial snake contour converges iteratively towards the solution of a partial differential equation. In certain embodiments, snakes may be developed around a face image using Gradient Vector Flow (GVF) snake-based contours, as discussed in Xu, et al.
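As a loose, didactic illustration of the energy-minimizing idea behind such contours, a greedy discretization can be sketched: each control point moves to the neighboring pixel that minimizes a smoothness term minus the local edge strength. This is a simplified sketch, not the GVF formulation of Xu et al.; the weight, grid, and edge map are invented for illustration.

```python
ALPHA = 0.5  # assumed weight of the internal (smoothness) energy

def greedy_step(points, edge_strength):
    """One greedy snake iteration: move each control point to the
    8-neighbourhood position minimising smoothness energy minus the
    external pull of the edge map."""
    new_points = []
    for i, (x, y) in enumerate(points):
        prev = points[i - 1]                 # closed contour: wrap around
        nxt = points[(i + 1) % len(points)]
        best = min(
            ((x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)),
            key=lambda p: ALPHA * ((p[0] - (prev[0] + nxt[0]) / 2) ** 2
                                   + (p[1] - (prev[1] + nxt[1]) / 2) ** 2)
                          - edge_strength.get(p, 0.0),
        )
        new_points.append(best)
    return new_points

# A strong synthetic edge along the column x == 2 attracts the contour.
edges = {(2, y): 1.0 for y in range(5)}
contour = greedy_step([(1, 0), (1, 2), (1, 4)], edges)
print(contour)  # the control points snap onto the x == 2 edge
```

A full implementation would iterate until the contour stops moving and would compute the edge map from image gradients rather than a hand-built dictionary.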
- Referring to
FIG. 1, a system 10 may include a processor 20 coupled via an interface 25 to a memory 30 storing software. The system 10 may be any processor-based system or device to perform the content-based search. While the system 10 may vary in different embodiments, in one embodiment, system 10 may be a personal computer, notebook computer, a handheld device, or the like. To communicate externally and interactively, the system 10 may include conventional interfaces, including, for example, an interface (I/F) 40 (e.g., a communication interface) and an interface 45 (e.g., a user interface), coupling through the interface 25 to the processor 20. - The
interface 25 may couple both the processor 20 and the memory 30 to a storage device 50, storing the images of objects in an image database (DB) 60. For example, the images of objects, such as different views of human faces, may be stored in the image database 60 in the form of a binary tree 65 comprising nodes at different levels (e.g., levels T1, T2, and T3) in a hierarchical manner. Consistent with one embodiment, the software may generate a data structure for the image database 60. This data structure may be metadata that describes the images in a desired form. - In one embodiment, the
tree 65 may provide a distance-based index structure where distance computations between objects of the data domain involve a metric distance function. Using this distance-based index structure, similarity queries may be performed on metric spaces. - By indicating reference data points on the images of objects for the
binary tree 65, the software may partition the image database 60 into spherical shell-like regions in a hierarchical manner. In this manner, when answering similarity queries, the binary tree 65 may utilize pre-computed distances between reference data points of a query object and reference points at a node at a certain level in the distance-based index structure of the binary tree 65 using fuzzy logic, in one embodiment. - For automated face recognition, a
controller 70 may couple to the processor 20 via the interface 25 in accordance with one embodiment of the present invention. The controller 70 may couple to a display 75, showing all the images that may be similar to a given query image in the image database 60. To this end, database software 80 may include a feature extractor 90 to represent images in the image database 60. In certain embodiments, image searching software 85 may include a fuzzy searcher algorithm 95 to partition the representations of images in the image database 60. The image searching application 85 may include a search algorithm capable of searching for images of objects in the image database 60 similar to a query image, using fuzzy logic, in one embodiment. - In operation, approximate matches to a given query object from a collection of images of objects stored in the
image database 60 may be located using the distance-based index structure of the binary tree 65. For example, the image database 60 may be queried to find and retrieve an image in the image database 60 that is similar to a query human image with respect to one or more specified criteria. As a result of the content-based search using the distance-based index structure of the binary tree 65, the image searching application 85 may display a query image 100 and a plurality of similar images 102 (including result images 104(1) to 104(n)), forming a solution set in the display 75 according to one embodiment of the present invention. While shown in FIG. 1 as including multiple result images, in other embodiments a single search result may be obtained. - In order to search for similar object images in the
image database 60, the binary tree 65 may index the images of objects for similarity search queries. In certain embodiments, binary tree 65 may be indexed using fuzzy logic. Also, instead of comparing all distances (obtained from the images), a selected set of distances may be compared for the selected set of features in the binary tree 65, using a feature vector. - A selected distance function for the selected points may compare a feature vector between the query image and one image of each of a plurality of sets stored in the
image database 60, in some embodiments of the present invention. By comparing only a single image from each image set at a feature level using a feature vector comparison, whole-image comparisons may be obviated. In this manner, for automatic face recognition, a content-based search in the image database 60 may reduce computational overhead, reducing the content for image matching. - By generating the
binary tree 65, the image database 60 may be enabled for image detection indicative of whether a particular person's image may be recognized based on at least two views of the face, in some embodiments of the present invention. Distance values in terms of a distance-to-distance function may be computed from fiducial points to compare a query image of a human face with the images stored in the image database 60. - The
binary tree 65 may have a multiplicity of nodes, each of which may include feature sets from a plurality of images, such as images of different individuals. By comparing only a single image from each node using fuzzy logic, the image searching application 85 may perform in a rapid manner using fewer resources, in some embodiments of the present invention. In one embodiment, the image database 60 may include images of human faces represented by corresponding feature vectors obtained using the feature extractor 90, which may apply the snake algorithm to obtain fiducial point information. The fiducial point information may be used to obtain a feature set of distances normalized and stored in the image database 60, using the binary tree 65 data structure, in various embodiments of the present invention. - A feature vector may include data values for distances between the fiducial points for the image of the human face. In this manner, the image of the human face in the
image database 60 having a feature vector may be compared with another image of the human face based on individual fiducial points by comparing the feature vectors completely or in a partial manner, as specified by the search algorithm. - To indicate an exact or approximate match, a distance difference, such as a normalized difference, may indicate a relatively close match or no match. For example, if the distance difference between a
query image 100 and a database face image is relatively high, a mismatch therebetween may be indicated. However, if the normalized distance difference is relatively small, for example, within a similarity measure, a closer match to the query image 100 may be indicated. - Referring to
FIG. 2, shown is a schematic representation of a front view 125a of a human face showing fiducial points in accordance with one embodiment of the present invention. As shown in FIG. 2, a face boundary 130 (shown in FIG. 2 as a dashed line) may be detected. In one embodiment, the image searching application 85 may use a conventional snake algorithm to detect the face boundary 130, identifying the outline of the face profile in the front view 125a. For example, the snake algorithm may be initialized around the human face and converged using gradient vector flow (GVF) snake-based contours. In addition, other boundaries, including eye boundaries, eyebrow boundaries, a nose boundary 150, and a lip boundary 155, may also be detected using a snakes algorithm, in one embodiment. - For human face recognition in accordance with one embodiment of the present invention, the
image searching application 85 may provide a relatively richer feature set using a face profile of a side view. Referring now to FIG. 3, shown is a side view 125b of the face of FIG. 2. Using side view 125b, a feature set obtained from the front view 125a may be enriched using a feature set having discrete features indicative of the outline of the face profile 195 (shown by the dashed line in FIG. 3) in the side view 125b. In this manner, an enriched feature set based on the properties of image content in both the front and side views 125a and 125b may be obtained. - After marking of fiducial points on the front and
side views 125a and 125b, feature vectors may be calculated. For the side view 125b, all distances may be normalized in terms of nose to chin distance, in one embodiment. Likewise, the features extracted in the front view 125a may be normalized in terms of distance between eye centers to nose tip, in one embodiment. - In an embodiment for human face recognition, for marking fiducial points on the front view, a multiplicity of reference points and locations thereof may be found on a face. Namely, inner and outer eye point locations, eye center, nose tip, eye brow point, and face width may be determined in one example embodiment. In such an embodiment, the location of inner and outer eye points may be found from snake contours converging around the two eyes. All the pixel values around each eye location are available. Referring to
FIG. 2, for the right eye, the leftmost point 160b gives the outer eye point and the rightmost point 170b gives the inner eye point. The mid-point of these two yields the iris center 172b. Similarly, for the left eye, the rightmost pixel location 160a gives the outer eye point while the leftmost pixel 170a gives the inner eye point. The mid-point of these two is the iris center 172a for the left eye. - In one embodiment, the mid-point between the two iris centers of the two eyes calculated above gives
eye center 175. This point may be identical to the bridge point calculated from the side profile, as will be discussed below. In one embodiment, a snake contour converging on the nose area may yield a set of pixel values from which the nose point, i.e., the tip of the nose, can be calculated. The mid-point between the two extremes of all the nose points gives the nose tip. As shown in FIG. 2, nose tip 180 may thus be determined. On both the eyebrows a snake contour converges, providing the eye brow points. From all these points, the two extreme points may be chosen, and the mid-point of the two extreme points of all the eyebrow points yields the central eye brow point. Another feature is the face width 190 at the nose tip location 180. The leftmost and the rightmost points on the face boundary are noted to calculate the face width at the nose tip location. - In an embodiment for human face recognition, in order to mark fiducial points on the side view, a multiplicity of reference points and locations thereof may be found on a face. The multiplicity of reference points and locations may include, but are not limited to, a nose point, a chin point, a forehead point, a bridge point, a nose bottom point, a lip bottom point, and a brow point.
- In such an embodiment, the nose point may be the rightmost point of the side profile as the protrusion of nose is maximum in any normal human face. For example, referring to
FIG. 3 ,point 200 is the noise point. In case there is more than one such point, the bottommost point may be selected as the nose point. For determining the chin point, lines may be drawn recursively from the nose point to all points below it on the profile and the angle of these lines with the vertical or horizontal is calculated. The point on the lower profile which gives the maximum angle with the horizontal or the minimum angle with the vertical, may be taken as thechin point 235, in one embodiment. - The point on the profile above the nose point whose distance from the nose point is same as the distance between the nose point and chin point may be taken as the
forehead point 210, in one embodiment. Thebridge point 215 lies on the profile between thenose point 200 and theforehead point 210. The equation of the line joining the forehead point and nose point may be calculated. From this line, perpendiculars may then be drawn to all points on the profile which lie between these two points. The point having maximum perpendicular distance from the line joining the nose point and forehead point is marked as the bridge point. The tangent to the angle between the nose point and all points between nose point and chin point may be calculated. In one embodiment, the point with the minimum angle with the horizontal or maximum angle with the vertical may be marked as nosebottom point 220. Further, the leftmost point between chin point and nose bottom point is marked as the lip bottom point, also known aschin curve point 225. If there are more than one such point in succession, the central point out of these is marked as the lip bottom point. The brow point may be the most raised point betweenforehead point 210 andbridge point 215. The rightmost point between forehead point and bridge point is marked asbrow point 230 inFIG. 3 . - In one embodiment, after marking all the points on the front view and side view, feature vectors may be calculated by measuring a distance between two of the marked points. For example, a predetermined feature set of feature vectors may be calculated for each of the side view and front view. In one embodiment, seven features of the front view of the human face image and seven features of the side view of the human face image may be extracted using active contours or snakes. In such an embodiment, a side view may include the following:
-
- 1. nose to forehead distance (Dn-fh);
- 2. nose to bridge distance (Dn-b);
- 3. nose to nose bottom distance (Dn-nb);
- 4. brow to bridge distance (Db-b);
- 5. brow to chin distance (Db-c);
- 6. nose to chin distance (Dn-c); and
- 7. nose bottom to lip bottom distance (Dnb-lb).
- In such an embodiment, all distances may be normalized. For example, the distances may be normalized in terms of nose to bridge distance, in one embodiment.
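The side-view point constructions described above lend themselves to short geometric routines. As one hedged example, the bridge point (the profile point with maximum perpendicular distance from the line joining the nose point and forehead point) might be located as follows; the coordinates are invented for illustration, not taken from the patent's figures.

```python
def perpendicular_distance(p, a, b):
    """Distance from 2-D point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    # |cross product| divided by the segment length gives the point-line distance.
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = ((by - ay) ** 2 + (bx - ax) ** 2) ** 0.5
    return num / den

def bridge_point(profile, nose, forehead):
    """Profile point farthest (perpendicularly) from the nose-forehead line."""
    return max(profile, key=lambda p: perpendicular_distance(p, nose, forehead))

nose, forehead = (10.0, 0.0), (6.0, 10.0)            # hypothetical coordinates
profile = [(9.0, 2.0), (7.0, 4.0), (6.5, 6.0), (7.5, 8.0)]
print(bridge_point(profile, nose, forehead))
```

The same max-over-candidates pattern applies to the chin point (maximum angle with the horizontal) and the other side-view points, with a different key function.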
- In one embodiment, a front view may include the following feature vectors:
-
- 1. distance between left and right iris center (Dle-re);
- 2. distance between two inner eye points (Diep);
- 3. distance between two outer eye points (Doep);
- 4. distance between eye center to nose tip (Dec-nt);
- 5. distance between left iris center and left eyebrow (Dlic-leb);
- 6. distance between right iris center and right eyebrow (Dric-reb); and
- 7. face width at nose tip (Dfw).
- In such an embodiment, all distances may be normalized. For example, the distances may be normalized in terms of distance between eye center to nose tip, in one embodiment.
- Referring now to Tables 1 and 2 below, shown are feature sets for a side view and a front view, respectively, of an example face in accordance with one embodiment of the present invention:
TABLE 1
Nose to forehead distance: 1.2606
Nose to bridge distance: 1.0000
Nose to nose bottom distance: 0.0942
Brow to bridge distance: 0.3863
Brow to chin distance: 2.0442
Nose to chin distance: 1.2608
Nose bottom to lip bottom distance: 0.6246
-
TABLE 2
Distance between left and right iris center: 1.4654
Distance between two inner eye points: 1.0652
Distance between two outer eye points: 1.9856
Distance between eye center to nose tip: 1.0000
Distance between left iris center and left eyebrow: 0.4676
Distance between right iris center and right eyebrow: 0.4676
Face width at nose tip: 2.7886
- As shown in Tables 1 and 2, the values may be normalized with respect to nose to bridge distance and eye center to nose tip, respectively. The above feature sets may be used to completely and uniquely represent a pair of images in an image database.
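Values like those in the tables can be reproduced by measuring distances between marked points and dividing by the chosen unit distance. A minimal sketch for the front-view case follows; the pixel coordinates and dictionary layout are assumptions for illustration, not the patent's data.

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Hypothetical pixel coordinates of a few front-view fiducial points.
points = {
    "left_iris": (40.0, 50.0), "right_iris": (80.0, 50.0),
    "eye_center": (60.0, 50.0), "nose_tip": (60.0, 80.0),
}

# Normalize by the eye-center-to-nose-tip distance, as in Table 2.
unit = dist(points["eye_center"], points["nose_tip"])
features = {
    "Dle-re": dist(points["left_iris"], points["right_iris"]) / unit,
    "Dec-nt": unit / unit,  # the normalizing feature is always 1.0
}
print(features)
```

Normalizing by one of the measured distances makes the feature vector independent of image scale, so faces photographed at different distances remain comparable.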
- After determining feature sets for front and side views of a face, the feature sets may be stored. For example, in the embodiment of
FIG. 1, the feature sets may be stored in a memory such as storage 50. More specifically, in the embodiment of FIG. 1, feature sets may be stored in image database 60 in nodes of binary tree 65 as metadata of the images from which the sets were obtained. In such manner, in certain embodiments, a query image may be searched against similar images in image database 60 in an efficient manner. - Referring to
FIG. 4A, shown is a flow diagram of a method of extracting feature information in accordance with one embodiment of the present invention. As shown in FIG. 4A, the feature extractor 90 may generate contours for object images to be stored in the image database 60 shown in FIG. 1 (block 250). While the generation of contours may vary in different embodiments, in one embodiment, contours may be generated using a snake algorithm, such as a Gradient Vector Flow (GVF) snake contour. According to one embodiment, the object images may include images of human faces. For example, one or more views, including the front view 125a shown in FIG. 2 and the side view 125b shown in FIG. 3, may be obtained from an individual and used to generate contours about the individual's face. In such an embodiment, contours may be developed around the individual's face boundary, and in certain embodiments, eye boundaries, nose boundary, brow boundaries, and lip boundary. A face boundary may also be generated for the side view. - Next, at
block 252, a set of fiducial points may be marked as reference points on the front and side views. Then, the feature extractor 90 may calculate feature vectors for the object views (block 254). In this manner, a database, such as image database 60, may be formed (block 256). - After a desired database is formed, a content-based search thereof may be enabled in different applications that involve image storage and searching, such as for human face recognition in the embodiment of
FIG. 1. The image database 60 may be searched for similar images based on the feature sets derived therefrom, stored as feature vectors associated with the object images. - Referring to
FIG. 4B, shown is a flow diagram of a fuzzy searching method in accordance with one embodiment of the present invention. As shown in FIG. 4B, fuzzy searcher (FS) 95 may enable searching for similar object images in a database with reduced computation, in some embodiments of the present invention. For example, feature vectors representing images in image database 60 may be searched. To this end, at block 270, the distance between two feature vectors of pairs of the object images in the image database 60 may be computed. Using a first similarity threshold, images in image database 60 may be partitioned into dual portions. In such manner, a first portion may include images having a similarity more than or equal to the first similarity threshold. For example, a first similarity threshold may be selected to be 0.5, in one embodiment of the present invention. In this manner, the first similarity threshold partitions the data space into two sets of object images. - In the embodiment of
FIG. 4B, a second similarity threshold may again partition the image database 60 into two more different sets of object images (block 274). A set of object images may include similar images, forming a cluster of images that may be treated as a collection of items based on a single image that is indicative of the properties of the content therein. - Referring to
FIG. 5, shown is a flow diagram of a searching algorithm in accordance with one embodiment of the present invention. As shown in FIG. 5, the image searching application 85 may receive a query image for a similarity search, in certain embodiments of the present invention (block 280). Using the fiducial points, a feature vector for the query image may be derived (block 282). For example, the feature vector may be formed as discussed above with regard to the calculation of feature vectors of object images obtained in forming an image database (e.g., block 254 of FIG. 4A). - Then, using the feature vector for the query image, a content search for similar images thereto may be enabled. At
block 284, instead of comparing with every image in every set, one image per set may be compared with the query image, consistent with some embodiments of the present invention. - A check at
diamond 286 may determine whether the current comparison provides a maximum similarity measure distance relative to the other sets in the partitioned image database 60. After a set has been compared to the query image, control may pass back to block 284 for a comparison with a next set of the database 60. On comparing the query image with one image of every set, the set which gives maximum similarity with the query image may be indicated to be the solution set (block 288). - According to one embodiment, a search algorithm may use fuzzy logic for human face recognition. In addition to face recognition, such a search algorithm may be used for other database-searching fields such as genetics (e.g., finding approximate DNA or protein sequences in a genetic database), text matching, or time-series analysis, for example.
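The one-representative-per-set comparison can be sketched as follows, assuming a Euclidean distance converted to a fuzzy similarity; the set names and feature values below are invented for illustration.

```python
import math

def similarity(x, y):
    """Fuzzy similarity from distance: 1 / (1 + Euclidean distance)."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return 1.0 / (1.0 + d)

# One representative feature vector per set (invented values).
representatives = {
    "A": [1.26, 1.00, 0.09],
    "B": [1.10, 0.90, 0.20],
    "C": [1.40, 1.05, 0.05],
}
query = [1.25, 1.01, 0.10]

# Compare the query against one image per set and keep the most similar set.
solution_set = max(representatives,
                   key=lambda k: similarity(query, representatives[k]))
print(solution_set)
```

With S sets of roughly N/S images each, this step costs S comparisons instead of N, which is the source of the reduced computation the title refers to.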
- In one embodiment of the present invention, statistically, a distance between two images may be calculated using the formula:
D_p = [ Σ_i |X_i − Y_i|^p ]^(1/p)
where D_p is the distance between two images with respect to a feature vector, N is the number of images in the database, X and Y are distance coordinates, and in one embodiment, for Euclidean distance computation, a value of p=2 may be selected, as an example. - Consistent with one embodiment, the search algorithm may be a fuzzy logic-based approach, based on fuzzy distance theory. The search algorithm for searching an image database having N images using the fuzzy logic-based approach may use a similarity measure to determine a pattern match within a desired threshold.
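The distance just described is the Minkowski form, which a short routine can make concrete; p = 2 recovers the Euclidean case mentioned in the text.

```python
def minkowski(x, y, p=2):
    """Minkowski distance between two feature vectors; p = 2 is Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

print(minkowski([0.0, 0.0], [3.0, 4.0]))        # Euclidean: 5.0
print(minkowski([0.0, 0.0], [3.0, 4.0], p=1))   # city-block: 7.0
```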
- In accordance with the algorithm, an N x N similarity matrix S representing fuzzy distances may be constructed, where S(i,j) = [1 + distance(i,j)]^(−1). Thus each fuzzy membership number in the matrix should be a fractional number representing a fuzzy distance. Then, the fuzzy transitivity of the matrix may be checked. If such transitivity does not exist, S may be replaced by the transitive closure S ∪ S^2 ∪ . . . ∪ S^(N−1).
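A sketch of the similarity-matrix construction and its transitive closure follows. The max-min composition used for S^2 and the elementwise maximum used for the union are assumptions (the patent does not spell out the fuzzy composition), and the toy distance matrix is invented.

```python
def maxmin_compose(a, b):
    """Fuzzy max-min composition of two square matrices (assumed for S^2)."""
    n = len(a)
    return [[max(min(a[i][k], b[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(s):
    """Repeat S <- S ∪ (S ∘ S) until stable; ∪ is taken elementwise as max."""
    while True:
        comp = maxmin_compose(s, s)
        merged = [[max(s[i][j], comp[i][j]) for j in range(len(s))]
                  for i in range(len(s))]
        if merged == s:
            return s
        s = merged

# Toy pairwise distances between three images (invented values).
dist_matrix = [[0.0, 1.0, 3.0], [1.0, 0.0, 1.0], [3.0, 1.0, 0.0]]
S = [[1.0 / (1.0 + d) for d in row] for row in dist_matrix]  # S(i,j)
S = transitive_closure(S)
print(S)
```

Note how the closure raises the similarity between images 0 and 2 from 0.25 to 0.5: they are each fairly similar to image 1, and max-min transitivity propagates that indirect link.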
- After construction of the similarity matrix and checking for fuzzy transitivity thereof, the matrix may be used to partition the
image database 60 into different portions or sets. In such manner, fuzzy searching of the image database 60 may be enabled, allowing query image searching to be performed based on fuzzy parameters, i.e., fuzzy distances. - In one embodiment, a first threshold may be selected and used to partition the
image database 60 into two parts such that images having a similarity more than or equal to the first threshold are present in a first portion, and those less than the first threshold are present in a second portion. In one such embodiment, the first threshold may be selected as 0.5. To partition the database into a plurality of different portions or sets, so that a fuzzy search may be performed in a computationally efficient manner while still providing search results that approximate a query image, the database may be partitioned additional times. For example, the two partitions of the database may in turn be partitioned using additional thresholds, e.g., a second, third, and fourth threshold. In one embodiment, a second threshold may be 0.75, a third threshold 0.80, and a fourth threshold 0.90. By partitioning a database in accordance with this embodiment of the present invention, the database may be partitioned into 16 sets, for example. - Then, when a query image is provided and processed (as discussed above with regard to
FIG. 5), the resulting feature vector may be compared with one image of every set in the image database 60, and the set which gives maximum similarity with the query image may be taken as the solution set. - Referring now to
FIG. 6, shown is a schematic depiction of a query image 100a in accordance with one embodiment of the present invention. As shown in FIG. 6, query image 100a may be used to query an image database 60. Further shown is a solution set 102a, which includes the results of the query, and has therein a first search result 104(1)a, a second search result 104(2)a, and a third search result 104(3)a. Additional fuzzy logic processing may be used to select from among the images of the solution set the closest match or matches, in certain embodiments. In such manner, a first, second, and third choice for a corresponding match of a face image may be determined, with fuzzy distance theory determining the first, second, and third choices, as in the embodiment of FIG. 6. - Embodiments of the present invention may be implemented in code and may be stored on a storage medium having stored thereon instructions which can be used to program a system, such as
system 10, to perform the instructions. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs); magnetic or optical cards; or any type of media suitable for storing electronic instructions, including programmable storage devices. - While the number of features to be extracted from contours of images may vary, in certain embodiments, fewer than ten such features may be extracted from a front image, and similarly fewer than ten features may be extracted from a side image. In such manner, in certain embodiments computational requirements may be lessened and analysis may be performed more rapidly. Accordingly, in certain embodiments, facial recognition may be performed using lower-power devices, such as handheld devices or other such systems. In one such embodiment, an identification system may be used to perform biometric analysis for identification of individuals seeking access to a secure environment, for example. In such a system, a video capture device may be used to obtain front and side images of an individual and process those images in a system. If positive identification of the individual is achieved, the individual may be given access to the secure environment.
- While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention.
Claims (23)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/643,467 US7191164B2 (en) | 2003-08-19 | 2003-08-19 | Searching for object images with reduced computation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/643,467 US7191164B2 (en) | 2003-08-19 | 2003-08-19 | Searching for object images with reduced computation |
Publications (2)
Publication Number | Publication Date |
---|---|
US20050044056A1 true US20050044056A1 (en) | 2005-02-24 |
US7191164B2 US7191164B2 (en) | 2007-03-13 |
Family
ID=34193886
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/643,467 Expired - Fee Related US7191164B2 (en) | 2003-08-19 | 2003-08-19 | Searching for object images with reduced computation |
Country Status (1)
Country | Link |
---|---|
US (1) | US7191164B2 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010053991A1 (en) * | 2000-03-08 | 2001-12-20 | Bonabeau Eric W. | Methods and systems for generating business models |
WO2003038749A1 (en) * | 2001-10-31 | 2003-05-08 | Icosystem Corporation | Method and system for implementing evolutionary algorithms |
EP1611546B1 (en) | 2003-04-04 | 2013-01-02 | Icosystem Corporation | Methods and systems for interactive evolutionary computing (iec) |
US7333960B2 (en) * | 2003-08-01 | 2008-02-19 | Icosystem Corporation | Methods and systems for applying genetic operators to determine system conditions |
US7356518B2 (en) * | 2003-08-27 | 2008-04-08 | Icosystem Corporation | Methods and systems for multi-participant interactive evolutionary computing |
US7707220B2 (en) * | 2004-07-06 | 2010-04-27 | Icosystem Corporation | Methods and apparatus for interactive searching techniques |
EP1782285A1 (en) * | 2004-07-06 | 2007-05-09 | Icosystem Corporation | Methods and apparatus for query refinement using genetic algorithms |
US8423323B2 (en) * | 2005-09-21 | 2013-04-16 | Icosystem Corporation | System and method for aiding product design and quantifying acceptance |
EP2032224A2 (en) * | 2006-06-26 | 2009-03-11 | Icosystem Corporation | Methods and systems for interactive customization of avatars and other animate or inanimate items in video games |
US7792816B2 (en) * | 2007-02-01 | 2010-09-07 | Icosystem Corporation | Method and system for fast, generic, online and offline, multi-source text analysis and visualization |
US9753948B2 (en) * | 2008-05-27 | 2017-09-05 | Match.Com, L.L.C. | Face search in personals |
US8645380B2 (en) | 2010-11-05 | 2014-02-04 | Microsoft Corporation | Optimized KD-tree for scalable search |
US20120173577A1 (en) * | 2010-12-30 | 2012-07-05 | Pelco Inc. | Searching recorded video |
US8370363B2 (en) | 2011-04-21 | 2013-02-05 | Microsoft Corporation | Hybrid neighborhood graph search for scalable visual indexing |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6418424B1 (en) * | 1991-12-23 | 2002-07-09 | Steven M. Hoffberg | Ergonomic man-machine interface incorporating adaptive pattern recognition based control system |
US5579471A (en) * | 1992-11-09 | 1996-11-26 | International Business Machines Corporation | Image query system and method |
US5751286A (en) * | 1992-11-09 | 1998-05-12 | International Business Machines Corporation | Image query system and method |
US5603328A (en) * | 1993-01-18 | 1997-02-18 | The State Of Israel, Ministry Of Defence, Armament Development Authority | Infra-red vascular angiography system |
US5604531A (en) * | 1994-01-17 | 1997-02-18 | State Of Israel, Ministry Of Defense, Armament Development Authority | In vivo video camera system |
US6057909A (en) * | 1995-06-22 | 2000-05-02 | 3Dv Systems Ltd. | Optical ranging camera |
US6345109B1 (en) * | 1996-12-05 | 2002-02-05 | Matsushita Electric Industrial Co., Ltd. | Face recognition-matching system effective to images obtained in different imaging conditions |
US6121969A (en) * | 1997-07-29 | 2000-09-19 | The Regents Of The University Of California | Visual navigation in perceptual databases |
US20020032366A1 (en) * | 1997-12-15 | 2002-03-14 | Iddan Gavriel J. | Energy management of a video capsule |
US6400996B1 (en) * | 1999-02-01 | 2002-06-04 | Steven M. Hoffberg | Adaptive pattern recognition based control system and method |
US20010025902A1 (en) * | 2000-03-22 | 2001-10-04 | Pascal Jule | Device for aircraft thrust recovery capable of linking a turboshaft engine and an engine strut |
US20040024694A1 (en) * | 2001-03-20 | 2004-02-05 | David Lawrence | Biometric risk management |
US20040019774A1 (en) * | 2002-06-07 | 2004-01-29 | Ryuji Fuchikami | Processor device and information processing device, compiling device, and compiling method using said processor device |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7804506B2 (en) | 2000-10-03 | 2010-09-28 | Creatier Interactive, Llc | System and method for tracking an object in a video and linking information thereto |
US7773093B2 (en) | 2000-10-03 | 2010-08-10 | Creatier Interactive, Llc | Method and apparatus for associating the color of an object with an event |
US20030098869A1 (en) * | 2001-11-09 | 2003-05-29 | Arnold Glenn Christopher | Real time interactive video system |
US7436988B2 (en) * | 2004-06-03 | 2008-10-14 | Arizona Board Of Regents | 3D face authentication and recognition based on bilateral symmetry analysis |
US20060078172A1 (en) * | 2004-06-03 | 2006-04-13 | Arizona Board Of Regents, A Body Corporate Of The State Of Arizona | 3D face authentication and recognition based on bilateral symmetry analysis |
US20060135864A1 (en) * | 2004-11-24 | 2006-06-22 | Westerlund L E | Peri-orbital trauma monitor and ocular pressure / peri-orbital edema monitor for non-ophthalmic surgery |
US7697736B2 (en) * | 2004-11-30 | 2010-04-13 | Samsung Electronics Co., Ltd | Face detection method and apparatus using level-set method |
US20060115161A1 (en) * | 2004-11-30 | 2006-06-01 | Samsung Electronics Co., Ltd. | Face detection method and apparatus using level-set method |
US8341152B1 (en) | 2006-09-12 | 2012-12-25 | Creatier Interactive Llc | System and method for enabling objects within video to be searched on the internet or intranet |
US20080281766A1 (en) * | 2007-03-31 | 2008-11-13 | Mitchell Kwok | Time Machine Software |
US20090012920A1 (en) * | 2007-03-31 | 2009-01-08 | Mitchell Kwok | Human Artificial Intelligence Software Program |
US20080256008A1 (en) * | 2007-03-31 | 2008-10-16 | Mitchell Kwok | Human Artificial Intelligence Machine |
US20080243750A1 (en) * | 2007-03-31 | 2008-10-02 | Mitchell Kwok | Human Artificial Intelligence Software Application for Machine & Computer Based Program Function |
US20070299802A1 (en) * | 2007-03-31 | 2007-12-27 | Mitchell Kwok | Human Level Artificial Intelligence Software Application for Machine & Computer Based Program Function |
US8224849B2 (en) | 2007-04-18 | 2012-07-17 | Microsoft Corporation | Object similarity search in high-dimensional vector spaces |
US7941442B2 (en) | 2007-04-18 | 2011-05-10 | Microsoft Corporation | Object similarity search in high-dimensional vector spaces |
US20110194780A1 (en) * | 2007-04-18 | 2011-08-11 | Microsoft Corporation | Object similarity search in high-dimensional vector spaces |
US20090043654A1 (en) * | 2007-05-30 | 2009-02-12 | Bates Daniel L | Method And System For Enabling Advertising And Transaction Within User Generated Video Content |
US20090164397A1 (en) * | 2007-12-20 | 2009-06-25 | Mitchell Kwok | Human Level Artificial Intelligence Machine |
US20110093418A1 (en) * | 2008-02-14 | 2011-04-21 | Mitchell Kwok | AI Time Machine |
CN103425709A (en) * | 2012-05-25 | 2013-12-04 | 致伸科技股份有限公司 | Photographic image management method and photographic image management system |
US20160283780A1 (en) * | 2015-03-25 | 2016-09-29 | Alibaba Group Holding Limited | Positioning feature points of human face edge |
US9916494B2 (en) * | 2015-03-25 | 2018-03-13 | Alibaba Group Holding Limited | Positioning feature points of human face edge |
US10547610B1 (en) * | 2015-03-31 | 2020-01-28 | EMC IP Holding Company LLC | Age adapted biometric authentication |
GB2537139A (en) * | 2015-04-08 | 2016-10-12 | Edward Henderson Charles | System and method for processing and retrieving digital content |
CN112587148A (en) * | 2020-12-01 | 2021-04-02 | 上海数创医疗科技有限公司 | Template generation method and device comprising fuzzification similarity measurement method |
Also Published As
Publication number | Publication date |
---|---|
US7191164B2 (en) | 2007-03-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7191164B2 (en) | Searching for object images with reduced computation | |
US20050041863A1 (en) | Enabling content-based search of objects in an image database with reduced matching | |
US7668405B2 (en) | Forming connections between image collections | |
US8386505B2 (en) | Identifying unique objects in multiple image collections | |
Srivastava et al. | Statistical shape analysis: Clustering, learning, and testing | |
US7596247B2 (en) | Method and apparatus for object recognition using probability models | |
Kotropoulos et al. | Frontal face authentication using discriminating grids with morphological feature vectors | |
US20040197013A1 (en) | Face meta-data creation and face similarity calculation | |
US20090141947A1 (en) | Method and system of person identification by facial image | |
US20070036398A1 (en) | Apparatus and method for partial component facial recognition | |
US20080279424A1 (en) | Method of Identifying Faces from Face Images and Corresponding Device and Computer Program | |
US20200356648A1 (en) | Device and method for user authentication on basis of iris recognition | |
Drira et al. | A riemannian analysis of 3D nose shapes for partial human biometrics | |
Wang et al. | Modeling and predicting face recognition system performance based on analysis of similarity scores | |
Maity et al. | 3D ear segmentation and classification through indexing | |
Efraty et al. | Facial component-landmark detection | |
Ohmaid et al. | Iris segmentation using a new unsupervised neural approach | |
Li et al. | A face recognition algorithm based on LBP-EHMM | |
Reddy et al. | A novel face recognition system by the combination of multiple feature descriptors. | |
Sahbi et al. | Robust matching by dynamic space warping for accurate face recognition | |
Thakral et al. | Comparison between local binary pattern histograms and principal component analysis algorithm in face recognition system | |
Yashavanth et al. | Performance analysis of multimodal biometric system using LBP and PCA | |
KR20220125422A (en) | Method and device of celebrity identification based on image classification | |
CN112766139A (en) | Target identification method and device, storage medium and electronic equipment | |
Jayaraman et al. | Efficient similarity search on multidimensional space of biometric databases |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTEL CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ACHARYA, TINKU;REEL/FRAME:014416/0043 Effective date: 20030818 |
|
AS | Assignment |
Owner name: INDIAN INSTITUTE OF TECHNOLOGY, INDIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RAY, AJOY K.;MISHRA, RANJIT;REEL/FRAME:015193/0319 Effective date: 20040110 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20190313 |