WO2001065486A1 - System and method for locating an object in an image using models - Google Patents
System and method for locating an object in an image using models
- Publication number
- WO2001065486A1 (PCT/EP2001/001968)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature
- model
- image
- image data
- data
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
Definitions
- the present invention pertains generally to the field of image processing, and in particular, the invention relates to a system and method for determining information related to features of an object in a digital image using models.
- Systems and methods are known that analyze digital images and recognize, for example, human faces. Determining the existence of a human face in an image or extracting facial feature information has been used in various applications such as automated surveillance systems, monitoring systems, human interfaces to computers, and television and video signal analysis.
- facial templates are first determined based upon average positions of facial features (i.e., eyes, nose and mouth) for a particular sex or race. A digital image is then matched to a template to identify sex or race. Variations in facial expressions (e.g., a smile), however, may cause such template matching to produce erroneous results.
- Another shortcoming of this method is that the exact positions of facial features such as the eyes and nose are not actually determined. Facial color tone detection and template matching typically only determine whether a human face is present in an image.
- Edge detection approaches are known for locating the position of the eyes, and are effective in this application because the eyes typically have high edge-density values. However, eyeglasses and facial hair such as a mustache may cause erroneous results. In addition, edge detection typically cannot be used to determine the position of other facial features such as a nose. Such edge detection approaches are also slow because a global search/operation must be performed on the entire image.
- Methods have been used to improve video/image communication and/or to reduce the amount of information required to be transmitted.
- One method is called model-based coding.
- Low bit-rate communication can be achieved by encoding and transmitting only representative parameters of an object in an image. At the remote site, the object is synthesized using the transmitted parameters.
- One of the most difficult problems in model-based coding is providing feature correspondence quickly, easily and robustly.
- in sequential frames, the same features must be matched correctly.
- a block-matching process is used to compare pixels in a current frame and a next frame to determine feature correspondence. If the entire frame is searched for feature correspondence, the process is slow and may yield incorrect results due to mismatching of regions having the same gradient values. If only a subset of the frame is searched, the processing time may be improved. However, in this situation, the process may fail to determine any feature correspondence.
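- To make this trade-off concrete, the following is a minimal sketch of block matching with a sum-of-absolute-differences (SAD) cost; it is not taken from the patent, and the function and parameter names are hypothetical. Restricting `search` keeps the process fast but, as noted above, may miss the correct correspondence if the feature moved farther than the window allows:

```python
import numpy as np

def block_match(prev_frame, next_frame, top, left, block=16, search=8):
    """Find the best match for a block of prev_frame inside a bounded
    search window of next_frame using the sum of absolute differences."""
    ref = prev_frame[top:top + block, left:left + block].astype(np.int32)
    best_sad, best_pos = np.inf, (top, left)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if (y < 0 or x < 0 or y + block > next_frame.shape[0]
                    or x + block > next_frame.shape[1]):
                continue
            cand = next_frame[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(ref - cand).sum())
            if sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos, best_sad
```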
- One aspect of the present invention is to provide a feature extraction system that uses front-end models to define regions of interest in an image so that the positions of specific features are determined quickly.
- an image processing device includes front-end modeling of an object in an image to improve accuracy and reduce processing time of feature determination, as well as reduce the amount of memory required to store information related to features of the object and overall image.
- One embodiment of the invention relates to an image processing apparatus including an object detector arranged to determine whether an object is present in image data, at least one model of the object, and a feature extractor which identifies at least one feature of the object. The feature is identified in accordance with the model.
- Other embodiments of the invention relate to a memory medium and a method of determining positions of facial features in an image.
- Fig. 1 is a block diagram of a feature extraction system in accordance with one aspect of the present invention.
- Fig. 2 is a schematic front view of an exemplary human face model.
- Fig. 3 is a block diagram of an exemplary computer system capable of supporting the system of Fig. 1.
- Fig. 4 is a block diagram showing the architecture of the computer system of Fig. 3.
- Fig. 5 is a flow chart of a process in accordance with one embodiment of the invention.
- the system 10 includes at least one object determinator 11, a feature extraction unit 13 and a model 12. While two object determinators 11 are shown in Fig. 1, it should be understood that a single unit may be used.
- the model 12 approximates an object to be located in an image. The object may be anything that can be approximated by a model, such as a human face or any other physical object or measurable phenomenon.
- Image data comprising a left frame 14 and a right frame 15 (e.g., a stereo pair) is input into the system 10.
- This input data may be received, for example, from another system such as a video conferencing system, a security system or an animation system, or from a remote/local data source.
- the object determinator 11 may use conventional color tone detection or template matching approaches to determine the presence/existence of an object in the left and right frames 14 and 15.
- a biometrics-based disparity method is used by the object determinator 11, as described in U.S. Patent Application 09/385,280, filed August 30, 1999, incorporated herein by reference.
- disparity information is used to determine/locate the position of a human head in the left and right frames 14 and 15.
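- As a rough illustration of how disparity can localize a near object, a sketch follows; it is an OpenCV-based approximation of the general idea, not the method of U.S. 09/385,280, and the threshold is illustrative:

```python
import cv2
import numpy as np

def locate_near_object(left_gray, right_gray, min_disp_frac=0.6):
    """Pixels with large disparity are close to the camera, so the
    largest high-disparity blob is taken as the head candidate."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    mask = (disp > disp.max() * min_disp_frac).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None  # nothing but background found
    idx = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # skip background
    x, y, w, h = stats[idx, :4]
    return int(x), int(y), int(w), int(h)  # candidate bounding box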
- the model 12 is applied to the object determined/located by the object determinator 11.
- the level of detail of the model 12 may be tailored to the particular needs of the application, e.g., video coding or animation, i.e., from generic to highly detailed.
- the model 12 is used to quickly determine the approximate positions of one or more specific features of the object. For example, if the object is a human face, the specific features could be the eyes, nose and mouth.
- Fig. 2 shows an example of a human face model 100 based upon a face segmentation technique.
- the model 100 is created using a dataset that describes a parameterized face. This dataset defines a three-dimensional description of the human face object.
- the parameterized face is given as an anatomically-based structure by modeling muscle and skin actuators and force-based deformations.
- a set of polygons defines the model 100.
- Each vertex of the polygons is defined by X, Y and Z coordinates.
- Each vertex is identified by an index number.
- a particular polygon is defined by a set of indices surrounding the polygon.
- a code may also be added to the set of indices to define a color for the particular polygon.
- Specific vertices define the approximate position of the eyes, nose and mouth.
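- To make this structure concrete, a minimal sketch of such an indexed polygon mesh follows; the class, the sample coordinates and the feature-to-vertex mapping are illustrative assumptions, not values from the patent's dataset:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class FaceModel:
    # vertex index -> (X, Y, Z) coordinates
    vertices: Dict[int, Tuple[float, float, float]]
    # each polygon is the ordered set of vertex indices surrounding it,
    # optionally paired with a color code
    polygons: List[Tuple[List[int], Optional[str]]] = field(default_factory=list)
    # named features mapped to the vertex indices that locate them
    features: Dict[str, List[int]] = field(default_factory=dict)

# illustrative fragment only; a real parameterized face has hundreds
# of vertices and polygons
model = FaceModel(
    vertices={4: (-30.0, 25.0, 10.0), 5: (30.0, 25.0, 10.0),
              23: (0.0, 0.0, 25.0), 58: (0.0, -30.0, 12.0)},
    polygons=[([4, 5, 23], "skin")],
    features={"left_eye": [4], "right_eye": [5], "nose": [23], "mouth": [58]},
)
```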
- the feature extractor 13 determines the actual position/location of the specific features. The determination can be quickly performed because the search/determination is localized to a specific area of the left or right frame 14 or 15 in accordance with the model 12.
- the model 12 is used as a template to determine and/or narrow where to start searching for the specific features.
- the feature extractor 13 uses a disparity map to extract specific feature positions or coordinates from the left and right frames 14 and 15, as described in U.S. Patent Application 09/385,280. As related to Fig. 2, these positions correlate to the various vertices of the model 100.
- the feature extractor 13 preferably provides information directly related to vertices 4, 5, 23 and 58 as shown in Fig. 2.
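- A minimal sketch of this localization follows (an illustration with hypothetical names, not the disparity-based extractor cited above): the model-predicted position seeds a small window, and a match score is evaluated only inside it rather than over the whole frame:

```python
import numpy as np

def refine_feature(frame, predicted_xy, score_fn, window=20):
    """Search for a feature only in a window around the position the
    model predicts; score_fn(frame, x, y) returns a match score
    (higher is better) for the feature centered at (x, y)."""
    px, py = predicted_xy
    best, best_xy = -np.inf, (px, py)
    for y in range(max(0, py - window), min(frame.shape[0], py + window + 1)):
        for x in range(max(0, px - window), min(frame.shape[1], px + window + 1)):
            s = score_fn(frame, x, y)
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy
```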
- the feature extractor 13 may also incorporate modeling methods and systems described in U.S. Patent Application 09/422,735, filed October 21, 1999, incorporated herein by reference.
- the polygon vertices of the model 100 can be adjusted to more closely match a particular human face.
- the model 100 template may be adapted and animated to enable movement, expressions and synchronized audio (i.e., speech).
- the model 100 may be dynamically transformed in real-time to more closely approximate a particular human face.
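- One simple way such adaptation could be performed is a least-squares similarity fit; this is only a sketch, and the actual deformation method of U.S. 09/422,735 is richer. The idea is to solve for the scale, rotation and translation that best map the model's feature vertices onto the measured feature positions:

```python
import numpy as np

def fit_similarity(model_pts, image_pts):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) mapping 2D model feature vertices onto measured
    feature positions (Umeyama's method)."""
    X = np.asarray(model_pts, dtype=float)  # N x 2 model vertices
    Y = np.asarray(image_pts, dtype=float)  # N x 2 measured positions
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    U, S, Vt = np.linalg.svd(Yc.T @ Xc)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (Xc ** 2).sum()
    t = my - s * (R @ mx)
    return s, R, t  # adapted position of vertex x is s * R @ x + t
```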
- the invention is not limited to 3D face models.
- the invention may be used with models of other physical objects and scenes, such as 3D or 2D models of automobiles, rooms and data structures.
- the feature extractor 13 gathers information related to the particular object or scene in question, e.g., the position of wheels or the location of data items. Further processing is then based on this information.
- the system 10 outputs the coordinates/location of the specific features of the object. This information may be stored in a database 16 and used as feedback to improve performance.
- the database 16 may contain a set 17 of objects previously processed by the system 10 or recognized (e.g., a list of specific faces related to individuals attending a video conference or a list of specific faces related to individuals employed by a specific company). Once an object (under determination) is identified as being included in the set 17, the model 12 may be customized for that object.
- the feature extractor 13 may be skipped or only used periodically to further improve the speed of processing.
- the feature extractor 13 may be used only on "key" frames (as in an MPEG format).
- Features of other non-key frames can be determined by tracking corresponding feature points in a temporal fashion. This is particularly effective in a video stream of data such as from a video conference.
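- A sketch of one conventional way to propagate feature points between key frames follows; pyramidal Lucas-Kanade optical flow is an illustrative choice here, as the patent does not prescribe a particular tracker:

```python
import cv2
import numpy as np

def track_features(prev_gray, next_gray, prev_points):
    """Propagate feature points found on a key frame (e.g., eye, nose
    and mouth coordinates) into the next frame, so full feature
    extraction need only run on key frames."""
    pts = np.float32(prev_points).reshape(-1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return next_pts.reshape(-1, 2)[ok], ok  # tracked points and validity mask
```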
- the output of the system 10 may also be used in conjunction with model- based coding as described in U.S. Patent Application 09/422,735.
- FIG. 5 shows a flow diagram of various steps performed in accordance with a preferred embodiment of the present invention.
- multiple objects may be processed.
- the present invention may locate and process the faces (in an image) of all persons attending a video conference.
- it is determined whether the object has been previously processed or recognized by comparing information related to the object to model/data of known objects stored in the database 16.
- the object may also be identified as previously processed by estimating the positional movement of the object from a previous frame. For example, if an object is located at position A in an image, an object located at position A plus or minus a small positional tolerance in subsequent frames may be treated as the same object.
- an updated model based upon a generic model is created and stored in the database 16.
- extraction of specific features may be performed in a temporal manner in S9.
- output information is also used as feedback data. This enables the system 10 to locate the object more quickly in subsequent images/frames of input data, because the feedback data may be used as an approximate location (i.e., coordinate/position data) at which to begin searching for the object.
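- A small sketch of this feedback loop follows (the function names are hypothetical; the passage does not spell out the exact mechanism): the last known location seeds the next frame's search, with a fallback to a global search when the local one fails:

```python
def process_stream(frames, detect_full, detect_local, window=40):
    """Yield the object location per frame: a full-frame search until
    the object is first found, then local searches seeded with the
    previous location (the feedback data), falling back as needed."""
    last_xy = None
    for frame in frames:
        if last_xy is None:
            last_xy = detect_full(frame)                  # global search
        else:
            found = detect_local(frame, last_xy, window)  # seeded search
            last_xy = found if found is not None else detect_full(frame)
        yield last_xy
```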
- the database 16 also provides another significant advantage. This embodiment allows for a compact and efficient system and method of storing a large number of features related to an object. In contrast, conventional systems require a large amount of memory to store information related to a digital image. Based upon the information stored in the database 16, the entire image, or at least important aspects of the image, may be recreated locally, or the image may be transmitted to be recreated at a remote site.
- the system 10 is implemented by computer readable code executed by a data processing apparatus.
- the code may be stored in a memory within the data processing apparatus or read/downloaded from a memory medium such as a CD-ROM or floppy disk.
- hardware circuitry may be used in place of, or in combination with, software instructions to implement the invention.
- the invention may be implemented on a digital television platform using a Trimedia processor for processing and a television monitor for display.
- the invention can also be implemented on a computer 30 shown in Fig. 3.
- the computer 30 includes a network connection 31 for interfacing to a data network, such as a variable-bandwidth network or the Internet, and a fax/modem connection 32 for interfacing with other remote sources such as a video or a digital camera (not shown).
- the computer 30 also includes a display 33 for displaying information (including video data) to a user, a keyboard 34 for inputting text and user commands, a mouse 35 for positioning a cursor on the display 33 and for inputting user commands, a disk drive 36 for reading from and writing to floppy disks installed therein, and a CD-ROM drive 37 for accessing information stored on CD-ROM.
- the computer 30 may also have one or more peripheral devices attached thereto, such as a pair of video conference cameras for inputting images, or the like, and a printer 38 for outputting images, text, or the like.
- Figure 4 shows the internal structure of the computer 30 which includes a memory 40 that may include a Random Access Memory (RAM), Read-Only Memory (ROM) and a computer-readable medium such as a hard disk.
- the items stored in the memory 40 include an operating system 41, data 42 and applications 43.
- the data stored in the memory 40 may also comprise temporal information as described above.
- operating system 41 is a windowing operating system, such as UNIX, although the invention may be used with other operating systems as well, such as Microsoft Windows 95.
- the applications stored in memory 40 are a video coder 44, a video decoder 45 and a frame grabber 46.
- the video coder 44 encodes video data in a conventional manner.
- the video decoder 45 decodes video data which has been coded in the conventional manner.
- the frame grabber 46 allows single frames from a video signal stream to be captured and processed.
- the CPU 50 comprises a microprocessor or the like for executing computer readable code, i.e., applications, such as those noted above, out of the memory 40.
- applications may be stored in memory 40 (as noted above) or, alternatively, on a floppy disk in disk drive 36 or a CD-ROM in CD-ROM drive 37.
- the CPU 50 accesses the applications (or other data) stored on a floppy disk via the memory interface 52 and accesses the applications (or other data) stored on a CD-ROM via CD-ROM drive interface 53.
- Application execution and other tasks of the computer 30 may be initiated using the keyboard 34 or the mouse 35.
- Output results from applications running on the computer 30 may be displayed to a user on display 33 or, alternatively, output via network connection 31.
- input video data may be received through the video interface 54 or the network connection 31.
- the input data may be decoded by the video decoder 45.
- Output data from the system 10 may be coded by the video coder 44 for transmission through the video interface 54 or the network connection 31.
- the display 33 preferably comprises a display processor for forming video images based on decoded video data provided by the CPU 50 over the bus 55. Output results from the various applications may be provided to the printer 38.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2001564100A JP2003525504A (en) | 2000-03-03 | 2001-02-21 | System and method for locating an object in an image using a model |
EP01905815A EP1177525A1 (en) | 2000-03-03 | 2001-02-21 | System and method for locating an object in an image using models |
KR1020017014057A KR20020021789A (en) | 2000-03-03 | 2001-02-21 | System and method for locating an object in an image using models |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/518,028 | 2000-03-03 | ||
US09/518,028 US6792144B1 (en) | 2000-03-03 | 2000-03-03 | System and method for locating an object in an image using models |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2001065486A1 true WO2001065486A1 (en) | 2001-09-07 |
Family
ID=24062232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2001/001968 WO2001065486A1 (en) | 2000-03-03 | 2001-02-21 | System and method for locating an object in an image using models |
Country Status (5)
Country | Link |
---|---|
US (1) | US6792144B1 (en) |
EP (1) | EP1177525A1 (en) |
JP (1) | JP2003525504A (en) |
KR (1) | KR20020021789A (en) |
WO (1) | WO2001065486A1 (en) |
Families Citing this family (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000034867A1 (en) | 1998-12-09 | 2000-06-15 | Network Ice Corporation | A method and apparatus for providing network and computer system security |
US7346929B1 (en) | 1999-07-29 | 2008-03-18 | International Business Machines Corporation | Method and apparatus for auditing network security |
US8006243B2 (en) | 1999-12-07 | 2011-08-23 | International Business Machines Corporation | Method and apparatus for remote installation of network drivers and software |
AU2001257400A1 (en) | 2000-04-28 | 2001-11-12 | Internet Security Systems, Inc. | System and method for managing security events on a network |
US9027121B2 (en) * | 2000-10-10 | 2015-05-05 | International Business Machines Corporation | Method and system for creating a record for one or more computer security incidents |
US7130466B2 (en) * | 2000-12-21 | 2006-10-31 | Cobion Ag | System and method for compiling images from a database and comparing the compiled images with known images |
AU2002243763A1 (en) | 2001-01-31 | 2002-08-12 | Internet Security Systems, Inc. | Method and system for configuring and scheduling security audits of a computer network |
US6870956B2 (en) | 2001-06-14 | 2005-03-22 | Microsoft Corporation | Method and apparatus for shot detection |
US7657419B2 (en) | 2001-06-19 | 2010-02-02 | International Business Machines Corporation | Analytical virtual machine |
US8218829B2 (en) * | 2001-08-20 | 2012-07-10 | Polycom, Inc. | System and method for using biometrics technology in conferencing |
WO2003058451A1 (en) | 2002-01-04 | 2003-07-17 | Internet Security Systems, Inc. | System and method for the managed security control of processes on a computer system |
JP3999561B2 (en) * | 2002-05-07 | 2007-10-31 | 松下電器産業株式会社 | Surveillance system and surveillance camera |
US7274741B2 (en) * | 2002-11-01 | 2007-09-25 | Microsoft Corporation | Systems and methods for generating a comprehensive user attention model |
US20040088723A1 (en) * | 2002-11-01 | 2004-05-06 | Yu-Fei Ma | Systems and methods for generating a video summary |
US7116716B2 (en) * | 2002-11-01 | 2006-10-03 | Microsoft Corporation | Systems and methods for generating a motion attention model |
US7913303B1 (en) | 2003-01-21 | 2011-03-22 | International Business Machines Corporation | Method and system for dynamically protecting a computer system from attack |
US7164798B2 (en) * | 2003-02-18 | 2007-01-16 | Microsoft Corporation | Learning-based automatic commercial content detection |
US7260261B2 (en) * | 2003-02-20 | 2007-08-21 | Microsoft Corporation | Systems and methods for enhanced image adaptation |
US7400761B2 (en) | 2003-09-30 | 2008-07-15 | Microsoft Corporation | Contrast-based image attention analysis framework |
US7471827B2 (en) * | 2003-10-16 | 2008-12-30 | Microsoft Corporation | Automatic browsing path generation to present image areas with high attention value as a function of space and time |
US7657938B2 (en) | 2003-10-28 | 2010-02-02 | International Business Machines Corporation | Method and system for protecting computer networks by altering unwanted network data traffic |
US20050123886A1 (en) * | 2003-11-26 | 2005-06-09 | Xian-Sheng Hua | Systems and methods for personalized karaoke |
US9053754B2 (en) | 2004-07-28 | 2015-06-09 | Microsoft Technology Licensing, Llc | Thumbnail generation and presentation for recorded TV programs |
US7986372B2 (en) * | 2004-08-02 | 2011-07-26 | Microsoft Corporation | Systems and methods for smart media content thumbnail extraction |
US7562056B2 (en) * | 2004-10-12 | 2009-07-14 | Microsoft Corporation | Method and system for learning an attention model for an image |
US20070112811A1 (en) * | 2005-10-20 | 2007-05-17 | Microsoft Corporation | Architecture for scalable video coding applications |
US7773813B2 (en) * | 2005-10-31 | 2010-08-10 | Microsoft Corporation | Capture-intention detection for video content analysis |
US8180826B2 (en) | 2005-10-31 | 2012-05-15 | Microsoft Corporation | Media sharing and authoring on the web |
US8196032B2 (en) * | 2005-11-01 | 2012-06-05 | Microsoft Corporation | Template-based multimedia authoring and sharing |
US7599918B2 (en) * | 2005-12-29 | 2009-10-06 | Microsoft Corporation | Dynamic search with implicit user intention mining |
USD666663S1 (en) | 2008-10-20 | 2012-09-04 | X6D Limited | 3D glasses |
USD603445S1 (en) | 2009-03-13 | 2009-11-03 | X6D Limited | 3D glasses |
USD624952S1 (en) | 2008-10-20 | 2010-10-05 | X6D Ltd. | 3D glasses |
USRE45394E1 (en) | 2008-10-20 | 2015-03-03 | X6D Limited | 3D glasses |
CA2684513A1 (en) | 2008-11-17 | 2010-05-17 | X6D Limited | Improved performance 3d glasses |
US8542326B2 (en) | 2008-11-17 | 2013-09-24 | X6D Limited | 3D shutter glasses for use with LCD displays |
USD646451S1 (en) | 2009-03-30 | 2011-10-04 | X6D Limited | Cart for 3D glasses |
USD672804S1 (en) | 2009-05-13 | 2012-12-18 | X6D Limited | 3D glasses |
USD650956S1 (en) | 2009-05-13 | 2011-12-20 | X6D Limited | Cart for 3D glasses |
USD671590S1 (en) | 2010-09-10 | 2012-11-27 | X6D Limited | 3D glasses |
USD669522S1 (en) | 2010-08-27 | 2012-10-23 | X6D Limited | 3D glasses |
USD692941S1 (en) | 2009-11-16 | 2013-11-05 | X6D Limited | 3D glasses |
USD662965S1 (en) | 2010-02-04 | 2012-07-03 | X6D Limited | 3D glasses |
USD664183S1 (en) | 2010-08-27 | 2012-07-24 | X6D Limited | 3D glasses |
US10122970B2 (en) | 2011-09-13 | 2018-11-06 | Polycom, Inc. | System and methods for automatic call initiation based on biometric data |
USD711959S1 (en) | 2012-08-10 | 2014-08-26 | X6D Limited | Glasses for amblyopia treatment |
CN107392189B (en) * | 2017-09-05 | 2021-04-30 | 百度在线网络技术(北京)有限公司 | Method and device for determining driving behavior of unmanned vehicle |
CN109803114A (en) * | 2017-11-17 | 2019-05-24 | 中国电信股份有限公司 | Front monitoring front-end control method, device and video monitoring system |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1994023390A1 (en) * | 1993-03-29 | 1994-10-13 | Matsushita Electric Industrial Co., Ltd. | Apparatus for identifying person |
JP3017384B2 (en) | 1993-07-19 | 2000-03-06 | シャープ株式会社 | Feature region extraction device |
JP3030485B2 (en) * | 1994-03-17 | 2000-04-10 | 富士通株式会社 | Three-dimensional shape extraction method and apparatus |
US5852669A (en) | 1994-04-06 | 1998-12-22 | Lucent Technologies Inc. | Automatic face and facial feature location detection for low bit rate model-assisted H.261 compatible coding of video |
US6044168A (en) * | 1996-11-25 | 2000-03-28 | Texas Instruments Incorporated | Model based faced coding and decoding using feature detection and eigenface coding |
US6526161B1 (en) | 1999-08-30 | 2003-02-25 | Koninklijke Philips Electronics N.V. | System and method for biometrics-based facial feature extraction |
- 2000
  - 2000-03-03 US US09/518,028 patent/US6792144B1/en not_active Expired - Fee Related
- 2001
  - 2001-02-21 WO PCT/EP2001/001968 patent/WO2001065486A1/en not_active Application Discontinuation
  - 2001-02-21 EP EP01905815A patent/EP1177525A1/en not_active Withdrawn
  - 2001-02-21 JP JP2001564100A patent/JP2003525504A/en not_active Withdrawn
  - 2001-02-21 KR KR1020017014057A patent/KR20020021789A/en not_active Application Discontinuation
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5719951A (en) * | 1990-07-17 | 1998-02-17 | British Telecommunications Public Limited Company | Normalized image feature processing |
US5901244A (en) * | 1996-06-18 | 1999-05-04 | Matsushita Electric Industrial Co., Ltd. | Feature extraction system and face image recognition system |
WO1999022318A1 (en) * | 1997-10-27 | 1999-05-06 | Massachusetts Institute Of Technology | Image search and retrieval system |
WO1999053443A1 (en) * | 1998-04-13 | 1999-10-21 | Eyematic Interfaces, Inc. | Wavelet-based facial motion capture for avatar animation |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003028377A1 (en) * | 2001-09-14 | 2003-04-03 | Vislog Technology Pte Ltd. | Apparatus and method for selecting key frames of clear faces through a sequence of images |
US20210027431A1 (en) * | 2013-03-13 | 2021-01-28 | Kofax, Inc. | Content-based object detection, 3d reconstruction, and data extraction from digital images |
US11620733B2 (en) | 2013-03-13 | 2023-04-04 | Kofax, Inc. | Content-based object detection, 3D reconstruction, and data extraction from digital images |
US11818303B2 (en) * | 2013-03-13 | 2023-11-14 | Kofax, Inc. | Content-based object detection, 3D reconstruction, and data extraction from digital images |
US11593585B2 (en) | 2017-11-30 | 2023-02-28 | Kofax, Inc. | Object detection and image cropping using a multi-detector approach |
US11640721B2 (en) | 2017-11-30 | 2023-05-02 | Kofax, Inc. | Object detection and image cropping using a multi-detector approach |
US11694456B2 (en) | 2017-11-30 | 2023-07-04 | Kofax, Inc. | Object detection and image cropping using a multi-detector approach |
Also Published As
Publication number | Publication date |
---|---|
JP2003525504A (en) | 2003-08-26 |
KR20020021789A (en) | 2002-03-22 |
EP1177525A1 (en) | 2002-02-06 |
US6792144B1 (en) | 2004-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6792144B1 (en) | System and method for locating an object in an image using models | |
US6580810B1 (en) | Method of image processing using three facial feature points in three-dimensional head motion tracking | |
US20210279971A1 (en) | Method, storage medium and apparatus for converting 2d picture set to 3d model | |
US5926575A (en) | Model-based coding/decoding method and system | |
Baskan et al. | Projection based method for segmentation of human face and its evaluation | |
CN113420719B (en) | Method and device for generating motion capture data, electronic equipment and storage medium | |
EP4198814A1 (en) | Gaze correction method and apparatus for image, electronic device, computer-readable storage medium, and computer program product | |
KR102124466B1 (en) | Apparatus and method for generating conti for webtoon | |
EP1125241A1 (en) | System and method for biometrics-based facial feature extraction | |
US20020164068A1 (en) | Model switching in a communication system | |
US20210158593A1 (en) | Pose selection and animation of characters using video data and training techniques | |
KR101996371B1 (en) | System and method for creating caption for image and computer program for the same | |
KR20120120858A (en) | Service and method for video call, server and terminal thereof | |
EP3218896A1 (en) | Externally wearable treatment device for medical application, voice-memory system, and voice-memory-method | |
CN113689527B (en) | Training method of face conversion model and face image conversion method | |
KR20160049191A (en) | Wearable device | |
KR101189043B1 (en) | Service and method for video call, server and terminal thereof | |
US20210158565A1 (en) | Pose selection and animation of characters using video data and training techniques | |
CN111145082A (en) | Face image processing method and device, electronic equipment and storage medium | |
CN113766297B (en) | Video processing method, playing terminal and computer readable storage medium | |
WO2001029767A2 (en) | System and method for three-dimensional modeling | |
CN115424318A (en) | Image identification method and device | |
JP2023512359A (en) | Associated object detection method and apparatus | |
CN114067394A (en) | Face living body detection method and device, electronic equipment and storage medium | |
JPH0991432A (en) | Method for extracting doubtful person |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1; Designated state(s): JP KR |
| AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
| WWE | Wipo information: entry into national phase | Ref document number: 2001905815; Country of ref document: EP |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| ENP | Entry into the national phase | Ref country code: JP; Ref document number: 2001 564100; Kind code of ref document: A; Format of ref document f/p: F |
| WWE | Wipo information: entry into national phase | Ref document number: 1020017014057; Country of ref document: KR |
| WWP | Wipo information: published in national office | Ref document number: 2001905815; Country of ref document: EP |
| WWP | Wipo information: published in national office | Ref document number: 1020017014057; Country of ref document: KR |
| WWW | Wipo information: withdrawn in national office | Ref document number: 2001905815; Country of ref document: EP |