US20060050791A1 - Scene change detection method using two-dimensional DP matching, and image processing apparatus for implementing the method - Google Patents

Info

Publication number
US20060050791A1
Authority
US
United States
Prior art keywords
image data
feature amount
scene change
frame
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/247,330
Inventor
Hirotaka Shiiyama
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Priority to US11/247,330
Publication of US20060050791A1
Status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data
    • G06F16/71 Indexing; Data structures therefor; Storage structures
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval using metadata automatically derived from the content
    • G06F16/7847 Retrieval using metadata automatically derived from the content, using low-level visual features of the video content
    • G06F16/785 Retrieval using low-level visual features of the video content, using colour or luminescence

Definitions

  • DP matching selects, as the optimum solution, a route along which the accumulated similarity distance is minimized under the constraint of a matching window; the constraint may be given as the width of the matching window.
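  • To make this concrete, the following is a minimal sketch (in Python) of DP matching between two label sequences under a matching-window constraint. The name dp_dist, the penalty callable, and the default window width r=1 are illustrative assumptions, not the patent's notation; only the minimum-distance recurrence and the window test follow the description above.

```python
def dp_dist(seq_a, seq_b, penalty, r=1):
    """DP-matching distance between two label sequences.

    `penalty(a, b)` returns the inter-label penalty (e.g., from the
    FIG. 10 color-cell penalty matrix); cells with |i - j| > r lie
    outside the matching window and are never visited.
    """
    INF = float("inf")
    n, m = len(seq_a), len(seq_b)
    dist = [[INF] * (m + 1) for _ in range(n + 1)]
    dist[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(i - j) > r:          # outside the matching window
                continue
            cost = penalty(seq_a[i - 1], seq_b[j - 1])
            dist[i][j] = cost + min(dist[i - 1][j - 1],   # match
                                    dist[i - 1][j],       # skip in seq_a
                                    dist[i][j - 1])       # skip in seq_b
    return dist[n][m]
```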
  • FIG. 12 is a flow chart for explaining the sequence of similarity computation using 2D DP matching according to this embodiment. The process explained with reference to FIG. 11 will be described in more detail below with reference to this flow chart. Note that the process shown in this flow chart is executed for different similarity test images in steps S20 and S22.
  • In step S101, variable i indicating the line number of the test frame image and variable j indicating the line number of the query frame image are both initialized to 1 to indicate the first line. In steps S102 and S103, the sequential label sets of the i-th line of the test frame image and of the j-th line of the query frame image are acquired.
  • In step S104, the distance between the two sequential label sets acquired in steps S102 and S103 is computed by DP matching using the penalty matrix of color cells described with reference to FIG. 10.
  • In step S105, if the distance obtained in step S104 is the minimum of those obtained so far for the i-th line, the line number of interest (j) is stored in line matrix element LINE[i].
  • Steps S103 to S105 are repeated for all lines of the query frame image (steps S106 and S107). In this manner, the number of the query frame image line whose sequential label set has the minimum distance to that of the i-th line of the test frame image is stored in LINE[i].
  • In step S108, the minimum distance obtained by the above process is compared with a predetermined threshold value (Thresh). If it is equal to or larger than Thresh, the flow advances to step S109, and “!”, indicating that the i-th line is similar to none of the lines in the query image, is stored in LINE[i].
  • Steps S102 to S108 are executed for all the lines in the test frame image (steps S110 and S111) to obtain LINE[1] through LINE[imax], which are output as the similar line matrix LINE[i].
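  • As a sketch of this loop, the similar line matrix can be built as follows, assuming the dp_dist function from the earlier sketch and an illustrative threshold; the names are hypothetical, not the patent's notation.

```python
def similar_line_matrix(test_mat, query_mat, penalty, thresh, r=1):
    """Build the similar line matrix of FIG. 12: for each test line,
    the 1-based query line number with minimum DP distance, or '!' if
    no query line is closer than thresh."""
    line = []
    for test_row in test_mat:                         # steps S110-S111
        best_dist, best_j = float("inf"), None
        for j, query_row in enumerate(query_mat, 1):  # steps S103-S107
            d = dp_dist(test_row, query_row, penalty, r)
            if d < best_dist:                         # step S105
                best_dist, best_j = d, j
        # steps S108-S109: mark dissimilar lines with '!'
        line.append(best_j if best_dist < thresh else "!")
    return line
```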
  • In step S113, DP matching between a standard line matrix [1, 2, ..., imax] and the similar line matrix LINE[1, 2, ..., imax] is done to compute the distance between them. The standard line matrix starts from 1 and increases in unitary increments in the column direction.
  • The present invention proposes a dynamic penalty for the penalty setup of DP matching between the similar line matrix and the standard line matrix in the column direction.
  • The dynamic penalty dynamically sets a penalty between line numbers, and the penalty between line numbers changes depending on the images.
  • To this end, sequential label set distances in the horizontal (line) direction of the similarity query image itself are computed, and penalties between lines are obtained based on these distances.
  • FIG. 13 is a flow chart showing the setup sequence of a dynamic penalty value according to this embodiment.
  • In step S121, variables i and j are set to 1 and 2, respectively.
  • A sequential label set of the i-th line of the query frame image is acquired in step S122, and a sequential label set of the j-th line of the query frame image is acquired in step S123.
  • In step S124, DP matching between the sequential label sets of the i-th and j-th lines of the query frame image is done using the color penalty matrix to obtain their distance.
  • In step S125, the DP matching distance obtained in step S124 is stored in LINE[i][j] as the penalty between the sequential label sets of the i-th and j-th lines of the query frame image, and also in LINE[j][i] as the penalty between those of the j-th and i-th lines.
  • Steps S123 to S125 are repeated until the value of variable j reaches jmax in step S126. In this way, penalty values between the sequential label set of the i-th line and those of the (i+1)-th to (jmax)-th lines are determined.
  • The processes in steps S123 to S126 are repeated until the value of variable i reaches (imax-1) in steps S128, S129, and S130.
  • In step S131, the penalty values of the diagonal components of LINE[i][j], which are not determined by the above processes, are set; since each line has zero distance to itself, zero penalty is stored.
  • Finally, a penalty for “!” is determined in step S132: it is set to a value somewhat larger than the maximum of all the penalty values in LINE[i][j]. If this penalty value is made extremely large, the ambiguous-search property may suffer.
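  • A sketch of this setup follows, again assuming the dp_dist helper above; the margin added for “!” is an illustrative choice, reflecting the remark that an extreme value would hurt ambiguous search.

```python
def dynamic_penalty(query_mat, penalty, r=1, bang_margin=1.0):
    """Derive line-number penalties from DP distances between the
    query frame's own lines (FIG. 13, steps S121-S132)."""
    imax = len(query_mat)
    table = {}
    for i in range(1, imax + 1):
        table[(i, i)] = 0.0                      # step S131: diagonal
        for j in range(i + 1, imax + 1):         # steps S122-S130
            d = dp_dist(query_mat[i - 1], query_mat[j - 1], penalty, r)
            table[(i, j)] = table[(j, i)] = d    # step S125
    # step S132: '!' costs somewhat more than the largest line penalty
    worst = max(table.values(), default=0.0)
    for i in range(1, imax + 1):
        table[("!", i)] = table[(i, "!")] = worst + bang_margin
    return table
```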
  • Step S113 is performed using the penalties among sequential label sets computed for the query frame image in this manner, thus obtaining the similarity between the query frame image and the test frame image.
  • The aforementioned matching process also has the following feature. If the test and query matrices shown in FIG. 11 are very similar to each other, the similar line matrix “123” is obtained, and their distance is zero. On the other hand, if the similar line matrix is “!12” or “212”, the test frame image is likely to have deviated downward from the query frame image; if it is “23!” or “233”, the test frame image is likely to have deviated upward. Likewise, if the similar line matrix is “13!” or “!13”, the test frame image is likely to be reduced in scale with respect to the query frame image, and a test frame image obtained by enlarging the query frame image may be detected in the same way.
  • In step S113, vertical deviation can be effectively absorbed by DP matching between the similar line matrix and the standard line matrix. For this reason, differences between the query and test frame images resulting from the aforementioned upward or downward deviation, enlargement, reduction, or the like can be effectively absorbed, and the similarity between the frame images of a moving image can be satisfactorily determined.
  • 2D DP matching of this embodiment thus allows ambiguity between the label sequences of neighboring label matrices and absorbs the influence of positional deviations of an image.
  • When an image deviates, the tinctures of blocks may become slightly different, but such differences are absorbed by the aforementioned penalty matrix.
  • Through the synergy of 2D DP matching, which allows ambiguity, and the tolerance of feature amount differences provided by the penalty matrix, matching that is less influenced by upward, downward, right, and left deviations, and by those caused by enlargement/reduction, can be achieved.
  • In the above description, the similar line matrix is obtained using sequential label sets corresponding to the horizontal block arrangements. Alternatively, a similar line matrix can be obtained using label sequences corresponding to the vertical block arrangements by the same method, or both the horizontal and vertical directions may be combined.
  • In this embodiment, color information is selected as the image feature amount. However, the present invention is not limited to this specific image feature amount, and may be practiced by obtaining other image parameters in units of segmented image blocks.
  • The level of ambiguity in comparing a query frame image and a test frame image can be set as desired by changing the width of the so-called matching window in DP matching.
  • FIG. 14 is a view for explaining the matching window in DP matching; its width can be changed by changing the value r.
  • In this way, similarity computations can be made at a desired ambiguity level (matching window width). Changing the width is effective for scene change detection in a moving image including very quick motions, or in a movie that suffers considerable camera shake. For example, the value r may be changed in accordance with the amount of shaking detected by a sensor.
  • The user may also change the scene change sensitivity by manually changing the value r at the keyboard 104; alternatively, the value r may be automatically increased when scene changes are detected too frequently, and decreased when too few scene changes are detected.
  • Furthermore, the width of the matching window in horizontal DP matching and that in vertical DP matching may be set independently, or the two matching windows may be changed at different rates. In this manner, ambiguity levels can be set very flexibly for similarity computations. For example, when the block order shown in FIG. 7A is used and horizontal movement of an object of interest in a query image is to be tolerated, the width of the matching window in horizontal DP matching can be increased to raise the ambiguity level in the horizontal direction.
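  • Continuing the earlier sketches, one hypothetical way to expose such independent ambiguity levels is to pass separate window widths to the horizontal (per-line) pass and the final vertical (line-number) pass; all values and names below are illustrative.

```python
toy_penalty = lambda a, b: 0.0 if a == b else 2.0   # stand-in for FIG. 10

test_mat  = [["a", "b", "c"], ["d", "e", "f"], ["g", "h", "i"]]
query_mat = [["a", "b", "c"], ["d", "e", "f"], ["g", "h", "x"]]

r_horizontal = 2   # widened: tolerate horizontal movement of the object
r_vertical   = 1   # kept tight: little vertical ambiguity allowed

line = similar_line_matrix(test_mat, query_mat, toy_penalty,
                           thresh=4.0, r=r_horizontal)
standard = list(range(1, len(line) + 1))   # the standard line matrix
# the final line-number DP (step S113) would then run with r_vertical
```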
  • Referring back to FIG. 4, it is checked in step S16 whether a scene change was detected in step S15. If YES in step S16, scene change information is additionally stored in the scene change information accumulation unit in step S17. Finally, the sequential label set of the oldest of the N previous frame images is discarded, and the sequential label set of the current frame image is stored in the RAM in step S18. The flow then returns to step S10 to repeat the processes as long as frame images to be processed remain.
  • As described above, according to this embodiment, a feature amount group (a group of feature amounts obtained by segmenting a feature amount space) is expressed by a single symbol (i.e., labeled), and a distance based on the similarity between labels is given using the 2D DP matching and the penalty matrix described above. Since the computation volume of the distance between two image blocks can thereby be greatly reduced, and similar feature amounts are expressed by identical labels, the similarity between two images can be computed satisfactorily.
  • As a consequence, an inter-frame pattern matcher can be implemented which produces a clearly low inter-frame similarity output in response to the appearance of a frame that has absolutely no continuity with neighboring or adjacent image frames, while absorbing some continuous changes in the moving image (e.g., color differences caused by a change in image angle, a change in position or deformation of an object upon, e.g., panning of the camera, or a change in an image sensing condition such as the light source).
  • Note that the present invention may be applied either to a system constituted by a plurality of devices (e.g., a host computer, an interface device, a reader, a printer, and the like) or to an apparatus consisting of a single device (e.g., a copying machine, a facsimile apparatus, or the like).
  • The objects of the present invention are also achieved by supplying a storage medium, which records a program code of a software program that can implement the functions of the above-mentioned embodiments, to a system or apparatus, and by reading out and executing the program code stored in the storage medium by a computer (or a CPU or MPU) of the system or apparatus.
  • the program code itself read out from the storage medium implements the functions of the above-mentioned embodiments, and the storage medium which stores the program code constitutes the present invention.
  • As the storage medium for supplying the program code, for example, a floppy disk, hard disk, optical disk, magneto-optical disk, CD-ROM, CD-R, magnetic tape, nonvolatile memory card, ROM, and the like may be used.
  • The functions of the above-mentioned embodiments may be implemented not only by executing the readout program code on the computer but also by some or all of the actual processing operations executed by an OS (operating system) running on the computer on the basis of instructions of the program code.
  • the functions of the above-mentioned embodiments may be implemented by some or all of actual processing operations executed by a CPU or the like arranged in a function extension board or a function extension unit, which is inserted in or connected to the computer, after the program code read out from the storage medium is written in a memory of the extension board or unit.
  • The storage medium stores a program including program codes corresponding to the aforementioned flow charts (shown in FIGS. 4, 9, 12, 13, and the like).
  • As described above, according to the present invention, scene change detection can be implemented by an inter-frame pattern matcher which produces a clearly low inter-frame similarity output in response to the appearance of a frame that has absolutely no continuity with neighboring or adjacent image frames, while absorbing some continuous changes in the moving image (e.g., color differences caused by a change in image angle, a change in position or deformation of an object upon, e.g., panning of the camera, or a change in an image sensing condition such as the light source).

Abstract

There is provided a scene change detection method which is independent from an encoder of a file, and can assure high real-time performance and effective, quick processes, and an image processing apparatus that can implement the method. Frame image data is extracted from moving image data (S11), the frame image data is segmented into a plurality of blocks (S12), labels are assigned in accordance with feature amounts acquired in units of blocks (S13), a sequential label set is generated by arranging the assigned labels in a predetermined block order (S14), similarities between the generated sequential label set and sequential label sets of a previous frame image data group are computed by two-dimensional DP matching and a scene change frame in the moving image is detected from a computed similarity group (S15, S16), and the detected scene change frame information is stored in correspondence with the frame image data.

Description

    FIELD OF THE INVENTION
  • The present invention relates to a scene change detection method for detecting a change in moving image scene (a so-called scene change) from a moving image, and an image processing apparatus for implementing the method.
  • BACKGROUND OF THE INVENTION
  • As conventional methods of extracting a scene change from a moving image, there have been proposed a method of computing changes in the color histograms of the frames that form the moving image and detecting a scene change by applying a threshold process to the evaluation value, a method using motion vector information as used in MPEG2 or the like, and so on.
  • The method using the histograms has merits such as low computation cost, high-speed processing, and high real-time performance, but has the following demerits. That is, a scene change cannot be detected from scenes having similar histograms, or scene changes are excessively detected due to an abrupt deformation or rotation of an object.
  • On the other hand, the method using a motion vector can assure high precision, and can also be used in other applications such as object extraction and the like, but requires much time for computations, resulting in poor real-time performance. Even when motion vector information is extracted from data encoded by MPEG2 and motion vector computations are omitted, since the precision of these motion vector computations depends on the performance of an encoder, high performance cannot always be guaranteed for all MPEG2 files.
  • SUMMARY OF THE INVENTION
  • The present invention has been made in consideration of the aforementioned problems, and has as its object to provide a scene change detection method which considers color and composition, and an image processing apparatus for implementing the method.
  • It is another object of the present invention to provide a scene change detection method which is independent from an encoder of a file, and can assure high real-time performance and effective, quick processes, and an image processing apparatus that can implement the method.
  • In order to achieve the above objects, an image processing apparatus according to the present invention, comprises: labeling means for extracting frame image data from moving image data, segmenting the frame image data into blocks, and assigning labels in accordance with feature amounts acquired in units of blocks; label sequence generation means for generating a label sequence by arranging the labels assigned by the labeling means in a predetermined block order; label sequence accumulation means for accumulating the label sequence generated by the label sequence generation means in correspondence with the frame image data; similarity computation means for computing similarities between the generated label sequence and label sequences of a previous frame image data group; scene change detection means for detecting a scene change frame in the moving image from a computed similarity group; and scene change storage means for storing the detected scene change frame information in correspondence with the frame image data.
  • In order to achieve the above objects, a scene change detection method according to the present invention comprises the steps of: extracting frame image data from moving image data, segmenting the frame image data into blocks, and assigning labels in accordance with feature amounts acquired in units of blocks; generating a label sequence by arranging the assigned labels in a predetermined block order; computing similarities between the generated label sequence and label sequences of a previous frame image data group; and detecting a scene change frame in the moving image from a computed similarity group.
  • In order to achieve the above objects, a storage medium stores a control program for making a computer execute scene change detection, and the control program includes: the step of extracting frame image data from moving image data, segmenting the frame image data into blocks, and assigning labels in accordance with feature amounts acquired in units of blocks; the step of generating a label sequence by arranging the assigned labels in a predetermined block order; the step of computing similarities between the generated label sequence and label sequences of a previous frame image data group; and the step of detecting a scene change frame in the moving image from a computed similarity group.
  • Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
  • FIG. 1 is a block diagram showing an example of the arrangement of an image processing apparatus having a scene change detection function according to an embodiment of the present invention;
  • FIG. 2 is a block diagram showing an example of the arrangement of the scene change detection function according to the embodiment of the present invention;
  • FIG. 3 is a view for explaining the storage state of scene change information in a scene change information accumulation/management DB according to the embodiment of the present invention;
  • FIG. 4 is a flow chart showing the sequence of a scene change information generation process according to the embodiment of the present invention;
  • FIG. 5 shows an example of an image segmented into blocks according to the embodiment of the present invention;
  • FIG. 6 is a view for explaining a multi-dimensional feature amount space according to the embodiment of the present invention;
  • FIGS. 7A to 7D are views for explaining examples of block orders used upon generating a sequential label set according to the embodiment of the present invention;
  • FIG. 8 is a view for explaining the storage format of moving image data file information in a moving image management database according to the embodiment of the present invention;
  • FIG. 9 is a flow chart showing the detailed sequence of a scene change detection process in FIG. 4 according to the embodiment of the present invention;
  • FIG. 10 shows an example of a penalty matrix among labels used upon computing similarity by comparing label matrices according to the embodiment of the present invention;
  • FIG. 11 is a view for explaining a similarity computation process according to the embodiment of the present invention;
  • FIG. 12 is a flow chart for explaining the sequence of similarity computation using two-dimensional DP matching according to the embodiment of the present invention;
  • FIG. 13 is a flow chart showing the sequence for setting a dynamic penalty value according to the embodiment of the present invention; and
  • FIG. 14 is a view for explaining adjustment of a matching window in DP matching according to the embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Preferred embodiments of the present invention will now be described in detail in accordance with the accompanying drawings.
  • Example of Arrangement of Image Processing Apparatus of This Embodiment
  • FIG. 1 is a block diagram showing an example of the arrangement of an image processing apparatus having a scene change detection function according to an embodiment of the present invention.
  • Referring to FIG. 1, reference numeral 101 denotes a CPU which executes various kinds of computation and control in a scene change detection apparatus of this embodiment. Reference numeral 102 denotes a ROM which stores a boot program executed upon starting up the apparatus, and various permanent data. Reference numeral 103 denotes a RAM which stores control programs to be processed by the CPU 101, and provides a work area used when the CPU 101 executes various kinds of control. For example, the RAM 103 has a sequential label set memory 103a for storing a sequential label set of a plurality of frames of a moving image, and a program memory 103b including a sequential label set generation module, DP matching module, and the like, and is also used as a frame image memory (to be described later). Reference numeral 104 denotes a keyboard; and 105, a mouse, which provide various input operation environments for the user.
  • Reference numeral 106 denotes an external storage device which comprises a hard disk, floppy disk, CD-ROM, or the like, and stores, e.g., a moving image management database (to be described later). Reference numeral 108 denotes a network interface which allows communications with devices on a network. Reference numeral 109 denotes an interface; and 110, a peripheral device for inputting a moving image. Reference numeral 111 denotes a bus for connecting the aforementioned components.
  • Note that the external storage device 106 in the above arrangement may use the one connected on the network. Also, the peripheral device 110 for inputting a moving image indicates various devices for inputting a moving image such as a video deck, video player, television tuner, and the like in addition to a video camera. Furthermore, the control programs in the RAM 103 may be loaded from the external storage device 106, peripheral device 110, or network, and may be executed.
  • FIG. 2 is a block diagram showing the arrangement of a scene change detection function of the image processing apparatus of this embodiment.
  • Referring to FIG. 2, reference numeral 11 denotes a user interface unit which detects various operation inputs from the user using a display 107, and the keyboard 104 and mouse 105. Reference numeral 12 denotes a moving image input unit for capturing frames of a moving image via the peripheral device 110 for inputting a moving image. Reference numeral 13 denotes a frame image memory for storing frame image data captured by the moving image input unit 12 in a predetermined area of the RAM 103.
  • Reference numeral 14 denotes an image feature amount extraction unit for extracting feature amounts from images stored in the frame image memory 13 in the sequence to be described later. Reference numeral 15 denotes a sequential feature-label set generation unit for generating a sequential label set on the basis of the feature amounts extracted by the feature amount extraction unit 14. Reference numeral 16 denotes a scene detection unit based on pattern matching, which detects a scene change frame by computing similarities between the generated sequential label set and a group of sequential label sets of N neighboring or adjacent frame images stored in a predetermined area of the RAM 103, and by performing a threshold comparison on the computed similarities. The unit 16 then discards the sequential label set of the oldest frame image stored in the RAM and stores the sequential label set of the current frame image, according to the FIFO (First In, First Out) principle.
  • Reference numeral 17 denotes a scene change information accumulation unit which stores and accumulates information indicating a frame corresponding to a scene change point of moving image data obtained by the moving image input unit 12 and the like.
  • FIG. 3 is a view for explaining the storage state of scene change information in the scene change information accumulation unit 17.
  • Image frames in moving image data are assigned unique image frame IDs within that moving image, and the scene change information accumulation unit 17 holds scene change information 112 in the form of the image frame ID from which a new scene starts.
  • Reference numeral 18 denotes a moving image management database (to be referred to as a moving image management DB hereinafter), which manages moving image data 113 and the scene change information stored in the scene change information accumulation unit 17 in correspondence with each other in the data format shown in FIG. 8.
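  • As a minimal sketch of this bookkeeping (the dict layout and names are illustrative assumptions, not the DB schema of FIG. 8), scene change information can be held as the IDs of the frames at which new scenes start, keyed by the moving image they belong to:

```python
scene_changes: dict[str, list[int]] = {}

def record_scene_change(movie_id: str, frame_id: int) -> None:
    """Store the ID of a frame from which a new scene starts."""
    scene_changes.setdefault(movie_id, []).append(frame_id)

record_scene_change("movie-001", 0)     # first scene starts at frame 0
record_scene_change("movie-001", 142)   # a detected scene change point
```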
  • A scene change detection process will be described in detail below.
  • Frame images are extracted in turn from a moving image, each of these frame images is segmented into a plurality of blocks, and labels are assigned in accordance with feature amounts acquired in units of blocks. A sequential label set is generated by arranging the assigned labels on the basis of a predetermined block order, and generated sequential label sets for N previous frames are stored in the memory. At this time, the sequential label set of the current frame is compared with those of previous frames stored in the memory, and the presence/absence of a scene change is determined based on the comparison result.
  • Operation Example of Image Processing Apparatus of This Embodiment
  • An example of the operation of the image processing apparatus of this embodiment with the above arrangement will be described below. Note that the example to be described below adopts three colors, i.e., red (R), green (G), and blue (B) as image feature amounts that pay attention to colors, and will be explained using processes in a three-dimensional color space.
  • (Process for Obtaining Sequential Label Set from Frame)
  • A process for generating a sequential label set by extracting one frame image from a moving image, and segmenting the frame image into a plurality of blocks, assigning labels in accordance with feature amounts acquired in units of blocks, and arranging the assigned labels on the basis of a predetermined block order will be explained below.
  • FIG. 4 is a flow chart showing the sequence of the process for obtaining a sequential label set from a frame according to this embodiment.
  • When a moving image file to be subjected to scene change detection is designated via the user interface unit 11, the processes in steps S11 to S18 are repeated as long as it is determined in step S10 that remaining frame images are present.
  • In step S11, one frame image is read out by seeking the moving image file, and is held in the frame image memory 13. In step S12, the held image is segmented into a plurality of blocks. In this embodiment, the image is segmented into a plurality of vertical and horizontal blocks. FIG. 5 shows an example of an image segmented into blocks according to this embodiment. As shown in FIG. 5, in this embodiment the image is segmented into a total of nine (3×3) blocks. In step S13, feature amounts of the segmented blocks are computed, and the obtained feature amounts are labeled in the following sequence.
  • FIG. 6 is a view for explaining a multi-dimensional feature amount space according to this embodiment.
  • As shown in FIG. 6, the multi-dimensional feature amount space (RGB color space) is segmented into a plurality of blocks (color blocks), i.e., cells (color cells), and unique labels are assigned as serial numbers to the individual cells (color cells). The reason why the multi-dimensional feature amount space (RGB color space) is segmented into a plurality of blocks is to absorb delicate feature amount (color) differences.
  • In step S13, a predetermined image feature amount extraction computation process is done for each segmented block obtained in step S12 to find the cell in the multi-dimensional feature amount space to which that block belongs, thus obtaining the corresponding label. That is, in the image feature amount extraction computation process of this embodiment, it is determined which color cell each pixel in a segmented block belongs to, and the label of the color cell with the highest frequency of occurrence is taken as the parameter label (color label) of that segmented image block. This process is done for all the blocks.
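  • The following is a minimal sketch of steps S12-S13, assuming a frame held as an H x W x 3 RGB numpy array and an RGB cube split into 3 bins per axis (27 color cells); the actual cell layout of FIG. 6 is not given numerically in the text, so the binning here is an illustrative assumption.

```python
import numpy as np

def block_labels(frame, grid=3, bins=3):
    """Segment a frame into grid x grid blocks (FIG. 5) and label each
    block with its most frequent color cell (FIG. 6)."""
    h, w, _ = frame.shape
    step = 256 // bins
    # map every pixel to a color-cell serial number in [0, bins**3)
    cells = np.minimum(frame // step, bins - 1).astype(int)
    cell_ids = (cells[..., 0] * bins + cells[..., 1]) * bins + cells[..., 2]
    labels = np.empty((grid, grid), dtype=int)
    for by in range(grid):
        for bx in range(grid):
            block = cell_ids[by * h // grid:(by + 1) * h // grid,
                             bx * w // grid:(bx + 1) * w // grid]
            labels[by, bx] = np.bincount(block.ravel()).argmax()
    return labels
```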
  • After parameter labels are assigned to the individual blocks, the parameter labels assigned to the blocks are arranged in a predetermined block order to generate a parameter sequential label set (to be referred to as a sequential label set hereinafter) in step S14.
  • FIGS. 7A to 7D are views for explaining examples of block orders used upon generating a sequential label set. The parameter labels are arranged in ascending order of numerals in boxes of the segmented image blocks shown in each of FIGS. 7A to 7D to generate a sequential label set.
  • Note that scan methods that can be applied to this embodiment include, for example:
  • horizontal scans (e.g., scan methods for making a left-to-right scan from up to down: FIG. 7A, making a left-to-right scan from down to up: FIG. 7C, making a right-to-left scan from up to down: FIG. 7B, making a right-to-left scan from down to up: FIG. 7D, and so forth are available); and
  • vertical scans (e.g., scan methods for making an up-to-down scan from left to right, making an up-to-down scan from right to left, making a down-to-up scan from left to right, making a down-to-up scan from right to left, and so forth are available (none of these methods are shown)). However, the present invention is not limited to such specific methods.
  • This embodiment adopts a scan method which satisfies the following conditions.
  • (1) Since label matrices are time-serially compared, it is not preferable to reverse this order. Hence, all images must be scanned by a predetermined scan method to obtain label matrices.
  • (2) Nearby blocks are preferably located at near positions in a sequential label set.
  • (3) Matching is easier when the labels of blocks that correspond to an object of interest appear as early as possible and continuously appear for a long period of time.
  • (4) Even when an object has moved or the camera angle has changed, the arrangement of labels should not change drastically.
  • This embodiment adopts a scan method for making a horizontal scan from left to right, up to down, as shown in FIG. 7A.
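  • For the FIG. 7A order, the sequential label set is then simply the row-major flattening of the label grid; the other scan orders of FIGS. 7B to 7D would reorder rows or columns accordingly.

```python
def sequential_label_set(labels):
    """Arrange block labels left to right, top to bottom (FIG. 7A)."""
    # e.g. sequential_label_set(block_labels(frame)) for a 3x3 grid
    return [label for row in labels for label in row]
```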
  • (Process for Detecting Scene Change)
  • Sequential label sets for N previous frames, obtained in the above sequence, are accumulated in the memory, and a scene change is detected in step S15 by comparing the sequential label sets of the N previous frames with the sequential label set of the current frame. Assume that the minimum value of N is 2.
  • An example of detecting a scene change from sequential label sets in this embodiment will be explained below using the flow chart in FIG. 9, which shows the details of step S15, taking as an example a method for detecting scene changes from a moving image obtained by connecting frames including quite different scenes. Even such a moving image includes errors such as omission of a frame upon editing, or a sudden change in brightness in only one frame upon, e.g., emission of flash light of a camera, and it is important to prevent operation errors (excessive detection of scene changes) for such a moving image.
  • In step S20, the similarity between the sequential label set obtained from the current frame image and that obtained from the immediately preceding frame image is computed by a method to be described later. In step S21, the similarity is compared with a threshold value. If the similarity is larger than the threshold value, status indicating the absence of a scene change is returned in step S25, and the flow returns to the main routine. The flow then returns to FIG. 4 and advances from step S16 to step S18 to store the sequential label set of the current frame image in the RAM. After that, the flow returns to step S10 to proceed with the process for capturing a new frame image in step S11 and the subsequent steps.
  • If it is determined in step S21 that the similarity is equal to or smaller than the threshold value, the sequential label set obtained from the current frame image is compared with that obtained from a frame image two frames before the current frame image, which was obtained and stored in the RAM previously, in step S22, and if it is determined in step S23 that the similarity between these two sequential label sets is equal to or smaller than the threshold value, status indicating the presence of scene change is returned. If it is determined in step S23 that the similarity is larger than the threshold value, it is determined that an edit error or a sudden change in brightness in only one frame upon, e.g., emission of flash light of a camera has occurred, and status indicating the absence of scene change is returned in step S25.
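  • In terms of code, a minimal sketch of this decision logic (in Python, with a hypothetical similarity() routine standing in for the 2D DP matching described below; not the patent's own code) might look like:

```python
# Sketch of the decision logic of FIG. 9 (steps S20-S25): a candidate is
# reported as a scene change only if it differs from both the preceding
# frame and the frame two frames before it.
def detect_scene_change(current, prev1, prev2, threshold, similarity):
    if similarity(current, prev1) > threshold:   # steps S20, S21
        return False                             # step S25: no scene change
    if similarity(current, prev2) > threshold:   # steps S22, S23
        return False   # one-frame disturbance (flash, edit error)
    return True        # scene change detected
```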
  • (Process for Computing Similarity)
  • A method of computing the similarity between frame images, i.e., comparing two sequential label sets to check whether they are similar to each other (to compute their similarity), will be described in detail below. Note that the sequential label set acquired in step S14 will be referred to hereinafter as the query label matrix of the query frame image.
  • In order to give a small penalty (distance) to neighboring cells and a large penalty to cells which are far from each other upon pattern matching between labels, a penalty matrix between labels, shown in FIG. 10, is introduced. In steps S20 and S22 of step S15, the label matrices are compared in consideration of this penalty matrix; for this comparison, the two-dimensional DP matching (to be referred to as 2D DP matching hereinafter) described below is used.
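  • FIG. 10 itself is not reproduced here, so the following sketch merely illustrates one plausible way such a penalty matrix could be built, assuming each label has a representative feature vector for its cell of the partitioned feature space; the actual values of FIG. 10 depend on how that space was segmented:

```python
# Illustrative (assumed) construction of a label-to-label penalty table:
# neighboring cells of the feature space get a small penalty, distant
# cells a large one.
import itertools

def build_penalty_matrix(representatives):
    """representatives: dict mapping label -> representative feature
    vector (tuple) of the corresponding cell of the feature space."""
    penalty = {}
    for a, b in itertools.product(representatives, repeat=2):
        va, vb = representatives[a], representatives[b]
        # Euclidean distance between cell representatives as the penalty.
        penalty[(a, b)] = sum((x - y) ** 2 for x, y in zip(va, vb)) ** 0.5
    return penalty
```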
  • FIG. 11 is a view for explaining the similarity computation process according to this embodiment.
  • The query sequential label set acquired in step S14 can be arranged, as shown in the center of FIG. 11, in accordance with its scan method. Also, when one of the label matrices of the N previous frame images stored in the RAM is used as the test sequential label set of a test frame image, that sequential label set can be arranged as shown on the left of FIG. 11.
  • The distances between the label sequence “abc” in the first line of the test sequential label set and the sequential label sets (“123”, “456”, “789”) in the first to third lines of the query sequential label set are computed by DP matching, and the line number of the label sequence that minimizes the distance in the query sequential label set is stored at the corresponding position in a similar line matrix (the right of FIG. 11). When the obtained minimum distance is larger than a predetermined threshold value, it is determined that the label sequence of interest of the test sequential label set is similar to none of the lines, and “!” is stored at the corresponding position in the similar line matrix. Even when the image angle has changed slightly in the horizontal direction, a similar line can be detected by this process owing to the nature of DP matching. By repeating this process for the remaining lines (“def”, “ghi”) of the test sequential label set, the similar line matrix in the column direction shown on the right of FIG. 11 is obtained.
  • On the right of FIG. 11, no line similar to “abc” is present in the query sequential label set, a line similar to “def” is found in the first line of the query sequential label set, and a line similar to “ghi” is found in the second line. The similarity between the similar line matrix obtained in this manner and a standard line matrix (the arrangement of line numbers in the query frame image, “123” in this example) is further computed using DP matching, and is output as the similarity between the query and test frame images.
  • It is well known that DP matching selects, as an optimum solution, a route that minimizes the similarity distance under the constraint condition of a matching window. The constraint condition is given by the width of the matching window.
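  • As an illustration only, a windowed DP matching distance between two label sequences might be sketched as follows, assuming the penalty table above and a matching-window half-width r (cf. FIG. 14 below):

```python
# Sketch: DP matching distance between two label sequences under the
# matching-window constraint |i - j| <= r (lines A and B of FIG. 14).
def dp_distance(seq_a, seq_b, penalty, r):
    INF = float('inf')
    n, m = len(seq_a), len(seq_b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - r), min(m, i + r) + 1):
            cost = penalty[(seq_a[i - 1], seq_b[j - 1])]
            # Best of the three predecessor cells plus the local penalty.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```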
  • FIG. 12 is a flow chart for explaining the sequence of similarity computation using 2D DP matching according to this embodiment. The process that has been explained with reference to FIG. 11 will be explained in more detail below with reference to this flow chart. Note that the process shown in the flow chart is executed for different test frame images in steps S20 and S22.
  • In step S101, variable i indicating the line number of the test frame image and variable j indicating the line number of the query frame image are initialized to 1 so that both indicate the first line. In step S102, the label sequence of the i-th line of the test frame image is acquired. For example, in the case of FIG. 11, if i=1, “abc” is acquired. In step S103, the sequential label set of the j-th line of the query frame image is acquired. For example, in the case of FIG. 11, if j=1, “123” is acquired.
  • In step S104, the distance between the two sequential label sets acquired in steps S102 and S103 is computed by DP matching using the penalty matrix of color cells described in FIG. 10. In step S105, if the distance obtained in step S104 is the minimum of those obtained so far for the i-th line, the line number (j) of interest is stored in the line matrix element LINE[i].
  • The aforementioned processes in steps S103 to S105 are repeated for all lines of the query frame image (steps S106 and S107). In this manner, the number of the line of the query frame image whose sequential label set has the minimum distance to that of the i-th line of the test frame image is stored in LINE[i].
  • In step S108, the minimum distance obtained by the above process is compared with a predetermined threshold value (Thresh). If the distance is equal to or larger than Thresh, the flow advances to step S109, and “!”, which indicates that the i-th line is similar to none of the lines in the query image, is stored in LINE[i].
  • The aforementioned processes from step S102 to step S108 are executed for all the lines in the test frame image (steps S110 and S111) to obtain LINE[1] to LINE[imax], which are output as the similar line matrix.
  • In step S113, DP matching between the standard line matrix [1, 2, . . . , imax] and the similar line matrix LINE[1, 2, . . . , imax] is done to compute the distance between them. Note that the standard line matrix starts from 1 and increases in increments of 1 in the column direction.
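  • Putting these steps together, a hedged end-to-end sketch of the 2D DP matching (hypothetical helper names, reusing dp_distance from above; line_penalty is the dynamic inter-line penalty table described next) could read:

```python
# Sketch of FIG. 12: build the similar line matrix (steps S101-S111),
# then DP-match it against the standard line matrix (step S113).
def similar_line_matrix(test_lines, query_lines, penalty, r, thresh):
    lines = []
    for test_line in test_lines:                      # loop over i
        dists = [dp_distance(test_line, q, penalty, r)
                 for q in query_lines]                # steps S103-S107
        best = min(range(len(dists)), key=lambda j: dists[j])
        # Steps S108/S109: store '!' when even the best distance
        # reaches the threshold; otherwise store the 1-based line number.
        lines.append('!' if dists[best] >= thresh else best + 1)
    return lines

def frame_distance(test_lines, query_lines, penalty, line_penalty,
                   r, thresh):
    lines = similar_line_matrix(test_lines, query_lines, penalty, r, thresh)
    standard = list(range(1, len(lines) + 1))         # [1, 2, ..., imax]
    return dp_distance(standard, lines, line_penalty, r)   # step S113
```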
  • The penalty used in DP matching between the standard line matrix and the similar line matrix will be explained below. The present invention proposes a dynamic penalty as the penalty setup of DP matching between the similar line matrix and the standard line matrix in the column direction. The dynamic penalty sets the penalty between line numbers dynamically, and the penalty between line numbers therefore changes depending on the image. In this embodiment, the distances between the sequential label sets in the horizontal (line) direction of the query frame image itself are computed, and the penalties between lines are obtained based on these distances.
  • FIG. 13 is a flow chart showing the setup sequence of a dynamic penalty value according to this embodiment.
  • In step S121, variables i and j are set to 1 and 2, respectively. The sequential label set of the i-th line of the query frame image is acquired in step S122, and the sequential label set of the j-th line of the query frame image is acquired in step S123. In step S124, DP matching between the sequential label sets of the i-th and j-th lines of the query frame image is done using the color penalty matrix to obtain their distance. In step S125, the DP matching distance obtained in step S124 is stored in LINE[i][j] as the penalty between the sequential label sets of the i-th and j-th lines, and is also stored in LINE[j][i] as the penalty between the sequential label sets of the j-th and i-th lines.
  • The processes in steps S123 to S125 are repeated until the value of variable j reaches jmax (step S126). As a result, the penalty values between the sequential label set of the i-th line and the sequential label sets of the (i+1)-th to (jmax)-th lines are determined. The processes in steps S123 to S126 are then repeated until the value of variable i reaches (imax−1) (steps S128, S129, and S130). As a result, the penalty values determined by the above processes are stored in all elements LINE[i][j] except for the diagonal components where i=j.
  • In step S131, the penalty values of the diagonal components of LINE[i][j], which are not determined by the above processes, are set. Since i=j in this portion, i.e., identical sequential label sets are compared, the distance is zero and, hence, a zero penalty is stored. A penalty for “!” is also determined, in step S132: the penalty for “!” is set to a value somewhat larger than the maximum of all the penalty values of LINE[i][j]. If this penalty value is set to be extremely large, the ambiguous-search capability may suffer.
  • DP matching in step S113 is done using the penalties between sequential label sets computed for the query frame image in this manner, thus obtaining the similarity between the query frame image and the test frame image.
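  • A minimal sketch of this dynamic penalty setup (FIG. 13) follows, again with hypothetical names and reusing dp_distance; the factor 1.2 applied to the “!” penalty is an assumption, since the text only requires a value larger than the maximum “to some extent”:

```python
# Sketch of FIG. 13: inter-line penalties are the DP distances between
# the lines of the query frame image itself; the diagonal is zero
# (step S131) and '!' gets a penalty somewhat above the maximum (S132).
def dynamic_line_penalty(query_lines, penalty, r, slack=1.2):
    n = len(query_lines)
    table = {}
    for i in range(1, n + 1):
        table[(i, i)] = 0.0                           # step S131
        for j in range(i + 1, n + 1):                 # steps S122-S130
            d = dp_distance(query_lines[i - 1], query_lines[j - 1],
                            penalty, r)
            table[(i, j)] = table[(j, i)] = d         # step S125
    worst = max(table.values()) * slack               # step S132
    for i in range(1, n + 1):
        table[(i, '!')] = table[('!', i)] = worst
    return table
```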
  • The aforementioned matching process also has the following features. If the test and query label matrices shown in FIG. 11 are very similar to each other, the similar line matrix “123” is obtained, and their distance is zero. If the similar line matrix is “!12” or “212”, the test frame image is likely to have deviated downward from the query frame image; if it is “23!” or “233”, the test frame image is likely to have deviated upward. If the similar line matrix is “13!” or “!13”, the test frame image is likely to be reduced in scale with respect to the query frame image. Likewise, a test frame image obtained by enlarging the query frame image can be detected.
  • As described for step S113 above, DP matching between the similar line matrix and the standard line matrix effectively absorbs vertical deviation. For this reason, differences between the query and test frame images resulting from the aforementioned upward or downward deviation, enlargement, reduction, or the like can be effectively absorbed, and the similarity between the frame images of a moving image can be determined satisfactorily.
  • More specifically, the 2D DP matching of this embodiment allows ambiguity between the label sequences of neighboring label matrices, and thereby absorbs the influence of position deviations of an image. In addition, when the position of an object has changed due to, e.g., a difference in angle, and the position of the object as extracted in blocks has changed, the colors of blocks may become slightly different; such differences are absorbed by the aforementioned penalty matrix. In this manner, owing to the synergy between 2D DP matching, which allows ambiguity, and the tolerance of feature amount ambiguity provided by the penalty matrix, matching that is hardly influenced by upward, downward, leftward, or rightward deviations, or by deviations caused by enlargement/reduction, can be achieved.
  • Other merits of the dynamic penalty will be discussed below. For example, for a query frame image of an expanse of wheat fields, all lines may have similar sequential label sets. If the test frame image also shows an expanse of wheat fields, its similar line matrix may store the first line number in every position and become “111”. In such a case, all lines of the query image resemble one another, and no close match is obtained unless the penalty between line numbers is very small. When the dynamic penalty is used, the penalty between line numbers does become small, and a result with high similarity can be obtained.
  • In the above embodiment, the similar line matrix is obtained using sequential label sets corresponding to the horizontal block arrangements. Alternatively, a similar line matrix can be obtained using label sequences corresponding to the vertical block arrangements by the same method as described above. Also, both the horizontal and vertical directions may be combined.
  • In the above embodiment, color information is selected as the image feature amount. However, the present invention is not limited to this specific image feature amount, and may be practiced by obtaining other image parameters in units of segmented image blocks.
  • The level of ambiguity upon comparing a query frame image and test frame image can be desirably set by changing the width of a so-called matching window in DP matching.
  • FIG. 14 is a view for explaining the matching window in DP matching. In FIG. 14, line A is given by J=I+r, and line B by J=I−r. The width of the matching window can be changed by changing the value r. Hence, by varying the value r automatically or manually, similarity computations can be made with a desired ambiguity level (width of the matching window); such a change is effective for scene change detection in a moving image including very quick motions, or in video shot with considerable camera shake. When the present apparatus is built into a video camera, the value r is changed in accordance with the amount of shaking detected by a sensor. The user may change the sensitivity of scene change detection by manually changing the value r at the keyboard 104, or the value r may be automatically increased when scene changes are detected too frequently and decreased when few scene changes are detected.
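  • As a purely hypothetical illustration of the automatic adjustment mentioned above (the rates and bounds below are assumptions, not values from this embodiment):

```python
# Sketch: adapt the matching-window half-width r to the observed rate of
# detected scene changes; a wider window tolerates more deviation, so
# fewer scene changes are reported.
def adjust_window_width(r, changes_per_minute,
                        low=2.0, high=20.0, r_min=0, r_max=8):
    if changes_per_minute > high:
        return min(r + 1, r_max)   # too many detections: widen the window
    if changes_per_minute < low:
        return max(r - 1, r_min)   # too few detections: narrow the window
    return r
```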
  • In the 2D DP matching of the above embodiment, the width of the matching window in horizontal DP matching and that in vertical DP matching may be set independently, or the two matching windows may be changed at different rates. In this manner, ambiguity levels can be set very flexibly for the similarity computations. For example, when the block order shown in FIG. 7A is used and the horizontal movement of an object of interest in a query image is to be tolerated, the width of the matching window in horizontal DP matching can be increased to raise the ambiguity level in the horizontal direction.
  • Referring back to the flow chart in FIG. 4, it is checked in step S16 whether a scene change was detected in step S15. If YES in step S16, scene change information is additionally stored in the scene change information accumulation unit by the process in step S17. Finally, the sequential label set of the N-th previous frame image is discarded, and the sequential label set of the current frame image is stored in the RAM in step S18. The flow then returns to the process in step S10 to repeat the above processes as long as frame images to be processed remain.
  • As described above, a feature amount group (a group of feature amounts obtained by segmenting a feature amount space) is expressed by a single symbol (i.e., labeled), and a distance based on the similarity between labels is given using the 2D DP matching and the penalty matrix described above. For this reason, the computation volume of the distance between two image blocks can be greatly reduced, and since similar feature amounts are expressed by identical labels, the similarity between two images can be computed satisfactorily.
  • Since this scheme uses (1) the concept of defining a distance between labels by means of the penalty matrix, and (2) 2D DP matching, which can ambiguously shift the label positions to be compared and can compare label matrices so as to minimize the total distance (maximize the similarity), an inter-frame pattern matcher can be implemented which absorbs some continuous changes in a moving image in association with neighboring or adjacent image frames (e.g., some color differences caused by a change in image angle, a change in the position or shape of an object upon, e.g., panning of the camera, or a change in an image sensing condition such as the light source), while producing a clearly low inter-frame similarity output in response to the appearance of a frame that has no continuity whatsoever.
  • Note that the present invention may be applied to either a system constituted by a plurality of devices (e.g., a host computer, an interface device, a reader, a printer, and the like) or an apparatus consisting of a single device (e.g., a copying machine, a facsimile apparatus, or the like).
  • The objects of the present invention are also achieved by supplying a storage medium, which records a program code of a software program that can implement the functions of the above-mentioned embodiments, to a system or apparatus, and reading out and executing the program code stored in the storage medium by a computer (or a CPU or MPU) of the system or apparatus. In this case, the program code itself read out from the storage medium implements the functions of the above-mentioned embodiments, and the storage medium which stores the program code constitutes the present invention.
  • As the storage medium for supplying the program code, for example, a floppy disk, hard disk, optical disk, magneto-optical disk, CD-ROM, CD-R, magnetic tape, nonvolatile memory card, ROM, and the like may be used.
  • The functions of the above-mentioned embodiments may be implemented not only by executing the readout program code by the computer but also by some or all of actual processing operations executed by an OS (operating system) running on the computer on the basis of an instruction of the program code.
  • Furthermore, the functions of the above-mentioned embodiments may be implemented by some or all of actual processing operations executed by a CPU or the like arranged in a function extension board or a function extension unit, which is inserted in or connected to the computer, after the program code read out from the storage medium is written in a memory of the extension board or unit.
  • When the present invention is applied to the storage medium, the storage medium stores a program including program codes corresponding to the aforementioned flow charts (shown in FIGS. 4, 9, 12, 13, and the like).
  • To recapitulate, according to the present invention, scene change detection can be implemented by an inter-frame pattern matcher which absorbs some continuous changes in a moving image in association with neighboring or adjacent image frames (e.g., some color differences caused by a change in image angle, a change in the position or shape of an object upon, e.g., panning of the camera, or a change in an image sensing condition such as the light source), while producing a clearly low inter-frame similarity output in response to the appearance of a frame that has no continuity whatsoever.
  • As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the claims.

Claims (19)

1-57. (canceled)
58. An image processing apparatus comprising:
extracting means for extracting frame image data from moving image data;
feature amount determination means for determining a feature amount of the frame image data extracted by said extracting means;
feature amount storage means for storing the feature amount determined by said feature amount determination means;
comparison means for comparing the feature amount of the frame image data which is a scene change frame candidate with each of N feature amounts of N frames of image data before the candidate, where N is an integer greater than 1, which have been stored in said feature amount storage means;
scene change detection means for detecting a scene change frame in the moving image data based on a result of comparison by said comparison means; and
scene change storage means for storing information of the detected scene change frame in connection with the moving image data,
wherein said scene change detection means detects a scene change frame candidate as a scene change frame when the feature amount of the candidate is not similar to any of the N feature amounts of the N frames of image data before the candidate.
59. An apparatus according to claim 58, wherein said feature amount determination means divides the frame image data into a plurality of blocks and determines a feature amount for each of the blocks.
60. An apparatus according to claim 59, wherein said comparison means compares feature amounts using a penalty table.
61. An apparatus according to claim 58, wherein said comparison means compares the feature amount of the frame image data which is a scene change frame candidate with a feature amount of second frame image data, the second frame image data being before first frame image data, when the feature amount of the candidate is not similar to a feature amount of the first frame image data before the candidate.
62. An apparatus according to claim 58, wherein the information of the detected scene change frame comprises a number of frames or an elapsed time from a top of the moving image data after which the scene change has been detected.
63. An apparatus according to claim 58, wherein said feature amount storage means stores at least N feature amounts.
64. An image processing method comprising the steps of:
extracting frame image data from moving image data;
determining a feature amount of the frame image data extracted in said extracting step;
storing the feature amount determined in said feature amount determination step;
comparing the feature amount of the frame image data which is a scene change frame candidate with each of N feature amounts of N frames of image data before the candidate, where N is an integer greater than 1, which have been stored in said feature amount storing step;
detecting a scene change frame in the moving image data based on a result of comparison in said comparing step; and
storing information of the detected scene change frame in connection with the moving image data,
wherein said scene change detecting step includes detecting a scene change frame candidate as a scene change frame when the feature amount of the candidate is not similar to any of the N feature amounts of the N frames of image data before the candidate.
65. A method according to claim 64, wherein said feature amount determining step includes dividing the frame image data into a plurality of blocks and determining a feature amount for each of the blocks.
66. A method according to claim 65, wherein said comparing step includes comparing feature amounts using a penalty table.
67. A method according to claim 64, wherein said comparing step includes comparing the feature amount of the frame image data which is a scene change frame candidate with a feature amount of second frame image data, the second frame image data being before first frame image data, when the feature amount of the candidate is not similar to a feature amount of the first frame image data before the candidate.
68. A method according to claim 64, wherein the information of the detected scene change frame comprises a number of frames or an elapsed time from a top of the moving image data after which the scene change has been detected.
69. A method according to claim 64, wherein said feature amount storing step includes storing at least N feature amounts.
70. A computer-readable storage medium, storing a program product to perform an image processing method comprising the steps of:
extracting frame image data from moving image data;
determining a feature amount of the frame image data extracted in said extracting step;
storing the feature amount determined in said feature amount determination step;
comparing the feature amount of the frame image data which is a scene change frame candidate with each of N feature amounts of N frames of image data before the candidate, where N is an integer greater than 1, which have been stored in said feature amount storing step;
detecting a scene change frame in the moving image data based on a result of comparison in said comparing step; and
storing information of the detected scene change frame in connection with the moving image data,
wherein said scene change detecting step includes detecting a scene change frame candidate as a scene change frame when the feature amount of the candidate is not similar to any of the N feature amounts of the N frames of image data before the candidate.
71. A storage medium according to claim 70, wherein said feature amount determining step includes dividing the frame image data into a plurality of blocks and determining a feature amount for each of the blocks.
72. A storage medium according to claim 71, wherein said comparing step includes comparing feature amounts using a penalty table.
73. A storage medium according to claim 70, wherein said comparing step includes comparing the feature amount of the frame image data which is a scene change frame candidate with a feature amount of second frame image data, the second frame image data being before first frame image data, when the feature amount of the candidate is not similar to a feature amount of the first frame image data before the candidate.
74. A storage medium according to claim 70, wherein the information of the detected scene change frame comprises a number of frames or an elapsed time from a top of the moving image data after which the scene change has been detected.
75. A storage medium according to claim 70, wherein said feature amount storing step includes storing at least N feature amounts.
US11/247,330 1999-02-15 2005-10-12 Scene change detection method using two-dimensional DP matching, and image processing apparatus for implementing the method Abandoned US20060050791A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/247,330 US20060050791A1 (en) 1999-02-15 2005-10-12 Scene change detection method using two-dimensional DP matching, and image processing apparatus for implementing the method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP11-036524 1999-02-15
JP03652499A JP4146955B2 (en) 1999-02-15 1999-02-15 Image processing method and image processing apparatus
US09/503,477 US6977963B1 (en) 1999-02-15 2000-02-14 Scene change detection method using two-dimensional DP matching, and image processing apparatus for implementing the method
US11/247,330 US20060050791A1 (en) 1999-02-15 2005-10-12 Scene change detection method using two-dimensional DP matching, and image processing apparatus for implementing the method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/503,477 Division US6977963B1 (en) 1999-02-15 2000-02-14 Scene change detection method using two-dimensional DP matching, and image processing apparatus for implementing the method

Publications (1)

Publication Number Publication Date
US20060050791A1 true US20060050791A1 (en) 2006-03-09

Family

ID=12472202

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/503,477 Expired - Fee Related US6977963B1 (en) 1999-02-15 2000-02-14 Scene change detection method using two-dimensional DP matching, and image processing apparatus for implementing the method
US11/247,330 Abandoned US20060050791A1 (en) 1999-02-15 2005-10-12 Scene change detection method using two-dimensional DP matching, and image processing apparatus for implementing the method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/503,477 Expired - Fee Related US6977963B1 (en) 1999-02-15 2000-02-14 Scene change detection method using two-dimensional DP matching, and image processing apparatus for implementing the method

Country Status (2)

Country Link
US (2) US6977963B1 (en)
JP (1) JP4146955B2 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7075683B1 (en) * 1999-02-15 2006-07-11 Canon Kabushiki Kaisha Dynamic image digest automatic editing system and dynamic image digest automatic editing method
JP2003109009A (en) * 2001-09-26 2003-04-11 Communication Research Laboratory Method and instrument for measuring similarity of image
US7286749B2 (en) 2002-04-16 2007-10-23 Canon Kabushiki Kaisha Moving image playback apparatus, moving image playback method, and computer program thereof with determining of first voice period which represents a human utterance period and second voice period other than the first voice period
JP3819870B2 (en) * 2003-04-25 2006-09-13 三洋電機株式会社 Image display device
US7313183B2 (en) * 2003-06-24 2007-12-25 Lsi Corporation Real time scene change detection in video sequences
JP4235604B2 (en) 2004-11-22 2009-03-11 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP4613724B2 (en) * 2005-07-15 2011-01-19 ソニー株式会社 Imaging apparatus and imaging method
JP2011176541A (en) * 2010-02-24 2011-09-08 Sony Corp Three-dimensional video processing apparatus and method, and program thereof
US20130170543A1 (en) * 2011-12-30 2013-07-04 Ning Lu Systems, methods, and computer program products for streaming out of data for video transcoding and other applications
WO2013157354A1 (en) * 2012-04-18 2013-10-24 オリンパス株式会社 Image processing device, program, and image processing method
US9544615B2 (en) * 2014-11-14 2017-01-10 Sony Corporation Method and system for processing video content
KR102192488B1 (en) * 2015-11-25 2020-12-17 삼성전자주식회사 Apparatus and method for frame rate conversion
CN106937114B (en) * 2015-12-30 2020-09-25 株式会社日立制作所 Method and device for detecting video scene switching

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5083860A (en) * 1990-08-31 1992-01-28 Institut For Personalized Information Environment Method for detecting change points in motion picture images
US5099322A (en) * 1990-02-27 1992-03-24 Texas Instruments Incorporated Scene change detection system and method
US5103305A (en) * 1989-09-27 1992-04-07 Kabushiki Kaisha Toshiba Moving object detecting system
US5459517A (en) * 1992-12-15 1995-10-17 Fuji Xerox Co., Ltd. Moving picture scene detection system
US5576950A (en) * 1993-07-28 1996-11-19 Nippon Telegraph And Telephone Corporation Video image search method and system using the same
US5732146A (en) * 1994-04-18 1998-03-24 Matsushita Electric Industrial Co., Ltd. Scene change detecting method for video and movie
US5745190A (en) * 1993-12-16 1998-04-28 International Business Machines Corporation Method and apparatus for supplying data
US5802361A (en) * 1994-09-30 1998-09-01 Apple Computer, Inc. Method and system for searching graphic images and videos
US5805733A (en) * 1994-12-12 1998-09-08 Apple Computer, Inc. Method and system for detecting scenes and summarizing video sequences
US5806733A (en) * 1996-11-26 1998-09-15 Nepsco, Inc. Shoulder carrying strap
US5821945A (en) * 1995-02-03 1998-10-13 The Trustees Of Princeton University Method and apparatus for video browsing based on content and structure
US5880775A (en) * 1993-08-16 1999-03-09 Videofaxx, Inc. Method and apparatus for detecting changes in a video display
US6014183A (en) * 1997-08-06 2000-01-11 Imagine Products, Inc. Method and apparatus for detecting scene changes in a digital video stream
US6055025A (en) * 1993-12-21 2000-04-25 Lucent Technologies, Inc. Method and apparatus for detecting abrupt and gradual scene changes in image sequences
US6208385B1 (en) * 1996-10-17 2001-03-27 Kabushiki Kaisha Toshiba Letterbox image detection apparatus
US6219382B1 (en) * 1996-11-25 2001-04-17 Matsushita Electric Industrial Co., Ltd. Method and apparatus for locating a caption-added frame in a moving picture signal
US6236806B1 (en) * 1996-11-06 2001-05-22 Sony Corporation Field detection apparatus and method, image coding apparatus and method, recording medium, recording method and transmission method
US20010003214A1 (en) * 1999-07-15 2001-06-07 Vijnan Shastri Method and apparatus for utilizing closed captioned (CC) text keywords or phrases for the purpose of automated searching of network-based resources for interactive links to universal resource locators (URL's)
US6400853B1 (en) * 1997-03-19 2002-06-04 Canon Kabushiki Kaisha Image retrieval apparatus and method
US6404920B1 (en) * 1996-09-09 2002-06-11 Hsu Shin-Yi System for generalizing objects and features in an image
US6466731B2 (en) * 1996-04-03 2002-10-15 Kabushiki Kaisha Toshiba Moving picture processing method and moving picture processing apparatus
US6571054B1 (en) * 1997-11-10 2003-05-27 Nippon Telegraph And Telephone Corporation Method for creating and utilizing electronic image book and recording medium having recorded therein a program for implementing the method
US6606636B1 (en) * 1993-07-29 2003-08-12 Canon Kabushiki Kaisha Method and apparatus for retrieving dynamic images and method of and apparatus for managing images

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090168871A1 (en) * 2007-12-31 2009-07-02 Ning Lu Video motion estimation
WO2009158363A1 (en) * 2008-06-25 2009-12-30 Joseph Christen Dunn Package dimensioner and reader
US20090323084A1 (en) * 2008-06-25 2009-12-31 Joseph Christen Dunn Package dimensioner and reader
US20110109731A1 (en) * 2009-11-06 2011-05-12 Samsung Electronics Co., Ltd. Method and apparatus for adjusting parallax in three-dimensional video
US8798160B2 (en) * 2009-11-06 2014-08-05 Samsung Electronics Co., Ltd. Method and apparatus for adjusting parallax in three-dimensional video
CN105528594A (en) * 2016-01-31 2016-04-27 江南大学 Incident identification method based on video signal

Also Published As

Publication number Publication date
JP4146955B2 (en) 2008-09-10
US6977963B1 (en) 2005-12-20
JP2000235639A (en) 2000-08-29

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION