US20020126875A1 - Image processing apparatus - Google Patents


Info

Publication number
US20020126875A1
US20020126875A1 (application US09/564,535)
Authority
US
United States
Prior art keywords
image
unit
objects
moving
value
Prior art date
Legal status
Granted
Application number
US09/564,535
Other versions
US6430303B1
Inventor
Satoshi Naoi
Hiroichi Egawa
Morito Shiohara
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Priority claimed from JP12256393A (external-priority application JP3288474B2)
Application filed by Individual
Priority to US09/564,535
Application granted
Publication of US6430303B1
Publication of US20020126875A1
Anticipated expiration
Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images

Definitions

  • the present invention relates to an image processing apparatus for accurately extracting one or a plurality of objects utilizing a thresholded differential image processing technique when a plurality of stationary objects and a plurality of moving objects are contained together in an image of a time sequence of images.
  • the present invention relates to an image processing apparatus, which allows both a background image and an image including at least one stationary object or at least one moving object (having a speed not more than a predetermined speed) to be extracted, and also allows a difference-calculation process to be carried out between the images.
  • the present invention relates to an image processing apparatus, in which a plurality of markers are provided in a background where the objects move, and these markers are extracted by utilizing an image processing technique similar to the case in which the moving objects are extracted, and further it is discriminated whether or not the thus extracted markers are in the steady state.
  • when the markers are in the steady state, portions where the moving objects and the markers overlap each other can be determined. Therefore, the number of the markers (the size of the markers displayed in each image), which are in the steady state and exist between two moving objects, can be calculated to obtain a distance between the two moving objects.
  • supervisory systems using the above-mentioned image processing technique can be utilized in various places. Each of these supervisory systems serves to rapidly locate an accident, a disaster, and the like. Recently, such supervisory systems are likely to be utilized for preventing such accidents, disasters, and the like, in addition to a function of merely detecting the existence of an accident, etc.
  • the supervisory system must detect and analyze the movement of each of a plurality of moving objects contained in a series of images. Further, the supervisory system must also rapidly calculate the distance between two moving objects with a high degree of accuracy.
  • regions where moving objects may be positioned are extracted using a predetermined assumption.
  • a specified moving object is distinguished from the other objects, on the basis of various characteristics, e.g., the size of each of the regions, and the central position of each region.
  • the movement of the moving object can be analyzed.
  • a given portion of an image which is to be analyzed is extracted from the image.
  • various characteristics e.g., the position of projections and the location of central positions, are calculated, and used to distinguish the object from the other portions.
  • the process is executed with respect to a plurality of images in a time series, i.e., continuous motion type images.
  • the main object of the present invention is to provide an image processing apparatus which allows one or a plurality of objects to be rapidly and accurately extracted and analyzed, in a case where a large number of moving objects exist in a time series image.
  • a further object of the present invention is to provide an image processing apparatus which allows one or a plurality of objects to be rapidly and accurately extracted and analyzed, even in the case where a plurality of stationary objects exist, as well as a plurality of moving objects.
  • a still further object of the present invention is to provide an image processing apparatus which allows the movement of each of a plurality of moving objects to be rapidly and accurately extracted and analyzed, even in the case where the plurality of moving objects respectively move with different speeds.
  • a still further object of the present invention is to provide an image processing apparatus which allows all of the stationary and moving objects to be correlated with each other during real time processing with a processing rate determined by a frequency of a video signal, in the case where a plurality of stationary objects exist, as well as a plurality of moving objects, and also in a case where the plurality of moving objects respectively move with different speeds.
  • a still further object is to provide an image processing apparatus which allows a distance between two moving objects to be calculated, so that an abnormal object motion that may bring about an accident, a disaster, and the like, can be rapidly detected.
  • the image processing apparatus includes an image-input unit which inputs an image including a background and a plurality of the objects; a background image extract unit which extracts the background; a first average background extract unit which extracts an image that includes one or a plurality of stationary objects or moving objects each having a speed not higher than a predetermined first speed and also includes the background; and a second average background extract unit which extracts an image that includes the stationary objects or moving objects each having a speed not higher than a predetermined second speed and also includes the background.
  • the image processing apparatus of the present invention further includes a first difference-calculation processing unit which calculates a difference between an output from the background image extract unit and either one of outputs from the first average background extract unit, and then generates a first image containing objects moving at a first speed or stopped; a second difference-calculation processing unit which calculates a difference between respective outputs from the first and second average background extract units, and then generates a second image containing objects moving at a second speed; and a third difference-calculation processing unit which calculates a difference between an output from the image-input unit and either one of outputs from the first and second average background extract units, and then generates a third image containing objects moving at a third speed.
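The three difference-calculation processing units described above amount to image subtractions between progressively more inclusive reference images. The following is a minimal sketch (NumPy, 8-bit grayscale frames, an assumed fixed threshold; the function and variable names are illustrative and not taken from the patent) of how the first, second, and third difference images could be formed.

```python
import numpy as np

def difference_image(img_a, img_b, threshold=20):
    """Absolute difference of two grayscale frames, thresholded to a binary mask."""
    diff = np.abs(img_a.astype(np.int16) - img_b.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def classify_by_speed(background, avg_slow, avg_fast, current, threshold=20):
    """Split one input frame into three speed-classified difference images.

    background -- pure background (background image extract unit)
    avg_slow   -- background + stopped/low-speed objects (first average background extract unit)
    avg_fast   -- background + stopped/low/middle-speed objects (second average background extract unit)
    current    -- latest frame from the image-input unit
    """
    first = difference_image(background, avg_slow, threshold)   # stopped or low-speed objects
    second = difference_image(avg_slow, avg_fast, threshold)    # middle-speed objects
    third = difference_image(avg_fast, current, threshold)      # high-speed objects
    return first, second, third
```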
  • the image processing apparatus of the present invention includes a plurality of local-area characteristic extract processing units which process outputs from the image-input unit.
  • Each of the local-area characteristic extract processing units has a local-area determining unit which allocates the output from the image-input unit to each of a plurality of local areas; a labeling processing unit which separates at least one object from each of the local areas, by labeling the same object existing in each of the local areas; and a characteristic-amount calculating unit which calculates a plurality of characteristic-amounts or parameters, such as length and circumference, for the thus labeled object in the local areas.
  • the image processing apparatus of the present invention operates to calculate a difference between the background and a low-speed average background image, and to extract one or a plurality of connected areas where objects overlap.
  • the image processing apparatus operates to produce a projection for each of the connected areas, and to calculate the position of the corresponding object in accordance with the projection, and to calculate a plurality of characteristics.
  • the image processing apparatus operates to estimate a change in the position of the object and a change in the characteristics of the object for each sampling time period in the time series, and to determine whether the object is a stationary object, in a case where both the change in the position of the object and the change in the characteristics are small.
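The stationarity test in the preceding bullets can be pictured as follows: track the centroid and a few shape characteristics of a connected area across sampling times, and call the object stationary when neither changes appreciably. This is only a sketch; the tolerances, the feature names, and the function name are assumptions, since the patent gives no numeric limits.

```python
def is_stationary(track, pos_tol=2.0, feat_tol=0.05):
    """track: list of (centroid, features) pairs at consecutive sampling times,
    where centroid is (x, y) and features maps names (e.g. 'length', 'area') to numbers.
    Returns True when neither position nor characteristics change significantly."""
    for (p0, f0), (p1, f1) in zip(track, track[1:]):
        dx, dy = p1[0] - p0[0], p1[1] - p0[1]
        if (dx * dx + dy * dy) ** 0.5 > pos_tol:
            return False
        for key, v0 in f0.items():
            v1 = f1.get(key, v0)
            if abs(v1 - v0) > feat_tol * max(abs(v0), 1.0):
                return False
    return True
```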
  • the image processing apparatus of the present invention is adapted to calculate a distance between two moving objects.
  • the image processing apparatus includes an image-input unit which inputs the image including a background and a plurality of objects; a marker holding unit which places a plurality of markers in the background; a moving object extraction unit which extracts a plurality of moving objects; a tracing means which traces the plurality of moving objects; a marker extract unit which extracts the markers existing between the two different moving objects; and a distance measuring unit which calculates the distance between the moving objects, on the basis of the size of the extracted markers.
  • a plurality of other markers which are not connected with each other by the marker holding unit, are provided in the background.
  • the image processing apparatus of a preferred embodiment further includes a connected-area position/shape calculating unit which calculates the size, the shape, and the number of the markers; a marker dictionary unit which has a marker dictionary for storing in advance the size and the shape of the markers; and a marker collating unit which collates the shape of the markers existing between two different moving objects against the marker dictionary.
  • the image processing apparatus of a preferred embodiment is adapted to calculate the number of the markers which can be identified as true markers based on a result of the collation in the marker collating unit, and to calculate the distance between two moving objects.
  • the image processing apparatus of the present invention is adapted to calculate a distance between two cars in the case where a plurality of cars are the moving objects.
  • a plurality of white lines are used as markers; these white lines are perpendicular to the direction in which the cars move with equal spaces between adjoining white lines.
  • the image processing apparatus has a distance measuring unit, which extracts the number of continuous white lines, and calculates the distance between two cars on the basis of the total sum of spaces between the continuous white lines between the two cars.
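Under the white-line arrangement just described, the distance between two cars follows directly from how many lines remain uncovered between them. The sketch below assumes the 50 cm line width and 50 cm spacing used in the later example; the function name and the one-period-per-line accounting are illustrative, not the patent's exact rule.

```python
def distance_from_markers(num_visible_lines, line_width_m=0.5, gap_m=0.5):
    """Approximate distance between two cars from the count of white lines
    that remain fully visible (not covered by either car) between them.

    Each visible line is counted as one line width plus one gap, i.e. one
    1.0 m period with the values assumed here."""
    return max(num_visible_lines, 0) * (line_width_m + gap_m)

# e.g. 7 uncovered lines between two cars -> roughly 7 m of separation
print(distance_from_markers(7))
```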
  • a plurality of objects existing in an image can be classified into plural images, each containing objects of a different speed, on the basis of the speed of each object, and these images can be analyzed in processes independent of each other.
  • in the image processing apparatus of the present invention, by extracting (or identifying) markers which can be easily processed by means of an image processing technique, portions where the moving objects and the markers overlap with each other can be easily determined, even though only a part of each moving object can be detected. Therefore, by calculating a distance between portions where the moving objects and the markers overlap with each other, it becomes possible to obtain a distance between two moving objects with a sufficiently high accuracy.
  • FIG. 1 is a schematic block diagram showing an essential embodiment based on the principle of the present invention;
  • FIG. 2 is a schematic block diagram showing a first preferred embodiment of an image processing apparatus according to the present invention.
  • FIGS. 3 (A) and 3 (B) are diagrams showing original images taken by an image-input unit in a tunnel at different sampling times, respectively;
  • FIG. 4 is a diagram for explaining a plurality of local areas in a first preferred embodiment of the present invention.
  • FIGS. 5 (A) and 5 (B) are diagrams for explaining an example in which plural images are respectively extracted in a first preferred embodiment of the present invention
  • FIGS. 6 (A) and 6 (B) are diagrams for explaining another example in which plural images are respectively extracted in a first preferred embodiment of the present invention
  • FIG. 7 is a block diagram showing the construction of local-area characteristic-amount extraction units in a first preferred embodiment of the present invention.
  • FIGS. 8 (A) to 8 (F) are diagrams for explaining operations of a first preferred embodiment of the present invention in the case where a plurality of stationary objects and a plurality of objects moving at a low speed exist together;
  • FIGS. 9 (A) to 9 (F) are diagrams for explaining operations of a first preferred embodiment of the present invention in the case where a plurality of objects moving at a middle speed exist;
  • FIGS. 10 (A) to 10 (F) are diagrams for explaining operations of a first preferred embodiment of the present invention in the case where a plurality of objects moving at a high speed exist;
  • FIGS. 11 (A) and 11 (B) are diagrams for explaining operations of a first preferred embodiment of the present invention in the case where a large-scale car and a small-scale car exist together;
  • FIGS. 12 (A) to 12 (E) are diagrams for explaining operations of a second preferred embodiment of the present invention in the case where a large-scale moving object and a small-scale moving object exist together in an airport;
  • FIGS. 13 (A) and 13 (B) are diagrams for explaining a process of obtaining a projection of a large-scale moving object in a second preferred embodiment of the present invention.
  • FIGS. 14 (A) to 14 (C) are diagrams for explaining a process of calculating a distance between two moving objects in a first preferred embodiment of the present invention
  • FIG. 15 is a schematic block diagram showing a third preferred embodiment of an image processing apparatus according to the present invention.
  • FIG. 16 is a block diagram showing in detail the main part of a third preferred embodiment of the present invention.
  • FIGS. 17 (A) to 17 (C) are diagrams showing the condition in which markers are provided and various information about markers is registered in a marker dictionary, in a third preferred embodiment of the present invention.
  • FIGS. 18 (A) and 18 (B) are diagrams for explaining a process of setting a region to be processed for the passage of moving objects in a third preferred embodiment of the present invention.
  • FIGS. 19 (A) to 19 (C) are diagrams respectively showing a region to be processed, a binary code processing unit, and a noise canceling unit, in a third preferred embodiment of the present invention.
  • FIGS. 20 (A) to 20 (C) are diagrams for explaining a process of labeling a given object in a third preferred embodiment of the present invention.
  • FIGS. 21 (A) and 21 (B) are diagrams for explaining a process of projecting a labeled object in a third preferred embodiment of the present invention.
  • FIGS. 22 (A) and 22 (B) are diagrams for explaining a process of extracting a moving object which is a car having a color other than white and which passes through markers, in a third preferred embodiment of the present invention.
  • FIGS. 23 (A) to 23 (C) are diagrams showing other markers which can be utilized in a third preferred embodiment of the present invention.
  • FIGS. 24 (A) and 24 (B) are diagrams for explaining a process of extracting a moving object which is a white car and which passes through markers, in a third preferred embodiment of the present invention.
  • FIGS. 25 (A) to 25 (E) are diagrams showing various tables which are utilized for calculating a distance between two moving objects in a third preferred embodiment of the present invention.
  • FIGS. 26 (A) to 26 (E) are diagrams for explaining a process of extracting a contour in a connected area in a third preferred embodiment of the present invention.
  • FIG. 1 is a schematic block diagram showing an essential embodiment based on the principle of the present invention.
  • in FIG. 1, fundamental components necessary for realizing an image processing apparatus of the present invention are illustrated.
  • a plurality of stationary objects (i.e., stopped objects) and a plurality of moving objects are contained together as a group of objects in an image which is to be processed.
  • an image processing apparatus of the present invention includes an image-input unit 1 , a background image extract unit 2 , a first average background extract unit 3 , a second average background extract unit 4 , a first difference-calculation processing unit 5 , a second difference-calculation processing unit 6 , and a third difference-calculation processing unit 7 .
  • an image-input unit 1 is typically constituted by a video camera, and serves to input an image including a background and all the objects captured by the camera.
  • the background image extract unit 2 extracts only a background by excluding the stopped objects and the moving objects from the input image. If the stationary objects and the moving objects do not exist in the image, the input image is stored in the background image extract unit 2 .
  • This background may be incorporated in advance into an image processing apparatus or image processing system.
  • the first average background extract unit 3 extracts an image which includes the stationary objects, moving objects each having a low speed, and the background.
  • the second average background extract unit 4 extracts an image which includes the stationary objects, the moving objects each having a low speed, moving objects each having a middle speed, and the background.
  • the first difference-calculation processing unit 5 calculates a difference between an output from the background image extract unit 2 and an output from the first average background extract unit 3 . Further, the first difference-calculation processing unit 5 generates a first image including slow moving and stationary objects.
  • the second difference-calculation processing unit 6 calculates a difference between an output from the first average background extract unit 3 and an output from the second average background extract unit 4 . Further, the second difference-calculation processing unit 6 generates a second image including objects moving at a higher speed.
  • the third difference-calculation processing unit 7 calculates a difference value between an output from the image-input unit 1 and an output from the second average background extract unit 4 . Further, the third difference-calculation processing unit 7 generates a third image including objects moving at a still higher speed.
  • an image “a” is sent from the image-input unit 1 and input to the background image extract unit 2 , the first average background extract unit 3 , and the second average background extract unit 4 .
  • the image “a” is processed by the background image extract unit 2 and an image “b” including a background (background image) is output.
  • the image “a” is also processed by the first average background extract unit 3 , and an image “c” is output.
  • the image “a” is also processed by the second average background extract unit 4 , and an image “d” is output.
  • an image “e” corresponding to a difference between the background image “b” and the image “c” is output by the first difference-calculation processing unit 5 .
  • an image “f” corresponding to a difference between the image “c” and the image “d” is output by the second difference-calculation processing unit 6 .
  • an image “g” corresponding to a difference between the image “d” and the image “a” is output by the third difference-calculation processing unit 7 .
  • a stopped car, a low speed car, and an obstacle existing in a traffic lane at the left side of a road are extracted by the first difference-calculation processing unit 5 and output in the image “e”.
  • two middle speed cars existing in the traffic lane at the right side are extracted by the second difference-calculation processing unit 6 and output in the image “f”.
  • FIG. 2 is a schematic block diagram showing a first preferred embodiment of an image processing apparatus according to the present invention.
  • each of an image-input unit 1 , a background image extract unit 2 , and a first average background extract unit 3 has the same construction as that shown in FIG. 1. Therefore, each of these components in FIG. 2 is indicated with the same reference number as is used in FIG. 1.
  • the image processing apparatus shown in FIG. 2 includes N average background extract units, where N denotes any natural number greater than 2 (N > 2).
  • these extract units, from a second average background extract unit through an N-th average background extract unit, will be indicated as the N-th average background extract units 4 ′.
  • the image processing apparatus shown in FIG. 2 includes N+1 difference-calculation processing units.
  • these difference-calculation processing units, from a third difference-calculation processing unit through an N+1-th difference-calculation processing unit, will be indicated as the N+1-th difference-calculation processing units 7 ′.
  • the image processing apparatus shown in FIG. 2 further includes a first local-area characteristic-amount extract unit 8 , a second local-area characteristic-amount extract unit 9 , N+1-th local-area characteristic-amount extract units 10 , and a locus calculation unit 20 .
  • the first average background extract unit 3 extracts one or a plurality of stopped objects, one or a plurality of objects moving at a low speed, and a background. Further, the N-th average background extract units 4 ′ extract one or a plurality of stopped objects, a plurality of objects moving at speeds ranging from a low speed to a high speed, and a background. By utilizing the first average background extract unit 3 and the N-th average background extract units 4 ′, it becomes possible to generate images in which objects are classified by speed ranges.
  • the second average background extract unit 4 shown in FIG. 1 may be provided between the first average background extract unit 3 and the N-th average background extract units 4 ′.
  • the first difference-calculation processing unit 5 calculates a difference between an output from the background image extract unit 2 and an output from the first average background extract unit 3 .
  • the second difference-calculation processing unit 6 calculates a difference between an output from the first average background extract unit 3 and either one of the respective outputs from the N-th average background extract units 4 ′.
  • the N+1-th difference-calculation processing units 7 ′ calculate a difference between either one of the respective outputs from the N-th average background extract units 4 ′ and an output from the image-input unit 1 .
  • the number of these difference-calculation processing units depends on the number of average background extract units provided on the input side; however, it is not rigidly fixed by that number. Which combinations of two outputs are selected from among the respective outputs of the average background extract units, in order to calculate the differences between those outputs, can be chosen freely depending on the speed range(s) for which extraction of objects is desired. That is, the target speed ranges determine the number of difference-calculation processing units.
  • X is a sum of the number of the average background extract units, the background image extract unit, and an original image (from the image-input unit).
  • the first local-area characteristic-amount extract unit 8 receives an output from the first difference-calculation processing unit 5 . Further, as hereinafter described, the first local-area characteristic-amount extract unit 8 checks or determines whether certain characteristic-amounts or parameters that can be used to identify an object (which will, for simplicity, be called object parameters) exist in a given first local area, and calculates a characteristic or parameter concerning the shape of an object in the area, and the like. These characteristic-amounts or parameters can include length, circumference, and center-of-gravity, and will be discussed in greater detail later herein.
  • the change in position of an object can be determined. Further, by calculating a characteristic concerning the shape of the object, an attribute of the object having the shape, e.g., a bus, or a passenger car, can be determined.
  • the second local-area characteristic-amount extract unit 9 receives an output from the second difference calculation processing unit 6 . Further, as hereinafter described, the second local-area characteristic-amount extract unit 9 checks whether any characteristic-amounts or object parameters exist in a given second local area and calculates a characteristic or parameter concerning the shape of the object, and the like.
  • the N+1-th local-area characteristic-amount extract units 10 respectively receive outputs from the N+1-th difference-calculation processing units 7 ′. Further, as hereinafter described, the N+1-th local-area characteristic-amount extract units 10 check whether any characteristic-amounts or parameters exist in the N+1-th local area and calculate a characteristic or parameter concerning the shape of a specified object, and the like.
  • the locus calculation unit 20 detects the change in existence of characteristic-amounts or object parameters in the time series images and calculates a locus of the same moving object, on the basis of an output from each of a plurality of local-area characteristic-amount extract units 9 , 10 .
  • the locus calculation unit 20 includes a character analyzing unit 20 - 1 which determines a locus of the same moving object, on the basis of a character concerning the shape of the moving object.
  • the locus calculation unit 20 includes a list making unit 20 - 2 which detects an existence of the moving object in each of a plurality of local areas in time series, on the basis of an output from each of the local-area characteristic-amount extract units 9 , 10 , and which creates a list with respect to the results of the detection.
  • the locus calculation unit 20 includes a list analyzing unit 20 - 3 which analyzes the list and recognizes a locus of the same moving object, even in a case where a large-scale moving object exists with a plurality of small-scale moving objects.
  • FIGS. 3 (A) and 3 (B) are diagrams showing original images which are taken by an image-input unit in a tunnel at different sampling times, respectively.
  • an image processing apparatus of the present invention is applied to a supervisory system for supervising a road in a tunnel, will be described.
  • FIG. 3(A) indicates an original image which is taken at a certain or first sampling time by means of an image-input means 1 that is placed in the tunnel; and FIG. 3(B) indicates another original image which is taken at a different or second sampling time, occurring several seconds after the first sampling time, by the same image-input means 1 .
  • the image-input means, e.g., a video camera, continuously takes images in the tunnel at a high rate corresponding to the frequency of a video signal, and these images are sampled by a technique using a sampling time interval function such as a time series filter. That is, the image samples used for processing are taken at a frequency that is lower than the video frequency and can be as much as several seconds apart.
  • an image area which is captured by the image input means 1 is allocated or divided into a plurality of local areas, as shown in a diagram of FIG. 4.
  • L 0 to L 4 denote local areas which are used to trace cars moving in the traffic lane on the left side L.
  • a local area L 0 is used to detect a large-scale car in the traffic lane on the left side
  • local areas L 1 to L 4 are used to trace all the cars on the left side.
  • R 0 to R 4 denote local areas which are used to trace cars moving in the traffic lane at the right side R.
  • a local area R 0 is used to detect a large-scale car in the traffic lane on the right side
  • local areas R 1 to R 4 are used to trace all the cars on the right side.
  • FIGS. 5 (A) and 5 (B) are diagrams for explaining an example in which images containing objects moving at different speeds are respectively extracted in a first preferred embodiment of the present invention.
  • one extracted image will include objects moving at a first speed and another extracted image will include objects moving at a second speed different from the first speed.
  • FIG. 5(A) shows a background stored in a first image memory 2 - 1 in a background image extract unit 2 .
  • This background is input to a look-up table ② (in FIGS. 5 (A) and 5 (B), “look-up table” is abbreviated “LUT”) in a first difference-calculation processing unit 5 , as one input “i” of the look-up table ②.
  • an image corresponding to a sum of a background image, an image of stopped objects, and an image of moving objects moving at a low speed is input to the look-up table ②.
  • the look-up table ② produces an output only in those portions of the input image where stopped objects and objects moving at a low speed exist, and this output is then input to a look-up table ① as one input “i”. That is, the output “k” includes stopped objects and slow-moving objects only.
  • the other input “j” of the look-up table ① receives an input image (Background+Stopped+Low speed+Middle speed+High speed), which is taken by an image-input means 1 .
  • This input image includes a background image, stopped objects (including an obstacle), objects moving at a low speed, objects moving at a middle speed, and objects moving at a high speed. Therefore, the output of the look-up table ① is an image that includes only the stopped and slow-moving objects found in the latest image from the image-input means 1 .
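Read this way, the look-up tables ② and ① are two-input, per-pixel functions: ② keeps a pixel of its second input only where it differs from the background by more than a threshold, and ① then uses that result as a mask to pick the corresponding pixels out of the latest camera frame. The transfer functions below are one plausible reading of the description, with assumed threshold values; they are not the patent's exact tables.

```python
import numpy as np

def lut_2(i, j, th=20):
    """Table-(2)-style behaviour: keep pixels of j that differ from i by more than th.
    i: background image, j: average background including stopped/low-speed objects."""
    diff = np.abs(j.astype(np.int16) - i.astype(np.int16))
    return np.where(diff > th, j, 0).astype(np.uint8)

def lut_1(i, j):
    """Table-(1)-style behaviour: where the mask image i is non-zero,
    output the corresponding pixel of the latest input image j."""
    return np.where(i > 0, j, 0).astype(np.uint8)

# k = lut_1(lut_2(background, avg_slow), current_frame)
# -> stopped and slow-moving objects as they appear in the latest frame
```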
  • an input image, which is captured by the image-input means 1 , is also input to a look-up table ③ in a first average background extract unit 3 , as one input “i”.
  • an image which has been stored in a second image memory 3 - 1 , is also input to the look-up table ③ as the other input “j”.
  • the thus obtained value is also stored in the image memory 3 - 1 .
  • when a value of (j − i) is in the range from zero through th31 (0 ≦ (j − i) ≦ th31), a value obtained by subtracting a predetermined offset value from the value of the other input “j” is output from the look-up table ③.
  • the thus obtained value is also stored in the image memory 3 - 1 .
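In other words, the look-up table ③ and the image memory 3-1 behave like a slowly adapting background: where the stored value and the current pixel differ by no more than th31, the stored value is nudged toward the current pixel by a small offset and written back. The sketch below uses assumed parameter values; only the (j − i) branch is spelled out in the excerpt, so the symmetric branch is an assumption.

```python
import numpy as np

def update_average_background(memory, current, th=30, offset=2):
    """One sampling-time update of the stored average background (image memory 3-1).

    memory, current: uint8 grayscale frames (inputs j and i of table (3)).
    Where 0 < (j - i) <= th the stored value is reduced by `offset` (the branch
    described in the text); where 0 < (i - j) <= th it is raised by `offset`
    (assumed here by symmetry).  Pixels with larger differences are left
    unchanged in this sketch."""
    j = memory.astype(np.int16)
    i = current.astype(np.int16)
    d = j - i
    out = j.copy()
    out[(d > 0) & (d <= th)] -= offset
    out[(d < 0) & (-d <= th)] += offset
    return np.clip(out, 0, 255).astype(np.uint8)
```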
  • the reason why the output “k” is an output that includes image portions including objects moving at a low speed will be hereinafter described.
  • the output “k” from the look-up table ③ is input to a look-up table ②′ in a second difference-calculation processing unit 6 , as one input “i”.
  • the look-up table ②′ outputs the value of the difference between its two inputs.
  • the look-up table ①′ is constructed to operate in a manner similar to the case of the look-up table ①. Therefore, with regard to the output “k” of the look-up table ①′, only objects moving at a middle speed are extracted from the input image (middle speed).
  • an input image, which is taken by an image-input means 1 , is also input to a look-up table ④ in a second average background extract unit 4 , as one input “i”.
  • an image which has been stored in a third image memory 4 - 1 , is also input to the look-up table ④ as the other input “j”.
  • the value of an output “k” output from the look-up table ④ is a portion of the image in which the background, stopped objects, objects moving at a low speed, and objects moving at a middle speed exist (Background+Stopped+Low speed+Middle speed).
  • the reason why the output “k” includes image portions containing objects moving at a middle speed will be hereinafter described.
  • the output “k” from the look-up table ④ is input to the look-up table ②′ as the other input “j”, and is also input to a look-up table ②″ in a third difference-calculation processing unit 7 , as one input “i”.
  • as the other input “j” of the look-up table ②″, an input image, which is captured by the image-input means 1 , is input.
  • the input image includes the background, stopped objects, objects moving at a low speed, objects moving at a middle speed, and objects moving at a high speed.
  • the look-up table ②″ is implemented in a manner similar to the case of the look-up table ② shown in FIG. 5(B).
  • the look-up table ②″ outputs the value of the difference between its two inputs.
  • plural images are generated on the basis of plural reference speeds, and plural types of objects are individually extracted from an original image based on the speed ranges. Therefore, it becomes possible to reduce the number of the objects which are to be supervised and processed, for example, to those objects having a low speed. All others are filtered out.
  • the data in the image memories can be modified before the object moves outside a region of an original or first image in which the object appears.
  • the offset value is set to a large value, and the data in the image memories are intended to be rapidly modified.
  • the offset value is set to a small value, and the data in the image memories are intended to be modified for a longer period of time.
  • the offset value is changed in accordance with the speed of the moving object to be processed. Therefore, even when there are a lot of moving objects to be processed, the difference in speeds between these moving objects can be easily discriminated, and all the necessary objects can be distinguished from each other.
  • FIGS. 6 (A) and 6 (B) are diagrams for explaining another example in which images containing objects moving at different speeds are respectively extracted in a first preferred embodiment of the present invention.
  • FIG. 6(A) illustrates a plurality of images which are captured during a sampling time period
  • FIG. 6(B) illustrates an image in which a plurality of moving objects have been classified into several groups on the basis of different speeds.
  • a stopped object B can be extracted with a value which is obtained by accumulating the above-mentioned n images and averaging the thus accumulated images. Namely, since the stopped object B does not change position at each sampling time t 1 , t 2 , . . . t n , the stopped object B can be easily extracted from the average background as shown in the lower part of FIG. 6(A).
  • the density value of the moving object A is smaller than a threshold value, and consequently, when the averaged image is compared or tested against the threshold, the moving object A disappears from the average background. As a result, the stopped object B remains in the average background, while the moving object A appears not to exist in the average background.
  • the image only including the moving object A can be easily extracted by calculating a difference between the average background and each image at each sampling time t 1 , t 2 , . . . t n .
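The averaging just described can be written directly: a stopped object keeps its full density in the mean of n sampled frames, while a fast-moving object contributes to each pixel only briefly and falls below the threshold. A minimal sketch with an assumed threshold; the names are illustrative.

```python
import numpy as np

def average_background(frames):
    """Pixel-wise average of the n sampled frames (lower part of FIG. 6(A))."""
    return np.mean([f.astype(np.float32) for f in frames], axis=0)

def extract_moving_objects(frame, frames, threshold=25):
    """Objects moving fast enough to vanish from the average reappear in the
    difference between one sampled frame and the averaged background."""
    diff = np.abs(frame.astype(np.float32) - average_background(frames))
    return (diff > threshold).astype(np.uint8)
```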
  • the location of the moving object D moving at a low speed at sampling time t 2 , t 3 partially overlaps with a location of moving object D at the first sampling time t 1 .
  • the moving object D at the fourth sampling time t 4 does not overlap with the location of the moving object D at the first sampling time t 1 .
  • a moving object E moving at a middle speed at a second sampling time t 2 partially overlaps with the location of moving object E at the first sampling time t 1 .
  • the location of moving object E at the third sampling time t 3 does not overlap with the location of moving object E at the first sampling time t 1 .
  • the density value of the moving object E moving at a middle speed becomes smaller than a threshold value in the thus divided image.
  • the object D moving at a low speed at sampling times t 2 , t 3 partially overlaps with the object D at a first sampling time t 1 and consequently the moving object D can be extracted.
  • the stopped object B also can be extracted, simultaneously with the moving object D.
  • the moving object D can be finally distinguished from the other objects B, E.
  • the moving object E at a middle speed can be isolated and extracted.
  • in a first difference-calculation processing unit, on the basis of these images, it is possible to calculate a difference between a background output from a background image extract unit and the image containing objects moving at a low speed which is output from the first average background extract unit. Consequently, a portion of an image including stopped objects and objects moving at a low speed can be obtained from the first difference-calculation processing unit.
  • in a second difference-calculation processing unit, it is possible to calculate a difference between the image output from the first average background extract unit and the image output from the second average background extract unit. Consequently, a portion of an image including objects moving at a middle speed can be obtained from the second difference-calculation processing unit.
  • in a third difference-calculation processing unit, it is possible to calculate a difference between the image output from the second average background extract unit and an original input image. Consequently, a portion of an image including objects moving at a high speed can be obtained from the third difference-calculation processing unit.
  • the first average background extract unit 3 only extracts a background, stopped objects (including obstacles), and objects moving at a low speed.
  • the second average background extract unit 4 only extracts a background, stopped objects, objects moving at a low speed, and objects moving at a middle speed.
  • an image “c” is obtained from the first average background extract unit 3
  • an image “d” is obtained from the second average background extract unit 4 .
  • an image “e” only including the stopped objects and the objects moving at a low speed is obtained from the first difference-calculation processing unit 5 . Further, an image “f” only including the objects moving at a middle speed is obtained from the second difference-calculation processing unit 6 . Further, an image “g” only including the objects moving at a high speed is obtained from the third difference-calculation processing unit 7 .
  • each of a first local-area characteristic-amount extract unit 8 , a second local-area characteristic-amount extract unit 9 , and the N+1-th local-area characteristic-amount extract units 10 has the same construction. Therefore, the construction of the first local-area characteristic-amount extract unit 8 , which calculates object parameters such as center of gravity, length, circumference, etc., will be representatively described with reference to FIG. 7, described in detail hereinafter.
  • FIG. 7 is a block diagram showing the construction of the first local-area characteristic-amount extract unit in a first preferred embodiment of the present invention.
  • an area of the whole input image (the entire captured image), which is taken or captured by an image-input unit 1 , is allocated (or divided) in advance into a plurality of local areas.
  • a plurality of local areas L 0 to L 4 are established as local areas which are used to trace cars moving in the traffic lane on the left side L.
  • a plurality of local areas R 0 to R 4 are established as local areas which are used to trace cars moving in the traffic lane on the right side R.
  • a plurality of local area extract processing units 8 - 1 , 8 - 2 , . . . 8 -m (“m” denotes any natural number more than 2) respectively are provided.
  • a first local area extract processing unit 8 - 1 includes a first local-area determining unit 11 - 1 , a first noise canceling unit 12 - 1 , a first labeling processing unit 13 - 1 , and a first characteristic-amount calculation unit 14 - 1 .
  • the first local-area determining unit 11 - 1 defines one of the local areas which must be processed by the first local area extract processing unit 8 - 1 .
  • the first local-area determining unit 11 - 1 defines the range of the local area L 0 and extracts a portion of the input image within this range.
  • the first noise canceling unit 12 - 1 eliminates noise from a signal which is sent from the first local-area determining unit 11 - 1 .
  • the noise canceling unit 12 - 1 is implemented by a low pass filter.
  • the first labeling processing unit 13 - 1 carries out a labeling process.
  • the labeling process is executed to provide the same label to each of the same objects with respect to input images generated in time series in the given local area.
  • the first characteristic-amount calculation unit 14 - 1 checks to determine whether the thus labeled area exists. If a plurality of the thus labeled area actually exist, the first characteristic-amount calculation unit 14 - 1 produces a projection for each of the labeled areas, and further calculates a position of the “center-of-gravity” in each of the labeled areas, the value of the length and breadth of each of the labeled areas, and the value of an area (space) in each of the labeled areas. Namely, the first characteristic-amount calculation unit 14 - 1 estimates a plurality of characteristic-amounts or object parameters for each of the labeled areas that can be used to identify and track objects.
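The labeling and characteristic-amount steps can be pictured with ordinary connected-component tools. The sketch below uses scipy.ndimage.label and computes, for each labeled area, the X/Y projections, the center of gravity, the bounding-box length and breadth, and the area, roughly the quantities named above; it is illustrative only and not the patent's implementation.

```python
import numpy as np
from scipy import ndimage

def local_area_characteristics(binary_local_area):
    """Label a binary local-area image and compute per-object characteristic amounts."""
    labels, count = ndimage.label(binary_local_area)
    results = []
    for lab in range(1, count + 1):
        mask = labels == lab
        ys, xs = np.nonzero(mask)
        results.append({
            "projection_x": mask.sum(axis=0),          # projection onto the X axis
            "projection_y": mask.sum(axis=1),          # projection onto the Y axis
            "center_of_gravity": (xs.mean(), ys.mean()),
            "length": int(xs.max() - xs.min() + 1),    # extent along X
            "breadth": int(ys.max() - ys.min() + 1),   # extent along Y
            "area": int(mask.sum()),
        })
    return results
```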
  • the other local area extract processing units 8 - 2 . . . 8 -m respectively include the corresponding local-area determining units 11 - 2 . . . 11 -m, the corresponding noise canceling units 12 - 2 . . . 12 -m, the corresponding labeling processing unit 13 - 2 ... 13 -m, and the corresponding characteristic-amount calculation unit 14 - 2 . . . 14 -m.
  • FIGS. 8 (A) to 8 (F) are diagrams for explaining operations of a first preferred embodiment of the present invention in a case where a plurality of stationary objects (e.g., a stopped car and an obstacle existing in the same series of images) and a plurality of moving objects exist together.
  • in FIG. 8(A), it is assumed that there are a stopped car P 1 which has tail lamps flashing, an obstacle P 2 , and a low speed car P 3 , at a certain sampling time “t”, to simplify the explanation.
  • an output from the first difference-calculation processing unit 5 is indicated by an image shown in FIG. 8(C).
  • in this image, all the objects, including the obstacle P 2 , are extracted.
  • an image as shown in FIG. 8(E) is obtained.
  • cars exist in a portion of the local areas L 1 , L 2 and L 4 . Therefore, characteristic-amounts or object parameters can be calculated for the three local areas.
  • the image of FIG. 8(A) changes to another image as shown in FIG. 8(B) at the sampling time when several seconds have elapsed after the sampling time “t” (i.e., sampling time “t + several seconds”). That is, the images of 8 (A) and 8 (B) are captured with a sampling interval of several seconds between them. Further, an output from the first difference-calculation processing unit 5 is indicated by the image shown in FIG. 8(D). When this image and the local areas in FIG. 4 overlap with each other, similar to the case of FIG. 8(C), an image as shown in FIG. 8(F) is obtained. In FIG. 8(F), cars exist in the local areas L 2 and L 4 . Therefore, characteristic-amounts can also be calculated in these two local areas.
  • the characteristics or parameter extraction is processed in a time series and used by a list making unit 20 - 2 in a locus calculation unit 20 of FIG. 2, and a list is created by the list making unit 20 - 2 .
  • An example of the list is shown in the following table 1.
  • Each of the circles (○) in the table 1 indicates that characteristic-amounts, such as the center-of-gravity, can be or are obtained at each corresponding sampling time; namely, any object (including an obstacle) exists at the given time in the area.
  • the list is used by the list analyzing unit 20 - 3 to track the location or locus of an object.
  • FIGS. 9 (A) to 9 (F) are diagrams for explaining operations of a preferred embodiment of the present invention in the case where a plurality of moving objects moving at a middle speed exist.
  • in FIGS. 9 (A) to 9 (F), speed-range images, extraction images, and the condition in which two different images overlap with each other are illustrated at sampling times “t” and “t + several seconds”, respectively, in the case where middle speed cars exist in the images.
  • FIGS. 10 (A) to 10 (F) are diagrams for explaining operations of a preferred embodiment of the present invention in the case where a plurality of objects moving at a high speed may exist.
  • in FIGS. 10 (A) to 10 (F), a high speed car is not illustrated. Also, in a table 3 corresponding to these figures, no circle is inserted, as shown below.
    TABLE 3 (High speed): columns L0, L1, L2, L3, L4, R0, R1, R2, R3, R4; rows TIME t, t 1 , t 2 , t 3 , t 4 , t 5 , t 7 , t 8 , . . . , t + SEVERAL SECONDS; all cells are empty (no circles).
  • a list analyzing unit 20 - 3 in a locus calculation unit 20 of FIG. 2 analyzes the content of the table 3. Consequently, it is determined that a car moving at a high speed does not exist in either of the traffic lanes.
  • the table 1 mentioned before is rather complicated and difficult to analyze. However, on the basis of the table 1, the below-mentioned facts can be discriminated or determined.
  • the above-mentioned analyzing process is carried out by discriminating whether an object exists in local areas, with the relationship between time and position being taken into consideration.
  • the value of a length and breadth of the object or the value of an area of the object can be utilized as a characteristic-amount or parameter.
  • a table 4 illustrates an example in which a car executing a change of the traffic lane is detected.
  • the change of the traffic lane can be easily discriminated or detected by tracing the movement of circles in the table 4 on the basis of the abovementioned description.
  • An object which has existed in a local area until a given time, instantaneously disappears. However, at that time when the object disappears from one local area, another object appears in another local area, particularly in the adjoining local area. In this case, it is discriminated or determined that the object disappeared before is a car executing a change of the traffic lane.
  • a speed of the object can be also calculated. For example, when the value of a distance length of a certain local area is defined as L, and the value of the length of time in which the object is positioned in the local area is defined as T, a speed of the object can be calculated by a calculation of L/T.
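The list of Table 1 and the locus analysis can be modelled as a per-sampling-time record of which local areas contained an object: a lane change shows up as a disappearance from one lane's areas with a simultaneous appearance in the adjoining lane, and the speed follows from L/T as in the bullet above. The data and names below are made up for illustration.

```python
# presence[k] = set of local areas in which an object was detected at sampling time t_k
presence = [{"L1"}, {"L2"}, {"R2"}, {"R3"}]   # disappears from L2, appears in adjoining R2

LEFT = {"L0", "L1", "L2", "L3", "L4"}
RIGHT = {"R0", "R1", "R2", "R3", "R4"}

def detect_lane_change(presence):
    """True when the object leaves one lane's local areas and appears in the other lane's."""
    for prev, cur in zip(presence, presence[1:]):
        if prev & LEFT and not (cur & LEFT) and cur & RIGHT:
            return True
        if prev & RIGHT and not (cur & RIGHT) and cur & LEFT:
            return True
    return False

def speed(area_length_m, dwell_time_s):
    """Speed of an object = length L of the local area / time T spent in it."""
    return area_length_m / dwell_time_s

print(detect_lane_change(presence))   # True
print(speed(30.0, 2.0))               # 15.0 m/s for a 30 m area crossed in 2 s
```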
  • FIGS. 11 (A) and 11 (B) are diagrams for explaining operations of a first preferred embodiment of the present invention in a case where a large-scale car and a small-scale car exist together in the image.
  • in FIG. 11(A), an image in which both a large-scale car P L and a small-scale car P S move in the traffic lane on the right side is illustrated. Further, when this image and a plurality of local areas shown in FIG. 4 overlap with each other, an image as shown in FIG. 11(B) is obtained.
  • the character analyzing unit 20 - 1 analyzes a characteristic or parameter concerning the shape of the object, and discriminates the same moving object. On the basis of a result of this discrimination, the character analyzing unit 20 - 1 instructs the list making unit 20 - 2 to determine a locus of the same moving object. In this way, it becomes possible to easily obtain the locus of the same moving object.
  • FIGS. 12 (A) to 12 (E) are diagrams for explaining operations of a second preferred embodiment of the present invention in a case where a large-scale moving object and a small-scale moving object exist together in an airport.
  • in FIGS. 12 (A) to 12 (E), the case where an image processing apparatus of the present invention is applied to a spot supervisory system utilizing a view of a predetermined spot in an airport will be described.
  • an attempt is made to distinguish a large-scale moving object moving at a low speed (for example, an airplane) from small-scale moving objects moving at a middle or high speed (for example, special cars used for various work such as baggage handling), and to examine attributes of the large-scale moving objects.
  • a plurality of local areas C 0 , L 0 to L 7 , and R 0 to R 7 are provided in a manner shown in FIG. 12(A).
  • the local area C 0 is intended to detect an airframe of the airplane.
  • the other local areas L 0 to L 7 , and R 0 to R 7 are intended to detect the other small-scale moving objects, e.g., the special cars.
  • FIG. 12(B) shows a condition in which an airplane J stops in a spot
  • FIG. 12(C) shows the condition in which a plurality of special cars SP 1 , SP 2 move in various directions.
  • FIG. 12(D) shows a situation in which the image in FIG. 12(B) and the local areas in FIG. 12(A) overlap with each other; and FIG. 12(E) shows the situation in which the image in FIG. 12(C) and the local areas in FIG. 12(A) overlap with each other.
  • the value of a length of the airframe is calculated by producing a projection of the airplane J in a direction corresponding to the longer sides of the local area C 0 .
  • the value of a length of the airframe can be measured. Further, it becomes possible to identify a type of the airplane J on the basis of the measured length of the airframe. In the case where the airplane J comes close to the spot and stops, the value of the length of the airframe gradually changes as illustrated in the following table 7.
    TABLE 7 (Length of airplane)
    TIME:                       t    t1   t2   t3   t4   t5   t6   t7   t8
    LENGTH OF PROJECTION OF C0: 0    50   100  150  200  200  200  200  200
  • the condition in which the value of a projection in the local area L 5 is obtained is illustrated in FIG. 13(B).
  • two types of projections exist, the first in X direction (X 0 , X 1 , . . . X n ) and the second in Y direction (Y 0 , Y 1 , . . . Y n ) , respectively. Therefore, it can be discriminated that two different objects independently exist in the local area.
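One way to read "two types of projections" is that the non-zero part of a projection histogram splits into separate runs, one per object, so counting runs tells how many objects share the local area. The helper below counts such runs; it is an illustrative interpretation rather than the exact rule of the embodiment.

```python
import numpy as np

def count_projection_runs(projection):
    """Count contiguous non-zero runs in a 1-D projection histogram."""
    nonzero = np.asarray(projection) > 0
    # a run starts wherever a non-zero bin follows a zero bin (or the start of the axis)
    prev = np.concatenate(([False], nonzero[:-1]))
    return int(np.count_nonzero(nonzero & ~prev))

# e.g. an X-direction projection of local area L5 with two separated objects
print(count_projection_runs([0, 3, 5, 4, 0, 0, 2, 6, 1, 0]))   # -> 2
```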
  • the various speeds of a plurality of moving objects have been classified into only three ranges (low speed, middle speed, and high speed).
  • speeds of a plurality of moving objects are not limited to these three ranges.
  • FIGS. 14 (A) to 14 (C) are diagrams for explaining a process of calculating a distance between two moving objects in a first preferred embodiment of the present invention.
  • a process of calculating a distance between two moving objects is assumed to be carried out only by the first preferred embodiment, to compare the first preferred embodiment with a third preferred embodiment that will be hereinafter described.
  • in FIGS. 9 (A) to 9 (E) and the table 2, some moving objects moving at a middle speed exist in the local areas L 1 , L 4 .
  • various characteristic-amounts or object parameters are extracted by a moving object extract unit 100 , by utilizing an extraction process previously described, for each sampling time period.
  • the moving object extract unit 100 corresponds to the average background extract units and the difference-calculation processing units illustrated in FIG. 1 or FIG. 2.
  • a moving object correlating unit 101 correlates the same objects in a time series of images with each other, in accordance with the value of the characteristic-amounts using a limit value for speed of the moving objects.
  • characteristic-amounts or parameters such as a contour of each moving object, and a position or inclination of the surface of each moving object, may be extracted from an original input image, by utilizing image density and color information for each of the moving objects.
  • a distance measuring unit 102 measures a distance between two moving objects of the thus correlated moving objects. In this way, a compression process that converts image data into numerical data can be carried out. On the basis of such numerical data, an analysis and anticipation of the movement of each of the moving objects can be carried out.
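Correlating "the same objects" across sampling times while respecting a limit value for speed can be done with a simple nearest-neighbour association that rejects matches implying an impossible displacement. A minimal sketch with an assumed unit of pixels per sampling interval; the names are illustrative.

```python
def correlate_objects(prev_objs, cur_objs, max_displacement=40.0):
    """Greedy nearest-neighbour matching of object centroids between two sampling times.

    prev_objs, cur_objs: lists of (x, y) centroids.  Matches farther apart than
    max_displacement (the speed limit expressed as pixels per sampling interval)
    are rejected, so physically impossible correspondences are never made."""
    matches, used = [], set()
    for pi, (px, py) in enumerate(prev_objs):
        best, best_d = None, max_displacement
        for ci, (cx, cy) in enumerate(cur_objs):
            if ci in used:
                continue
            d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = ci, d
        if best is not None:
            used.add(best)
            matches.append((pi, best))
    return matches
```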
  • an original input image is classified on the basis of the speeds of moving objects existing in the original input image. Further, a plurality of images are generated, and the thus generated images are correlated with all the moving objects. Therefore, moving objects respectively having different speeds are correlated with each other with a sufficient degree of accuracy.
  • in a video camera (image-input unit), an original image is input. When the video camera is set above a road and captures a plurality of moving objects, e.g., cars passing through the view of the video camera, the video camera takes an image of the plurality of moving objects on the road, which has a black color.
  • FIG. 15 is a schematic block diagram showing a third preferred embodiment of an image processing apparatus according to the present invention.
  • a plurality of markers are placed in advance in a background where all the objects move, by means of a marker holding unit 110 .
  • the position, the shape, and the size of each marker, etc., are calculated or known in advance.
  • a plurality of markers are placed or created by drawing white lines in the road at regular spacings or intervals.
  • the markers used in the present invention are not limited to white lines, and any other things having various shapes can be utilized as the markers.
  • in FIG. 15, 114 denotes an image-input unit similar to that used in FIG. 1 or FIG. 2.
  • a marker/moving object extract unit 111 calculates or determines a portion of the image in which the moving objects and the markers overlap with each other, and extracts each of the moving objects. Further, in regard to a plurality of images which are input over a period of time, a portion of each image in which the moving objects and the markers overlap with each other can be easily extracted.
  • a marker/moving object correlating unit 112 correlates the obtained data, and identifies the same moving object. Further, on the basis of the number of markers existing between two different moving objects, a distance measuring unit 113 calculates a distance between two moving objects.
  • the time series data concerning the markers may be correlated with each other, in place of the time series data on the moving objects.
  • by using the data about the markers, it becomes possible to grasp or identify the markers existing between the same moving objects extracted at the different sampling times, and to calculate a distance between two moving objects.
  • an abnormality, such as an accident within the view of the image processing apparatus, can be anticipated.
  • FIG. 16 is a block diagram showing in detail the main parts of a third preferred embodiment of the present invention.
  • the reference numeral 110 denotes a marker holding unit; 111 denotes a marker/moving object extract unit; 112 denotes a marker/moving object correlating unit; 113 denotes a distance measuring unit; 121 denotes a processed area setting means; 122 denotes a binary code processing unit; 123 denotes a noise canceling unit; 124 denotes a connected-area extract unit; 125 denotes a connected-area position/shape calculating unit; and 126 denotes a marker collating unit.
  • the reference numeral 127 denotes a marker dictionary unit; 128 denotes a moving object extract unit; 129 denotes a marker extract unit; 131 denotes a moving object/marker time-series table making unit; and 132 denotes a moving object/marker correlating unit.
  • FIGS. 17 (A) to 17 (C) are diagrams showing the condition in or positions at which markers are provided and various information about markers is registered in a marker dictionary, in this third preferred embodiment of the present invention.
  • the marker holding unit 110 places or sets a plurality of markers in the background.
  • the data about these markers are stored in advance in the marker dictionary unit 127 .
  • the markers are obtained by coating or painting the road (hatched portion) with a plurality of white lines P1, P2, . . . , P25 thereon at equal spacings.
  • the value of a width of each of the white lines P1, P2, . . . , P25 is 50 cm, and twenty-five (25) white lines are drawn with a spacing of 50 cm.
  • the value of 50 cm is indicated in a display screen by ten dots (10 dots) or image pixels.
  • the left end X coordinate is defined as the position corresponding to fifty bits (50 bits) in the X coordinate direction.
  • the right end X coordinate is defined as the position corresponding to five hundred bits (500 bits) in the X coordinate direction.
  • a coordinate (x, y) at the left upper end of the white line P25 is represented as (50, 480), and a coordinate (x, y) at the right lower end thereof is represented as (500, 490).
  • the data for each coordinate (x, y) is registered or stored in the marker dictionary unit 127 by the marker holding unit 110 .
  • a coordinate (x, y) at the left upper end and a coordinate (x, y) at the right lower end of each of the other white lines are likewise registered or stored in the marker dictionary unit 127.
  • the shape of the markers is registered in the marker dictionary unit 127 .
  • FIGS. 18(A) and 18(B) are diagrams used for explaining a process of setting a region of the object to be processed for the passage of moving objects in the third preferred embodiment.
  • the processed area setting means 121 previously mentioned with respect to FIG. 16 determines a region in which the distance between two moving objects is to be measured, in the case where some moving objects exist in a region where a plurality of white lines (markers) are placed.
  • in FIG. 18(A), the region in which the objects are to be processed is defined as the hatched portions in the case of the road having two opposed traffic lanes.
  • the marker dictionary unit 127 is updated.
  • FIG. 18(B) illustrates the region concerning the traffic lane on the left side. In a case where the markers are provided by taking into consideration the region in which the objects are to be processed in the traffic lanes on the left and right sides, it is not necessary to define such a processed region.
  • FIGS. 19 (A) to 19 (C) are diagrams respectively showing region isolation processing, binary code processing, and noise canceling, in the third preferred embodiment.
  • a rectangular size of the region to be processed is defined by a coordinate (x1, y1) at the left upper end and also by a coordinate (x2, y2) at the right lower end. Further, it is assumed that an input image of a pixel (i, j) is INij, and an output image of a pixel (i, j) is OUTij.
  • any input image pixel existing in the region to be processed is directly output as an output image by the unit 121.
  • the image pixels existing outside this region are output as zero (0), as illustrated by the sketch below.
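  • The following is a minimal sketch (an illustration, not part of the original disclosure) of the processed-area setting just described; the function name, the NumPy arrays, and the (x1, y1)-(x2, y2) rectangle convention are assumptions.

    import numpy as np

    def set_processed_area(in_img, x1, y1, x2, y2):
        """Keep pixels inside the rectangular processed area; zero elsewhere.

        in_img   : 2-D array of input pixel values IN(i, j)
        (x1, y1) : left upper end of the region to be processed
        (x2, y2) : right lower end of the region to be processed
        """
        out_img = np.zeros_like(in_img)  # OUT(i, j) = 0 outside the region
        out_img[y1:y2 + 1, x1:x2 + 1] = in_img[y1:y2 + 1, x1:x2 + 1]  # pass-through inside
        return out_img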
  • the binary code processing unit 122 outputs an output image OUTij which has the value of “1”, when the value of an input image pixel INij is smaller than a threshold value th1 and larger than a threshold value th2. In other cases, the binary code processing unit 122 outputs an output image OUTij which has the value of “0”.
  • these threshold values th1, th2 are set in accordance with the environmental illumination. However, in an environment in which a change of illumination may occur, these threshold values th1, th2 are adaptively adjusted, e.g., by calculating a histogram of the density of the image, etc.; a sketch of this binarization is given below.
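  • A minimal sketch of the binary code processing just described follows; it is not the patent's implementation, and the histogram-based adjustment of th1 and th2 is only one assumed way of adapting to illumination changes.

    import numpy as np

    def binarize(in_img, th1, th2):
        """Binary code processing: OUT(i, j) = 1 when th2 < IN(i, j) < th1, else 0."""
        return ((in_img > th2) & (in_img < th1)).astype(np.uint8)

    def adapt_thresholds(in_img, dark_frac=0.60, bright_frac=0.10):
        """One assumed adaptive adjustment: derive th1/th2 from the cumulative
        gray-level histogram so that the white markers stay between the two
        thresholds when the overall illumination changes."""
        hist, _ = np.histogram(in_img, bins=256, range=(0, 256))
        cdf = np.cumsum(hist) / hist.sum()
        th2 = int(np.searchsorted(cdf, dark_frac))          # below th2: dark road surface
        th1 = int(np.searchsorted(cdf, 1.0 - bright_frac))  # above th1: saturated highlights
        return th1, th2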
  • the noise canceling unit 123 eliminates an isolated point of noise, in a case where such an isolated point exists in an output from the binary code processing unit 122. Namely, the noise canceling unit 123 extracts a pattern in which a plurality of dots (pixels), e.g., a group of dots positioned in four adjacent positions, exist.
  • such a group of pixels is detected by utilizing a logical filter F of 3 × 3 pixels in size.
  • when such a group of pixels is detected, the noise canceling unit 123 outputs an output image OUTij which has the value of “1”; a sketch of this filtering is given below.
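  • The following sketch (an illustration, not the disclosed circuit) removes isolated points with a 3 × 3 logical filter; the neighbour-count parameter is an assumption.

    import numpy as np

    def cancel_isolated_noise(bin_img, min_neighbors=1):
        """Remove isolated '1' pixels with a 3x3 logical filter.

        A pixel keeps the value 1 only if enough of its 8 neighbours are also 1;
        the neighbour count (min_neighbors) is assumed for illustration.
        """
        padded = np.pad(bin_img, 1, mode="constant")
        # Sum of the 3x3 neighbourhood, excluding the centre pixel itself.
        neighbor_sum = sum(
            padded[1 + dy:padded.shape[0] - 1 + dy, 1 + dx:padded.shape[1] - 1 + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
        )
        return ((bin_img == 1) & (neighbor_sum >= min_neighbors)).astype(np.uint8)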
  • FIGS. 20 (A) to 20 (C) are diagrams for explaining a process of labeling a given object in the third preferred embodiment.
  • the connected-area extract unit 124 provides the same label for a pattern in which a large number of dots (pixels), e.g., a group of dots which are positioned adjacent to each other (an eight direction test).
  • a group of pixels are connected when the binary value, in at least one of all eight directions including the four oblique directions, the upper and lower directions, and the left and right directions, is “1”.
  • the labels 2, 3 and 4 are provided in a manner as shown in an image A2.
  • the labeling process is executed by scanning an input image by means of several pixel patterns A to F, each constituted by a matrix of 2 × 3. Further, in a case where the value of a given pixel E is equal to “1”, the label is updated in accordance with the circumferential patterns A to F.
  • an input image is first scanned. Thereafter, by utilizing a table storing the correspondence relation between labels, labels are attached to the input image.
  • This technique is disclosed in Japanese Unexamined Patent Publication (Kokai) No. 3-206574 (Raster Scan Type Labeling Processing System).
  • the above-mentioned labeling process is realized by a general-purpose CPU (Central Processing Unit) or a DSP (Digital Signal Processor).
  • the related techniques are disclosed in Japanese Unexamined Patent Publication (Kokai) No. 61-243569 (System for Labeling to Digital Picture Area) and No. 63-27508 (Labeling Circuit for connected Area).
  • a label (1) is attached to a first white line P1; a label (2) is attached to a second white line P2; and a label (25) is attached to a twenty-fifth white line P25.
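  • For illustration, a simplified two-pass, raster-scan labeling routine of the kind described above is sketched below; the union-find bookkeeping and the function name are assumptions, not the patent's circuit.

    import numpy as np

    def label_connected_areas(bin_img):
        """Two-pass raster-scan labeling with 8-connectivity (a simplified sketch).

        Pixels with value 1 that touch in any of the eight directions receive
        the same label; label numbers start at 1.
        """
        h, w = bin_img.shape
        labels = np.zeros((h, w), dtype=np.int32)
        parent = [0]                      # union-find table of label equivalences

        def find(a):
            while parent[a] != a:
                a = parent[a]
            return a

        next_label = 1
        for i in range(h):
            for j in range(w):
                if bin_img[i, j] != 1:
                    continue
                # Already-scanned neighbours: upper-left, upper, upper-right, left.
                neigh = [labels[i + di, j + dj]
                         for di, dj in ((-1, -1), (-1, 0), (-1, 1), (0, -1))
                         if 0 <= i + di and 0 <= j + dj < w and labels[i + di, j + dj] > 0]
                if not neigh:
                    parent.append(next_label)   # new provisional label
                    labels[i, j] = next_label
                    next_label += 1
                else:
                    m = min(find(n) for n in neigh)
                    labels[i, j] = m
                    for n in neigh:             # record equivalences between touching labels
                        parent[find(n)] = m
        # Second pass: replace every provisional label by its representative.
        for i in range(h):
            for j in range(w):
                if labels[i, j]:
                    labels[i, j] = find(labels[i, j])
        return labels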
  • FIGS. 21 (A) and 21 (B) are diagrams for explaining a process of projecting a labeled object in the third preferred embodiment.
  • the connected-area position/shape calculating unit 125 calculates the shape and the position of the portion carrying each label. For example, as shown in FIG. 21(A), with respect to a label image LK to which the same label is attached, a projection V in the vertical direction and a projection H in the horizontal direction are produced. Further, the position and the shape of each of these projections are calculated or determined. Namely, the position and the size are estimated for every projection.
  • the projection H in the horizontal direction is obtained by calculating a histogram in the horizontal direction.
  • the projection V in the vertical direction is obtained by calculating a histogram in the vertical direction.
  • the position of the projection H is a longitudinal position, and the size thereof is (Pjh2, Pjh1).
  • the position of the projection V is a transverse position, and the size thereof is (Pjv2, Pjv1).
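  • A sketch of how the projections H and V and their positions and sizes might be computed for one label is given below; the exact correspondence of (Pjh1, Pjh2) and (Pjv1, Pjv2) to rows and columns is an assumption made for illustration.

    import numpy as np

    def label_projections(labels, target_label):
        """Horizontal and vertical projections of one labeled region (a sketch).

        Returns ((Pjv1, Pjv2), (Pjh1, Pjh2)): the first/last occupied columns of
        the vertical projection V and the first/last occupied rows of the
        horizontal projection H, from which position and size are derived.
        """
        mask = (labels == target_label)
        proj_h = mask.sum(axis=1)   # histogram in the horizontal direction (per row)
        proj_v = mask.sum(axis=0)   # histogram in the vertical direction (per column)
        rows = np.nonzero(proj_h)[0]
        cols = np.nonzero(proj_v)[0]
        pjh1, pjh2 = int(rows[0]), int(rows[-1])   # longitudinal position and extent
        pjv1, pjv2 = int(cols[0]), int(cols[-1])   # transverse position and extent
        return (pjv1, pjv2), (pjh1, pjh2)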
  • the marker collating unit 126 collates the marker dictionary 127, and discriminates whether a portion of the same label calculated by the connected-area position/shape calculating unit 125 is a marker which overlaps a moving object. More specifically, a coordinate of a left upper end Pn(x1, y1), and a coordinate of a right lower end Pn(x2, y2) of a marker (white line) stored in the marker dictionary 127 are read out. As described before, the data about the white lines shown in FIG. 17(C), the data for the white lines shown in FIG. 18(B), and the like, are stored in advance in the marker dictionary 127.
  • FIGS. 22 (A) and 22 (B) are diagrams for explaining a process of extracting a moving object which is a car having a color other than white and which passes through or over the markers in a third preferred embodiment.
  • the moving object extract unit 128 extracts a moving object (e.g., a car) which overlaps a marker.
  • when cars each having a color (e.g., black, red, or blue) other than white move over a plurality of markers, the two cars C1, C2 and the markers partially overlap with each other, as shown in FIG. 22(A).
  • when the two cars C1, C2 and the markers are simultaneously captured by a video camera or the like from an overhead position, the two cars C1, C2 are separated by the markers. Therefore, moving objects such as cars can be extracted.
  • FIGS. 24 (A) and 24 (B) are diagrams for explaining a process of extracting a moving object which is a white car and passes through markers, in the third preferred embodiment.
  • in the case where white cars move on the markers of white lines, the condition of the cars C1, C2 and the markers is illustrated in FIG. 24(A).
  • when this condition is input as an original input image for carrying out a labeling process, the same label is attached to a portion in which one car C1 and the markers overlap with each other. Also, the same label is attached to a portion in which the other car C2 and the corresponding markers overlap with each other.
  • the marker extract unit 129 extracts the markers, and calculates a distance between two cars.
  • with respect to a label (5) shown in FIG. 22(A), it is discriminated or determined whether the size and the shape (rectangular) of the portion to which the same label is attached conform to the data stored in the marker dictionary 127. If it is confirmed that the size and the shape conform to the data stored in the marker dictionary 127, the label is detected as one of the markers; a sketch of this collation is given below.
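  • The following sketch illustrates this collation against the marker dictionary; the dictionary entry format, the function name, and the pixel tolerance are assumptions, not values taken from the patent.

    def collate_with_marker_dictionary(region, dictionary, tol=2):
        """Check whether a labeled region conforms to a registered marker (a sketch).

        region     : bounding box of one label, e.g.
                     {"x1": 50, "y1": 480, "x2": 500, "y2": 490}
        dictionary : list of marker entries stored in advance in the marker
                     dictionary unit 127, each with the same keys plus a name
        tol        : positional tolerance in pixels (an assumed value)

        Returns the name of the matching marker (e.g. "P25"), or None when the
        region does not conform, which suggests the marker is partly hidden
        by a moving object.
        """
        for entry in dictionary:
            if (abs(region["x1"] - entry["x1"]) <= tol and
                    abs(region["y1"] - entry["y1"]) <= tol and
                    abs(region["x2"] - entry["x2"]) <= tol and
                    abs(region["y2"] - entry["y2"]) <= tol):
                return entry["name"]
        return None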
  • the size of each of the markers and the space between the markers can be calculated, and a distance between two cars can be calculated.
  • each of the markers has the shape shown in FIG. 23(A)
  • a projection of an original marker and a continuous region from Pjh1 to Pjh2 can be obtained, and on the basis of a region from Pjh1 to Pjh2, a distance between two cars can be calculated.
  • FIGS. 25 (A) to 25 (E) are diagrams showing various tables which are utilized for calculating a distance between two moving objects in the third preferred embodiment.
  • the moving object/marker time-series table making unit 131 creates a time-series table of the moving objects and of the markers at which the moving objects exist. As shown in FIG. 25(A), the time-series table of the moving objects indicates the position of each moving object with respect to the markers. In this case, it is assumed that each of the moving objects moves from marker P25 to P2.
  • the moving object/marker correlating unit 132 provides the same number for each of the moving objects which are discriminated or determined to be the same. In this discrimination, the condition that different moving objects have a predetermined distance, and also a direction (in this case, the direction in which moving objects move is P 25 to P 1 ) are considered. Further, as shown in FIG. 25(C), by making a time-series table showing markers existing between the moving objects, the correspondence relation between the moving objects (shown in FIG. 25(B)) and the related markers can be clarified.
  • each of the markers is traced by taking only markers existing between moving objects into consideration. Therefore, the number of candidates necessary for the correlation between the moving objects and the markers can be reduced. Consequently, as compared to the case in which moving objects are extracted without markers, the technique of the third embodiment allows an extracting or identification process to be carried out at a high speed.
  • a distance between two cars is defined by two white lines P 4 , P 5 at the sampling time T 2 , while the distance is defined by two white lines P 3 , P 4 at the sampling time T 3 .
  • the subject car exists over a plurality of white lines.
  • in FIG. 25(E), the same mark is provided for a plurality of white lines, to indicate that the subject car moves on or over a plurality of white lines.
  • the distance measuring unit 113 measures a distance between two moving objects, e.g., cars. For example, as shown in FIG. 25(B), on the basis of the correlation in the table between moving objects, the distance between two moving objects is estimated by calculating a distance between white lines (markers) at each sampling time. Further, the maximum value, the minimum value, and the average value are calculated. For example, the moving objects are correlated over a plurality of images at the sampling times Tm to Tn, and the average value is calculated by utilizing the following equation.
  • a difference between white lines (WL) is defined as n − m.
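  • The averaging equation itself is not reproduced in this text extraction; the sketch below reconstructs a plausible version from the surrounding description (50 cm lines drawn at 50 cm spacings, WL difference = n − m). The conversion factor, the function names, and the input format are assumptions made for illustration.

    LINE_WIDTH_CM = 50      # width of each white line (from the marker dictionary)
    LINE_SPACING_CM = 50    # spacing between adjoining white lines

    def distance_from_white_lines(n, m):
        """Distance implied by the white lines Pm .. Pn bounding the gap between
        two cars at one sampling time; the WL difference is n - m."""
        wl = n - m
        return wl * (LINE_WIDTH_CM + LINE_SPACING_CM)   # in centimetres

    def average_distance(bounding_lines):
        """Average, minimum and maximum distance over the sampling times Tm..Tn.

        bounding_lines: one (m, n) pair per sampling time, e.g. [(4, 5), (3, 4)]
        for the example of FIG. 25 where the gap is bounded by P4, P5 at T2
        and by P3, P4 at T3.
        """
        d = [distance_from_white_lines(n, m) for (m, n) in bounding_lines]
        return sum(d) / len(d), min(d), max(d)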
  • the modified connected-area extract unit 124 extracts a contour corresponding to a portion where each binary code of binary information is “1”.
  • a contour obtained by a color (color of the marker) extracting process can be extracted by the connected-area extract unit 124 .
  • a starting point for the extracting process, the maximum and minimum values of x and y, and a length of a circumference are stored, and a contour extracting process is started.
  • the maximum and minimum value of x, y, and a length of the circumference of the contour are calculated.
  • the connected-area position/shape calculating unit 125 compares the maximum and minimum value in both of the x-component and y-component and the value of the length of the circumference with the values stored in the marker dictionary 127 . Further, it is concluded that a contour has a rectangular shape, in a case where the maximum and minimum value in both of the x-component direction and y-component direction and the value of the length of the circumference conform to the value stored in the marker dictionary 127 .
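  • The following is an illustrative sketch of this contour-based conformity check; treating the number of traced contour points as an approximate circumference, and the tolerances used, are assumptions rather than values from the disclosure.

    def contour_is_marker(contour_points, dict_entry, tol=3):
        """Decide whether a traced contour matches a rectangular marker (a sketch).

        contour_points : list of (x, y) points obtained by the contour tracing
        dict_entry     : dictionary values for one marker, e.g.
                         {"x1": 50, "y1": 480, "x2": 500, "y2": 490}
        tol            : tolerance in pixels (an assumed value)
        """
        xs = [p[0] for p in contour_points]
        ys = [p[1] for p in contour_points]
        width = dict_entry["x2"] - dict_entry["x1"]
        height = dict_entry["y2"] - dict_entry["y1"]
        expected_perimeter = 2 * (width + height)
        bbox_ok = (abs(max(xs) - min(xs) - width) <= tol and
                   abs(max(ys) - min(ys) - height) <= tol)
        perimeter_ok = abs(len(contour_points) - expected_perimeter) <= 4 * tol
        return bbox_ok and perimeter_ok   # a rectangle conforming to the dictionary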

Abstract

An image processing apparatus for extracting the specified objects has a background image extract unit for extracting a background; a first average background extract unit which extracts an image that includes a plurality of stationary and moving objects each having a speed not higher than a predetermined first speed and also the background; a second average background extract unit which extracts an image that includes the stationary and moving objects each having a speed not higher than a predetermined second speed and also the background; a first difference-calculation processing unit which calculates a difference between an output from the background image extract unit and an output from the first average background extract unit as a first speed image; a second difference-calculation processing unit which calculates a difference value between two outputs from the first and second average background extract units as a second speed image; and a third difference-calculation processing unit which calculates a difference value between an original image and either one of outputs from the first and second average background extract units as a third speed image.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates to an image processing apparatus for accurately extracting one or a plurality of objects utilizing a thresholded differential image processing technique when a plurality of stationary objects and a plurality of moving objects are contained together in an image of a time sequence of images. [0002]
  • More specifically, the present invention relates to an image processing apparatus, which allows both a background image and an image including at least one stationary object or at least one moving object (having a speed not more than a predetermined speed) to be extracted, and also allows a difference-calculation process to be carried out between the images. [0003]
  • In such an image processing system, it is possible to distinguish a stationary object or a moving object in the images and it is also possible to analyze the movement of each of the moving objects. [0004]
  • Further, the present invention relates to an image processing apparatus, in which a plurality of markers are provided in a background where the objects move, and these markers are extracted by utilizing an image processing technique similar to the case in which the moving objects are extracted, and further it is discriminated whether or not the thus extracted markers are in the steady state. [0005]
  • If the markers are in the steady state, portions where the moving objects and the markers overlap each other can be determined. Therefore, the number of the markers (the size of markers displayed in each image), which are in the steady state and exist between two moving objects, can be calculated to obtain a distance between two moving objects. [0006]
  • In general, supervisory systems using the above-mentioned image processing technique can be utilized in various places. Each of these supervisory systems serves to rapidly locate an accident, a disaster, and the like. Recently, such supervisory systems are likely to be utilized for preventing such accidents, disasters, and the like, in addition to a function of merely detecting the existence of an accident, etc. [0007]
  • To meet this need, it is necessary to extract or identify an object which moves with an abnormal motion that will cause such an accident, a disaster, and the like. Therefore, an efficient technique is needed for rapidly and accurately detecting a moving object which demonstrates such an abnormal motion. [0008]
  • More specifically, it is required for the supervisory system to detect and analyze the movement of each of a plurality of moving objects contained in a series of images. Further, it is also necessary for the supervisory system to rapidly calculate a distance between the two moving objects with a high degree of accuracy. [0009]
  • 2. Description of the Related Art [0010]
  • Some techniques for analyzing the movement of each of a plurality of moving objects by utilizing an image processing apparatus are typically disclosed in Japanese Unexamined Patent Publication (Kokai) No. 5-159057 and No. 5-159058. [0011]
  • In each of these techniques, first, regions where moving objects may be positioned are extracted using a predetermined assumption. Next, a specified moving object is distinguished from the other objects, on the basis of various characteristics, e.g., the size of each of the regions, and the central position of each region. Subsequently, in accordance with a change of the position of the moving object with a lapse of time, the movement of the moving object can be analyzed. [0012]
  • For example, when an analysis of the motion of a man is to be performed, a given portion of an image which is to be analyzed, is extracted from the image. Next, with respect to the extracted portion, i.e., an object to be processed, various characteristics, e.g., the position of projections and the location of central positions, are calculated, and used to distinguish the object from the other portions. Further, the process is executed with respect to a plurality of images in a time series, i.e., continuous motion type images. [0013]
  • According to the above-mentioned technique, to ensure obtaining adequate attributes, e.g., a speed of the object, it is necessary to analyze all the areas where the same original object can exist in the time series, and to identify the objects as the same original object. [0014]
  • More specifically, if a plurality of objects respectively existing in a plurality of the time series images are not accurately correlated with each other, by analyzing all the areas where the object can exist, it is difficult to calculate the speed of the object with a sufficiently high accuracy. [0015]
  • In the case where only one moving object exists, a process for correlating a plurality of objects in the continuous images with each other is relatively simple. In this case, it is possible to easily obtain the attributes, e.g., a speed of the moving object, using changes in the time base. [0016]
  • However, especially in the case where a large number of moving objects exist in one image, a process for correlating a plurality of objects in the time series images with each other for all the moving objects becomes difficult. [0017]
  • Further, when a plurality of stationary objects exist, as well as a plurality of moving objects, it becomes extremely difficult to rapidly complete such a correlation process for all of the stationary and moving objects using real time processing with a frame rate processing determined by a frequency of a video signal. [0018]
  • Furthermore, when a plurality of moving objects respectively move with a speed different from each other, it becomes almost impossible to complete the correlation process for all of the moving objects using real time processing determined by the frequency of a video signal (a video frame rate). [0019]
  • SUMMARY OF THE INVENTION
  • In view of the above-described problems existing in the prior art, the main object of the present invention is to provide an image processing apparatus which allows one or a plurality of objects to be rapidly and accurately extracted and analyzed, in a case where a large number of moving objects exist in a time series image. [0020]
  • A further object of the present invention is to provide an image processing apparatus which allows one or a plurality of objects to be rapidly and accurately extracted and analyzed, even in the case where a plurality of stationary objects exist, as well as a plurality of moving objects. [0021]
  • A still further object of the present invention is to provide an image processing apparatus which allows the movement of each of a plurality of moving objects to be rapidly and accurately extracted and analyzed, even in the case where the plurality of moving objects respectively move with different speeds. [0022]
  • A still further object of the present invention is to provide an image processing apparatus which allows all of the stationary and moving objects to be correlated with each other during real time processing with a processing rate determined by a frequency of a video signal, in the case where a plurality of stationary objects exist, as well as a plurality of moving objects, and also in a case where the plurality of moving objects respectively move with different speeds. [0023]
  • A still further object is to provide an image processing apparatus which allows a distance between two moving objects to be calculated, so that an abnormal object motion that may bring about an accident, a disaster, and the like, can be rapidly detected. [0024]
  • To attain these objects, the image processing apparatus according to the present invention includes an image-input unit which inputs an image including a background and a plurality of the objects; a background image extract unit which extracts the background; a first average background extract unit which extracts an image that includes one or a plurality of stationary objects or moving objects each having a speed not higher than a predetermined first speed and also includes the background; and a second average background extract unit which extracts an image that includes the stationary objects or moving objects each having a speed not higher than a predetermined second speed and also includes the background. [0025]
  • Further, the image processing apparatus of the present invention further includes a first difference-calculation processing unit which calculates a difference between an output from the background image extract unit and either one of outputs from the first average background extract unit, and then generates a first image containing objects moving at a first speed or stopped; a second difference-calculation processing unit which calculates a difference between respective outputs from the first and second average background extract units, and then generates a second image containing objects moving at a second speed; and a third difference-calculation processing unit which calculates a difference between an output from the image-input unit and either one of outputs from the first and second average background extract units, and then generates a third image containing objects moving at a third speed. [0026]
  • Preferably, the image processing apparatus of the present invention includes a plurality of local-area characteristic extract processing units which process outputs from the image-input unit. Each of the local-area characteristic extract processing units has a local-area determining unit which allocates the output from the image-input unit to each of a plurality of local areas; a labeling processing unit which separates at least one object from each of the local areas, by labeling the same object existing in each of the local areas; and a characteristic-amount calculating unit which calculates a plurality of characteristic-amounts or parameters, such as length and circumference, for the thus labeled object in the local areas. [0027]
  • Further, preferably, the image processing apparatus of the present invention operates to calculate a difference between the background and an average background image at a low speed, and to extract one or a plurality of connected areas where objects overlap. [0028]
  • Further, preferably, the image processing apparatus operates to produce a projection for each of the connected areas, and to calculate the position of the corresponding object in accordance with the projection, and to calculate a plurality of characteristics. [0029]
  • Further, preferably, the image processing apparatus operates to estimate a change in the position of the object and a change in the characteristics of the object for each sampling time period in the time series, and to determine whether the object is a stationary object, in a case where both the change in the position of the object and the change in the characteristics are small. [0030]
  • In a preferred embodiment, the image processing apparatus of the present invention is adapted to calculate a distance between two moving objects. The image processing apparatus includes an image-input unit which inputs the image including a background and a plurality of objects; a marker holding unit which places a plurality of markers in the background; a moving object extraction unit which extracts a plurality of moving objects; a tracing means which traces the plurality of moving objects; a marker extract unit which extracts the markers existing between the two different moving objects; and a distance measuring unit which calculates the distance between the moving objects, on the basis of the size of the extracted markers. [0031]
  • Further, in the image processing apparatus of a preferred embodiment, a plurality of other markers, which are not connected with each other by the marker holding unit, are provided in the background. [0032]
  • The image processing apparatus of a preferred embodiment further includes a connected-area position/shape calculating unit which calculates the size, the shape, and the number of the markers; a marker dictionary unit which has a marker dictionary for storing in advance the size and the shape of the markers; and a marker collating unit which collates the shape of the markers existing between two different moving objects and also collates the marker dictionary. [0033]
  • Further, the image processing apparatus of a preferred embodiment is adapted to calculate the number of the markers which can be identified as true markers based on a result of the collation in the marker collating unit, and to calculate the distance between two moving objects. [0034]
  • In a modified embodiment, the image processing apparatus of the present invention is adapted to calculate a distance between two cars in the case where a plurality of cars are the moving objects. In this case, a plurality of white lines are used as markers; these white lines are perpendicular to the direction in which the cars move with equal spaces between adjoining white lines. [0035]
  • Further, in this modified embodiment, the image processing apparatus has a distance measuring unit, which extracts the number of continuous white lines, and calculates the distance between two cars on the basis of the total sum of spaces between the continuous white lines between the two cars. [0036]
  • According to the image processing apparatus of the present invention, a plurality of objects existing in an image can be classified into plural images, each containing objects of a different speed, on the basis of the speed of each object, and the images can be analyzed in processes independent of each other. [0037]
  • Therefore, in the case where there are a large number of objects moving at various speeds, it becomes possible to separate objects in a certain classified range of speed from the remaining objects. Consequently, it becomes possible to easily and rapidly analyze the movement of only the objects within a certain range of speed. [0038]
  • Further, according to the image processing apparatus of the present invention, by extracting (or identifying) markers which can be easily processed by means of an image processing technique, portions where the moving objects and the markers overlap with each other can be easily determined, even though only a part of each moving object can be detected. Therefore, by calculating a distance between portions where the moving objects and the markers overlap with each other, it becomes possible to obtain a distance between two moving objects with a sufficiently high accuracy.[0039]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above objects and features of the present invention will be more apparent from the following description of the preferred embodiments with reference to the accompanying drawings, wherein: [0040]
  • FIG. 1 is a schematic block diagram showing an essential embodiment based on the principle of the present invention; [0041]
  • FIG. 2 is a schematic block diagram showing a first preferred embodiment of an image processing apparatus according to the present invention; [0042]
  • FIGS. 3(A) and 3(B) are diagrams showing an original image taken by an image-input unit in a tunnel at different sampling times, respectively; [0043]
  • FIG. 4 is a diagram for explaining a plurality of local areas in a first preferred embodiment of the present invention; [0044]
  • FIGS. 5(A) and 5(B) are diagrams for explaining an example in which plural images are respectively extracted in a first preferred embodiment of the present invention; [0045]
  • FIGS. 6(A) and 6(B) are diagrams for explaining another example in which plural images are respectively extracted in a first preferred embodiment of the present invention; [0046]
  • FIG. 7 is a block diagram showing the construction of local-area characteristic-amount extraction units in a first preferred embodiment of the present invention; [0047]
  • FIGS. 8(A) to 8(F) are diagrams for explaining operations of a first preferred embodiment of the present invention in the case where a plurality of stationary objects and a plurality of objects moving at a low speed exist together; [0048]
  • FIGS. 9(A) to 9(F) are diagrams for explaining operations of a first preferred embodiment of the present invention in the case where a plurality of objects moving at a middle speed exist; [0049]
  • FIGS. 10(A) to 10(F) are diagrams for explaining operations of a first preferred embodiment of the present invention in the case where a plurality of objects moving at a high speed exist; [0050]
  • FIGS. 11(A) and 11(B) are diagrams for explaining operations of a first preferred embodiment of the present invention in the case where a large-scale car and a small-scale car exist together; [0051]
  • FIGS. 12(A) to 12(E) are diagrams for explaining operations of a second preferred embodiment of the present invention in the case where a large-scale moving object and a small-scale moving object exist together in an airport; [0052]
  • FIGS. 13(A) and 13(B) are diagrams for explaining a process of obtaining a projection of a large-scale moving object in a second preferred embodiment of the present invention; [0053]
  • FIGS. 14(A) to 14(C) are diagrams for explaining a process of calculating a distance between two moving objects in a first preferred embodiment of the present invention; [0054]
  • FIG. 15 is a schematic block diagram showing a third preferred embodiment of an image processing apparatus according to the present invention; [0055]
  • FIG. 16 is a block diagram showing in detail the main part of a third preferred embodiment of the present invention; [0056]
  • FIGS. 17(A) to 17(C) are diagrams showing the condition in which markers are provided and various information about markers is registered in a marker dictionary, in a third preferred embodiment of the present invention; [0057]
  • FIGS. 18(A) and 18(B) are diagrams for explaining a process of setting a region to be processed for the passage of moving objects in a third preferred embodiment of the present invention; [0058]
  • FIGS. 19(A) to 19(C) are diagrams respectively showing a region to be processed, a binary code processing unit, and a noise canceling unit, in a third preferred embodiment of the present invention; [0059]
  • FIGS. 20(A) to 20(C) are diagrams for explaining a process of labeling a given object in a third preferred embodiment of the present invention; [0060]
  • FIGS. 21(A) and 21(B) are diagrams for explaining a process of projecting a labeled object in a third preferred embodiment of the present invention; [0061]
  • FIGS. 22(A) and 22(B) are diagrams for explaining a process of extracting a moving object which is a car having a color other than white and which passes through markers, in a third preferred embodiment of the present invention; [0062]
  • FIGS. 23(A) to 23(C) are diagrams showing other markers which can be utilized in a third preferred embodiment of the present invention; [0063]
  • FIGS. 24(A) and 24(B) are diagrams for explaining a process of extracting a moving object which is a white car and which passes through markers, in a third preferred embodiment of the present invention; [0064]
  • FIGS. 25(A) to 25(E) are diagrams showing various tables which are utilized for calculating a distance between two moving objects in a third preferred embodiment of the present invention; and [0065]
  • FIGS. 26(A) to 26(E) are diagrams for explaining a process of extracting a contour in a connected area in a third preferred embodiment of the present invention. [0066]
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • FIG. 1 is a schematic block diagram showing an essential embodiment based on the principle of the present invention. In FIG. 1, fundamental components necessary for realizing an image processing apparatus of the present invention are illustrated. In this case, it is assumed that a plurality of stationary objects (i.e., stopped objects) and a plurality of moving objects are contained together as a group of objects in an image which is to be processed. [0067]
  • As shown in FIG. 1, an image processing apparatus of the present invention includes an image-input unit 1, a background image extract unit 2, a first average background extract unit 3, a second average background extract unit 4, a first difference-calculation processing unit 5, a second difference-calculation processing unit 6, and a third difference-calculation processing unit 7. [0068]
  • More specifically, an image-input unit 1 is typically constituted by a video camera, and serves to input an image including a background and all the objects captured by the camera. [0069]
  • The background image extract unit 2 extracts only a background by excluding the stopped objects and the moving objects from the input image. If the stationary objects and the moving objects do not exist in the image, the input image is stored in the background image extract unit 2. This background may be incorporated in advance into an image processing apparatus or image processing system. [0070]
  • The first average background extract unit 3 extracts an image which includes the stationary objects, moving objects each having a low speed, and the background. [0071]
  • The second average background extract unit 4 extracts an image which includes the stationary objects, the moving objects each having a low speed, moving objects each having a middle speed, and the background. [0072]
  • The first difference-calculation processing unit 5 calculates a difference between an output from the background image extract unit 2 and an output from the first average background extract unit 3. Further, the first difference-calculation processing unit 5 generates a first image including slow moving and stationary objects. [0073]
  • The second difference-calculation processing unit 6 calculates a difference between an output from the first average background extract unit 3 and an output from the second average background extract unit 4. Further, the second difference-calculation processing unit 6 generates a second image including objects moving at a higher speed. [0074]
  • The third difference-calculation processing unit 7 calculates a difference value between an output from the image-input unit 1 and an output from the second average background extract unit 4. Further, the third difference-calculation processing unit 7 generates a third image including objects moving at a still higher speed. [0075]
  • Here, as shown by an image “a” in FIG. 1, it is assumed that a stopped or stationary car exists in a traffic lane on the left side of a road, and also a low speed car moving at a low speed exists in the same traffic lane, and also an obstacle (indicated by a black mark in FIG. 1) exists in the same traffic lane. Further, it is assumed that two middle speed cars each moving at a middle speed exist in the traffic lane at the right side of the road. In such a case, the operation of the image processing apparatus of the present invention will be described. [0076]
  • First, an image “a” is sent from the image-input unit 1 and input to the background image extract unit 2, the first average background extract unit 3, and the second average background extract unit 4. Next, the image “a” is processed by the background image extract unit 2 and an image “b” including a background (background image) is output. The image “a” is also processed by the first average background extract unit 3, and an image “c” is output. The image “a” is also processed by the second average background extract unit 4, and an image “d” is output. [0077]
  • Thereafter, an image “e” corresponding to a difference between the background image “b” and the image “c” is output by the first difference-calculation processing unit 5. Further, an image “f” corresponding to a difference between the image “c” and the image “d” is output by the second difference-calculation processing unit 6. Further, an image “g” corresponding to a difference between the image “d” and the image “a” is output by the third difference-calculation processing unit 7. [0078]
  • More specifically, a stopped car, a low speed car, and an obstacle existing in a traffic lane at the left side of a road are extracted by the first difference-calculation processing unit 5 and output in the image “e”. Further, two middle speed cars existing in the traffic lane at the right side are extracted by the second difference-calculation processing unit 6 and output in the image “f”. [0079]
  • However, in this case, a high speed car moving at a speed higher than the middle speed does not exist. Therefore, nothing is extracted by the third difference-calculation processing unit 7 and output in the image “g”. [0080]
  • In this way, it becomes possible to easily and selectively extract the movement of a plurality of moving objects, e.g., cars, which move at different speeds, at a relatively high rate corresponding to a frequency of a video signal. [0081]
  • Hereinafter, a more detailed description of the preferred embodiments of the present invention will be given with reference to FIGS. 2 to 26(E). Further, any component which is the same as that mentioned previously will be referred to using the same reference number. [0082]
  • FIG. 2 is a schematic block diagram showing a first preferred embodiment of an image processing apparatus according to the present invention. [0083]
  • In FIG. 2, each of an image-input unit 1, a background image extract unit 2, and a first average background extract unit 3 has the same construction as that shown in FIG. 1. Therefore, each of these components in FIG. 2 is indicated with the same reference number as that used in FIG. 1. [0084]
  • Further, unlike the apparatus in FIG. 1, the image processing apparatus shown in FIG. 2 includes N average background extract units, where N denotes any natural number more than 2 (N>2). Here, these extract units, from a second average background extract unit through an N-th average background extract unit, will be indicated as the N-th average background extract units 4′. [0085]
  • Also, the image processing apparatus shown in FIG. 2 includes N+1 difference-calculation processing units. Here, the difference-calculation processing units from a third difference-calculation processing unit through an N+1-th difference-calculation processing unit will be indicated as the N+1-th difference-calculation processing units 7′. [0086]
  • The image processing apparatus shown in FIG. 2 further includes a first local-area characteristic-amount extract unit 8, a second local-area characteristic-amount extract unit 9, N+1-th local-area characteristic-amount extract units 10, and a locus calculation unit 20. [0087]
  • The first average background extract unit 3 extracts one or a plurality of stopped objects, one or a plurality of objects moving at a low speed, and a background. Further, the N-th average background extract units 4′ extract one or a plurality of stopped objects, a plurality of moving objects moving at speeds ranging from a low speed to a high speed, and a background. By utilizing the first average background extract unit 3 and the N-th average background extract units 4′, it becomes possible to generate images in which objects are classified by speed ranges. [0088]
  • In this case, between the first average background extract unit 3 and the N-th average background extract units 4′, the second average background extract unit 4 shown in FIG. 1 may be provided. [0089]
  • The first difference-calculation processing unit 5 calculates a difference between an output from the background image extract unit 2 and an output from the first average background extract unit 3. [0090]
  • The second difference-calculation processing unit 6 calculates a difference between an output from the first average background extract unit 3 and either one of the respective outputs from the N-th average background extract units 4′. [0091]
  • The N+1-th difference-calculation processing units 7′ calculate a difference between either one of the respective outputs from the N-th average background extract units 4′ and an output from the image-input unit 1. [0092]
  • In such a construction, by virtue of plural difference-calculation processing units, it becomes possible to extract objects moving at a given speed or at the speed not higher than the given speed. In this case, as already described with reference to FIG. 1, the image processing apparatus may be constructed with only two average background extract units (N=2). [0093]
  • The number of these difference-calculation processing units depends on the number of average background extract units provided on the input side. However, the number of the difference-calculation processing units is independent of the average background extract units. It can be optionally determined which combinations of two outputs are selected from among the respective outputs from the average background extract units, to calculate the difference between the different outputs from the average background extract units, depending on the speed range(s) for which extraction of objects is desired. That is, the target speed ranges determine the number of difference-calculation processing units. [0094]
  • However, the maximum number of the difference-calculation processing units that can be provided is represented by the following equation:[0095]
  • X*(X−1)/2
  • where X is a sum of the number of the average background extract units, the background image extract unit, and an original image (from the image-input unit). [0096]
  • For example, in the case of FIG. 1, a value of X is 4, and the maximum of the difference-calculation processing units becomes 6. [0097]
  • The first local-area characteristic-amount extract unit 8 receives an output from the first difference-calculation processing unit 5. Further, as hereinafter described, the first local-area characteristic-amount extract unit 8 checks or determines whether certain characteristic-amounts or parameters that can be used to identify an object, which will for simplicity be called object parameters, exist in a given first local area, and calculates a characteristic or parameter concerning the shape of an object in the area, and the like. These characteristic-amounts or parameters can include length, circumference, and center-of-gravity, and will be discussed in greater detail later herein. [0098]
  • In this case, by checking whether or not any characteristic-amounts or parameters exist, the change in position of an object can be determined. Further, by calculating a characteristic concerning the shape of the object, an attribute of the object having the shape, e.g., a bus, or a passenger car, can be determined. [0099]
  • The second local-area characteristic-amount extract unit 9 receives an output from the second difference-calculation processing unit 6. Further, as hereinafter described, the second local-area characteristic-amount extract unit 9 checks whether any characteristic-amounts or object parameters exist in a given second local area and calculates a characteristic or parameter concerning the shape of the object, and the like. [0100]
  • The N+1-th local-area characteristic-amount extract units 10 respectively receive outputs from the N+1-th difference-calculation processing units 7′. Further, as hereinafter described, the N+1-th local-area characteristic-amount extract units 10 check whether any characteristic-amounts or parameters exist in the N+1-th local area and calculate a characteristic or parameter concerning the shape of a specified object, and the like. [0101]
  • The locus calculation unit 20 detects the change in existence of characteristic-amounts or object parameters in the time series images and calculates a locus of the same moving object, on the basis of an output from each of a plurality of local-area characteristic-amount extract units 9, 10. [0102]
  • More specifically, the locus calculation unit 20 includes a character analyzing unit 20-1 which determines a locus of the same moving object, on the basis of a character concerning the shape of the moving object. [0103]
  • Further, the locus calculation unit 20 includes a list making unit 20-2 which detects an existence of the moving object in each of a plurality of local areas in time series, on the basis of an output from each of the local-area characteristic-amount extract units 9, 10, and which creates a list with respect to the results of the detection. [0104]
  • Further, the locus calculation unit 20 includes a list analyzing unit 20-3 which analyzes the list and recognizes a locus of the same moving object, even in a case where a large-scale moving object exists with a plurality of small-scale moving objects. [0105]
  • FIGS. 3(A) and 3(B) are diagrams showing original images which are taken by an image-input unit in a tunnel at different sampling times, respectively. In this case, an example, in which an image processing apparatus of the present invention is applied to a supervisory system for supervising a road in a tunnel, will be described. [0106]
  • More specifically, FIG. 3(A) indicates an original image which is taken at a certain or first sampling time by means of an image-input means 1 that is placed in the tunnel; and FIG. 3(B) indicates another original image which is taken at a different or second sampling time that occurs several seconds after the first sampling time, by the same image-input means 1. The image-input means, e.g., a video camera, continuously takes images in the tunnel at a high rate corresponding to a frequency of a video signal, which are sampled by a technique using a sampling time interval function such as a time series filter. That is, the image samples used for processing are taken at a frequency that is lower than the video frequency and can be as much as several seconds apart. Therefore, other images can be obtained during the sampling time period between the two images respectively shown in FIGS. 3(A) and 3(B). However, the illustration of the other images, or images between those of FIGS. 3(A) and 3(B), will be omitted to simplify the explanation of FIGS. 3(A) and 3(B). [0107]
  • As apparent from FIGS. 3(A) and 3(B), in a traffic lane on the right side R, moving objects, e.g., a plurality of cars, normally move. However, in a traffic lane on the left side L, a stopped or stationary car P1 exists in the lane, and the tail lamps of the car P1 are flashing as illustrated by the dots. Also, an obstacle P2 (for example, a fallen object) exists at the back of or behind the stopped car P1. Further, it is assumed that the car P3 which follows P1 decelerates and moves at a low speed, since the car P3 has seen the obstacle P2. [0108]
  • Further, an image area which is captured by the image-input means 1 is allocated or divided into a plurality of local areas, as shown in the diagram of FIG. 4. In FIG. 4, L0 to L4 denote local areas which are used to trace cars moving in the traffic lane on the left side L. Among these local areas L0 to L4, the local area L0 is used to detect a large-scale car in the traffic lane on the left side, while the local areas L1 to L4 are used to trace all the cars on the left side. [0109]
  • On the other hand, R0 to R4 denote local areas which are used to trace cars moving in the traffic lane on the right side R. Among these local areas R0 to R4, the local area R0 is used to detect a large-scale car in the traffic lane on the right side, while the local areas R1 to R4 are used to trace all the cars on the right side. [0110]
  • FIGS. 5(A) and 5(B) are diagrams for explaining an example in which images containing objects moving at different speeds are respectively extracted in a first preferred embodiment of the present invention. For example, one extracted image will include objects moving at a first speed and another extracted image will include objects moving at a second speed different from the first speed. [0111]
  • In FIGS. 5(A) and 5(B), all the objects are classified into three types of objects based on their speed. These three types of objects are stopped objects, objects moving at a low speed, and objects moving at a high speed, and the images containing these objects are output from the respectively corresponding extract units, as already described in FIG. 1 and FIG. 2 (N=2). [0112]
  • To be more specific, FIG. 5(A) shows a background stored in a first image memory 2-1 in a background image extract unit 2. This background is input to a look-up table ② (in FIGS. 5(A) and 5(B), “look-up table” is abbreviated “LUT”) in a first difference-calculation processing unit 5, as one input “i” of the look-up table ②. As the other input “j” of the look-up table ②, an image corresponding to a sum of a background image, a stopped objects image, and an image of moving objects moving at a low speed (Background+Stop+Low speed) is input to the look-up table ②. [0113]
  • As shown in FIG. 5(B), if a value of |i−j| is equal to or larger than a threshold value th2 (for example, th2=50; |i−j|≧th2), the look-up table ② outputs the value of |i−j| as an output “k”. With respect to a background image, since a value of one input “i” is equal to a value of the other input “j”, a value of the output “k” is zero (k=0). Therefore, with respect to the output “k” of the look-up table ②, the value of |i−j| produces an output only in those portions of the input image where stopped objects and moving objects at a low speed exist, and this is then input to a look-up table ① as one input “i”. That is, the output “k” includes stopped objects and slow moving objects only. [0114]
  • At this time, the other input “j” of the look-up table ① receives an input image (Background+Stopped+Low speed+Middle speed+High speed), which is taken by an image-input means 1. This input image includes a background image, stopped objects (including an obstacle), objects moving at a low speed, objects moving at a middle speed, and objects moving at a high speed. Therefore, the value of |i−j| input to the look-up table ① produces an output only in a portion where stopped objects and moving objects at a low speed exist (Stopped+Low speed). [0115]
  • In this case, as shown in FIG. 5(B), if a value of one input “i” of the look-up table ① is equal to or larger than a threshold value th1 (for example, th1=5; i≧th1), the look-up table ① outputs the value of the other input “j” as an output “k”. On the other hand, if a value of one input “i” of the look-up table ① is smaller than the threshold value th1, a value of the output “k” is zero (k=0). Therefore, with respect to the output “k” of the look-up table ①, only stopped objects and objects moving at a low speed are extracted from the input image (Stopped+Low speed). That is, the output of the table ① is an image that includes only the stopped and slow moving objects found in the latest image from the image-input means 1. [0116]
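  • A minimal sketch of the two look-up-table operations described above is given below; this is an illustration only, emulating the tables in NumPy rather than in hardware, and the function names and the default thresholds (the example values 50 and 5) are assumptions.

    import numpy as np

    def lut2_difference(i_img, j_img, th2=50):
        """Look-up table (2): output |i - j| where it is at least th2, else 0."""
        diff = np.abs(i_img.astype(np.int16) - j_img.astype(np.int16))
        return np.where(diff >= th2, diff, 0).astype(np.uint8)

    def lut1_masked_copy(i_img, j_img, th1=5):
        """Look-up table (1): output the other input j wherever input i >= th1, else 0."""
        return np.where(i_img >= th1, j_img, 0).astype(np.uint8)

    # Example flow for the stopped / low-speed image (hypothetical variable names):
    #   mask = lut2_difference(background, avg_background_low)   # table (2)
    #   slow = lut1_masked_copy(mask, latest_input_image)        # table (1)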
  • Further, in FIGS. [0117] 5(A) and 5(B), an input image, which is captured by the image-input means 1, is also input to a look-up table {circle over (3)} in a first average background extract unit 3, as input “i”. On the other hand, an image, which has been stored in a second image memory 3-1, is also input to the look-up table {circle over (3)} as the other input “j”. In this case, if a value of one input “i” of the look-up table {circle over (3)} is equal to a value of the other input “j” thereof, the same image as that stored in the image memory 3-1 is output from the look-up table {circle over (3)}, and then stored again in the image memory 3-1.
  • Further, if a value of (i−j) is in the range from zero through th31 (for example, th31=10; 0≦(i−j)≦th31), a value, which is obtained by adding an offset value α31 (for example, α31=1) to a value of the other input “j”, is output from the look-up table {circle over ([0118] 3)}. At the same time, the thus obtained value is also stored in the image memory 3-1. If a value of (j−i) is in the range from zero through th31(0≦(j−i)≦th31), a value, which is obtained by subtracting an offset value α31 from a value of the other input “j”, is output from the look-up table {circle over (3)}. At the same time, the thus obtained value is also stored in the image memory 3-1.
• Further, if a value of (i−j) is larger than the threshold value th31 and equal to or smaller than a threshold value th32 (for example, th32=255; th31<(i−j)≦th32), a value, which is obtained by adding an offset value α32 (for example, α32=3) to a value of the other input “j”, is output from the look-up table {circle over (3)}. At the same time, the thus obtained value is stored in the image memory 3-1. If a value of (j−i) is larger than the threshold value th31 and equal to or smaller than the threshold value th32 (th31<(j−i)≦th32), a value, which is obtained by subtracting an offset value α31 from a value of the other input “j”, is output from the look-up table {circle over (3)}. At the same time, the thus obtained value is also stored in the image memory 3-1. [0119]
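• As a minimal sketch, assuming 8-bit grayscale images and the example values above (th31=10, α31=1, α32=3), the update rule of look-up table {circle over (3)} might be written as follows; the clipping to the 0-255 range and the treatment of the i=j case follow the description above, but the concrete function is an assumption, not the patent's hardware table.

```python
import numpy as np

def update_average_background(i, j, th31=10, alpha31=1, alpha32=3):
    """Nudge the stored image j toward the current input i:
    a small offset for small differences, a larger offset for large ones."""
    i = i.astype(int)
    j = j.astype(int)
    d = i - j
    k = j.copy()                               # unchanged where i == j
    k[(d > 0) & (d <= th31)] += alpha31        # input slightly brighter than stored value
    k[(d < 0) & (-d <= th31)] -= alpha31       # input slightly darker than stored value
    k[d > th31] += alpha32                     # large positive difference
    k[d < -th31] -= alpha31                    # large negative difference (per the text above)
    return np.clip(k, 0, 255).astype(np.uint8) # written back to image memory 3-1
```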
• In this way, the output “k” of the look-up table {circle over (3)} is an image portion including the background, stopped objects (for which the condition i=j holds), and objects moving at a low speed (Background+Stopped+Low speed). The reason why the output “k” also includes objects moving at a low speed will be described hereinafter. The output “k” from the look-up table {circle over (3)} is input to a look-up table {circle over (2)}′ in a second difference-calculation processing unit 6, as one input “i”. [0120]
• At this time, as the other input “j” of the look-up table {circle over (2)}′, an image including the background, stopped objects, objects moving at a low speed, and objects moving at a middle speed (Background+Stopped+Low speed+Middle speed) is input. In this case, this image has been output from a look-up table {circle over (4)} that will be hereinafter described. The look-up table {circle over (2)}′ is implemented in a manner similar to the case of the look-up table {circle over (2)} shown in FIG. 5(B). Therefore, if a value of |i−j| is equal to or larger than a threshold value th2 (|i−j|≧th2), the look-up table {circle over (2)}′ outputs the value of |i−j| as an output “k”. [0121]
• With respect to the image portion corresponding to the background, the stopped objects, and the objects moving at a low speed, since a value of one input “i” is equal to a value of the other input “j”, a value of the output “k” from the look-up table {circle over (2)}′ is zero (k=0). Therefore, with regard to the output “k” of the look-up table {circle over (2)}′, the value of |i−j| produces an output only in those portions of the image where objects moving at a middle speed exist (Middle speed), and then this image is input to a look-up table {circle over (1)}′ as one input “i”. [0122]
• At this time, as the other input “j” of the look-up table {circle over (1)}′, an input image, which is captured by the image-input means 1, is input. This input image includes the background, stopped objects, objects moving at a low speed, objects moving at a middle speed, and objects moving at a high speed. Therefore, the value of |i−j| input to the look-up table {circle over (1)}′ produces an output only in a portion of the image where objects moving at a middle speed exist (Middle speed). [0123]
• In this case, the look-up table {circle over (1)}′ is constructed to operate in a manner similar to the case of the look-up table {circle over (1)}. Therefore, with regard to the output “k” of the look-up table {circle over (1)}′, only objects moving at a middle speed are extracted from the input image (Middle speed). [0124]
• Further, in FIGS. 5(A) and 5(B), an input image, which is taken by the image-input means 1, is also input to a look-up table {circle over (4)} in a second average background extract unit 4, as one input “i”. On the other hand, an image, which has been stored in a third image memory 4-1, is also input to the look-up table {circle over (4)} as the other input “j”. In this case, if a value of one input “i” of the look-up table {circle over (4)} is equal to a value of the other input “j” thereof, the same image as that stored in the image memory 4-1 is directly output from the look-up table {circle over (4)}, and then stored again in the image memory 4-1. [0125]
• Further, if a value of (i−j) is in the range from zero through th41 (for example, th41=10; 0≦(i−j)≦th41), a value, which is obtained by adding an offset value α41 (for example, α41=1) to a value of the other input “j”, is output from the look-up table {circle over (4)}. At the same time, the thus obtained value is also stored in the image memory 4-1. If a value of (j−i) is in the range from zero through th41 (0≦(j−i)≦th41), a value, which is obtained by subtracting the offset value α41 from a value of the other input “j”, is output from the look-up table {circle over (4)}. At the same time, the thus obtained value is also stored in the image memory 4-1. Further, if a value of (i−j) is larger than the threshold value th41 and equal to or smaller than a threshold value th42 (for example, th42=255; th41<(i−j)≦th42), a value, which is obtained by adding an offset value α42 (for example, α42=10) to a value of the other input “j”, is output from the look-up table {circle over (4)}. At the same time, the thus obtained value is stored in the image memory 4-1. [0126]
• In this way, the value of an output “k” output from the look-up table {circle over (4)} is an image portion including the background, stopped objects, objects moving at a low speed, and also objects moving at a middle speed (Background+Stop+Low speed+Middle speed). The reason why the output “k” includes image portions containing objects moving at a middle speed will be hereinafter described. The output “k” from the look-up table {circle over (4)} is input to the look-up table {circle over (2)}′ as the other input “j”, and is also input to a look-up table {circle over (2)}″ in a third difference-calculation processing unit 7, as one input “i”. [0127]
• At this time, as the other input “j” of the look-up table {circle over (2)}″, an input image, which is captured by the image-input means 1, is input. The input image includes the background, stopped objects, objects moving at a low speed, objects moving at a middle speed, and objects moving at a high speed. The look-up table {circle over (2)}″ is implemented in a manner similar to the case of the look-up table {circle over (2)} shown in FIG. 5(B). Therefore, if a value of |i−j| is equal to or larger than a threshold value th2 (|i−j|≧th2), the look-up table {circle over (2)}″ outputs the value of |i−j| as an output “k”. [0128]
• With respect to the image portion corresponding to the background, the stopped objects, the objects moving at a low speed, and the objects moving at a middle speed, since a value of one input “i” is equal to a value of the other input “j”, a value of the output “k” from the look-up table {circle over (2)}″ is zero (k=0). Therefore, with respect to the output “k” of the look-up table {circle over (2)}″, the value of |i−j| produces an output only in those portions of the image where objects moving at a high speed exist (High speed), and then this image is input to a look-up table {circle over (1)}″ as one input “i”. [0129]
• At this time, as the other input “j” of the look-up table {circle over (1)}″, an input image, which is captured by the image-input means 1, is input. This input image includes the background, stopped objects, objects moving at a low speed, objects moving at a middle speed, and objects moving at a high speed. Therefore, the value of |i−j| input to the look-up table {circle over (1)}″ produces an output only in a portion of the image where objects moving at a high speed exist (High speed). [0130]
  • In the above-mentioned embodiment, plural images are generated on the basis of plural reference speeds, and plural types of objects are individually extracted from an original image based on the speed ranges. Therefore, it becomes possible to reduce the number of the objects which are to be supervised and processed, for example, to those objects having a low speed. All others are filtered out. [0131]
• A detailed description will now be given regarding why an offset value is added to an output of the look-up tables {circle over (3)} and {circle over (4)}, or subtracted from the output of these look-up tables, to modify and store data in the image memories 3-1 and 4-1. [0132]
• When the difference between the data stored in each of these image memories and an input image is larger than a predetermined threshold value, the data in each of these image memories are modified by adding the offset value to, or subtracting it from, the stored data. [0133]
  • When the movement of an extracted object is slow, the data in the image memories can be modified before the object moves outside a region of an original or first image in which the object appears. [0134]
  • In such a situation, it is possible to make the data stored and output by the image memories conform to the input image, by frequently modifying the data with a relatively small offset value. [0135]
• However, when the movement of an extracted object is fast, the object can move outside a region of an original or first image in which the object appears in a short time and finally disappear. To address this problem, it is necessary to modify the data with a relatively large offset value, because the number of opportunities for modification is reduced. [0136]
• Namely, in detecting an object moving at a high speed, the offset value is set to a large value, so that the data in the image memories are rapidly modified. On the other hand, in detecting an object moving at a low speed, the offset value is set to a small value, so that the data in the image memories are modified gradually over a longer period of time. In such an approach, the offset value is changed in accordance with the speed of the moving object to be processed. Therefore, even when there are a lot of moving objects to be processed, the difference in speeds between these moving objects can be easily discriminated, and all the necessary objects can be distinguished from each other. [0137]
  • FIGS. [0138] 6(A) and 6(B) are diagrams for explaining another example in which images containing objects moving at different speeds are respectively extracted in a first preferred embodiment of the present invention.
  • More specifically, FIG. 6(A) illustrates a plurality of images which are captured during a sampling time period; and FIG. 6(B) illustrates an image in which a plurality of moving objects have been classified into several groups on the basis of different speeds. [0139]
  • As shown in FIG. 6(A), it is assumed that both a moving object A moving in the direction indicated by arrows and a stopped object B exist in the scene/image. In each image captured at sampling times t[0140] 1, t2, . . . tn, the moving object A and the stopped object B are positioned as indicated in the upper part of FIG. 6(A). All the images are respectively stored in memories (not shown) and accumulated. Further, the thus accumulated images are averaged, and an average background is calculated.
• In this case, the stopped object B can be extracted with a value which is obtained by accumulating the above-mentioned n images and averaging the thus accumulated images. Namely, since the stopped object B does not change position at each sampling time t1, t2, . . . tn, the stopped object B can be easily extracted from the average background as shown in the lower part of FIG. 6(A). [0141]
• On the other hand, with respect to the moving object A, especially in a case where the movement of the moving object A is fast, the moving object A occupies positions in the respective images at the sampling times t1, t2, . . . tn that do not overlap with each other. By carrying out an averaging process, the moving object A positioned at a certain sampling time t1 and the (n−1) backgrounds containing the object at different positions at the sampling times t2 . . . tn are averaged together. Therefore, an image density value of the moving object A in such an averaged image is extremely small. More specifically, the density value of the moving object A is smaller than a threshold value, and consequently, when the averaged image is compared or tested against the threshold, the moving object A disappears from the average background. As a result, the stopped object B remains in the average background, while the moving object A appears not to exist in the average background. [0142]
  • In this case, the image only including the moving object A can be easily extracted by calculating a difference between the average background and each image at each sampling time t[0143] 1, t2, . . . tn.
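• A minimal sketch of the averaging argument above is given below: an object that occupies a different position in every frame contributes only a fraction of its density to the average and falls below the threshold, while a stopped object keeps its full density. The array shapes and the threshold value are illustrative assumptions.

```python
import numpy as np

def average_background(frames):
    """Accumulate the n sampled frames and average them."""
    return np.mean(np.stack(frames), axis=0)

def extract_moving(frame, avg_background, th=50):
    """Difference between one sampled frame and the average background:
    only the moving object A survives the threshold."""
    return (np.abs(frame.astype(int) - avg_background) >= th).astype(np.uint8)
```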
  • Further, as shown in FIG. 6(B), where a moving object D moving at a low speed and a moving object E moving at a middle speed exist, as well as the stopped object B, these moving objects D, E can be distinguished from each other. [0144]
  • The location of the moving object D moving at a low speed at sampling time t[0145] 2, t3 partially overlaps with a location of moving object D at the first sampling time t1. However, the moving object D at the fourth sampling time t4 does not overlap with the location of the moving object D at the first sampling time t1.
  • Further, a moving object E moving at a middle speed at a second sampling time t[0146] 2 partially overlaps with the location of moving object E at the first sampling time t1. However, the location of moving object E at the third sampling time t3 does not overlap with the location of moving object E at the first sampling time t1.
• Therefore, if the two images at sampling times t1, t4 are accumulated and divided by two, the density values of the moving object D moving at a low speed, the moving object E moving at a middle speed, and the moving object A become smaller than a threshold value in the thus divided image. Therefore, in this way only the stopped object B can be extracted. That is, the stopped object can be isolated. [0147]
• Further, if the three images at sampling times t1, t2 and t3 are accumulated and divided by three, the density value of the moving object E moving at a middle speed becomes smaller than a threshold value in the thus divided image. However, as described above, the object D moving at a low speed at sampling times t2, t3 partially overlaps with the object D at the first sampling time t1, and consequently the moving object D can be extracted. At this time, the stopped object B also can be extracted, simultaneously with the moving object D. In this case, by subtracting the former image obtained on the basis of the two images captured at sampling times t1, t4 from the latter image obtained on the basis of the three images captured at sampling times t1, t2 and t3, the moving object D can be finally distinguished from the other objects B, E. [0148]
  • Further, if the stopped object B and the moving object D moving at a low speed are eliminated from an original input image, the moving object E at a middle speed can be isolated and extracted. [0149]
  • By using the above technique, it is possible to accurately extract all the objects in an original input image, even in a case where moving objects exist in the images each having a speed higher than the middle speed. In this case, it should be noted that a background can be extracted, together with the stopped object B. [0150]
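• The frame-subset procedure above can be sketched as follows, assuming frames[0..3] are the grayscale images at sampling times t1 to t4 and th is an illustrative density threshold. Averaging t1 and t4 keeps the background and the stopped object B; averaging t1 to t3 additionally keeps the low-speed object D; their difference isolates D.

```python
import numpy as np

def binarize(img, th):
    return (img >= th).astype(np.uint8)

def isolate_low_speed(frames, th):
    stopped_only   = binarize(np.mean([frames[0], frames[3]], axis=0), th)  # t1, t4
    stop_and_low   = binarize(np.mean(frames[0:3], axis=0), th)             # t1, t2, t3
    low_speed_only = np.clip(stop_and_low.astype(int) - stopped_only, 0, 1) # object D
    return stopped_only, low_speed_only
```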
  • In such an implementation, by utilizing a technique different from that in FIG. 5(A), it becomes possible to output an image corresponding to a sum of a background, stopped objects, and objects moving at a low speed (Background+Stop+Low speed) from a first average background extract unit. Also, it becomes possible to output an image corresponding to a sum of a background, stopped objects, objects moving at a low speed, and objects moving at a middle speed (Background +Stop+Low speed+Middle speed) from a second average background extract unit. [0151]
  • In a first difference-calculation processing unit, on the basis of these images, it is possible to calculate a difference between a background output from a background image extract unit and the image containing objects moving at a low speed which is output from the first average background extract unit. Consequently, a portion of an image including stopped objects, and objects moving at a low speed can be obtained from the first difference-calculation processing unit. [0152]
  • Further, in a second difference-calculation processing unit, it is possible to calculate a difference between the image output from the first average background extract unit and the image output from the second average background extract unit. Consequently, a portion of an image including objects moving at a middle speed can be obtained from the second difference-calculation processing unit. [0153]
  • Further, in a third difference-calculation processing unit, it is possible to calculate a difference between the image output from the second average background extract unit and an original input image. Consequently, a portion of an image including objects moving at a high speed can be obtained from the third difference-calculation processing unit. [0154]
  • For example, in FIG. 1 again, it is assumed that the first average [0155] background extract unit 3 only extracts a background, stopped objects (including obstacles), and objects moving at a low speed. Also, it is assumed that the second average background extract unit 4 only extracts a background, stopped objects, objects moving at a low speed, and objects moving at a middle speed. In such a case, an image “c” is obtained from the first average background extract unit 3, while an image “d” is obtained from the second average background extract unit 4.
• As a result, an image “e” only including the stopped objects and the objects moving at a low speed is obtained from the first difference-calculation processing unit 5. Further, an image “f” only including the objects moving at a middle speed is obtained from the second difference-calculation processing unit 6. Further, an image “g” only including the objects moving at a high speed is obtained from the third difference-calculation processing unit 7. [0156]
  • However, it should be noted that there is no moving object at a high speed in the example shown in FIG. 1. [0157]
  • In this case, nothing appears in the image “g”. Therefore, in the case where nothing appears in the three images “e”, “f” and “g”, an original input image is stored in the background [0158] image extract unit 2.
• Further, with reference to FIG. 2 again, each of a first local-area characteristic-amount extract unit 8, a second local-area characteristic-amount extract unit 9, and an N+1-th local-area characteristic-amount extract unit 10 has the same construction. Therefore, the construction of the first local-area characteristic-amount extract unit 8, which calculates object parameters such as the center of gravity, length, circumference, etc., will be representatively described with reference to FIG. 7 described in detail hereinafter. [0159]
• FIG. 7 is a block diagram showing the construction of the first local-area characteristic-amount extract unit in a first preferred embodiment of the present invention. [0160]
  • In FIG. 7, an area of the whole input image (the entire captured image), which is taken or captured by an image-[0161] input unit 1, is allocated (or divided) in advance into a plurality of local areas.
  • For example, as shown in FIG. 4 mentioned previously, a plurality of local areas L[0162] 0 to L4 are established as local areas which are used to trace cars moving in the traffic lane on the left side L. On the other hand, a plurality of local areas R0 to R4 are established as local areas which are used to trace cars moving in the traffic lane on the right side R.
  • Further, for these local areas, a plurality of local area extract processing units [0163] 8-1, 8-2, . . . 8-m (“m” denotes any natural number more than 2) respectively are provided. For example, in FIG. 4, local areas L0 to L4 and local areas R0 to R4 are provided on the left side L and the right side R, respectively. Therefore, in this case, it becomes necessary to provide ten local area extract processing units (m=10).
  • Further, for a first local area in FIG. 7, a first local area extract processing unit [0164] 8-1 includes a first local-area determining unit 11-1, a first noise canceling unit 12-1, a first labeling processing unit 13-1, and a first characteristic-amount calculation unit 14-1.
  • More specifically, the first local-area determining unit [0165] 11-1 defines one of the local areas which must be processed by the first local area extract processing unit 8-1. For example, in the case where the first local-area determining unit 11-1 is to process a local area L0, the first local-area determining unit 11-1 defines the range of the local area L0 and extracts a portion of the input image within this range. The first noise canceling unit 12-1 eliminates noise from a signal which is sent from the first local-area determining unit 11-1. Typically, the noise canceling unit 12-1 is implemented by a low pass filter.
  • The first labeling processing unit [0166] 13-1 carries out a labeling process. The labeling process is executed to provide the same label to each of the same objects with respect to input images generated in time series in the given local area.
• The first characteristic-amount calculation unit 14-1 checks to determine whether any labeled areas exist. If a plurality of the thus labeled areas actually exist, the first characteristic-amount calculation unit 14-1 produces a projection for each of the labeled areas, and further calculates a position of the “center-of-gravity” of each of the labeled areas, the value of the length and breadth of each of the labeled areas, and the value of an area (space) of each of the labeled areas. Namely, the first characteristic-amount calculation unit 14-1 estimates a plurality of characteristic-amounts or object parameters for each of the labeled areas that can be used to identify and track objects. [0167]
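• A minimal sketch of such a characteristic-amount calculation is shown below, assuming label_img is an array holding 0 for the background and a positive integer label for each labeled area; the dictionary layout and field names are illustrative assumptions.

```python
import numpy as np

def characteristic_amounts(label_img):
    """Center of gravity, length/breadth (bounding-box sides) and area
    for every labeled area in the local area."""
    amounts = {}
    for k in np.unique(label_img):
        if k == 0:
            continue                      # skip the background
        ys, xs = np.nonzero(label_img == k)
        amounts[int(k)] = {
            "center_of_gravity": (ys.mean(), xs.mean()),
            "length": int(ys.max() - ys.min() + 1),   # vertical extent
            "breadth": int(xs.max() - xs.min() + 1),  # horizontal extent
            "area": int(ys.size),                     # number of pixels
        }
    return amounts
```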
• In a similar manner, the other local area extract processing units 8-2 . . . 8-m respectively include the corresponding local-area determining units 11-2 . . . 11-m, the corresponding noise canceling units 12-2 . . . 12-m, the corresponding labeling processing units 13-2 . . . 13-m, and the corresponding characteristic-amount calculation units 14-2 . . . 14-m. [0168]
  • FIGS. [0169] 8(A) to 8(F) are diagrams for explaining operations of a first preferred embodiment of the present invention in a case where a plurality of stationary objects (e.g., a stopped car and an obstacle exist in the same series of images) and a plurality of moving objects exist together.
  • In FIG. 8(A), it is assumed that there are a stopped car P[0170] 1 which has tail lamps flashing, an obstacle P2, and a low speed car P3, at a certain sampling time “t”, to simplify the explanation. In this case, an output from the first difference-calculation processing unit 5 is indicated by an image shown in FIG. 8(C). In this image, all the objects including the obstacle P2 are extracted. When this image and the local areas in FIG. 4 are overlapped with each other, an image as shown in FIG. 8(E) is obtained. In FIG. 8(E), cars exist in a portion of the local areas L1, L2 and L4. Therefore, characteristic-amounts or object parameters can be calculated for the three local areas.
• An image as shown in FIG. 8(A) changes to another image as shown in FIG. 8(B) at the sampling time when several seconds have elapsed after the sampling time “t” (i.e., sampling time “t+several seconds”). That is, the images of FIGS. 8(A) and 8(B) are captured with a sampling interval of several seconds between them. Further, an output from the first difference-calculation processing unit 5 is indicated by the image shown in FIG. 8(D). When this image and the local areas in FIG. 4 overlap with each other, similar to the case of FIG. 8(C), an image as shown in FIG. 8(F) is obtained. In FIG. 8(F), cars exist in the local areas L2 and L4. Therefore, characteristic-amounts can also be calculated in these two local areas. [0171]
• The characteristic-amount or parameter extraction is processed in a time series and used by a list making unit 20-2 in a locus calculation unit 20 of FIG. 2, and a list is created by the list making unit 20-2. An example of the list is shown in the following table 1, and a minimal sketch of such a list structure follows the table below. Each of the circles (◯) in the table 1 indicates that characteristic-amounts, such as the center-of-gravity, can be or are obtained at the corresponding sampling time; namely, some object (including an obstacle) exists in the given area at that time. The list is used to track the location or locus of an object by the list analyzing unit 20-3. In this case, as is apparent from FIG. 8, a stopped car P1, an obstacle P2, and a low speed car P3 should be detected in the locus calculation unit 20. [0172]
    TABLE 1
    Stop + Low speed
    L0 L1 L2 L3 L4 R0 R1 R2 R3 R4
    TIME t
    t 1
    t 2
    t 3
    t 4
    t 5
    t 6
    t 7
    t 8
    . . .
    . . .
    . . .
    t + SEVERAL
    SECONDS
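• The list sketched below is one possible representation of table 1, assuming the characteristic-amounts are delivered per sampling time and per local area; the data layout and names are illustrative assumptions, not the patent's internal format.

```python
from collections import defaultdict

def make_presence_list(features_per_time):
    """features_per_time: {sampling_time: {local_area_name: [characteristic-amount records]}}.
    Returns {sampling_time: set of local areas that contain some object},
    i.e. the positions of the circles in table 1."""
    presence = defaultdict(set)
    for t, per_area in features_per_time.items():
        for area, feats in per_area.items():
            if feats:                      # a labeled area was found here
                presence[t].add(area)
    return presence
```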
  • FIGS. [0173] 9(A) to 9(F) are diagrams for explaining operations of a preferred embodiment of the present invention in the case where a plurality of moving objects moving at a middle speed exist.
• In FIGS. 9(A) to 9(F), speed range images, extraction images, and the condition in which two different images overlap with each other, are illustrated at sampling times “t” and “t+several seconds”, respectively, in the case where middle speed cars exist in the images. [0174]
  • The result of characteristic or object parameter processing obtained from FIGS. [0175] 9(A) to 9(F) is also processed in a time series and used by the list making unit 20-2, and a list is created. An example of the list is shown in the following table 2.
    TABLE 2
    Middle speed
    L0 L1 L2 L3 L4 R0 R1 R2 R3 R4
    TIME t
    t 1
    t 2
    t 3
    t 4
    t 5
    t 6
    t 7
    t 8
    . . .
    . . .
    . . .
    t + SEVERAL
    SECONDS
  • FIGS. [0176] 10(A) to 10(F) are diagrams for explaining operations of a preferred embodiment of the present invention in the case where a plurality of objects moving at a high speed may exist.
  • However, in this case, a high speed car does not exist in the images. Therefore, in FIGS. [0177] 10(A) to 10(F), a high speed car is not illustrated. Also, in a table 3 corresponding to these figures, a circle is not inserted as shown below.
    TABLE 3
    High speed
    L0 L1 L2 L3 L4 R0 R1 R2 R3 R4
    TIME t
    t 1
    t 2
    t 3
    t 4
    t 5
    t 7
    t 8
    . . .
    . . .
    . . .
    t + SEVERAL
    SECONDS
• In this case, a list analyzing unit 20-3 in a locus calculation unit 20 of FIG. 2 analyzes the content of the table 3. Consequently, it is determined that a car moving at a high speed does not exist in either of the traffic lanes. [0178]
• Further, by an analysis of the table 2 mentioned previously, it can be determined that there are no circles in the local areas on the left side (L0 to L4), and therefore a car moving at a middle speed does not exist in the traffic lane on the left side at that time. Further, at the sampling times “t” and “t1”, there are circles in two local areas on the right side (R1 and R4). Therefore, it is discriminated or determined that two cars moving at a middle speed exist in the traffic lane on the right side at that time. [0179]
  • Also, in the table 2, at the sampling time “t3”, there are circles in adjoining local areas on the right side (R[0180] 1 and R2). Further, at the next sampling time “t4”, there is only one circle in a local area R2. Therefore, it can be discriminated or determined that an object (car) existing in the local area R1 and an object (car) existing in the local area R2 at the sampling time “t3” are related to the same object.
  • Further, the table 1 mentioned before is rather complicated and difficult to analyze. However, on the basis of the table 1, the below-mentioned facts can be discriminated or determined. [0181]
  • First, there is no circle in the local areas on the right side (R[0182] 0 to R4), and therefore a stopped car or a low speed car does not exist in the traffic lane on the right side at that time.
• Second, there are circles in the local areas on the left side (L1 to L4), and therefore a stopped car or a low speed car exists in the traffic lane on the left side at that time. [0183]
  • In this case, circles appear and disappear at regular intervals in the local area L[0184] 4. Therefore, it can be presumed that a stationary object exists in the adjoining local area L3. In the table 1, circles exist in the local area L1 at the sampling time from “t” to “t3”. Further, circles exist in the local areas L1 and L2 at the sampling time from “t4” to “t7”. Further, circles exist in the local area L2 after the sampling time “t8”. On the basis of the changes in the location of circles in the time series, it is discriminated or determined that a moving object at a low speed exists.
  • Heretofore, the above-mentioned analyzing process is carried out by discriminating whether an object exists in local areas, with the relationship between time and position being taken into consideration. [0185]
  • However, in addition to such an analyzing process, the value of a length and breadth of the object or the value of an area of the object can be utilized as a characteristic-amount or parameter. By virtue of these characteristic-amounts or object parameters, it becomes possible to extract a great deal of information. [0186]
• A table 4 illustrates an example in which a car executing a change of the traffic lane is detected. The change of the traffic lane can be easily discriminated or detected by tracing the movement of circles in the table 4 on the basis of the above-mentioned description. An object, which has existed in a local area until a given time, instantaneously disappears. However, at the time when the object disappears from one local area, another object appears in another local area, particularly in the adjoining local area. In this case, it is discriminated or determined that the object which disappeared is a car executing a change of the traffic lane. [0187]
    TABLE 4
    Change of traffic lane
    L0 L1 L2 L3 L4 R0 R1 R2 R3 R4
    TIME t
    t 1
    t 2
    t 3
    t 4
    t 5
    t 6
    t 7
    t 8
    t 9
    t 10
    t 11
    t 12
    . . .
    . . .
    . . .
• In the table 4, circles have existed in the local area L1 until the sampling time “t3”. Further, at the sampling time “t4”, circles appear in the three local areas L1, L2 and R2. Subsequently, at the sampling times “t5”, “t6” and “t7”, circles exist in the local area R2. Therefore, it is easily discriminated or determined that the object executes a change of the traffic lane from the left side to the right side. [0188]
• In this case, if the time necessary for the movement of the object and the size of each of the local areas can be obtained, a speed of the object can also be calculated. For example, when the length of a certain local area is defined as L, and the length of time in which the object is positioned in the local area is defined as T, the speed of the object can be calculated as L/T, as sketched below. [0189]
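• The two analyses above can be sketched as follows: the speed estimate L/T from the dwell time in one local area, and a simple lane-change test on the presence list (cf. table 4). Area names, units and the presence-list format are illustrative assumptions.

```python
def estimate_speed(area_length_m, frames_in_area, sampling_interval_s):
    """Speed = L / T, where T is the time the object stays in one local area."""
    T = frames_in_area * sampling_interval_s
    return area_length_m / T if T > 0 else None

def detect_lane_change(presence, left_area="L1", right_area="R2"):
    """presence: {sampling_time: set of occupied local areas}.
    Returns the first time at which the object vanishes from left_area
    while an object appears in the adjoining right_area."""
    times = sorted(presence)
    for prev, cur in zip(times, times[1:]):
        if (left_area in presence[prev] and left_area not in presence[cur]
                and right_area in presence[cur]):
            return cur
    return None
```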
  • FIGS. [0190] 11(A) and 11(B) are diagrams for explaining operations of a first preferred embodiment of the present invention in a case where a large-scale car and a small-scale car exist together in the image.
  • In FIG. 11(A), an image, in which both a large-scale car P[0191] L and a small-scale car PS move in the traffic lane on the right side, is illustrated. Further, when this image and a plurality of local areas shown in FIG. 4 overlap with each other, an image as shown in FIG. 11(B) is obtained.
  • In this case, the following table 5 is created when a large-scale car P[0192] L and a small-scale car PS exist in the same traffic lane.
    TABLE 5
    Large-scale car
    L0 L1 L2 L3 L4 R0 R1 R2 R3 R4
    TIME t
    t 1
    t 2
    t 3
    t 4
    t 5
    t 6
    t 7
    t 8
    t 9
    t 10
    t 11
    t 12
    . . .
    . . .
    . . .
  • In the table 5, circles exist continuously in the local area R[0193] 0 in accordance with the movement of a large-scale car. Therefore, an existence of a large-scale car can be easily detected or determined from such a pattern.
  • Further, in the table 5 and FIG. 11, at the sampling time “t4”, a small-scale car P[0194] S moves outside the local area R4. Further, at the sampling time “t5”, a large-scale car PL passes through the local area R1 and enters the other local area R2.
  • Further, with reference to the following table 6, the condition in which a large-scale car P[0195] L causes a small-scale car PS to go out of sight because the large scale car obscures the small scale car will be described.
    TABLE 6
    Condition in which a large-scale car
    puts a small-scale car out of sight
    L0 L1 L2 L3 L4 R0 R1 R2 R3 R4
    TIME t
    t 1
    t 2
    t 3
    t 4
    t 5
    t 6
    t 7
    t 8
    t 9
    t 10
    t 11
    t 12
    . . .
    . . .
    . . .
  • In the table 6, at the sampling time “t1”, a large-scale car moves into the local area R[0196] 1, while the other car (for example, a small-scale car) moves into the local area R3. Further, at the sampling time “t4”, the large-scale car moves into the local area R2. Further, at the sampling time “t7”, the large-scale car moves into the local area R3. At this time, as is apparent from the table 6, the large-scale car puts out of sight or obscures the other car moving in front of the large-scale car.
  • In this case, at the sampling time “t11”, the other car moves into the local area R[0197] 4; namely, the other car appears again in front of the large-scale car. The change of condition can be discriminated by the list analyzing unit 20-3.
  • In the [0198] locus calculation unit 20, the character analyzing unit 20-1 analyzes a characteristic or parameter concerning the shape of the object, and discriminates the same moving object. On the basis of a result of this discrimination, the character analyzing unit 20-1 instructs the list making unit 20-2 to determine a locus of the same moving object. In this way, it becomes possible to easily obtain the locus of the same moving object.
  • FIGS. [0199] 12(A) to 12(E) are diagrams for explaining operations of a second preferred embodiment of the present invention in a case where a large-scale moving object and a small-scale moving object exist together in an airport.
  • In the second preferred embodiment shown in FIGS. [0200] 12(A) to 12(E), the case where an image processing apparatus of the present invention is applied to a spot supervisory system utilizing a view of a predetermined spot in an airport will be described.
  • In the second preferred embodiment, an attempt is made to distinguish a large-scale moving object moving at a low speed (for example, an airplane) from small-scale moving objects moving at a middle or high speed (for example, special cars used for various work such as baggage handling), and to examine attributes of the large-scale moving objects. [0201]
  • In this embodiment, a plurality of local areas C[0202] 0, L0 to L7, and R0 to R7 are provided in a manner shown in FIG. 12(A). The local area C0 is intended to detect an airframe of the airplane. On the other hand, the other local areas L0 to L7, and R0 to R7 are intended to detect the other small-scale moving objects, e.g., the special cars.
  • FIG. 12(B) shows a condition in which an airplane J stops in a spot; and FIG. 12(C) shows the condition in which a plurality of special cars SP[0203] 1, SP2 move in various directions.
• Further, FIG. 12(D) shows a situation in which an image in FIG. 12(B) and the local areas in FIG. 12(A) overlap with each other; and FIG. 12(E) shows the situation in which an image in FIG. 12(C) and the local areas in FIG. 12(A) overlap with each other. [0204]
  • When the airplane J comes close to the spot and decelerates, and finally stops, the airframe of the airplane J appears in the local area C[0205] 0. Such a condition is detected by utilizing a technique for extracting a moving object moving at a low speed or stopped which has been described previously.
  • In this case, as shown in FIG. 13(A), the value of a length of the airframe is calculated by producing a projection of the airplane J in a direction corresponding to the longer sides of the local area C[0206] 0.
• Namely, the value of a length of the airframe can be measured. Further, it becomes possible to identify a type of the airplane J on the basis of the measured length of the airframe. In the case where the airplane J comes close to the spot and stops, the value of a length of the airframe changes gradually as illustrated in the following table 7. [0207]
    TABLE 7
    Length of airplane
    LENGTH OF PROJECTION OF C0
    TIME t  0
    t 1  50
    t 2 100
    t 3 150
    t 4 200
    t 5 200
    t 6 200
    t 7 200
    t 8 200
  • As apparent from the table 7, as time elapses over the sampling times “t1”, “t2” and “t3”, a length of a projection, i.e., the value of a projection increases. However, after a given time (in this case, at sampling time “t4”), the increase in the value of a projection stops. When an increase in the value of a projection stops and this value is stable, it is discriminated or determined that the airplane has completely stopped. Further, if a transformation equation is calculated in advance which establishes a relationship between the value of the projection and the value of an actual length, an actual length of the airplane can be accurately obtained. Consequently, in accordance with the actual length of the airplane, a type of the airplane can be determined. [0208]
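• A minimal sketch of this stop detection and length measurement is given below; the calibration factor (centimeters per dot) and the number of stable samples required are illustrative assumptions standing in for the transformation equation mentioned above.

```python
def airframe_length(projection_history, cm_per_dot=5.0, stable_samples=3):
    """projection_history: projection length in local area C0 at each sampling time
    (cf. table 7). Once the value stops increasing and stays constant, the airplane
    is judged to have stopped, and the value is converted to an approximate length."""
    for t in range(stable_samples, len(projection_history)):
        window = projection_history[t - stable_samples:t + 1]
        if len(set(window)) == 1 and window[-1] > 0:
            return window[-1] * cm_per_dot   # stopped: return calibrated length
    return None                              # still moving or not yet stable
```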
• In regard to the special cars SP1, SP2 shown in FIGS. 12(C) and 12(E), these cars are detected by utilizing a technique for extracting a moving object moving at middle and high speeds that has been previously described. Similar to the case of the airplane J, by calculating the value of a projection of each of these special cars, it becomes possible to detect the total number of the objects existing in the local area. [0209]
• For example, in FIG. 12(E), the condition in which the value of a projection in the local area L5 is obtained is illustrated in FIG. 13(B). As is apparent from FIG. 13(B), two types of projections exist, the first in the X direction (X0, X1, . . . Xn) and the second in the Y direction (Y0, Y1, . . . Yn), respectively. Therefore, it can be discriminated that two different objects independently exist in the local area, as sketched below. [0210]
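• Counting the separated runs of non-zero bins in the X and Y projections, as in FIG. 13(B), might be sketched as follows; treating the larger of the two counts as the object count is an assumption for this simple case.

```python
def count_runs(projection):
    """Number of groups of consecutive non-zero bins in a projection histogram."""
    runs, in_run = 0, False
    for v in projection:
        if v > 0 and not in_run:
            runs, in_run = runs + 1, True
        elif v == 0:
            in_run = False
    return runs

def count_objects(proj_x, proj_y):
    """Two separated runs in both directions suggest two independent objects."""
    return max(count_runs(proj_x), count_runs(proj_y))
```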
  • In the case where special cars move as shown in FIG. 12(E), the number of these special cars detected in the local area changes as illustrated in the following table 8. [0211]
    TABLE 8
    Number of objects
    L0 L1 L2 L3 L4 L5 L6 L7
    TIME t 1
    t 1 1 1
    t 2 1 1
    t 3 1 1
    t 4 1 1
    t 5 2
    t 6 2
    t 7 1 1
    t 8 1 1
    t 9 2 1
    t 10 2
    t 11 1 1
    t 12 2
    t 13 2
    t 14 1 1
     t 15 1 1
     t 16 1 1
     t 17 1
     t 18 1
     t 19 1
    t 20 1
  • As is apparent from table 8, as the objects (special cars) move, the number of the objects existing in each local area changes. On the basis of the change in the number of the objects, it can be discriminated or determined that, at the sampling time “t5”, a special car which had existed in the local area L[0212] 6 moves to the local area L7, and the number of the objects in the local area L7 becomes two (2). Further, it is also discriminated or determined that, at the sampling time “t7”, one of two special cars moving in the local area L7 moves to the local area L5. Further, it is discriminated that, at the sampling time “t9”, the remaining one of the two special cars moves to the local area L5, and the number of the objects in the local area L5 becomes two (2) again.
• Heretofore, a description has been given regarding the image processing apparatus of the present invention which is used in a tunnel or an airport. However, the present invention is not limited to these cases. Further, the various speeds of a plurality of moving objects have been classified into only three ranges (low speed, middle speed, and high speed). However, the speeds of a plurality of moving objects are not limited to these three ranges. For example, it is possible for the speeds of the moving objects to be classified into plural speed range values, e.g., 0 to 30 km/h, 30 to 60 km/h, 60 to 90 km/h, 90 to 120 km/h, 120 to 150 km/h, and a speed value of more than 150 km/h. [0213]
  • FIGS. [0214] 14(A) to 14(C) are diagrams for explaining a process of calculating a distance between two moving objects in a first preferred embodiment of the present invention. In this case, a process of calculating a distance between two moving objects is assumed to be carried out only by the first preferred embodiment, to compare the first preferred embodiment with a third preferred embodiment that will be hereinafter described.
• For example, as shown in FIGS. 9(A) to 9(E) and the table 2, some moving objects moving at a middle speed exist in the local areas L1, L4. In the case where a distance between the two objects is to be calculated by means of the first preferred embodiment, first, as shown in FIG. 14(A), various characteristic-amounts or object parameters are extracted by a moving object extract unit 100, by utilizing the extraction process previously described, for each sampling time period. The moving object extract unit 100 corresponds to the average background extract units and the difference-calculation processing units illustrated in FIG. 1 or FIG. 2. [0215]
  • Further, a moving [0216] object correlating unit 101 correlates the same objects in a time series of images with each other, in accordance with the value of the characteristic-amounts using a limit value for speed of the moving objects. At this time, characteristic-amounts or parameters such as a contour of each moving object, and a position or inclination of the surface of each moving object, may be extracted from an original input image, by utilizing image density and color information for each of the moving objects.
  • Further, a [0217] distance measuring unit 102 measures a distance between two moving objects of the thus correlated moving objects. In this way, a compression process that converts image data into numerical data can be carried out. On the basis of such numerical data, an analysis and anticipation of the movement of each of the moving objects can be carried out.
  • In a case where a distance between two moving objects is calculated by means of the technique of the first preferred embodiment, an original input image is classified on the basis of the speeds of moving objects existing in the original input image. Further, a plurality of images are generated, and the thus generated images are correlated with all the moving objects. Therefore, moving objects respectively having different speeds are correlated with each other with a sufficient degree of accuracy. [0218]
  • However, in a case where a distance between two moving objects is calculated by means of this technique, a contour of each of moving objects, and a position of the surfaces of each of the moving objects are typically used. Accordingly, as shown in FIGS. [0219] 14(B) and 14(C), it is difficult to correlate the moving objects with each other, at the same point on the same contour and at the same point on the same surface.
  • More specifically, in FIGS. [0220] 14(B) and 14(C), to calculate a distance between two moving objects, a video camera (image-input unit) is set above the moving objects, and an original image is input. When the video camera is set above a road, and inputs a plurality of moving objects, e.g., cars passing through the view of the video camera, the video camera takes an image of a plurality of moving objects on the road having a black color.
• In the case where two cars CA1, CA2 exist in an original image B1 as shown in FIG. 14(B), it is assumed that an edge extraction process is carried out for the two cars CA1, CA2. When such an edge extraction process is executed, since the color of each of the cars is similar to that of the road, a portion of each of the two cars CA1, CA2 disappears in an edge extraction image B2 of FIG. 14(B). Therefore, if a distance between the two cars is calculated on the basis of the edge extraction image B2, an error corresponding to a difference between the actual value and the calculated value becomes relatively large, as shown in image B3 of FIG. 14(B). [0221]
• Further, in a case where only a large-scale car LC exists in an original image C1 shown in FIG. 14(C), it is also assumed that an edge extraction process is carried out for the large-scale car LC. When such an edge extraction process is executed, the entirety of the large-scale car LC is not completely extracted. In such a situation, it is possible that the large-scale car LC will be erroneously recognized as two separate parts, in an edge extraction image C2 as shown in FIG. 14(C). Therefore, if a distance between the two separate parts is erroneously calculated on the basis of the edge extraction image C2, the calculated value has no meaning, as shown in image C3 of FIG. 14(C). [0222]
  • FIG. 15 is a schematic block diagram showing a third preferred embodiment of an image processing apparatus according to the present invention. [0223]
• In the third preferred embodiment, with respect to the problem described above, as shown in FIG. 15, a plurality of markers are placed in advance in a background where all the objects move, by means of a marker holding unit 110. At this time, the position, the shape, the size, etc., of each marker are calculated or known in advance. [0224]
• In the implementation of the third embodiment, a plurality of markers are placed or created by drawing white lines on the road at regular spacings or intervals. However, the markers used in the present invention are not limited to white lines, and any other things having various shapes can be utilized as the markers. Further, in FIG. 15, 114 denotes an image-input unit similar to that used in FIG. 1 or FIG. 2. [0225]
  • In such an implementation, a marker/moving [0226] object extract unit 111 calculates or determines a portion of the image in which the moving objects and the markers overlap with each other, and extracts each of the moving objects. Further, in regard to a plurality of images which are input over a period of time, a portion of each image in which the moving objects and the markers overlap with each other can be easily extracted.
  • In this way, a portion of the image in which the moving objects and the markers overlap with each other can be obtained as time series data. A marker/moving [0227] object correlating unit 112 correlates the obtained data, and identifies the same moving object. Further, on the basis of the number of markers existing between two different moving objects, a distance measuring unit 113 calculates a distance between two moving objects.
• In this case, the time series data concerning the markers may be correlated with each other, in place of the time series data on the moving objects. By utilizing the data about the markers, it becomes possible to grasp or identify the markers existing between the same moving objects extracted at the different sampling times, and to calculate a distance between two moving objects. By analyzing the thus calculated distance between two moving objects, an abnormality, such as an accident, viewed by the image processing apparatus can be anticipated. [0228]
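• A minimal sketch of this marker-based distance measurement is shown below, assuming each fully visible (unoccluded) marker between the two correlated vehicles is known together with its image position, and that the marker pitch (line plus gap) is 1.0 m as in the layout of FIG. 17; the names and the coordinate convention are illustrative assumptions.

```python
def distance_between(gap_start_y, gap_end_y, visible_markers, marker_pitch_m=1.0):
    """gap_start_y / gap_end_y: image y-coordinates bounding the gap between the
    two correlated vehicles. Markers inside the gap are counted and the count
    is converted to a distance using the known marker pitch."""
    between = [m for m in visible_markers if gap_start_y < m["y"] < gap_end_y]
    return len(between) * marker_pitch_m
```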
  • The construction and the operation of the third embodiment will be described in detail with reference to FIG. 16. Also, in this case, any component which is the same as that mentioned previously will be referred to using the same reference number. [0229]
  • FIG. 16 is a block diagram showing in detail the main parts of a third preferred embodiment of the present invention. [0230]
  • In FIG. 16, the [0231] reference numeral 110 denotes a marker holding unit; 111 denotes a marker/moving object extract unit; 112 denotes a marker/moving object correlating unit; 113 denotes a distance measuring unit; 121 denotes a processed area setting means; 122 denotes a binary code processing unit; 123 denotes a noise canceling unit; 124 denotes a connected-area extract unit; 125 denotes a connected-area position/shape calculating unit; and 126 denotes a marker collating unit.
  • Further, in FIG. 16, the [0232] reference numeral 127 denotes a marker dictionary unit; 128 denotes a moving object extract unit; 129 denotes a marker extract unit; 131 denotes a moving object/marker time-series table making unit; and 132 denotes a moving object/marker correlating unit.
  • FIGS. [0233] 17(A) to 17(C) are diagrams showing the condition in or positions at which markers are provided and various information about markers is registered in a marker dictionary, in this third preferred embodiment of the present invention.
• The marker holding unit 110 places a plurality of markers in the background. The data about these markers are stored in advance in the marker dictionary unit 127. In a case where the moving objects are cars, as shown in FIG. 17(A), the markers are obtained by coating or painting the road (hatched portion) with a plurality of white lines P1, P2, . . . P25 at equal spacings. In FIG. 17(B), the value of a width of each of the white lines P1, P2, . . . P25 is 50 cm, and twenty-five (25) white lines are drawn with a spacing of 50 cm. [0234]
• In this case, as shown in FIGS. 17(B) and 17(C), the value of 50 cm is represented in the display screen by ten dots (10 dots), i.e., image pixels. Further, the left end X coordinate is defined as the position corresponding to fifty dots in the X coordinate direction. On the other hand, the right end X coordinate is defined as the position corresponding to five hundred dots (500 dots) in the X coordinate direction. [0235]
• In such a condition, a coordinate (x, y) at the left upper end of the white line P25 is represented as (50, 480); and a coordinate (x, y) at the right lower end thereof is represented as (500, 490). The data for each coordinate (x, y) is registered or stored in the marker dictionary unit 127 by the marker holding unit 110. [0236]
• In a similar manner, by virtue of the marker holding unit 110, a coordinate (x, y) at the left upper end of each of the other white lines, and a coordinate (x, y) at the right lower end thereof, are registered or stored in the marker dictionary unit 127. In addition, the shape of the markers (a rectangular shape in this case) is registered in the marker dictionary unit 127, as sketched below. [0237]
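• A minimal sketch of such a marker dictionary is shown below; only the entry for P25 uses the coordinates given above, and the remaining entries (commented out) are illustrative assumptions.

```python
# One entry per white line: rectangular shape plus the screen coordinates (x, y)
# of the upper-left and lower-right corners, registered in advance.
marker_dictionary = {
    "P25": {"shape": "rectangle", "upper_left": (50, 480), "lower_right": (500, 490)},
    # "P24": {"shape": "rectangle", "upper_left": (50, 460), "lower_right": (500, 470)},
    # ...
    # "P1":  {"shape": "rectangle", ...},
}
```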
• FIGS. 18(A) and 18(B) are diagrams used for explaining a process of setting a region of the object to be processed for the passage of moving objects in the third preferred embodiment. [0238]
  • The processed area setting means [0239] 121 previously mentioned with respect to FIG. 16 determines a region in which the distance between two moving objects is to be measured, in the case where some moving objects exist in a region where a plurality of white lines (markers) are placed.
  • For example, as shown in FIG. 18(A), it is assumed that a road has two opposed traffic lanes, and a plurality of cars move in the two opposite directions as respectively indicated by arrows. In this case, only hatched portions in the area correspond to the range of traffic lanes on the left and right sides, and are defined as a region in which the objects are to be processed. On the other hand, the other portions in the area are defined as a region in which the objects need not be processed. Namely, a masking process is executed to isolate the region in which the objects must be processed. [0240]
• In FIG. 18(A), the region, in which the objects are to be processed, is defined as the hatched portions in the case of the road having two opposed traffic lanes. In accordance with the thus defined region, the marker dictionary unit 127 is updated. FIG. 18(B) illustrates the region concerning the traffic lane on the left side. In a case where the markers are provided by taking into consideration the region in which the objects are to be processed in the traffic lanes on the left and right sides, it is not necessary to define such a processed region. [0241]
  • FIGS. [0242] 19(A) to 19(C) are diagrams respectively showing region isolation processing, binary code processing, and noise canceling, in the third preferred embodiment.
  • As shown in FIG. 19(A), it is assumed that a rectangular size of the region to be processed is defined by a coordinate (x[0243] 1, y1) at the left upper end and also by a coordinate (x2, y2) at the right lower end. Further, it is assumed that an input image of a pixel (i,j) is INij, and an output image of a pixel (i,j) is OUTij.
• In this case, the relationship between the input image and the output image of a pixel is represented by the following equation: [0244]
• if (y1 < i [row] < y2 and x1 < j [column] < x2) then OUTij = INij
• else OUTij = 0
  • As is apparent from this equation, any input image existing in the region to be processed is directly output as an output image by the [0245] unit 121. However, the image features existing outside this region are output as zero (0).
  • As shown in FIG. 19(B), the binary [0246] code processing unit 122 outputs an output image OUTij which has the value of “1”, when the value of an input image pixel INij is smaller than a threshold value th1 and larger than a threshold value th2. In other cases, the binary code processing unit 122 outputs an output image OUTij which has the value of “0”. In this case, these threshold values th1, th2 are set in accordance with environmental illumination. However, in the environment in which a change of illumination may occur, these threshold values th1, th2 are adaptively adjusted, e.g., by calculating a histogram of a density of the image, etc.
• The noise canceling unit 123 eliminates an isolated point of noise, in a case where such an isolated point exists in an output from the binary code processing unit 122. Namely, the noise canceling unit 123 passes only patterns in which a group of dots (pixels) exists in all four neighboring positions around a given pixel. [0247]
• For example, as shown in FIG. 19(C), a group of pixels is detected by utilizing a logical filter F of 3×3 pixels in size. In a case where the binary value in each of the four pixels positioned around a central pixel (in the upper and lower directions, and the left and right directions) is “1”, the noise canceling unit 123 outputs an output image OUTij which has the value of “1”, as sketched below. [0248]
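• The following is a minimal sketch of this 3×3 logical filter, assuming a binary (0/1) NumPy array; setting the border pixels to zero is an assumption about the boundary handling.

```python
import numpy as np

def cancel_noise(binary):
    """A pixel of the output is 1 only where the four pixels above, below,
    to the left and to the right of it are all 1, so isolated points vanish."""
    out = np.zeros_like(binary)
    out[1:-1, 1:-1] = (binary[:-2, 1:-1] & binary[2:, 1:-1] &
                       binary[1:-1, :-2] & binary[1:-1, 2:])
    return out
```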
  • FIGS. [0249] 20(A) to 20(C) are diagrams for explaining a process of labeling a given object in the third preferred embodiment.
• The connected-area extract unit 124 provides the same label to a pattern in which a large number of dots (pixels), i.e., a group of dots, are positioned adjacent to each other (an eight-direction test). A group of pixels is connected when the binary value, in at least one of all eight directions including the four oblique directions, the upper and lower directions, and the left and right directions, is “1”. For example, as shown in FIG. 20(A), with respect to an input image A1, the labels 2, 3 and 4 are provided in a manner as shown in an image A2. [0250]
• As shown in FIG. 20(B), the labeling process is executed by scanning an input image by means of several pixel patterns A to F, each constituted by a matrix of 2×3. Further, in a case where the value of a given pixel E is equal to “1”, the label is updated in accordance with the circumferential patterns A to F. [0251]
• For example, when the value of pattern D is not equal to “0” (D≠0), the label of D is transferred to attach to the given pixel E. When the value of a pattern B is not equal to “0” (B≠0), and the value of a pattern B is not equal to that of a pattern D (B≠D), the fact that the label of a pattern B is the same as that of a pattern D is stored in a table. When the value of a pattern B is equal to “0” (B=0), and the value of a pattern C is not equal to that of a pattern D (C≠D), the fact that the label of a pattern C is the same as that of a pattern D is stored in a table. When the values of the patterns A to D are all equal to “0”, a new label is attached to the given pixel E. [0252]
  • In such an implementation, an input image is first scanned; thereafter, by utilizing a table storing the correspondence relation between labels, final labels are attached to the input image. This technique is disclosed in Japanese Unexamined Patent Publication (Kokai) No. 3-206574 (Raster Scan Type Labeling Processing System). [0253]
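  • Such raster-scan labeling with an equivalence table can be sketched as the classical two-pass algorithm below. This is a generic 8-connected implementation given only for illustration; it does not reproduce the exact 2×3 pixel patterns A to F of FIG. 20(B) or the circuits of the cited publications.

      import numpy as np

      def label_8connected(bin_img):
          """Two-pass raster-scan labeling: provisional labels plus an equivalence table."""
          h, w = bin_img.shape
          labels = np.zeros((h, w), dtype=np.int32)
          parent = [0]                                # equivalence table (union-find style)

          def find(a):
              while parent[a] != a:
                  a = parent[a]
              return a

          next_label = 1
          for i in range(h):
              for j in range(w):
                  if not bin_img[i, j]:
                      continue
                  # already-scanned 8-neighbours: upper-left, upper, upper-right and left
                  neigh = [labels[i + di, j + dj]
                           for di, dj in ((-1, -1), (-1, 0), (-1, 1), (0, -1))
                           if 0 <= i + di and 0 <= j + dj < w and labels[i + di, j + dj]]
                  if neigh:
                      m = min(find(n) for n in neigh)
                      labels[i, j] = m
                      for n in neigh:                 # record "label n is the same as label m"
                          parent[find(n)] = m
                  else:
                      parent.append(next_label)       # no labelled neighbour: attach a new label
                      labels[i, j] = next_label
                      next_label += 1
          # second pass: rewrite provisional labels with their final representatives
          for i in range(h):
              for j in range(w):
                  if labels[i, j]:
                      labels[i, j] = find(labels[i, j])
          return labels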
  • The above-mentioned labeling process is realized by a general-purpose CPU (Central Processing Unit) or a DSP (Digital Signal Processor). However, with respect to a process utilizing a dedicated pipeline processor operating at a video rate (33 msec/image), the related techniques are disclosed in Japanese Unexamined Patent Publication (Kokai) No. 61-243569 (System for Labeling to Digital Picture Area) and No. 63-27508 (Labeling Circuit for Connected Area). [0254]
  • As a result of the labeling process, as shown in FIG. 20(C), in a case where a moving object does not exist in the input image, the same label is attached to the whole of each white line in the region to be processed. For example, a label (1) is attached to a first white line P1; a label (2) is attached to a second white line P2; and a label (25) is attached to a twenty-fifth white line P25. [0255]
  • FIGS. 21(A) and 21(B) are diagrams for explaining a process of projecting a labeled object in the third preferred embodiment. [0256]
  • The connected-area position/shape calculating unit 125 calculates the shape and the position of the portion carrying each label. For example, as shown in FIG. 21(A), with respect to a label image LK to which the same label is attached, a projection V in the vertical direction and a projection H in the horizontal direction are produced. Further, the position and the shape of each of these projections are calculated or determined. Namely, projections are estimated for every label. [0257]
  • The projection H in the horizontal direction is obtained by calculating a histogram in the horizontal direction. Also, the projection V in the vertical direction is obtained by calculating a histogram in the vertical direction. Further, the position of the projection H is a longitudinal position, and the size thereof is (Pjh2, Pjh1). On the other hand, the position of the projection V is a transverse position, and the size thereof is (Pjv2, Pjv1). [0258]
  • In this way, information about the shape of each of the labels, the longitudinal size (Pjh2, Pjh1), the transverse size (Pjv2, Pjv1), and an area SUM is obtained. In a case where the product of the longitudinal size and the transverse size is equal to the area SUM, it is discriminated or determined that this label has a rectangular shape. [0259]
  • Since the label image has various image densities, projections can be obtained for each density, i.e., for every label. [0260]
  • More concretely, as shown in FIG. 21(B), in a case where the size of the entire image is M×N, the projection value Pjh[k][j] in the horizontal direction for a label k and the projection value Pjv[k][i] in the vertical direction for the same label k are represented by the following equations (1E) and (2E), with respect to an input image IN(i, j). [0261]
  • [projection value Pjh[k][j] in the j-th row] [0262]
      for (i = 1, N) { if IN(i, j) = k (k ≠ 0), Pjh[k][j] = Pjh[k][j] + 1 }   (1E)
  • [projection value Pjv[k][i] in the i-th column] [0263]
      for (j = 1, M) { if IN(i, j) = k (k ≠ 0), Pjv[k][i] = Pjv[k][i] + 1 }   (2E)
  • As is apparent from the equations (1E) and (2E), within the area carrying the same label, the projection value for the j-th row and for the i-th column is calculated simply by adding 1 (+1) to the current projection value whenever a pixel of that label is found in the horizontal direction or in the vertical direction, respectively. By such a calculation process, the projections can be easily obtained. [0264]
  • Further, a sum (SUM) of the projections is represented by the following equation (3E). [0265]
  • [sum (SUM) of the projections] [0266]
      for (j = 1, M) { SUM[k] = SUM[k] + Pjh[k][j] }   (3E)
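  • In other words, equations (1E) to (3E) amount to counting, for each label k, how many pixels of that label fall in each row and in each column, the area SUM[k] being the total of either projection. A compact sketch follows (the NumPy axis convention, with the first axis taken as the row index, is an assumption of this illustration):

      import numpy as np

      def label_projections(label_img, k):
          """Pjh: pixels of label k per row; Pjv: pixels of label k per column; SUM: their total."""
          mask = (label_img == k)
          pjh = mask.sum(axis=1)           # row-wise count, cf. equation (1E)
          pjv = mask.sum(axis=0)           # column-wise count, cf. equation (2E)
          return pjh, pjv, int(pjh.sum())  # the total corresponds to SUM[k] in equation (3E)

      def looks_rectangular(pjh, pjv, area):
          """Per the text above: rectangular when the bounding-box area equals the pixel count SUM."""
          rows, cols = np.nonzero(pjh)[0], np.nonzero(pjv)[0]
          if rows.size == 0:
              return False
          return (rows[-1] - rows[0] + 1) * (cols[-1] - cols[0] + 1) == area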
  • The marker collating unit 126 collates the marker dictionary 127, and discriminates whether a portion carrying the same label, as calculated by the connected-area position/shape calculating unit 125, is a marker which overlaps a moving object. More specifically, a coordinate of the upper left end Pn(x1, y1) and a coordinate of the lower right end Pn(x2, y2) of each marker (white line) stored in the marker dictionary 127 are read out. As described before, the data about the white lines shown in FIG. 17(C), the data for the white lines shown in FIG. 18(B), and the like, are stored in advance in the marker dictionary 127. [0267]
  • FIGS. 22(A) and 22(B) are diagrams for explaining a process of extracting a moving object which is a car having a color other than white and which passes through or over the markers, in the third preferred embodiment. [0268]
  • The moving object extract unit 128 extracts a moving object (e.g., a car) which overlaps a marker. When cars each having a color (e.g., black, red, or blue) other than white move over a plurality of markers, the two cars C1, C2 and the markers partially overlap with each other, as shown in FIG. 22(A). When the two cars C1, C2 and the markers are simultaneously captured by a video camera or the like from an overhead position, the two cars C1, C2 are separated by the markers. Therefore, moving objects such as cars can be extracted. [0269]
  • More specifically, as shown in FIG. 22(B), in an image portion where the cars and the markers overlap with each other, at least one marker is divided by the cars into two parts; namely, at least two labels are allocated to each of the cars. Consequently, as shown in FIG. 22(B), a larger number of labels are provided (markers (1), (2), . . . (9)), so that the number of the markers seems to increase, while the size of the divided markers becomes relatively small. If the number and the size of these markers are collated with the marker dictionary 127, it can be easily discriminated whether or not these markers are generated due to an overlap of the markers with moving objects, and the moving objects can be extracted. [0270]
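  • A rough sketch of this collation, with purely illustrative data structures (lists of fragment and dictionary marker sizes in pixels, and a tolerance that is an assumption of the example): the markers are judged to be overlapped by moving objects when the labelled fragments cannot be matched one-to-one against the dictionary entries.

      def overlap_detected(labelled_fragments, dictionary_markers, tolerance=0.2):
          """Each list holds (width, height) pairs; return True when markers were split or resized."""
          if len(labelled_fragments) != len(dictionary_markers):
              return True                     # the number of markers seems to have increased
          pairs = zip(sorted(labelled_fragments), sorted(dictionary_markers))
          for (fw, fh), (mw, mh) in pairs:
              if abs(fw - mw) > tolerance * mw or abs(fh - mh) > tolerance * mh:
                  return True                 # a fragment is too small (or large) to be a true marker
          return False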
  • In the case where the color of a specified moving object, e.g., a car, is different from that of the markers, it is also possible, as shown in FIG. 23(A), to provide a marker LM much larger than the cars C1, C2. When the cars are moving on the marker LM as shown in FIG. 23(B), the projection value in the horizontal direction is obtained as shown in FIG. 23(C). On the basis of this projection value, the position of each of the cars can be easily extracted or determined. In this case, it is not necessary to carry out a labeling process. [0271]
  • FIGS. 24(A) and 24(B) are diagrams for explaining a process of extracting a moving object which is a white car and passes through the markers, in the third preferred embodiment. [0272]
  • In the case where white cars move on the markers of white lines, the condition of the cars C1, C2 and the markers is illustrated in FIG. 24(A). When this condition is input as an original input image for carrying out a labeling process, the same label is attached to the portion in which one car C1 and the markers overlap with each other. Also, another single label is attached to the portion in which the other car C2 and the corresponding markers overlap with each other. [0273]
  • Therefore, as shown in FIG. 24(B), two different labels (1), (2) are attached. When the size of each of the labels is calculated in the connected-area position/shape calculating unit 125, the area of each of the labels (a sum (SUM) of all the projections) is larger than the area of a single marker stored in the marker dictionary 127. Further, when the shape of each of the labels is examined in the connected-area position/shape calculating unit 125, the value of the length (Pjh2−Pjh1) × the width (Pjv2−Pjv1) is different from the value stored in the marker dictionary 127. On the basis of such a discrimination or detection process, the cars can be extracted. [0274]
  • In a case where the white cars and the markers do not completely overlap each other, as in a label (2) shown in FIG. 23(B), the size and the shape of each of such labels conform to the size and the shape of one marker. Therefore, it is possible to detect or determine a distance between the two cars. [0275]
  • The marker extract unit 129 extracts the markers, and calculates a distance between two cars. In the case where cars having a color other than white pass through or over the markers, as in a label (5) shown in FIG. 22(A), it is discriminated or determined whether the size and the shape (rectangular) of a portion to which the same label is attached conform to the data stored in the marker dictionary 127. If it is confirmed that the size and the shape of the same label conform to the data stored in the marker dictionary 127, this label is detected as one of the markers. In this case, in accordance with the data in the marker dictionary 127, the size of each of the markers and the space between the markers can be calculated, and a distance between the two cars can be calculated. [0276]
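  • The distance calculation therefore reduces to counting the intact markers (and the gaps separating them) that lie between the two cars. A minimal sketch, assuming the dictionary stores one common line width and one common spacing (the parameter names and units are illustrative):

      def distance_between_cars(n_intact_markers, line_width, line_spacing):
          """Approximate car-to-car distance from the markers visible between the two cars."""
          if n_intact_markers <= 0:
              return 0.0
          # n markers plus the spaces before, between and after them
          return n_intact_markers * line_width + (n_intact_markers + 1) * line_spacing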
  • In the case where each of the markers has the shape shown in FIG. 23(A), a projection of an original marker and a continuous region from Pjh1 to Pjh2 can be obtained, and on the basis of a region from Pjh1 to Pjh2, a distance between two cars can be calculated. [0277]
  • FIGS. 25(A) to 25(E) are diagrams showing various tables which are utilized for calculating a distance between two moving objects in the third preferred embodiment. [0278]
  • The moving object/marker time-series table making unit 131 creates a time-series table of the moving objects and of the markers with which the moving objects are associated. As shown in FIG. 25(A), the time-series table of the moving objects indicates the position of each moving object with respect to the markers. In this case, it is assumed that each of the moving objects moves from marker P25 to P2. [0279]
  • In a case where only the moving objects are taken into consideration, as indicated by circles in FIG. 25(A), the relationship between the time when the moving objects exist in the image and the relative position of each marker (the number of white lines) is established by the time-series table. [0280]
  • Further, as shown in FIG. 25(A), by creating a table showing markers existing between the moving objects, a locus of each marker existing between the moving objects can be indicated. [0281]
  • The moving object/marker correlating unit 132 provides the same number for each of the moving objects which are discriminated or determined to be the same object. In this discrimination, the condition that different moving objects are separated by a predetermined distance, and also the direction of movement (in this case, the direction in which the moving objects move is from P25 to P1), are considered. Further, as shown in FIG. 25(C), by making a time-series table showing the markers existing between the moving objects, the correspondence relation between the moving objects (shown in FIG. 25(B)) and the related markers can be clarified. [0282]
  • In such a technique, each of the markers is traced by taking only the markers existing between moving objects into consideration. Therefore, the amount of processing necessary for the correlation between the moving objects and the markers can be reduced. Consequently, as compared to the case in which moving objects are extracted without markers, the technique of the third embodiment allows the extracting or identification process to be carried out at a high speed. [0283]
  • In this case, as shown in FIG. 25(A), the distance between the two cars is defined by two white lines P4, P5 at the sampling time T2, while the distance is defined by two white lines P3, P4 at the sampling time T3. At such a sampling time, the subject car exists over a plurality of white lines. This phenomenon is illustrated in FIG. 25(E), in which the same mark is provided for a plurality of white lines to indicate that the subject car moves on or over a plurality of white lines. [0284]
  • The distance measuring unit 113 measures a distance between two moving objects, e.g., two cars. For example, as shown in FIG. 25(B), on the basis of the correlation in the table between moving objects, the distance between the two moving objects is estimated by calculating the distance between white lines (markers) at each sampling time. Further, the maximum value, the minimum value, and the average value are calculated. For example, the moving objects are correlated with a plurality of images at the sampling times Tn to Tm, and the average value is calculated by utilizing the following equation (4E). Here, the distance between white lines (WL) at each sampling time i is denoted by Δi (i = n, . . . , m). [0285]
  • average = Σ Δi (distance between WL) / (m − n + 1),   i = n, . . . , m   (4E)
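  • Equation (4E) is a simple average of the per-frame white-line distances over the sampling times; the maximum and minimum values follow from the same series. A short illustrative sketch:

      def distance_statistics(per_frame_distances):
          """per_frame_distances: the distance (in white-line units) at each sampling time."""
          if not per_frame_distances:
              return None
          average = sum(per_frame_distances) / len(per_frame_distances)   # equation (4E)
          return max(per_frame_distances), min(per_frame_distances), average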
  • Further, a modification of the connected-area extract unit 124 and the connected-area position/shape calculating unit 125 in the third preferred embodiment will be described with reference to FIGS. 26(A) to 26(E). [0286]
  • In this case, the modified connected-area extract unit 124 extracts a contour corresponding to a portion where each binary code of the binary information is “1”. Alternatively, a contour obtained by a color (the color of the marker) extracting process can be extracted by the connected-area extract unit 124. [0287]
  • Further, a starting point for the extracting process, the maximum and minimum values of x and y, and the length of the circumference are stored in advance, and the contour extracting process is started. In executing the contour extracting process, the maximum and minimum values of x and y, and the length of the circumference of the contour, are calculated. [0288]
  • As shown in FIG. 26(B), in a case where the contour extracting process is carried out only on the white lines, the thus extracted data conform to the data stored in the marker dictionary 127. However, in the case where moving objects overlap the white lines, as shown in FIGS. 26(C) and 26(D), at least one of the data items does not conform to the data stored in the marker dictionary 127. A technique for the color extracting process is disclosed in Japanese Unexamined Patent Publication (Kokai) No. 63-314988 (Video Rate Color Extracting Device). [0289]
  • The connected-area position/shape calculating unit 125 compares the maximum and minimum values in both the x-component and the y-component, and the value of the length of the circumference, with the values stored in the marker dictionary 127. Further, it is concluded that a contour has a rectangular shape in a case where the maximum and minimum values in both the x-component and the y-component, and the value of the length of the circumference, conform to the values stored in the marker dictionary 127. [0290]
  • Further, as shown in FIG. 26(E), in carrying out such a contour extracting process, the entire image is scanned by utilizing a logical filter of 3×3 pixels. In the case where the values of all pixels around a central pixel (i, j) are “1”, it is discriminated that this region is related to an inner part of the marker, and “0” is output. In the other case, it is discriminated that this region is a boundary, and “1” is output. [0291]
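  • One way of writing this 3×3 boundary test (a sketch, under the assumption that only pixels whose own value is “1” are examined):

      import numpy as np

      def extract_contour(bin_img):
          """Output "0" where the whole 3x3 neighbourhood is "1" (interior), "1" for boundary pixels."""
          h, w = bin_img.shape
          out = np.zeros_like(bin_img)
          for i in range(1, h - 1):
              for j in range(1, w - 1):
                  if not bin_img[i, j]:
                      continue
                  window = bin_img[i - 1:i + 2, j - 1:j + 2]
                  out[i, j] = 0 if window.all() else 1   # all-ones neighbourhood -> inner part
          return out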
  • While the present invention has been described as related to the preferred embodiments, it will be understood that various changes and modifications may be made without departing from the spirit and the scope of the invention as hereinafter claimed. [0292]

Claims (15)

1. An image processing apparatus for extracting one or a plurality of objects, in the case where a plurality of stationary objects and a plurality of moving objects are contained together as a group of objects in one image, comprising:
an image-input unit (1) which inputs an image including a background and a plurality of said objects;
a background image extract unit (2) which extracts and stores said background;
a first average background extract unit (3) which extracts an image that includes one or a plurality of said stationary objects or moving objects each having a speed not higher than a predetermined first speed and also includes said background;
a second average background extract unit (4) which extracts an image that includes one or a plurality of said stationary objects or moving objects each having a speed not higher than a predetermined second speed and also includes said background;
a first difference-calculation processing unit (5) which calculates a difference value between an output from said background image extract unit (2) and either one of outputs from said first average background extract unit (3), and then generates a first speed image;
a second difference-calculation processing unit (6) which calculates a difference value between respective outputs from said first and second average background extract unit (3, 4), and then generates a second speed image; and
a third difference-calculation processing unit (7) which calculates a difference value between an output from said image-input unit (1) and either one of outputs from said first and second average background extract unit (3, 4), and then generates a third speed image.
2. An image processing apparatus as set forth in claim 1, wherein said apparatus further comprises a plurality of local-area characteristic-amount extract processing units which deal with outputs from said image-input unit (1), each of said plurality of local-area characteristic-amount extract processing units including:
a local-area determining unit (11) which allocates said outputs from said image-input unit (1) to each of a plurality of local areas;
a labeling processing unit (13) which separates at least one object from each of said plurality of local areas, by labeling the same object existing in each of said plurality of local areas; and
a characteristic-amount calculating unit (14) which calculates a plurality of characteristic-amounts for the thus labeled object in said plurality of local areas.
3. An image processing apparatus as set forth in claim 1, wherein:
said apparatus is operative to calculate a difference between said background and an average background image at a low speed, and to extract one or a plurality of connected areas from a difference image obtained by the thus calculated difference, wherein;
said apparatus is operative to produce each projection for each of said connected areas, and to calculate each position of the corresponding object in accordance with said each projection, and to calculate a plurality of characteristic-amounts, which at least include the value of a length and breadth of said object or the value of an area of said object, wherein;
said apparatus is operative to estimate a change in said each position of said object and also a change in said characteristic-amounts of said object for every sampling time in time series, and wherein;
said apparatus is operative to determine said object as a stationary object, in the case where both of the change in said position of said object and the change in said characteristic-amounts thereof are small.
4. An image processing apparatus as set forth in claim 2, wherein, said apparatus further comprises local-area characteristic-amount extract units (8, 9 and 10) which respectively have said local-area characteristic-amount extract processing units, corresponding to said local areas, for every speed image which is output from each of said difference-calculation processing units (5, 6 and 7).
5. An image processing apparatus as set forth in claim 2, wherein said apparatus comprises a locus calculation unit (20) having a list making unit (20-2), which detects an existence of said object in each of said plurality of local areas in time series, on the basis of an output from each of said plurality of local-area characteristic-amount extract processing units, and which makes out a list concerning the result of the detection.
6. An image processing apparatus as set forth in claim 5, wherein said locus-calculation unit (20) includes a character analyzing unit (20-1) for making a locus of the same moving object, on the basis of a character concerning the shape of said moving object, which is selected among said characteristic-amounts calculated by said characteristic-amount calculating unit (14).
7. An image processing apparatus as set forth in claim 5, wherein said apparatus is adapted to set a plurality of local areas which are to be processed, and to check whether or not said moving object passes through each of said local processed areas, and to calculate the locus of the same moving object, in accordance with the character concerning the shape of said moving object and the time when said moving object passes through each of said local processed areas, and to discriminate an abrupt change in the speed of said moving object and an abrupt change in the direction thereof, and to identify said moving object in said image.
8. An image processing apparatus as set forth in claim 5, wherein:
said apparatus is adapted to set a plurality of areas which are to be processed, in accordance with the size of a plurality of moving objects, and wherein;
said locus calculation unit (20) includes a list analyzing unit (20-3) for discriminating a locus of each of a plurality of small-scale cars in each of said local processed areas, and for recognizing a locus of the same moving object, even in the case where a large-scale car puts said plurality of small-scale cars out of sight.
9. An image processing apparatus as set forth in claim 6, wherein said apparatus is adapted to set a plurality of local areas which are to be processed, and to examine a periodicity as to whether or not said moving object passes through each of said local processed areas, and to examine a periodicity as to the character concerning the shape of said moving object, and to identify at least one flashing object.
10. An image processing apparatus for calculating a distance between two moving objects contained in one image, comprising:
an image-input unit (114) which inputs said image including a background and a plurality of objects;
a marker holding unit (110) which places a plurality of markers in said background;
a moving object extract unit (128) which extracts a plurality of moving objects;
a moving object/marker time-series table making unit (131) which traces said plurality of moving objects;
a marker extract unit (129) which extracts the markers existing between the two different moving objects; and
a distance measuring unit (113) which calculates said distance between said two moving objects, on the basis of the size of the thus extracted markers.
11. An image processing apparatus as set forth in claim 10, wherein a plurality of other markers, which are not connected with each other by said marker holding unit (110), are provided in said background, and wherein said apparatus further comprises:
a connected-area position/shape calculating unit (125) which calculates the size, the shape, and the number of said markers;
a marker dictionary unit (127) which has a marker dictionary for storing in advance the size and the shape of said markers; and
a marker collating unit (126) which collates the shape of the markers existing between the two different moving objects and also collates said marker dictionary, and wherein said apparatus is adapted to calculate the number of the markers which can be identified as true markers by a result of the collation in said marker collating unit (126), and to calculate said distance between said two moving objects.
12. An image processing apparatus as set forth in claim 11, wherein said apparatus is adapted to trace the number of the markers existing between the two different moving objects, and to calculate said distance between said two moving objects.
13. An image processing apparatus for calculating a distance between two cars in the case where a plurality of cars are contained in one image as moving objects, comprising:
a marker holding unit (110) which draws a plurality of white lines which are perpendicular to the direction in which the cars move, at equal spaces between adjoining white lines, as a plurality of markers;
a marker dictionary unit (127) which has a marker dictionary for estimating in advance the value of a length and breadth of each of said white lines, and also storing in advance said value of said length and breadth thereof;
a connected-area extract unit (124) which extracts some connected areas where the white lines are connected with each other, and labels said connected areas;
a connected-area position/shape calculating unit (125) which calculates the size and the shape of the thus labeled areas respectively corresponding to regions formed by said connected white lines, and then confirms the size of the white lines in said regions formed by said connected white lines and the size of the rectangular shape corresponding to each of the thus labeled areas, in accordance with the value of a length and breadth of the thus labeled areas;
a moving object/marker correlating unit (132) which traces a specified region formed by continuous white lines corresponding to said connected white lines; and
a distance measuring unit (113) which extracts the number of continuous white lines from the white lines confirmed by said connected-area position/shape calculating unit (125), and calculates said distance between said two moving objects on the basis of the total sum of spaces between said continuous white lines.
14. An image processing apparatus as set forth in claim 13, wherein:
said connected-area position/shape calculating unit (125) is adapted to separate each of said connected areas into a plurality of connected components, and to obtain the value of a projection for every connected component; wherein;
said connected-area position/shape calculating unit (125) is adapted to calculate the value of length and breadth in each of said connected areas, and to compare a product of the value of said length and the value of said breadth with a total sum of the value of each projection, and wherein;
said connected-area position/shape calculating unit (125) is adapted to finally discriminate whether or not each of said connected areas has the rectangular shape.
15. An image processing apparatus as set forth in claim 13, wherein:
said connected-area position/shape calculating unit (125) is adapted to extract a contour in XY two-dimensional image in which a binary code processing is carried out and a color extract processing is carried out, and to obtain the maximum and minimum value in both of x-component and y-component of said contour and also the value of a length of circumference of said contour, and wherein;
said connected-area position/shape calculating unit (125) is adapted to compare said maximum and minimum value in both of said x-component and y-component and said value of said length of circumference with the value stored in advance, and to conclude that said contour has the rectangular shape, in the case where said maximum and minimum value in both of said x-component and y-component and said value of said length of circumference conform to said value stored in advance.
US09/564,535 1993-03-31 2000-05-04 Image processing apparatus Expired - Fee Related US6430303B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/564,535 US6430303B1 (en) 1993-03-31 2000-05-04 Image processing apparatus

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
JP5-073319 1993-03-31
JP7331993 1993-03-31
JP12256393A JP3288474B2 (en) 1993-03-31 1993-05-25 Image processing device
JO5-122563 1993-05-25
JP5-122563 1993-05-25
US22092994A 1994-03-31 1994-03-31
US08/681,485 US6141435A (en) 1993-03-31 1996-07-23 Image processing apparatus
US09/564,535 US6430303B1 (en) 1993-03-31 2000-05-04 Image processing apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US08/681,485 Division US6141435A (en) 1993-03-31 1996-07-23 Image processing apparatus

Publications (2)

Publication Number Publication Date
US6430303B1 US6430303B1 (en) 2002-08-06
US20020126875A1 true US20020126875A1 (en) 2002-09-12

Family

ID=27465570

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/564,535 Expired - Fee Related US6430303B1 (en) 1993-03-31 2000-05-04 Image processing apparatus

Country Status (1)

Country Link
US (1) US6430303B1 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020196341A1 (en) * 2001-06-21 2002-12-26 Fujitsu Limited Method and apparatus for processing pictures of mobile object
US6754370B1 (en) * 2000-08-14 2004-06-22 The Board Of Trustees Of The Leland Stanford Junior University Real-time structured light range scanning of moving scenes
US20050105773A1 (en) * 2003-09-24 2005-05-19 Mitsuru Saito Automated estimation of average stopped delay at signalized intersections
US20060274917A1 (en) * 1999-11-03 2006-12-07 Cet Technologies Pte Ltd Image processing techniques for a video based traffic monitoring system and methods therefor
US20070120706A1 (en) * 1993-02-26 2007-05-31 Donnelly Corporation Image sensing system for a vehicle
US7655894B2 (en) 1996-03-25 2010-02-02 Donnelly Corporation Vehicular image sensing system
US7972045B2 (en) 2006-08-11 2011-07-05 Donnelly Corporation Automatic headlamp control system
US8017898B2 (en) 2007-08-17 2011-09-13 Magna Electronics Inc. Vehicular imaging system in an automatic headlamp control system
US20110310245A1 (en) * 2010-06-21 2011-12-22 Nissan Motor Co., Ltd. Travel distance detection device and travel distance detection method
US9171217B2 (en) 2002-05-03 2015-10-27 Magna Electronics Inc. Vision system for vehicle
US9191634B2 (en) 2004-04-15 2015-11-17 Magna Electronics Inc. Vision system for vehicle
US9367751B2 (en) 2012-06-19 2016-06-14 Ichikoh Industries, Ltd. Object detection device for area around vehicle
US9436880B2 (en) 1999-08-12 2016-09-06 Magna Electronics Inc. Vehicle vision system
US9509957B2 (en) 2008-07-24 2016-11-29 Magna Electronics Inc. Vehicle imaging system
US10132971B2 (en) 2016-03-04 2018-11-20 Magna Electronics Inc. Vehicle camera with multiple spectral filters
CN108846864A (en) * 2018-05-29 2018-11-20 珠海全志科技股份有限公司 A kind of position capture system, the method and device of moving object
US10397544B2 (en) 2010-08-19 2019-08-27 Nissan Motor Co., Ltd. Three-dimensional object detection device and three-dimensional object detection method
US10875403B2 (en) 2015-10-27 2020-12-29 Magna Electronics Inc. Vehicle vision system with enhanced night vision
US11951900B2 (en) 2023-04-10 2024-04-09 Magna Electronics Inc. Vehicular forward viewing image capture system

Families Citing this family (96)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5877897A (en) 1993-02-26 1999-03-02 Donnelly Corporation Automatic rearview mirror, vehicle lighting control and vehicle interior monitoring system using a photosensor array
US6891563B2 (en) 1996-05-22 2005-05-10 Donnelly Corporation Vehicular vision system
JP3873554B2 (en) * 1999-12-27 2007-01-24 株式会社日立製作所 Monitoring device, recording medium on which monitoring program is recorded
US6882287B2 (en) 2001-07-31 2005-04-19 Donnelly Corporation Automotive lane change aid
US7697027B2 (en) 2001-07-31 2010-04-13 Donnelly Corporation Vehicular video system
US20030043172A1 (en) * 2001-08-24 2003-03-06 Huiping Li Extraction of textual and graphic overlays from video
JP3747866B2 (en) * 2002-03-05 2006-02-22 日産自動車株式会社 Image processing apparatus for vehicle
US7308341B2 (en) 2003-10-14 2007-12-11 Donnelly Corporation Vehicle communication system
JP2005215985A (en) * 2004-01-29 2005-08-11 Fujitsu Ltd Traffic lane decision program and recording medium therefor, traffic lane decision unit and traffic lane decision method
US7561720B2 (en) * 2004-04-30 2009-07-14 Visteon Global Technologies, Inc. Single camera system and method for range and lateral position measurement of a preceding vehicle
AU2005242076B2 (en) * 2004-05-01 2009-07-23 Eliezer Jacob Digital camera with non-uniform image resolution
GB0419882D0 (en) * 2004-09-08 2004-10-13 Bamford Excavators Ltd Calculation module
US7881496B2 (en) 2004-09-30 2011-02-01 Donnelly Corporation Vision system for vehicle
US7639841B2 (en) * 2004-12-20 2009-12-29 Siemens Corporation System and method for on-road detection of a vehicle using knowledge fusion
US7720580B2 (en) 2004-12-23 2010-05-18 Donnelly Corporation Object detection system for vehicle
US7561721B2 (en) * 2005-02-02 2009-07-14 Visteon Global Technologies, Inc. System and method for range measurement of a preceding vehicle
US7920959B1 (en) * 2005-05-01 2011-04-05 Christopher Reed Williams Method and apparatus for estimating the velocity vector of multiple vehicles on non-level and curved roads using a single camera
US20070031008A1 (en) * 2005-08-02 2007-02-08 Visteon Global Technologies, Inc. System and method for range measurement of a preceding vehicle
US7623681B2 (en) * 2005-12-07 2009-11-24 Visteon Global Technologies, Inc. System and method for range measurement of a preceding vehicle
WO2008127752A2 (en) 2007-01-25 2008-10-23 Magna Electronics Radar sensing system for vehicle
US20090005948A1 (en) * 2007-06-28 2009-01-01 Faroog Abdel-Kareem Ibrahim Low speed follow operation and control strategy
US7914187B2 (en) 2007-07-12 2011-03-29 Magna Electronics Inc. Automatic lighting system with adaptive alignment function
EP2191457B1 (en) 2007-09-11 2014-12-10 Magna Electronics Imaging system for vehicle
US8446470B2 (en) 2007-10-04 2013-05-21 Magna Electronics, Inc. Combined RGB and IR imaging sensor
TW201001338A (en) * 2008-06-16 2010-01-01 Huper Lab Co Ltd Method of detecting moving objects
EP2401176B1 (en) 2009-02-27 2019-05-08 Magna Electronics Alert system for vehicle
US8376595B2 (en) 2009-05-15 2013-02-19 Magna Electronics, Inc. Automatic headlamp control
WO2011014497A1 (en) 2009-07-27 2011-02-03 Magna Electronics Inc. Vehicular camera with on-board microcontroller
CN102481874B (en) 2009-07-27 2015-08-05 马格纳电子系统公司 Parking assistance system
ES2538827T3 (en) 2009-09-01 2015-06-24 Magna Mirrors Of America, Inc. Imaging and display system for a vehicle
US9117123B2 (en) 2010-07-05 2015-08-25 Magna Electronics Inc. Vehicular rear view camera display system with lifecheck function
US9180908B2 (en) 2010-11-19 2015-11-10 Magna Electronics Inc. Lane keeping system and lane centering system
WO2012075250A1 (en) 2010-12-01 2012-06-07 Magna Electronics Inc. System and method of establishing a multi-camera image using pixel remapping
TWI452540B (en) * 2010-12-09 2014-09-11 Ind Tech Res Inst Image based detecting system and method for traffic parameters and computer program product thereof
US9264672B2 (en) 2010-12-22 2016-02-16 Magna Mirrors Of America, Inc. Vision display system for vehicle
WO2012103193A1 (en) 2011-01-26 2012-08-02 Magna Electronics Inc. Rear vision system with trailer angle detection
US9194943B2 (en) 2011-04-12 2015-11-24 Magna Electronics Inc. Step filter for estimating distance in a time-of-flight ranging system
US9547795B2 (en) 2011-04-25 2017-01-17 Magna Electronics Inc. Image processing method for detecting objects using relative motion
WO2013016409A1 (en) 2011-07-26 2013-01-31 Magna Electronics Inc. Vision system for vehicle
TW201310392A (en) * 2011-08-26 2013-03-01 Novatek Microelectronics Corp Estimating method of predicted motion vector
DE112012003931T5 (en) 2011-09-21 2014-07-10 Magna Electronics, Inc. Image processing system for a motor vehicle with image data transmission and power supply via a coaxial cable
US9681062B2 (en) 2011-09-26 2017-06-13 Magna Electronics Inc. Vehicle camera image quality improvement in poor visibility conditions by contrast amplification
US9146898B2 (en) 2011-10-27 2015-09-29 Magna Electronics Inc. Driver assist system with algorithm switching
US10099614B2 (en) 2011-11-28 2018-10-16 Magna Electronics Inc. Vision system for vehicle
US10457209B2 (en) 2012-02-22 2019-10-29 Magna Electronics Inc. Vehicle vision system with multi-paned view
WO2013126715A2 (en) 2012-02-22 2013-08-29 Magna Electronics, Inc. Vehicle camera system with image manipulation
US8694224B2 (en) 2012-03-01 2014-04-08 Magna Electronics Inc. Vehicle yaw rate correction
US10609335B2 (en) 2012-03-23 2020-03-31 Magna Electronics Inc. Vehicle vision system with accelerated object confirmation
US9319637B2 (en) 2012-03-27 2016-04-19 Magna Electronics Inc. Vehicle vision system with lens pollution detection
WO2013158592A2 (en) 2012-04-16 2013-10-24 Magna Electronics, Inc. Vehicle vision system with reduced image color data processing by use of dithering
US10089537B2 (en) 2012-05-18 2018-10-02 Magna Electronics Inc. Vehicle vision system with front and rear camera integration
US9340227B2 (en) 2012-08-14 2016-05-17 Magna Electronics Inc. Vehicle lane keep assist system
DE102013217430A1 (en) 2012-09-04 2014-03-06 Magna Electronics, Inc. Driver assistance system for a motor vehicle
US9558409B2 (en) 2012-09-26 2017-01-31 Magna Electronics Inc. Vehicle vision system with trailer angle detection
US9446713B2 (en) 2012-09-26 2016-09-20 Magna Electronics Inc. Trailer angle detection system
US9707896B2 (en) 2012-10-15 2017-07-18 Magna Electronics Inc. Vehicle camera lens dirt protection via air flow
US9743002B2 (en) 2012-11-19 2017-08-22 Magna Electronics Inc. Vehicle vision system with enhanced display functions
US9090234B2 (en) 2012-11-19 2015-07-28 Magna Electronics Inc. Braking control system for vehicle
US10025994B2 (en) 2012-12-04 2018-07-17 Magna Electronics Inc. Vehicle vision system utilizing corner detection
US9481301B2 (en) 2012-12-05 2016-11-01 Magna Electronics Inc. Vehicle vision system utilizing camera synchronization
US20140218529A1 (en) 2013-02-04 2014-08-07 Magna Electronics Inc. Vehicle data recording system
US9092986B2 (en) 2013-02-04 2015-07-28 Magna Electronics Inc. Vehicular vision system
US9445057B2 (en) 2013-02-20 2016-09-13 Magna Electronics Inc. Vehicle vision system with dirt detection
US10027930B2 (en) 2013-03-29 2018-07-17 Magna Electronics Inc. Spectral filtering for vehicular driver assistance systems
US9327693B2 (en) 2013-04-10 2016-05-03 Magna Electronics Inc. Rear collision avoidance system for vehicle
US10232797B2 (en) 2013-04-29 2019-03-19 Magna Electronics Inc. Rear vision system for vehicle with dual purpose signal lines
US9508014B2 (en) 2013-05-06 2016-11-29 Magna Electronics Inc. Vehicular multi-camera vision system
US10567705B2 (en) 2013-06-10 2020-02-18 Magna Electronics Inc. Coaxial cable with bidirectional data transmission
US9260095B2 (en) 2013-06-19 2016-02-16 Magna Electronics Inc. Vehicle vision system with collision mitigation
US20140375476A1 (en) 2013-06-24 2014-12-25 Magna Electronics Inc. Vehicle alert system
US10755110B2 (en) 2013-06-28 2020-08-25 Magna Electronics Inc. Trailering assist system for vehicle
US10326969B2 (en) 2013-08-12 2019-06-18 Magna Electronics Inc. Vehicle vision system with reduction of temporal noise in images
US9499139B2 (en) 2013-12-05 2016-11-22 Magna Electronics Inc. Vehicle monitoring system
US9988047B2 (en) 2013-12-12 2018-06-05 Magna Electronics Inc. Vehicle control system with traffic driving control
US10160382B2 (en) 2014-02-04 2018-12-25 Magna Electronics Inc. Trailer backup assist system
US9623878B2 (en) 2014-04-02 2017-04-18 Magna Electronics Inc. Personalized driver assistance system for vehicle
US9487235B2 (en) 2014-04-10 2016-11-08 Magna Electronics Inc. Vehicle control system with adaptive wheel angle correction
US10328932B2 (en) 2014-06-02 2019-06-25 Magna Electronics Inc. Parking assist system with annotated map generation
US9925980B2 (en) 2014-09-17 2018-03-27 Magna Electronics Inc. Vehicle collision avoidance system with enhanced pedestrian avoidance
US9764744B2 (en) 2015-02-25 2017-09-19 Magna Electronics Inc. Vehicle yaw rate estimation system
US10286855B2 (en) 2015-03-23 2019-05-14 Magna Electronics Inc. Vehicle vision system with video compression
US10819943B2 (en) 2015-05-07 2020-10-27 Magna Electronics Inc. Vehicle vision system with incident recording function
US10214206B2 (en) 2015-07-13 2019-02-26 Magna Electronics Inc. Parking assist system for vehicle
US10078789B2 (en) 2015-07-17 2018-09-18 Magna Electronics Inc. Vehicle parking assist system with vision-based parking space detection
US10086870B2 (en) 2015-08-18 2018-10-02 Magna Electronics Inc. Trailer parking assist system for vehicle
US10187590B2 (en) 2015-10-27 2019-01-22 Magna Electronics Inc. Multi-camera vehicle vision system with image gap fill
US10144419B2 (en) 2015-11-23 2018-12-04 Magna Electronics Inc. Vehicle dynamic control system for emergency handling
US10282831B2 (en) * 2015-12-28 2019-05-07 Novatek Microelectronics Corp. Method and apparatus for motion compensated noise reduction
US11277558B2 (en) 2016-02-01 2022-03-15 Magna Electronics Inc. Vehicle vision system with master-slave camera configuration
US11433809B2 (en) 2016-02-02 2022-09-06 Magna Electronics Inc. Vehicle vision system with smart camera video output
US10160437B2 (en) 2016-02-29 2018-12-25 Magna Electronics Inc. Vehicle control system with reverse assist
US20170253237A1 (en) 2016-03-02 2017-09-07 Magna Electronics Inc. Vehicle vision system with automatic parking function
US10055651B2 (en) 2016-03-08 2018-08-21 Magna Electronics Inc. Vehicle vision system with enhanced lane tracking
US10300859B2 (en) 2016-06-10 2019-05-28 Magna Electronics Inc. Multi-sensor interior mirror device with image adjustment
US10750119B2 (en) 2016-10-17 2020-08-18 Magna Electronics Inc. Vehicle camera LVDS repeater
JP7104916B2 (en) * 2018-08-24 2022-07-22 国立大学法人岩手大学 Moving object detection device and moving object detection method

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE394146B (en) * 1975-10-16 1977-06-06 L Olesen SATURATION DEVICE RESP CONTROL OF A FOREMAL, IN ESPECIALLY THE SPEED OF A VEHICLE.
US4449144A (en) 1981-06-26 1984-05-15 Omron Tateisi Electronics Co. Apparatus for detecting moving body
CA1287161C (en) 1984-09-17 1991-07-30 Akihiro Furukawa Apparatus for discriminating a moving region and a stationary region in a video signal
JPS61243569A (en) 1985-04-19 1986-10-29 Fujitsu Ltd System for labeling to digital picture area
JPS6320578A (en) 1986-07-14 1988-01-28 Fujitsu Ltd Labeling circuit for connected region
JPS63314988A (en) 1987-06-18 1988-12-22 Fujitsu Ltd Video rate color extracting device
US5109435A (en) 1988-08-08 1992-04-28 Hughes Aircraft Company Segmentation method for use against moving objects
US4937878A (en) 1988-08-08 1990-06-26 Hughes Aircraft Company Signal processing for autonomous acquisition of objects in cluttered background
US5034986A (en) 1989-03-01 1991-07-23 Siemens Aktiengesellschaft Method for detecting and tracking moving objects in a digital image sequence having a stationary background
JPH03206574A (en) 1990-01-09 1991-09-09 Fujitsu Ltd Raster scan type labeling processing system
JP2975629B2 (en) 1990-03-26 1999-11-10 株式会社東芝 Image recognition device
US5150426A (en) 1990-11-20 1992-09-22 Hughes Aircraft Company Moving target detection method using two-frame subtraction and a two quadrant multiplier
US5243418A (en) 1990-11-27 1993-09-07 Kabushiki Kaisha Toshiba Display monitoring system for detecting and tracking an intruder in a monitor area
US5301239A (en) 1991-02-18 1994-04-05 Matsushita Electric Industrial Co., Ltd. Apparatus for measuring the dynamic state of traffic
US5309137A (en) * 1991-02-26 1994-05-03 Mitsubishi Denki Kabushiki Kaisha Motor car traveling control device
US5590217A (en) * 1991-04-08 1996-12-31 Matsushita Electric Industrial Co., Ltd. Vehicle activity measuring apparatus
JPH05159057A (en) 1991-12-04 1993-06-25 Fujitsu Ltd Measuring instrument for moving object
JPH05159058A (en) 1991-12-04 1993-06-25 Fujitsu Ltd Locus analysis device for moving object
DE69330513D1 (en) 1992-03-20 2001-09-06 Commw Scient Ind Res Org OBJECT MONITORING SYSTEM
US5515448A (en) * 1992-07-28 1996-05-07 Yazaki Corporation Distance measuring apparatus of a target tracking type
JP3206574B2 (en) 1998-12-17 2001-09-10 日本電気株式会社 Signal estimation device and storage medium storing program

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070120706A1 (en) * 1993-02-26 2007-05-31 Donnelly Corporation Image sensing system for a vehicle
US7459664B2 (en) * 1993-02-26 2008-12-02 Donnelly Corporation Image sensing system for a vehicle
US8492698B2 (en) 1996-03-25 2013-07-23 Donnelly Corporation Driver assistance system for a vehicle
US8324552B2 (en) 1996-03-25 2012-12-04 Donnelly Corporation Vehicular image sensing system
US8222588B2 (en) 1996-03-25 2012-07-17 Donnelly Corporation Vehicular image sensing system
US8481910B2 (en) 1996-03-25 2013-07-09 Donnelly Corporation Vehicular image sensing system
US7994462B2 (en) 1996-03-25 2011-08-09 Donnelly Corporation Vehicular image sensing system
US7655894B2 (en) 1996-03-25 2010-02-02 Donnelly Corporation Vehicular image sensing system
US8993951B2 (en) 1996-03-25 2015-03-31 Magna Electronics Inc. Driver assistance system for a vehicle
US9436880B2 (en) 1999-08-12 2016-09-06 Magna Electronics Inc. Vehicle vision system
US20060274917A1 (en) * 1999-11-03 2006-12-07 Cet Technologies Pte Ltd Image processing techniques for a video based traffic monitoring system and methods therefor
US7460691B2 (en) * 1999-11-03 2008-12-02 Cet Technologies Pte Ltd Image processing techniques for a video based traffic monitoring system and methods therefor
US6754370B1 (en) * 2000-08-14 2004-06-22 The Board Of Trustees Of The Leland Stanford Junior University Real-time structured light range scanning of moving scenes
US7298394B2 (en) * 2001-06-21 2007-11-20 Fujitsu Limited Method and apparatus for processing pictures of mobile object
US20020196341A1 (en) * 2001-06-21 2002-12-26 Fujitsu Limited Method and apparatus for processing pictures of mobile object
US10351135B2 (en) 2002-05-03 2019-07-16 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US10118618B2 (en) 2002-05-03 2018-11-06 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US9834216B2 (en) 2002-05-03 2017-12-05 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US9643605B2 (en) 2002-05-03 2017-05-09 Magna Electronics Inc. Vision system for vehicle
US9555803B2 (en) 2002-05-03 2017-01-31 Magna Electronics Inc. Driver assistance system for vehicle
US10683008B2 (en) 2002-05-03 2020-06-16 Magna Electronics Inc. Vehicular driving assist system using forward-viewing camera
US11203340B2 (en) 2002-05-03 2021-12-21 Magna Electronics Inc. Vehicular vision system using side-viewing camera
US9171217B2 (en) 2002-05-03 2015-10-27 Magna Electronics Inc. Vision system for vehicle
US7747041B2 (en) * 2003-09-24 2010-06-29 Brigham Young University Automated estimation of average stopped delay at signalized intersections
US20050105773A1 (en) * 2003-09-24 2005-05-19 Mitsuru Saito Automated estimation of average stopped delay at signalized intersections
US9609289B2 (en) 2004-04-15 2017-03-28 Magna Electronics Inc. Vision system for vehicle
US9428192B2 (en) 2004-04-15 2016-08-30 Magna Electronics Inc. Vision system for vehicle
US9191634B2 (en) 2004-04-15 2015-11-17 Magna Electronics Inc. Vision system for vehicle
US10306190B1 (en) 2004-04-15 2019-05-28 Magna Electronics Inc. Vehicular control system
US11847836B2 (en) 2004-04-15 2023-12-19 Magna Electronics Inc. Vehicular control system with road curvature determination
US10462426B2 (en) 2004-04-15 2019-10-29 Magna Electronics Inc. Vehicular control system
US11503253B2 (en) 2004-04-15 2022-11-15 Magna Electronics Inc. Vehicular control system with traffic lane detection
US9736435B2 (en) 2004-04-15 2017-08-15 Magna Electronics Inc. Vision system for vehicle
US10735695B2 (en) 2004-04-15 2020-08-04 Magna Electronics Inc. Vehicular control system with traffic lane detection
US9948904B2 (en) 2004-04-15 2018-04-17 Magna Electronics Inc. Vision system for vehicle
US10015452B1 (en) 2004-04-15 2018-07-03 Magna Electronics Inc. Vehicular control system
US10187615B1 (en) 2004-04-15 2019-01-22 Magna Electronics Inc. Vehicular control system
US10110860B1 (en) 2004-04-15 2018-10-23 Magna Electronics Inc. Vehicular control system
US8162518B2 (en) 2006-08-11 2012-04-24 Donnelly Corporation Adaptive forward lighting system for vehicle
US8434919B2 (en) 2006-08-11 2013-05-07 Donnelly Corporation Adaptive forward lighting system for vehicle
US7972045B2 (en) 2006-08-11 2011-07-05 Donnelly Corporation Automatic headlamp control system
US10071676B2 (en) 2006-08-11 2018-09-11 Magna Electronics Inc. Vision system for vehicle
US11623559B2 (en) 2006-08-11 2023-04-11 Magna Electronics Inc. Vehicular forward viewing image capture system
US9440535B2 (en) 2006-08-11 2016-09-13 Magna Electronics Inc. Vision system for vehicle
US11396257B2 (en) 2006-08-11 2022-07-26 Magna Electronics Inc. Vehicular forward viewing image capture system
US11148583B2 (en) 2006-08-11 2021-10-19 Magna Electronics Inc. Vehicular forward viewing image capture system
US10787116B2 (en) 2006-08-11 2020-09-29 Magna Electronics Inc. Adaptive forward lighting system for vehicle comprising a control that adjusts the headlamp beam in response to processing of image data captured by a camera
US8017898B2 (en) 2007-08-17 2011-09-13 Magna Electronics Inc. Vehicular imaging system in an automatic headlamp control system
US11091105B2 (en) 2008-07-24 2021-08-17 Magna Electronics Inc. Vehicle vision system
US9509957B2 (en) 2008-07-24 2016-11-29 Magna Electronics Inc. Vehicle imaging system
US8854456B2 (en) * 2010-06-21 2014-10-07 Nissan Motor Co., Ltd. Travel distance detection device and travel distance detection method
US20110310245A1 (en) * 2010-06-21 2011-12-22 Nissan Motor Co., Ltd. Travel distance detection device and travel distance detection method
US10397544B2 (en) 2010-08-19 2019-08-27 Nissan Motor Co., Ltd. Three-dimensional object detection device and three-dimensional object detection method
US9367751B2 (en) 2012-06-19 2016-06-14 Ichikoh Industries, Ltd. Object detection device for area around vehicle
US10875403B2 (en) 2015-10-27 2020-12-29 Magna Electronics Inc. Vehicle vision system with enhanced night vision
US10132971B2 (en) 2016-03-04 2018-11-20 Magna Electronics Inc. Vehicle camera with multiple spectral filters
CN108846864A (en) * 2018-05-29 2018-11-20 珠海全志科技股份有限公司 A kind of position capture system, the method and device of moving object
US11951900B2 (en) 2023-04-10 2024-04-09 Magna Electronics Inc. Vehicular forward viewing image capture system

Also Published As

Publication number Publication date
US6430303B1 (en) 2002-08-06

Similar Documents

Publication Publication Date Title
US6430303B1 (en) Image processing apparatus
US6141435A (en) Image processing apparatus
EP0567059B1 (en) Object recognition system using image processing
EP3576008B1 (en) Image based lane marking classification
US5341437A (en) Method of determining the configuration of a path for motor vehicle
CN101030256B (en) Method and apparatus for cutting vehicle image
Javed et al. Tracking and object classification for automated surveillance
US8655078B2 (en) Situation determining apparatus, situation determining method, situation determining program, abnormality determining apparatus, abnormality determining method, abnormality determining program, and congestion estimating apparatus
US5987152A (en) Method for measuring visibility from a moving vehicle
Gupte et al. Detection and classification of vehicles
JP2917661B2 (en) Traffic flow measurement processing method and device
KR100201739B1 (en) Method for observing an object, apparatus for observing an object using said method, apparatus for measuring traffic flow and apparatus for observing a parking lot
US7298394B2 (en) Method and apparatus for processing pictures of mobile object
US20090309966A1 (en) Method of detecting moving objects
CN104951784A (en) Method of detecting absence and coverage of license plate in real time
KR102031503B1 (en) Method and system for detecting multi-object
JPH11252587A (en) Object tracking device
KR102318586B1 (en) Method of detecting median strip and predicting collision risk through analysis of images
EP3557529A1 (en) Lane marking recognition device
IL233367A (en) Measurement system for detecting an identifier assigned to a vehicle as it passes through a measurement cross section of a road
JP2001256485A (en) System for discriminating vehicle kind
JP2002008019A (en) Railway track recognition device and rolling stock using railway track recognition device
JPH09270014A (en) Extraction device for moving body
JP2910130B2 (en) Automatic car number reader
JP3816785B2 (en) Distance measuring device

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20140806