CN102542256B - Forward collision warning trap and pedestrian advanced warning system - Google Patents

Forward collision warning trap and pedestrian advanced warning system


Publication number
CN102542256B
Authority
CN
China
Legal status: Active
Application number
CN201110404574.1A
Other languages
Chinese (zh)
Other versions
CN102542256A (en)
Inventor
Dan Rosenbaum
Amiad Gurman
Gideon Stein
Current Assignee
Mobileye Technologies Ltd
Original Assignee
Mobileye Technologies Ltd
Priority date
Application filed by Mobileye Technologies Ltd
Priority to CN201710344179.6A (CN107423675B)
Publication of CN102542256A
Application granted
Publication of CN102542256B


Classifications

    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G08G 1/165: Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G08G 1/166: Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/30252: Vehicle exterior; vicinity of vehicle
    • G06T 2207/30256: Lane; road marking

Abstract

The present invention relates to an advanced warning system and method for providing forward collision warnings for traps and pedestrians using a camera mountable in a motor vehicle. The method acquires image frames at known intervals. A patch may be selected in at least one image frame. The optical flow of multiple image points of the patch may be tracked between the image frames. The image points may be fitted to at least one model. Based on the fit of the image points to the at least one model, it may be determined whether a collision is expected and, if so, the time to collision (TTC). The image points may be fitted to a road surface model, with a portion of the image points modeled as imaged from the road surface; based on the fit of the image points to the road surface model it may be determined that no collision is expected. The at least one model may also include a mixed model, in which a first portion of the image points is modeled as imaged from the road surface and a second portion is modeled as imaged from a substantially vertical object. The image points may be fitted to a vertical surface model, in which a portion of the image points is modeled as imaged from a vertical object, and the TTC may be determined based on the fit of the image points to the vertical surface model.

Description

Forward collision warning trap and pedestrian advanced warning system
Background
1. Technical Field
The present invention relates to driver assistance systems that provide forward collision warnings.
2. Description of Related Art
Camera-based driver assistance systems (DAS) have been entering the market in recent years; these include lane departure warning (LDW), automatic high-beam control (AHC), pedestrian recognition, and forward collision warning (FCW).
Lane departure warning (LDW) systems are designed to give a warning in the case of unintentional lane departure. The warning is given when the vehicle crosses or is about to cross the lane marking. Driver intention is determined based on use of turn signals, changes in steering-wheel angle, vehicle speed, and brake activation.
In image processing, the Moravec corner detection algorithm is probably one of the earliest corner detection algorithms; it defines a corner as a point with low self-similarity. The Moravec algorithm tests each pixel in the image for the presence of a corner by considering how similar a patch centered on the pixel is to nearby, largely overlapping patches. Similarity is measured by the sum of squared differences (SSD) between the two patches; a smaller number indicates greater similarity. An alternative approach to detecting corners in an image is based on the method proposed by Harris and Stephens, which is an improvement of the method proposed by Moravec. Harris and Stephens improved Moravec's corner detection algorithm by considering the differential of the corner score directly with respect to direction, rather than using shifted nearby patches as Moravec did.
In computer vision, a widely used differential method for optical flow estimation was developed by Bruce D. Lucas and Takeo Kanade. The Lucas-Kanade method assumes that the flow is essentially constant in a local neighborhood of the pixel under consideration, and solves the basic optical flow equations for all pixels in that neighborhood by the least-squares criterion. By combining information from several nearby pixels, the Lucas-Kanade method can often resolve the inherent ambiguity of the optical flow equation. Compared to point-wise methods, it is also less sensitive to image noise. On the other hand, since it is a purely local method, it cannot provide flow information in the interior of uniform regions of the image.
Summary
According to a feature of the present invention, various methods are provided for signaling a forward collision warning using a camera mountable in a motor vehicle. Multiple image frames are acquired at known time intervals. An image patch may be selected in at least one of the image frames. Optical flow may be tracked for multiple image points of the patch between the image frames. The image points may be fitted to at least one model. Based on the fit of the image points, it may be determined whether a collision is expected and, if so, the time to collision (TTC) may be determined. The image points may be fitted to a road surface model, in which a portion of the image points is modeled as imaged from the road surface; based on the fit of the image points to the road surface model it may be determined that no collision is expected. The image points may be fitted to a vertical surface model, in which a portion of the image points is modeled as imaged from a vertical object; the time to collision TTC may be determined based on the fit of the image points to the vertical surface model. The image points may be fitted to a mixed model, in which a first portion of the image points is modeled as imaged from the road surface and a second portion is modeled as imaged from a substantially vertical or upright object rather than from the road surface.
A candidate image of a pedestrian may be detected in an image frame, and the patch may be selected to include the candidate image of the pedestrian. When the best-fit model is the vertical surface model, the candidate image may be verified to be the image of an upright pedestrian rather than an object in the road surface. A vertical line may be detected in an image frame, and the patch may be selected to include the vertical line. When the best-fit model is the vertical surface model, the vertical line may be verified to be the image of a vertical object rather than the image of an object in the road surface.
In the various methods, a warning may be issued based on the time to collision being less than a threshold. In the various methods, the relative scale of the patch may be determined based on the optical flow between the image frames, and the time to collision (TTC) may be determined responsive to the relative scale and the time interval. The methods may avoid object recognition in the patch prior to determining the relative scale.
According to a feature of the present invention, a system including a camera and a processor is provided. The system may be operable to provide a forward collision warning using the camera mountable in a motor vehicle. The system may also be operable to acquire multiple image frames at known time intervals, to select a patch in at least one of the image frames, to track optical flow of multiple image points of the patch between the image frames, to fit the image points to at least one model and, based on the fit of the image points to the at least one model, to determine whether a collision is expected and, if so, to determine the time to collision (TTC). The system may also be operable to fit the image points to a road surface model. Based on the fit of the image points to the road surface model, it may be determined that no collision is expected.
In other embodiments of the present invention, a patch may be selected in an image frame, the patch corresponding to where the motor vehicle will be after a predetermined time interval. The patch may be monitored; a forward collision warning is issued if an object is imaged in the patch. It may be determined whether the object is substantially vertical, upright, or not in the road surface by tracking the optical flow of multiple image points of the object in the patch between the image frames. The image points may be fitted to at least one model, with a portion of the image points modeled as imaged from the object. Based on the fit of the image points to the at least one model, it is determined whether a collision is expected and, if so, the time to collision (TTC) is determined. A forward collision warning may be issued when the best-fit model includes the vertical surface model. The image points may be fitted to a road surface model. Based on the fit of the image points to the road surface model, it may be determined that no collision is expected.
According to a feature of the present invention, a system for providing a forward collision warning in a motor vehicle is provided. The system includes a camera mountable in the motor vehicle and a processor. The camera may be operable to acquire multiple image frames at known time intervals. The processor may be operable to select a patch in an image frame, the patch corresponding to where the motor vehicle will be after a predetermined time interval, and to issue a forward collision warning if an object is imaged in the patch and the object is found to be upright and/or not in the road surface. The processor may also be operable to track multiple image points of the object in the patch between the image frames and to fit the image points to one or more models. The models may include a vertical object model, a road surface model, and/or a mixed model, where the mixed model provides for one or more image points from the road surface and one or more image points from an upright object not in the road surface. Based on the fit of the image points to the models, it is determined whether a collision is expected and, if a collision is expected, the time to collision (TTC) is determined. The processor may be operable to issue a forward collision warning based on the TTC being less than a threshold.
Brief Description of the Drawings
The present invention is described herein, by way of example only, with reference to the accompanying drawings, wherein:
Figs. 1a and 1b schematically show, according to a feature of the present invention, two images captured from a forward-looking camera mounted inside a vehicle as the vehicle approaches a metal guardrail.
Fig. 2 a show feature of the invention, for using the video camera in the main car (host vehicle) The method that front shock warning is provided.
Fig. 2b shows, according to a feature of the present invention, further details of the time-to-collision determination step shown in Fig. 2a.
Fig. 3 a show feature of the invention, upright surface picture frame (back side of van).
Fig. 3 c show feature of the invention, mainly road surface rectangular area.
Fig. 3 b show feature of the invention, the function as vertical image position (y) on Fig. 3 a point Vertical movement δ y.
Fig. 3 d show feature of the invention, the function as vertical image position (y) on Fig. 3 c point Vertical movement δ y.
Fig. 4 a show feature of the invention, including the image with horizontal line and the guardrail wires of rectangular patches Picture frame.
The more details of rectangular patches that Fig. 4 b and 4c show feature of the invention, showing in fig .4.
Fig. 4 d show feature of the invention, point song of the vertical movement (δ y) relative to vertical point position (y) Line chart.
Fig. 5 shows, according to a feature of the present invention, another example of looming in an image frame.
Fig. 6 shows, according to a feature of the present invention, a method for providing a forward collision warning trap.
Figs. 7a and 7b show, according to an example feature of the present invention, an example of a forward collision warning trap triggered by a wall.
Fig. 7c shows, according to an example feature of the present invention, an example of a forward collision warning trap triggered by boxes.
Fig. 7d shows, according to an example feature of the present invention, an example of a forward collision warning trap triggered by the side of a car.
Fig. 8 a show according to an aspect of the present invention, the example with the object of obvious vertical line on box.
Fig. 8 b show according to an aspect of the present invention, the example with the object of obvious vertical line on lamppost.
Figs. 9 and 10 show, according to an aspect of the present invention, a system including a camera or image sensor mounted in a vehicle.
Detailed Description
Reference will now be made in detail to features of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The features are described below with reference to the drawings to explain the present invention.
Before explaining features of the invention in detail, it is to be understood that the invention is not limited in its application to the details of design and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other features or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
By way of introduction, embodiments of the present invention are directed to a forward collision warning (FCW) system. According to U.S. Patent 7,113,867, the image of a lead vehicle is recognized. The width of the vehicle may be used to detect a change in the ratio, or relative scale S, between image frames, and the relative scale is used to determine the time to collision. Specifically, the width of the lead vehicle has a length (as measured, for example, in pixels or millimeters) denoted w(t1) in a first image and w(t2) in a second image. The relative scale may then be S(t) = w(t2)/w(t1).
According to the teachings of U.S. Patent 7,113,867, a forward collision warning (FCW) system relies on recognition of the image of an obstruction or object, for instance a lead vehicle recognized in the image frames. In such a forward collision warning system, as disclosed in U.S. Patent 7,113,867, the change in the scale of a dimension (e.g. width) of the detected object (e.g. vehicle) is used to compute the time to collision (TTC). However, the object is first detected and segmented from the surrounding scene. The present disclosure describes a system using the change in relative scale, which determines the time to collision TTC and the likelihood of collision based on optical flow and, if needed, issues an FCW warning. Optical flow gives rise to the looming phenomenon: as an imaged object draws nearer, its perceived image appears larger. According to different features of the present invention, object detection and/or recognition may be performed, or object detection and/or recognition may be avoided.
The looming phenomenon has been widely studied in biological systems. Looming appears to be a very low-level visual attention mechanism in humans and can trigger instinctive reactions. There have been various attempts in computer vision to detect looming, and even silicon sensors have been designed for detecting looming in the case of pure translation.
Looming detection may be performed in real-world environments with changing lighting conditions, complex scenes including multiple objects, and host vehicle motion that includes both translation and rotation.
The term "relative scale" as used herein refers to the increase (or decrease) in the relative size of an image patch in an image frame and the corresponding image patch in a subsequent image frame.
Reference is now made to Figs. 9 and 10, which show a system 16 including a camera or image sensor 12 mounted in a vehicle 18, according to an aspect of the present invention. The image sensor 12, imaging a field of view in the forward direction, delivers images in real time, and the images are captured in a time series of image frames 15. An image processor 14 may be used to process the image frames 15 simultaneously and/or in parallel to serve a number of driver assistance systems. The driver assistance systems may be implemented using specific hardware circuitry and/or with on-board software in a memory 13 containing software control algorithms. The image sensor 12 may be monochrome or black-and-white, i.e. without color separation, or the image sensor 12 may be color sensitive. By way of example in Fig. 10, image frames 15 are used to serve pedestrian warning (PW) 20, lane departure warning (LDW) 21, forward collision warning (FCW) 22 based on object detection and tracking according to the teachings of U.S. Patent 7,113,867, forward collision warning based on image looming (FCWL) 209, and/or forward collision warning 601 based on an FCW trap (FCWT) 601. The image processor 14 is used to process the image frames 15 to detect looming of an image in the forward field of view of camera 12 for the forward collision warning 209 based on image looming and for FCWT 601. The forward collision warning 209 based on image looming and the forward collision warning based on a trap (FCWT) 601 may be performed in parallel with the conventional FCW 22, and in parallel with the other driver assistance functions: pedestrian detection (PW) 20, lane departure warning (LDW) 21, traffic sign detection, and ego-motion detection. FCWT 601 may be used to validate the conventional signal from FCW 22. The term "FCW signal" as used herein refers to a forward collision warning signal. The terms "FCW signal", "forward collision warning", and "warning" are used herein interchangeably.
An example of optical flow or looming is shown in Figs. 1a and 1b, according to a feature of the present invention. Two images captured from the forward-looking camera 12 mounted inside vehicle 18 are shown as vehicle 18 approaches a metal guardrail 30. The image in Fig. 1a shows the field of view and the guardrail 30. The image in Fig. 1b shows the same features with vehicle 18 closer to the guardrail 30. If the small rectangle p 32 on the guardrail (indicated by a dotted line) is observed, it may be seen in Fig. 1b that the horizontal lines 34 appear to spread out as vehicle 18 approaches the guardrail 30.
Reference is now made to Fig. 2a, which shows a method 201 for providing a forward collision warning 209 (FCWL 209) using camera 12 mounted in host vehicle 18, according to a feature of the present invention. Method 201 does not depend on object recognition of an object in the forward field of view of vehicle 18. In step 203, multiple image frames 15 are acquired by camera 12. The time interval between the capture of image frames is Δt. A patch 32 in an image frame 15 is selected in step 205, and the relative scale (S) of patch 32 is determined in step 207. In step 209, the time to collision (TTC) is determined based on the relative scale (S) and the time interval (Δt) between frames 15.
Reference is now made to Fig. 2b, which shows further details of the time-to-collision determination step 209 shown in Fig. 2a, according to a feature of the present invention. In step 211, multiple image points in patch 32 may be tracked between image frames 15. In step 213, the image points may be fitted to one or more models. A first model may be a vertical surface model, which may include objects such as a pedestrian, a vehicle, a wall, bushes, a tree, or a lamppost. A second model may be a road surface model, which considers the features of image points on the road surface. A mixed model may include one or more image points from the road and one or more image points from an upright object. Multiple time-to-collision (TTC) values may be computed for models that assume at least a portion of the image points belong to an upright object. In step 215, the best fit of the image points to the road surface model, the vertical surface model, or the mixed model enables selection of a time-to-collision (TTC) value. A warning may be issued based on the time to collision (TTC) being less than a threshold and when the best-fit model is the vertical surface model or the mixed model.
Alternatively, step 213 may also include detection of a candidate image in image frame 15. The candidate image may be a pedestrian, or a vertical line of a vertical object such as a lamppost. In the case of a pedestrian or a vertical line, patch 32 may be selected to include the candidate image. Once patch 32 has been selected, it is possible to verify that the candidate image is the image of an upright pedestrian and/or a vertical line. The verification may confirm that, when the best-fit model is the vertical surface model, the candidate image is not an object in the road surface.
Referring back to Figs. 1a and 1b, the sub-pixel alignment of patch 32 from the first image shown in Fig. 1a to the second image shown in Fig. 1b may yield an increase in size of 8%, i.e. a relative scale increase of 8% (S = 1.08) (step 207). Assuming a time difference of Δt = 0.5 seconds between the images, the time to collision (TTC) can be computed (step 209) using equation 1:

TTC = Δt / (S − 1) = 0.5 / 0.08 = 6.25 s     (1)
If the speed v of vehicle 18 is known (v = 4.8 m/s), the distance Z to the target can also be computed using equation 2:

Z = v × TTC = v × Δt / (S − 1) = 4.8 × 6.25 = 30 m     (2)
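The relative-scale arithmetic above can be sketched as follows. This is a minimal illustration under the stated values (S = 1.08, Δt = 0.5 s, v = 4.8 m/s), not the patent's implementation:

```python
def time_to_collision(scale, dt):
    """TTC from the relative scale S between two frames captured
    dt seconds apart (valid for S > 1, i.e. an approaching object)."""
    return dt / (scale - 1.0)

def range_to_target(speed, ttc):
    """Distance Z to the target, assuming constant host-vehicle speed."""
    return speed * ttc

S = 1.08   # 8% growth of the patch between frames
dt = 0.5   # seconds between the two image frames
v = 4.8    # host-vehicle speed in m/s

ttc = time_to_collision(S, dt)   # about 6.25 s
Z = range_to_target(v, ttc)      # about 30 m
print(ttc, Z)
```

Note that the same scale change measured over a shorter frame interval implies a proportionally shorter time to collision.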
Figs. 3b and 3d show, according to a feature of the present invention, the vertical motion δy of points as a function of vertical image position (y). The vertical motion δy is zero at the horizon and takes negative values below the horizon. The vertical motion δy of a point is given by equation 3:

δy = (S − 1)(y − y0) = (Δt / TTC)(y − y0)     (3)

where y0 is the vertical image position of the horizon. Equation (3) is a linear model relating y and δy and effectively has two variables. Two points can be used to solve for the two variables.
For a vertical surface, all the points are equidistant, so in the image shown in Fig. 3b the motion is zero at the horizon (y0) and changes linearly with image position. For the road surface, lower points in the image are closer (Z is smaller), as shown by equation 4:

Z = f H / (y0 − y)     (4)

where f is the focal length of camera 12 and H is the height of the camera above the road.
Therefore, the image motion δy of road points does not merely grow at a linear rate, as shown in the graph of Fig. 3d; it is given by equation 5:

δy = −(v Δt / (f H)) (y0 − y)²     (5)

Equation (5) is a quadratic constraint that effectively has two variables.
Again, two points can be used to solve for the two variables.
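The two-point solutions for the linear (upright surface) and quadratic (road surface) models can be sketched numerically. This is a minimal illustration, not the patent's implementation; the coordinate origin and signs are illustrative assumptions (the quadratic fit works on magnitudes regardless of the axis convention):

```python
import math

def fit_vertical_model(p1, p2, dt):
    """Linear model dy = a*(y - y0); a = dt/TTC is the slope.
    Returns (y0, ttc) from two tracked points (y, dy)."""
    (y1, d1), (y2, d2) = p1, p2
    a = (d1 - d2) / (y1 - y2)
    y0 = y1 - d1 / a
    return y0, dt / a

def fit_road_model(p1, p2):
    """Quadratic model dy = b*(y - y0)**2, solved from two points
    via the ratio sqrt(d1/d2) = (y1 - y0)/(y2 - y0)."""
    (y1, d1), (y2, d2) = p1, p2
    r = math.sqrt(d1 / d2)
    y0 = (y1 - r * y2) / (1 - r)
    b = d1 / (y1 - y0) ** 2
    return y0, b

# Synthetic upright-surface example: horizon at y0 = 100, TTC = 5 s, dt = 0.1 s.
dt, y0_true, ttc_true = 0.1, 100.0, 5.0
pts = [(y, (dt / ttc_true) * (y - y0_true)) for y in (150.0, 250.0)]
y0_est, ttc_est = fit_vertical_model(pts[0], pts[1], dt)
print(y0_est, ttc_est)   # recovers approximately 100.0 and 5.0
```

The robust fitting stage described later draws many such two-point (and three-point) samples and scores each resulting model against the remaining tracked points.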
Reference is now made to Figs. 3a and 3c, which represent different image frames 15. In Figs. 3a and 3c, two rectangular regions are shown by dotted lines. Fig. 3a shows an upright surface (the back of a van). The square points are points that were tracked (step 211) whose motion matches the motion model for an upright surface, shown in Fig. 3b as the image motion (δy) against the height y of the point (step 213). The motion of the triangular points in Fig. 3a does not match the motion model for an upright surface. Reference is now made to Fig. 3c, which shows a rectangular region that is mainly road surface. The square points are points whose image motion (δy) against point height y matches the road surface model shown in Fig. 3d. The motion of the triangular points does not match the motion model for the road surface; they are outliers. In general, therefore, the task here is to determine which points belong to which model (if any) and which points are outliers, which may be performed by a robust fitting method as explained below.
Reference is now made to Figs. 4a, 4b, 4c, and 4d, which show a typical situation of a mixture of two motion models found in an image, according to a feature of the present invention. Fig. 4a shows an image frame 15 including the image of a guardrail 30 with horizontal lines 34 and a rectangular patch 32a. Further details of patch 32a are shown in Figs. 4b and 4c. Fig. 4b shows the detail of patch 32a in a previous image frame 15, and Fig. 4c shows the detail of patch 32a in a subsequent image frame 15 when vehicle 18 is closer to guardrail 30. In Figs. 4b and 4c, some image points are shown as squares, triangles, and circles on the vertical obstacle 30, and some image points are shown on the road surface in front of obstacle 30. The tracked points in rectangular region 32a show that some points in the lower part of region 32a correspond to the road model, and some points in the upper part of region 32a correspond to the upright surface model. Fig. 4d shows a graph of the vertical motion (δy) of the points against vertical point position (y). In Fig. 4d, the recovered model shown has two parts: a curved (parabolic) part 38a and a linear part 38b. The transition point between parts 38a and 38b corresponds to the bottom of upright surface 30. The transition point is also marked by a horizontal dotted line 36 in Fig. 4c. In Figs. 4b and 4c, some points, shown by triangles, were tracked but do not match the models; points that were tracked and match a model are shown by squares; and points that were not tracked well are shown as circles.
Reference is now made to Fig. 5, which shows another example of looming in an image frame 15. In the image frame 15 of Fig. 5 there is no upright surface in patch 32b, only a clear road ahead, and the transition point between the two models is at the horizon, marked by dotted line 50.
Estimation of the motion model and time to collision (TTC)
The estimation of the motion model and the time to collision (TTC) (step 215) assumes that a region 32 is given, such as a rectangular region in image frame 15. Examples of rectangular regions are the rectangles 32a and 32b shown in Figs. 3 and 5. The rectangles may be selected based on a detected object, such as a pedestrian, or based on the motion of host vehicle 18.
1. Tracking points (step 211):
(a) The rectangular region 32 may be subdivided into a 5x20 grid of sub-rectangles.
(b) For each sub-rectangle, an algorithm may be performed to find image corner points, for example using the method of Harris and Stephens, and the points may be tracked. It is preferable to use 5x5 Harris points, considering the eigenvalues of the matrix

M = Σ [ Ix²     Ix·Iy
        Ix·Iy   Iy²  ]     (6)

(summed over the patch, where Ix and Iy are the image gradients), and searching for two strong eigenvalues.
(c) Tracking may be performed by exhaustive search for the best sum-of-squared-differences (SSD) match in a rectangular search region of width W and height H. At the start, the exhaustive search is important, since it implies that no prior motion is assumed and the measurements from all the sub-rectangles are statistically more independent. The search is followed by fine-tuning using optical flow estimation, for example with the Lucas-Kanade method, which allows for sub-pixel motion.
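The integer-pixel exhaustive SSD search stage can be sketched as follows. This is a toy illustration on plain lists; a real system would use Harris corners and a Lucas-Kanade sub-pixel refinement afterwards, neither of which is shown here:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-size 2-D patches."""
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def exhaustive_match(template, image, W, H):
    """Slide the template over a W x H search region of `image` and
    return the (row, col) offset with the smallest SSD."""
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(min(H, len(image) - th + 1)):
        for c in range(min(W, len(image[0]) - tw + 1)):
            window = [row[c:c + tw] for row in image[r:r + th]]
            s = ssd(template, window)
            if s < best:
                best, best_pos = s, (r, c)
    return best_pos

# Toy example: a 2x2 template hidden at offset (1, 2) in a 4x5 image.
image = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 8, 0],
    [0, 0, 7, 6, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 8], [7, 6]]
print(exhaustive_match(template, image, W=4, H=3))  # (1, 2)
```

Because no prior motion is assumed, every offset in the W x H region is scored; the sub-pixel refinement then only needs to correct a fraction of a pixel.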
2. Robust model fitting (step 213):
(a) Pairs or triplets of points are picked at random from the 100 tracked points.
(b) The number of pairs selected (N_pairs) depends on the vehicle speed v, and is given, for example, by equation 7:

N_pairs = min(40, max(5, 50 − v))     (7)

where v is in meters per second. The number of triplets (N_triplets) is given by equation 8:

N_triplets = 50 − N_pairs     (8)
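Equations 7 and 8 above can be evaluated directly; a minimal sketch:

```python
def sample_counts(v):
    """Number of two-point and three-point random samples as a
    function of host-vehicle speed v in m/s (equations 7 and 8)."""
    n_pairs = min(40, max(5, 50 - v))
    n_triplets = 50 - n_pairs
    return n_pairs, n_triplets

for v in (5, 20, 47):
    print(v, sample_counts(v))
# Higher speed yields fewer pairs and more triplets.
```

The total number of random samples per frame stays fixed at 50; only the split between pairs and triplets changes with speed.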
(c) Each pair of points may be fitted to two models (step 213). One model assumes both points are on an upright object. The second model assumes both points are on the road.
(d) Each triplet of points may also be fitted to two models. One model assumes the upper two points are on an upright object and the third (lowest) point is on the road. The second model assumes the uppermost point is on an upright object and the two points below it are on the road.
The two models can be solved for the three points by first solving the first model (equation 3) using two points, and then solving the second model (equation 5) with the resulting y0 and the third point.
(e) Each model in (d) gives a TTC value (step 215). Each model also gets a score based on how well the other 98 points fit the model. The score is given by the sum of the clipped square of the distance (SCSD) between the y motion of the points and the predicted model motion. The SCSD value is converted to a probability-like function:

score = exp(−SCSD / N)     (10)

where N is the number of points (N = 98).
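The SCSD scoring step can be sketched as follows. The clipping threshold and the exponential conversion are assumptions here (the text describes the clipping only verbally):

```python
import math

def scsd(dys, predicted, clip=1.0):
    """Sum of the Clipped Square of the Distance between observed
    vertical motions and the motions predicted by a model.
    `clip` is an assumed per-point saturation threshold."""
    return sum(min((d - p) ** 2, clip ** 2) for d, p in zip(dys, predicted))

def model_score(dys, predicted, clip=1.0):
    """Convert SCSD into a probability-like score in (0, 1],
    normalised by the number of remaining points N."""
    n = len(dys)
    return math.exp(-scsd(dys, predicted, clip) / n)

observed  = [0.10, 0.20, 0.30, 5.00]   # last point is an outlier
predicted = [0.10, 0.20, 0.30, 0.40]   # model agrees on the inliers
print(model_score(observed, predicted))
```

Clipping bounds the influence of any single outlier at clip squared, so one badly tracked point cannot veto an otherwise well-fitting model.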
(f) Based on the TTC value, the speed of vehicle 18, and the assumption that the points are on static objects, the distance to the points can be computed: Z = v × TTC. From the x image coordinate of each image point and its distance, the lateral position in world coordinates can be computed:

X = x Z / f     (11)

(g) The lateral position at time TTC is thus computed. A binary lateral score requires that at least one of the points of the pair or triplet be in the path of vehicle 18.
3. Multiframe scores: A new model can be generated in each frame 15, each new model with its associated TTC and score. The 200 best (highest-scoring) models from the previous 4 frames 15 can be retained, with the scores weighted as follows:
Score(n) = α^n · Score    (12)
where n = 0..3 is the age of the score and α = 0.95.
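Equation (12) above is a simple exponential age decay; `weighted_score` is an illustrative name for this sketch.

```python
def weighted_score(score, age, alpha=0.95):
    """Equation (12): decay a retained model's score by alpha**age,
    where age n = 0..3 counts how many frames ago the model was
    created, so older models count for less in the multiframe pool."""
    return (alpha ** age) * score
```

With α = 0.95, a model from 3 frames ago keeps about 86% of its original score, so fresh evidence dominates without older models being discarded outright.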
4. FCW decision: A true FCW warning is issued if any one of the following three conditions occurs:
(a) the TTC of the model with the highest score is below the TTC threshold and the score is greater than 0.75, and
(b) the TTC of the model with the highest score is below the TTC threshold and
(c)
Figs. 3 and 4 have shown how to robustly provide an FCW warning for the points in a given rectangle 32. How the rectangle is defined depends on the application, as shown by the other example features of Figs. 7a-7d and 8a, 8b.
FCW trap for general stationary objects
Reference is now made to Fig. 6, which shows a method 601 for providing a forward collision warning trap (FCWT) 601, according to a feature of the present invention. In step 203, multiple image frames 15 are acquired by camera 12. In step 605, a patch 32 is selected in an image frame 15, the patch corresponding to where the motor vehicle 18 will be located after a predetermined time interval. The patch 32 is then monitored in step 607. In decision step 609, if a general object is imaged in patch 32 and detected therein, a forward collision warning is issued in step 611. Otherwise the capture of image frames continues with step 203.
Figs. 7a and 7b show an example of an FCWT 601 warning triggered by a wall 70, according to an example feature of the present invention; Fig. 7d shows an example of a warning triggered by the side of an automobile 72; and Fig. 7c shows an example of a warning triggered by boxes 74a and 74b, according to an example feature of the present invention. Figs. 7a-7d are examples of general stationary objects for which no prior class-based detection is required. The dashed rectangular region is defined as a target W = 1 m wide at a certain distance, the distance at which the host vehicle will be located after t = 4 s:
Z = vt    (16)
where v is the speed of vehicle 18, H is the height of camera 12, and w and y are respectively the width of the rectangle and its vertical position in the image. The rectangular region is an example of an FCW trap. If an object "falls" into this rectangular region, the FCW trap can generate a warning if the TTC is below a threshold. Multiple traps are used to improve performance:
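The trap geometry above can be sketched with equation (16) and a standard pinhole projection. The focal length `f` and the pinhole formulas w = f·W/Z and y = f·H/Z are assumptions for this illustration, since the source does not reproduce the projection equations themselves.

```python
def trap_rectangle(v, f, cam_height, t=4.0, width_m=1.0):
    """Locate the FCW trap rectangle in the image. The host vehicle
    will be at distance Z = v*t (equation 16); under a pinhole model
    a target of width W at that distance projects to an image width
    w = f*W/Z, and the road at distance Z projects to a vertical
    offset y = f*H/Z below the horizon (H = camera height)."""
    z = v * t                    # distance reached after t seconds
    w = f * width_m / z          # trap width in pixels
    y = f * cam_height / z       # trap vertical position in pixels
    return z, w, y
```

For example, at v = 10 m/s with f = 800 pixels and a 1.2 m camera height, the trap sits at Z = 40 m, is 20 pixels wide, and lies 24 pixels below the horizon.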
To increase the detection rate, the FCW trap can be replicated into 5 regions with 50% overlap, producing a total trap zone 3 m wide.
The dynamic position of the FCW trap can be selected according to the yaw rate: the trap zone 32 can be laterally translated based on the path of vehicle 18 determined from the yaw rate sensor, the speed of vehicle 18, and a dynamic model of the host vehicle 18.
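The replicated and laterally translated trap bank can be sketched as follows; `trap_centers` and the representation of traps by their lateral centers in meters are assumptions for this illustration.

```python
def trap_centers(n=5, trap_w=1.0, overlap=0.5, lateral_shift=0.0):
    """Lateral centers (meters, relative to the predicted path) of n
    replicated FCW traps. With 50% overlap the step between traps is
    trap_w*(1-overlap), so five 1 m traps span 0.5*4 + 1 = 3 m, as
    described in the text. `lateral_shift` translates the whole bank,
    e.g. by the offset predicted from the yaw-rate-based path."""
    step = trap_w * (1.0 - overlap)
    half = (n - 1) / 2.0
    return [lateral_shift + (i - half) * step for i in range(n)]
```

The center trap follows the predicted path; the outer traps extend coverage to each side without gaps.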
FCW trap for validating the forward collision warning signal
Special classes of objects, such as vehicles and pedestrians, can be detected in image 15 using pattern recognition techniques. According to the teachings of US Patent 7113867, these objects can then be tracked over time and an FCW 22 signal can be generated using the change in scale. However, it is important to validate the FCW 22 signal using an independent technique before issuing a warning. Using an independent technique, for example applying method 209 (Fig. 2b), to validate the FCW 22 signal may be particularly important if system 16 is to activate the brakes. In a radar/vision fusion system the independent validation may come from the radar. In a vision-only system 16, the independent validation comes from an independent vision algorithm.
Detection of objects such as pedestrians and leading vehicles is not the problem. Very high detection rates can be achieved with only a very low false-positive rate. It is a feature of the present invention to generate a reliable FCW signal without too many false alarms, which would irritate the driver, or worse, lead the driver to brake unnecessarily. One possible problem with conventional pedestrian FCW systems is avoiding false forward collision warnings, because the number of pedestrians in the scene is large while the number of true forward collision situations is very small. Even a 5% false-positive rate would mean that the driver would likely receive frequent false alarms and might never experience a true warning.
Pedestrian targets are particularly challenging for FCW systems because the targets are non-rigid, which makes tracking difficult (according to the teachings of US Patent 7113867), and the scale change in particular is quite noisy. Therefore, the robust model (method 209) can be used to validate the forward collision warning on pedestrians. The rectangular zone 32 can be determined by pedestrian detection system 20. An FCW signal can be generated only if both the target tracking performed by FCW 22 according to US Patent 7113867 and the robust FCW (method 209) give a TTC smaller than one or more predetermined thresholds, which may or may not be the same. The forward collision warning FCW 22 may use a threshold different from the threshold used in the robust model (method 209).
One of the factors that may increase the number of false alarms is that pedestrians typically appear on less structured roads, where the driving pattern of the driver may be quite erratic, including sharp turns and lane changes. Therefore, the issuing of a warning may need to include some further constraints:
When a curb or lane marking is detected, the FCW signal is inhibited if the pedestrian is on the far side of the curb and/or lane marking and neither of the following conditions occurs:
1. The pedestrian is crossing the lane marking or curb (or approaching it very fast). In this respect, detecting the pedestrian's feet may be very important.
2. The host vehicle 18 is not crossing the lane marking or curb (for example, as detected by the LDW 21 system).
The driver's intentions are harder to predict. If the driver is driving straight, has not activated a turn signal, and is not straddling a lane marking, it is reasonable to assume that the driver will continue straight ahead. Thus, if a pedestrian is in the path and the TTC is below the threshold, an FCW signal can be issued. If, however, the driver is turning, it is just as likely that he/she will continue the turn or straighten out and continue ahead. Therefore, when a yaw rate is detected, the FCW signal is issued only if the pedestrian is in the path both when vehicle 18 is assumed to continue turning at the same yaw rate and when the vehicle is assumed to go straight.
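The dual-hypothesis gating described above can be sketched as follows. The function name `issue_fcw`, the half-width of the path corridor, and the small-angle circular-arc approximation of the turning path are all assumptions for this illustration.

```python
def issue_fcw(ped_x, ped_z, ttc, ttc_threshold, yaw_rate, v, half_width=1.0):
    """Gate the pedestrian FCW signal. With no measured yaw rate,
    warn when the pedestrian is in the straight-ahead path and the
    TTC is below threshold. With a yaw rate, warn only if the
    pedestrian is in the path under BOTH hypotheses: continuing the
    turn at the same yaw rate, and going straight."""
    if ttc >= ttc_threshold:
        return False

    def in_straight_path(x):
        return abs(x) <= half_width

    def in_turning_path(x, z):
        # constant yaw rate -> circular arc of radius R = v / yaw_rate;
        # lateral offset of the arc at distance z is approx z**2 / (2R)
        r = v / yaw_rate
        return abs(x - z * z / (2.0 * r)) <= half_width

    if abs(yaw_rate) < 1e-6:
        return in_straight_path(ped_x)
    return in_straight_path(ped_x) and in_turning_path(ped_x, ped_z)
```

Requiring both hypotheses to agree suppresses warnings during a turn unless the pedestrian is in danger whichever way the maneuver resolves.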
The concept of the FCW trap 601 can be extended to general objects containing vertical (or horizontal) lines. A possible problem in using point-based techniques with such objects is that good Harris (corner) points are often produced by the intersection of a vertical edge on the object with a horizontal line on the distant background. The vertical motion of such points will resemble that of the distant road surface.
Figs. 8a and 8b show examples of objects with prominent vertical lines 82: a lamppost 80 in Fig. 8b and a box 84 in Fig. 8a. Vertical lines 82 are detected in the trap zone 32. The detected lines 82 can be tracked between images. Robust estimation can be performed by matching lines 82 from frame to frame, computing the TTC model for each pair of lines under the assumption of an upright object, and then giving a score by the SCSD based on the other lines 82. Since the number of lines may be small, it is often possible to test all combinations of line pairs. Only line pairs with significant overlap are used. With horizontal lines, as in the case of points, triplets of lines also give two models.
As used herein, the indefinite articles "a" and "an", as in "an image" or "a rectangular region", have the meaning of "one or more", that is, "one or more images" or "one or more rectangular regions".
While selected features of the present invention have been shown and described, it is to be understood that the present invention is not limited to the described features. Rather, it is appreciated that changes may be made to these features without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.

Claims (17)

  1. A method of determining a forward collision expectancy, the method using a camera mountable in a motor vehicle, the method comprising:
    acquiring a plurality of image frames at known time intervals;
    selecting a patch in at least one of the image frames;
    tracking optical flow of a plurality of image points of the patch between the image frames to produce tracked optical flow;
    fitting at least a portion of the tracked optical flow of the image points to a plurality of models, to produce a plurality of fits to the plurality of models, wherein the plurality of models is selected from the group consisting of: (i) a road surface model, wherein a portion of the image points is modeled as being imaged from the road surface, (ii) a vertical surface model, wherein a portion of the image points is modeled as being imaged from a substantially vertical object, and (iii) a mixed model, wherein a first portion of the image points is modeled as being imaged from the road surface and a second portion of the image points is modeled as being imaged from a substantially vertical object;
    scoring the fits of the tracked optical flow to each of the models using at least a portion of the image points, to produce respective scores; and
    determining whether a collision is expected, and determining a time to collision, by selecting the model with the score corresponding to the best fit of the image points to the tracked optical flow.
  2. The method of claim 1, further comprising:
    determining that no collision is expected, based on a best fit of the image points to the tracked optical flow corresponding to the road surface model.
  3. The method of claim 1, further comprising:
    determining that a collision is expected, based on a best fit of the image points to the tracked optical flow corresponding to the vertical surface model or the mixed model.
  4. The method of claim 3, further comprising:
    detecting a candidate image of a pedestrian in the patch; and
    when the best-fit model is the vertical surface model, verifying that the candidate image is an image of an upright pedestrian and not an image of an object in the road surface.
  5. The method of claim 3, further comprising:
    detecting a vertical line in the image frames, wherein the patch is selected to include the vertical line; and
    when the best-fit model is the vertical surface model, verifying that the vertical line is an image of a vertical object and not an image of an object in the road surface.
  6. The method of claim 1, further comprising:
    issuing a warning based on the time to collision being less than a threshold.
  7. The method of claim 4, further comprising:
    when the best-fit model is the road surface model, verifying that the candidate image is of an object in the road surface and that no collision is expected.
  8. A system for determining a forward collision expectancy, comprising:
    a camera; and
    a processor mountable in a motor vehicle; wherein the system is configured to determine the forward collision expectancy,
    wherein the processor is configured to acquire a plurality of image frames at known time intervals;
    wherein the processor is configured to select a patch in at least one of the image frames;
    wherein the processor is configured to track, for a plurality of models, optical flow of a plurality of image points of the patch between the image frames, wherein the plurality of models is selected from the group consisting of: (i) a road surface model, wherein a portion of the image points is modeled as being imaged from the road surface, (ii) a vertical surface model, wherein a portion of the image points is modeled as being imaged from a substantially vertical object, and (iii) a mixed model, wherein a first portion of the image points is modeled as being imaged from the road surface and a second portion of the image points is modeled as being imaged from a substantially vertical object;
    wherein the processor is configured to fit at least a portion of the tracked optical flow of the image points, to produce a plurality of fits to the plurality of models;
    wherein the processor is configured to score the fits of the tracked optical flow to each of the models using at least a portion of the image points, to produce respective scores; and
    wherein the processor is configured to determine, using at least a portion of the image points, whether a collision is expected and to determine a time to collision, by selecting the model with the score corresponding to the best fit of the image points to the tracked optical flow.
  9. The system of claim 8, wherein the processor is configured to fit the image points to the road surface model;
    wherein the processor is configured to determine, based on the fit of the image points to the road surface model, that no collision is expected.
  10. A method of determining a forward collision expectancy, the method using a camera and a processor mountable in a motor vehicle, the method comprising:
    acquiring a plurality of image frames at known time intervals;
    selecting a patch in an image frame, the patch corresponding to where the motor vehicle will be located after a predetermined time interval;
    tracking optical flow of a plurality of image points of the patch between the image frames to produce tracked optical flow;
    fitting at least a portion of the tracked optical flow of the image points to a plurality of models, to produce a plurality of fits to the plurality of models, wherein the plurality of models is selected from the group consisting of: (i) a road surface model, wherein a portion of the image points is modeled as being imaged from the road surface, (ii) a vertical surface model, wherein a portion of the image points is modeled as being imaged from a substantially vertical object, and (iii) a mixed model, wherein a first portion of the image points is modeled as being imaged from the road surface and a second portion of the image points is modeled as being imaged from a substantially vertical object;
    scoring the fits of the tracked optical flow to each of the models using at least a portion of the image points, to produce respective scores; and
    determining whether a collision is expected, and determining a time to collision, by selecting the model with the score corresponding to the best fit of the image points to the tracked optical flow.
  11. The method of claim 10, further comprising:
    determining whether an object being imaged in the patch includes a substantially vertical portion.
  12. The method of claim 11, further comprising:
    fitting the image points to the road surface model; and
    determining that no collision is expected, based on a best fit of the image points to the road surface model.
  13. The method of claim 11, further comprising:
    issuing a forward collision warning when the best-fit model is the vertical surface model or the mixed model.
  14. The method of claim 10, further comprising:
    inputting, or computing from the image frames, a yaw rate of the motor vehicle; and
    dynamically translating the patch laterally on the image frames, based on the yaw rate of the motor vehicle.
  15. A system for determining a forward collision expectancy in a motor vehicle, the system comprising:
    a camera mountable in the motor vehicle, the camera operable to acquire a plurality of image frames at known time intervals; and
    a processor configured to select a patch in an image frame, the patch corresponding to where the motor vehicle will be located after a predetermined time interval; wherein the processor is configured to track, for a plurality of models, optical flow of a plurality of image points of the patch between the image frames to produce tracked optical flow, wherein the plurality of models is selected from the group consisting of: (i) a road surface model, wherein a portion of the image points is modeled as being imaged from the road surface, (ii) a vertical surface model, wherein a portion of the image points is modeled as being imaged from a substantially vertical object, and (iii) a mixed model, wherein a first portion of the image points is modeled as being imaged from the road surface and a second portion of the image points is modeled as being imaged from a substantially vertical object,
    wherein the processor is configured to fit at least a portion of the tracked optical flow of the image points, to produce a plurality of fits to the plurality of models;
    wherein the processor is configured to score the fits of the tracked optical flow to each of the models using at least a portion of the image points, to produce respective scores; and
    wherein the processor is configured to determine, by selecting the model with the score corresponding to the best fit of the image points to the tracked optical flow, whether a collision is expected, and to determine a time to collision.
  16. The system of claim 15, wherein the processor is further configured to determine whether an object being imaged in the patch includes a substantially vertical portion, when the best fit of the image points to the tracked optical flow is the best fit to the vertical surface model or the mixed model.
  17. The system of claim 15, wherein the processor is configured to issue a forward collision warning based on the TTC being less than a threshold.
CN201110404574.1A 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians Active CN102542256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710344179.6A CN107423675B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US42040510P 2010-12-07 2010-12-07
US61/420,405 2010-12-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201710344179.6A Division CN107423675B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Publications (2)

Publication Number Publication Date
CN102542256A CN102542256A (en) 2012-07-04
CN102542256B true CN102542256B (en) 2017-05-31

Family

ID=46349111

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710344179.6A Active CN107423675B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians
CN201110404574.1A Active CN102542256B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201710344179.6A Active CN107423675B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Country Status (1)

Country Link
CN (2) CN107423675B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5877897A (en) 1993-02-26 1999-03-02 Donnelly Corporation Automatic rearview mirror, vehicle lighting control and vehicle interior monitoring system using a photosensor array
US6822563B2 (en) 1997-09-22 2004-11-23 Donnelly Corporation Vehicle imaging system with accessory control
US7655894B2 (en) 1996-03-25 2010-02-02 Donnelly Corporation Vehicular image sensing system
EP1504276B1 (en) 2002-05-03 2012-08-08 Donnelly Corporation Object detection system for vehicle
US7526103B2 (en) 2004-04-15 2009-04-28 Donnelly Corporation Imaging system for vehicle
WO2008024639A2 (en) 2006-08-11 2008-02-28 Donnelly Corporation Automatic headlamp control system
DE102013213812A1 (en) * 2013-07-15 2015-01-15 Volkswagen Aktiengesellschaft Device and method for displaying a traffic situation in a vehicle
US10380434B2 (en) * 2014-01-17 2019-08-13 Kpit Technologies Ltd. Vehicle detection system and method
CN109716255A (en) * 2016-09-18 2019-05-03 深圳市大疆创新科技有限公司 For operating movable object with the method and system of avoiding barrier

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7113867B1 (en) * 2000-11-26 2006-09-26 Mobileye Technologies Limited System and method for detecting obstacles to vehicle motion and determining time to contact therewith using sequences of images

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3515926B2 (en) * 1999-06-23 2004-04-05 本田技研工業株式会社 Vehicle periphery monitoring device
US7136506B2 (en) * 2003-03-03 2006-11-14 Lockheed Martin Corporation Correlation based in frame video tracker
US7089114B1 (en) * 2003-07-03 2006-08-08 Baojia Huang Vehicle collision avoidance system and method
JP2005226670A (en) * 2004-02-10 2005-08-25 Toyota Motor Corp Deceleration control device for vehicle
EP1741079B1 (en) * 2004-04-08 2008-05-21 Mobileye Technologies Limited Collision warning system
JP4304517B2 (en) * 2005-11-09 2009-07-29 トヨタ自動車株式会社 Object detection device
EP1837803A3 (en) * 2006-03-24 2008-05-14 MobilEye Technologies, Ltd. Headlight, taillight and streetlight detection
CN101261681B (en) * 2008-03-31 2011-07-20 北京中星微电子有限公司 Road image extraction method and device in intelligent video monitoring
US8050459B2 (en) * 2008-07-25 2011-11-01 GM Global Technology Operations LLC System and method for detecting pedestrians
US8812226B2 (en) * 2009-01-26 2014-08-19 GM Global Technology Operations LLC Multiobject fusion module for collision preparation system


Also Published As

Publication number Publication date
CN107423675A (en) 2017-12-01
CN102542256A (en) 2012-07-04
CN107423675B (en) 2021-07-16


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: WUBISHI VISUAL TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: MOBILEYE TECHNOLOGIES LTD.

Effective date: 20141120

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20141120

Address after: Israel Jerusalem

Applicant after: MOBILEYE TECHNOLOGIES LTD.

Address before: Cyprus Nicosia

Applicant before: Mobileye Technologies Ltd.

GR01 Patent grant
GR01 Patent grant