CN101393603B - Method for recognizing and detecting tunnel fire disaster flame

Method for recognizing and detecting tunnel fire disaster flame

Info

Publication number
CN101393603B (grant), CN101393603A (application)
Authority
CN (China)
Prior art keywords
connected region, pixel, flame, value, color
Legal status
Active (granted)
Application number
CN 200810121371
Other languages
Chinese (zh)
Inventors
谢迪, 廖胜辉, 童若峰
Assignee (current and original)
Zhejiang University (ZJU)
Priority / filing date
2008-10-09
Publication of application CN101393603A
2009-03-25
Grant and publication of CN101393603B
2012-01-04


Abstract

The invention relates to a method for identifying and detecting flame in a tunnel fire. The method comprises the following steps: preprocessing the input video stream to remove illumination effects; performing motion detection on the video stream to obtain moving pixels; performing color detection on the video stream to obtain pixels with flame-like color; searching for the connected regions formed by mutually connected pixels that share the same characteristics; calculating the perimeter and area of each connected region to perform shape analysis; and performing area-change analysis on each connected region to finally judge whether a fire has occurred. Based on an image-processing fire detection system, the method exploits the color, shape and flicker characteristics of flame images, together with the way their position, area and brightness change over time, to correctly identify flame information in an image sequence containing background noise. It can therefore separate real flames from flickering vehicle lights in a tunnel, greatly reduce the false alarm rate, and achieve the aim of fire monitoring.

Description

Method for recognizing and detecting tunnel fire disaster flame
Technical field
The present invention relates to a method for identifying and detecting tunnel fire flame. The method can distinguish real flames from flickering vehicle lights in a tunnel, thereby greatly reducing the false alarm rate. It can be applied to fire monitoring sites subject to strong light interference, such as tunnels and night-time fire monitoring.
Background technology
Fire monitoring methods based on image processing exploit the color, shape and flicker characteristics of flame images during the short time after a fire breaks out, together with the way their position, area and brightness change over time, to correctly identify flame information in an image sequence containing background noise and thereby achieve fire monitoring. Compared with traditional heat-sensing and smoke-sensing fire detection and alarm systems, a fire detection system based on image processing has the following advantages:
1. Because of the wide coverage of CCD cameras, a fire detection system based on image processing provides a feasible means of monitoring large scenes and large spaces.
2. Wide applicability. Because the CCD camera is isolated from the external environment, a fire detection system based on image processing can perform rapid detection at the early stage of a fire under conditions in which conventional fire detectors cannot operate normally, such as high temperature and heavy dust.
3. High accuracy and fast response. Whether a fire has occurred can be seen intuitively from the image, so once the system raises an alarm the operator only needs to switch to the alarmed area to confirm it directly on the monitor. Both its response speed and its accuracy are better than those of traditional fire monitoring systems.
4. Convenient for fire-cause investigation. The CCD camera records the whole course of the fire from its start, and this image information is stored directly in the storage devices of the control room. The footage can therefore be reviewed easily afterwards to analyze the cause of the fire.
5. Convenient for developing other computer-based functions.
Because of its wide coverage, a fire detection system based on image processing is particularly suitable for large-scale, large-space indoor and outdoor sites such as shopping malls, cinemas, tunnels, large workshops, hangars, large ships and cargo holds. Considering the closed-circuit television monitoring systems already installed in tunnels and high-rise buildings, and the high degree of hardware coupling between such systems and an image-processing fire monitoring system, both monitoring functions can be realized in a single hardware system.
At present, fire monitoring methods based on image processing mainly perform recognition and analysis from the aspects of color features, texture features, shape features and motion features.
Color is a salient attribute of an image. Compared with other features, color features are simple to compute and stable, and are insensitive to rotation, translation and scale changes, showing strong robustness. Color features include the color histogram, dominant color, mean brightness, etc.
Texture analysis has long been a research direction in computer vision; its methods can be roughly divided into statistical methods and structural methods. Statistical methods gather statistics on the spatial distribution of image color intensity, and can be further divided into traditional model-based statistical methods and spectral-analysis methods, such as Markov random field models and Fourier spectral features. Structural methods first assume that a texture pattern is composed of texture elements arranged according to certain rules, so texture analysis becomes the task of identifying these elements and quantitatively analyzing their spatial arrangement.
Shape analysis first segments the object from the background, and then compares shape similarity using measures such as circularity, rectangularity and moments. Shape features are invariant to translation, rotation and scaling, and shape representations can usually be divided into boundary-based and region-based types. Boundary-based shape features, such as Fourier descriptors, can describe complex boundaries with few parameters. Region-based shape features are commonly described with moment invariants. Because shape similarity comparison is still a very difficult problem, shape features are currently used less in the video processing field.
Motion features are important characteristics of a video shot, reflecting its temporal changes, and are also the main dynamic features a user can supply during video retrieval. Motion analysis methods include methods based on the optical flow equation, block-based methods, MPEG motion vectors, spatio-temporal slice patterns, two-dimensional parametric motion models, pixel-recursive methods, Bayesian methods, etc.
Summary of the invention
The object of the present invention is to provide a method for identifying and detecting tunnel fire flame. The method can distinguish real flames from flickering vehicle lights in a tunnel, thereby greatly reducing the false alarm rate.
The above object of the present invention is achieved by the following technical scheme:
1. A method for identifying and detecting tunnel fire flame, characterized by comprising the following steps:
1) Preprocessing the input video stream to remove illumination: for the black-and-white or color video pictures captured under various conditions in the tunnel by a camera installed at the top of the tunnel, first convert color images into grayscale images, then use a gamma transform to remove excess illumination, where the threshold of the gamma transform is determined dynamically by computing the maximum pixel gray value in the image;
2) Performing motion detection on the video stream to obtain moving pixels: apply a temporal-difference method with a fixed threshold to the illumination-preprocessed image obtained in step 1; first initialize a background image, then use the correlation between frames to update the background image and the foreground image according to the current frame;
3) Performing color detection on the video stream to obtain pixels with flame-like color: pixels with flame color are extracted from training videos and pictures, and their intensity values or RGB component values are analyzed; if the color value of the current pixel lies within the pixel range matching the flame color characteristics, the pixel is judged to be a flame-colored pixel and passes to the next stage of detection;
4) Searching for the connected regions formed by all mutually connected pixels that share the same characteristics: for the image after motion detection and color detection, the connected regions are searched; the connected-region search comprises two steps, region marking and region searching: first a mask method is used to mark the moving pixel regions, the flame-colored pixel regions and the pixel regions belonging to flame edges respectively, and then a breadth-first search (BFS) algorithm is used to search for the connected regions;
5) Calculating the perimeter and area of each connected region obtained and performing shape analysis: the shape analysis comprises extracting the boundary of each connected region using a depth-first search algorithm combined with morphological methods; calculating the perimeter of each connected-region boundary; calculating the area of each connected region; and calculating the circularity of each connected region;
6) Performing area-change analysis on each connected region and finally judging whether a fire has occurred: this step comprises marking the pixels belonging to flame edge regions; using the BFS algorithm to search for the connected regions formed by these pixels; establishing a data structure to store the connected regions found; matching the corresponding connected regions of the previous and current frames in first-come-first-served order; and calculating the area change of the corresponding connected regions to judge whether a fire has occurred.
The method for identifying and detecting tunnel fire flame of the present invention is based on an image-processing fire detection system. According to the color, shape and flicker characteristics of flame images, and the way their position, area and brightness change over time, it can correctly identify flame information in an image sequence containing background noise, distinguish real flames from flickering vehicle lights in a tunnel, greatly reduce the false alarm rate, and achieve the aim of fire monitoring.
Description of drawings
Fig. 1 is the flow chart of the method of the present invention.
Embodiment
The cameras distributed along each section inside and outside the tunnel convert the captured pictures into analog signals and transmit them over optical fiber to the CCTV controller in the control room. At the CCTV controller, the analog signal is converted into a digital signal; one part is sent to the computer monitor screens located in each control room, another part is sent to the digital monitoring host, and some of it is encoded (usually MPEG encoding) and stored in the DVR.
As shown in Fig. 1, the present invention passes the input video stream through preprocessing, motion detection, color detection, connected-region search, shape analysis and area-change analysis, and finally produces a comprehensive judgment result.
Each step is described in detail below:
1. Preprocessing
Because relatively enclosed spaces such as tunnels are strongly affected by illumination and by interference, using the original image directly for motion detection would produce unacceptable results and would in turn affect the subsequent operations. The influence of illumination therefore needs to be reduced as much as possible. The gamma transform (also called the power-law transform) is adopted for this purpose.
The basic form of the power-law transform is:
s = c·r^γ        (1)
where c and γ are positive constants. Sometimes an offset is considered (i.e., a measurable output when the input is 0), and equation (1) is then written as s = c·(r + ε)^γ. In any case, the offset is normally a matter of display calibration and is generally neglected in equation (1). For fractional values of γ, the power-law transform maps a narrow band of dark input values to a wider band of output values; the converse holds for high input values.
The method of the present invention first converts the color image (if any) into a grayscale image, then traverses the entire image to find the highest gray value g among all pixels, i.e.
g = max{g_1, g_2, ..., g_{M×N}}        (2)
The following variants of the power-law transform are then used:
s = c·(r − g)^γ        (3)
and
s = c·(r + L − 1 − g)^γ        (4)
where L is the number of gray levels of the grayscale image (usually 256).
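By way of illustration only, the following Python/NumPy sketch shows one possible implementation of this preprocessing step. The constant c, the exponent gamma, the choice of equation (4) rather than (3), and the final rescaling to [0, 255] are assumptions made for the example and are not prescribed by the description above.

import numpy as np
import cv2  # used only to convert color frames to grayscale

def remove_illumination(frame, c=1.0, gamma=0.5):
    # Illustrative illumination preprocessing: grayscale conversion followed by the
    # shifted power-law transform s = c * (r + L - 1 - g) ** gamma of equation (4),
    # where g is the maximum gray value found in the image.
    if frame.ndim == 3:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # color image -> grayscale
    gray = frame.astype(np.float64)
    L = 256                       # number of gray levels
    g = gray.max()                # dynamically determined "threshold" of the gamma transform
    s = c * np.power(gray + L - 1 - g, gamma)            # equation (4)
    # Rescale to [0, 255] so the result can be consumed by the later stages (assumption).
    s = 255.0 * (s - s.min()) / max(s.max() - s.min(), 1e-9)
    return s.astype(np.uint8)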
2. motion detection
After preprocessing, motion detection can be carried out on the output video frames. The purpose of motion detection is to make a preliminary distinction between moving vehicle lights and flames.
Use the method for time-domain difference to judge motion pixel and moving region.(x, the gray-scale value of the pixel on y) is designated as g being positioned at coordinate in the i+1 frame i(x, y), (x, y) background pixel value on the coordinate is designated as B in first frame 0(x, y).
Initial situation B 0(x, y)=g 0(x, y); For each frame, the next frame background pixel value of being predicted is upgraded according to present frame background pixel value and current actual pixel value afterwards:
B_{i+1}(x, y) = α·B_i(x, y) + (1 − α)·g_i(x, y),  if |g_i(x, y) − g_{i−1}(x, y)| < T
B_{i+1}(x, y) = B_i(x, y),  otherwise        (5)
where α is a proportionality coefficient representing the speed of background updating; its value is generally close to 1.
Finally, if the following inequality is satisfied, the pixel at coordinate (x, y) is considered to be a moving pixel:
|g_i(x, y) − B_i(x, y)| > T        (6)
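For illustration, a minimal sketch of this temporal-difference motion detection is given below. The values of alpha and T are placeholders, since the description only states that alpha is close to 1 and that T is a fixed threshold.

import numpy as np

def detect_motion(prev_gray, cur_gray, background, alpha=0.95, T=20):
    # Sketch of equations (5) and (6): update the background where the frame-to-frame
    # change is small, then flag pixels that deviate from the background by more than T.
    prev_gray = prev_gray.astype(np.float64)
    cur_gray = cur_gray.astype(np.float64)
    stable = np.abs(cur_gray - prev_gray) < T            # condition of equation (5)
    new_background = np.where(stable,
                              alpha * background + (1 - alpha) * cur_gray,
                              background)
    motion_mask = (np.abs(cur_gray - new_background) > T).astype(np.uint8)  # equation (6)
    return motion_mask, new_background

The background array would be initialized from the first frame (B_0 = g_0) and carried from frame to frame.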
3. color detection
Pixels with flame color are extracted from training videos and pictures, and their intensity values (for black-and-white pictures or video) or RGB component values (for color pictures or video) are analyzed. Denote the color value of the current pixel as I_R, I_G, I_B (or the pixel intensity I_g for black-and-white). If the following conditions are satisfied, the pixel is judged to be a flame-colored pixel and passes to the next stage of detection:
L_R1 < I_R < L_R2,  L_G1 < I_G < L_G2,  L_B1 < I_B < L_B2  (or L_g1 < I_g < L_g2 for the grayscale case)
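A minimal sketch of this color test follows, assuming a BGR color frame as produced by common capture libraries. The threshold ranges are placeholders standing in for the limits L_R1...L_B2 that would be learned from training videos and pictures.

import numpy as np

def detect_flame_color(frame_bgr, thresholds=((160, 255), (60, 220), (0, 140))):
    # Keep a pixel when each of its R, G and B components lies inside the
    # corresponding (lower, upper) range; the ranges here are illustrative only.
    (r_lo, r_hi), (g_lo, g_hi), (b_lo, b_hi) = thresholds
    b, g, r = frame_bgr[..., 0], frame_bgr[..., 1], frame_bgr[..., 2]
    color_mask = ((r > r_lo) & (r < r_hi) &
                  (g > g_lo) & (g < g_hi) &
                  (b > b_lo) & (b < b_hi))
    return color_mask.astype(np.uint8)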
4. connected region search
After the moving pixels and the flame-colored pixels have been marked, a method called masking is used to mark the regions.
First, three masks of the same size as the original frame are initialized (all positions set to 0); these three masks are used respectively for motion detection, color detection and the area-change analysis described later. After the motion detection step, the positions in the motion mask corresponding to all moving pixels in the current frame are set to 1; likewise, after the color detection step, the positions in the color mask corresponding to all pixels in the current frame that match the flame color characteristics are set to 1.
For the motion mask, the breadth-first search algorithm is then used to search for connected components.
The conditions for applying the breadth-first search algorithm are:
1. There is a set of concrete states, each state being one of the situations that may arise in the problem; the state space formed by all states is finite, and the problem scale is small.
2. In the course of solving the problem, one state can be transformed into one or several other states according to the given conditions of the problem.
3. The legality of a state can be judged, and there are one or more well-defined goal states.
4. The problem to be solved is: starting from the given initial state, to find a goal state; or, given the initial state and the final state, to find a path from the initial state to the final state.
First, a queue data structure is constructed; an arbitrary position marked as a moving pixel in the mask is chosen as the search starting point, its coordinate is recorded, and it enters the queue. Then, with the current point as the base point, its 8 neighboring pixels are searched; any of these 8 neighbors that are marked as moving pixels are enqueued in order (their coordinates likewise recorded), and at the same time the values at the corresponding positions in the mask are marked as processed in the order in which they were searched. When no point satisfying this condition can be found, the search stops.
Denote the set of all pixels in a frame as V, and the set of all pixels marked as moving in the same frame as V_m, with V_m ⊆ V. The queue used to store moving pixels is denoted Q.
Initial state: Q = ∅, V_m = {v_m^1, v_m^2, ..., v_m^k}, 0 < k ≤ M×N.
Step 1: take any i ∈ {1, 2, ..., k}; then V_m = V_m − {v_m^i}, Q = Q + {v_m^i};
Step 2: if there exists v_m^j ∈ V_m with v_m^j ∈ N_8(v_m^i), then V_m = V_m − {v_m^j}, Q = Q + {v_m^j};
Step 3: if for every v_m^j ∈ N_8(v_m^i) we have v_m^j ∉ V_m and v_m^j ∈ Q, then Q = Q − {v_m^i}.
Thereafter, a pixel v_m^i is taken out of Q in FIFO (first-in-first-out) order each time and steps 1 to 3 are repeated, until the following termination condition is satisfied: V_m = ∅ and Q = ∅.
The connected-component search for the color mask is carried out in the same way.
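The following sketch illustrates the breadth-first connected-component search applied to such a binary mask. The output layout (a list of pixel-coordinate lists, one per connected region) is a choice made for the example, not something fixed by the description.

from collections import deque
import numpy as np

def find_connected_regions(mask):
    # Breadth-first search over a binary mask: every unvisited marked pixel seeds a new
    # region, and its 8-neighbors that are also marked are enqueued in FIFO order.
    visited = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    neighbors8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    regions = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                queue = deque([(y, x)])
                visited[y, x] = True
                region = []
                while queue:
                    cy, cx = queue.popleft()
                    region.append((cy, cx))
                    for dy, dx in neighbors8:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                regions.append(region)
    return regions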
5. shape analysis
(1) Extract the boundary of each connected region:
The depth-first search method is used to extract the boundary of a connected region.
Denote the set of boundary points of a connected region as E, with E ⊆ V; the set of all boundary pixels is denoted V_e.
Initial state: E = ∅; a pixel v_i belonging to the boundary of the connected region is given.
Step 1: E = E + {v_i}, V_e = V_e − {v_i};
Step 2: for every v_j ∈ N_8(v_i), if there exists v_k ∈ N_4(v_j) with v_k ∉ V_m, then E = E + {v_j}, V_e = V_e − {v_j}.
These two steps are then repeated until the following termination condition is satisfied: V_e = ∅.
(2) Calculate the perimeter of each connected-region boundary:
In step (1), a recursive algorithm is used for the depth-first search of the connected region; therefore, each time the recursion tree grows by one level, the variable storing the connected-region perimeter is incremented by 1. When the recursion finishes, the value obtained is exactly the perimeter of the connected region.
(3) Calculate the area of each connected region:
During the breadth-first search of a connected region, a queue is used to store the pixels to be processed; therefore, each time a pixel enters the queue, the variable storing the connected-region area is incremented by 1. When the termination condition is satisfied, the value obtained is exactly the area of the connected region.
(4) Calculate the circularity of each connected region:
Denote the calculated perimeter of a connected region as C and its area as S; the circularity can then be calculated as:
D_circle = C² / (4πS)        (7)
The closer the calculated circularity is to 1, the more regular the shape of the connected region, and the lower the probability that it is a flame.
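As an illustration, the sketch below computes the area, an approximate perimeter and the circularity of equation (7) for one connected region returned by the search above. It counts boundary pixels directly (pixels with at least one 4-neighbor outside the region) instead of the recursive depth-first boundary tracing combined with morphological operations described in step (1), which is a simplification.

import math

def shape_analysis(region, image_shape):
    # region: list of (y, x) pixels of one connected region; image_shape: (height, width).
    region_set = set(region)
    h, w = image_shape
    area = len(region)                      # one count per pixel that entered the queue
    perimeter = 0
    for (y, x) in region:
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-neighborhood
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w) or (ny, nx) not in region_set:
                perimeter += 1              # this pixel lies on the boundary
                break
    circularity = perimeter ** 2 / (4.0 * math.pi * area)   # equation (7)
    return perimeter, area, circularity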
6. Area-change analysis
(1) Mark the pixels belonging to the flame edge region:
The mask method is again used to mark the pixels belonging to the flame edge region. If a pixel's intensity value is less than a pre-specified intensity value P (P represents the intensity of the flame core; a value below P indicates that the pixel belongs to the edge of the flame), the corresponding position in the mask is marked as 1; otherwise it is marked as 0.
(2) Search for the connected regions formed by these pixels:
The BFS algorithm is used to search for each connected region; the process is the same as in section 4 and is not repeated here.
(3) Establish a data structure to store the connected regions found:
An array of structures is used to store the connected regions found; its fields store, respectively, the index of the connected region, the length and width of the bounding rectangle of the connected region, the coordinate of the lower-left vertex of the bounding rectangle, the area of the connected region itself, and the number of times the current connected region has been judged to be a flame region.
One point is explained here: if the current connected region is judged to be a flame region in a single frame, this method does not immediately conclude that the region is a flame region; only when the number of consecutive times the same connected region is judged to be a flame region exceeds a threshold is the region considered to be a flame region.
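A minimal sketch of one entry of such a structure array and of the consecutive-frame rule is given below. The field names and the value of FLAME_FRAME_THRESHOLD are illustrative assumptions; the description does not specify the threshold value.

from dataclasses import dataclass

FLAME_FRAME_THRESHOLD = 5   # illustrative value only

@dataclass
class RegionRecord:
    index: int              # index of the connected region
    rect_w: int             # width of the bounding rectangle
    rect_h: int             # length (height) of the bounding rectangle
    rect_x: int             # lower-left vertex x of the bounding rectangle
    rect_y: int             # lower-left vertex y of the bounding rectangle
    area: int               # area of the connected region itself
    flame_count: int = 0    # consecutive times this region was judged a flame region

def update_flame_status(record, judged_flame_this_frame):
    # Report a fire only after the same region has been judged a flame region
    # in more than FLAME_FRAME_THRESHOLD consecutive frames.
    record.flame_count = record.flame_count + 1 if judged_flame_this_frame else 0
    return record.flame_count > FLAME_FRAME_THRESHOLD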
(4) Match the corresponding connected regions of the previous and current frames:
Denote the set of all connected regions in the current video frame as C; the initial state is C = ∅. Thereafter, whenever a connected region R_i is found, it is added to the set C in FIFO order:
C = C + {R_i}
When the next frame is processed, the first connected region of the current connected-region set C is checked first; if this region has moved out of the display range, it is deleted from the connected-region set; otherwise no operation is performed.
Then, for each region R_i, its area change relative to the connected region currently being processed is compared; if a new connected region R_k with k > max{i} is detected, it is added to the connected-region set C: C = C + {R_k}.
(5) Calculate the area change of the corresponding connected regions:
ΔA = dA/dt = (A_{R_i} − A_R) / (t_{i+1} − t_i)
where A_{R_i} denotes the area of the corresponding connected region in the previous frame and A_R denotes the area of the current connected region being compared.
Finally, if T_low < ΔA < T_high, the connected region may be a flame region.
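For illustration, the area-change test can be sketched as follows; the values of T_low and T_high are placeholders, since the description does not give them.

def area_change_rate(prev_area, cur_area, t_prev, t_cur):
    # dA/dt between the matched connected regions of two frames.
    return (prev_area - cur_area) / (t_cur - t_prev)

def may_be_flame(prev_area, cur_area, t_prev, t_cur, T_low=2.0, T_high=200.0):
    # The region is a flame candidate when the area-change rate lies between
    # the two thresholds (illustrative values).
    dA = area_change_rate(prev_area, cur_area, t_prev, t_cur)
    return T_low < dA < T_high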
Those of ordinary skill in the art will appreciate that the above embodiments are intended to illustrate the present invention and not to limit it; any changes or modifications of the above embodiments that remain within the essential scope of the present invention fall within the scope of the claims of the present invention.

Claims (1)

1. A method for identifying and detecting tunnel fire flame, characterized by comprising the following steps:
1) Preprocessing the input video stream to remove illumination: for the black-and-white or color video pictures captured under various conditions in the tunnel by a camera installed at the top of the tunnel, first convert color images into grayscale images, then use a gamma transform to remove excess illumination, where the threshold of the gamma transform is determined dynamically by computing the maximum pixel gray value in the image;
2) Performing motion detection on the video stream to obtain moving pixels: apply a temporal-difference method with a fixed threshold to the illumination-preprocessed image obtained in step 1; first initialize a background image, then use the correlation between frames to update the background image and the foreground image according to the current frame; the specific method is:
1. use the method for time-domain difference to judge motion pixel and moving region; (x, the gray-scale value of the pixel on y) is designated as g being positioned at coordinate in the i+1 frame i(x, y), (x, y) background pixel value on the coordinate is designated as B in first frame 0(x, y);
2. initial situation B 0(x, y)=g 0(x, y); For each frame, the next frame background pixel value of being predicted is upgraded according to present frame background pixel value and current actual pixel value afterwards:
B_{i+1}(x, y) = α·B_i(x, y) + (1 − α)·g_i(x, y),  if |g_i(x, y) − g_{i−1}(x, y)| < T
B_{i+1}(x, y) = B_i(x, y),  otherwise
where α is a proportionality coefficient representing the speed of background updating; its value is generally close to 1;
3. Finally, if the following inequality is satisfied, the pixel at coordinate (x, y) is considered to be a moving pixel:
|g_i(x, y) − B_i(x, y)| > T
3) Performing color detection on the video stream to obtain pixels with flame-like color: pixels with flame color are extracted from training videos and pictures, and their intensity values or RGB component values are analyzed; if the color value of the current pixel lies within the pixel range matching the flame color characteristics, the pixel is judged to be a flame-colored pixel and passes to the next stage of detection;
4) Searching for the connected regions formed by all mutually connected pixels that share the same characteristics: for the image after motion detection and color detection, the connected regions are searched; the connected-region search comprises two steps, region marking and region searching: first a mask method is used to mark the moving pixel regions, the flame-colored pixel regions and the pixel regions belonging to flame edges respectively, and then the BFS algorithm is used to search for the connected regions;
5) Calculating the perimeter and area of each connected region obtained and performing shape analysis: the shape analysis comprises extracting the boundary of each connected region using a depth-first search algorithm combined with morphological methods; calculating the perimeter of each connected-region boundary; calculating the area of each connected region; and calculating the circularity of each connected region; the specific method is:
1. Extract the boundary of each connected region:
The depth-first search method is used to extract the boundary of the connected region;
Denote the set of boundary points of a connected region as E and the set of all pixels in the frame as V, with E ⊆ V; the set of all boundary pixels is denoted V_e;
Initial state: E = ∅; a pixel v_i belonging to the boundary of the connected region is given;
Step 1: E = E + {v_i}, V_e = V_e − {v_i};
Step 2: for every v_j ∈ N_8(v_i), if there exists v_k ∈ N_4(v_j) with v_k ∉ V_m, then E = E + {v_j}, V_e = V_e − {v_j};
These two steps are then repeated until the following termination condition is satisfied: V_e = ∅;
2. Calculate the perimeter of each connected-region boundary:
When performing the depth-first search of the connected region in step 1, a recursive algorithm is used; each time the recursion tree grows by one level, the variable storing the connected-region perimeter is incremented by 1, and when the recursion finishes, the value obtained is exactly the perimeter of the connected region;
3. Calculate the area of each connected region:
During the breadth-first search of a connected region, a queue is used to store the pixels to be processed; each time a pixel enters the queue, the variable storing the connected-region area is incremented by 1, and when the termination condition is satisfied, the value obtained is exactly the area of the connected region;
4. Calculate the circularity of each connected region:
Denote the calculated perimeter of the connected region as C and its area as S; the circularity can then be calculated as:
D_circle = C² / (4πS)
The closer the calculated circularity is to 1, the more regular the shape of the connected region, and the lower the probability that it is a flame;
6) Performing area-change analysis on each connected region and finally judging whether a fire has occurred: this step comprises marking the pixels belonging to flame edge regions; using the BFS algorithm to search for the connected regions formed by the pixels belonging to flame edge regions; establishing a data structure to store the connected regions found; matching the corresponding connected regions of the previous and current frames in first-come-first-served order; and calculating the area change of the corresponding connected regions:
ΔA = dA/dt = (A_{R_i} − A_R) / (t_{i+1} − t_i)
where A_{R_i} denotes the area of the corresponding connected region in the previous frame and A_R denotes the area of the current connected region being compared;
Finally, if T_low < ΔA < T_high, the connected region may be a flame region.
CN 200810121371 2008-10-09 2008-10-09 Method for recognizing and detecting tunnel fire disaster flame Active CN101393603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200810121371 CN101393603B (en) 2008-10-09 2008-10-09 Method for recognizing and detecting tunnel fire disaster flame

Publications (2)

Publication Number Publication Date
CN101393603A CN101393603A (en) 2009-03-25
CN101393603B (en) 2012-01-04

Family

ID=40493892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200810121371 Active CN101393603B (en) 2008-10-09 2008-10-09 Method for recognizing and detecting tunnel fire disaster flame

Country Status (1)

Country Link
CN (1) CN101393603B (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763635B (en) * 2009-07-28 2011-10-05 北京智安邦科技有限公司 Method and device for judging region of background illumination variation in video image frame sequence
CN101944267B (en) * 2010-09-08 2012-04-18 大连古野软件有限公司 Smoke and fire detection device based on videos
CN102163361B (en) * 2011-05-16 2012-10-17 公安部沈阳消防研究所 Image-type fire detection method based on cumulative prospect image
CN102208018A (en) * 2011-06-01 2011-10-05 西安工程大学 Method for recognizing fire disaster of power transmission line based on video variance analysis
CN102542275B (en) * 2011-12-15 2014-04-23 广州商景网络科技有限公司 Automatic identification method for identification photo background and system thereof
CN102760230B (en) * 2012-06-19 2014-07-23 华中科技大学 Flame detection method based on multi-dimensional time domain characteristics
CN103617413B (en) * 2013-11-07 2015-05-20 电子科技大学 Method for identifying object in image
CN103942557B (en) * 2014-01-28 2017-07-11 西安科技大学 A kind of underground coal mine image pre-processing method
CN104978733B (en) * 2014-04-11 2018-02-23 富士通株式会社 Smog detection method and device
CN103886344B (en) * 2014-04-14 2017-07-07 西安科技大学 A kind of Image Fire Flame recognition methods
CN104091354A (en) * 2014-07-30 2014-10-08 北京华戎京盾科技有限公司 Fire detection method based on video images and fire detection device thereof
CN105574468B (en) * 2014-10-08 2020-07-17 深圳力维智联技术有限公司 Video flame detection method, device and system
CN105956618B (en) * 2016-04-27 2021-12-03 云南昆钢集团电子信息工程有限公司 Converter steelmaking blowing state identification system and method based on image dynamic and static characteristics
CN106373127A (en) * 2016-09-14 2017-02-01 东北林业大学 Laser scanning parallel detection method for wood species and surface defects
CN107085714B (en) * 2017-05-09 2019-12-24 北京理工大学 Forest fire detection method based on video
CN108520615B (en) * 2018-04-20 2020-08-25 吉林省林业科学研究院 Fire identification system and method based on image
CN108615327A (en) * 2018-06-11 2018-10-02 广州市景彤机电设备有限公司 Mobile terminal visual control pipeline spark method and mobile terminal
CN109100370A (en) * 2018-06-26 2018-12-28 武汉科技大学 A kind of pcb board defect inspection method based on sciagraphy and connected domain analysis
CN109063592A (en) * 2018-07-12 2018-12-21 天津艾思科尔科技有限公司 A kind of interior flame detection method based on edge feature
CN110005975B (en) * 2018-10-29 2021-01-26 中画高新技术产业发展(重庆)有限公司 Scene monitoring-based intelligent LED lamp
CN109684982B (en) * 2018-12-19 2020-11-20 深圳前海中创联科投资发展有限公司 Flame detection method based on video analysis and combined with miscible target elimination
CN109886227A (en) * 2019-02-27 2019-06-14 哈尔滨工业大学 Inside fire video frequency identifying method based on multichannel convolutive neural network
CN111368771A (en) * 2020-03-11 2020-07-03 四川路桥建设集团交通工程有限公司 Tunnel fire early warning method and device based on image processing, computer equipment and computer readable storage medium
CN112043991B (en) * 2020-09-15 2023-06-20 河北工业大学 Tunnel guide rail traveling fire-fighting robot system and use method
CN112070072B (en) * 2020-11-11 2021-02-19 国网江苏省电力有限公司经济技术研究院 Prefabricated cabin fire control system based on image recognition and control method thereof
CN112528755A (en) * 2020-11-19 2021-03-19 上海至冕伟业科技有限公司 Intelligent identification method for fire-fighting evacuation facilities
CN112906469A (en) * 2021-01-15 2021-06-04 上海至冕伟业科技有限公司 Fire-fighting sensor and alarm equipment identification method based on building plan
CN115802295B (en) * 2023-02-02 2023-05-30 深圳方位通讯科技有限公司 Tunnel broadcast multicast communication system based on 5G

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6956485B1 (en) * 1999-09-27 2005-10-18 Vsd Limited Fire detection algorithm
US6184792B1 (en) * 2000-04-19 2001-02-06 George Privalov Early fire detection method and apparatus
CN1979576A (en) * 2005-12-07 2007-06-13 浙江工业大学 Fire-disaster monitoring device based omnibearing vision sensor
CN1852428A (en) * 2006-05-25 2006-10-25 浙江工业大学 Intelligent tunnel safety monitoring apparatus based on omnibearing computer vision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
B. Uğur Töreyin, et al. Computer vision based method for real-time fire and flame detection. Pattern Recognition Letters, 2005, vol. 27, pp. 49-58. *
Shunsuke Kamijo, et al. Traffic Monitoring and Accident Detection at Intersections. IEEE Transactions on Intelligent Transportation Systems, 2000, vol. 1, no. 2, pp. 108-118. *
Thou-Ho (Chao-Ho) Chen, et al. The Smoke Detection for Early Fire-Alarming System Based on Video Processing. Intelligent Information Hiding and Multimedia Signal Processing, 2006, pp. 427-430. *
刘浏. Research on image-based flame recognition algorithms (in Chinese). China Master's Theses Full-text Database, 2008, no. 08, full text. *

Also Published As

Publication number Publication date
CN101393603A (en) 2009-03-25

Similar Documents

Publication Publication Date Title
CN101393603B (en) Method for recognizing and detecting tunnel fire disaster flame
CN101515326B (en) Method for identifying and detecting fire flame in big space
CN105788142B (en) A kind of fire detection system and detection method based on Computer Vision
CN103069434B (en) For the method and system of multi-mode video case index
CN103761529B (en) A kind of naked light detection method and system based on multicolour model and rectangular characteristic
US20160260306A1 (en) Method and device for automated early detection of forest fires by means of optical detection of smoke clouds
CN101739827B (en) Vehicle detecting and tracking method and device
CN109637068A (en) Intelligent pyrotechnics identifying system
CN104601964A (en) Non-overlap vision field trans-camera indoor pedestrian target tracking method and non-overlap vision field trans-camera indoor pedestrian target tracking system
CN103310422B (en) Obtain the method and device of image
CN103617414B (en) The fire disaster flame of a kind of fire color model based on maximum margin criterion and smog recognition methods
JP2000222673A (en) Vehicle color discriminating device
CN103810722A (en) Moving target detection method combining improved LBP (Local Binary Pattern) texture and chrominance information
CN103093203A (en) Human body re-recognition method and human body re-recognition system
CN105740774A (en) Text region positioning method and apparatus for image
CN106228150A (en) Smog detection method based on video image
Huerta et al. Exploiting multiple cues in motion segmentation based on background subtraction
CN112560649A (en) Behavior action detection method, system, equipment and medium
Amosov et al. Roadway gate automatic control system with the use of fuzzy inference and computer vision technologies
KR100755800B1 (en) Face detector and detecting method using facial color and adaboost
CN109684982B (en) Flame detection method based on video analysis and combined with miscible target elimination
CN107610106B (en) Detection method, detection device, electronic equipment and computer-readable storage medium
CN107729811A (en) A kind of night flame detecting method based on scene modeling
CN110188693A (en) Improved complex environment vehicle characteristics extract and parking method of discrimination
KR102178202B1 (en) Method and apparatus for detecting traffic light

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant