CN104123542A - Device and method for positioning wheel hub work piece - Google Patents

Device and method for positioning wheel hub work piece

Publication number
CN104123542A
CN104123542A
Authority
CN
China
Prior art keywords: wheel hub, image, point, detected, template
Prior art date
Legal status: Granted
Application number
CN201410349103.9A
Other languages
Chinese (zh)
Other versions
CN104123542B (en)
Inventor
陈喆
殷福亮
李丹丹
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN201410349103.9A
Publication of CN104123542A
Application granted
Publication of CN104123542B

Abstract

The invention discloses a device and method for positioning a wheel hub workpiece. The device comprises an image acquisition module, a wheel hub template information extraction module, a feature point extraction module for the wheel hub to be detected, a feature point matching module, and a wheel hub positioning module. The template information extraction module extracts SIFT (scale-invariant feature transform) feature points, the circle center, the valve (air nozzle) position, and four points on the outer-edge circumference from a wheel hub template image. To cope with illumination changes and with the translation, rotation, and scale variation encountered during wheel hub image matching, the method uses SIFT feature point matching to find spatially corresponding point pairs between the template image and the image to be detected, determines the spatial correspondence between the two hub-region images from those pairs, and then maps the known calibration points of the template image into the hub region of the image to be detected through that correspondence, thereby positioning the wheel hub.

Description

Device and method for positioning a wheel hub workpiece
Technical field
The present invention relates to workpiece positioning technology, and in particular to a device and method for positioning a wheel hub workpiece.
Background technology
On automotive automatic assembly lines, workpiece positioning is a common operational requirement. Applying industrial robots with computer vision to automotive assembly effectively reduces human error, significantly improves production efficiency and product quality, and lowers production cost. When an industrial robot machines automotive hub workpieces automatically, computer vision techniques analyze the workpiece images captured by the industrial camera, identify the hub, and compute its geometric position information; the robot's grasping attitude and trajectory are determined accordingly, so that the robot can be controlled in real time to grasp and carry the hub.
Hub workpieces are usually castings: after rough machining, the hub side retains many casting lines and the surface is rather rough. In addition, under actual working conditions other interfering objects often surround the hub, the hub may be translated or rotated, the captured image may show the hub only partially, and the background where the hub is placed may be complex. Under such conditions the hub image is complicated, which makes locating the hub and its valve very difficult.
The prior art related to the present invention is described below.
1. Technical scheme of prior art one
Zhao Yuliang, Liu Weijun, and Liu Yongxian ("Systematic research on online recognition of automotive hubs", Machine Design and Manufacture, 2007(10): 164-166) studied the automatic recognition and classification of multiple automotive hub types arriving in random mixed flow on a conveyor belt. The basic steps of the method are image acquisition, image preprocessing, feature extraction, and classification. The key step of hub positioning and classification is extracting five categories of features from the hub image: whether the hub center has a hole; the hub diameter; the number of holes in the hub's peripheral area; the area of the region occupied by the whole hub; and the gray value with the maximum pixel count within the hub region of the gray-scale image.
Prior art one fits the hub contour circle by image region segmentation, Roberts-operator edge detection, and the least-squares method. However, in an actual machining environment the background where the hub is placed may be complex, or the background color may be close to that of the hub; when the edge detection result is poor, image features may be missed or misjudged. In addition, when the hub workpiece is only partially captured, the method cannot compute the hub's form and position.
2. Technical scheme of prior art two
The document "Wheel hub form-and-position parameter detection method based on area-array CCD" (Science and Technology Bulletin, 2009, 25(2): 196-201) proposed a high-precision form-and-position parameter detection method based on area-array CCD imaging and computer image processing. Its basic steps are: capture the hub image together with a calibration template; perform edge detection via gray-level transformation and region segmentation, and correct the geometric distortion of the hub boundary with an optical distortion model; refine the edge detection result with a sub-pixel interpolation algorithm; finally fit the hub form-and-position parameters from the mounting hole positions.
The positioning accuracy of prior art two depends on the region segmentation and edge detection results, in particular on segmenting the shape details inside the hub region. In an actual machining environment, however, the background where the hub is placed may be complex or close in color to the hub, so the segmentation algorithm cannot bring out the detailed interior shape of the hub region well and the subsequent positioning steps fail. In addition, when the hub workpiece is only partially captured, the method likewise cannot compute the hub's form and position.
3. Technical scheme of prior art three
Hu Chao, Cui Jialin, Qiu Jun et al. (patent "Wheel hub automatic identification device and method", China, CN103090790A, 2013-05-08) proposed a device and method for identifying the hub center hole, the hub mounting surface, and the offset of the mounting surface from the plane of the hub's bottom edge. The method first measures the mounting-surface offset automatically with a non-contact range finder, comprehensively covers the technical parameters of the hub, and builds a hub information database; it then uses two image acquisition devices to capture images of the hub from above and below. The top view yields the hub's form parameters, while the bottom view yields the form parameters of the center hole and the mounting surface, such as the size, position, and profile of the mounting holes.
Prior art three requires a large amount of pre-stored hub image information, and actual recognition needs frontal images from both above and below the hub, so both the device and the recognition procedure are rather complicated. Moreover, the object of the present invention is to locate specific parts of the hub while the relative position of the camera and the hub is not fixed, so the method does not suit this scenario.
4. Technical scheme of prior art four
Huang Qian, Wu Yuan, and Tang Dajun (patent "Detection system for identifying hub type and detection method thereof", China, CN103425969A, 2013) proposed an automatic hub-type recognition system comprising a host computer connected to a CCD image sensor. The accompanying method comprises the steps of: initialization; acquiring a hub-free image of the hub-type recognition region; creating hub-type database records; and identifying and determining the hub type. Based on the pre-built hub-type database, the system automatically classifies each hub entering the recognition region during operation.
Prior art four requires a pre-stored hub image library, and when the hub workpiece is only partially captured it likewise cannot compute the hub's form and position.
In summary, existing hub workpiece positioning techniques have the following problems: (1) during hub matching, positioning deviation is large when the image undergoes illumination changes, translation, rotation, or scale variation; (2) under viewpoint change or partial occlusion, hub positioning is difficult.
The abbreviations used in the present invention are as follows:
SIFT: Scale-Invariant Feature Transform;
DoG: Difference of Gaussians;
BBF: Best Bin First;
RANSAC: Random Sample Consensus.
Summary of the invention
To solve the above problems of the prior art, the present invention provides a device and method for positioning a wheel hub workpiece, with the following two aims:
(1) to reduce the positioning deviation of the hub workpiece during hub matching;
(2) to make hub positioning easy under viewpoint change and partial occlusion.
To achieve these aims, the technical scheme of the present invention is as follows. A device for positioning a wheel hub workpiece comprises an image acquisition module, a hub template information extraction module, a feature point extraction module for the hub to be detected, a feature point matching module, and a hub positioning module. The image acquisition module acquires a gray-scale image of the hub workpiece. The hub template information extraction module extracts the SIFT feature points, the circle center, the valve position, and four points on the outer-edge circumference from the hub template image. The feature point extraction module extracts the SIFT feature point information from the hub image to be detected. The feature point matching module finds the feature point pairs matching between the hub image to be detected and the hub template image, and computes the spatial mapping between the detected hub and the template hub. The hub positioning module locates the circle center, the valve, and the four outer-edge circumference points of the corresponding hub in the image to be detected, and computes the hub radius in that image.
The output of the image acquisition module is connected to both the hub template information extraction module and the feature point extraction module for the hub to be detected; the inputs of the feature point matching module are connected to those same two modules, and the output of the feature point matching module is connected to the hub positioning module.
A positioning method using the above hub workpiece locating device comprises the following steps:
A. Offline processing
In the offline stage, a hub workpiece image is acquired, the SIFT feature point information of the hub image is extracted and stored, and the positions of the circle center and the valve are marked on the template image in advance. The stage specifically comprises the following steps:
A1. Acquire the hub workpiece image
The image acquisition module captures a gray-scale image of the hub workpiece. Shooting requires good lighting and a low-noise environment so that an ideal hub template image is obtained; the background color of the template image must be uniform, and the image must contain only the hub, without other interfering objects.
A2. Extract the hub template information
The hub template information extraction module extracts the SIFT feature points of the hub template image, calibrates the circle center and the valve position of the template hub, and measures the radius of the template hub. The concrete steps are as follows:
A21. Parse the input hub template image, search the hub workpiece area for pixels satisfying the SIFT feature-point criteria, and count and store the SIFT feature-point descriptors; the SIFT feature-point template information of the hub is obtained through steps A211 to A216:
A211. Construct the image pyramid T
Define the input image as f(x, y). Down-sampling f(x, y) I times yields an (I+1)-level image pyramid T, where I = log2[min(M, N)] − 3 and M and N are the numbers of rows and columns of f(x, y). Down-sampling here means taking the average of four mutually adjacent pixels as the down-sampled pixel.
In the image pyramid T, the level-0 image T_0(x, y) is the original image f(x, y); the level-i image T_i(x, y) is the image obtained after down-sampling the original image f(x, y) i times, i = 0, 1, 2, ..., I.
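The pyramid construction of step A211 can be sketched in a few lines of NumPy. This is a minimal illustration of the definitions above; the function name and the 2 x 2 block averaging layout are assumptions for illustration, not the patent's own code.

```python
import numpy as np

def build_image_pyramid(f):
    """Build the (I+1)-level pyramid T of step A211.

    Level 0 is the input image f; each further level averages each group
    of four mutually adjacent pixels of the previous level, and the
    number of down-samplings is I = log2(min(M, N)) - 3.
    """
    M, N = f.shape
    I = int(np.log2(min(M, N))) - 3
    T = [f.astype(np.float64)]
    for _ in range(I):
        g = T[-1]
        h, w = g.shape[0] // 2 * 2, g.shape[1] // 2 * 2
        g = g[:h, :w]  # trim odd edges so 2x2 blocks tile exactly
        T.append((g[0::2, 0::2] + g[1::2, 0::2]
                  + g[0::2, 1::2] + g[1::2, 1::2]) / 4.0)
    return T
```

For a 64 x 64 input this yields I = 3 down-samplings, i.e. a 4-level pyramid.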
A212. Construct the Gaussian pyramid L
Convolving T_i(x, y) with the Gaussian kernel function G(x, y, σ) while continuously varying the scale-space factor σ yields the scale space L_i:
L_i(x, y, σ) = G(x, y, σ) * T_i(x, y)    (1)
where the symbol '*' denotes convolution, σ is the scale-space factor, and i = 0, 1, 2, ..., I.
Applying the same operation to the (I+1) images in T yields L.
A213. Construct the DoG pyramid D
Taking the difference of every two adjacent images in L_i yields the DoG space D_i:
D_i(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * T_i(x, y) = L_i(x, y, kσ) − L_i(x, y, σ)    (2)
where the symbol '*' denotes convolution, k is the constant ratio between two adjacent scales, and i = 0, 1, 2, ..., I.
Applying the same operation to the (I+1) groups of images in L yields D.
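Steps A212 and A213 amount to blurring a pyramid level at a ladder of scales and differencing adjacent blurred images, per eq. (2). A minimal NumPy sketch follows; the truncated separable kernel, the function names, and the defaults σ0 = 1.6 and k = √2 are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian convolution with a truncated 1-D kernel.
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode='same'), 0, tmp)

def dog_stack(img, sigma0=1.6, k=2 ** 0.5, n=4):
    # L holds n+1 scale levels; D holds their n successive differences (eq. 2).
    L = [gaussian_blur(img, sigma0 * k ** j) for j in range(n + 1)]
    return [L[j + 1] - L[j] for j in range(n)]
```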
A214. Detect the local extrema in the scale space D
Using the Taylor expansion of the DoG function, solve for the point at which its derivative is zero:
D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂²D/∂X²) X    (3)
where X = (x, y, σ)^T.
Setting the derivative of D(X) to zero yields the extremum with sub-pixel accuracy:
X̂ = −(∂²D/∂X²)^(−1) (∂D/∂X)    (4)
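Equation (4) is a 3 x 3 linear solve at each candidate point. A sketch, under the assumption that the gradient and Hessian of D at the point have already been estimated (e.g. by finite differences); the function name is illustrative.

```python
import numpy as np

def refine_extremum(grad, hess):
    """Sub-pixel offset of eq. (4): X_hat = -(d2D/dX2)^-1 (dD/dX).

    grad: gradient of D at the candidate point, shape (3,) for (x, y, sigma)
    hess: 3x3 Hessian of D at the same point
    """
    # Solving the linear system is cheaper and more stable than inverting.
    return -np.linalg.solve(hess, grad)
```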
A215. Screen out unstable extrema to obtain the SIFT feature point set
First remove the low-contrast points in the image from the candidate extrema; then use the Hessian matrix to remove the extrema lying on edges.
In the DoG pyramid, the second derivative of the image at a given scale in the x direction is defined as D_xx (and similarly D_yy and D_xy); the Hessian matrix is expressed as:
H = [ D_xx  D_xy
      D_xy  D_yy ]    (5)
Define the two eigenvalues of H as λ1 and λ2, where λ1 ≥ λ2 and λ1/λ2 = r; λ1 and λ2 correspond to the principal curvature values of the image in the x and y directions, respectively. When r is greater than the threshold 10, the extremum is judged to lie on the edge of the DoG surface.
Define Tr(H) as the trace of H and Det(H) as its determinant; then
Tr(H)² / Det(H) = (λ1 + λ2)² / (λ1 λ2) = (r λ2 + λ2)² / (r λ2²) = (r + 1)² / r > (10 + 1)² / 10    (6)
Computing Tr(H) and Det(H) avoids solving for the eigenvalues directly, which reduces the computation.
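Equations (5) and (6) reduce the edge check to a trace/determinant ratio. A sketch, assuming the three second derivatives at the candidate point are given; the function name and the rejection of points with non-positive determinant (curvatures of opposite sign) are assumptions in line with common SIFT practice.

```python
def is_edge_point(Dxx, Dyy, Dxy, r=10.0):
    """Edge test of eqs. (5)-(6): reject a point when the principal-
    curvature ratio exceeds r, using Tr(H)^2 / Det(H) so that the
    eigenvalues never have to be computed explicitly."""
    tr = Dxx + Dyy
    det = Dxx * Dyy - Dxy * Dxy
    if det <= 0:  # curvatures of opposite sign: not a stable extremum
        return True
    return tr * tr / det > (r + 1) ** 2 / r
```

An isotropic blob (equal curvatures) passes, while a strongly elongated response is rejected as an edge.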
A216. Compute the SIFT feature point descriptors
Take an image region of a certain size around the key point as the statistical range and divide it into blocks; the gradient histogram of each block is computed to form a vector representing the image information of the region.
Define the gradient magnitude as m(x, y) and the gradient direction as θ(x, y), with
m(x, y) = sqrt{[L(x+1, y) − L(x−1, y)]² + [L(x, y+1) − L(x, y−1)]²}    (7)
First determine the image region required by the descriptor: the neighborhood of the feature point is divided into 4 × 4 sub-regions, each of size 3σ, where σ is the scale-space factor. Then compute the gradient orientation histogram of each sub-region: taking the direction of the feature point as the reference direction, compute the angle of each pixel's gradient direction relative to the reference direction, project it onto the 8 directions spaced π/4 apart over the interval [0, 2π), and accumulate the gradient magnitudes in each direction; after normalization, this yields an 8-dimensional vector descriptor per sub-region. Finally, concatenating the 8-dimensional vectors of all sub-regions forms the 4 × 4 × 8 = 128-dimensional feature point descriptor.
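The per-sub-region accumulation of step A216 can be sketched as below; the function name and the flat, unweighted accumulation are simplifying assumptions (the full SIFT descriptor additionally applies Gaussian weighting and trilinear interpolation between bins). Concatenating 16 such histograms gives the 128-dimensional descriptor.

```python
import numpy as np

def orientation_histogram(magnitudes, angles, ref_angle):
    """8-bin gradient histogram of one sub-region (step A216): angles are
    measured relative to the keypoint's reference direction, then the
    gradient magnitudes are accumulated into 8 bins of width pi/4 over
    [0, 2*pi)."""
    rel = np.mod(angles - ref_angle, 2 * np.pi)
    bins = (rel // (np.pi / 4)).astype(int) % 8
    hist = np.zeros(8)
    np.add.at(hist, bins, magnitudes)  # unbuffered accumulation per bin
    return hist
```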
A22. The hub template information extraction module calibrates and stores, in units of pixels, the positions of six pixels in the hub template image: the circle center O(x_0, y_0) of the hub workpiece, the valve center O_gas(x_gas, y_gas), and the four points O_1(x_1, y_1), O_2(x_2, y_2), O_3(x_3, y_3), O_4(x_4, y_4) on the hub's outer-edge circumference.
B. Online processing
In the online stage, first extract the SIFT feature points of the image to be detected; then search for the feature points matching the hub template with the Best-Bin-First (BBF) search algorithm; next reject mismatched points with the RANSAC algorithm and compute the spatial mapping between the hub in the image to be detected and the template image; finally compute the positions of the hub's circle center and valve in the image to be detected from the marked points of the template image. The stage specifically comprises the following steps:
B1. Extract the SIFT feature points of the image to be detected
The feature point extraction module for the hub to be detected extracts the SIFT feature points of the image to be detected as in step A21: parse the input hub image, search it for pixels satisfying the SIFT feature-point criteria, and count and store their descriptors.
B2. Match the feature points
The feature point matching module finds the feature points of the hub image to be detected that match the hub template image, rejects mismatched points, and computes the spatial mapping between the detected hub and the template hub. The concrete steps are as follows:
B21. Initial matching of the reference image and the image to be matched with the nearest/second-nearest neighbor criterion
Use the BBF algorithm to find, for a feature point p to be matched (with feature vector v_i), its nearest neighbor p_min (feature vector v_min) and second-nearest neighbor p_min2 (feature vector v_min2) in Euclidean distance; a point pair is accepted as a match when it satisfies:
Dist(v_i, v_min) / Dist(v_i, v_min2) < 0.8    (8)
where Dist(v_i, v_min) is the Euclidean distance between v_i and v_min, and Dist(v_i, v_min2) is the Euclidean distance between v_i and v_min2:
Dist(v_i, v_min2) = sqrt[(v_i − v_min2)(v_i − v_min2)^T]    (9)
Dist(v_i, v_min) = sqrt[(v_i − v_min)(v_i − v_min)^T]    (10)
where the superscript T denotes matrix transposition.
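The nearest/second-nearest ratio test of step B21 and eq. (8) can be sketched with a brute-force search standing in for the BBF tree search; the function name and the brute-force strategy are assumptions for illustration.

```python
import numpy as np

def ratio_test_matches(desc_t, desc_q, ratio=0.8):
    """Initial matching of step B21: for each query descriptor, find the
    nearest and second-nearest template descriptors by Euclidean distance
    and accept the pair only when d1 / d2 < ratio (eq. 8)."""
    matches = []
    for i, v in enumerate(desc_q):
        d = np.linalg.norm(desc_t - v, axis=1)  # distances to all templates
        j1, j2 = np.argsort(d)[:2]              # nearest, second nearest
        if d[j1] < ratio * d[j2]:
            matches.append((int(j1), i))        # (template index, query index)
    return matches
```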
B22. Reject mismatched points with the RANSAC algorithm, and compute the spatial correspondence between the target area and the template image.
Let point sets A and B be the initial matching points obtained in the template image and the detected image, respectively. The concrete RANSAC steps are as follows:
B221. Randomly choose 4 matching point pairs from A and B and compute the projective transformation matrix H of these four pairs:
A point p(x, y) in one image is transformed to the point p'(x', y') by the matrix H, up to a scale factor:
(x', y', 1)^T ∝ H (x, y, 1)^T    (11)
where
H = [ h_00  h_01  h_02
      h_10  h_11  h_12
      h_20  h_21  h_22 ]
That is, H can be obtained from the matching point pairs p(x, y) and p'(x', y'); every 4 matching pairs determine one projective transformation matrix.
B222. Apply the projective transformation matrix H computed in step B221 to all feature points in point set A, obtaining point set B';
Compute the coordinate error of all corresponding points in B and B', i.e. e = ||B − B'||. Given an error threshold σ, if e < σ, the pair is judged an inlier pair; otherwise it is an outlier pair.
B223. Repeat steps B221 and B222 to find the transformation with the largest number of inliers, take the inlier pairs of that transformation as the new point sets A and B, and start a new round of iteration;
B224. Iteration stop criterion: the iteration terminates when the number of inliers obtained in the current round equals the number of inlier pairs in A and B from the previous round;
B225. Iteration result: the final A and B are the matched point sets with mismatched feature points rejected, and the corresponding projective transformation matrix H represents the desired spatial transformation between the original image and the image to be detected.
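The core operations of steps B221 and B222, namely solving H from four point pairs (with h_22 normalized to 1) and counting inliers under a distance threshold, can be sketched as follows; the function names and the 3-pixel threshold are illustrative assumptions.

```python
import numpy as np

def estimate_homography(src, dst):
    """Solve the projective matrix H of eq. (11), with h22 = 1, from four
    point pairs, as in RANSAC step B221 (8 equations, 8 unknowns)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pts):
    # Homogeneous transform followed by perspective division.
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def count_inliers(H, src, dst, thresh=3.0):
    """Inlier test of step B222: a pair is an inlier when its transform
    error falls below the threshold."""
    err = np.linalg.norm(apply_homography(H, src) - dst, axis=1)
    return int(np.sum(err < thresh))
```

The full loop of steps B223 and B224 simply repeats these two operations over random 4-pair samples until the inlier count stops growing.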
B3. Locate the circle center and valve position of the hub in the image to be detected
The hub positioning module locates the circle center and valve position of the hub in the image to be detected and computes the hub radius in that image. The concrete steps are as follows:
B31. Using the spatial transformation matrix H obtained in step B222, compute the six pixels in the image to be detected corresponding to the calibration points: the circle center O'(x'_0, y'_0) of the hub workpiece, the valve center O'_gas(x'_gas, y'_gas), and the four points O'_1(x'_1, y'_1), O'_2(x'_2, y'_2), O'_3(x'_3, y'_3), O'_4(x'_4, y'_4) on the hub's outer-edge circumference.
Taking the hub center coordinate O'(x'_0, y'_0) as an example:
x'_0 = (h_00 x_0 + h_01 y_0 + h_02) / (h_20 x_0 + h_21 y_0 + 1)
y'_0 = (h_10 x_0 + h_11 y_0 + h_12) / (h_20 x_0 + h_21 y_0 + 1)    (12)
B32. Compute the radius R' of the hub workpiece in the image to be detected:
R' = (d_1 + d_2 + d_3 + d_4) / 4    (13)
where d_i = sqrt[(x'_i − x'_0)² + (y'_i − y'_0)²], i = 1, 2, 3, 4.
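Steps B31 and B32, mapping the calibrated template points through H per eq. (12) and averaging the four centre-to-rim distances per eq. (13), can be sketched as below; the function name is an assumption for illustration.

```python
import numpy as np

def locate_hub(H, center, rim_pts):
    """Step B3: map the calibrated template centre and the four rim
    points through H (eq. 12), then take the hub radius in the image to
    be detected as the mean centre-to-rim distance (eq. 13)."""
    def warp(p):
        x, y = p
        w = H[2, 0] * x + H[2, 1] * y + H[2, 2]  # perspective denominator
        return np.array([(H[0, 0] * x + H[0, 1] * y + H[0, 2]) / w,
                         (H[1, 0] * x + H[1, 1] * y + H[1, 2]) / w])
    c = warp(center)
    rim = [warp(p) for p in rim_pts]
    R = np.mean([np.linalg.norm(p - c) for p in rim])
    return c, R
```

Under a pure 2x scaling, for instance, a template radius of 10 pixels maps to 20 pixels.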
Compared with the prior art, the present invention has the following beneficial effects:
1. To position the hub workpiece effectively, the present invention takes into account the illumination changes and the translation, rotation, and scale variation encountered during hub image matching. It matches spatially corresponding point pairs between the template image and the image to be detected with scale-invariant feature transform (SIFT) feature point matching, determines the spatial correspondence between the hub-region images of the two from those pairs, and then maps the known calibration points of the template image to the corresponding points of the hub region in the image to be detected through that correspondence, thereby positioning the hub. As proved in "Lowe D G. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004, 60(2): 91-110", feature points satisfying the SIFT criteria retain good SIFT properties under illumination change, translation, rotation, and scale variation; the present invention is therefore robust to ambient light, viewpoint change, and partial occlusion, and can position the hub workpiece well in disturbed environments.
2. Before actual positioning, the feature point information of the hub template image is obtained by offline processing, and the circle center and valve of the template hub are calibrated in advance, which reduces the computation during actual positioning;
3. The SIFT algorithm is adopted as the feature point matching method, overcoming the illumination, translation, rotation, and scale-change problems encountered during image matching while remaining robust to background noise, viewpoint change, and partial occlusion.
4. The RANSAC method is adopted to reject mismatched point pairs, which improves matching accuracy.
Accompanying drawing explanation
The present invention has 7 accompanying drawings, wherein:
Fig. 1 is the flow chart of the SIFT-feature-based hub workpiece positioning method.
Fig. 2 is the composition diagram of the SIFT-feature-based hub workpiece positioning device.
Fig. 3 is the diagram of the hub template marker points.
Fig. 4 shows the positioning result under hub rotation and translation.
Fig. 5 shows the positioning result with interfering objects around the hub.
Fig. 6 shows the positioning result with a non-uniform image background.
Fig. 7 shows the positioning result with part of the hub missing.
Embodiment
The present invention is described further below with reference to the accompanying drawings. The composition of the hub workpiece positioning device is shown in Fig. 2, and the concrete method flow in Fig. 1.
To verify the validity of the present invention, objective and subjective tests were carried out.
1. Subjective performance (visual effect)
Different noise conditions occur when the camera captures hub images. To verify the validity of the method of the invention, several images were collected under disturbed conditions and tested.
In the experiment the template image is 690 × 691 pixels and the images to be detected are 1280 × 960 pixels; the hub template and its marker points are shown in Fig. 3, where crosses mark the circle center and valve center of the hub template and a solid circle marks the template's outer-edge circumference. Hub positioning first requires choosing the relevant coordinate values in the template image: in this experiment the circle center of the template hub is at (353, 351), the valve center is at (127, 246), and the hub radius is 339 pixels.
Owing to space limitations, one image is chosen for each disturbance condition; the positioning results are shown in Figs. 4-7. Asterisks mark the actual hub center and valve center and a solid circle marks the actual outer-edge circumference, while crosses mark the hub center and valve center detected by the present invention and a dashed circle marks the detected outer-edge circumference. In addition, the circle-center and valve regions are enlarged on the right side of each figure to show the detection results more clearly.
2. Objective performance
To evaluate the positioning accuracy objectively, the present invention computed, for every disturbance condition, the absolute differences between the detected radius, circle-center, and valve-center coordinates and their actual values, i.e. the absolute deviation of the positioning results from the truth, and calculated the mean absolute difference. The positioning results for the hub circle center, valve center, and radius under the disturbance conditions, and the mean absolute differences from the actual values, are shown in Table 1; the data in the table are in units of pixels.
Table 1. Mean absolute difference between the hub positioning results of the method of the invention and the actual values
As can be seen from Table 1, even when the hub region undergoes rotation and translation, the background is non-uniform or contains interfering objects, or part of the hub is missing, the hub positioning based on SIFT feature point matching obtains good results and is hardly affected by the disturbing factors.
3. For the technical scheme of the present invention, the following alternatives can equally achieve the object of the invention:
(1) In a fairly ideal working environment, with sufficient illumination and no occlusion, other feature point matching algorithms that are less robust but also cheaper to compute, such as the SURF algorithm, can be used for matching;
(2) The SIFT algorithm is a point-feature matching method; for a regular rotationally symmetric workpiece such as a hub, line features can also replace point features for matching;
(3) Based on the present invention, when illumination is good, the hub region can first be segmented out by image region segmentation before SIFT feature point matching, which effectively reduces the computation of the positioning method.

Claims (2)

1. A device for positioning a wheel hub workpiece, characterized in that it comprises an image acquisition module, a hub template information extraction module, a feature point extraction module for the hub to be detected, a feature point matching module, and a hub positioning module; the image acquisition module acquires a gray-scale image of the hub workpiece; the hub template information extraction module extracts the SIFT feature points, the circle center, the valve position, and four points on the outer-edge circumference from the hub template image; the feature point extraction module extracts the SIFT feature point information from the hub image to be detected; the feature point matching module finds the feature point pairs matching between the hub image to be detected and the hub template image, and computes the spatial mapping between the detected hub and the template hub; the hub positioning module locates the circle center, the valve, and the four outer-edge circumference points of the corresponding hub in the image to be detected, and computes the hub radius in the image to be detected;
the output of the image acquisition module is connected to both the hub template information extraction module and the feature point extraction module for the hub to be detected; the inputs of the feature point matching module are connected to those same two modules, and the output of the feature point matching module is connected to the hub positioning module.
2. A positioning method using the device for positioning a wheel hub workpiece, characterized in that it comprises the following steps:
A, offline processing
In the offline processing stage, acquire the hub workpiece image, extract and store the SIFT feature point information of the hub image, and mark in advance the positions of the circle center and the valve on the template image; this specifically comprises the following steps:
A1, acquire the hub workpiece image
Acquire a grayscale image of the hub workpiece with the image acquisition module; shooting requires good illumination and a low-noise environment, so as to obtain an ideal hub template image; the background color of this template image must be uniform, and the image must contain only the hub, with no other interfering objects;
A2, hub template information extraction
Extract the SIFT feature points of the hub template image with the hub template information extraction module, calibrate the circle center and the valve position of the hub template, and measure the radius of the hub template; the concrete steps are as follows:
A21, parse the input hub workpiece template image, search the hub workpiece area for pixels that meet the SIFT feature point criteria, and compute and store the SIFT feature point descriptors; the SIFT feature point template information of the hub is obtained according to steps 1) to 5):
A211, construct the image pyramid T
Define the input image as f(x, y); downsampling f(x, y) I times yields an image pyramid T of (I+1) layers, where I = log2[min(M, N)] - 3, and M and N are respectively the number of rows and columns of f(x, y); downsampling here means taking the average of four mutually adjacent pixels as the downsampled pixel;
In the image pyramid model T, the image of layer 0 is T_0(x, y), i.e. the original image f(x, y); the image of layer i, denoted T_i(x, y), is the image obtained by downsampling the original image f(x, y) i times, i = 0, 1, 2, …, I;
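Step A211 can be illustrated with a short sketch (not part of the claim; plain NumPy, and the function names are ours):

```python
import numpy as np

def downsample(img):
    """Average each 2x2 block of mutually adjacent pixels (step A211)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # drop odd edge rows/cols
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def build_pyramid(f):
    """Build the (I+1)-layer pyramid T with I = log2(min(M, N)) - 3."""
    M, N = f.shape
    I = int(np.log2(min(M, N))) - 3
    T = [f.astype(np.float64)]        # layer 0 is the original image f(x, y)
    for _ in range(I):
        T.append(downsample(T[-1]))   # layer i = i downsamplings of f
    return T
```

For a 256 × 256 input, I = 5, so the pyramid has 6 layers ending at 8 × 8.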
A212, construct the Gaussian pyramid L
Convolve T_i(x, y) with the Gaussian convolution kernel function G(x, y, σ), continuously varying the scale-space factor σ, to obtain the scale space L_i:
L_i(x, y, σ) = G(x, y, σ) * T_i(x, y)   (1)
where the symbol '*' denotes convolution, σ is the scale-space factor, and i = 0, 1, 2, …, I;
Apply the same operation to all (I+1) images in T to obtain L;
A213, construct the DoG pyramid D
Take the difference of every two adjacent images in L_i to obtain the DoG space D_i:
D_i(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * T_i(x, y) = L_i(x, y, kσ) − L_i(x, y, σ)   (2)
where the symbol '*' denotes convolution, k is the constant scale ratio between two adjacent scale spaces, and i = 0, 1, 2, …, I;
Apply the same operation to all (I+1) groups of images in L to obtain D;
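Steps A212 and A213 amount to blurring each pyramid layer at successive scales and differencing adjacent blurred images. A minimal sketch (ours, not the patent's; SciPy's `gaussian_filter` is assumed to stand in for the kernel G):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(img, sigma=1.6, k=np.sqrt(2), n_scales=5):
    """Blur one pyramid layer T_i at scales sigma * k**j (eq. 1), then
    difference adjacent blurred images to get the DoG stack (eq. 2)."""
    L = [gaussian_filter(img.astype(float), sigma * k**j) for j in range(n_scales)]
    return [L[j + 1] - L[j] for j in range(n_scales - 1)]
```

Each entry of the returned stack equals L(x, y, kσ) − L(x, y, σ) for one pair of adjacent scales.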
A214, detect spatial local extrema in D
Using the Taylor expansion of the DoG function, solve for the point at which the derivative of this function is zero:
D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂²D/∂X²) X   (3)
where X = (x, y, σ)^T;
Setting the derivative of D(X) to zero gives the extreme point with sub-pixel accuracy:
X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X)   (4)
A215, screen out unstable extreme points to obtain the SIFT feature point set
First remove the low-contrast points in the image from the extreme points X̂ obtained above; then use the Hessian matrix to remove edge extreme points;
In the DoG pyramid, denoting the second derivative of the image at a given scale in the x direction as D_xx, the Hessian matrix is expressed as:
H = | D_xx  D_xy |
    | D_xy  D_yy |   (5)
Let the two eigenvalues of H be λ1 and λ2, with λ1 ≥ λ2 and r = λ1/λ2; λ1 and λ2 correspond to the principal curvature values of the image in the x direction and the y direction respectively; when r exceeds the threshold 10, the extreme point is judged to lie on an edge of the DoG surface;
Let Tr(H) denote the trace of H and Det(H) its determinant; then
Tr(H)² / Det(H) = (λ1 + λ2)² / (λ1 λ2) = (r λ2 + λ2)² / (r λ2²) = (r + 1)² / r > (10 + 1)² / 10   (6)
Computing Tr(H) and Det(H) avoids solving for the eigenvalues directly, which reduces the computation;
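The edge test of eq. (6) needs only the trace and determinant of H, never the eigenvalues themselves. A small sketch (function name ours; r = 10 as in the claim):

```python
def is_edge_point(Dxx, Dyy, Dxy, r_thresh=10.0):
    """Edge test from eq. (6): reject the extremum when Tr(H)^2 / Det(H)
    exceeds (r+1)^2 / r, avoiding an explicit eigendecomposition."""
    tr = Dxx + Dyy
    det = Dxx * Dyy - Dxy * Dxy
    if det <= 0:   # principal curvatures of opposite sign: not a stable extremum
        return True
    return tr * tr / det > (r_thresh + 1) ** 2 / r_thresh
```

A point with one curvature much larger than the other (an edge) is rejected, while a point with balanced curvatures (corner-like) is kept.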
A216, compute the SIFT feature point descriptors
Take an image region of a certain size around the key point as the statistical range and divide it into several sub-blocks; compute the gradient histogram of the points in each sub-block to obtain a vector representing the image information of that region;
Define the gradient magnitude as m(x, y) and the gradient direction as θ(x, y), where
m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )   (7)
First determine the image region required by the descriptor: divide the neighborhood around the feature point into 4 × 4 sub-regions, each of size 3σ, where σ is the scale-space factor; then compute the gradient orientation histogram of each sub-region: taking the direction of the feature point as the reference direction, compute the angle of the gradient direction of each pixel in each sub-region relative to the reference direction, project it onto the interval [0, 2π) quantized into 8 directions at intervals of π/4, and accumulate the gradient magnitudes in each direction; after a normalization operation, this generates an 8-dimensional vector descriptor; finally, concatenate the 8-dimensional vectors of all sub-regions to form the 4 × 4 × 8 = 128-dimensional feature point descriptor;
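For a single 4 × 4 sub-region, the 8-bin orientation histogram of step A216 could look like the sketch below (ours; full SIFT additionally applies Gaussian weighting and trilinear interpolation across bins, which is omitted here):

```python
import numpy as np

def subregion_histogram(angles, magnitudes, ref_angle):
    """8-bin gradient-orientation histogram for one sub-region (step A216):
    angles are taken relative to the keypoint's reference direction,
    folded into [0, 2*pi), and magnitudes are accumulated per pi/4 bin."""
    rel = np.mod(np.asarray(angles, float) - ref_angle, 2 * np.pi)
    bins = (rel // (np.pi / 4)).astype(int) % 8
    hist = np.zeros(8)
    np.add.at(hist, bins, magnitudes)   # accumulate magnitude into each bin
    return hist
```

Sixteen such histograms, concatenated, give the 4 × 4 × 8 = 128-dimensional descriptor.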
A22, the hub template information extraction module calibrates and stores, in pixel units, the position information of six pixels in the hub template image: the circle center O(x_0, y_0) of the hub workpiece, the valve center O_gas(x_gas, y_gas) of the hub workpiece, and the four points O_1(x_1, y_1), O_2(x_2, y_2), O_3(x_3, y_3), O_4(x_4, y_4) on the outer-edge circumference of the hub;
B, online processing
In the online processing stage, first extract the SIFT feature points of the image to be detected; then search for the feature points that match the hub template using the Best-Bin-First (BBF) search algorithm; then reject mismatched points with the RANSAC algorithm and compute the spatial mapping between the hub in the image to be detected and the template image; finally, compute the positions of the circle center and the valve of the hub in the image to be detected from the marked points of the template image; this specifically comprises the following steps:
B1, extract the SIFT feature points of the image to be detected
Using the feature point extraction module for the hub to be detected, extract the SIFT feature points of the image to be detected according to step A21: parse the input hub image to be detected, search the image for pixels that meet the SIFT feature point criteria, and compute and store their SIFT feature point descriptors;
B2, match the feature points
Using the feature point matching module, find the feature points of the hub image to be detected that match the hub template image, reject mismatched points, and compute the spatial mapping between the hub to be detected and the template hub; the concrete steps are as follows:
B21, perform initial matching between the reference image and the image to be matched with the nearest-neighbor/second-nearest-neighbor algorithm
Use the BBF algorithm to search for the feature point p_min (with feature vector v_min) nearest in Euclidean distance to the feature point p to be matched (with feature vector v_i), and the second-nearest feature point p_min2 (with feature vector v_min2); a point pair satisfying the following condition is a matched feature point pair:
Dist(v_i, v_min) / Dist(v_i, v_min2) < 0.8   (8)
where Dist(v_i, v_min) denotes the Euclidean distance between v_i and v_min, and Dist(v_i, v_min2) denotes the Euclidean distance between v_i and v_min2:
Dist(v_i, v_min2) = sqrt( (v_i − v_min2)(v_i − v_min2)^T )   (9)
Dist(v_i, v_min) = sqrt( (v_i − v_min)(v_i − v_min)^T )   (10)
where the superscript T denotes matrix transposition;
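Equations (8)-(10) are the standard nearest/second-nearest ratio test. A brute-force sketch (ours; the patent uses BBF to approximate the nearest-neighbor search, and exhaustive search stands in for it here):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Nearest/second-nearest ratio test (eq. 8): keep a match only when the
    closest descriptor in desc_b is clearly better than the runner-up."""
    matches = []
    for i, v in enumerate(desc_a):
        d = np.linalg.norm(desc_b - v, axis=1)   # Euclidean distances, eqs. (9)-(10)
        j, j2 = np.argsort(d)[:2]                # nearest and second-nearest
        if d[j] < ratio * d[j2]:
            matches.append((i, j))
    return matches
```

An ambiguous point, whose two nearest candidates are nearly equidistant, fails the test and is discarded.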
B22, reject mismatched points with the RANSAC algorithm, and compute the spatial correspondence between the target area and the template image;
Let point sets A and B be the sets of initial matching points obtained in the template image and the detected image respectively; the concrete steps of the RANSAC algorithm are as follows:
B221, randomly choose 4 matching point pairs from point sets A and B, and compute the projective transformation matrix H of these four point pairs:
A point p(x, y) in the image is transformed by the matrix H to the point p′(x′, y′):
[x′, y′, 1]^T = H [x, y, 1]^T   (11)
where
H = | h00  h01  h02 |
    | h10  h11  h12 |
    | h20  h21  h22 |
That is, H can be obtained from the matching point pairs p(x, y) and p′(x′, y′); every 4 matching point pairs determine one projective transformation matrix;
B222, apply the projective transformation matrix H computed in step B221 to all feature points in point set A to obtain point set B′;
Compute the coordinate error of all corresponding points in point sets B and B′, i.e. e = ||B − B′||; given a preset error threshold σ, if e < σ the point pair is regarded as an inlier pair, otherwise as an outlier pair;
B223, repeat steps B221 and B222 to find the transformation with the largest number of inliers, take the inlier point pairs obtained under this transformation as the new point sets A and B, and carry out a new round of iteration;
B224, iteration termination test: when the number of inliers obtained in the current iteration equals the number of point pairs in the point sets A and B of the previous iteration, the iteration terminates;
B225, iteration result: the point sets A and B of the final iteration are the matched point sets after mismatched feature points have been rejected, and the corresponding projective transformation matrix H represents the desired spatial transformation between the original image and the image to be detected;
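Steps B221-B225 could be sketched as follows (our illustration under stated assumptions: the patent does not specify how H is solved from 4 pairs, so the standard DLT via SVD is assumed, and the loop is capped at a fixed iteration count rather than run to the fixed point of step B224):

```python
import numpy as np

def fit_homography(src, dst):
    """DLT estimate of a 3x3 H from >= 4 point pairs (step B221)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                     # normalize so h22 = 1

def apply_h(H, pts):
    """Project points through H in homogeneous coordinates (eqs. 11-12)."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(A, B, n_iter=200, thresh=3.0, seed=0):
    """Steps B221-B225: repeatedly fit H on 4 random pairs and keep the
    H with the most inliers under the reprojection-error threshold."""
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, np.zeros(len(A), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(A), 4, replace=False)
        H = fit_homography(A[idx], B[idx])
        err = np.linalg.norm(apply_h(H, A) - B, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_H, best_inliers = H, inliers
    return best_H, best_inliers
```

With clean correspondences plus a single gross mismatch, the mismatch is rejected as an outlier and the recovered H reproduces the true mapping on the inliers.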
B3, locate the circle center and the valve position of the hub in the image to be detected
Using the hub positioning module, locate the circle center and the valve position of the hub in the image to be detected, and compute the radius of the hub in the image to be detected; the concrete steps are as follows:
B31, using the spatial transformation matrix H obtained in step B222, compute the six pixels in the image to be detected that correspond to the calibration points: the circle center O′(x′_0, y′_0) of the hub workpiece, the valve center O′_gas(x′_gas, y′_gas) of the hub workpiece, and the four points O′_1(x′_1, y′_1), O′_2(x′_2, y′_2), O′_3(x′_3, y′_3), O′_4(x′_4, y′_4) on the outer-edge circumference of the hub;
Taking the coordinate O′(x′_0, y′_0) of the hub center position as an example:
x′_0 = (h00 x_0 + h01 y_0 + h02) / (h20 x_0 + h21 y_0 + 1)
y′_0 = (h10 x_0 + h11 y_0 + h12) / (h20 x_0 + h21 y_0 + 1)   (12)
B32, compute the radius R′ of the hub workpiece in the image to be detected:
R′ = (d_1 + d_2 + d_3 + d_4) / 4   (13)
where d_i = sqrt( (x′_i − x′_0)² + (y′_i − y′_0)² ), i = 1, 2, 3, 4.
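Steps B31 and B32 combined, as a sketch (ours; it assumes the six template points are stacked in the order center, valve, then the four rim points, and that H is normalized with h22 = 1 as in eq. (12)):

```python
import numpy as np

def locate_hub(H, template_pts):
    """Map the six calibrated template points through H (eq. 12) and average
    the four rim-point distances to the mapped center to get R' (eq. 13)."""
    pts = np.column_stack([template_pts, np.ones(len(template_pts))]) @ np.asarray(H, float).T
    pts = pts[:, :2] / pts[:, 2:3]          # perspective divide
    center, valve, rim = pts[0], pts[1], pts[2:6]
    radius = np.mean(np.linalg.norm(rim - center, axis=1))
    return center, valve, radius
```

For a pure scaling H, the mapped center, valve and radius scale accordingly.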
CN201410349103.9A 2014-07-18 2014-07-18 A kind of devices and methods therefor of hub workpiece positioning Active CN104123542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410349103.9A CN104123542B (en) 2014-07-18 2014-07-18 A kind of devices and methods therefor of hub workpiece positioning

Publications (2)

Publication Number Publication Date
CN104123542A true CN104123542A (en) 2014-10-29
CN104123542B CN104123542B (en) 2017-06-27

Family

ID=51768947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410349103.9A Active CN104123542B (en) 2014-07-18 2014-07-18 A kind of devices and methods therefor of hub workpiece positioning

Country Status (1)

Country Link
CN (1) CN104123542B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020154820A1 (en) * 2001-03-06 2002-10-24 Toshimitsu Kaneko Template matching method and image processing device
CN102799859A (en) * 2012-06-20 2012-11-28 北京交通大学 Method for identifying traffic sign
CN103077512A (en) * 2012-10-18 2013-05-01 北京工业大学 Feature extraction and matching method and device for digital image based on PCA (principal component analysis)
WO2014061221A1 (en) * 2012-10-18 2014-04-24 日本電気株式会社 Image sub-region extraction device, image sub-region extraction method and program for image sub-region extraction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
乐莹 等: "基于面阵CCD的轮毂形位参数检测方法", 《科技通报》 *
李丹丹: "基于图像匹配技术的轮毂定位方法", 《学术资源发现平台》 *
程德志等: "基于改进SIFT算法的图像匹配方法", 《计算机仿真》 *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680550A (en) * 2015-03-24 2015-06-03 江南大学 Method for detecting defect on surface of bearing by image feature points
CN105423975A (en) * 2016-01-12 2016-03-23 济南大学 Calibration system and method of large-size workpiece
CN105976358A (en) * 2016-04-27 2016-09-28 北京以萨技术股份有限公司 Rapid convolution calculating method used for characteristic-pyramid multiple convolution kernels
CN105976358B (en) * 2016-04-27 2018-07-27 北京以萨技术股份有限公司 A method of the fast convolution for the more convolution kernels of feature pyramid calculates
CN106325205A (en) * 2016-09-20 2017-01-11 图灵视控(北京)科技有限公司 Wheel hub mounting hole flexible and automatic machining system based on machine vision
CN106325205B (en) * 2016-09-20 2019-01-25 图灵视控(北京)科技有限公司 A kind of hub installing hole flexibility automatic processing system based on machine vision
CN109427050A (en) * 2017-08-23 2019-03-05 阿里巴巴集团控股有限公司 Guide wheel quality determining method and equipment
CN107866386A (en) * 2017-09-30 2018-04-03 四川绿能环保科技股份公司 Perishable rubbish identifying system and method
CN107862690A (en) * 2017-11-22 2018-03-30 佛山科学技术学院 The circuit board element localization method and positioner of a kind of feature based Point matching
CN107862690B (en) * 2017-11-22 2023-11-14 佛山科学技术学院 Circuit board component positioning method and device based on feature point matching
CN108491841A (en) * 2018-03-21 2018-09-04 东南大学 A kind of automotive hub type identification monitoring management system and method
CN108665057A (en) * 2018-03-29 2018-10-16 东南大学 A kind of more production point wheel hub image classification methods based on convolutional neural networks
CN109060262A (en) * 2018-09-27 2018-12-21 芜湖飞驰汽车零部件技术有限公司 A kind of wheel rim weld joint air-tight detection device and air-tightness detection method
CN109592433A (en) * 2018-11-29 2019-04-09 合肥泰禾光电科技股份有限公司 A kind of cargo de-stacking method, apparatus and de-stacking system
CN109871854A (en) * 2019-02-22 2019-06-11 大连工业大学 Quick wheel hub recognition methods
CN109871854B (en) * 2019-02-22 2023-08-25 大连工业大学 Quick hub identification method
CN111191708A (en) * 2019-12-25 2020-05-22 浙江省北大信息技术高等研究院 Automatic sample key point marking method, device and system
CN111259971A (en) * 2020-01-20 2020-06-09 上海眼控科技股份有限公司 Vehicle information detection method and device, computer equipment and readable storage medium
CN111687444A (en) * 2020-06-16 2020-09-22 浙大宁波理工学院 Method and device for identifying and positioning automobile hub three-dimensional identification code
CN111687444B (en) * 2020-06-16 2021-04-30 浙大宁波理工学院 Method and device for identifying and positioning automobile hub three-dimensional identification code
CN112198161A (en) * 2020-10-10 2021-01-08 安徽和佳医疗用品科技有限公司 PVC gloves real-time detection system based on machine vision
CN112883963B (en) * 2021-02-01 2022-02-01 合肥联宝信息技术有限公司 Positioning correction method, device and computer readable storage medium
CN112883963A (en) * 2021-02-01 2021-06-01 合肥联宝信息技术有限公司 Positioning correction method, device and computer readable storage medium
CN113432585A (en) * 2021-06-29 2021-09-24 沈阳工学院 Non-contact hub position accurate measurement method based on machine vision technology
CN113591923A (en) * 2021-07-01 2021-11-02 四川大学 Engine rocker arm part classification method based on image feature extraction and template matching
CN113720280A (en) * 2021-09-03 2021-11-30 北京机电研究所有限公司 Bar center positioning method based on machine vision
CN114800533B (en) * 2022-06-28 2022-09-02 诺伯特智能装备(山东)有限公司 Sorting control method and system for industrial robot
CN114800533A (en) * 2022-06-28 2022-07-29 诺伯特智能装备(山东)有限公司 Sorting control method and system for industrial robot
CN116977341A (en) * 2023-09-25 2023-10-31 腾讯科技(深圳)有限公司 Dimension measurement method and related device
CN116977341B (en) * 2023-09-25 2024-01-09 腾讯科技(深圳)有限公司 Dimension measurement method and related device
CN117058151A (en) * 2023-10-13 2023-11-14 山东骏程金属科技有限公司 Hub detection method and system based on image analysis
CN117058151B (en) * 2023-10-13 2024-01-05 山东骏程金属科技有限公司 Hub detection method and system based on image analysis

Also Published As

Publication number Publication date
CN104123542B (en) 2017-06-27

Similar Documents

Publication Publication Date Title
CN104123542A (en) Device and method for positioning wheel hub work piece
CN112818988B (en) Automatic identification reading method and system for pointer instrument
CN104574421B (en) Large-breadth small-overlapping-area high-precision multispectral image registration method and device
CN111028277B (en) SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network
CN110097536B (en) Hexagonal bolt looseness detection method based on deep learning and Hough transform
CN112699876B (en) Automatic reading method for various meters of gas collecting station
CN111191629B (en) Image visibility detection method based on multiple targets
CN105335973B (en) Apply to the visual processing method of strip machining production line
WO2016062159A1 (en) Image matching method and platform for testing of mobile phone applications
CN108007388A (en) A kind of turntable angle high precision online measuring method based on machine vision
CN109859164B (en) Method for visual inspection of PCBA (printed circuit board assembly) through rapid convolutional neural network
CN113379712B (en) Steel bridge bolt disease detection method and system based on computer vision
Yu et al. Robust robot pose estimation for challenging scenes with an RGB-D camera
CN112396656B (en) Outdoor mobile robot pose estimation method based on fusion of vision and laser radar
CN110910350A (en) Nut loosening detection method for wind power tower cylinder
CN113470090A (en) Multi-solid-state laser radar external reference calibration method based on SIFT-SHOT characteristics
CN110634137A (en) Bridge deformation monitoring method, device and equipment based on visual perception
CN109447062A (en) Pointer-type gauges recognition methods based on crusing robot
CN110084842A (en) A kind of secondary alignment methods of machine user tripod head servo and device
CN111563896A (en) Image processing method for catenary anomaly detection
CN106056121A (en) Satellite assembly workpiece fast-identification method based on SIFT image feature matching
Jiang et al. Learned local features for structure from motion of uav images: A comparative evaluation
CN113705564B (en) Pointer type instrument identification reading method
Dey et al. A robust performance evaluation metric for extracted building boundaries from remote sensing data
CN117314986A (en) Unmanned aerial vehicle cross-mode power distribution equipment inspection image registration method based on semantic segmentation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant