CN103745228B - Dynamic gesture identification method on basis of Frechet distance - Google Patents

Dynamic gesture identification method on basis of Frechet distance

Info

Publication number
CN103745228B
CN103745228B
Authority
CN
China
Prior art keywords
gesture
frame
sequence
video
characteristic sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310752309.1A
Other languages
Chinese (zh)
Other versions
CN103745228A (en)
Inventor
张长水 (Zhang Changshui)
侯广东 (Hou Guangdong)
崔润鹏 (Cui Runpeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201310752309.1A priority Critical patent/CN103745228B/en
Publication of CN103745228A publication Critical patent/CN103745228A/en
Application granted granted Critical
Publication of CN103745228B publication Critical patent/CN103745228B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a dynamic gesture recognition method based on the Fréchet distance, comprising at least the following steps: obtaining the hand position information of the dynamic gesture segment to be recognized in an input video; matching the obtained gesture-state variation feature sequence against the feature sequence of a preset model according to the Fréchet distance; and obtaining and outputting a similarity result according to the Fréchet distance matching information. According to the invention, the extracted feature sequence is compared with a previously obtained model under a similarity measure, and the class of the gesture to be recognized is determined according to the degree of similarity. Because the Fréchet distance is invariant to stretching of a time-series curve along the time dimension, the method adapts well to dynamic gestures whose speed of change is non-uniformly distributed along the time dimension.

Description

Dynamic gesture identification method based on Fréchet distance
Technical field
The present invention relates to a dynamic gesture recognition method based on the Fréchet distance.
Background technology
Gestures are an important means by which people express themselves and communicate with others in daily life. According to how a gesture changes along the time dimension, gestures are usually divided into static gestures and dynamic gestures.
A static gesture is a particular spatial state of the hand, characterized by features such as the relatively fixed positions of the fingers and palm, the orientation of the hand, and its posture. A specific static gesture can be represented by a corresponding point in a feature space. A dynamic gesture, by contrast, is composed of a continuously varying sequence of hand motions and postures over a period of time. If suitable features are chosen to describe the gesture at each point in time, then as time t = 1, 2, ..., T advances, the feature sequence (F_1, F_2, ..., F_T) corresponding to the dynamic gesture is obtained. If the elements of the sequence are plotted in the feature space in order, a dynamic gesture can always be described by a corresponding curve in that space.
Once a dynamic gesture has been described, the next problem is recognition. In general, the basic task of gesture recognition is to match the features corresponding to a gesture against existing models, thereby dividing the curves or points in the feature parameter space corresponding to gestures into different sets or classes.
At present, a gesture recognition system is typically organized into three parts: gesture modeling, gesture analysis, and gesture matching.
Gesture modeling is usually specific to a given gesture. In general, the choice of gesture model must take the actual application context into account. For a dynamic gesture, the model must reflect the characteristics of the gesture in both the time and space dimensions, while treating the motion of the gesture as a tightly connected sequential process.
Once the gesture model is fixed, the relevant feature parameters of the gesture are computed through a gesture analysis process; these features in effect constitute a description of a particular gesture posture and motion trajectory. Gesture analysis is relatively complex, typically comprising gesture localization, feature selection, feature extraction, and model parameter estimation. Through this series of steps, the part of the image or video relevant to the gesture is separated out; based on the localization of the gesture, features suitable for describing the observed gesture state are extracted from the relevant region or from the relationship between consecutive frames, and the model parameters are estimated from the training samples. The implementation of each step must likewise be designed according to the application context.
The basic task of gesture matching is then to match features against models, dividing the curves or points in the feature parameter space corresponding to gestures into different subsets. The most common approach is to apply some form of similarity measure between the extracted features and a previously obtained model, and to determine the class of the gesture to be recognized according to the degree of similarity.
Summary of the invention
In view of the above problems, the object of the present invention is to provide a dynamic gesture recognition method based on the Fréchet distance.
To achieve this object, the dynamic gesture recognition method based on the Fréchet distance of the present invention comprises at least the following steps:
Obtain the hand position information of the dynamic gesture segment to be recognized in the input video;
Obtain the gesture-state variation feature sequence of the consecutive frames at the hand position;
Match the obtained gesture-state variation feature sequence against the feature sequence of a preset model according to the Fréchet distance;
Obtain and output a similarity result according to the Fréchet distance matching information.
Preferably, the specific steps for obtaining the hand position information of the dynamic gesture segment to be recognized in the input video are:
According to the RGB values of the pixels in the video, obtain the probability that each pixel of any given frame belongs to a skin-color region;
From these probabilities, determine all the skin-color regions distributed in each frame of the video;
Obtain the optical flow value of each skin-color region between consecutive frames;
Among all the skin-color regions, select the region with the largest average optical flow value, i.e. the hand position region.
Preferably, the gesture-state variation feature sequence of the consecutive frames at the hand position includes a motion trajectory variation feature sequence and a posture variation feature sequence.
Preferably, the matching steps of the motion trajectory variation feature sequence are:
Take as a direction vector the average optical flow between any frame of the video image in the hand position region and its previous frame, namely F = (x, y), where x and y denote the horizontal and vertical components of the average optical flow respectively;
Obtain the motion feature sequence (F_1, F_2, ..., F_T) corresponding to the frames of the input video;
Obtain the motion feature sequence (M_1, M_2, ..., M_T) set in the model;
Choose any sequence fragment f = (F_i, F_{i+1}, ..., F_j) of the motion feature sequence (F_1, F_2, ..., F_T); the distance between any vector (x_1, y_1) in this fragment and any vector (x_2, y_2) in the motion feature sequence (M_1, M_2, ..., M_T) is:
$$d_1((x_1, y_1), (x_2, y_2)) = 1 - \frac{x_1 x_2 + y_1 y_2}{\sqrt{x_1^2 + y_1^2}\,\sqrt{x_2^2 + y_2^2}};$$
Taking the maximum of the distances d_1((x_1, y_1), (x_2, y_2)) along each alignment of the two sequences, and then the infimum of these maxima over all alignments, yields the Fréchet distance δ_F between f = (F_i, F_{i+1}, ..., F_j) and (M_1, M_2, ..., M_T);
Set a threshold ε_1 and compare it with δ_F:
If δ_F ≤ ε_1, judge that the gesture motion shown from the i-th frame to the j-th frame of the video to be recognized matches the model gesture;
Otherwise, it does not.
Preferably, the matching steps of the posture variation feature sequence are:
Take as a posture vector Z the coordinates, in any frame of the video image in the hand position region, of the fingertip of each finger and of the root joint of the middle finger relative to the wrist center, where Z is a 12-dimensional feature vector;
Obtain the posture feature sequence (Z_1, Z_2, ..., Z_T) corresponding to the frames of the input video;
Obtain the posture feature sequence (N_1, N_2, ..., N_T) set in the model;
Choose any sequence fragment f = (Z_i, Z_{i+1}, ..., Z_j) of the posture feature sequence (Z_1, Z_2, ..., Z_T); the distance between any vector Z in this fragment and any vector N in the posture feature sequence (N_1, N_2, ..., N_T) is:
$$d_2(Z, N) = \|Z - N\|_2;$$
Taking the maximum of the distances d_2(Z, N) along each alignment of the two sequences, and then the infimum of these maxima over all alignments, yields the Fréchet distance δ_F between f = (Z_i, Z_{i+1}, ..., Z_j) and (N_1, N_2, ..., N_T);
Set a threshold ε_2 and compare it with δ_F:
If δ_F ≤ ε_2, judge that the gesture motion shown from the i-th frame to the j-th frame of the video to be recognized matches the model gesture;
Otherwise, it does not.
The beneficial effects of the invention are:
By applying a similarity measure between the extracted feature sequence and a previously obtained model, the invention determines the class of the gesture to be recognized according to the degree of similarity; and because the Fréchet distance is invariant to stretching of a time-series curve along the time dimension, the method adapts well to dynamic gestures whose speed of change is non-uniformly distributed along the time dimension.
Brief description of the drawings
Fig. 1 is a schematic framework diagram of the dynamic gesture recognition method based on the Fréchet distance according to the embodiment of the present invention;
Fig. 2 is a schematic diagram of the trajectory motion of a specific gesture according to the embodiment of the present invention;
Fig. 3 is a diagram of the posture variation process of a specific gesture according to the embodiment of the present invention;
Fig. 4 is a schematic diagram of the selection of gesture posture features according to the embodiment of the present invention;
Fig. 5 is a schematic framework diagram of posture feature selection according to the embodiment of the present invention.
Detailed description of the invention
The present invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, the dynamic gesture recognition method based on the Fréchet distance according to the embodiment of the present invention comprises at least the following steps:
Obtain the hand position information of the dynamic gesture segment to be recognized in the input video;
Obtain the gesture-state variation feature sequence of the consecutive frames at the hand position;
Match the obtained gesture-state variation feature sequence against the feature sequence of a preset model according to the Fréchet distance;
Obtain and output a similarity result according to the Fréchet distance matching information.
Here it should be noted that the Fréchet distance is defined as follows. Any parametric curve in the space $\mathbb{R}^n$ can be expressed as a continuous mapping $f: [a, b] \to \mathbb{R}^n$, where $a < b$. Then for curves $f: [a, a'] \to \mathbb{R}^n$ and $g: [b, b'] \to \mathbb{R}^n$, the Fréchet distance is
$$\delta_F(f, g) := \inf_{\substack{\alpha: [0,1] \to [a,a'] \\ \beta: [0,1] \to [b,b']}} \; \max_{t \in [0,1]} d\big(f(\alpha(t)), g(\beta(t))\big)$$
In this formula, α and β are continuous, monotonically increasing functions defined on [0, 1] satisfying α(0) = a, α(1) = a', β(0) = b, and β(1) = b'.
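For the finite feature sequences used below, the continuous Fréchet distance is in practice approximated by its discrete counterpart, which can be computed with the classical Eiter–Mannila dynamic program. The sketch below is illustrative rather than taken from the patent; the function name and the pluggable pointwise metric dist are assumptions:

import numpy as np

def discrete_frechet(P, Q, dist):
    """Discrete Frechet distance between point sequences P and Q
    under a pointwise metric dist (Eiter-Mannila dynamic program)."""
    n, m = len(P), len(Q)
    ca = np.empty((n, m))
    ca[0, 0] = dist(P[0], Q[0])
    for i in range(1, n):                      # first column
        ca[i, 0] = max(ca[i - 1, 0], dist(P[i], Q[0]))
    for j in range(1, m):                      # first row
        ca[0, j] = max(ca[0, j - 1], dist(P[0], Q[j]))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           dist(P[i], Q[j]))
    return ca[n - 1, m - 1]

The same routine serves both matching stages described below; only the pointwise metric changes (the cosine distance d_1 for trajectories, the Euclidean distance d_2 for postures).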
The above method steps are further explained below:
First, the position of the gesture in the video image must be determined, i.e. the hand position in the image must be localized. The specific steps for determining the hand position information are:
According to the RGB values of the pixels in the video, obtain the probability that each pixel of any given frame belongs to a skin-color region;
From these probabilities, determine all the skin-color regions distributed in each frame of the video;
Obtain the optical flow value of each skin-color region between consecutive frames;
Among all the skin-color regions, select the region with the largest average optical flow value, i.e. the hand position region.
These steps are explained as follows. In trajectory recognition, the meaning of a specific gesture is conveyed by the overall motion trajectory of the hand. For each frame, the moving hand in the video can be localized through color and motion constraints.
On the one hand, each frame of the video consists of the three channels R, G and B. The RGB values of the skin-color and non-skin-color regions in a set of sample images are tallied into a table of size 256 × 256 × 256, each entry of which represents the probability that the corresponding RGB value appears as skin. In this way, each pixel of each frame of the video is assigned an approximate probability of being skin, from which the distribution of approximately skin-colored regions in each frame can be roughly estimated.
On the other hand, since the hand as a whole is always in motion while a gesture is being expressed, for a given frame the region of approximate skin color with the largest average optical flow with respect to the previous frame is chosen, which yields the localization of the moving hand.
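As a rough sketch of this localization step, assuming OpenCV, a precomputed 256 × 256 × 256 skin-probability table skin_lut, and an illustrative probability threshold (none of these names are from the patent):

import cv2
import numpy as np

def locate_hand(prev, curr, skin_lut, p_thresh=0.5):
    """Locate the moving hand: skin-colour regions are found from the
    colour lookup table, and the region with the largest mean optical
    flow with respect to the previous frame is returned."""
    b, g, r = cv2.split(curr)                  # frames are BGR
    p_skin = skin_lut[r, g, b]                 # per-pixel P(skin | R,G,B)
    mask = (p_skin > p_thresh).astype(np.uint8)

    # Dense optical flow between consecutive grey-level frames (Farneback)
    flow = cv2.calcOpticalFlowFarneback(
        cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY),
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)

    # Among the connected skin regions, keep the one whose mean flow
    # magnitude is largest: the moving hand
    n, labels = cv2.connectedComponents(mask)
    best, best_flow = None, 0.0
    for k in range(1, n):
        region = labels == k
        m = mag[region].mean()
        if m > best_flow:
            best, best_flow = region, m
    return best, flow                          # hand mask (or None) and flow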
After the hand position is localized, the feature sequences are extracted. The gesture-state variation feature sequence of the consecutive frames at the hand position includes a motion trajectory variation feature sequence and a posture variation feature sequence. Accordingly, the features of the motion trajectory and of the posture must be extracted separately.
The matching steps of the motion trajectory variation feature sequence are:
Take as a direction vector the average optical flow between any frame of the video image in the hand position region and its previous frame, namely F = (x, y), where x and y denote the horizontal and vertical components of the average optical flow respectively;
Obtain the motion feature sequence (F_1, F_2, ..., F_T) corresponding to the frames of the input video;
Obtain the motion feature sequence (M_1, M_2, ..., M_T) set in the model;
Choose any sequence fragment f = (F_i, F_{i+1}, ..., F_j) of the motion feature sequence (F_1, F_2, ..., F_T); the distance between any vector (x_1, y_1) in this fragment and any vector (x_2, y_2) in the motion feature sequence (M_1, M_2, ..., M_T) is:
$$d_1((x_1, y_1), (x_2, y_2)) = 1 - \frac{x_1 x_2 + y_1 y_2}{\sqrt{x_1^2 + y_1^2}\,\sqrt{x_2^2 + y_2^2}};$$
Taking the maximum of the distances d_1((x_1, y_1), (x_2, y_2)) along each alignment of the two sequences, and then the infimum of these maxima over all alignments, yields the Fréchet distance δ_F between f = (F_i, F_{i+1}, ..., F_j) and (M_1, M_2, ..., M_T);
Set a threshold ε_1 and compare it with δ_F:
If δ_F ≤ ε_1, judge that the gesture motion shown from the i-th frame to the j-th frame of the video to be recognized matches the model gesture;
Otherwise, it does not.
These steps are explained as follows:
1. Model building
A specific gesture trajectory is composed of several elementary strokes combined in a fixed order. Therefore, a sequence of direction vectors (x, y) is chosen as the model representing the gesture trajectory. As shown in Fig. 2, each point in the figure represents in turn an element of the model time series, and the direction vector (x, y) corresponding to each element is marked with an arrow.
2. Feature selection and extraction
The gesture trajectory is represented by a sequence of specific direction vectors, and the direction of motion of the gesture can be computed by optical flow. Based on the localization of the moving hand, the average optical flow F = (x, y) of the gesture region between the current frame and the adjacent previous frame is computed, where x and y denote the horizontal and vertical components of the average flow respectively; F serves as the feature corresponding to the current frame.
Thus for the frames (V_1, V_2, ..., V_T) of the video, the corresponding feature sequence (F_1, F_2, ..., F_T) is obtained.
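As a sketch, the per-frame feature is then simply the mean flow vector over the located hand region, reusing the hand mask and dense flow returned by the localization sketch above (names are illustrative):

def motion_feature(flow, hand_mask):
    """Average optical flow over the hand region: the direction
    vector F = (x, y) used as the trajectory feature of this frame."""
    return flow[hand_mask].mean(axis=0)        # 2-vector (x, y)

# Applied frame by frame, this turns a video (V_1, ..., V_T) into the
# motion feature sequence (F_1, ..., F_T).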
3. Fréchet distance matching
Choose any sequence fragment f = (F_i, F_{i+1}, ..., F_j) of the motion feature sequence (F_1, F_2, ..., F_T); the distance between any vector (x_1, y_1) in this fragment and any vector (x_2, y_2) in the motion feature sequence (M_1, M_2, ..., M_T) is:
$$d_1((x_1, y_1), (x_2, y_2)) = 1 - \frac{x_1 x_2 + y_1 y_2}{\sqrt{x_1^2 + y_1^2}\,\sqrt{x_2^2 + y_2^2}};$$
Taking the maximum of the distances d_1((x_1, y_1), (x_2, y_2)) along each alignment of the two sequences, and then the infimum of these maxima over all alignments, yields the Fréchet distance δ_F between f = (F_i, F_{i+1}, ..., F_j) and (M_1, M_2, ..., M_T);
Set a threshold ε_1 and compare it with δ_F:
If δ_F ≤ ε_1, judge that the gesture motion shown from the i-th frame to the j-th frame of the video to be recognized matches the model gesture;
Otherwise, it does not.
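Putting the pieces together, the following is a brute-force sketch of this matching stage. The patent does not specify how the fragments f are enumerated; scanning all (i, j) pairs as below is one straightforward, if quadratic, choice, and it reuses discrete_frechet from the sketch above:

import numpy as np

def d1(u, v):
    """Cosine distance between two mean-flow direction vectors, as in
    the formula above (a small epsilon guards against zero vectors)."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def match_trajectory(F, M, eps1):
    """Return every fragment (i, j) of the input feature sequence F whose
    discrete Frechet distance to the model sequence M, under the cosine
    metric d1, is at most the threshold eps1."""
    hits = []
    for i in range(len(F)):
        for j in range(i + 1, len(F)):
            if discrete_frechet(F[i:j + 1], M, d1) <= eps1:
                hits.append((i, j))
    return hits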
The matching steps of the posture variation feature sequence are:
Take as a posture vector Z the coordinates, in any frame of the video image in the hand position region, of the fingertip of each finger and of the root joint of the middle finger relative to the wrist center, where Z is a 12-dimensional feature vector;
Obtain the posture feature sequence (Z_1, Z_2, ..., Z_T) corresponding to the frames of the input video;
Obtain the posture feature sequence (N_1, N_2, ..., N_T) set in the model;
Choose any sequence fragment f = (Z_i, Z_{i+1}, ..., Z_j) of the posture feature sequence (Z_1, Z_2, ..., Z_T); the distance between any vector Z in this fragment and any vector N in the posture feature sequence (N_1, N_2, ..., N_T) is:
$$d_2(Z, N) = \|Z - N\|_2;$$
Taking the maximum of the distances d_2(Z, N) along each alignment of the two sequences, and then the infimum of these maxima over all alignments, yields the Fréchet distance δ_F between f = (Z_i, Z_{i+1}, ..., Z_j) and (N_1, N_2, ..., N_T);
Set a threshold ε_2 and compare it with δ_F:
If δ_F ≤ ε_2, judge that the gesture motion shown from the i-th frame to the j-th frame of the video to be recognized matches the model gesture;
Otherwise, it does not.
These steps are explained as follows:
1. Model building
For a specific gesture posture variation process, a representative sequence of static hand postures is chosen as the model describing the process. Each element of the posture sequence is taken from a database of static hand postures, and the elements correspond one-to-one with the features described below. Fig. 3 shows a posture variation diagram.
2. Feature selection and extraction
A specific hand posture is determined by information such as the orientation of the hand and the positions of the fingers relative to the palm. Therefore, the position coordinates of the fingertip of each finger and of the root joint of the middle finger, relative to the wrist center, are chosen as the features describing a specific gesture posture, as shown in Fig. 4.
Fig. 5 shows a schematic framework of posture feature selection. In the actual feature estimation, the image features of the current frame are looked up in the database of static hand postures, and the closest static posture model is taken as the estimate of the posture at this moment. The horizontal and vertical position coordinates of the five fingertips and the middle-finger root joint of this model, relative to the wrist center, 12 real numbers in total, serve as the actual posture features of the gesture at this moment.
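A sketch of assembling this 12-dimensional feature from the retrieved static posture model; the joint names and the dictionary of 2-D joint positions are assumptions made for illustration:

import numpy as np

FEATURE_JOINTS = ("thumb_tip", "index_tip", "middle_tip",
                  "ring_tip", "pinky_tip", "middle_root")

def posture_feature(joints, wrist):
    """12-D posture vector Z: the (x, y) coordinates, relative to the
    wrist centre, of the five fingertips and the middle-finger root
    joint of the retrieved static posture model."""
    return np.concatenate([np.asarray(joints[k]) - np.asarray(wrist)
                           for k in FEATURE_JOINTS])   # shape (12,)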
3. Fréchet distance matching
Choose any sequence fragment f = (Z_i, Z_{i+1}, ..., Z_j) of the posture feature sequence (Z_1, Z_2, ..., Z_T); the distance between any vector Z in this fragment and any vector N in the posture feature sequence (N_1, N_2, ..., N_T) is:
$$d_2(Z, N) = \|Z - N\|_2;$$
Taking the maximum of the distances d_2(Z, N) along each alignment of the two sequences, and then the infimum of these maxima over all alignments, yields the Fréchet distance δ_F between f = (Z_i, Z_{i+1}, ..., Z_j) and (N_1, N_2, ..., N_T);
Set a threshold ε_2 and compare it with δ_F:
If δ_F ≤ ε_2, judge that the gesture motion shown from the i-th frame to the j-th frame of the video to be recognized matches the model gesture;
Otherwise, it does not.
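The posture matching stage can reuse the discrete Fréchet sketch above unchanged; only the pointwise metric differs. A minimal illustration, not the patent's own code:

import numpy as np

def d2(z, n):
    """Euclidean distance between two 12-D posture vectors."""
    return np.linalg.norm(z - n)

# Analogous to match_trajectory above, with d2 and threshold eps2:
#   discrete_frechet(Z[i:j + 1], N, d2) <= eps2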
The above are only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that would readily occur to a person skilled in the art within the technical scope disclosed by the invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope defined by the claims.

Claims (5)

1. A dynamic gesture recognition method based on the Fréchet distance, characterized by comprising at least the following steps:
Obtain the hand position information of the dynamic gesture segment to be recognized in the input video;
Obtain the gesture-state variation feature sequence of the consecutive frames at the hand position;
Match the obtained gesture-state variation feature sequence against the feature sequence of a preset model according to the Fréchet distance;
Obtain and output a similarity result according to the Fréchet distance matching information.
2. The dynamic gesture recognition method based on the Fréchet distance according to claim 1, characterized in that the specific steps for obtaining the hand position information of the dynamic gesture segment to be recognized in the input video are:
According to the RGB values of the pixels in the video, obtain the probability that each pixel of any given frame belongs to a skin-color region;
From these probabilities, determine all the skin-color regions distributed in each frame of the video;
Obtain the optical flow value of each skin-color region between consecutive frames;
Among all the skin-color regions, select the region with the largest average optical flow value, i.e. the hand position region.
3. The dynamic gesture recognition method based on the Fréchet distance according to claim 1, characterized in that the gesture-state variation feature sequence of the consecutive frames at the hand position includes a motion trajectory variation feature sequence and a posture variation feature sequence.
4. The dynamic gesture recognition method based on the Fréchet distance according to claim 3, characterized in that the matching steps of the motion trajectory variation feature sequence are:
Take as a direction vector the average optical flow between any frame of the video image in the hand position region and its previous frame, namely F = (x, y), where x and y denote the horizontal and vertical components of the average optical flow respectively;
Obtain the motion feature sequence (F_1, F_2, ..., F_T) corresponding to the frames of the input video;
Obtain the motion feature sequence (M_1, M_2, ..., M_T) set in the model;
Choose any sequence fragment f = (F_i, F_{i+1}, ..., F_j) of the motion feature sequence (F_1, F_2, ..., F_T); the distance between any vector (x_1, y_1) in this fragment and any vector (x_2, y_2) in the motion feature sequence (M_1, M_2, ..., M_T) is:
$$d_1((x_1, y_1), (x_2, y_2)) = 1 - \frac{x_1 x_2 + y_1 y_2}{\sqrt{x_1^2 + y_1^2}\,\sqrt{x_2^2 + y_2^2}};$$
Taking the maximum of the distances d_1((x_1, y_1), (x_2, y_2)) along each alignment of the two sequences, and then the infimum of these maxima over all alignments, yields the Fréchet distance δ_F between f = (F_i, F_{i+1}, ..., F_j) and (M_1, M_2, ..., M_T);
Set a threshold ε_1 and compare it with δ_F:
If δ_F ≤ ε_1, judge that the gesture motion shown from the i-th frame to the j-th frame of the video to be recognized matches the model gesture;
Otherwise, if δ_F > ε_1, judge that the gesture motion shown from the i-th frame to the j-th frame of the video to be recognized does not match the model gesture.
5. The dynamic gesture recognition method based on the Fréchet distance according to claim 3, characterized in that the matching steps of the posture variation feature sequence are:
Take as a posture vector Z the coordinates, in any frame of the video image in the hand position region, of the fingertip of each finger and of the root joint of the middle finger relative to the wrist center, where Z is a 12-dimensional feature vector;
Obtain the posture feature sequence (Z_1, Z_2, ..., Z_T) corresponding to the frames of the input video;
Obtain the posture feature sequence (N_1, N_2, ..., N_T) set in the model;
Choose any sequence fragment f = (Z_i, Z_{i+1}, ..., Z_j) of the posture feature sequence (Z_1, Z_2, ..., Z_T); the distance between any vector Z in this fragment and any vector N in the posture feature sequence (N_1, N_2, ..., N_T) is:
$$d_2(Z, N) = \|Z - N\|_2;$$
Taking the maximum of the distances d_2(Z, N) along each alignment of the two sequences, and then the infimum of these maxima over all alignments, yields the Fréchet distance δ_F between f = (Z_i, Z_{i+1}, ..., Z_j) and (N_1, N_2, ..., N_T);
Set a threshold ε_2 and compare it with δ_F:
If δ_F ≤ ε_2, judge that the gesture motion shown from the i-th frame to the j-th frame of the video to be recognized matches the model gesture;
Otherwise, if δ_F > ε_2, judge that the gesture motion shown from the i-th frame to the j-th frame of the video to be recognized does not match the model gesture.
CN201310752309.1A 2013-12-31 2013-12-31 Dynamic gesture identification method on basis of Frechet distance Active CN103745228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310752309.1A CN103745228B (en) 2013-12-31 2013-12-31 Dynamic gesture identification method on basis of Frechet distance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310752309.1A CN103745228B (en) 2013-12-31 2013-12-31 Dynamic gesture identification method on basis of Frechet distance

Publications (2)

Publication Number Publication Date
CN103745228A CN103745228A (en) 2014-04-23
CN103745228B (en) 2017-01-11

Family

ID=50502245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310752309.1A Active CN103745228B (en) 2013-12-31 2013-12-31 Dynamic gesture identification method on basis of Frechet distance

Country Status (1)

Country Link
CN (1) CN103745228B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104317391B (en) * 2014-09-24 2017-10-03 华中科技大学 A kind of three-dimensional palm gesture recognition exchange method and system based on stereoscopic vision
CN107368181B (en) * 2016-05-12 2020-01-14 株式会社理光 Gesture recognition method and device
CN111405299B (en) * 2016-12-19 2022-03-01 广州虎牙信息科技有限公司 Live broadcast interaction method based on video stream and corresponding device thereof
CN107133361B (en) * 2017-05-31 2020-02-07 北京小米移动软件有限公司 Gesture recognition method and device and terminal equipment
CN107563286B (en) * 2017-07-28 2020-06-23 南京邮电大学 Dynamic gesture recognition method based on Kinect depth information
CN108509049B (en) * 2018-04-19 2020-04-10 北京华捷艾米科技有限公司 Method and system for inputting gesture function
CN108729902B (en) * 2018-05-03 2021-09-10 西安永瑞自动化有限公司 Online fault diagnosis system and method for oil pumping unit
CN112733718B (en) * 2021-01-11 2021-08-06 深圳市瑞驰文体发展有限公司 Foreign matter detection-based billiard game cheating identification method and system


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5454043A (en) * 1993-07-30 1995-09-26 Mitsubishi Electric Research Laboratories, Inc. Dynamic and static hand gesture recognition through low-level image analysis
US5594469A (en) * 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system
CN101763515A (en) * 2009-09-23 2010-06-30 中国科学院自动化研究所 Real-time gesture interaction method based on computer vision
CN101976330A (en) * 2010-09-26 2011-02-16 中国科学院深圳先进技术研究院 Gesture recognition method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies in Dynamic Gesture Recognition; Wang Yunfei; China Master's Theses Full-text Database, Information Science and Technology Series; 2012-05-15 (No. 05); full text *
Research on Vision-based Gesture Recognition Technology; Zhao Yafei; China Master's Theses Full-text Database, Information Science and Technology Series; 2012-07-15 (No. 07); full text *

Also Published As

Publication number Publication date
CN103745228A (en) 2014-04-23

Similar Documents

Publication Publication Date Title
CN103745228B (en) Dynamic gesture identification method on basis of Frechet distance
CN104589356B Dexterous-hand teleoperation control method based on Kinect human hand motion capture
CN107123083B Face editing method
CN104933734B Human body posture data fusion method based on multiple Kinects
CN102184541B (en) Multi-objective optimized human body motion tracking method
US20200219135A1 (en) Device, system and method for providing service relating to advertising and product purchase by using artificial-intelligence technology
CN103003846B Joint region display device, joint region detection device, joint region membership calculation device, joint-like region membership calculation device, and joint region display method
CN105159452B Control method and system based on human face modeling
CN105976395B Video target tracking method based on sparse representation
CN105536205A Upper limb training system based on monocular-video human body action sensing
CN113393550B (en) Fashion garment design synthesis method guided by postures and textures
CN101021900A Human face pose estimation method using dimensionality reduction
CN109117766A Dynamic gesture recognition method and system
CN105118023A Real-time video face cartoonization generation method based on human facial feature points
CN104573665A Continuous motion recognition method based on an improved Viterbi algorithm
CN102567716A Face synthesis system and implementation method
CN106803062A Gesture image recognition method based on stacked denoising autoencoder neural networks
CN106886986A Image fusion method based on adaptive group-structured sparse dictionary learning
CN109584153A Method, device and system for modifying eyes
CN111444488A (en) Identity authentication method based on dynamic gesture
CN108829233B (en) Interaction method and device
CN111857334A (en) Human body gesture letter recognition method and device, computer equipment and storage medium
CN105354532A Gesture recognition method based on hand motion frame data
CN110135277A Human behavior recognition method based on convolutional neural networks
CN106529486A Race recognition method based on a three-dimensional deformable face model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant