CN103902035B - Three-dimensional interaction device and control method thereof - Google Patents

Three-dimensional interaction device and control method thereof

Info

Publication number
CN103902035B
CN103902035B CN201210586668.XA CN201210586668A
Authority
CN
China
Prior art keywords
projection unit
gesture
interactive
pattern
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210586668.XA
Other languages
Chinese (zh)
Other versions
CN103902035A (en)
Inventor
林宏伟
王浩伟
董书屏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from TW101149581A (TWI454968B)
Application filed by Industrial Technology Research Institute ITRI
Publication of CN103902035A
Application granted
Publication of CN103902035B
Legal status: Active (current)
Anticipated expiration

Abstract

A three-dimensional interaction device and a control method thereof. The three-dimensional interaction device includes a projection unit, an image capturing unit, and an image processing unit. The projection unit projects an interactive pattern onto the surface of a body so that a user can perform interactive trigger operations on the pattern with gestures. The image capturing unit extracts a depth image within a capture range. The image processing unit receives the depth image and judges whether the depth image contains a hand image of the user. If so, the image processing unit performs hand geometry recognition on the hand image to obtain a gesture interaction semantic, and then controls the projection unit and the image capturing unit according to that semantic. Accordingly, the present disclosure provides a portable, contactless three-dimensional interaction device.

Description

Three-dimensional interaction device and control method thereof
Technical field
The present disclosure relates to a three-dimensional interaction device and a control method thereof.
Background
Research on contactless human-machine interaction interfaces has grown rapidly in recent years. Analysts at Forrester Research point out that if motion sensing and interaction can be fully immersed into daily activities and become part of everyday life, interactive applications will be an excellent opportunity in the future. Many companies have already developed various lifestyle application products, such as Microsoft's new-generation motion-sensing game device Kinect, which uses human motion as the interactive medium for games, and the three-dimensional (3D) interactive device erected in Boeing's jumbo-jet sales hall, which offers guests a virtual voyage.
Since depth images provide more complete spatial image information, the key to developing virtual interaction input technology lies in obtaining the third dimension (i.e., depth) effectively, immediately, and stably. In the prior art, however, most effort focuses on building a depth map to achieve "spatial 3D interaction". At present, depth map estimation is limited by factors such as distance and resolution, making it difficult to measure the absolute coordinates of slight changes in hand or even finger positions, and thus difficult to satisfy the demand for contactless human-machine interfaces over very small ranges.
Existing interactive input approaches mostly belong to large-scale interactive applications, such as large immersive virtual interaction devices, interactive digital whiteboards, and motion-sensing games. Smaller objects are comparatively difficult to position accurately in three-dimensional space; for example, precise operations at the scale of a hand and small-range interaction effects remain unsatisfactory. Some interactive devices have the user hold an infrared projector or wear markers, which makes tracking and recognition easy, but such approaches are only suitable for large-area interaction. Moreover, because the projection imaging range is fixed and marker size precision is insufficient, there is still some distance to go before portable, small-range, contactless three-dimensional human-machine interface interaction is realized.
Summary of the invention
In view of this, the present disclosure provides a three-dimensional interaction device and a control method thereof that offer contactless three-dimensional interaction with coordinate positioning of small set points, achieving a portable, project-anywhere three-dimensional interaction effect.
The disclosure proposes a three-dimensional interaction device that includes a projection unit, an image capturing unit, and an image processing unit. The projection unit projects an interactive pattern onto the surface of a body so that a user can perform interactive trigger operations on the interactive pattern with gestures, wherein the interactive pattern is projected within a projection range. The image capturing unit extracts a depth image within a capture range, wherein the capture range covers the projection range. The image processing unit is connected to the projection unit and the image capturing unit, receives the depth image, and judges whether the depth image contains a hand image of the user. If so, the image processing unit performs hand geometry recognition on the hand image to obtain a gesture interaction semantic. The image processing unit also controls the projection unit and the image capturing unit according to the gesture interaction semantic.
The disclosure further proposes a control method of a three-dimensional interaction device, where the device includes a projection unit and an image capturing unit. The control method includes the following steps. A coordinate correction procedure is performed on the projection coordinates of the projection unit and the capture coordinates of the image capturing unit. The projection unit projects an interactive pattern onto the surface of a body so that a user can perform interactive trigger operations on the interactive pattern with gestures, wherein the interactive pattern is projected within a projection range. The image capturing unit extracts a depth image within a capture range, wherein the capture range covers the projection range. Whether the depth image contains a hand image of the user is judged. If so, hand geometry recognition is performed on the hand image to obtain a gesture interaction semantic. The projection unit and the image capturing unit are then controlled according to the gesture interaction semantic.
To make the above features and advantages of the disclosure more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a schematic diagram of an application scenario of a three-dimensional interaction device according to an embodiment.
Fig. 2 is a block diagram of a three-dimensional interaction device according to an exemplary embodiment of the disclosure.
Fig. 3A is a flowchart of a control method of a three-dimensional interaction device according to an exemplary embodiment of the disclosure.
Fig. 3B is a flowchart of a coordinate correction procedure according to an exemplary embodiment of the disclosure.
Fig. 4 is a simplified schematic diagram of the coordinate correction procedure according to an exemplary embodiment of the disclosure.
Fig. 5 is a flowchart of a method by which the gesture recognition unit obtains the gesture interaction semantic according to another exemplary embodiment of the disclosure.
Fig. 6A is a schematic diagram of the result of analyzing the convex hull and convexity defects in a depth image.
Fig. 6B is a schematic diagram of an example of an interactive pattern projected by a projection unit.
Fig. 6C is a schematic diagram of an application scenario in which the projection unit projects an interactive pattern onto the user's hand.
Fig. 7 shows sample examples of a gesture interaction semantics database according to another exemplary embodiment of the disclosure.
[Main element symbol description]
10, 200: three-dimensional interaction device
20: surgical lamp
210: projection unit
220: image capturing unit
230: image processing unit
232: gesture recognition unit
240: coordinate correction unit
410, 420, 430: pictures
710, 720, 730, 740: samples
D1: projection range
E, F, G, H, e, f, g, h: coordinates
P1, P2, P3, P4: dot patterns
P5: hand-shaped pattern
R1: block of the capture range
R2: block of the projection range
U: user
S310–S360: steps of the control method of the three-dimensional interaction device
S312, S314, S316: steps of the coordinate correction procedure
S501–S521: steps of the method for obtaining the gesture interaction semantic
Detailed description of the embodiments
The three-dimensional interaction device of the disclosure combines the design of an image capturing unit/device and a projection unit/device and adopts an alignment technique between projection coordinates and capture coordinates, so that the device achieves portable, contactless interactive input operation. Because depth images are used for object recognition and tracking, the three-dimensional interaction device remains relatively resistant to strong light and to interference from ambient light under varying environmental and lighting backgrounds. In addition, a user of the three-dimensional interaction device of the disclosure does not need to wear markers; the device itself provides gesture recognition and three-dimensional finger positioning, and can establish contactless three-dimensional interaction for small set points, for example at the scale of a hand.
To make the content of the disclosure clearer, several embodiments according to which the disclosure can indeed be implemented are given below as examples. The embodiments are provided for illustration only and are not intended to limit the scope of the disclosure.
The three-dimensional interaction device of the disclosure can, for example, be combined with a surgical medical device, so that in addition to the usual contact-based physical button input, the medical device also provides three-dimensional finger positioning and can be operated by medical personnel in a contactless manner, reducing bacterial infection caused by contact. Fig. 1 is a schematic diagram of an application scenario of a three-dimensional interaction device. Referring to the embodiment of Fig. 1, the three-dimensional interaction device 10 is, for example, installed and fixed at position A of a surgical lamp 20; the device 10 may also be installed at position B or position C, but is not limited thereto, and the installation position can be set according to the needs of the actual application. Moreover, the three-dimensional interaction device 10 may also be installed on other medical devices and is not limited to a surgical lamp.
In this embodiment, the three-dimensional interaction device 10 includes at least a projection unit and an image capturing unit (not shown in Fig. 1). The projection unit may project an interactive pattern (for example a key legend or any other pattern interface) onto the hand region of the user U, such as the palm or the arm, while the image capturing unit extracts a depth image of the hand performing the interactive operation. As the angle or position of the surgical lamp 20 changes, the projection position of the projection unit and the capture position of the image capturing unit change accordingly. As shown in Fig. 1, the distance between point p and point s represents the maximum lateral capture range of the image capturing unit, and the distance between point q and point r represents the maximum lateral projection range of the projection unit.
Although changes in the angle or position of the surgical lamp 20 affect the capture or projection position of the three-dimensional interaction device 10, the device 10 of an exemplary embodiment can continuously extract, via the image capturing unit, depth images containing the hand of the user U, determine the hand coordinates, and have the projection unit project the interactive pattern onto the hand of the user U (for example within a small area of about 10 × 10 cm). In addition, the three-dimensional interaction device 10 can accurately analyze changes in the gesture motion of the user U, interpret the gesture interaction semantic of that motion, and present the interaction result.
A detailed implementation of the three-dimensional interaction device 10 of Fig. 1 is described below with an exemplary embodiment. Fig. 2 is a block diagram of a three-dimensional interaction device according to an exemplary embodiment of the disclosure.
Referring to Fig. 2, the three-dimensional interaction device 200 includes at least a projection unit 210, an image capturing unit 220, an image processing unit 230, a gesture recognition unit 232, and a coordinate correction unit 240. Their example functions are described as follows:
The projection unit 210 projects an interactive pattern onto the surface of a body. The body may be, for example, a projection screen, a human body, the user's hand, an operating table, a sick bed, a workbench, a desktop, a wall, a notebook, paper, a board, or any other object that can carry an image; none of these is limiting. In an exemplary embodiment, the projection unit 210 may be a pico projector (also called a mini projector). In general, the light source of a pico projector may be a light-emitting diode (LED) or another solid-state light source, to provide the lumens required by the projector and thereby increase the brightness of the projected image. A pico projector is roughly the size of a typical mobile phone on the market; it is therefore portable, has no usage-area restriction, and is suitable for the three-dimensional interaction device 200 of the disclosure.
For example, the projection unit 210 may use pico projectors of the following different specifications, or other similar specifications: the "BenQ Joybee GP2" (trade name), which offers 44-inch short-throw projection at a brightness of 200 lumens; the "ViewSonic PLED-W200" (trade name) handheld projector, with a 40-inch short-throw design, 250 lumens, and an LED bulb; or the "i-connect ViewX" (trade name) laser pico projector. These are only examples of devices the projection unit 210 may use; the disclosure does not require the products listed above as necessary elements.
In one embodiment, the image capturing unit 220 may use a "depth camera" for shooting. In addition to a two-dimensional picture, a depth camera emits an infrared light source and, from the time the infrared light takes to reflect off the photographed object, judges the distance between the object and the camera, obtaining a "depth image/depth map" describing how near or far the objects in the frame are. In an exemplary embodiment, the image capturing unit 220 may use a contactless, actively scanning depth camera.
For example, the image capturing unit 220 may use the following depth cameras, or others of similar specification: a time-of-flight camera, a stereo-vision depth camera, a laser speckle camera, or a laser tracking camera.
The image processing unit 230 may be implemented in software, hardware, or a combination thereof, without limitation. The software is, for example, application software or a driver. The hardware is, for example, a central processing unit (CPU), another programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), or a similar device.
The image processing unit also includes a gesture recognition unit 232. In addition to recognizing the hand image in the depth image extracted by the image capturing unit 220, the gesture recognition unit 232 can determine the geometric shape of the hand image and judge the gesture interaction semantic by comparing it against the samples in a gesture interaction semantics database (not shown in Fig. 2). Gesture interaction is only one example embodiment of the disclosure; interaction may also be carried out with objects such as a pen, a rod, or a pointer. Correspondingly, an object interaction database can be established for comparison.
The coordinate correction unit 240 is coupled to the projection unit 210 and the image capturing unit 220 and corrects the projection coordinates of the projection unit 210 and the capture coordinates of the image capturing unit 220. The correction method is detailed later.
Fig. 3A is a flowchart of a control method of a three-dimensional interaction device according to an exemplary embodiment of the disclosure. The method of this exemplary embodiment is applicable to the three-dimensional interaction device 200 of Fig. 2, so the steps of this exemplary embodiment are described below with reference to the components of the device 200:
As described in step S310, the coordinate correction unit 240 performs a coordinate correction procedure on the projection coordinates of the projection unit 210 and the capture coordinates of the image capturing unit 220 of the three-dimensional interaction device 200. Once the coordinate transformation relation between the projection coordinates and the capture coordinates is obtained, step S320 can follow: the projection unit 210 projects a first interactive pattern onto the surface of a body so that the user can perform interactive trigger operations on the first interactive pattern with gestures, wherein the first interactive pattern is projected within a set projection range. The body in this exemplary embodiment may be the user's hand, a human body or other object, the surface of a platform, a projection screen, an operating table, a sick bed, a workbench, a desktop, a wall, a notebook, paper, a board, or any other object that can carry an image; this is not limiting.
In step S330, the image capturing unit 220 extracts a depth image within a capture range, wherein the capture range covers the set projection range. After the depth image is obtained, in step S340 the image processing unit 230 judges, via the gesture recognition unit 232, whether the depth image contains a hand image of the user. If so, hand geometry recognition is further performed on the hand image to obtain a gesture interaction semantic (step S350). In step S360, the image processing unit 230 controls the projection unit 210 and the image capturing unit 220 according to the gesture interaction semantic. In one embodiment, the image processing unit 230 may, according to the gesture interaction semantic, control the projection unit 210 to project a second interactive pattern (that is, an interaction result pattern) onto the surface of the body so as to continue the interaction; further, the image processing unit 230 may control the image capturing unit 220 to continuously extract depth images containing the user's gesture.
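By way of illustration only, the overall flow of steps S310–S360 can be sketched as a simple control loop in Python. The component interfaces used below (calibrate(), detect_hand(), recognize_gesture(), project_result(), and so on) are hypothetical names, not part of the disclosure.

```python
def run_interaction_loop(projector, camera, recognizer, corrector):
    """Minimal sketch of steps S310-S360; all component interfaces are hypothetical."""
    # S310: establish the projection <-> capture coordinate transformation.
    transform = corrector.calibrate(projector, camera)

    # S320: project the first interactive pattern onto the body surface.
    projector.project_pattern("first_interactive_pattern")

    while True:
        # S330: extract a depth image covering the projection range.
        depth = camera.capture_depth()

        # S340: judge whether the depth image contains the user's hand.
        hand = recognizer.detect_hand(depth)
        if hand is None:
            continue

        # S350: hand geometry recognition -> gesture interaction semantic.
        semantic = recognizer.recognize_gesture(hand)

        # S360: control projector and camera according to the semantic,
        # e.g. project a second (result) pattern at the hand position.
        target = transform.capture_to_projection(hand.centroid)
        projector.project_result(semantic, at=target)
```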
The steps of the coordinate correction procedure performed by the coordinate correction unit 240 in step S310 are first described in detail below. Fig. 3B is a flowchart of the coordinate correction procedure according to an exemplary embodiment of the disclosure, and Fig. 4 is a simplified schematic diagram of the same procedure; please refer to the embodiments of Fig. 3B and Fig. 4 together.
In an exemplary embodiment, the coordinate correction procedure can be divided into sub-steps S312–S316. First, the projection unit 210 projects boundary marker symbols and a center marker symbol at one or more boundary points and at the center of the projection range, respectively, to form a preset correction pattern (step S312). In this exemplary embodiment, the boundary marker symbol is, for example, a dot pattern and the center marker symbol is, for example, a hand-shaped pattern, but they are not limited thereto; both marker symbols may be any figure. Referring to the embodiment of Fig. 4, picture 410 shows the correction pattern projected by the projection unit 210, which includes four dot patterns P1, P2, P3, and P4 and a hand-shaped pattern P5 projected in the center block. The dot patterns P1, P2, P3, and P4 indicate the maximum boundary range that the projection unit 210 can project, while the hand-shaped pattern P5 is used, for example, to prompt the user about the interactive trigger operation to be performed next with a gesture.
Next, the image capturing unit 220 extracts a three-primary-color (RGB) image containing the correction pattern (step S314). Note that when performing the coordinate correction procedure the image capturing unit 220 extracts an RGB image rather than a depth image, because a depth image carries only depth information whereas the figure of the correction pattern must be obtained. Referring to the embodiment of Fig. 4, picture 420 shows the result after the RGB image extracted by the image capturing unit 220 is binarized and edge-processed. Besides the correction pattern, the image capturing unit 220 also captures the background, for example a work platform and other background objects.
The coordinate correction unit 240 can then use an image comparison method to analyze the coordinate positions of the boundary marker symbols and the center marker symbol in the RGB image, so as to obtain the coordinate transformation relation between the projection coordinates and the capture coordinates (step S316). The image comparison may, for example, use a chamfer-distance image comparison method, but is not limited thereto; any image comparison method that can locate the boundary and center marker symbols is applicable to this step.
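As an illustration of one way such marker localization could be carried out, the sketch below scores a binary marker template against the binarized capture with a basic chamfer-distance match built on OpenCV's distance transform. This is a minimal sketch under assumed inputs; the Canny thresholds, the search step, and the exhaustive sliding-window search are assumptions, not details of the disclosure.

```python
import cv2
import numpy as np

def chamfer_score(edge_dist, template_edges, top_left):
    """Mean distance-transform value under the template's edge pixels; lower is a better match."""
    y, x = top_left
    h, w = template_edges.shape
    window = edge_dist[y:y + h, x:x + w]
    ys, xs = np.nonzero(template_edges)
    return float(window[ys, xs].mean())

def locate_marker(captured_gray, template_gray):
    """Find the best-matching position of a marker template in the captured frame."""
    cap_edges = cv2.Canny(captured_gray, 50, 150)
    tpl_edges = cv2.Canny(template_gray, 50, 150)
    # Distance transform of the inverted edge map: 0 on edges, growing away from them.
    dist = cv2.distanceTransform(255 - cap_edges, cv2.DIST_L2, 3)
    h, w = tpl_edges.shape
    best_score, best_pos = np.inf, (0, 0)
    # Coarse exhaustive search; a real implementation would use a pyramid or priors.
    for y in range(0, captured_gray.shape[0] - h, 4):
        for x in range(0, captured_gray.shape[1] - w, 4):
            s = chamfer_score(dist, tpl_edges, (y, x))
            if s < best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score
```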
Referring to the embodiment of Fig. 4, picture 430 shows that the block R1 formed by coordinates E, F, G, and H is the capture range of the image capturing unit 220. Coordinate E (0, 0) is the origin of the image capturing unit 220, and coordinate H (640, 480) indicates that the size of the image extracted by the image capturing unit 220 is 640 × 480 (pixels). The block R2 formed by coordinates e, f, g, and h is the projection range of the projection unit 210. Coordinate e (230, 100) is the origin of the projection unit 210; subtracting coordinate e (230, 100) from coordinate h (460, 500) shows that the maximum size the projection unit 210 can project is 230 × 400 (pixels). In this way, the coordinate correction unit 240 learns the coordinate transformation relation between the projection coordinates and the capture coordinates.
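Once the boundary markers have been located in capture coordinates, the transformation relation can, for example, be estimated as a planar homography from the four point correspondences. The sketch below reuses projector-space values consistent with picture 430 (origin e at (230, 100), a 230 × 400 pixel projection area); the detected capture-space positions are hypothetical numbers for illustration only.

```python
import cv2
import numpy as np

# Projector-space positions of the four dot markers P1-P4 (example values
# consistent with picture 430: origin e = (230, 100), area 230 x 400 pixels).
proj_pts = np.float32([[230, 100], [460, 100], [230, 500], [460, 500]])

# Positions where those dots were found in the captured 640 x 480 image
# (hypothetical detection results from the marker-matching step).
cap_pts = np.float32([[180, 90], [500, 95], [175, 420], [505, 430]])

# Homography mapping capture coordinates -> projection coordinates.
H = cv2.getPerspectiveTransform(cap_pts, proj_pts)

def capture_to_projection(point_xy):
    """Map one (x, y) point from capture coordinates to projection coordinates."""
    src = np.float32([[point_xy]])                 # shape (1, 1, 2), as perspectiveTransform expects
    dst = cv2.perspectiveTransform(src, H)
    return tuple(dst[0, 0])

# Example: a hand centroid found at (320, 240) in the captured frame maps to
# this position in projector space.
print(capture_to_projection((320, 240)))
```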
The detailed procedure by which the gesture recognition unit 232 of Fig. 3A performs steps S340 and S350 is described below with another exemplary embodiment. Fig. 5 is a flowchart of a method by which the gesture recognition unit obtains the gesture interaction semantic according to another exemplary embodiment of the disclosure.
Referring to the embodiment of Fig. 5, after receiving a depth image the gesture recognition unit 232 first analyzes the depth image with a histogram statistical method (step S501). Each pixel or block in the depth image has a corresponding depth value: for example, the closer the user's hand is to the image capturing unit 220, the smaller the depth value; conversely, the farther the hand is from the image capturing unit 220, the larger the depth value. Thus, the horizontal axis of the depth histogram produced in this step represents depth-value levels, and the vertical axis represents the corresponding number of pixels.
It is then judged whether the depth values of the depth image include a corresponding image block exceeding a depth threshold, in order to discriminate whether the depth image contains a hand image of the user (step S503). The depth threshold is, for example, set to 200.
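A minimal sketch of steps S501–S503 with NumPy is given below. The threshold of 200 follows the example in the text; the minimum pixel count used to accept a hand region, and the direction of the comparison, are assumptions (the latter depends on the depth camera's convention).

```python
import numpy as np

DEPTH_THRESHOLD = 200      # example threshold from the text
MIN_HAND_PIXELS = 1500     # assumed minimum block size to count as a hand

def depth_histogram(depth, bins=256):
    """Step S501: histogram of depth levels (x-axis: depth value, y-axis: pixel count)."""
    return np.histogram(depth, bins=bins, range=(0, int(depth.max()) + 1))

def hand_mask_from_depth(depth):
    """Step S503: return a mask of the block exceeding the depth threshold, or None.
    Whether 'exceeding' means nearer or farther depends on the camera's depth convention."""
    mask = depth > DEPTH_THRESHOLD
    return mask if mask.sum() >= MIN_HAND_PIXELS else None

# Usage on a hypothetical 640 x 480 depth frame:
depth_frame = np.random.randint(0, 1024, size=(480, 640), dtype=np.uint16)
hist, bin_edges = depth_histogram(depth_frame)
hand_mask = hand_mask_from_depth(depth_frame)
```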
If the depth image contains a hand image of the user, the convex hull and the convexity defects (depressions) of the hand are analyzed from the hand contour in the depth image (step S505). For example, Fig. 6A is a schematic diagram of the result of analyzing the convex hull and the convexity defects in a depth image.
After the information about the convex hull and the convexity defects in the depth image is obtained, it can be used to discriminate the hand geometry (step S507). It is then judged whether the hand image is the user's left-hand image or right-hand image (step S509). If it is a left-hand image, the projection unit 210 will project the interactive pattern onto the user's left palm, so step S511 follows: the centroid position of the (left) hand image is analyzed and determined. In step S513, the centroid position is output to the projection unit 210 via the coordinate correction unit 240. In this way, the projection unit 210 can adjust the projection position of the interactive pattern, or adjust the size of the interactive pattern, according to the centroid coordinate. That is, in one embodiment, the projection unit 210 can project the interactive pattern exactly onto the user's left-hand region, and the user can then use right-hand gesture changes to perform interactive trigger operations on the interactive pattern projected on the left hand.
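For steps S505–S513, the contour analysis, convex hull, convexity defects, and centroid can be computed with standard OpenCV calls, as in the hedged sketch below; the assumption that the largest contour is the hand, and the helper name capture_to_projection() from the earlier calibration sketch, are illustrative only.

```python
import cv2
import numpy as np

def hand_geometry(hand_mask):
    """Steps S505-S511: contour, convex hull, convexity defects, and centroid of the hand.
    `hand_mask` is the boolean/binary mask produced by the depth-threshold step."""
    mask8 = (hand_mask > 0).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)        # assume the largest blob is the hand

    hull_idx = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull_idx)   # candidate finger valleys (may be None)

    m = cv2.moments(contour)
    if m["m00"] == 0:
        return None
    centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))   # step S511

    return contour, hull_idx, defects, centroid

# Step S513: the centroid could then be mapped into projector coordinates, e.g. with the
# capture_to_projection() helper sketched earlier, so the projected pattern follows the palm.
```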
For example, Fig. 6B is a schematic diagram of an example of the interactive pattern projected by the projection unit, where the dashed line D1 represents the projection range that the projection unit can project. Fig. 6C is a schematic diagram of an application scenario in which the projection unit projects the interactive pattern onto the user's left-hand region. Because the projection unit 210 has obtained the centroid coordinate of the hand image, it projects the interactive pattern shown in Fig. 6B onto the user's hand region. Moreover, when the user's hand moves, the projection position of the projection unit 210 changes accordingly.
Returning to step S509: if the hand image is a right-hand image, it indicates that the user performs the interactive trigger operation with a right-hand gesture, so step S515 follows: the gesture recognition unit 232 analyzes the depth positions of one or more fingertips in the (right) hand image and tracks the movement trajectories of these fingertips.
Further, the gesture recognition unit 232 analyzes the movement trajectories of the index finger and the thumb in the (right) hand image (step S517), and judges the gesture interaction semantic represented by the movement trajectories by comparing the samples in the gesture interaction semantics database with the depth positions (step S519).
In one embodiment, the gesture interaction semantics database is built from several groups of basic gesture shapes used as samples for learning and comparison, including basic gesture shapes such as fingers fully open, closed, grabbing, placing, clicking, pushing, zooming in, and zooming out, together with their corresponding movement trajectories. Fig. 7 shows sample examples of the gesture interaction semantics database according to another exemplary embodiment of the disclosure. For example, sample 710 is an operation of pressing an object with a finger to select it, and its gesture interaction semantic is "tap to select"; sample 720 is a one-way drag after a finger clicks an object, and its semantic is "slide to scroll"; sample 730 is a rotational drag after a finger clicks an object, and its semantic is "spin to scroll"; and sample 740 is a finger clicking the object surface while pushing gently in one direction and then immediately leaving the scene, and its semantic is "flick to nudge".
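One simple way such a database comparison could be realized is nearest-neighbour matching of a resampled fingertip trajectory against stored template trajectories, as sketched below. The template entries, the resampling length, and the Euclidean distance measure are assumptions for illustration, not the matching method of the disclosure.

```python
import numpy as np

# Hypothetical database: each semantic label maps to one template trajectory,
# stored as an (N, 3) array of (x, y, depth) fingertip samples.
GESTURE_DB = {
    "tap to select":   np.array([[0, 0, 50], [0, 0, 30], [0, 0, 50]], dtype=float),
    "slide to scroll": np.array([[0, 0, 40], [40, 0, 40], [80, 0, 40]], dtype=float),
}

def resample(traj, n=16):
    """Linearly resample a trajectory to n points so paths of different lengths can be compared."""
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, k]) for k in range(traj.shape[1])], axis=1)

def classify_gesture(fingertip_traj):
    """Step S519: return the database label whose template is closest to the observed trajectory."""
    obs = resample(fingertip_traj)
    obs -= obs[0]                               # translate so every trajectory starts at the origin
    best_label, best_dist = None, np.inf
    for label, template in GESTURE_DB.items():
        tpl = resample(template)
        tpl -= tpl[0]
        dist = np.linalg.norm(obs - tpl)        # simple Euclidean distance between resampled paths
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label, best_dist
```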
Optionally, in step S521, the image processing unit 230 outputs the gesture interaction semantic obtained by the gesture recognition unit 232 to the projection unit 210, so that the projection unit 210 projects a second interactive pattern (for example an interaction result pattern) corresponding to the gesture interaction semantic. For example, in the application scenario of Fig. 6C, if the gesture interaction semantic is "tap to select", the user tapping "1", "2", or "3" produces a different interaction result pattern, which can be set by the user according to the needs of the actual application. For instance, if the three-dimensional interaction device 200 is installed on a surgical lamp in a medical environment, medical personnel tapping "1" with a gesture may make the projection unit project a marker pattern (such as an arrow) onto the position of the patient where surgery is to be performed; the medical personnel can further use gestures to continue interacting with the marker pattern, for example dragging it to another position or zooming it in or out. Tapping "2" with a gesture may make the projection unit project the patient's data, and tapping "3" may make it project a surgical flowchart.
In addition, different gesture interaction semantics may also carry one or more three-dimensional interaction parameters. For example, if the gesture interaction semantic is "slide to scroll", the interactive pattern or object projected by the projection unit moves with the gesture until the gesture stops; the speed and direction of that movement, however, must be determined by three-dimensional interaction parameters. While analyzing and tracking the hand movement trajectory in the depth image, the gesture recognition unit 232 can take the depth coordinates, depth-value changes, acceleration, and force of the hand image as three-dimensional interaction parameters, and the image processing unit 230 can send these parameters to the projection unit 210. In this way, the projection unit 210 learns from the three-dimensional interaction parameters in which direction and how fast the projected interactive pattern or object should move, thereby enhancing the three-dimensional interaction effect of the disclosure.
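The sketch below shows one way such parameters (direction, speed, acceleration, depth change) might be derived from a fingertip trajectory sampled at a known frame rate; the parameter names and the 30 fps assumption are illustrative, not values from the disclosure.

```python
import numpy as np

def interaction_parameters(fingertip_traj, fps=30.0):
    """Derive simple 3D interaction parameters from an (N, 3) trajectory of
    (x, y, depth) fingertip samples captured at `fps` frames per second."""
    traj = np.asarray(fingertip_traj, dtype=float)
    dt = 1.0 / fps
    velocity = np.diff(traj, axis=0) / dt             # per-frame velocity vectors
    speed = np.linalg.norm(velocity[:, :2], axis=1)   # in-plane speed
    accel = np.diff(speed) / dt                       # in-plane acceleration

    direction = traj[-1, :2] - traj[0, :2]            # overall drag direction
    norm = np.linalg.norm(direction)
    unit_dir = direction / norm if norm > 0 else np.zeros(2)

    return {
        "direction": unit_dir,                                        # which way the projected object moves
        "mean_speed": float(speed.mean()) if len(speed) else 0.0,     # how fast it moves
        "peak_accel": float(np.abs(accel).max()) if len(accel) else 0.0,
        "depth_change": float(traj[-1, 2] - traj[0, 2]),              # e.g. a push toward the surface
    }
```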
In summary, the disclosure combines an image capturing unit/device and a projection unit/device and uses an alignment technique between projection coordinates and capture coordinates, so that the three-dimensional interaction device of the disclosure has three-dimensional hand positioning capability and allows the user to interact with the projected content in three-dimensional space in a contactless manner. In addition, because depth images are used for hand recognition and tracking, the device has relatively good resistance to strong light and to interference from ambient light under varying environmental and lighting backgrounds. Furthermore, combining the three-dimensional interaction device of the disclosure with a medical device allows medical personnel to perform input operations accurately and in a contactless manner within a three-dimensional space around small set points, for example at the scale of a hand, reducing the bacterial infection caused by contact.
Although the disclosure has been described above with embodiments, they do not limit the disclosure. Those skilled in the art may make some changes and refinements without departing from the spirit and scope of the disclosure; the protection scope of the disclosure is therefore defined by the appended claims.

Claims (21)

1. A three-dimensional interaction device, comprising:
a projection unit, projecting a first interactive pattern onto the surface of a body so that a user can perform an interactive trigger operation on the first interactive pattern with a gesture, wherein the first interactive pattern is projected within a projection range;
an image capturing unit, extracting a depth image within a capture range, wherein the capture range covers the projection range; and
an image processing unit, connected to the projection unit and the image capturing unit, receiving the depth image and judging whether the depth image contains a hand image of the user; if so, the image processing unit performs hand geometry recognition on the hand image to obtain a gesture interaction semantic, and controls the projection unit and the image capturing unit according to the gesture interaction semantic.
2. The three-dimensional interaction device as claimed in claim 1, wherein:
the projection unit projects boundary marker symbols and a center marker symbol at at least one boundary point and at the center of the projection range, respectively, to form a correction pattern used for correction.
3. The three-dimensional interaction device as claimed in claim 2, further comprising:
a coordinate correction unit, coupled to the projection unit and the image capturing unit, receiving a three-primary-color image extracted by the image capturing unit, wherein the three-primary-color image contains the correction pattern, and the coordinate correction unit uses an image comparison method to analyze the coordinate positions of the boundary marker symbol and the center marker symbol in the three-primary-color image, so as to obtain a coordinate transformation relation between the projection coordinates of the projection unit and the capture coordinates of the image capturing unit.
4. The three-dimensional interaction device as claimed in claim 2, wherein:
the boundary marker symbol projected by the projection unit is a dot pattern.
5. The three-dimensional interaction device as claimed in claim 2, wherein:
the center marker symbol projected by the projection unit is a hand-shaped pattern.
6. The three-dimensional interaction device as claimed in claim 1, wherein the image processing unit further comprises:
a gesture recognition unit, analyzing the depth image with a histogram statistical method and judging the hand image from the convex hull and the convexity defects of the depth image.
7. The three-dimensional interaction device as claimed in claim 6, wherein:
the gesture recognition unit further judges a centroid position of the hand image and outputs a centroid coordinate corresponding to the centroid position to the projection unit.
8. The three-dimensional interaction device as claimed in claim 7, wherein:
the projection unit adjusts a projection position of a second interactive pattern and a size of the second interactive pattern according to the centroid coordinate.
9. The three-dimensional interaction device as claimed in claim 6, wherein:
the gesture recognition unit further judges a depth position of at least one fingertip in the hand image and tracks a movement trajectory of the at least one fingertip, and the gesture recognition unit judges the gesture interaction semantic represented by the movement trajectory by comparing samples in a gesture interaction semantics database and the depth position of the hand image.
10. The three-dimensional interaction device as claimed in claim 9, wherein:
the gesture recognition unit sends the gesture interaction semantic and at least one three-dimensional interaction parameter obtained by analyzing the depth position and the movement trajectory to the projection unit and the image capturing unit, so as to control the projection unit to project a second interactive pattern corresponding to the gesture interaction semantic and to control the image capturing unit to continuously extract the depth image containing the gesture.
11. The three-dimensional interaction device as claimed in claim 1, wherein the gesture interaction semantic at least includes a tap operation, a one-way drag operation, a rotational drag operation, a flick operation, a zoom-in operation, and a zoom-out operation.
12. The three-dimensional interaction device as claimed in claim 1, wherein the projection unit is a pico projector.
13. The three-dimensional interaction device as claimed in claim 1, wherein the image capturing unit is one of a time-of-flight camera, a stereo-vision depth camera, a laser speckle camera, or a laser tracking camera.
14. A control method of a three-dimensional interaction device, wherein the three-dimensional interaction device includes a projection unit and an image capturing unit, the control method comprising:
performing a coordinate correction procedure on the projection coordinates of the projection unit and the capture coordinates of the image capturing unit;
projecting, by the projection unit, a first interactive pattern onto the surface of a body so that a user can perform an interactive trigger operation on the first interactive pattern with a gesture, wherein the first interactive pattern is projected within a projection range;
extracting, by the image capturing unit, a depth image within a capture range, wherein the capture range covers the projection range;
judging whether the depth image contains a hand image of the user, and if so, performing hand geometry recognition on the hand image to obtain a gesture interaction semantic; and
controlling the projection unit and the image capturing unit according to the gesture interaction semantic.
15. The control method of the three-dimensional interaction device as claimed in claim 14, wherein the coordinate correction procedure includes:
projecting, by the projection unit, boundary marker symbols and a center marker symbol at at least one boundary point and at the center of the projection range, respectively, to form a correction pattern;
extracting, by the image capturing unit, a three-primary-color image containing the correction pattern; and
analyzing, with an image comparison method, the coordinate positions of the boundary marker symbol and the center marker symbol in the three-primary-color image, so as to obtain a coordinate transformation relation between the projection coordinates and the capture coordinates.
16. The control method of the three-dimensional interaction device as claimed in claim 15, wherein the boundary marker symbol is a dot pattern.
17. The control method of the three-dimensional interaction device as claimed in claim 15, wherein the center marker symbol is a hand-shaped pattern.
18. The control method of the three-dimensional interaction device as claimed in claim 14, wherein judging the depth image and obtaining the gesture interaction semantic includes:
analyzing the depth image with a histogram statistical method;
judging the hand image from the depth values of the depth image and the convex hull and convexity defects of the depth image; and
judging a depth position of at least one fingertip in the hand image and tracking a movement trajectory of the at least one fingertip, and judging the gesture interaction semantic represented by the movement trajectory by comparing samples in a gesture interaction semantics database and the depth position of the hand image.
19. The control method of the three-dimensional interaction device as claimed in claim 18, further comprising, after obtaining the gesture interaction semantic:
sending at least one three-dimensional interaction parameter obtained by analyzing the depth position and the movement trajectory to the projection unit and the image capturing unit, so as to control the projection unit to project a second interactive pattern corresponding to the gesture interaction semantic and to control the image capturing unit to continuously extract the depth image containing the gesture.
20. The control method of the three-dimensional interaction device as claimed in claim 19, further comprising:
judging a centroid position of the hand image and outputting a centroid coordinate corresponding to the centroid position to the projection unit; and
adjusting, by the projection unit, a projection position of the second interactive pattern and a size of the second interactive pattern according to the centroid coordinate.
21. The control method of the three-dimensional interaction device as claimed in claim 14, wherein the gesture interaction semantic at least includes a tap operation, a one-way drag operation, a rotational drag operation, a flick operation, a zoom-in operation, and a zoom-out operation.
CN201210586668.XA 2012-12-24 2012-12-28 Three-dimensional interaction device and control method thereof Active CN103902035B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW101149581 2012-12-24
TW101149581A TWI454968B (en) 2012-12-24 2012-12-24 Three-dimensional interactive device and operation method thereof

Publications (2)

Publication Number Publication Date
CN103902035A CN103902035A (en) 2014-07-02
CN103902035B true CN103902035B (en) 2016-11-30


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6160899A (en) * 1997-07-22 2000-12-12 Lg Electronics Inc. Method of application menu selection and activation using image cognition
CN101730874A (en) * 2006-06-28 2010-06-09 诺基亚公司 Touchless gesture based input
CN102508578A (en) * 2011-10-09 2012-06-20 清华大学深圳研究生院 Projection positioning device and method as well as interaction system and method
US8228315B1 (en) * 2011-07-12 2012-07-24 Google Inc. Methods and systems for a virtual input device
CN102763342A (en) * 2009-12-21 2012-10-31 三星电子株式会社 Mobile device and related control method for external output depending on user interaction based on image sensing module
CN102799271A (en) * 2012-07-02 2012-11-28 Tcl集团股份有限公司 Method and system for identifying interactive commands based on human hand gestures


Similar Documents

Publication Publication Date Title
TWI454968B (en) Three-dimensional interactive device and operation method thereof
Wang et al. Real-time hand-tracking with a color glove
CN110308789B (en) Method and system for mixed reality interaction with peripheral devices
CN108776773B (en) Three-dimensional gesture recognition method and interaction system based on depth image
Shen et al. Vision-based hand interaction in augmented reality environment
JP6079832B2 (en) Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method
CN103226387B (en) Video fingertip localization method based on Kinect
CN103544472B (en) A kind of processing method and processing unit based on images of gestures
KR20160108386A (en) 3d silhouette sensing system
Wacker et al. Physical guides: An analysis of 3d sketching performance on physical objects in augmented reality
CN107688391A (en) A kind of gesture identification method and device based on monocular vision
CN102915112A (en) System and method for close-range movement tracking
CN104166509A (en) Non-contact screen interaction method and system
CN103500010B (en) A kind of video fingertip localization method
CN112667078B (en) Method, system and computer readable medium for quickly controlling mice in multi-screen scene based on sight estimation
CN104460967A (en) Recognition method of upper limb bone gestures of human body
CN107682595B (en) interactive projection method, system and computer readable storage medium
CN104714650B (en) A kind of data inputting method and device
CN107918507A (en) A kind of virtual touchpad method based on stereoscopic vision
JP2017219942A (en) Contact detection device, projector device, electronic blackboard system, digital signage device, projector device, contact detection method, program and recording medium
CN103902035B (en) Three-dimensional interaction device and control method thereof
Park et al. Interactive display of image details using a camera-coupled mobile projector
KR101868520B1 (en) Method for hand-gesture recognition and apparatus thereof
CN105045390A (en) Human upper limb skeleton gesture identification method
Bai et al. Poster: Markerless fingertip-based 3D interaction for handheld augmented reality in a small workspace

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant