CN102509348A - Method for showing actual object in shared enhanced actual scene in multi-azimuth way - Google Patents

Method for showing actual object in shared enhanced actual scene in multi-azimuth way

Info

Publication number
CN102509348A
CN102509348A CN2011102872078A CN201110287207A
Authority
CN
China
Prior art keywords
real-world object
augmented reality
shared
reality scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011102872078A
Other languages
Chinese (zh)
Other versions
CN102509348B (en)
Inventor
陈小武
赵沁平
金鑫
郭侃侃
郭宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201110287207.8A priority Critical patent/CN102509348B/en
Publication of CN102509348A publication Critical patent/CN102509348A/en
Application granted granted Critical
Publication of CN102509348B publication Critical patent/CN102509348B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for representing a real-world object from multiple directions in a shared augmented reality scene. The method comprises the following steps: optimizing the placement of multiple video sequences according to the vertex distribution of the scene region, the coverage of the scene by the video sequences, and the number of devices required; determining the observation area of each video sequence; matching and determining the shared areas between video sequences using fiducial markers and the scale-invariant features of the video sequences in different directions; computing the spatial position relationship of all video sequences; completing the virtual-real registration of each video sequence using the fiducial markers; integrating the local descriptions of the real-world object from the video sequences and estimating a three-dimensional convex hull of the real-world object according to computer vision principles; providing a fast registration method for a newly entering collaborative user based on the spatial position relationship of all video sequences; and rendering the two-dimensional mapping of the real-world object in the video sequence of each direction. The method has a low application cost and can respond quickly to newly entering collaborative users.

Description

A method for multi-directional representation of a real-world object of a shared augmented reality scene
Technical field
The present invention relates to the fields of computational geometry, image processing and augmented reality, and in particular to a method for representing real-world objects in the multi-directional video sequences of a shared augmented reality scene.
Background technology
In collaborative augmented reality, different collaborating users enter an augmented reality scene, often with different viewpoints and different interactions in their respective perception and interaction areas, in order to jointly accomplish a predetermined task. To support these users, a shared augmented reality scene must be established, which requires the collaborative augmented reality system to describe the continuously changing true environment from multiple directions and to build a three-dimensional model of the scene based on multiple video sequences. Within this, the multi-directional representation of real-world objects in the scene is a problem that urgently needs to be solved; it specifically includes setting up multiple video sequences reasonably and effectively so as to cover the shared scene, transmitting the unified observation information of each video sequence, and representing the real-world objects in the scene from multiple directions.
To present true environment information, Regenbrecht of the Chrysler technical research division in Germany not only equips each user with a head-mounted display, but also sets up cameras to obtain global information about the true environment. The system supports four or more users observing the same shared augmented reality scene, interacting with the virtual objects in the scene, carrying out various collaborative tasks, and using true light sources to simulate virtual lighting effects. Although this system obtains video information from multiple directions, it does not process this information jointly, nor does it consider whether the video sequences are set up reasonably.
For the problem of setting up multiple video sequences, Klee posed the famous art gallery problem: given a gallery, how many cameras are needed, and where should they be placed, to guarantee that the union of all camera viewing areas is maximal. Chvátal first pointed out that, for a polygon with n vertices, all points in the polygon can always be observed by ⌊n/3⌋ cameras. Fisk proved this theorem: the polygon is first decomposed into triangles, then a three-coloring algorithm is applied to all vertices so that adjacent vertices have different colors; the vertex positions corresponding to the color used least often are the camera placement points.
For the interaction problem of multiple video sequences, Pilet et al. of the Swiss Federal Institute of Technology in Lausanne proposed a method for matching the registration information of multiple video sequences in a multi-user collaborative augmented reality, together with a method for acquiring illumination information. If the marker cannot be completely observed from one of the videos, the registration information of other directions can be used to complement the registration information of that direction, so that virtual objects are still correctly registered on the marker. However, this system does not consider the influence of the errors produced during complementation on the complementation result.
Schall et al. of Graz University of Technology in Austria used fiducial markers to complete the stitching and modeling of a large-scale scene. The method divides the large-scale scene into several parts and places a series of fiducial markers between every two adjacent parts; these fiducial markers are used to determine the spatial position relationships of the video sequences. Measurement starts from the space numbered 1; after the information of all collection points in this space has been obtained and stored in the computer, the next space is measured, so that the collection point information of all spaces can be obtained. Because the markers in the junction area of different spaces are measured twice, the position relationship of two spaces can be obtained after a coordinate transformation. After the whole space has been measured in this way, the position relationships among the multiple spaces are obtained, and a three-dimensional model of the whole scene can be built from these relationships. The coordinate transformation algorithm of this project is worth drawing on.
Wang Xiyong et al. of the University of Florida incorporated real-world objects into a virtual scene based on laser-scanned 3D models registered with color markers. First, a 3D scanner expresses the real-world object as a virtual model made of scan lines; then the generated noise is removed, the scan lines are arranged, and finally the gaps between scan lines are filled. To track the real-world object in real time, the system uses color markers to obtain the position of the real-world object, computes the position of the corresponding virtual model in virtual coordinates, and renders it. The system gives the user a good experience of events in the virtual scene, but the instruments it uses are relatively expensive, so it is limited to a small number of users.
From the analysis of the current state of research at home and abroad, it can be concluded that although many organizations or institutions are studying the multi-directional representation of real-world objects in collaborative augmented reality systems, three problems remain. First, most current shared augmented reality scenes rarely consider how to use fewer cameras to obtain a larger observation range; they all assume that the video sequences already set up can fully obtain the needed true environment information, and they do not describe how multiple video sequences should be set up. Second, after multiple video sequences are set up, few existing mechanisms study the complementation of registration information; even where this problem is considered, the influence of errors on the complementation result is seldom considered, and error analysis is not fed back into the complementation computation. Third, in representing real-world objects for collaborative tasks, many methods model the entire environment, which is computationally expensive and does not consider the needs of information sharing in a shared augmented reality scene. Because each collaborating user has different perception and interaction areas, the real-world object information each user needs is often different.
Summary of the invention
The objective of the invention is to use fewer cameras to obtain a larger observation range, to reduce the influence of errors on the complementation result, and to realize information sharing in a shared augmented reality scene.
To this end, the invention discloses a method for multi-directional representation of a real-world object of a shared augmented reality scene. The steps of the multi-directional representation method for said real-world object are as follows:
Step 1: abstract the shared augmented reality scene onto a plane to form a polygonal structure; partition this polygonal structure, and compute from the partition result the minimum number of observation points required to observe and cover all parts of the shared augmented reality scene;
Step 2: in the shared augmented reality scene, set at least one fiducial marker for each observation point; determine the observation area of each observation point; compute scale-invariant feature vectors in the observation area of each observation point; and use the fiducial markers and feature matching to determine the shared observation areas between observation points;
Step 3: each observation point performs three-dimensional registration under the guidance of the fiducial markers, from which the position of each observation point is obtained;
Step 4: compute the spatial position relationship of two observation points from their observations of the fiducial marker in their shared observation area;
Step 5: repeat step 4 until the spatial position relationship between each observation point and at least one other observation point has been computed;
Step 6: capture the information of the real-world object to be modeled, and construct the silhouette map and the disparity map of this real-world object with respect to each observation point;
Step 7: from the silhouette maps and the disparity maps, quickly create the real-world object model of any new observation point.
Preferably, in the multi-directional modeling method for the real-world object of the shared augmented reality scene, in said step 1, abstracting the shared augmented reality scene onto a plane is realized by projecting the edges of the shared augmented reality scene onto a horizontal plane.
Preferably, in the multi-directional modeling method for the real-world object of the shared augmented reality scene, in said step 1, the polygonal structure is partitioned by a triangulation method, that is, the polygonal structure is divided into a plurality of mutually non-overlapping triangles.
Preferably, in the multi-directional modeling method for the real-world object of the shared augmented reality scene, the observation point is a camera position.
Preferably, in the multi-directional modeling method for the real-world object of the shared augmented reality scene, the information of the real-world object to be modeled is obtained by manual selection.
Preferably, in the multi-directional modeling method for the real-world object of the shared augmented reality scene, in said step 6, the information of the real-world object to be modeled is obtained by detecting a real-world object newly entering the shared augmented reality scene and capturing that real-world object.
Preferably, in the multi-directional modeling method for the real-world object of the shared augmented reality scene, in said step 6, the disparity map of the real-world object with respect to each observation point is constructed by taking two camera positions at each observation point and measuring the depth distance of the real-world object through the two camera positions to form the disparity map.
The beneficial effects of the invention are as follows:
1. For the need to obtain true environment information in a shared augmented reality scene, the invention uses a relatively small number of acquisition devices while still guaranteeing that the scene environment information is fully obtained, which significantly reduces the computational cost of subsequent shared augmented reality applications.
2. The complementation of registration information solves the problem that a single video sequence cannot observe the whole scene, and at the same time partly avoids the possibility that the three-dimensional registration of a video sequence itself fails.
3. For the common problem of representing real-world objects in a shared augmented reality scene, a vision-based three-dimensional convex hull method is adopted. While stating the spatial relationship between real-world object surface points and video sequences, the growth of the amount of information is controlled, so the method can respond quickly to newly joining collaborative users and satisfies the needs of the shared augmented reality environment.
Description of the drawings
Fig. 1 is a module design diagram of the multi-directional representation method for a real-world object of a shared augmented reality scene according to the invention;
Fig. 2 is a schematic diagram of the complementation of registration information among multiple video sequences in the method of the invention;
Fig. 3 is a schematic diagram of adjacent video sequences transmitting registration information in the method of the invention;
Fig. 4 is a flow chart of the transmission of registration information among multiple video sequences in the method of the invention;
Fig. 5 is a schematic diagram of the fast three-dimensional registration of a new collaborative user in the method of the invention.
Embodiment
The invention is further described below with reference to the accompanying drawings, so that those of ordinary skill in the art can implement it with reference to this description.
As shown in Fig. 1, the multi-directional representation method for a real-world object of a shared augmented reality scene according to the invention comprises the following steps:
Step 1: the multi-video-sequence setup computation module abstracts the shared augmented reality scene area as a plane polygon P, and a scan-line algorithm is used to divide the plane polygon into a plurality of mutually non-overlapping triangles: the polygon is first divided into several monotone polygons, and each monotone polygon is then divided into triangles. In the process of decomposing the polygon into triangles, line segments are produced that connect different vertices of the polygon and lie entirely inside it; such segments are called diagonals. When selecting setup points, the number of diagonals passing through each vertex is first counted and the maximum value is recorded; the vertex with the largest number of diagonals is taken as a setup point. The set of triangles observable from this vertex is then computed and removed from the region P, and the diagonals of the remaining region are updated at the same time, until the remaining region is empty. When a real-world object is present in the scene P, the interior points of the object do not need to be observed, and the object occludes part of the scene; in this case the real-world object in the scene is abstractly represented as a polygon P' inside P, that is, an interior "hole" of P. During initialization the region of P' is partitioned off with diagonals and removed from P, guaranteeing that every point in P - P' is observed;
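The following is a minimal, hedged sketch of the greedy setup-point selection loop described in step 1. It is not the patent's exact algorithm: the triangulation is assumed to be given, visibility is approximated by triangle incidence rather than a true visibility test, and the function and type names are illustrative only.

```python
# Greedy selection of observation points over a triangulated polygon (sketch).
from typing import Dict, List, Set, Tuple

Vertex = Tuple[float, float]
Triangle = Tuple[int, int, int]   # indices into the vertex list

def choose_observation_points(vertices: List[Vertex],
                              triangles: List[Triangle]) -> List[int]:
    """Greedily pick vertices until every triangle is covered."""
    uncovered: Set[int] = set(range(len(triangles)))
    # visible[v] = indices of triangles assumed observable from vertex v
    visible: Dict[int, Set[int]] = {v: set() for v in range(len(vertices))}
    for t_idx, tri in enumerate(triangles):
        for v in tri:
            visible[v].add(t_idx)

    chosen: List[int] = []
    while uncovered:
        # pick the vertex that observes the most still-uncovered triangles
        best = max(visible, key=lambda v: len(visible[v] & uncovered))
        gain = visible[best] & uncovered
        if not gain:          # defensive: no further progress possible
            break
        chosen.append(best)
        uncovered -= gain
    return chosen
```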
Step 2: as shown in Fig. 2, in the shared augmented reality scene at least one fiducial marker is set for each observation point and the observation area of each observation point is determined. Scale-invariant features are then used to extract and match feature points between the two video-sequence images. The scale-invariant feature is a local image feature: it is invariant to rotation, scale and brightness changes, and also remains stable to a certain degree under viewpoint change, affine transformation and noise. A scale space is first generated; based on the scale space, preliminary spatial extremum points are detected, and unstable extremum points are then removed. To make the algorithm rotation-invariant, the gradient direction distribution of the pixels in the neighborhood of each remaining extremum point is used to assign a direction parameter to each keypoint, and finally a 128-dimensional feature descriptor is generated for each feature point. After the video images of the two directions have been obtained and a large number of feature descriptors have been generated by the scale-invariant algorithm, matching point pairs are computed by Euclidean-distance matching. A comparison threshold is introduced to measure the degree of matching between feature points: if the ratio of the smallest Euclidean distance to the second-smallest is greater than the comparison threshold, the match fails; otherwise the match succeeds. The smaller the comparison threshold, the more accurate the matching result and the fewer matching point pairs obtained;
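A hedged illustration of the scale-invariant feature matching with the comparison (distance-ratio) threshold described in step 2, using OpenCV's SIFT implementation; the file paths and the 0.7 ratio are assumptions, not values from the patent.

```python
import cv2

def match_scale_invariant_features(img1_path: str, img2_path: str,
                                   ratio: float = 0.7):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()                       # 128-dimensional descriptors
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)        # two nearest neighbours each

    good = []
    for best, second in knn:
        # keep the pair only if the best match is clearly better than the
        # second best; a smaller ratio gives fewer but more reliable pairs
        if best.distance < ratio * second.distance:
            good.append((kp1[best.queryIdx].pt, kp2[best.trainIdx].pt))
    return good
```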
Step 3: for two video sequences V(1) and V(2) that have a shared region, if a certain fiducial marker m can be observed and feature point extraction finds that feature points exist in the image region corresponding to m in the video images of both V(1) and V(2), feature point matching is performed. Matching point pairs will then be concentrated in the image region corresponding to m; if the number of matching point pairs is greater than a certain threshold, the marker m can be considered to lie in the shared region of V(1) and V(2). After the shared region of the different video sequences has been determined, a point p in the shared region is selected (p being expressed in the world coordinate system). According to projection theory, different video sequences have different camera coordinate systems; because p projects onto the image planes of the different video sequences, it produces different view matrices, and since p is a point in the shared region, these can be used to compute the spatial position relationship of the different video sequences. Let the view matrix of p with respect to V(1) be M1, which transforms the world coordinate system of p into the camera coordinate system of V(1), and let the view matrix of p with respect to V(2) be M2, which transforms the world coordinate system of p into the camera coordinate system of V(2); then the transformation from the camera coordinate system of V(1) to the camera coordinate system of V(2) is M1⁻¹·M2, and this matrix represents the position relationship of the video sequences;
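A small sketch of the relative-pose computation in step 3 from two view matrices. Note the convention: with column vectors (x_cam = M·x_world) the camera-1-to-camera-2 transform is M2·M1⁻¹, which corresponds to the patent's M1⁻¹·M2 written for row vectors; the example matrices below are hypothetical values, not data from the patent.

```python
import numpy as np

def relative_transform(M1: np.ndarray, M2: np.ndarray) -> np.ndarray:
    """Transform mapping camera-1 coordinates to camera-2 coordinates
    (column-vector convention, 4x4 homogeneous view matrices)."""
    return M2 @ np.linalg.inv(M1)

if __name__ == "__main__":
    # two hypothetical view matrices obtained from marker-based registration
    M1 = np.eye(4)
    M2 = np.eye(4)
    M2[:3, 3] = [0.5, 0.0, -1.0]             # camera 2 displaced from camera 1
    T_12 = relative_transform(M1, M2)
    p_cam1 = np.array([0.0, 0.0, 2.0, 1.0])  # a point seen by camera 1
    p_cam2 = T_12 @ p_cam1                   # the same point in camera 2's frame
    print(p_cam2)
```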
Step 4: as shown in Fig. 3, at a given moment a point p in space lies in the observation range of video sequence V(1), and its position relationship with V(1) is SP(V(1), p). For video sequence V(2), p may lie outside the observation range of V(2), or occlusion between real-world objects may prevent V(2) from seeing p, so the position relationship SP(V(2), p) of p and V(2) cannot be obtained directly. From the previously determined position relationship SP(V(2), V(1)) of V(1) and V(2), one derives SP(V(2), p) = SP(V(2), V(1)) · SP(V(1), p). Complementation is divided into complementation between adjacent video sequences and complementation between non-adjacent video sequences; the "distance" between video sequences is measured in hops, and for two video sequences with a shared region this distance is exactly one hop. According to the complementation algorithm, when a video sequence V(0) needs to obtain the registration information of a point p in space, V(0) sends a request to its set of adjacent video sequences. If a video sequence V(i) receives the complementary registration request from V(i-1) and can observe the registration information of p, it sends a reply to V(i-1) containing the calibrated registration information M_PS(i) of the point p with respect to V(i); this reply message is returned along the query path in reverse to the originally querying video sequence V(0). If video sequence V(i) still cannot obtain the registration information of p, it forwards a new request to its adjacent video sequence V(i+1). When V(0) receives the complementary registration reply, it computes the registration information of p for V(0) from the spatial position relationships between the video sequences;
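A hedged sketch of the complementation relation SP(V(2), p) = SP(V(2), V(1)) · SP(V(1), p) generalized along a multi-hop query path, with homogeneous 4×4 matrices standing in for the SP(·,·) relations; the chain and its values are illustrative assumptions.

```python
from typing import List
import numpy as np

def complement_registration(pairwise: List[np.ndarray],
                            sp_last_p: np.ndarray) -> np.ndarray:
    """pairwise[k] is SP(V(k), V(k+1)) as a 4x4 homogeneous matrix;
    sp_last_p is SP(V(n), p). Returns SP(V(0), p)."""
    result = sp_last_p
    for sp in reversed(pairwise):       # fold the chain back towards V(0)
        result = sp @ result
    return result

if __name__ == "__main__":
    # V(0) -- V(1) -- V(2) observes p (two hops, hypothetical values)
    sp_01 = np.eye(4); sp_01[:3, 3] = [1.0, 0.0, 0.0]
    sp_12 = np.eye(4); sp_12[:3, 3] = [0.0, 1.0, 0.0]
    sp_2p = np.eye(4); sp_2p[:3, 3] = [0.0, 0.0, 3.0]
    print(complement_registration([sp_01, sp_12], sp_2p))
```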
Step 5: each newly acquired frame is compared with a background image using the background subtraction method to obtain the projected outline of the real-world object in a given video sequence. Thresholding is applied to each pair of pixel differences to determine the positions in each frame that belong to the foreground; the foreground is the real-world object that the user cares about;
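A minimal sketch of the background-subtraction and thresholding of step 5 using OpenCV; the threshold value and the morphological cleanup are assumptions added for illustration.

```python
import cv2

def foreground_silhouette(frame_bgr, background_bgr, thresh: int = 30):
    """Return a binary silhouette mask of the moving real-world object."""
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    background = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(frame, background)            # per-pixel difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # small morphological opening to remove isolated noise pixels
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask
```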
Step 6: for a video sequence, the disparity map corresponding to its direction must be obtained in order to compute the world coordinate values of the observed points. First, two cameras are set up for the video sequence of this direction to obtain two images. Except for occluded regions, every pixel in one image has a corresponding matching point in the other image; the matching pixel sequences are estimated by searching along the corresponding scan lines of the two images under the epipolar constraint, and the horizontal-coordinate difference of each matching pair is computed and saved in the disparity map as the disparity value. Once the silhouette of the object in the image has been computed and the disparity map of this direction obtained, the projection formula is used to compute the three-dimensional coordinate values, in the world coordinate system, of the real-world object surface points visible from this direction. Binocular stereo vision is used in each direction: the extrinsic parameter matrices of the left and right images are obtained, the distance b between the left and right cameras is computed, and the intrinsic parameter matrix of the cameras is saved based on a camera calibration method. By traversing the foreground points in the silhouette map and using the intrinsic and extrinsic parameter matrices, b, and the disparity value of each point, the projection formula projects each two-dimensional pixel of the image plane into the camera coordinate system, yielding the corresponding three-dimensional point. Since there are cameras in several directions, the different camera coordinate systems must be unified into the world coordinate system: the projected three-dimensional points are transformed into the world coordinate system using the extrinsic parameters already obtained, so that the three-dimensional point sets computed for each direction are all unified in the world coordinate system;
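A hedged sketch of the back-projection in step 6, using the standard stereo relations Z = f·b/d, X = (u − cx)·Z/f and Y = (v − cy)·Z/f; the focal length, baseline, principal point and camera-to-world extrinsic matrix are assumed inputs from the calibration described above.

```python
import numpy as np

def silhouette_to_world_points(silhouette: np.ndarray,
                               disparity: np.ndarray,
                               f: float, b: float,
                               cx: float, cy: float,
                               cam_to_world: np.ndarray) -> np.ndarray:
    """silhouette: binary mask (H, W); disparity: float array (H, W) in pixels;
    cam_to_world: 4x4 homogeneous transform. Returns an (N, 3) point set."""
    vs, us = np.nonzero((silhouette > 0) & (disparity > 0))
    d = disparity[vs, us]
    Z = f * b / d                          # depth from disparity
    X = (us - cx) * Z / f
    Y = (vs - cy) * Z / f
    pts_cam = np.stack([X, Y, Z, np.ones_like(Z)], axis=1)   # homogeneous
    pts_world = (cam_to_world @ pts_cam.T).T                 # unify to world
    return pts_world[:, :3]
```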
Step 7: after the real-world object surface points observable from a given video sequence have been computed, what is obtained is a set of partial surface points, and the three-dimensional convex hull of this point cloud is then used to represent the surface shape of the real-world object. First, four non-coplanar points are selected to construct a tetrahedron; the remaining points are then added to the polyhedron constructed so far in a certain order. If a newly added point lies inside the polyhedron, it can be ignored directly and the next point is processed; if the newly added point lies outside the polyhedron, new edges and faces are constructed and added to the current polyhedron, and the edges and faces that are no longer visible are deleted. Finally the model of the real-world object is obtained;
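For step 7, SciPy's incremental Qhull wrapper can serve as a stand-in for the incremental convex-hull construction described above; the random points below are placeholders for the surface points recovered in step 6.

```python
import numpy as np
from scipy.spatial import ConvexHull

def object_convex_hull(points: np.ndarray) -> ConvexHull:
    """points: (N, 3) array of surface points in world coordinates."""
    return ConvexHull(points, incremental=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((200, 3))              # placeholder surface points
    hull = object_convex_hull(pts)
    print(hull.simplices.shape)             # triangular faces of the model
    # points from another direction can later be folded in incrementally
    hull.add_points(rng.random((50, 3)))
    hull.close()
```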
Step 8: as shown in Fig. 5, a newly entering user first sends an initialization request to all existing users to determine its neighboring video sequences. If there is a shared region between video sequences, their spatial position relationship can be computed from a point in the shared region, and this position information is saved; if there is no shared region between the video sequences, the video sequences of the other directions return a null value over the network. Once the new user has determined the left and right video sequences nearest to it, it sends a request to these two video sequences to obtain the existing real-world object representation information of these two directions. What the new direction obtains is still the point set of real-world object surface points; the point data returned from the two directions are fused to obtain the model of the new real-world object.
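A brief sketch of the fusion step for a newly entering user: the point sets returned by the two nearest directions (already in world coordinates) are merged and the object model rebuilt as a convex hull; the deduplication tolerance is an assumption for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

def fuse_for_new_user(points_left: np.ndarray,
                      points_right: np.ndarray) -> ConvexHull:
    """Merge two (N, 3) world-coordinate point sets and rebuild the model."""
    merged = np.vstack([points_left, points_right])
    merged = np.unique(np.round(merged, 4), axis=0)   # drop duplicate samples
    return ConvexHull(merged)
```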

Claims (7)

1. A multi-directional modeling method for a real-world object of a shared augmented reality scene, characterized in that it comprises the following steps:
step 1: abstracting the shared augmented reality scene onto a plane to form a polygonal structure, partitioning the polygonal structure, and computing from the partition result the minimum number of observation points required to observe and cover all parts of the shared augmented reality scene;
step 2: in the shared augmented reality scene, setting at least one fiducial marker for each observation point, determining the observation area of each observation point, computing scale-invariant feature vectors in the observation area of each observation point, and determining the shared observation areas between observation points by means of the fiducial markers and feature matching;
step 3: each observation point performing three-dimensional registration under the guidance of the fiducial markers;
step 4: computing the spatial position relationship of two observation points from their observations of the fiducial marker in their shared observation area;
step 5: repeating step 4 until the spatial position relationship between each observation point and at least one other observation point has been computed;
step 6: capturing information of the real-world object to be modeled, and constructing a silhouette map and a disparity map of the real-world object with respect to each observation point;
step 7: quickly creating, from the silhouette maps and the disparity maps, the real-world object model of any new observation point.
2. The multi-directional modeling method for a real-world object of a shared augmented reality scene according to claim 1, characterized in that, in said step 1, abstracting the shared augmented reality scene onto a plane is realized by projecting the edges of the shared augmented reality scene onto a horizontal plane.
3. The multi-directional modeling method for a real-world object of a shared augmented reality scene according to claim 1, characterized in that, in said step 1, the polygonal structure is partitioned by a triangulation method, that is, the polygonal structure is divided into a plurality of mutually non-overlapping triangles.
4. The multi-directional modeling method for a real-world object of a shared augmented reality scene according to claim 1, characterized in that the observation point is a camera position.
5. The multi-directional modeling method for a real-world object of a shared augmented reality scene according to claim 1, characterized in that the information of the real-world object to be modeled is obtained by manual selection.
6. The multi-directional modeling method for a real-world object of a shared augmented reality scene according to claim 1, characterized in that, in said step 6, the information of the real-world object to be modeled is obtained by detecting a real-world object newly entering the shared augmented reality scene and capturing that real-world object.
7. The multi-directional modeling method for a real-world object of a shared augmented reality scene according to claim 1, characterized in that, in said step 6, the disparity map of the real-world object with respect to each observation point is constructed by taking two camera positions at each observation point and measuring the depth distance of the real-world object through the two camera positions to form the disparity map.
CN201110287207.8A 2011-09-26 2011-09-26 Method for showing actual object in shared enhanced actual scene in multi-azimuth way Active CN102509348B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110287207.8A CN102509348B (en) 2011-09-26 2011-09-26 Method for showing actual object in shared enhanced actual scene in multi-azimuth way

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110287207.8A CN102509348B (en) 2011-09-26 2011-09-26 Method for showing actual object in shared enhanced actual scene in multi-azimuth way

Publications (2)

Publication Number Publication Date
CN102509348A true CN102509348A (en) 2012-06-20
CN102509348B CN102509348B (en) 2014-06-25

Family

ID=46221425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110287207.8A Active CN102509348B (en) 2011-09-26 2011-09-26 Method for showing actual object in shared enhanced actual scene in multi-azimuth way

Country Status (1)

Country Link
CN (1) CN102509348B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617317A (en) * 2013-11-26 2014-03-05 Tcl集团股份有限公司 Automatic layout method and system of intelligent 3D (three dimensional) model
WO2015000286A1 (en) * 2013-07-03 2015-01-08 央数文化(上海)股份有限公司 Three-dimensional interactive learning system and method based on augmented reality
CN104596502A (en) * 2015-01-23 2015-05-06 浙江大学 Object posture measuring method based on CAD model and monocular vision
CN104657568A (en) * 2013-11-21 2015-05-27 深圳先进技术研究院 Multiplayer mobile game system and multiplayer mobile game method based on intelligent glasses
CN105278905A (en) * 2014-07-08 2016-01-27 三星电子株式会社 Device and method to display object with visual effect
CN106984043A (en) * 2017-03-24 2017-07-28 武汉秀宝软件有限公司 The method of data synchronization and system of a kind of many people's battle games
CN108564661A (en) * 2018-01-08 2018-09-21 佛山市超体软件科技有限公司 A kind of recording method based on augmented reality scene
CN111052110A (en) * 2017-07-27 2020-04-21 韦斯特尔电子工业和贸易有限责任公司 Method, apparatus and computer program for overlaying a webpage on a 3D object
CN111882516A (en) * 2020-02-19 2020-11-03 南京信息工程大学 Image quality evaluation method based on visual saliency and deep neural network
CN113223186A (en) * 2021-07-07 2021-08-06 江西科骏实业有限公司 Processing method, equipment, product and device for realizing augmented reality
CN113920027A (en) * 2021-10-15 2022-01-11 中国科学院光电技术研究所 Method for rapidly enhancing sequence image based on bidirectional projection
US11392636B2 (en) 2013-10-17 2022-07-19 Nant Holdings Ip, Llc Augmented reality position-based service, methods, and systems
CN115633248A (en) * 2022-12-22 2023-01-20 浙江宇视科技有限公司 Multi-scene cooperative detection method and system
US11854153B2 (en) 2011-04-08 2023-12-26 Nant Holdings Ip, Llc Interference based augmented reality hosting platforms

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107851333A (en) * 2015-07-28 2018-03-27 株式会社日立制作所 Video generation device, image generation system and image generating method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101505A (en) * 2006-07-07 2008-01-09 华为技术有限公司 Method and system for implementing three-dimensional enhanced reality
US20090087063A1 (en) * 2007-10-01 2009-04-02 Martin Edlauer Method for registering two-dimensional image data, computer program product, navigation method for navigating a treatment apparatus in the medical field, and computational device for registering two-dimensional image data
CN101720047A (en) * 2009-11-03 2010-06-02 上海大学 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101505A (en) * 2006-07-07 2008-01-09 华为技术有限公司 Method and system for implementing three-dimensional enhanced reality
US20090087063A1 (en) * 2007-10-01 2009-04-02 Martin Edlauer Method for registering two-dimensional image data, computer program product, navigation method for navigating a treatment apparatus in the medical field, and computational device for registering two-dimensional image data
CN101720047A (en) * 2009-11-03 2010-06-02 上海大学 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DIETER SCHMALSTIEG, ET AL: "Managing Complex Augmented Reality Models", 《COMPUTER GRAPHICS AND APPLICATIONS, IEEE》 *
XIN JIN, ET AL: "Cooperatively Resolving Occlusion Between Real and Virtual in Multiple Video Sequences", 《2011 SIXTH ANNUAL CHINAGRID CONFERENCE (CHINAGRID)》 *
赵沁平 (ZHAO Qinping): "虚拟现实综述" [A Survey of Virtual Reality], 《中国科学 F辑: 信息科学》 [Science in China Series F: Information Sciences] *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11869160B2 (en) 2011-04-08 2024-01-09 Nant Holdings Ip, Llc Interference based augmented reality hosting platforms
US11967034B2 (en) 2011-04-08 2024-04-23 Nant Holdings Ip, Llc Augmented reality object management system
US11854153B2 (en) 2011-04-08 2023-12-26 Nant Holdings Ip, Llc Interference based augmented reality hosting platforms
WO2015000286A1 (en) * 2013-07-03 2015-01-08 央数文化(上海)股份有限公司 Three-dimensional interactive learning system and method based on augmented reality
US11392636B2 (en) 2013-10-17 2022-07-19 Nant Holdings Ip, Llc Augmented reality position-based service, methods, and systems
CN104657568B (en) * 2013-11-21 2017-10-03 深圳先进技术研究院 Many people's moving game system and methods based on intelligent glasses
CN104657568A (en) * 2013-11-21 2015-05-27 深圳先进技术研究院 Multiplayer mobile game system and multiplayer mobile game method based on intelligent glasses
CN103617317A (en) * 2013-11-26 2014-03-05 Tcl集团股份有限公司 Automatic layout method and system of intelligent 3D (three dimensional) model
CN103617317B (en) * 2013-11-26 2017-07-11 Tcl集团股份有限公司 The autoplacement method and system of intelligent 3D models
CN105278905A (en) * 2014-07-08 2016-01-27 三星电子株式会社 Device and method to display object with visual effect
CN105278905B (en) * 2014-07-08 2022-06-21 三星电子株式会社 Apparatus and method for displaying object having visual effect
CN104596502B (en) * 2015-01-23 2017-05-17 浙江大学 Object posture measuring method based on CAD model and monocular vision
CN104596502A (en) * 2015-01-23 2015-05-06 浙江大学 Object posture measuring method based on CAD model and monocular vision
CN106984043A (en) * 2017-03-24 2017-07-28 武汉秀宝软件有限公司 The method of data synchronization and system of a kind of many people's battle games
CN106984043B (en) * 2017-03-24 2020-08-07 武汉秀宝软件有限公司 Data synchronization method and system for multiplayer battle game
CN111052110A (en) * 2017-07-27 2020-04-21 韦斯特尔电子工业和贸易有限责任公司 Method, apparatus and computer program for overlaying a webpage on a 3D object
CN108564661A (en) * 2018-01-08 2018-09-21 佛山市超体软件科技有限公司 A kind of recording method based on augmented reality scene
CN108564661B (en) * 2018-01-08 2022-06-28 佛山市超体软件科技有限公司 Recording method based on augmented reality scene
CN111882516A (en) * 2020-02-19 2020-11-03 南京信息工程大学 Image quality evaluation method based on visual saliency and deep neural network
CN111882516B (en) * 2020-02-19 2023-07-07 南京信息工程大学 Image quality evaluation method based on visual saliency and deep neural network
CN113223186A (en) * 2021-07-07 2021-08-06 江西科骏实业有限公司 Processing method, equipment, product and device for realizing augmented reality
CN113920027B (en) * 2021-10-15 2023-06-13 中国科学院光电技术研究所 Sequence image rapid enhancement method based on two-way projection
CN113920027A (en) * 2021-10-15 2022-01-11 中国科学院光电技术研究所 Method for rapidly enhancing sequence image based on bidirectional projection
CN115633248A (en) * 2022-12-22 2023-01-20 浙江宇视科技有限公司 Multi-scene cooperative detection method and system

Also Published As

Publication number Publication date
CN102509348B (en) 2014-06-25

Similar Documents

Publication Publication Date Title
CN102509348B (en) Method for showing actual object in shared enhanced actual scene in multi-azimuth way
CN103400409B (en) A kind of coverage 3D method for visualizing based on photographic head attitude Fast estimation
CN102509343B (en) Binocular image and object contour-based virtual and actual sheltering treatment method
CN102938844B (en) Three-dimensional imaging is utilized to generate free viewpoint video
US9430871B2 (en) Method of generating three-dimensional (3D) models using ground based oblique imagery
CN106709947A (en) RGBD camera-based three-dimensional human body rapid modeling system
ES2392229B1 (en) METHOD OF GENERATING A MODEL OF A FLAT OBJECT FROM VIEWS OF THE OBJECT.
CN104599314A (en) Three-dimensional model reconstruction method and system
CN110148217A (en) A kind of real-time three-dimensional method for reconstructing, device and equipment
CN104036488A (en) Binocular vision-based human body posture and action research method
CN104021538A (en) Object positioning method and device
WO2015179216A1 (en) Orthogonal and collaborative disparity decomposition
CN106203429A (en) Based on the shelter target detection method under binocular stereo vision complex background
CN107492107A (en) The object identification merged based on plane with spatial information and method for reconstructing
CN113050074B (en) Camera and laser radar calibration system and calibration method in unmanned environment perception
Jin et al. An indoor location-based positioning system using stereo vision with the drone camera
CN104318566B (en) Can return to the new multi-view images plumb line path matching method of multiple height values
KR20130137076A (en) Device and method for providing 3d map representing positon of interest in real time
CN102750694A (en) Local optimum belief propagation algorithm-based binocular video depth map solution method
CN114881841A (en) Image generation method and device
Liu et al. The applications and summary of three dimensional reconstruction based on stereo vision
CN112184793B (en) Depth data processing method and device and readable storage medium
US11043019B2 (en) Method of displaying a wide-format augmented reality object
CN105339981B (en) Method for using one group of primitive registration data
US9240055B1 (en) Symmetry-based interpolation in images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant